Did somebody say it's time to start testing?
Every conversion optimization project should begin by clearly identifying goals and doing the deep research we covered in our previous post (add link). Once that research is complete, you should have the insights needed to develop a solid testing plan and ultimately run your first test. As a reminder, it is important to check egos and opinions at the door. My favorite conversion expert, Peep Laja, said it best: "Would you rather have a doctor operate on you based on opinion, or careful examination and tests?" I know which I'd choose. Opinions aside, the question at this point is usually, "Where do we begin?"
At Simpleview we use our own S.I.M.P.L.E. Framework™, ranking each possible optimization to determine which areas will move the needle the furthest and the fastest. We do this by reviewing the data compiled in the findings report from phase two's research. Once this roadmap is in place, we set up our first hypothesis and test. All tests run for a minimum of two weeks, and we aim for a 95% confidence level (sample size is also a big consideration). Testing is serious business, and bad testing is even worse than no testing at all, so take this phase seriously.
Testing is an ongoing process, so with each test, review the results, learn from the data, create new hypotheses, and report findings to stakeholders. Big returns usually require big changes, but I've often seen smaller, simpler tests have an incredible impact as well. We recently ran a test where adding a single line of text with a strong call-to-action yielded a double-digit lift in conversion rate. In the end, whatever system you decide to use, make sure a framework and process are in place so that there is data-driven reasoning behind every test. Once the pages are scored and your first hypothesis is documented, you can move on to choosing the type of test to run.
Types of Tests
Although there are many types of tests you can run, most fall into one of two categories: A/B or multivariate. The best type of test depends on what you are testing. That said, tests typically target one of three website components: pages, page elements (headers, sidebars, widgets), or sub-elements (phrases, copy length, image placement, etc.).
At Simpleview we only suggest changes after we know they work. A/B testing is the least complex method and is used to compare two versions of a webpage against each other. It's great because it lets you test the performance of two entirely different versions of a page. Typically A is the existing design (called the control or original), and B is the new design (often called the variant). Split your website traffic between the two versions and measure their performance using metrics tied to your goals, such as conversion rate, cost per acquisition, revenue per visitor, or bounce rate.
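The "split traffic and compare" step can be made concrete with a two-proportion z-test, one standard way to check whether the variant's conversion rate really beats the control's. This is a generic statistics sketch, not any Simpleview tool; the function name and visitor numbers below are illustrative.

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: control (A) vs. variant (B) conversion rates."""
    p_a = conv_a / n_a                            # control conversion rate
    p_b = conv_b / n_b                            # variant conversion rate
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical result: 200 of 5,000 control visitors convert vs. 260 of 5,000 variant visitors
p_a, p_b, z = ab_test_z(200, 5000, 260, 5000)
print(f"control {p_a:.1%}, variant {p_b:.1%}, z = {z:.2f}")
```

A |z| above 1.96 corresponds to the 95% confidence level mentioned earlier (two-tailed), so in this made-up example the lift would clear that bar.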
Multivariate testing uses the same principles as A/B testing but compares a larger number of variables. It's useful when you need to see which page elements are having a positive or negative impact on visitor interaction. Because many variations require a lot of traffic to reach statistical significance, multivariate tests are typically run only on sites or pages with substantial traffic.
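The traffic requirement follows from simple multiplication: every combination of element variants becomes its own page version, and each version gets only a slice of your visitors. A quick sketch with made-up page elements shows how fast the count grows.

```python
from itertools import product

# Hypothetical multivariate setup: a few variants per page element
headlines = ["Headline 1", "Headline 2", "Headline 3"]
hero_images = ["image_a", "image_b"]
cta_buttons = ["Book Now", "Start Planning"]

# Full-factorial test: every combination is a distinct page version
combinations = list(product(headlines, hero_images, cta_buttons))
print(f"{len(combinations)} versions must share your traffic")  # 3 x 2 x 2 = 12
```

Twelve versions means each one sees roughly a twelfth of your visitors, which is why a page that comfortably supports an A/B test may still be too small for a multivariate one.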
At Simpleview we prefer to use our own A/B testing tool (built into CMS 3.0) whenever possible. The tool uses Google Content Experiments, making setup easy and results trustworthy. The data is also pushed directly into Google Analytics without any additional tracking or fancy plugins, eliminating the need for a developer. If you aren't currently on the Simpleview CMS 3.0 platform or have yet to purchase the testing module, I recommend either Optimizely or VWO (Visual Website Optimizer). Both are great testing platforms with plenty of bells and whistles; however, they can get very expensive very quickly if you have a lot of traffic or are testing multiple pages at once across your site. Personally, I like free, so running a test through Google Content Experiments (right through CMS 3.0) is my preference.
Whatever tool you use, make sure all of the data is imported into your analytics platform, and don't forget to let the test run until it reaches statistical significance. Aim for a minimum sample size and a 95% confidence level, which means there is only a small probability your results are due to random chance. Be aware that if the test has not reached statistical significance, the results may be driven by random factors. Sample size is just as important: even at 95% statistical significance, a test is unreliable if it has only reached a small number of people. A test calculator can help here, and there is no harm in running a test for a longer period of time. Remember, the goal of testing is to take the guesswork out of site optimization and use data to make informed decisions rather than "I feel" decisions. Most conversion projects yield dozens of pages full of issues to test, so there should be more than enough to keep you and your team occupied for quite some time.
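The "minimum sample size" point is what online test calculators compute under the hood. A rough sketch using the standard normal-approximation formula for two proportions (assuming 95% confidence and 80% power; the 3% baseline rate and 20% relative lift are made-up inputs):

```python
import math

def min_sample_size(baseline_rate, relative_lift, alpha_z=1.96, power_z=0.84):
    """Approximate visitors needed per variation to detect a relative lift
    over a baseline conversion rate (z-values: 95% confidence, 80% power)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)      # rate we hope the variant achieves
    pooled = (p1 + p2) / 2
    numerator = (alpha_z * math.sqrt(2 * pooled * (1 - pooled))
                 + power_z * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Hypothetical inputs: 3% baseline conversion rate, detect a 20% relative lift
n = min_sample_size(0.03, 0.20)
print(f"~{n} visitors needed per variation")
```

Note how quickly the number grows when the baseline rate is low or the lift you want to detect is small; this is exactly why stopping a test early, before the sample is large enough, produces "bogus" winners.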
Go here for part four of our series, where we'll be tackling Results.