Whether you’re new to A/B testing or you’re a seasoned marketer who has run hundreds of tests over the years, the one CRO-related experience everyone can relate to is a failed A/B test. It’s the nature of the game, with marketers typically winning only about one out of every three tests. While A/B testing can deliver significant lifts in site engagement, lead generation, bookings, and partner referrals, a sizeable percentage of tests yield inconclusive or negative results.
Because so many tests come back inconclusive or negative, it is important to go back and revisit failed tests; they can still be valuable. Start by trying to understand why your test failed.
Reasons a Test Could Have Failed
1. Bad Hypothesis: Was your hypothesis based on an opinion rather than data? If so, there is a good chance you had a poor hypothesis, which may have led to your inconclusive or negative result. Before building a test hypothesis, do your research: dig into your analytics, gather evidence for why an element or page should be tested, and state the expected outcome (Page, 2016).
2. Device: Was your test suitable for all devices? Did it load and render properly on desktop, tablet, and mobile? Was the length of content an issue for mobile users? Did the test element opening in a new window cause any issues? If a particular device didn’t perform well, look at the test variation on that device and start writing down potential issues or barriers (Chan & McCabe, 2018).
3. Traffic Channels: Did you take into account the different types of traffic? Did visitors get what they expected, or did the ad overpromise and underdeliver once they hit the landing page? Analyze each traffic channel, prioritizing high-traffic channels with low goal completion to identify areas that may need improvement. Traffic channels are an especially good place to dig in if you were running a campaign and wanted visitors to complete a specific action (Deshdeep, 2016).
4. Visitor Type: Were the majority of your visitors new or returning? Perhaps your experiment assumed visitors to the page had prior knowledge of your destination, yet analytics show that the majority of your visitors are actually new to the site. This is a great opportunity to adjust copy, calls to action, or value propositions to target both visitor types appropriately. Analyzing visitor type can be particularly helpful for tests focused on bookings or lead conversions (Deshdeep, 2016). (A simple segmentation sketch covering device, traffic channel, and visitor type follows this list.)
For example, Experience Columbus wanted to improve click-throughs on their hotel packages (a micro-conversion measurement). We decided to change how the packages looked, with the goal of making them more uniform and more prominent.
The test ran for 61 days and resulted in a 30.98% increase in click-through conversion rate. We were very pleased, but we also wanted to see how the conversion rate for hotel package purchases performed. We found that the purchase conversion rate was lower for returning visitors, so we ran another iteration of the test targeting new and returning visitors with different calls to action. While new visitors see a softer sell, “Learn More,” returning visitors are encouraged to “Book Now,” since they are more likely to already be familiar with the package offers.
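If you want to sanity-check a lift like this yourself, the arithmetic is straightforward. Here is a minimal sketch in Python, using hypothetical visitor and click counts rather than the actual test data, that computes the relative lift and runs a two-proportion z-test for statistical significance:

```python
import math

# Hypothetical counts for illustration only -- not the actual test data.
control_visitors, control_clicks = 12000, 480   # 4.00% click-through rate
variant_visitors, variant_clicks = 12000, 629   # ~5.24% click-through rate

cr_control = control_clicks / control_visitors
cr_variant = variant_clicks / variant_visitors

# Relative lift: (variant rate - control rate) / control rate
lift = (cr_variant - cr_control) / cr_control

# Two-proportion z-test using the pooled conversion rate.
pooled = (control_clicks + variant_clicks) / (control_visitors + variant_visitors)
se = math.sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))
z = (cr_variant - cr_control) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"Control: {cr_control:.2%}  Variant: {cr_variant:.2%}")
print(f"Relative lift: {lift:.2%}  z = {z:.2f}  p = {p_value:.5f}")
```

The same check applies to the macro-conversion (package purchases): a variation can win on click-throughs and still stay flat, or even lose, on the metric that matters most.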
5. Too Many Changes: Did you change too many elements on the page? Did you reskin the entire page? Too many changes often make it difficult to determine what did or didn’t work in a test. Start by changing one or two elements on a page; if you want to test more elements, run additional iterations progressively to uncover other potential problems (Chan & McCabe, 2018).
6. Minimal Changes: Was the change too minimal? Would visitors even notice it? If so, make a more radical change between the two versions of the test. That could mean changing a text link to a button, adding a colored background to create more contrast, or moving an element higher on the page so more visitors see it (Page, 2016).
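For the device, traffic channel, and visitor type checks above, the underlying analysis is the same: break the test results down by segment and compare conversion rates per variation. Below is a minimal sketch in Python; the field names and rows are hypothetical placeholders for whatever per-session export your testing or analytics tool provides.

```python
from collections import defaultdict

# Hypothetical per-session export from a testing/analytics tool.
# Each record: the variation shown, segment values, and whether the visitor converted.
sessions = [
    {"variation": "control", "device": "mobile", "channel": "paid", "visitor_type": "new", "converted": False},
    {"variation": "variant", "device": "mobile", "channel": "paid", "visitor_type": "new", "converted": True},
    {"variation": "variant", "device": "desktop", "channel": "organic", "visitor_type": "returning", "converted": False},
    # ...more rows...
]

def conversion_by_segment(sessions, dimension):
    """Conversion rate per (segment value, variation) for one dimension, e.g. 'device'."""
    visits = defaultdict(int)
    conversions = defaultdict(int)
    for s in sessions:
        key = (s[dimension], s["variation"])
        visits[key] += 1
        conversions[key] += int(s["converted"])
    return {key: conversions[key] / visits[key] for key in visits}

for dimension in ("device", "channel", "visitor_type"):
    print(dimension, conversion_by_segment(sessions, dimension))
```

Large gaps between segments, such as mobile converting far below desktop on the variation, are the cue to pull up that device or channel and start listing possible barriers.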
Conduct Further Analysis to Identify the Problem
If analytics do not provide enough proof to justify running another test, then try analyzing the original version of the page with the tools below. Each of these can provide additional insights that can’t be seen in analytics data.
- Heat Maps: show where visitors are clicking and how frequently
- Site Polls: let you ask live visitors specific questions, such as whether they had trouble finding the information they were looking for, what information is missing, or why they came to the site
- Visitor Recordings: let you analyze individual visitors’ page interactions, such as mouse movements, clicks, and scrolling. (For example, Hotjar’s various tools include visitor recordings.)
In Conclusion
Take time to revisit failed tests. Put in the effort and additional research to understand why the test failed. Determine whether there is enough data to justify running another version of the test. If there is, take the risk and run the test again. You may be pleasantly surprised by the results and the positive impact on your DMO website.
Want some help with testing on your website? Simpleview’s CRO services can help. Please reach out to your account manager or CRO specialist.