Tracy Osborn

loves to chat about entrepreneurship, teaching, design, development, and more.

SXSWi: Your user interface is your laboratory

A session going over A/B (split) tests run by several well-known companies, including Wildfire, Wufoo, Rent.com, and Freshbooks.

What is A/B or split testing? Pick something you want to test (such as a headline), create two versions to test against each other, and use a product like Google Website Optimizer to split your visitors between them (50% see one version, and 50% see the other). Track the results, and whichever version performs better "wins" the test and gets implemented.
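
If it helps to see the mechanics, here's a minimal sketch (in Python, with illustrative names) of how a testing tool might split visitors. Hashing a stable visitor ID means the same person always sees the same variant; this isn't any particular product's implementation, just the general idea.

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str = "headline-test") -> str:
    """Deterministically assign a visitor to variant A or B (50/50 split).

    Hashing the visitor ID means the same visitor always sees the same
    variant, which keeps the results consistent across repeat visits.
    """
    digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # a number from 0 to 99
    return "A" if bucket < 50 else "B"      # 50% to A, 50% to B

# The same visitor ID always lands in the same bucket.
print(assign_variant("visitor-12345"))
```

In practice the testing product handles this assignment for you; the point is only that the split is random across visitors but consistent per visitor.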

Examples

Each company went over several tests they ran and the results they saw. I have pages of written notes on those results which I won't transcribe here, simply because running these tests on your own website is what's important - not the tests that others have run. Their results are not going to translate into your results until you test them yourself.

If you're new to testing, here's a small run-down:

  1. Determine your metric for "conversion" - conversion is what we're trying to improve. This is fairly simple on an ecommerce site: the number of orders completed. Conversion could also be signups for a newsletter, increased page views, or time spent on the site.

  2. Determine what you want to test. You can test almost anything on a website, but the best way to start is to take a look at the funnel. The funnel is the path you want your visitors to take through your website, such as Homepage --> Product Page --> Shopping Cart --> Checkout --> Payment --> Thank You Page. It's essential to have analytics installed before this point (such as Google Analytics, Kissmetrics, or Mixpanel) so you can track visitors through the funnel. Once you have that set up, determine your problem points: how many visitors leave before reaching the final page in the funnel, and where does the biggest drop-off occur -- after the product page? After the shopping cart page? (There's a small funnel sketch after this list.)

  3. Once you have the "problem" page determined, choose elements to test within the page, such as the headline, the "add to cart" button, etc. A typical split test pits a new version of an element (such as a different headline) against the existing one to see whether it improves conversion.

  4. Whatever product you're using to split test will help you set up the test. I recommend running tests for no less than a week, ideally two. First, your test needs to be statistically significant, and your testing package will likely let you know when you've reached that point (a rough sketch of that math follows this list). Second, even if you have enough visitors on the first day to reach significance, day-to-day and week-to-week variations can affect the results; your Monday visitors may behave entirely differently than your Wednesday visitors. Running for at least a week lets you average out the differences between days, and running for two weeks lets you average them out between weeks.

  5. If the new version is declared the "winner", implement it and enjoy the improved conversion!
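
To make step 2 concrete, here's a small sketch of a funnel report, assuming you've already pulled visitor counts per step out of your analytics tool. The counts below are made up purely for illustration.

```python
# Hypothetical visitor counts per funnel step, in order. In practice these
# come from your analytics tool (Google Analytics, Kissmetrics, Mixpanel).
funnel = [
    ("Homepage",       10000),
    ("Product Page",    4200),
    ("Shopping Cart",   1300),
    ("Checkout",         600),
    ("Payment",          450),
    ("Thank You Page",   430),
]

def drop_off_report(steps):
    """Print the percentage of visitors lost at each step of the funnel.

    The step with the biggest drop-off is the best candidate for testing.
    """
    for (name, count), (next_name, next_count) in zip(steps, steps[1:]):
        lost = 100 * (1 - next_count / count)
        print(f"{name} -> {next_name}: {lost:.1f}% drop-off")

drop_off_report(funnel)
# With these made-up numbers, Product Page -> Shopping Cart loses the most
# visitors, so the product page would be the "problem" page to test first.
```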
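
And for step 4: your testing package will report significance for you, but the math underneath is usually something like a two-proportion z-test. Here's a rough sketch of that idea, again with made-up counts.

```python
from statistics import NormalDist

def significance(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-proportion z-test: how confident are we that B's conversion
    rate really differs from A's, rather than differing by chance?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the "no real difference" assumption.
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    # Two-sided confidence that the observed difference is real.
    confidence = 2 * NormalDist().cdf(abs(z)) - 1
    return p_a, p_b, confidence

# Illustrative counts: variant A converted 200 of 5000 visitors, B 260 of 5000.
p_a, p_b, conf = significance(5000, 200, 5000, 260)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  confidence: {conf:.1%}")
# Most tools call a test at roughly 95%+ confidence; below that, keep it running.
```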

A large problem with split testing is that it gives you a strictly numbers-based view of your website and doesn't give you the "whys" - you might know one headline performs better than another but only be able to guess as to why. During this session, Jared Spool came up to make a very important point: there could be very large usability problems with a website that split testing cannot fix. It's essential to run usability tests in tandem with split tests - watch users go through your site and find out the reasons why something might not work. Combining the two is the optimal approach.

This topic is one near and dear to my heart and I was disappointed to find it so poorly covered. Hopefully this post helped clear up some of the "whys" of split testing, and not simply some company's results.