Disappointing A/B Test Results

A few weeks ago, at work, we set up an A/B test. I'm not a big fan of this type of testing. A/B testing, in case you don't know (which is totally cool if you don't), is where you set up your site (or it could be an app) so that a certain percentage of your visitors get one version of the site, while the rest get the other version. Version A, version B. A/B. Certainly you can do A/B/C/…, but… don't.
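
For anyone who hasn't set one of these up, the split itself is usually the easy part. Here's a minimal sketch of a deterministic 50/50 bucketing, assuming each visitor has some stable identifier (a cookie value, a session ID, whatever); the names and the hashing choice are mine for illustration, not anything from our actual setup.

```python
# Minimal sketch of a deterministic 50/50 split.
# Assumes each visitor carries a stable identifier; names are illustrative only.
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Return "A" or "B" for a visitor, stable across visits."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("visitor-1234"))  # same visitor, same bucket, every time
```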

The reason I am not a big fan of A/B tests is that they really don't tell you much. Other than that, during the time period the test was running, 50% of the people did X and 50% of the people did Y. The test results don't tell you why a person did something. What their motivation was prior to interacting with either A or B. What they thought when they encountered A or B. Only what happened.

But, in this case, I was a bit curious. And, it seemed like the test results would support something I wanted to do. So, why not?

The premise was this: upon arrival at the web site, 50% of the people would receive a screen (an interstitial) giving them a choice: Download an app to their phone or continue to the site. If you are a UX person, or a human being, you're probably cringing. No one likes being slowed down to be asked if they want to download an app they'll never use. Just take them to the site! After all, Responsive Web Design!

The other 50% would go right to the web page.

Prior to the test, 100% of the people who came to the site got the interstitial.

So, scientifically, since we all know people hate those things, obviously the 50% of the people who didn't get the interstitial would spend more time on the site. But, I am writing about this, so you can probably guess where things are going.

Suffice it to say, I do not understand human behavior. The people who got the interstitial stayed on the site almost twice as long as the people who didn't get the option to download the app. None of the people who got the interstitial downloaded the app. They just got slowed down on the way to the content. They were forced to make a choice before seeing what they came for. And they stuck around longer?

People are weird.

Again, A/B testing doesn't tell us why. So, we have to guess. Or go out and ask the people why. But in this case, we are only going to guess. For good reasons!

Here's my guess: the people who had to click through the interstitial were more invested in the site. They had spent a little extra effort to get there, so they were more likely to stay longer, because they basically made the same choice twice. The first time when they clicked the link to get to the site, and the second time when they had to say, “No, really.”

I don't have some wrap-up paragraph that will make you think twice about A/B testing, or tell you how important it is. After the results of this test (which I did validate were statistically significant, p < .001), I'm less, well, snooty about A/B tests. I'm open to using them, but I won't make design decisions based solely on them.
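
If you're wondering what "validate" means here, this is roughly the kind of check I have in mind: a two-sample test comparing time-on-site between the two groups. The numbers below are made up purely for illustration; this is a sketch, not our actual data or code.

```python
# Sketch of a significance check on time-on-site between the two groups.
# The sample values are invented for illustration only.
from scipy import stats

interstitial_seconds = [412, 388, 530, 295, 610, 477, 503, 366]      # hypothetical
no_interstitial_seconds = [201, 240, 188, 305, 222, 260, 194, 233]   # hypothetical

t_stat, p_value = stats.ttest_ind(
    interstitial_seconds,
    no_interstitial_seconds,
    equal_var=False,  # Welch's t-test: don't assume equal variances
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p means the gap is unlikely to be chance
```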