Testing is a great thing.
For any bootstrapped SaaS company, A/B testing can be the key to
wasting less capital. But there are many situations in which testing is a
waste of time, and others where it's downright destructive to the
business as a whole.
Here are eight ways to preserve growth and momentum by avoiding those
A/B testing quagmires.
1. Sample size is a blocker.
If your sample size is niche-startup small, you can test almost nothing.
Small sample sizes invalidate everything but perhaps the most
black-and-white of options. Why? The more different the alternatives
are, the faster a test will likely reach statistical significance.
If you're A/B testing shades of gray, you'll need a larger sample size
to find an answer.
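
To put rough numbers on that, here's a minimal sketch using the standard
two-proportion sample-size approximation. The z-test framing and the
conversion rates are our illustrative assumptions, not figures from the deck:

    from scipy.stats import norm

    def visitors_per_variant(p_a, p_b, alpha=0.05, power=0.80):
        """Approximate visitors needed per arm of a two-proportion z-test."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        variance = p_a * (1 - p_a) + p_b * (1 - p_b)
        return z**2 * variance / (p_a - p_b) ** 2

    # A bold difference (2% vs. 3% conversion) is detectable quickly...
    print(round(visitors_per_variant(0.02, 0.03)))    # roughly 3,800 per arm
    # ...while a shade-of-gray difference (2% vs. 2.2%) needs ~20x the traffic.
    print(round(visitors_per_variant(0.02, 0.022)))   # roughly 80,700 per arm

Same site, same test mechanics; the only thing that changed is how
different the alternatives are.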
2. Statistical confidence is a fork in the road.
The certainty attributed to a result is its statistical
confidence. Do you want to be 80% sure or 99% sure of
the outcome? Everyone logically says 99%, but higher
confidence takes longer to reach, and time is often the
enemy of an agile SaaS. If your sample sizes are
small and you want 99% confidence, you'll be waiting a
long time. The flip side is walking into your CEO's office
and telling her that you're only 80% sure of an
outcome. Is that enough? The answer often depends on the
context and value of the decision being made. And remember,
testing just tilts the odds in your favor; there are no
absolute certainties.
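
Here's the same sketch again, varying only the confidence level, to show
what 80% versus 99% costs in traffic (illustrative numbers, same assumed
two-proportion z-test as above):

    from scipy.stats import norm

    def visitors_per_variant(p_a, p_b, alpha, power=0.80):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return z**2 * (p_a * (1 - p_a) + p_b * (1 - p_b)) / (p_a - p_b) ** 2

    for confidence in (0.80, 0.95, 0.99):
        n = visitors_per_variant(0.02, 0.03, alpha=1 - confidence)
        print(f"{confidence:.0%} confidence: ~{n:,.0f} visitors per arm")

On the same traffic, going from 80% to 99% confidence means roughly two
and a half times the wait.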
3. All A/B tests are not equal.
This tracks back to both sample size and confidence: not
all A/B tests are of equal value, and consequently, they
don't all require the same certainty. For example, you might
be message testing and be perfectly happy to declare
winners at 80% confidence. That keeps you responsive to the
market, but scientific in your process and decision
making. On the other end of the spectrum, you might be
A/B testing the price of a mass-market product. In that
case, there may be millions of dollars at stake, and you need
to be sure you're right when you declare a winner (99%).
4. Variables add time.
Adding variables adds time. The most common example is a
clean A/B test muddied with C and D alternatives (an A/B/n test).
Traffic being equal, it will likely take far longer to declare a
winner from four options than from two. Another
example comes from the world of multivariate testing (MVT),
where you can test any number of elements within a
page. That might mean six headlines, three images, three calls
to action, and a couple of forms. That's 108 alternatives
(6 x 3 x 3 x 2) and a recipe for a years-long test unless you have
Google homepage-level traffic. So just because you CAN test more
options doesn't mean you SHOULD.
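
The arithmetic behind that 108 is just multiplication, but it's worth
seeing how badly it dilutes your traffic. The element counts are the
deck's example; the traffic figure is hypothetical:

    headlines, images, calls_to_action, forms = 6, 3, 3, 2
    combinations = headlines * images * calls_to_action * forms
    print(combinations)                       # 108 page variations

    monthly_visitors = 50_000                 # hypothetical traffic level
    print(monthly_visitors // combinations)   # ~462 visitors per variation per month

At a 2% conversion rate, that's fewer than ten conversions per variation
per month: nowhere near enough to separate winners from noise.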
5. Email A/B testing trip lines.
Email marketing is ripe for A/B testing, but there are a couple of
common ways it goes badly. One is misidentifying what is
reasonably testable. Email is great for testing because you get a big,
instant burst of traffic. Something like the subject line is perfect
because everyone sees it (a significant sample) and opens are easy to
measure. But once you get beyond the subject line to elements
within the email, your sample size is drastically smaller, so your
time to declare a winner is drastically longer. Since email A/B testing is
typically done on a sample of the overall drop, in advance of the
drop, time is not something you have lying around. You
want to test, get a fast winner, and send that version to the other
80% of your list. This is also why subject line testing should
be measured by click-throughs more than by opens: opens
aren't the goal; deeper engagement is. So there are your
two email trip lines: what to test and what metric to measure.
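
As a sketch of scoring a subject-line test on clicks rather than opens,
here's a two-proportion z-test via statsmodels. The cell sizes and click
counts are hypothetical:

    from statsmodels.stats.proportion import proportions_ztest

    sent = [5_000, 5_000]   # two test cells, sent ahead of the full drop
    clicks = [210, 275]     # click-throughs: the metric that tracks engagement

    stat, p_value = proportions_ztest(count=clicks, nobs=sent)
    print(f"p-value: {p_value:.4f}")  # small enough? send B to the remaining 80%

With these made-up numbers, the click lift clears 95% confidence easily;
a lift in opens alone might never translate into engagement at all.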
6. Overgeneralization of results.
Some people are methodical when testing and treat
it as an everyday discipline. Others think that by
testing something in one context, they're free to
apply the results to other contexts. That doesn't
work, and it's dangerous. If you test in PPC and
apply the findings to social, you're doing yourself a
disservice by assuming the result transfers. Test everywhere
you can, as often as you have the volume and the value.
7. A/B testing for no good reason.
Someone at some point has to ask why.
If the outcome of a test won't have value, don't run
the test. Testing isn't busywork, and it shouldn't be
done without good reason. When it's limited to
alternatives that are meaningfully different and have
organizational value, it's important and meaningful. If testing
decisions degrade toward things that are merely
interesting or clever, you're missing the point and
robbing energy from other areas that will more directly
grow your SaaS.
8. Small differences, small wins.
The easiest way to think about this one: test
big things for big gains and little things for little
gains. Since all tests require roughly the same
amount of work, why not swing for the fences and
test big things that are fundamentally different
from one another? The more different the
alternatives are, the greater the potential gains. Other
upsides are faster results and the ability to demand
higher statistical confidence without a longer wait.
There you have it!
Keep these eight A/B testing potholes in mind on your road to your
SaaS growth utopia and you’ll arrive unscathed.
Want more?
These slides were inspired by our article ‘A/B Testing Myths and Quagmires’
on the Married2Growth blog.
Visit us for more stuff like this!
Thank you!
We hope this was helpful,
and we’d love to hear from you!
info@married2growth.com