Are Your Website Experiments Really Producing Significant Results?

Let’s face it: digital marketing is becoming more sophisticated at a rapid pace. Companies have to keep up and innovate, or they’ll be left behind. But there are so many ways to do this that executives and business leaders often get confused.

Let’s talk about one key way to up your game in conversion rate optimization (CRO) and online sales. You can focus on specific techniques like A/B testing, or you can step back to the general practice of conducting website experiments – the umbrella under which digital marketers pursue these kinds of goals.

Website experiments are important because they allow companies to learn how audiences respond to different types of digital marketing efforts. This type of learning can absolutely boost conversions and profits. It’s a game changer, and that’s why so many companies choose to embark on the journey.

Breaking Down the Jargon

To start, let’s define what website experiments are, and the terminology that tends to come attached to the process.

First, you have the term conversion rate optimization, which sounds fancy, but is actually really simple. Conversion rate optimization, or CRO, refers to the discipline of increasing the percentage of your website visitors who convert – whether to leads you can later nurture, or to actual sales.

Then you have the term continuous improvement, or CI, which refers to the virtuous loop of processes used for CRO. Basically, you formulate a plan for an experiment based on a hypothesis, review the plan, execute it, measure the results, and then go back to the beginning and start all over again.
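To make the loop concrete, here’s a minimal, runnable sketch in Python. Everything here is a hypothetical stand-in for real tooling – your actual planning, review and measurement steps would talk to your testing platform and analytics stack.

```python
# A toy version of the CI loop: plan -> review -> execute -> measure -> repeat.
# All functions and numbers are hypothetical placeholders.

def formulate_plan(hypothesis):
    return {"hypothesis": hypothesis, "variants": ["A", "B"], "min_visitors": 1000}

def review_plan(plan):
    # Sanity-check the design before spending real traffic on it.
    assert plan["min_visitors"] >= 1000, "sample too small to trust"
    return plan

def execute_experiment(plan):
    # In reality this would serve variants to live visitors and log outcomes.
    return {"A": {"visitors": 1200, "conversions": 66},
            "B": {"visitors": 1180, "conversions": 91}}

def measure_results(results):
    return {v: d["conversions"] / d["visitors"] for v, d in results.items()}

plan = review_plan(formulate_plan("a shorter signup form lifts conversions"))
print(measure_results(execute_experiment(plan)))  # findings seed the next cycle
```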

This matters because cyclical approaches tend to drive results. To illustrate how it all works in practice, let’s introduce two more terms: split testing and A/B testing. If you Google these terms, you’re likely to see articles indicating that split testing and A/B testing are the same thing. Both involve testing one version of a digital marketing asset against another.

If you want to get into the weeds, though, people often use the term split testing a little differently than A/B testing. Split testing usually implies that you’re delivering visitors to two entirely different landing pages or domains in order to compare performance. In general, then, split testing is high-level testing against a baseline or control sample.
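As a rough illustration, here’s what that high-level split might look like in code – a sketch that simply routes each new visitor to one of two standalone pages. The URLs and the 50/50 weighting are hypothetical.

```python
import random

# Hypothetical pair of standalone landing pages being split-tested.
LANDING_PAGES = {
    "control":    "https://example.com/landing",
    "challenger": "https://try.example.com/landing-v2",
}

def pick_landing_page():
    # 50/50 split between the control page and the challenger page.
    arm = random.choice(["control", "challenger"])
    return arm, LANDING_PAGES[arm]

arm, url = pick_landing_page()
print(f"redirect visitor to {url} (arm: {arm})")
```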

A/B testing also compares two options, but typically two variants of the same landing page that differ in a single characteristic. So if you’re testing one color or element on a landing page against another embedded in the same page, you’d be more likely to call that A/B testing. That’s why experts sometimes suggest running a split test first and then A/B testing within each branch of that split test.
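In code, the main practical difference is that an A/B test varies an element for visitors of the same page, and each visitor should consistently see the same variant. A common approach – sketched here with hypothetical experiment and visitor names – is to hash a stable visitor ID rather than rolling fresh randomness on every request:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "cta-button-color") -> str:
    # Hash the experiment name plus visitor ID so assignment is sticky:
    # the same visitor always lands in the same bucket for this experiment.
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100       # a number from 0 to 99
    return "A" if bucket < 50 else "B"   # 50/50 split

print(assign_variant("visitor-42"))  # deterministic: always the same answer
```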

What both of these processes have in common is that they can reduce bounce rates, increase conversions, boost audience engagement and, ultimately, drive sales.

Utilizing Vendor Services

The easiest way to do web experimentation is to use a software package built for the purpose, and many reputable products can help here.

Vendor offerings for web experimentation are often served up out of the box with sophisticated cloud delivery. The client just fills in some forms, checks boxes and picks from drop-down menus to set up the kind of experiment they want, and the web app takes care of the rest.

However, some companies might find that they want to do more of these experiments in-house, and that leads to the process of server-side web experimentation.

To understand this method, you can think of it as a direct pipeline. Your own infrastructure serves both of the comparative landing pages or other features – that is, if you still keep your own servers in-house. Other companies use serverless computing, colocation or other methods to abstract their hardware environments, but for the purposes of server-side web experimentation, that doesn’t matter much. The key point is that the testing takes place “close to the edge” of the customer’s web use, not patched on from elsewhere.
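As a minimal sketch of what this looks like in practice – assuming a small Python/Flask app, with the route, page copy and cookie names invented for illustration – the variant decision happens on your server before any HTML goes out the door:

```python
import hashlib
from flask import Flask, request, make_response

app = Flask(__name__)

def assign_variant(visitor_id: str) -> str:
    bucket = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16) % 100
    return "A" if bucket < 50 else "B"

@app.route("/landing")
def landing():
    # Fall back to the IP address if the visitor has no cookie yet.
    visitor_id = request.cookies.get("visitor_id", request.remote_addr)
    variant = assign_variant(visitor_id)
    body = "<h1>Welcome!</h1>" if variant == "A" else "<h1>Start saving today</h1>"
    resp = make_response(body)
    resp.set_cookie("visitor_id", visitor_id)        # keep assignment sticky
    resp.headers["X-Experiment-Variant"] = variant   # for downstream logging
    return resp
```

Because the server picks the variant before responding, the visitor never sees a flicker of the “wrong” page, which is a common complaint with client-side testing scripts.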

Another way to frame server-side experimentation and do-it-yourself split or A/B testing is by contrast: platforms like Facebook offer their own preconfigured, client-side testing and experimentation options for advertising campaigns.

With easy Facebook online forms for web experimentation, you can work with elements like audience and placement, set your budget and schedule, and order up the kinds of experimentation that will drive your key business insights and help with your CRO. This is by design, because Facebook loves to keep many of these processes on-site (on its site, that is). But again, you might not want Facebook to be handling all of your experimentation with its own methods.

Statistical Significance and CRO Experiments

There’s one more important aspect of split tests that we need to address here, even at a cursory level, and it’s relevant regardless of whether you’re using server-side or client-side web experimentation.

Companies have to figure out whether the results they get are statistically significant. A statistically significant result is one that is not just due to ordinary chance, small sample sizes or random margins of error. What that threshold looks like differs from project to project, but we usually express it as a percentage – a confidence level such as 95%.

Statistical significance is a way to establish mathematically that a result is reliable. When you make decisions based on the outcome of experiments, you need to be confident that the relationship really exists, rather than being an artifact of noise. In practice, that means running a significance test on your variant data and checking whether the resulting p-value falls below your chosen threshold (commonly 0.05).

For example, if version A of your landing page drives only 5% more conversions than version B, that lift may not be conclusive on its own – whether it is significant depends heavily on your sample size. A 50% lift will usually clear the bar, while a small lift needs a lot of traffic behind it before you can trust it.
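To put numbers on that intuition, here’s a sketch of a standard two-proportion z-test using only Python’s standard library. The traffic and conversion counts are hypothetical, chosen to show that even a modest 5% relative lift can be statistically significant once enough visitors have passed through:

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return z, p_value

# A 5% relative lift (10.0% -> 10.5% conversion) with 50,000 visitors per arm:
z, p = two_proportion_z_test(5000, 50000, 5250, 50000)
print(f"z = {z:.2f}, p-value = {p:.4f}")  # p < 0.05 here, so the lift is significant
```

With only 1,000 visitors per arm, the very same 5% lift would come nowhere near significance – which is exactly why you run the test instead of eyeballing the lift.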

It’s also useful to note that if you are using a cloud host to perform server-side experiments, then you might be logging results in a data warehouse such as Amazon Redshift. In this case, you can often calculate significance inputs from within your data queries themselves. By pulling per-variant conversion rates and standard deviations straight out of Redshift, you can gain more authoritative insights.
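For instance, here’s a sketch of pulling per-variant conversion statistics out of Redshift with psycopg2 (Redshift speaks the Postgres wire protocol). The table name, columns and connection details are all hypothetical – adapt them to your own schema:

```python
import psycopg2

# Hypothetical events table: one row per visitor, with a 0/1 "converted" flag.
QUERY = """
    SELECT variant,
           COUNT(*)                       AS visitors,
           SUM(converted)                 AS conversions,
           AVG(converted::float8)         AS conversion_rate,
           STDDEV_SAMP(converted::float8) AS rate_stddev
    FROM experiment_events
    GROUP BY variant;
"""

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="analyst", password="...",
)
with conn, conn.cursor() as cur:
    cur.execute(QUERY)
    for variant, visitors, conversions, rate, stddev in cur.fetchall():
        print(variant, visitors, conversions, round(rate, 4), round(stddev, 4))
```

The counts this query returns are exactly the inputs the z-test above needs, so the two pieces fit together naturally.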

Software platforms will do this for you as part of their reporting modules, but if you’re managing your own server-side split tests, then you might need a serious data science or business analytics team member on board to ensure statistical significance. If you don’t have anyone in-house who can help, you can turn to freelance marketplaces. As this review of Gun.io points out, skilled, vetted technical help can be affordable on a gig-by-gig basis.

Conclusion

Web experimentation that adheres to statistical significance best practices can help your marketing team take the guesswork out of conversion optimization. Whether you’re using a web app to display variants on your audience’s screens or server-side experiments to serve up variants from the get-go, a rigorous process of ongoing improvement can help you capture more leads, learn more about what makes your audience tick and drive more sales.

Alex

Alex is a small business blogger with a focus on entrepreneurship and growth. With over 5 years of experience covering the startup and small business landscape, Alex has a reputation for being a knowledgeable, approachable and entrepreneurial-minded blogger. He has a keen understanding of the challenges and opportunities facing small business owners, and is able to provide actionable advice and strategies for success. Alex has interviewed successful entrepreneurs, and covered major small business events such as the Small Business Expo and the Inc. 500|5000 conference. He is also a successful entrepreneur himself, having started and grown several small businesses in different industries.