A/B Testing in React



This content originally appeared on JavaScript January and was authored by Emily Freeman

Thank you to Isabela Moreira for this amazing article!

Building a good UI is hard. It takes a unique set of skills that involves both technical feasibility and an eye for aesthetically-pleasing, accessible, and usable design. Building a good UI that will delight your users is even harder because of how subjective the field is. In this post, I'll go over what A/B testing is and how we can leverage it to build data-driven UI and bring some objectivity to a subjective field.

What is A/B testing?

A/B testing really boils down to a fancy name for an experiment. In software engineering, we use the term to describe the process by which we compare two or more versions of something to determine which is more effective. This might seem like a new, trendy approach to building software, but the process has actually been around for a long time. Back in the day, farmers used to A/B test the fertilizers they used for their crops: they'd section off areas of their fields, apply a different fertilizer to each area, and see which area yielded the healthiest crops.

Why A/B testing?

This might seem like a lot of work. You have to think about the different variations you want to test, set up infrastructure to handle experimentation in your codebase, and define what success looks like in the product. Running A/B tests is certainly a non-zero amount of work, so why are they important enough to invest the time and resources into running them? Why not pick the design or approach you think is best for your product and ship it? There are broad design guidelines that can largely influence the way we approach designing our products, but there are rarely guidelines set in stone for smaller details like what color a button should be or what icon to use in a specific scenario. It all comes down to how we define this idea of the "best option".

Imagine you're wrapping up work on a specific feature and you're demoing it to your team. During this demo, someone speaks up and says, "I have a great idea! Why don't we make this button green instead of blue? I think it'll lead to more clicks." While it's fine to make this suggestion, it comes with an inherent problem: it's purely based on subjective feelings about a design, so we have no solid way to evaluate the outcome of the change. A/B testing gives us the power to turn this subjective decision into an objective one. With A/B testing, we can run UI experiments without having to change and redeploy client-side code once the initial experiment setup is complete. You deploy once, and users are directed to a specific variant of the experiment when they visit. So if we want to test which color button leads to more engagement, we can set up an experiment where 50% of users see the blue button and 50% of users see the green button. In this case, the blue button is our control, since that was the initial design or existing experience in the product, and the green button is the variant.
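
To make the idea of directing users to a variant concrete, here's a minimal sketch of deterministic assignment: hash a stable user identifier into a bucket and map the bucket onto the traffic split. This isn't how any particular library implements it (the library we use later handles assignment for us), and the hash function and variant names are illustrative assumptions.

    // Minimal sketch of deterministic variant assignment (illustrative only).
    // A stable user ID always hashes to the same bucket, so the same user
    // sees the same variant on every visit.
    function hashToBucket(userId) {
        let hash = 0;
        for (const char of userId) {
            hash = (hash * 31 + char.charCodeAt(0)) % 100;
        }
        return hash; // a number between 0 and 99
    }

    function assignVariant(userId) {
        // 50/50 split between the control (blue) and the variant (green)
        return hashToBucket(userId) < 50 ? 'blue-control' : 'green-variant';
    }

    console.log(assignVariant('user-123')); // same output for 'user-123' every time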


You're not limited to a single variant when running experiments, but it helps not to get carried away with the number of variants: decision paralysis and reaching the right statistical significance for each variant can become blockers when running experiments.


How to run an experiment

In order for an experiment to produce meaningful data, it needs to be structured properly. Thinking back to middle school science class, there are clear steps to defining an experiment according to the scientific method.

  1. Research. First, you need to research the current behavior of your users. The goal is to find areas where user engagement is dropping off and figure out what's not going right. Maybe users keep opening a menu or page in your product and then immediately closing it.
  2. Identify the problem. In the example above, immediately closing a menu could be a sign of a larger problem: the user might not understand or remember what the menu is until it's opened.
  3. Hypothesize. Once you've identified a problem area, you can create a hypothesis. This is where you state a claim that you will later support or reject. In this case, our hypothesis could be "Changing the menu icon will result in fewer immediate closures of the menu". The thinking is that users are opening the menu, quickly realizing they didn't mean to go there, and then closing it. Making it more apparent what the menu is for should result in it being opened fewer times, but with a greater chance that each open is intentional and useful.
  4. Experiment. Now that we have a hypothesis, we can start an experiment. You start with your control, which is whatever is in the product or design today, and come up with one or several variants to test against it. In our case, we can come up with a few different icon options for the menu. From there, we split the product's traffic equally between the control and all the variants and run the experiment until we reach a statistically significant result (more on that in a bit).
  5. Analyze. Once statistical significance is reached, you can stop the experiment and analyze the data to decide which option should be used in the product (a rough sketch of what that comparison looks like follows this list). You can use a tool to determine approximately how long you'll have to run your test in order to reach statistical significance, which can be helpful for task estimation.
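
To give a rough sense of what the analysis step involves, here's a sketch of a two-proportion z-test comparing a control against a single variant. The counts and the 1.96 threshold (roughly 95% confidence) are illustrative assumptions; in practice you'd rely on your experimentation platform or a proper statistics library rather than hand-rolling this.

    // Rough sketch of a two-proportion z-test: did the variant's click rate
    // differ from the control's by more than chance alone would explain?
    // The counts below are made-up example numbers.
    function zScore(control, variant) {
        const p1 = control.wins / control.plays;
        const p2 = variant.wins / variant.plays;
        const pooled = (control.wins + variant.wins) / (control.plays + variant.plays);
        const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / control.plays + 1 / variant.plays));
        return (p2 - p1) / standardError;
    }

    const z = zScore({ plays: 5000, wins: 400 }, { plays: 5000, wins: 465 });
    // |z| > 1.96 corresponds roughly to 95% statistical significance
    console.log(z.toFixed(2), Math.abs(z) > 1.96 ? 'significant' : 'not significant yet');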

Experimentation tips:

  • Results must be statistically significant. This tells you whether the outcome of the experiment was due to chance (bad) or due to the thing you actually changed (good). Aim for 95-99% statistical significance.
  • Sample size matters! In order to get statistically significant results, you can't run your experiment on only a handful of people. At least 1,000 users is a good starting point for a sample size, but there are calculators online that can help you get a more accurate number (a back-of-the-envelope version is sketched after this list).
  • Test things that will bring value. You wouldn't want to A/B test whether or not users prefer a comma in a certain paragraph in your UI, because that's not going to drive any business value. Instead, A/B tests usually focus on things like calls to action, forms, images, icons, and colors.
  • Test in a contained manner. Don't run experiments on two completely different layouts for your website because you'll never know what change in the layout led to the results. Instead, focus on testing small, isolated components that can then be incorporated into the product.
  • Test in parallel. If you run experiments sequentially, you'll introduce uncontrolled variables between different rounds of the experiment that can impact the results. For example, on week 1 you run variant 1 of your experiment and on week 2 you run variant 2. But it just so happened that week 1 had a holiday, so you had a smaller user base because your product is enterprise software. So now you can't know for certain if it was the variant or the differing user base that made a difference in the results of the experiment.
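
As a back-of-the-envelope version of those online calculators, here's a sketch of the standard sample-size approximation for comparing two conversion rates. The baseline rate and minimum detectable effect below are assumptions you'd replace with your own numbers; 1.96 corresponds to roughly 95% confidence and 0.84 to roughly 80% power.

    // Approximate users needed per variant to detect a given lift in
    // conversion rate. Baseline and effect size here are illustrative.
    function sampleSizePerVariant(baselineRate, minDetectableEffect) {
        const zAlpha = 1.96; // ~95% confidence
        const zBeta = 0.84;  // ~80% power
        const p1 = baselineRate;
        const p2 = baselineRate + minDetectableEffect;
        const variance = p1 * (1 - p1) + p2 * (1 - p2);
        return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
    }

    // e.g. detecting a lift from an 8% click rate to a 10% click rate
    console.log(sampleSizePerVariant(0.08, 0.02)); // roughly 3,200 users per variant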

Implementing an A/B test in React

Assuming we already have a React app created, we need a library to help us write our A/B tests. There are a few different ones available. I've recently been using Optimizely, which has a free trial, but for this post I'll use the @marvelapp/react-ab-test library to handle the experiments and Mixpanel to process the data, since they have free tiers.

I'll be building out this example for the very first scenario above: the green vs. blue button debate. This experiment will test which button variant is better: blue, green, or red (our control). We're trying to determine which variant leads to the most clicks, so the win condition for a variant is a click on its button.

Set up

    yarn add @marvelapp/react-ab-test

    yarn add mixpanel

After installing the libraries, we need to do some initial setup in the component that will host the experiment.

    import { Experiment, Variant, emitter, experimentDebugger } from '@marvelapp/react-ab-test';
    
    // Adds a small debug panel for switching variants during development.
    experimentDebugger.enable();
    
    // Register the experiment, its variants, and the traffic weights (roughly an even split).
    emitter.defineVariants('landingPageCTAExperiment', ['control', 'blue-variant', 'green-variant'], [34, 33, 33]);

Now, we can define the UI that corresponds with the variants of this experiment:

    <Experiment name='landingPageCTAExperiment'>
       <Variant name='control'>
          <button className="callToAction">Learn more</button>
       </Variant>
       <Variant name='blue-variant'>
          <button className="callToAction blue">Learn more</button>
       </Variant>
       <Variant name='green-variant'>
          <button className="callToAction green">Learn more</button>
       </Variant>
    </Experiment>

Defining the win condition

We can then hook up the buttons' onClick event to call emitter.emitWin, since a click means the win condition we defined previously has been met.

Our component now looks like:

    import React from 'react';
    import './LandingPage.css';
    import { Experiment, Variant, emitter, experimentDebugger } from '@marvelapp/react-ab-test';
    
    experimentDebugger.enable();
    emitter.defineVariants('landingPageCTAExperiment', ['control', 'blue-variant', 'green-variant'], [34, 33, 33]);
    
    class LandingPage extends React.Component {
        // Fires the experiment's win condition whenever a variant's button is clicked.
        onButtonClick(e) {
            emitter.emitWin('landingPageCTAExperiment');
        }
    
        render() {
            return (
                <div className="mainComponent">
                    <header>
                        Doggo App
                    </header>
                    <div className="description">
                        Doggo ipsum wow such tempt borkf extremely cuuuuuute tungg stop it fren you are doing me the shock yapper, big ol h*ck waggy wags smol. Long bois dat tungg tho shoob doge doing me a frighten, long doggo tungg fluffer woofer, pupper shoob borkdrive. Heckin good boys shibe heckin very jealous pupper fat boi, thicc heckin good boys and girls. Bork smol what a nice floof long doggo, shibe doggorino. Very jealous pupper smol he made many woofs, boof.
                    </div>
                    <Experiment name='landingPageCTAExperiment'>
                        <Variant name='control'>
                            <button className="callToAction" onClick={this.onButtonClick}>Learn more</button>
                        </Variant>
                        <Variant name='blue-variant'>
                            <button className="callToAction blue" onClick={this.onButtonClick}>Learn more</button>
                        </Variant>
                        <Variant name='green-variant'>
                            <button className="callToAction green" onClick={this.onButtonClick}>Learn more</button>
                        </Variant>
                    </Experiment>
                </div>
            );
        }
    }
    
    export default LandingPage;
    
    // Called when the experiment is displayed to the user.
    emitter.addPlayListener(function(experimentName, variantName) {
        console.log(`Displaying experiment ${experimentName} variant ${variantName}`);
    });
    
    // Called when a 'win' is emitted, in this case by emitter.emitWin() in onButtonClick.
    emitter.addWinListener(function(experimentName, variantName) {
        console.log(
            `Variant ${variantName} of experiment ${experimentName} was clicked`
        );
    });

When we run our app, we'll see the following control panel at the bottom of the screen. It lets us manually switch between the different variants so we can easily test the UI for each case without waiting to be randomly assigned to it.

[Screenshot: the react-ab-test debugger control panel for switching between variants]

In the developer console, we can see that a key-value pair is stored in localStorage defining which variant the user is set to receive. This gives the user a consistent experience while using our product, despite actually being part of a live experiment.

[Screenshot: the variant assignment stored in localStorage]

At this point, we have an experiment running locally, and we can test it by clearing localStorage and seeing that we get assigned to a random variant each time. But all of this is still only happening locally. If we deployed this to our product right now, we would have no mechanism to gather data on the performance of each variant or the results of the experiment. This is where Mixpanel comes in.
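
For a quick local sanity check, you can inspect and wipe the stored assignment from the browser's devtools console and reload. Note that localStorage.clear() removes everything stored for the origin, which is fine for a throwaway development environment:

    // In the browser devtools console, during local development only:
    // list whatever is currently stored, then wipe it and reload to be
    // re-assigned to a random variant.
    Object.keys(localStorage).forEach((key) => console.log(key, localStorage.getItem(key)));
    localStorage.clear(); // removes every localStorage entry for this origin, not just the experiment's
    location.reload();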

Logging data

We've already installed the Mixpanel library, so we can skip ahead to setting it up.

First, create an account in Mixpanel and then create a new project. Take note of the project's token. In the component running the experiment, import Mixpanel and initialize it with your token.

    import Mixpanel from 'mixpanel';
    import secrets from '../secrets.json';
    
    // Initialize the Mixpanel client with the project token kept out of the component code.
    const mixpanel = Mixpanel.init(secrets.mixpanelToken);
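
The secrets.json file is just a local convention for keeping the token out of the component code; its exact shape is an assumption, but given the secrets.mixpanelToken reference above it would look something like the snippet below. If you go this route, make sure the file is excluded from source control.

    {
        "mixpanelToken": "YOUR_PROJECT_TOKEN"
    }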

Then, in the win listener, we can fire an event with the Mixpanel library:

    // Called when a 'win' is emitted, in this case by emitter.emitWin() in onButtonClick.
    emitter.addWinListener(function(experimentName, variantName) {
        console.log(
            `Variant ${variantName} of experiment ${experimentName} was clicked`
        );
        // Send the win to Mixpanel, tagged with the experiment and variant names.
        mixpanel.track(experimentName + " " + variantName, {
            name: experimentName,
            variant: variantName,
        });
    });

We pass along the experiment name and the active variant so the platform knows what data to log. In the dashboard, the data will look something like the screenshot below. From there, you can analyze it to determine when the experiment is over and what its outcome is.

[Screenshot: experiment events in the Mixpanel dashboard]
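
Clicks alone only tell half the story: to compute a conversion rate per variant you also need to know how many users saw each variant. One option, sketched below rather than part of the setup above, is to track the play event the same way, so the dashboard can compare wins against exposures. The event name here is an assumption.

    // Sketch: also log an exposure event whenever a variant is displayed,
    // so conversion rate can be computed as wins / exposures per variant.
    emitter.addPlayListener(function(experimentName, variantName) {
        mixpanel.track(experimentName + ' exposure', {
            name: experimentName,
            variant: variantName,
        });
    });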

Wrapping up

A/B testing is a great tool for making data-driven decisions about subjective topics and can provide a lot of value to a product. It's important to listen to your users and use them as a sounding board for design decisions, but it's also important to provide a stable, fairly consistent product that isn't changing on them every time they use it. Be deliberate about which parts of your UI you decide to user test, putting as little overhead on your users as possible, and aim to maximize the value your experiments provide by choosing small pieces of the UI to test and testing variants in parallel.

Keep A/B testing in your toolbox and don't be afraid to get users involved in decisions that affect their use of the product!
