From Hypothesis to Results: Mastering the Art of Marketing Experiments
Suppose you’re trying to convince your friend to watch your favorite movie. You could either tell them about the intriguing plot or show them the exciting trailer.

To find out which approach works best, you try both methods with different friends and see which one gets more people to watch the movie.

Marketing experiments work in much the same way, allowing businesses to test different marketing strategies, gather feedback from their target audience, and make data-driven decisions that lead to improved outcomes and growth.

By testing different approaches and measuring their outcomes, companies can identify what works best for their unique target audience and adapt their marketing strategies accordingly. This leads to more efficient use of marketing resources and results in higher conversion rates, increased customer satisfaction, and, ultimately, business growth.

Marketing experiments are the backbone of building an organization’s culture of learning and curiosity, encouraging employees to think outside the box and challenge the status quo.

In this article, we will delve into the fundamentals of marketing experiments, discussing their key elements and various types. By the end, you’ll be in a position to start running these tests and securing better marketing campaigns with explosive results.

Why Digital Marketing Experiments Matter

One of the most effective ways to drive growth and optimize marketing strategies is through digital marketing experiments. These experiments provide invaluable insights into customer preferences, behaviors, and the overall effectiveness of marketing efforts, making them an essential component of any digital marketing strategy.

Digital marketing experiments matter for several reasons:

  • Customer-centric approach: By conducting experiments, businesses can gain a deeper understanding of their target audience’s preferences and behaviors. This enables them to tailor their marketing efforts to better align with customer needs, resulting in more effective and engaging campaigns.
  • Data-driven decision-making: Marketing experiments provide quantitative data on the performance of different marketing strategies and tactics. This empowers businesses to make informed decisions based on actual results rather than relying on intuition or guesswork. Ultimately, this data-driven approach leads to more efficient allocation of resources and improved marketing outcomes.
  • Agility and adaptability: Businesses must be agile and adaptable to keep up with emerging trends and technologies. Digital marketing experiments allow businesses to test new ideas, platforms, and strategies in a controlled environment, helping them stay ahead of the curve and quickly respond to changing market conditions.
  • Continuous improvement: Digital marketing experiments facilitate an iterative process of testing, learning, and refining marketing strategies. This ongoing cycle of improvement enables businesses to optimize their marketing efforts, drive better results, and maintain a competitive edge in the digital marketplace.
  • ROI and profitability: By identifying which marketing tactics are most effective, businesses can allocate their marketing budget more efficiently and maximize their return on investment. This increased profitability can be reinvested into the business, fueling further growth and success.

Developing a culture of experimentation allows businesses to continuously improve their marketing strategies, maximize their ROI, and avoid being left behind by the competition.

The Fundamentals of Digital Marketing Experiments

Marketing experiments are structured tests that compare different marketing strategies, tactics, or assets to determine which one performs better in achieving specific objectives.

These experiments use a scientific approach, which involves formulating hypotheses, controlling variables, gathering data, and analyzing the results to make informed decisions.

Marketing experiments provide valuable insights into customer preferences and behaviors, enabling businesses to optimize their marketing efforts and maximize returns on investment (ROI).

There are several types of marketing experiments that businesses can use, depending on their objectives and available resources.

The most common types include:

A/B testing

A/B testing, also known as split testing, is a simple yet powerful technique that compares two variations of a single variable to determine which one performs better.

In an A/B test, the target audience is randomly divided into two groups: one group is exposed to version A (the control), while the other is exposed to version B (the treatment). The performance of both versions is then measured and compared to identify which yields better results.

A/B testing can be applied to various marketing elements, such as headlines, calls-to-action, email subject lines, landing page designs, and ad copy. The primary advantage of A/B testing is its simplicity, making it easy for businesses to implement and analyze.
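
To make the mechanics concrete, here is a minimal Python sketch of the random split an A/B test depends on. The user IDs, the 50/50 allocation, and the fixed seed are illustrative assumptions, not requirements:

```python
import random

def assign_variants(user_ids, seed=42):
    """Randomly split an audience into control (A) and treatment (B)."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    groups = {"A": [], "B": []}
    for uid in user_ids:
        groups[rng.choice("AB")].append(uid)
    return groups

groups = assign_variants(range(1_000))
print(len(groups["A"]), len(groups["B"]))  # roughly 500 / 500
```

Random assignment is what lets you attribute any difference in results to the variation itself rather than to who happened to see it.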

Multivariate testing

Multivariate testing is a more advanced technique that allows businesses to test multiple variables simultaneously.

In a multivariate test, several elements of a marketing asset are modified and combined to create different versions. These versions are then shown to different segments of the target audience, and their performance is measured and compared to determine the most effective combination of variables.

Multivariate testing is beneficial when optimizing complex marketing assets, such as websites or email templates, with multiple elements that may interact with one another. However, this method requires a larger sample size and more advanced analytical tools compared to A/B testing.
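
To see why the number of versions grows so quickly, here is a sketch of a full-factorial multivariate design in Python. The email elements are hypothetical:

```python
from itertools import product

# Hypothetical elements of an email template to test together
headlines = ["Save 20% today", "Your exclusive offer inside"]
hero_images = ["product_photo", "lifestyle_photo"]
ctas = ["Shop now", "Claim your discount"]

# Full-factorial design: every combination becomes one version to test
versions = list(product(headlines, hero_images, ctas))
print(len(versions))  # 2 x 2 x 2 = 8 versions, each needing enough traffic
```

Eight versions means splitting your traffic eight ways, which is why multivariate tests demand a much larger sample size than a two-version A/B test.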

Pre-post analysis

Pre-post analysis involves comparing the performance of a marketing strategy before and after implementing a change.

This type of experiment is often used when it is not feasible to conduct an A/B or multivariate test, such as when the change affects the entire customer base or when there are external factors that cannot be controlled.

While pre-post analysis can provide useful insights, it is less reliable than A/B or multivariate testing because it does not account for potential confounding factors. To obtain accurate results from a pre-post analysis, businesses must carefully control for external influences and ensure that the observed changes are indeed due to the implemented modifications.
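
As a rough illustration, a pre-post comparison often comes down to comparing daily metrics from the two periods, for example with a two-sample t-test. The figures below are invented, and the caveat in the comment is the key limitation:

```python
from scipy import stats

# Hypothetical daily conversion rates (%) for the two weeks before and after a change
before = [2.1, 2.4, 2.2, 2.0, 2.3, 2.5, 2.2, 2.1, 2.4, 2.3, 2.2, 2.0, 2.3, 2.1]
after = [2.6, 2.8, 2.5, 2.7, 2.9, 2.6, 2.8, 2.7, 2.5, 2.9, 2.6, 2.7, 2.8, 2.6]

t_stat, p_value = stats.ttest_ind(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Caveat: even a tiny p-value cannot rule out confounders such as seasonality
# or a competitor's campaign, because the two periods differ in more than
# the change you made. That is the core weakness of pre-post analysis.
```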

How To Start Growth Marketing Experiments

To conduct effective marketing experiments, businesses must pay attention to the following key elements:

Clear objectives

Having clear objectives is crucial for a successful marketing experiment. Before starting an experiment, businesses must identify the specific goals they want to achieve, such as increasing conversions, boosting engagement, or improving click-through rates. Clear objectives help guide the experimental design and ensure the results are relevant and actionable.

Hypothesis-driven approach

A marketing experiment should be based on a well-formulated hypothesis that predicts the expected outcome. A reasonable hypothesis is specific, testable, and grounded in existing knowledge or data. It serves as the foundation for experimental design and helps businesses focus on the most relevant variables and outcomes.

Proper experimental design

A marketing experiment requires a well-designed test that controls for potential confounding factors and ensures the reliability and validity of the results. This includes the random assignment of participants, controlling for external influences, and selecting appropriate variables to test. Proper experimental design increases the likelihood that observed differences are due to the tested variables and not other factors.

Adequate sample size

A successful marketing experiment requires an adequate sample size to ensure the results are statistically significant and generalizable to the broader target audience. The required sample size depends on the type of experiment, the expected effect size, and the desired level of confidence. In general, larger sample sizes provide more reliable and accurate results but may also require more resources to conduct the experiment.
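
For the conversion-rate comparisons most marketing tests involve, the required sample size can be estimated with a standard power calculation. Here is a sketch using the statsmodels library, assuming a 5% baseline conversion rate and a hoped-for lift to 6%:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed numbers: 5% baseline conversion, hoping to detect a lift to 6%
effect = proportion_effectsize(0.05, 0.06)

n = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,   # 5% significance threshold
    power=0.8,    # 80% chance of detecting the lift if it is real
)
print(f"~{n:,.0f} visitors needed per variation")
```

Halving the detectable lift roughly quadruples the required sample size, which is why small expected effects make experiments expensive.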

Data-driven analysis

Marketing experiments rely on a data-driven analysis of the results. This involves using statistical techniques to determine whether the observed differences between the tested variations are significant and meaningful. Data-driven analysis helps businesses make informed decisions based on empirical evidence rather than intuition or gut feelings.

By understanding the fundamentals of marketing experiments and following best practices, businesses can gain valuable insights into customer preferences and behaviors, ultimately leading to improved outcomes and growth.

Setting up Your First Marketing Experiment

Embarking on your first marketing experiment can be both exciting and challenging. By following a systematic approach, you can set yourself up for success and gain valuable insights to improve your marketing efforts.

Here’s a step-by-step guide to help you set up your first marketing experiment.

Identifying your marketing objectives

Before diving into your experiment, it’s essential to establish clear marketing objectives. These objectives will guide your entire experiment, from hypothesis formulation to data analysis.

Consider what you want to achieve with your marketing efforts, such as increasing website conversions, improving email open rates, or boosting social media engagement.

Make sure your objectives are specific, measurable, achievable, relevant, and time-bound (SMART) to ensure that they are actionable and provide meaningful insights.

Formulating a hypothesis

With your marketing objectives in mind, the next step is formulating a hypothesis for your experiment. A hypothesis is a testable prediction that outlines the expected outcome of your experiment. It should be based on existing knowledge, data, or observations and provide a clear direction for your experimental design.

For example, if your objective is to increase email open rates, your hypothesis might be: “Adding the recipient’s first name to the email subject line will increase the open rate by 10%.” This hypothesis is specific, testable, and clearly linked to your marketing objective.

Designing the experiment

Once you have a hypothesis in place, you can move on to designing your experiment. This involves several key decisions:

Choosing the right testing method:

Select the most appropriate testing method for your experiment based on your objectives, hypothesis, and available resources.

As discussed earlier, common testing methods include A/B, multivariate, and pre-post analyses. Choose the method that best aligns with your goals and allows you to effectively test your hypothesis.

Selecting the variables to test:

Identify the specific variables you will test in your experiment. These should be directly related to your hypothesis and marketing objectives. In the email open rate example, the variable to test would be the subject line, specifically the presence or absence of the recipient’s first name.

When selecting variables, consider their potential impact on your marketing objectives and prioritize those with the greatest potential for improvement. Also, ensure that the variables are easily measurable and can be manipulated in your experiment.

Identifying the target audience:

Determine the target audience for your experiment, considering factors such as demographics, interests, and behaviors. Your target audience should be representative of the larger population you aim to reach with your marketing efforts.

When segmenting your audience for the experiment, ensure that the groups are as similar as possible to minimize potential confounding factors.

In A/B or multivariate testing, this can be achieved through random assignment, which helps control for external influences and ensures a fair comparison between the tested variations.
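
In practice, random assignment is often implemented by hashing a stable user identifier, so the same visitor always sees the same variation across sessions. A minimal sketch, assuming a string user ID:

```python
import hashlib

def bucket(user_id: str, variations=("A", "B")) -> str:
    """Deterministically map a user to a variation. The same ID always
    lands in the same group, keeping each visitor's experience consistent."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return variations[int(digest, 16) % len(variations)]

print(bucket("customer-1042"))  # always returns the same variation for this ID
```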

Executing the experiment

With your experiment designed, it’s time to put it into action.

This involves several key considerations:

Timing and duration:

Choose the right timing and duration for your experiment based on factors such as the marketing channel, target audience, and the nature of the tested variables.

The duration of the experiment should be long enough to gather a sufficient amount of data for meaningful analysis but not so long that it negatively affects your marketing efforts or causes fatigue among your target audience.

In general, aim for a duration that allows you to reach a predetermined sample size or achieve statistical significance. This may vary depending on the specific experiment and the desired level of confidence.
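
A back-of-the-envelope way to turn a required sample size into a test duration is to divide by the traffic entering the experiment. Every number below is an assumption for illustration:

```python
# Assumed inputs: sample size from a power calculation, traffic from analytics
required_per_variation = 4_000
variations = 2
daily_visitors = 1_200  # visitors entering the experiment each day

days_needed = required_per_variation * variations / daily_visitors
print(f"Plan to run the test for at least {days_needed:.0f} days")  # ~7 days
```

Rounding up to whole weeks also helps average out day-of-week effects.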

Monitoring the experiment:

During the experiment, monitor its progress and performance regularly to ensure that everything is running smoothly and according to plan. This includes checking for technical issues, tracking key metrics, and watching for any unexpected patterns or trends.

If any issues arise during the experiment, address them promptly to prevent potential biases or inaccuracies in the results. Additionally, avoid making changes to the experimental design or variables during the experiment, as this can compromise the integrity of the results.

Analyzing the results

Once your experiment has concluded, it’s time to analyze the data and draw conclusions.

This involves two key aspects:

Statistical significance:

Statistical significance is a measure of the likelihood that the observed differences between the tested variations are due to the variables being tested rather than random chance. To determine statistical significance, you will need to perform a statistical test, such as a t-test or chi-squared test, depending on the nature of your data.

Generally, a result is considered statistically significant if the probability of observing a difference this large by chance alone (the p-value) falls below a predetermined threshold, often set at 0.05 or 5%. In other words, if the tested variables truly had no effect, a result this extreme would turn up less than 5% of the time.
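
For conversion-style outcomes (converted versus not converted), a chi-squared test on the raw counts is a common choice. A minimal sketch with scipy, using invented counts:

```python
from scipy.stats import chi2_contingency

# Hypothetical results: [converted, did not convert] for each variation
table = [
    [120, 4880],  # control: 120 of 5,000 visitors converted (2.4%)
    [155, 4845],  # treatment: 155 of 5,000 visitors converted (3.1%)
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference is statistically significant at the 5% threshold")
```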

Practical significance:

While statistical significance is crucial, it’s also essential to consider the practical significance of your results. This refers to the real-world impact of the observed differences on your marketing objectives and business goals.

To assess practical significance, consider the effect size of the observed difference (e.g., the percentage increase in email open rates) and the potential return on investment (ROI) of implementing the winning variation. This will help you determine whether the experiment results are worth acting upon and inform your marketing decisions moving forward.
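
Here is a quick worked example of that last step, translating a lift into money. The traffic and order-value figures are assumptions:

```python
# Assumed numbers for illustration
baseline_rate = 0.024      # control conversion rate
treatment_rate = 0.031     # winning variation's conversion rate
monthly_visitors = 50_000
avg_order_value = 80.0     # dollars

extra_orders = monthly_visitors * (treatment_rate - baseline_rate)
extra_revenue = extra_orders * avg_order_value
print(f"~{extra_orders:.0f} extra orders, ~${extra_revenue:,.0f} extra revenue per month")
# If this exceeds the cost of building and rolling out the change, the result
# is practically significant as well as statistically significant.
```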

A systematic approach helps you design, execute, and analyze your growth marketing experiments effectively, ultimately leading to better marketing outcomes and business growth.

Examples of Successful Marketing Experiments

In this section, we will explore three fictional case studies of successful marketing experiments that led to improved marketing outcomes. These examples will demonstrate the practical application of marketing experiments across different channels and provide valuable lessons that can be applied to your own marketing efforts.

Example 1: Redesigning a website for increased conversions

AcmeWidgets, an online store selling innovative widgets, noticed that its website conversion rate had plateaued.

They conducted a marketing experiment to test whether a redesigned landing page could improve conversions. They hypothesized that a more visually appealing and user-friendly design would increase conversion rates by 15%.

AcmeWidgets used A/B testing to compare their existing landing page (the control) with a new, redesigned version (the treatment). They randomly assigned website visitors to one of the two landing pages and tracked conversions over a period of four weeks.

At the end of the experiment, AcmeWidgets found that the redesigned landing page had a conversion rate 18% higher than the control. The results were statistically significant, and the company decided to implement the new design across its entire website.

As a result, AcmeWidgets experienced a substantial increase in sales and revenue.

Example 2: Optimizing email marketing campaigns

EcoTravel, a sustainable travel agency, wanted to improve the open rates of their monthly newsletter. They hypothesized that adding a sense of urgency to the subject line would increase open rates by 10%.

To test this hypothesis, EcoTravel used A/B testing to compare two different subject lines for their newsletter:

  • “Discover the world’s most beautiful eco-friendly destinations” (control)
  • “Last chance to book: Explore the world’s most beautiful eco-friendly destinations” (treatment)

EcoTravel sent the newsletter to a random sample of their subscribers. Half received the control subject line, and the other half received the treatment. They then tracked the open rates for both groups over one week.

The results of the experiment showed that the treatment subject line, which included a sense of urgency, led to a 12% increase in open rates compared to the control.

Based on these findings, EcoTravel incorporated a sense of urgency in their future email subject lines to boost newsletter engagement.

Example 3: Improving social media ad performance

FitFuel, a meal delivery service for fitness enthusiasts, was looking to improve its Facebook ad campaign’s click-through rate (CTR). They hypothesized that using an image of a satisfied customer enjoying a FitFuel meal would increase CTR by 8% compared to their current ad featuring a meal image alone.

FitFuel conducted an A/B test on their Facebook ad campaign, comparing the performance of the control ad (meal image only) with the treatment ad (customer enjoying a meal). They targeted a similar audience with both ad variations and measured the CTR over two weeks. The experiment revealed that the treatment ad, featuring the customer enjoying a meal, led to a 10% increase in CTR compared to the control ad. FitFuel decided to update its Facebook ad campaign with the new image, resulting in a more cost-effective campaign and higher return on investment.

Lessons learned from these examples

These fictional examples of successful marketing experiments highlight several key takeaways:

  • Clearly defined objectives and hypotheses: In each example, the companies had specific marketing objectives and well-formulated hypotheses, which helped guide their experiments and ensure relevant and actionable results.
  • Proper experimental design: Each company used the appropriate testing method for their experiment and carefully controlled variables, ensuring accurate and reliable results.
  • Data-driven decision-making: The companies analyzed the data from their experiments to make informed decisions about implementing changes to their marketing strategies, ultimately leading to improved outcomes.
  • Continuous improvement: These examples demonstrate that marketing experiments can improve marketing efforts continuously. By regularly conducting experiments and applying the lessons learned, businesses can optimize their marketing strategies and stay ahead of the competition.
  • Relevance across channels: Marketing experiments can be applied across various marketing channels, such as website design, email campaigns, and social media advertising. Regardless of the channel, the principles of marketing experimentation remain the same, making them a valuable tool for marketers in diverse industries.

By learning from these fictional examples and applying the principles of marketing experimentation to your own marketing efforts, you can unlock valuable insights, optimize your marketing strategies, and achieve better results for your business.

Common Pitfalls of Marketing Experiments and How to Avoid Them

Conducting marketing experiments can be a powerful way to optimize your marketing strategies and drive better results.

However, it’s important to be aware of common pitfalls that can undermine the effectiveness of your experiments. In this section, we will discuss some of these pitfalls and provide tips on how to avoid them.

Insufficient sample size

An insufficient sample size can lead to unreliable results and limit the generalizability of your findings. When your sample size is too small, you run the risk of not detecting meaningful differences between the tested variations or incorrectly attributing the observed differences to random chance.

To avoid this pitfall, calculate the required sample size for your experiment based on factors such as the expected effect size, the desired level of confidence, and the type of statistical test you will use.

In general, larger sample sizes provide more reliable and accurate results but may require more resources to conduct the experiment. Consider adjusting your experimental design or testing methods to accommodate a larger sample size if necessary.

Lack of clear objectives

Your marketing experiment may not provide meaningful or actionable insights without clear objectives. Unclear objectives can lead to poorly designed experiments, irrelevant variables, or difficulty interpreting the results.

To prevent this issue, establish specific, measurable, achievable, relevant, and time-bound (SMART) objectives before starting your experiment. These objectives should guide your entire experiment, from hypothesis formulation to data analysis, and ensure that your findings are relevant and useful for your marketing efforts.

Confirmation bias

Confirmation bias occurs when you interpret the results of your experiment in a way that supports your pre-existing beliefs or expectations. This can lead to inaccurate conclusions and suboptimal marketing decisions.

To minimize confirmation bias, approach your experiments with an open mind and be willing to accept results that challenge your assumptions.

Additionally, involve multiple team members in the data analysis process to ensure diverse perspectives and reduce the risk of individual biases influencing the interpretation of the results.

Overlooking external factors

External factors, such as changes in market conditions, seasonal fluctuations, or competitor actions, can influence the results of your marketing experiment and potentially confound your findings. Ignoring these factors may lead to inaccurate conclusions about the effectiveness of your marketing strategies.

To account for external factors, carefully control for potential confounding variables during the experimental design process. This might involve using random assignment, testing during stable periods, or controlling for known external influences.

Consider running follow-up experiments or analyzing historical data to confirm your findings and rule out the impact of external factors.

Tips for avoiding these pitfalls

By being aware of these common pitfalls and following best practices, you can ensure the success of your marketing experiments and obtain valuable insights for your marketing efforts. Here are some tips to help you avoid these pitfalls:

  • Plan your experiment carefully: Invest time in the planning stage to establish clear objectives, calculate an adequate sample size, and design a robust experiment that controls for potential confounding factors.
  • Use a hypothesis-driven approach: Formulate a specific, testable hypothesis based on existing knowledge or data to guide your experiment and focus on the most relevant variables and outcomes.
  • Monitor your experiment closely: Regularly check the progress of your experiment, address any issues that arise, and ensure that your experiment is running smoothly and according to plan.
  • Analyze your data objectively: Use statistical techniques to determine the significance of your results and consider the practical implications of your findings before making marketing decisions.
  • Learn from your experiments: Apply the lessons learned from your experiments to continuously improve your marketing strategies and stay ahead of the competition.

By avoiding these common pitfalls and following best practices, you can increase the effectiveness of your marketing experiments, gain valuable insights into customer preferences and behaviors, and ultimately drive better results for your business.

Building a Culture of Experimentation

To truly reap the benefits of marketing experiments, it’s essential to build a culture of experimentation within your organization. This means fostering an environment where curiosity, learning, data-driven decision-making, and collaboration are valued and encouraged.

Encouraging curiosity and learning within your organization

Cultivating curiosity and learning starts with leadership. Encourage your team to ask questions, explore new ideas, and embrace a growth mindset.

Promote ongoing learning by providing resources, such as training programs, workshops, or access to industry events, that help your team stay up-to-date with the latest marketing trends and techniques.

Create a safe environment where employees feel comfortable sharing their ideas and taking calculated risks. Emphasize the importance of learning from both successes and failures and treat every experiment as an opportunity to grow and improve.

Adopting a data-driven mindset

A data-driven mindset is crucial for successful marketing experimentation. Encourage your team to make decisions based on data rather than relying on intuition or guesswork. This means analyzing the results of your experiments objectively, using statistical techniques to determine the significance of your findings, and considering the practical implications of your results before making marketing decisions.

To foster a data-driven culture, invest in the necessary tools and technologies to collect, analyze, and visualize data effectively. Train your team on how to use these tools and interpret the data to make informed marketing decisions.

Regularly review your data-driven efforts and adjust your strategies as needed to continuously improve and optimize your marketing efforts.

Integrating experimentation into your marketing strategy

Establish a systematic approach to conducting marketing experiments to fully integrate experimentation into your marketing strategy. This might involve setting up a dedicated team or working group responsible for planning, executing, and analyzing experiments or incorporating experimentation as a standard part of your marketing processes.

Create a roadmap for your marketing experiments that outlines each project’s objectives, hypotheses, and experimental designs. Monitor the progress of your experiments and adjust your roadmap as needed based on the results and lessons learned.

Ensure that your marketing team has the necessary resources, such as time, budget, and tools, to conduct experiments effectively. Set clear expectations for the role of experimentation in your marketing efforts and emphasize its importance in driving better results and continuous improvement.

Collaborating across teams for a holistic approach

Marketing experiments often involve multiple teams within an organization, such as design, product, sales, and customer support. Encourage cross-functional collaboration to ensure a holistic approach to experimentation and leverage each team’s unique insights and expertise.

Establish clear communication channels and processes for sharing information and results from your experiments. This might involve regular meetings, shared documentation, or internal presentations to keep all stakeholders informed and engaged.

Collaboration also extends beyond your organization. Connect with other marketing professionals, industry experts, and thought leaders to learn from their experiences, share your own insights, and stay informed about the latest trends and best practices in marketing experimentation.

By building a culture of experimentation within your organization, you can unlock valuable insights, optimize your marketing strategies, and drive better results for your business.

Encourage curiosity and learning, adopt a data-driven mindset, integrate experimentation into your marketing strategy, and collaborate across teams to create a strong foundation for marketing success.

If you’re new to marketing experiments, don’t be intimidated—start small and gradually expand your efforts as your confidence grows. By embracing a curious and data-driven mindset, even small-scale experiments can lead to meaningful insights and improvements.

As you gain experience, you can tackle more complex experiments and further refine your marketing strategies.

Remember, continuous learning and improvement is the key to success in marketing experimentation. By regularly conducting experiments, analyzing the results, and applying the lessons learned, you can stay ahead of the competition and drive better results for your business.

So, take the plunge and start experimenting today—your marketing efforts will be all the better.

What Is Experiment Marketing? (With Tips and Examples)

Do you feel like your marketing efforts aren’t quite hitting the mark? There’s an approach that could open up a whole new world of growth for your business: marketing experimentation.

This isn’t your typical marketing spiel. It’s about trying new things, seeing what sticks, and learning as you go. Think of it as the marketing world’s lab, where creativity meets strategy in a quest to wow audiences and break the internet.

In this article, we’ll talk about what marketing experiments are, offer some killer tips to implement and analyze marketing experiments, and showcase examples that turned heads.

Ready to dive in?

What is a marketing experiment?

Marketing experimentation is like a scientific journey into how customers respond to your marketing campaigns.

Imagine you’ve got this wild idea for your PPC ads. Instead of just hoping it’ll work, you test it. That’s your experiment. You’re not just throwing stuff at the wall to see what sticks. You’re carefully choosing your shot, aiming, and then checking the impact.

Marketing experiments involve testing lots of things, like new products and how your marketing messages affect people’s actions on your website.

Why should you run marketing experiments?

Running a marketing experiment before implementing new strategies is essential because it serves as a form of insurance for future marketing endeavors.

By conducting marketing experiments, you can assess potential risks and ensure that your efforts align with the desired outcomes you seek.

One of the main advantages of marketing experiments is that they provide insight into your target audience, helping you better understand your customers and optimize your marketing strategies for better results. 

By ensuring that your new marketing strategies are the most impactful, you’ll achieve better campaign performance and a better return on investment.

How to design marketing experiments

Now that we’ve unpacked what marketing experiments are, let’s dive deeper. To design a successful marketing experiment, follow the steps below.

1. Identify campaign objectives

Establishing clear campaign objectives is essential. What do you want to accomplish? What are your most important goals? 

To identify campaign objectives, you can:

  • Review your organizational goals
  • Brainstorm with your team
  • Use the SMART framework (Specific, Measurable, Achievable, Relevant, Time-bound) to define your objectives

Setting specific objectives ensures that your marketing experiment is geared towards addressing critical business challenges and promoting growth. This focus will also help you:

  • Select the most relevant marketing channels
  • Define success metrics
  • Create more successful campaigns
  • Make better business decisions

2. Make a good hypothesis

Making a hypothesis before conducting marketing experiments is crucial because it provides a clear direction for the experiment and helps in setting specific goals to be achieved.

A hypothesis allows marketers to articulate their assumptions about the expected outcomes of various changes or strategies they plan to implement.

By formulating a hypothesis, marketers can create measurable and testable statements that guide the experiment and provide a basis for making informed decisions based on results.

It helps in understanding what impact certain changes may have on your customers or desired outcomes, thus enabling marketers to design effective experiments that yield valuable insights.

3. Select the right marketing channels

Choosing the right marketing channels is crucial for ensuring that your campaign reaches your customers effectively. 

To select the most appropriate channels, you should consider factors such as the demographics, interests, and behaviors of your customers, as well as the characteristics of your product or service.

Additionally, it’s essential to analyze your competitors and broader industry trends to understand which marketing channels are most effective in your niche. 

4. Define success metrics

Establishing success metrics is a crucial step in evaluating the effectiveness of your marketing experiments. 

Defining success metrics begins with identifying your experiment’s objectives and then choosing relevant metrics that can help you measure your success. You’ll also want to set targets for each metric.

Common success metrics include:

  • Conversion rate
  • Cost per acquisition
  • Customer lifetime value
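
As a concrete (and entirely hypothetical) illustration, all three metrics reduce to simple arithmetic once you have the campaign numbers:

```python
# Assumed campaign numbers, purely for illustration
visitors = 20_000
conversions = 600
ad_spend = 9_000.0        # total campaign cost, dollars
avg_purchase = 75.0       # average order value
purchases_per_year = 4
retention_years = 3

conversion_rate = conversions / visitors
cost_per_acquisition = ad_spend / conversions
lifetime_value = avg_purchase * purchases_per_year * retention_years

print(f"Conversion rate: {conversion_rate:.1%}")              # 3.0%
print(f"Cost per acquisition: ${cost_per_acquisition:.2f}")   # $15.00
print(f"Customer lifetime value: ${lifetime_value:.2f}")      # $900.00
```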

When selecting appropriate metrics for measuring the success of your marketing experiments, you should consider the nature of the experiment itself – whether it involves email campaigns, landing pages, blogs, or other platforms.

For example, if the experiment involves testing email subject lines, tracking the open rate would be crucial to understanding how engaging the subject lines are for the audience.

When testing a landing page, metrics such as the submission rate during the testing period can reveal how effective the page is in converting visitors.

On the other hand, if the experiment focuses on blogs, metrics like average time on page can indicate the level of reader engagement.

How to implement marketing experimentation

Once you’ve finished designing your marketing experiments, it’s time to put them into action.

This involves setting up test groups, running tests, and then monitoring and adjusting the marketing campaigns as needed.

Let’s see the implementation process in more detail!

1. Setting up test groups

Establishing test groups is essential for accurately comparing different marketing strategies. To set up test groups, you need to define your target audience, split them into groups, create various versions of your content, and configure the test environment.

Setting up test groups ensures your marketing experiment takes place under controlled conditions, enabling you to compare results more accurately. 

This, in turn, will help you identify the most effective tactics for your audience.

2. Running multiple tests simultaneously

By conducting multiple tests at the same time, you’ll be able to:

  • Collect more data and insights
  • Foster informed decision-making
  • Improve campaign performance

A/B testing tools that allow for simultaneous experiments can be a valuable asset for your marketing team. By leveraging these tools, you can streamline your experiment marketing process and ensure that you’re getting the best results from your efforts.
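
One caveat worth adding: the more tests you evaluate at once, the higher the chance that at least one apparent winner is a false positive. A standard safeguard is to correct the p-values across the batch, as in this sketch with statsmodels (the p-values are invented):

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from five experiments evaluated in the same period
p_values = [0.04, 0.01, 0.20, 0.03, 0.30]

# The Holm correction keeps the overall false-positive rate near 5%
reject, corrected, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for raw, adj, significant in zip(p_values, corrected, reject):
    print(f"raw p = {raw:.2f} -> corrected p = {adj:.2f}, significant: {significant}")
```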

3. Monitoring and adjusting the campaign

Monitoring and adjusting your marketing experiment campaign is essential to ensure that the experiment stays on track and achieves its objectives. 

To do so, you should regularly:

  • Review the data from your experiment to identify any issues.
  • Make necessary adjustments to keep the experiment on track.
  • Evaluate the results of those adjustments.

Proactive monitoring and adjustment of your campaign helps identify potential problems early, enabling you to make decisions based on data and optimize your experiments.

How to analyze your experiment marketing campaign

As discussed above, after implementing your marketing experiment, you’ll want to analyze the results and learn from the insights gained.

Remember that the insights gained from your marketing experiments are not only valuable for the current campaign you’re running but also for informing your future marketing initiatives.

By continuously iterating and improving your marketing efforts based on what you learn from your experiments, you can unlock sustained growth and success for your business.

1. Evaluating the success of your campaign

Assessing the success of your marketing experiment is vital; essentially, it involves determining whether the campaign met its objectives and whether the marketing strategies were effective.

To evaluate the success of your marketing campaigns, you can:

  • Compare website visits during the campaign period with traffic from a previous period
  • Utilize control groups to measure the effect of the campaign
  • Analyze data such as conversion rates and engagement levels

2. Identifying patterns and trends

Recognizing patterns and trends in the data from your marketing experiments can provide valuable insights that can be leveraged to optimize future marketing efforts. 

Patterns indicate that many different potential customers are experiencing the same reaction to your campaigns, for better or for worse. 

To identify these patterns and trends, you can:

  • Visualize customer data
  • Combine experiments and data sources
  • Conduct market research
  • Analyze marketing analytics

By identifying patterns and trends in your marketing experiment data, you can uncover insights that will help you refine your marketing strategies and make data-driven decisions for your future marketing endeavors.

3. Applying learnings to future campaigns

Leveraging the insights gained from your marketing experiment in future campaigns ensures that you can continuously improve and grow the effectiveness of your marketing efforts. 

Applying learnings from your marketing experiments, quite simply, involves:

  • analyzing the data,
  • identifying the successful strategies,
  • documenting key learnings, and
  • applying these insights to future campaigns

By consistently applying the learnings from your marketing experiments to your future digital marketing efforts, you can ensure that your marketing strategies are data-driven, optimized for success, and always improving.

3 real-life examples of experiment marketing

Now that we’ve talked about the advantages of experiment marketing and the steps involved, let’s dive into real-life cases that showcase the impact of this approach.

By exploring these experiment ideas, you’ll get a clear picture of how you can harness experiment marketing to get superior results.

You can take these insights and apply them to your own marketing experiments, boosting your campaign’s performance and your ROI.

Example 1: Homepage headline experiment

Bukvybag, a Swedish fashion brand selling premium bags, was on a mission to find the perfect homepage headline that would resonate with its website visitors.

They tested multiple headlines with OptiMonk’s Dynamic Content feature to discover which headline option would be most successful with their customers and boost conversion rates.

Take a look at the headlines they experimented with, which all focused on different value propositions.

Original: “Versatile bags & accessories”

Variant A: “Stand out from the crowd with our fashion-forward and unique bags”

Variant B: “Discover the ultimate travel companion that combines style and functionality”

Variant C: “Premium quality bags designed for exploration and adventure”

The results? Bukvybag’s conversions shot up by a whopping 45% as a result of this A/B testing!

Example 2: Product page experiment

Varnish & Vine, an ecommerce store selling premium plants, discovered that there was a lot they could do to optimize their product pages.

They turned to OptiMonk’s Smart Product Page Optimizer and used the AI-powered tool to achieve a stunning transformation.

First, the tool analyzed their current product pages. Then, it crafted captivating headlines, subheadlines, and lists of benefits for each product page automatically, which were tailored to their audience.

After the changes, the tool ran A/B tests automatically, so the team was able to compare their previous results with their AI-tailored product pages.

The outcome? A 12% boost in orders and a jaw-dropping 43% surge in revenue, all thanks to A/B testing the AI-optimized product pages.

Example 3: Email popup experiment

Crown & Paw, an ecommerce brand selling artistic pet portraits, had been using a simple Klaviyo popup that was underperforming, so they decided to kick it up a notch with a multi-step popup instead.

On the first page, they offered an irresistible discount, and as a plus they promised personalized product recommendations.

In the second step, once visitors had demonstrated that they wanted to grab that 10% off, Crown & Paw asked a few simple questions to learn about their interests.

For the 95% who answered their questions, Crown & Paw revealed personalized product recommendations alongside the discount code in the final step.

The result? A 4.03% conversion rate, a massive 2.5X increase over their previous email popup strategy.

This is tangible proof that creatively engaging your audience can work wonders.

FAQ

What is an example of a market experiment?

An example of a marketing experiment could involve an e-commerce company testing the impact of offering free shipping on orders over $50 for a month. If they find that the promotion significantly increases total sales revenue and average order value, they may decide to implement the free shipping offer as a permanent strategy.

What is experimental data in marketing?

Experimental data in marketing refers to information collected through tests or experiments designed to investigate specific hypotheses. This data is obtained by running experiments and measuring outcomes to draw conclusions about marketing strategies.

How do you run a marketing experiment?

To run a marketing experiment, start by defining your objective and hypothesis. Then, create control and experimental groups, collect relevant data, analyze the results, and make decisions based on the findings. This iterative process helps refine marketing strategies for better performance.

What are some real-life examples of experiment marketing?

Real-life examples of marketing experiments include A/B testing email subject lines to determine which leads to higher open rates, testing different ad creatives to measure click-through rates, and experimenting with pricing strategies to see how they affect sales and customer behavior. 

How to brainstorm and prioritize ideas for marketing experiments?

Start by considering your current objectives and priorities for the upcoming quarter or year. Reflect on your past marketing strategies to identify successful approaches and areas where performance was lacking. Analyze your historical data to gain insights into what has worked previously and what has not. This examination may reveal lingering uncertainties or gaps in your understanding of which strategies are most effective. Use this information to generate new ideas for future experiments aimed at improving performance. After generating a list of potential strategies, prioritize them based on factors such as relevance to your goals, timeliness of implementation, and expected return on investment.

Wrapping up

Experiment marketing is a powerful tool for businesses and marketers looking to optimize their marketing strategies and drive better results. 

By designing, implementing, analyzing, and learning from marketing experiments, you can ensure that your marketing efforts are data-driven, focused on the most impactful tactics, and continuously improving.

Want to level up your marketing strategy with a bit of experimenting? Then give OptiMonk a try today by signing up for a free account!

21 Marketing Experiments & Test Ideas

Marketing works best when it’s led by evidence.

In other words, it works best as a series of marketing experiments that progressively build upon one another through iterative testing.

This scientific approach to marketing works because it improves with every success or failure and plays to the strengths of marketing: an abundance of data, a clear goal, and a disproportionate ratio of ideas to resources.

One of the toughest parts of implementing a marketing experimentation program is getting the process right. Many teams flit from one shiny A/B test to another, with little accountability or long-term strategy. 

In this post, we’ll focus on two essential parts of marketing experimentation: keeping a healthy backlog of ideas and experiments to test, and knowing how to prioritise and track the outcomes once complete.

What are we looking at in this article?

The focus of this article is to suggest a range of marketing experiment and test ideas that will improve the performance of your website and campaigns. You should be able to take these ideas, apply them to your marketing strategies and work your way to better results.

More importantly, though, I hope these ideas provide enough examples for you to go away and find your own experiments, too.

So we’re going to start by explaining why marketing experiments are important and look at how you can run experiments successfully. Then, we’ll get into the experiment and test ideas that put these concepts into actionable examples that you can try for yourself.

As you’ll see from the experiment and test ideas later, some of these focus on very specific elements, such as landing page navigation, while others are broader concepts, like optimising pricing pages, that you’ll need to play around with more for yourself.

Regardless, each idea we cover in this article will look at specific concepts and insights that you can learn from and apply to your experiments.

Why are marketing experiments important?

Marketing experiments generally have one of two aims: to prove an existing theory or try something new entirely. In other words, they provide a means of knowing which marketing strategies work (or don’t) and a measurable approach to trying out new ideas – both of which are incredibly important.

In today’s data-driven age, marketers don’t have to put their faith in concepts and theories. We can prove the value of marketing campaigns, design choices, growth strategies and key business decisions.

As Google puts it, in a guide entitled Proving Marketing Impact (PDF), “one of the most serious challenges in marketing is identifying the true impact of a given marketing spend change.”

“The ripple effect is that marketers must function as scientists, conducting experiments when it comes to allocating budgets—whether by adapting the media mix, trying out different forms of creative, or exploring new marketing channels altogether. By measuring the full value of digital media investments through controlled marketing experiments, we can prove what is effective and what is not.” Proving Marketing Impact: 3 Keys to Doing It Right, Think with Google

Without a refined experiment and testing system in place, you’re always shooting in the dark with your marketing actions. If you can’t prove the effectiveness of your campaigns, then you can’t know they’re profitable and business growth is going to suffer.

Essentially, you’re gambling with your marketing investment and the odds are stacked against you.

On the other hand, a data-driven model of testing and experimentation puts you in control of your marketing spend. It allows you to prove the ROI of marketing actions, optimise strategies to improve performance and make intelligent marketing decisions.

In practical terms, this allows you to:

  • Prove the effectiveness of marketing campaigns
  • Stop wasting money on ineffective strategies
  • Test new design ideas
  • Optimise campaigns and pages to maximise performance
  • Try new marketing strategies
  • Learn from previous experiments
  • Make smarter business decisions
  • Identify & stop expensive mistakes
  • Innovate new ideas

A good example of this would be our journey with web form optimisation, which not only led us to 700+% higher conversion rates but also led us to innovate a multi-step form concept that nobody else was using at the time.

By innovating this idea (and proving that it worked), we reaped the benefits of the most effective web forms around while everyone else was stuck with lower conversion rates. And this is the thing about marketing innovation: by getting there first, you capitalise on the benefits while everyone else is playing catch-up.

Marketing experimentation is the only path toward true innovation and climbing your way to the top of your industry.

The secrets of a successful marketing experiment

In this article for HubSpot, Kayla Carmicheal runs through the five key steps you should follow for any successful marketing experiment:

  • Make a hypothesis: Whether you’re proving an existing theory or going into an experiment hoping to prove that a new strategy is effective, the first step is always to define a specific and measurable hypothesis.
  • Collect research: This should include audience research, reaffirming (possibly even modifying) your hypothesis and identifying challenges you’ll face throughout your experiment – e.g.: variables to consider, data attribution issues, etc.
  • Define your measurement metrics: With your research complete, it’s time to confirm and define the measurement metrics that will prove your outcome and make sure you have the necessary tools to collect that data.
  • Run your experiment: By now, you should have everything in place to run your experiment until it reaches statistical significance and your hypothesis is either proven right or wrong.
  • Analyse the results: With your experiment complete, the final stage is to analyse the results and extract meaningful insights from your tests.

That’s a pretty good assessment of the process you should follow and you can find more details about each step in Kayla Carmicheal’s article – note: I’ve paraphrased each step above with my own explanation.

This process is applicable to every marketing experiment and test you could think of, but it doesn’t guarantee success. To ensure your marketing experiments deliver valuable insights that result in better marketing and business decisions, there are a few key principles you need to master:

  • Choosing the right experiments/tests: This starts with prioritising hypotheses and opportunities with the greatest potential benefit.
  • Achieving reliable results: The key to this is mitigating variables that could skew results and running experiments long enough to achieve statistical significance.
  • Learning from your results: Not only should you learn valuable lessons from individual experiments, but you should also build up a history of experience and insights from your ongoing, collective testing strategy.

As I explain in our Landing Page Optimisation: 101 Tips, Strategies & Examples article, the art of deciding what to test and experiment with is understanding what truly impacts user behaviour.

“One of the worst A/B testing mistakes you can make is testing every minute detail (button colours, font styles, single words). Stick to testing meaningful variations like two different landing pages or one with multi-step forms and one without. These are factors that genuinely impact the consumer experience and whether people do business with you.”

Running marketing experiments requires time and a certain amount of financial investment. So, as with all marketing strategies, you need to make sure that your tests generate a positive ROI.

Start by testing the strategies and elements on your page that should deliver the strongest positive outcome, use your research to reinforce your decisions and apply what you learn to future experiments.

If you’re short of ideas, don’t worry. We’ve got you covered through the remainder of this article.

Website test & experiment ideas

#1: Optimise loading times

It still amazes me that the average loading time of websites and landing pages remains as slow as it is in 2020. It’s not like the impact of page speed on conversions and just about every user metric that matters is a secret – it’s been documented time and again.

Yet, according to research from Unbounce in 2019, only 3% of marketers listed improving page speed as one of their top priorities.


Unbelievably, page speed sat at the bottom of the priority list for marketers in the report, while A/B testing and optimising pages came out on top. You have to wonder: what on Earth are marketers optimising if they’re not interested in making their pages faster?

Let’s go back to the key principle we talked about earlier: start with the factors that will have the greatest positive impact.

Page speed is clearly one of these, so it earns first place on our list. If you’re not sure how to approach page speed optimisation, you can find out how we improved our loading times by 70.39% with roughly 40 minutes’ worth of work.

Optimising page speed should be one of the most profitable experiments you run, too. There are plenty of free tools available for measuring loading times and pinpointing specific issues – such as GTmetrix and Google’s PageSpeed Insights tool.


Here’s a quick summary of the steps we took in the article linked above:

  • Use a content delivery network (CDN)
  • Use WP Engine for hosting WordPress sites
  • Use a web caching plugin/service
  • Add Expires headers
  • Use a fast theme for WordPress or other CMS platforms
  • Compress your images
  • Clean up your database
  • Compress your website with Gzip
  • Fix your broken links
  • Reduce the number of redirects
  • Minify your CSS & JS files
  • Replace PHP with static HTML where possible
  • Link to stylesheets – don’t use @import
  • Turn off pingbacks and trackbacks
  • Enable Keep-Alive
  • Specify image dimensions
  • Specify a character set in HTTP headers
  • Put CSS at the top and JS at the bottom
  • Disable hotlinking of images
  • Switch off all plugins you don’t use
  • Minimise round trip times (RTTs)
  • Use CSS Sprites

You can find more details about each of these steps in our page speed optimisation article. Collectively, these make a significant impact on loading times and, in turn, all of your marketing KPIs should benefit.
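
To make a couple of those steps concrete, here’s a minimal sketch using only Node built-ins – in practice your CMS, hosting or CDN will usually handle this for you – showing far-future cache headers and Gzip compression in action:

```typescript
// Minimal sketch of two items from the list above: far-future cache headers
// ("Add Expires headers") and Gzip compression, using only Node built-ins.
import * as http from "http";
import * as zlib from "zlib";

const server = http.createServer((req, res) => {
  const body = "<html><body><h1>Fast page</h1></body></html>";

  // Tell browsers to cache the response for a year instead of re-fetching.
  res.setHeader("Cache-Control", "public, max-age=31536000");
  res.setHeader("Content-Type", "text/html");

  // Only compress when the client advertises gzip support.
  const acceptEncoding = String(req.headers["accept-encoding"] ?? "");
  if (/\bgzip\b/.test(acceptEncoding)) {
    res.setHeader("Content-Encoding", "gzip");
    res.end(zlib.gzipSync(body));
  } else {
    res.end(body);
  }
});

server.listen(8080); // e.g. curl -H "Accept-Encoding: gzip" localhost:8080
```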

#2: Prioritise hero sections

The hero section is the main feature of every page (and email) that matters. It’s the first thing users see when they click through to your website or landing page, or open one of your emails, and it’s the space where your primary message makes its first impression.

In other words, this is one of the most important features of any page with a conversion goal.


Here at Venture Harbour, we’re constantly testing and experimenting with hero section designs to increase conversion rates, engagement and other crucial KPIs. If there’s one theme that’s been present throughout the years, it’s that simple design and clear messaging are key.

We’ve covered this before in our 3 Homepage Design Tweaks That Increased Our Conversion Rates By 500% article. Here’s a snippet that’s particularly relevant:

“As soon as we lifted the CTA from our hero section and replaced it with a more compelling value proposition, we saw email sign-ups from our homepage increase from 11% to 17% (a 55% increase). That’s pretty impressive when all we really did was remove something designed to convert and rethink our message.”

We’ve consistently found that hero sections without CTAs perform better than those with them – provided our message delivers enough of a value proposition to encourage users to scroll beyond the fold.

This may come as a surprise but multiple tests over the years have confirmed this for us, which is a perfect example of learning from previous tests and applying insights to future experiments.

#3: Remove navigation from landing pages

CTAs in hero sections aren’t the only common design convention that you can prove wrong through testing. In fact, you’ll find some of the most prevalent design patterns are surprisingly damaging for website performance once your testing process is up and running.

Another good example is the use of navigation on landing pages. If you stick to the Goody Two-Shoes guide to UX design, every page should have accessible navigation so that users can move to other parts of your website.

In the real world, though, we’ve got marketing objectives to think about and the last thing you want someone to do on your landing page is click through to another part of your site – away from your CTAs.

Over time, an increasing number of businesses have found that removing the header navigation from landing pages is the way to go.


There are genuine UX benefits for the end user, too. By removing the option of navigation, you increase the clarity of your landing page and improve user focus on your key message while reducing the risk of distractions and decision fatigue – all of which is good for your conversion goals and the short attention span of your typical visitor.

#4: Optimise your sales funnel

Optimising your sales funnel is one of the biggest tasks you can take on in marketing experimentation and conversion optimisation. There are no shortcuts to doing this (none that are worth taking, anyway) and this is something you’ll have to continue doing for the foreseeable future.

For an in-depth look at funnel optimisation, we’ve got two guides that you can read through and save for reference:

  • Funnel Optimisation 101: 5 Steps to Fixing a Leaky Marketing Funnel
  • Marketing Funnel Strategies: 5 Steps to Increasing Sales in 2020


Your sales funnel is designed to capture leads at every stage of the consumer journey and act as a framework for nurturing prospects along the buying process – from first-time visitor to paying customer.

The aim of optimising your sales funnel is to increase the percentage of leads that go on to make the initial purchase and the percentage of existing customers who go on to make further purchases.

One of your key objectives is to identify where leads are dropping out of your sales funnel and put strategies in place to keep these prospects moving along the buying process. This is where our guide to fixing a leaky marketing funnel will come in handy.

Content & messaging experiment ideas

#5: Test different messages

When it comes to convincing users to take profitable action, it’s your content and web copy that have the biggest influence. Ultimately, it’s your marketing messages that decide whether you convert users into leads or customers.

So the majority of your experiments and testing should be aimed at pinpointing the right message for campaigns, ads, pages and emails.

Focus on the key selling points and benefits of your offer, as well as the pain points your target audiences experience (and the solutions you provide). Be specific. Don’t try to squeeze every point into one message. Test different, highly focused messages that home in on specific selling points and run with the variation that performs best.

If you can only deliver one message to every user, you need to find the message that’s most effective in the highest percentage of scenarios.

You can also test personalisation features to deliver different messages to users based on their individual interests. For example, Unbounce offers a great feature called Dynamic Text Replacement, which matches the headline of your landing page to the search terms a user typed into Google prior to clicking through.

This allows you to create and test multiple messages for different buyer personas and deliver them with accuracy. Instead of showing one message to all, you can pinpoint the unique pain points of different user types and deliver a more compelling offer.
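
To illustrate the concept – this is a hypothetical sketch, not Unbounce’s actual implementation – dynamic text replacement boils down to reading a keyword from the landing page URL and swapping it into the headline:

```typescript
// Hypothetical sketch of the dynamic-text-replacement concept (not
// Unbounce's implementation): read a keyword from the URL and swap it
// into the headline, falling back to a default when none is present.
function replaceHeadline(defaultHeadline: string): void {
  const params = new URLSearchParams(window.location.search);
  const keyword = params.get("utm_term"); // assumed parameter name
  const headline = document.querySelector("h1");
  if (!headline) return;

  headline.textContent = keyword
    ? `The Best ${keyword} For Your Business` // illustrative template only
    : defaultHeadline;
}

// e.g. /landing-page?utm_term=accounting%20software
replaceHeadline("The Best Software For Your Business");
```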

#6: Experiment with cognitive biases

The secret of effective marketing messages is understanding the psychology of consumers. Even the biggest, baddest CEO is a human being flawed with cognitive biases that affect their decision-making capabilities.

The most successful marketing and advertising campaigns use these biases to create incentive, play with emotions and change the perception of potential buyers.


For an in-depth look at how you can use cognitive biases to create killer marketing messages, take a look at our 9 Cognitive Biases That Influence Buyer Decisions article where we explore the following:

  • Confirmation bias : Where people seek, interpret and remember information in a way that confirms their existing ideas.
  • Loss aversion : Which describes how we fear loss considerably more than we value gaining something of the same worth – e.g.: we’re more frustrated by losing £10 than we are happy to gain £10.
  • Anchoring bias : Where people place more significance on the first piece of information they receive.
  • The bandwagon effect : Which explains why people queue up for days to buy an iPhone, believe they’re getting good deals on Black Friday, and why social media trends exist at all.
  • The mere exposure effect : The reason people are more likely to buy from brands they know well.
  • The endowment effect : Explains why people place inflated value on items they own – or believe they own.
  • Sunk cost bias : Where people continue to do something because they’ve already invested time, effort or money into it and fear this investment will be lost – even if they’re better off calling it quits now.
  • The halo effect : How our first impressions influence the way we interpret further information.
  • The serial position effect : The reason people interpret the first and last pieces of information on a list as the most important and remember them more clearly.

Each cognitive bias listed above links to an article looking at how you can use it to create more effective marketing messages.

#7: Capture leads from content

Marketers and brands spend immeasurable time creating content, but how much of it actually generates leads? Think of all those blog posts just sitting there on your website and the time/money it took to produce them – are they paying their way?

If the answer is no, then you’ve got a content marketing ROI problem because everything you publish should contribute to your key marketing objectives: generating leads, sales and profit.


One of the most popular ways to generate leads from web content is to use exit-intent pop-ups, and they’ve got their merits. In fact, we take a look at the pros and cons of using them in our Do Exit-Intent Popups Actually Increase Conversions? article.

You can also use live chat widgets and chatbots to prompt user action or request permission to send users notifications – two other strategies with their own benefits and drawbacks.

After testing out various solutions, our current approach is to add dynamic CTAs throughout our blog posts, promoting our most recent ventures.

The key to making this strategy work is to dynamically insert our CTAs in every blog post and keep updating our old content so that it continues to generate traffic, keeps CTAs in front of eyes and maintains a return on our content investment.
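
As a simplified sketch of how the dynamic insertion can work (the selector and copy here are hypothetical, not our production setup), a small script can drop a CTA block after the Nth paragraph of every post, so updating one template updates all of your old content at once:

```typescript
// Simplified sketch of dynamic CTA insertion: inject a promo block after
// the Nth paragraph of every post. Selector and copy are hypothetical.
function injectCta(afterParagraph: number = 3): void {
  const paragraphs = document.querySelectorAll("article p");
  if (paragraphs.length <= afterParagraph) return; // post too short

  const cta = document.createElement("aside");
  cta.className = "inline-cta";
  cta.innerHTML =
    '<a href="/latest-venture">See what we\'re building right now</a>';

  // Because this runs on every page load, updating the template above
  // updates the CTA across every old post at once.
  paragraphs[afterParagraph].insertAdjacentElement("afterend", cta);
}

document.addEventListener("DOMContentLoaded", () => injectCta());
```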

#8: Experiment with content types

As the internet and consumer habits evolve, different types of content assume new roles. Just about every content marketing trends article for the past five years has been talking about the rise of video content and we’ve seen this with the transformation of Instagram from an image-only network to one dominated by video clips.

In the past few years, podcasts have been one of the biggest growth opportunities for content marketers, journalists, celebrities and anyone with a voice.

Video game streaming is now a multibillion-dollar industry where people pay to watch strangers play anything from the latest FIFA to retro console games.

Meanwhile, the infographics craze is over and some of the worst content trends like clickbait and listicles are dying a slow death.

Some of these trends are here to stay, but it’s very difficult for new publishers to enter the podcast race or for new gamers to build a huge following on Twitch. You have to react quickly to content trends and build your audience before the opportunities are soaked up by everyone else – just take a look at how hard it is for YouTubers starting out now.

Experiment with content types, find out what works with your target audiences and make your mark before opportunities are saturated by your rivals.

CRO test & experiment ideas

#9: Test your CTAs

We’ve written plenty about testing and optimising calls to action (CTAs) on the Venture Harbour blog. Here’s a selection of some articles that will help you run the best tests and maximise results:

  • 19 Compelling Call to Action Examples You Can Steal For Yourself
  • Call to Action Best Practices: 15 Tips to Write Irresistible CTAs
  • CTA Psychology: 11 Principles That Make Users Click

That should give you plenty of guidance on how to create effective CTAs and optimise them to improve results, along with an understanding of the psychology behind irresistible CTAs.


As mentioned earlier, the most important principle is to remember that the message in your CTA has the greatest influence upon users. If your CTA copy is compelling, you stand the best chance of convincing users to take action.

There are plenty of design concepts that are important, too, of course:

  • Text size and weights
  • Colour (again, mostly for contrast)

Those are the fundamental design concepts you should focus on while refining the message in your CTA to make it as convincing as possible. Try not to get lost in testing button colours and font styles because they don’t have a great influence.

As always, it comes down to testing the details that have maximum impact.

In terms of design, it’s mainly about bold, full-screen CTAs with plenty of contrast that grab attention and make your message jump off the page. Then, it’s all about the message itself and how successfully it compels users to take action.

One final thing to keep in mind is that the content before your CTAs is crucially important too (higher up the page and possibly on previous pages, too). This is the content that primes users for the message in your CTAs and warms them up for taking action – the CTA itself should give them the final push.

#10: Test multi-step forms

After almost a decade of marketing experiments and testing, one of our biggest breakthroughs to date has been the discovery of multi-step forms. You can find out how we’ve managed to increase conversions by up to 743% over the course of several years.

If we had simply followed the usual form design best practices, we never would have stumbled across the multi-step format. Instead, we would have simply tried to make our forms as short as possible.

However, we tested each and every best practice we knew (or thought we knew) and consistently found that shorter forms were being outperformed by longer, more optimised alternatives – and it turned out we weren’t alone.

Generally speaking, longer forms convert more users, as long as you’re able to increase the incentive sufficiently to justify the length. However, we were still finding conversion rates were lower than we wanted, so we started pinpointing the real UX issues of completing forms:

  • Typing fatigue
  • Mobile optimisation
  • Irrelevant fields
  • Completion time

We perceive longer forms as a UX issue because traditional forms are a pain to complete. It can take minutes for users to fill out a few fields successfully due to the amount of typing required, and the workload only increases on mobile, where selecting fields and typing is even more frustrating.

It’s only natural that, when users see a long form, they want to run for the hills.

So we decided to remove this perception of workload by only showing the first question in a multi-step format. Now, the psychological barrier preventing users from starting the form process disappears.


Next, we decided to solve the typing problem by switching from text inputs to selectable image buttons wherever possible. Not only does this remove typing fatigue and speed up completion times, it also allows us to guide users through predefined paths and segment leads as they complete our forms.

Taking this concept one step further, we began to use a rule-based technology called conditional logic that filters out form questions based on user inputs so they only see questions relevant to the information they’ve already provided.


For example, if they tell us that they’re a web design agency, they only see questions relevant to web design projects. This shortens the form experience, reduces fatigue and increases the relevance of every form experience.
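
Here’s a minimal sketch of how conditional logic like this can be modelled – the field names and questions are hypothetical, not our production setup. Each step declares a condition over the answers collected so far, and steps whose condition fails are simply skipped:

```typescript
// Minimal sketch of conditional logic in a multi-step form: each step
// declares a condition over the answers so far, and irrelevant steps are
// skipped. Field names and questions are hypothetical.
type Answers = Record<string, string>;

interface FormStep {
  id: string;
  question: string;
  showIf?: (answers: Answers) => boolean; // omit to always show
}

const steps: FormStep[] = [
  { id: "businessType", question: "What kind of business are you?" },
  {
    id: "designBudget",
    question: "What's your web design budget?",
    // Only relevant to users who said they're a web design agency.
    showIf: (a) => a["businessType"] === "web-design-agency",
  },
  { id: "email", question: "Where should we send your quote?" },
];

// Returns the next visible step, skipping any whose condition fails.
function nextStep(currentIndex: number, answers: Answers): FormStep | null {
  for (let i = currentIndex + 1; i < steps.length; i++) {
    const step = steps[i];
    if (step.showIf === undefined || step.showIf(answers)) return step;
  }
  return null; // form complete
}

console.log(nextStep(0, { businessType: "ecommerce-store" })?.id); // "email"
```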

You can find out more about the science behind our high-converting multi-step forms here.

#11: Simplify onboarding

Web forms play a key role in every customer’s onboarding, and simplifying the sign-up process is a high-impact experiment worth embarking on.

Expanding beyond this concept, you should consider running a series of experiments and tests looking to simplify the entire onboarding process with the aim of reducing the number of leads that slip away during these final hurdles.

You’ve done all the hard work and it sucks to lose them at this point.


Not long ago, we published an article looking at 10 CRO Case Studies You Can Replicate to Increase Conversions and one of the case studies listed provides a perfect example of simplifying the onboarding process.

By using conversion optimisation platform Hotjar, DashThis turned 50% more free trial users into paying customers by identifying onboarding issues and simplifying the process for users.

You can read their case study here.

After using Hotjar for 10 months, the company increased onboarding completion by 50%, increased customer satisfaction by 140% and created new features based on customer feedback to improve its product – not a bad set of results, by any means.

“After having analysed where users encountered roadblocks during the onboarding process, DashThis’ UX team discovered that users simply didn’t know where to click in order to add integrations. Buttons were not displayed in bold enough colours and on some smaller screen resolutions, they were completely hidden at the bottom of the page… To fix these problems, the team modified the layout of the list, added a search bar, and incorporated pop-ups, videos, and different types of content to guide the user step-by-step. The buttons were also modified to be bigger and bolder, and to accommodate those with a smaller screen resolution, buttons were added higher in the page.”

Once again, we see a business running a broad marketing experiment to identify and address specific issues at one of the most important stages of the consumer journey.

This is an experiment that promises high returns on your testing investment and makes a genuine impact upon the customer experience.

#12: Test on human beings

Sometimes, there’s no substitute for testing on real-life human beings but we’re not talking about clinical trials or anything scary like that – just some good, old-fashioned user testing.

The kind of user testing Evernote ran to increase user retention by 15% across all devices (PDF).


Using a platform called UserTesting, Evernote connected with real-world users. While the company had run its own user testing programs in the past, it found them too costly and time-consuming – a common problem for companies trying to run marketing experiments and CRO campaigns.

Thankfully, there are plenty of platforms like UserTesting available these days that make it easier for companies to test their websites and software products on real-world users.

“In addition to hearing the study participants as they narrate through their actions and decisions, Evernote product managers and designers are able to watch where the testers’ hands are physically tapping, swiping, and even resting. This was especially helpful on Android since multiple devices run on the platform. Because the device type had an impact on the experience, the team needed to be able to identify and fix ergonomics issues before new products were released to the public.”

By identifying and addressing these issues, Evernote increased user retention by 15% – a metric that underpins the company’s entire business model of lead generation and upselling through prolonged engagement.

Lead generation experiment & test ideas

#13: Test automated webinars

Earlier, we talked about testing different content formats and this is something we take seriously at Venture Harbour. Over the years, we’ve heard time and again that B2B marketers prefer high-quality webinars over all other content formats and this is something our own testing supports.


The problem with webinar marketing is that it’s incredibly time-consuming and can be very expensive. Manually organising and recording events on a periodic basis requires a lot of resources, which doesn’t really fit with our company ethos of efficiency and automated strategies.

So we decided to experiment with methods of automating our own webinar strategy to tap into the effectiveness of this content format without excessive manual workload.

It was a bumpy process in the beginning, as we’d never automated a webinar strategy before and we didn’t have any real data to work with. However, we started running tests from day one and, within a year, we achieved an attendance rate of 75.62% – well above the industry average of 46%.

The best part of this strategy is that it’s fully automated so, once you’ve got it up and running, it does all of the hard work for you.

#14: Find a way to offer free value

This strategy predates digital marketing and it’s so common that it borders on cliche but it remains one of the most effective tactics. The key is finding a way to offer perceived value that’s relatively low-cost to your brand but acts as a gateway to something of much higher value.


My favourite example of this is software companies creating free tools and using them to reel in potential customers. Ahrefs does this with its free Backlink Checker that allows anyone to type in a domain or URL and receive a backlink report.

The report in itself is genuinely useful for analysing a single domain or URL but you quickly realise it’s worth looking at the paid version of Ahrefs to get full access to its data and run multiple reports.

The free tool shows you what the platform is capable of and then leaves you craving the full features. Suddenly, that monthly price tag doesn’t seem so unreasonable and signing up for a free trial is only a few clicks away.


We see a similar approach from Crazy Egg, which offers a free heatmap to reel in the leads. Again, you can type in your website URL for a free report. However, the company takes a slightly more aggressive approach than Ahrefs and asks users to create a free account before receiving the report.


We’ve used the same strategy ourselves at Venture Harbour with free tools, such as Marketing Automation Insider, which marketers and business owners can use to find the best marketing software for their needs.

More recently, we’ve published a marketing ROI calculator that you can use and embed on your website.

#15: Test new ad networks

For years, the paid advertising landscape has been dominated by Google and Facebook but things are slowly changing. The rise of social advertising helped initiate change but growth has stalled in that area.

TikTok is grabbing a lot of headlines as the latest social trend and a potential future advertising giant, but we’ve seen flash-in-the-pan networks crash and burn countless times before.

We’re seeing more robust developments in the retail advertising space where Google is fighting it out with the likes of Amazon and eBay – two giant names without a long history in paid advertising.


Of course, Amazon and eBay are appealing to retailers and e-commerce entrepreneurs but the platforms are pioneering concepts for industries beyond the limits of online retail.

For example, B2B marketers should pay attention to the new advertising network available on eBay, which allows you to target eBay sellers directly – from major retail brands to solopreneurs selling on the platform.

Select your targeting options and you can reach eBay sellers to promote products and services that will help them grow their business – anything from website builders and accounting software to insurance and delivery services.

With competition increasing in the paid advertising space, marketers need to keep up with developments and be ready to test new ad networks as they emerge.

#16: Experiment with personalisation

Personalisation has been one of the biggest trends in marketing for half a decade or so but it doesn’t come easily. Delivering personalised experiences can be especially tricky in the age of GDPR and increased awareness about data privacy but plenty of brands are making it work.

If you’re struggling with data regulations, you should take a look at our 10 GDPR-Friendly Ways to Personalise Your Sales Funnel article for some ideas.


One of the brands we look at in that article is Stitch Fix, which has created an entirely personalised experience across the customer journey. The great thing about this approach is that the experience itself justifies the use of user data and customers understand that the more data they provide, the better their experience becomes.

Studies have shown that users are willing to exchange data for the right incentive (it comes back to perceived value). The challenge is designing GDPR-compliant consent in a way that doesn’t cripple your conversion rates.

Here’s an article that might help:

  • 5 Ways to Get Form Consent Without Killing Conversions

Lead nurturing & customer retention experiment ideas

#17: Optimise your pricing pages

Going back to the principle of optimising elements that have the greatest impact, your pricing pages certainly fit into this category. This is the page that’s tasked with convincing users that your product is worth the asking price and alleviating any remaining purchase anxiety.

So, of course, making it clear which features users get in return for their money is crucial and it often helps to offer some kind of free plan or trial to get them on board. Money-back guarantees and other financial reassurances can also help them take the plunge, especially if you create a sense of them having nothing to lose.


On a more psychological level, one of the most common tricks you’ll see on pricing pages is the use of anchoring – one of the cognitive biases we discussed earlier. Place the more expensive plans at the left or top of the screen and users instinctively use them as a gauge for the prices that follow, which all seem very reasonable in comparison.

Lead with the $0 free plan and everything that follows seems expensive.


On the Leadformly pricing page, we’re currently testing an interactive pricing page where users select how many leads they generate on a monthly basis to determine the price.

Our approach is to show potential customers how much they’re paying for each individual lead, demonstrating that Leadformly more than pays for itself.
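
As a hypothetical sketch of the mechanics (the tiers and prices below are invented, not Leadformly’s real pricing), the widget simply maps the visitor’s monthly lead volume to a tier and surfaces the resulting cost per lead:

```typescript
// Hypothetical sketch of an interactive pricing widget: map the visitor's
// monthly lead volume to a tier and surface the cost per lead. The tiers
// and prices are invented for illustration.
interface Tier {
  maxLeads: number;
  monthlyPrice: number;
}

const tiers: Tier[] = [
  { maxLeads: 250, monthlyPrice: 49 },
  { maxLeads: 1000, monthlyPrice: 99 },
  { maxLeads: 5000, monthlyPrice: 249 },
];

function quote(monthlyLeads: number): { price: number; costPerLead: number } {
  const tier =
    tiers.find((t) => monthlyLeads <= t.maxLeads) ?? tiers[tiers.length - 1];
  return {
    price: tier.monthlyPrice,
    costPerLead: tier.monthlyPrice / monthlyLeads,
  };
}

const { price, costPerLead } = quote(800);
// Framing the monthly fee as pennies per lead is the whole point.
console.log(`£${price}/mo works out to £${costPerLead.toFixed(2)} per lead`);
```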


We also combine some classic reassurances and guarantees with a free trial to reduce purchase anxiety and create that sense of having nothing to lose, as mentioned above.

#18: Increase customisability / personalised experiences

We talked about personalisation in the previous section and one of the most effective strategies when it comes to customer retention is encouraging customisation.

This is particularly true for software products and account-based services where individualism is an asset. My favourite example of this is Spotify, which encourages you to create playlists of your favourite songs while its AI algorithms suggest new types of music you might enjoy, based on your listening habits.


By the time you’ve created a few playlists, you’re locked into the platform because you feel like you’re going to lose the time it took to create them (sunk cost bias) and those recommendations just keep on getting better.

The same principle can apply to any software product – all you need to do is build some kind of database that individual users don’t want to lose. It could be analytics reports, performance data, precious memories or anything of perceived value.

This can work for account-based retailers, too, whether you’re selling online or offline. Personalised offers, product recommendations, loyalty rewards and anything else you can do to individualise the customer journey are all effective retention tools.

#19: Test customer retention campaigns

Speaking of which, another high-impact priority for your marketing experiments is always going to be customer retention campaigns. The experiment is to find what works for your customers and keeps them coming back for more.


Email marketing is generally the easiest strategy to automate and this is the perfect platform for a low-input, high-output retention strategy.

#20: Speed up customer service

Customer service is another key component of retention. You have to keep customers happy if you want them to come back for more, even if the initial experience isn’t quite what they expected.

Studies show that customers are generally understanding about reasonable issues if they’re dealt with in a timely manner.

Speed matters when it comes to dealing with complaints and this is one situation where automated emails can be frustrating. Chatbots and live chat widgets are perfect for providing an instant response to basic customer problems, buying you essential time to deal with tickets that need handling by human support members.

By automating instant responses, the frustration of one-way email exchanges disappears and you can use bots as an interface for prioritising cases that need the most attention.

#21: Automate re-engagement email campaigns

A common problem you’ll experience with customer retention is that a certain percentage simply stop engaging with your brand. Maybe they no longer use your software product or they’re not visiting your website as often as they used to, even if they haven’t cancelled their subscription or account.

Technically, you’ve still got this customer on board but they’re inactive and in danger of churning at some point.


This is where re-engagement campaigns are crucial as a strategy for reigniting the flame with your customers. You can automate these campaigns, too, so that they trigger after a certain period of inactivity – Grammarly, for example, sends one to users who haven’t used its free app recently.

You don’t want to be too pushy with re-engagement campaigns but this is the magic of testing: you can establish the perfect time frames for sending out re-engagement emails to give your customers a gentle nudge without annoying them enough to unsubscribe from your email list.
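
A minimal sketch of the trigger itself might look like this (the field names are hypothetical). Comparing different inactivity windows against reactivation and unsubscribe rates is exactly the experiment described above:

```typescript
// Minimal sketch of an inactivity trigger (field names are hypothetical):
// pick out users who have been inactive for N days and haven't already
// been nudged within that window, then queue a re-engagement email.
interface AppUser {
  email: string;
  lastActiveAt: Date;
  nudgedAt?: Date; // when we last sent a re-engagement email, if ever
}

function usersToReengage(
  users: AppUser[],
  inactiveDays: number,
  now: Date = new Date()
): AppUser[] {
  const cutoff = new Date(now.getTime() - inactiveDays * 24 * 60 * 60 * 1000);
  return users.filter(
    (u) => u.lastActiveAt < cutoff && (!u.nudgedAt || u.nudgedAt < cutoff)
  );
}

// Run daily; the experiment is comparing windows, e.g. 14 vs. 30 days.
const users: AppUser[] = [
  { email: "dormant@example.com", lastActiveAt: new Date("2020-01-05") },
  { email: "active@example.com", lastActiveAt: new Date() },
];
console.log(usersToReengage(users, 30).map((u) => u.email));
```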

Test & learn your way to success

The key theme throughout this article has been choosing experiments that are likely to yield high-impact results. If there’s one takeaway from everything we’ve looked at, make sure it’s this concept of high-impact testing.

Hopefully, the examples we’ve covered give you plenty of ideas and help you develop a sense of where the greatest opportunities are.

Just remember that marketing experimentation is an ongoing process and you should learn from the results, even if you don’t get the expected outcome. Invest some time in getting the best insights from your results and truly understanding what they mean so you can apply these findings to future marketing and business decisions.


Aaron Brooks is a copywriter & digital strategist specialising in helping agencies & software companies find their voice in a crowded space.


How to Conduct a Successful Marketing Experiment

Marketing experiments allow you to test different marketing strategies to see which ones resonate with your customers and increase engagement.


Imagine a small business concurrently running two sales. One sale trumpets: “Buy one, get one 50% off!” A second sale offers, “Get 25% off when you buy two!” These two sales represent the same amount of savings. A customer purchasing two units would end up saving 25% on each unit. No matter which offer a potential customer chooses, the business’s bottom line remains the same.

However, when the business tests these two promotions, it notes a trend. The first sales pitch, which mentions getting a product for 50% off, is drawing more customers than the second pitch, which mentions 25% off two products. The company chooses to ditch the second marketing campaign and go all-in on the first one. Effectively, the company has just run a marketing experiment.

What are marketing experiments?

Marketing experiments are a form of market research whereby businesses test different material and/or means of communication (such as sales pitches, calls to action, social media posts, and email marketing campaigns) to see which ones yield the best results. 

For instance, a company might run two concurrent ad campaigns: one on Instagram and one on TikTok. The creative is the same, but the platforms are different. The company wants to gain insight into which platform drives more traffic to its website. If the company sees a statistically significant difference, it may choose to commit more budget to the platform that produced the higher yield.

This marketing experiment, where two outreach approaches are compared side by side, combines active marketing campaigns with market research. Insights gleaned from the research will inform marketing strategies for future campaigns.

How to conduct a successful marketing experiment

A typical marketing experiment involves five steps that span planning, execution, and analysis. Here are the steps, as they apply to a hypothetical shoe company running an A/B email marketing campaign. A/B tests are where two (or more) different versions of a message or UI element (for example, the placement of an “add to cart” button on a page, or using “buy now” instead of “add to cart”) are shown to two randomized groups of users to see which version performs better.

  • Determine what you want to learn. Our hypothetical shoe company has decided it wants to test different email marketing messages about a promotion to see which version has the highest open rate. 
  • Establish your experiment parameters. Now the shoe company’s marketing team gets into specifics. It will conduct an A/B marketing test based on email subject lines. Half of its mailing list will get a message with a subject line that reads “Surprise! Our Biggest Sale of the Year!” The other half will get a message with the subject line “Exclusive Limited Sale: 48 Hours Only!” Recipients will first be divided into two groups – those who have bought from the brand before, and those who haven’t – and half of each group will receive one message; half will receive the other. The body of the email will be identical for everyone, using both the “biggest” and “limited time only” language, so the subject line is the only variable. 
  • Deploy the experiment. Our shoe brand sends these emails out the day the sale launches. 
  • Collect data. As the marketing experiment progresses, the marketing team monitors the open rates among the recipient groups: previous buyers who received the “biggest” email, previous buyers who received the “limited” email, prospective customers who received the “biggest” email, and prospective customers who received the “limited” email. 
  • Analyze the results. Once the sale concludes, the brand evaluates its data and learns that customers who were informed about an expiring sale (“48 Hours Only!”) opened the email at a 7% higher rate than those who received the “biggest” message. The company decides this is a statistically significant result (see the sketch below for one way to check that), and it resolves to mention “expiring” sales more often in its future marketing campaigns.
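
To make “statistically significant” less hand-wavy, here’s a sketch of how a brand might sanity-check a result like this with a standard two-proportion z-test. The send and open counts below are invented for illustration:

```typescript
// Sketch of a standard two-proportion z-test on open rates. The counts
// below are invented for illustration; plug in your own campaign data.
function twoProportionZ(
  opensA: number, sentA: number,
  opensB: number, sentB: number
): number {
  const pA = opensA / sentA;
  const pB = opensB / sentB;
  const pooled = (opensA + opensB) / (sentA + sentB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / sentA + 1 / sentB));
  return (pB - pA) / se; // |z| > 1.96 ≈ significant at the 95% level
}

// Hypothetical: "Biggest Sale" vs. "48 Hours Only" subject lines.
const z = twoProportionZ(2100, 10000, 2800, 10000);
console.log(z.toFixed(1)); // ≈ 11.5 – far beyond the 1.96 threshold
```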

Common marketing experiments

Companies use marketing experiments to study nearly all components of their marketing strategies. These experiments include:

  • Ad copy. Companies invested in digital marketing may run marketing experiments with different advertising content. They experiment with headlines and calls to action to see what drives more clicks.
  • Website landing pages. Companies sometimes show different website landing pages to different customers, studying whether landing page content can affect customer conversion rates.
  • Social media influencers. The role of influencers has surged on social media. Companies can conduct a simple marketing test by paying multiple influencers to offer a referral code or exclusive link to their audiences. The influencer who drives more traffic to the website (and more sales) can prove they are worthy of an ongoing partnership with the company.
  • Automation as part of the customer experience. Many of today’s online sales platforms offer automated chatbots that can handle basic customer requests. Companies can conduct a marketing experiment by letting some website visitors interact with these chatbots while other website visitors interact with human representatives. They can analyze if using cost-saving automation impacts sales outcomes.
  • Email subject lines and preview text. A basic subject line test focuses on email open rates: companies want to see what types of subject lines inspire more people to open messages. These tests can go further by linking subject lines and preview text to click-through rates and, ultimately, conversion rates.
  • User experience. Ecommerce vendors often experiment with the user experience (UX) of the checkout flow on their website. Shopify offers many resources in this department, handling the checkout UX so that small business owners can focus on other areas of their company.

Marketing Experiment FAQ

What are the two types of experiments used in market research?

  • Controlled experiments: Experiments in which the researcher has full control over the assignment of participants to the different conditions.
  • Observational experiments: Experiments in which the researcher observes the behavior of participants without actively manipulating the conditions of the experiment.

What is a business experiment?

What is a sales experiment?

Why should you run marketing experiments before investing in big changes to your strategy or approach?


The 5 Stages of Conducting a Marketing Experiment, Explained

August 9, 2021

By Marissa Litty-McGill

The old adage goes, “If you always do what you always did, then you will always get what you always got.” While consistency is valuable, marketers strive for growth in strategy and results. 

To avoid stagnation, we take calculated risks and try new tactics to attract, retain, and convert our audiences. Marketing experiments are our way of measuring effectiveness and deciding whether to add the tactic to our ever-growing toolbox. 

Now let’s get you started.

Marketing Experiment Basics

What is a marketing experiment, and why do we use them? A marketing experiment is a research method that helps marketers test strategic tactics for future campaigns. As most experiments do, it starts with an educated guess and research, and ultimately supplies information on what does and doesn’t resonate with target audiences. 

Now the real question: Should you be running one? And the answer is Yes!  

Don’t be fooled, though—marketing experiments can be done wrong. Read on for the five stages of a solid marketing experiment. 


Stage 1: Hypothesize 

Just like in the scientific method, a hypothesis is an educated guess. You don’t have to be an expert to hypothesize, but it helps to be knowledgeable about the brand, the audience, the industry, or even just marketing as a practice. 

Create your hypothesis by getting curious. Ask questions like:

  • Where do I want to see growth?
  • What isn’t working well right now?
  • What do I want to influence or see my audience do?
  • What are my marketing goals?

These will help you see growth opportunities, pinpoint potential problems, and think about your current strategy with a more critical eye. 

Example: You have a goal to increase conversions next quarter. You noticed that one landing page underperforms compared to others on your site, and you believe it’s because the form is too low and requires too much scrolling to see. Your hypothesis is that moving the form above the fold will increase conversions. 

Stage 2: Research 

With a hypothesis in hand, you should have a few ideas of where to gather information about your experiment. Often in your research phase, you will run into others who have tested your theory, or ideas about best practices. Keep these in mind when running your own experiment, but remember: every audience, brand, and industry is different. Their findings might point to possible outcomes and things to consider but may not produce the same results. 

Additionally, while researching you might notice your hypothesis has shifted or changed. That is okay! Make adjustments to your experiment to suit your new hypothesis and ensure you have enough research to feel confident with your new direction.

Stage 3: Establish KPIs and Metrics 

A key performance indicator (KPI) is a performance measurement. To measure success or failure, you will need something concrete to compare and ensure you are looking at facts and not relying on feelings or instinct. During the research stage, you should be able to see potential areas of focus that will tell the story of your experiment. 

Establishing a baseline from past tactics will give you a unique look into your actual audience and brand. Gathering this information includes looking at analytics for your site, email, social media, and so forth. If you are just starting your marketing journey or don’t have a lot of historical data, marketing experiments are still possible! Look at best practices specific to your industry and tactics where possible. 

Your KPIs and metrics should also come with a pre-defined end point for the experiment. An email might run its course in a matter of days, but a landing page might need weeks before you can confidently call your data collection complete. 

Stage 4: Execute the Experiment

Before you begin, ensure there is only one variable. In the previous example, we established the hypothesis that the placement of a landing page’s form is affecting conversions negatively. The test was to move the form up higher on the page to see if that increased conversion. 

In that experiment, we would only move the form. If you also changed the copy, button color, or form appearance and saw increased conversions, you could no longer tell whether the form’s location is what really improved performance. Even worse, if conversions decreased, you wouldn’t know which element to fix, and you would ultimately need to start from the beginning with no new information to guide you.

When you’re ready, run the test and stick to your pre-established testing period timeline. You can check progress along the way because chances are some patterns will begin emerging, but be sure to give the experiment the time it needs to really reveal results. It can be tempting to want to change things while the experiment is in progress, but that would also add more variables to your experiment.

Stage 5: Analyze the Results

Once your marketing experiment is over, review the results against the KPIs you established. Some tests will highlight whether your hypothesis was correct almost immediately while others might reveal unexpected results that require further testing. 

Where You Can Start

If you’re itching to start your own marketing experiment but aren’t entirely sure where to start, we’ve got you covered. Below is a short list of simple marketing tests you can run with your own marketing strategy that can actually result in new tactics for your campaigns. 

A/B Testing Email Subject Lines

Try running a test with and without emojis, or compare a statement with a question. This is probably the best place to start your marketing experiment journey.

Animation vs. Static Imagery

Incorporate gifs, videos, or swipe images in your social ads, and see how they perform against static images. If you don’t have the budget, keep it even simpler by comparing graphic elements to images with people.

Call to Action (CTA) and Pop-Up Placement and Design

Do you always place CTAs in one place or default to a specific color for pop-ups? Shift the location or design, and see if the change gets your visitors’ attention. 

Change Send or Share Times

If your newsletter always goes out at 7 a.m. on Monday, shift the send time or day to see the effects on engagement. You might find you have an entirely different audience viewing your work or, if there is a decrease, that you’ve got more brand loyalty than you knew!

Discover more ideas on inbound marketing tests you should be running!

Push Strategic Boundaries and See What Your Brand Accomplishes

It’s easy to find yourself doing something because it’s always been done that way, but the world of marketing moves fast, and keeping up means not resting on your laurels. Now that you understand the five stages of a marketing experiment, you can more easily push your own strategic boundaries. Not all experiments will be winners, but take a few small steps and see just what your brand can accomplish!

For more insights on how to kick off a customer-winning marketing strategy, check out the Inbound Marketing Playbook for 2020 and Beyond from SmartBug®. It’s filled with advice and actionable steps for businesses across industries to meet their audiences where they are, no matter the budget.


About the author

Marissa Litty-McGill is a HubSpot Technologist whose 8+ years of marketing experience span both B2B and B2C industries. Though she received her Communications degree in Journalism & PR/Advertising, she fell for marketing before even walking across the University of Nebraska at Omaha's graduation stage. When she isn't geeking out over inbound, she's exploring the world with her husband and dog. Read more articles by Marissa Litty-McGill.


How to run marketing experiments: practical lessons from four marketing leaders

14-MINUTE READ | By Pinja Virtanen

[ Updated Oct 8, 2020 ]

“Marketing is one big experiment. Some experiments just have a longer shelf life than others.”

That’s the first thing Andy Culligan, CMO of Leadfeeder, said to me when I asked him about experimentation in marketing.

And to help you understand how to run those experiments, I interviewed Andy and three other seasoned marketing leaders. 

After reading this article, you’ll know exactly:

  • Why you absolutely should run experiments in marketing
  • What’s stopping marketers from experimenting — and how to overcome those challenges
  • What all good marketing experiments have in common
  • Which 7 steps you should include in your experimentation process
  • Examples of 4 successful experiments

Ready? Let’s go!

Should more marketers run experiments? And why?

It’ll probably come as no surprise to you that when asked whether more marketers should run experiments, all four of our experts came back with some variation of “hell yes”.

Mari Luukkainen, Head of Growth at Icebreaker.vc, explains that since she has a background in affiliate marketing, data-driven iteration with the goal of business growth is simply the only type of marketing that makes sense to her.

She says, “To figure out what works and what doesn’t to grow your business, you need experimentation. There’s no point for a marketing team or even a business function that isn’t running experiments to find a better, faster, or a more optimized way to grow their area of the business.”

Both Michael Hanson, Founder & Sales Consultant at Growth Genie, and Andy from Leadfeeder bank on experimentation because the market is a moving target.

Michael explains, “If you don’t run tests in marketing, you’re always going to fail. What worked last year, won’t necessarily work so well this year. Take organic Facebook for example. 10 years ago, you could get good reach just by posting from your company account. But it doesn’t work like that anymore. And so if you don’t constantly measure performance and try to improve, you’re definitely going to fail.”

Andy adds, “Everything you do in marketing is a test anyway. Some tests work longer than others but the point is, you need to help your marketing evolve.”

And finally, Mikko Piippo, Founder & Digital Analytics & Optimization Consultant at Hopkins, argues that experiments are great for reducing bias.

Mikko says, “Everyone has an opinion, and sometimes expert opinion is not worth much more than tossing a coin. Experiments are the best way to systematically create new knowledge, to learn more about your audience and your customers. They force you to question your own ideas, beliefs, and the best practices you’ve read so much about. This can be somewhat uncomfortable if the data doesn’t support your own ideas.”

And now that the jury has reached a unanimous verdict, let’s move on to the next big question, i.e. what’s stopping marketers from experimenting.

So why isn’t everyone and their grandma already running experiments?

According to Mari, the biggest problem isn’t that marketers don’t want to experiment, it’s that they’re working towards the wrong goals, lacking the routine, and/or afraid of failing.

She explains, “Far too many marketing teams are still struggling to set goals that directly correlate with business performance. But as soon as you set goals that make sense for the business, you’ll start systematically working towards them. And that’s when you’ll need to start experimenting.”

Working with larger corporations, Mari has also noticed that sometimes the company culture works against a fundamental part of experimentation: failure.

Mari says, “The other big issue can be that marketers are so afraid of failing that they won’t feel comfortable trying anything new. In some corporations, failing means that you’ll get fired. That’s when there’s no incentive for marketers to run experiments.”

If you recognize any of these problems in your organization, Mari offers three alternative routes to overcoming them:

  • Get buy-in for experimentation from as high up the organizational ladder as possible (investors, the board of directors, or the management team)
  • Start small and prove the value of experimentation with small wins (the problem with this approach is that it can be painfully slow)
  • Replace your team with people who know exactly how to run experiments (this is quick but can be a painful process)

And now, with any possible obstacles out of the way, let’s look at what a good experiment looks like.

The 3 things all good marketing experiments have in common

According to our four experts, all marketing experiments should have these three things in common.

1. They’re systematic and measured with data

The first rule of experimentation is that you have to stick to a process and make sure to use data to determine how successful the experiment was.

Mari says, “All good experiments are systematic and measured with data.”

Mikko follows with, “Good experiments follow a plan or a process. In a bad experiment, for example, a marketer would set a goal only after seeing the metrics.”

To summarize, a systematic process and a healthy relationship with data are what ultimately make or break an experiment.

Like Michael says, “If you’re going to test something, you need to measure its success. Otherwise it’s not really an experiment, is it?”

2. They’re big enough (but not too big)

The second, perhaps more controversial, requirement for a good experiment is that it’s big enough. In other words, yes, you should absolutely forget about the button-color A/B tests of yesteryear.

Mikko says, “Be bold. Test complete landing page redesigns instead of button colors, experiment with product pricing instead of call-to-action microcopy, experiment with different automated advertising strategies instead of tweaking single ads, experiment with budget allocation over different advertising platforms instead of micromanaging individual platforms.”

The problem with small tests is that even when they’re successful, they’ll yield small results.

Andy explains, “If you only test one small thing at a time, you’re never going to get big enough of an uplift. So if you do run tests, you need to try something completely different. Throw the existing landing page out the window and try something new instead. If the new version works better, then use that as your benchmark going forward.”

Michael says, “Obviously you don’t want to change the company name or logo every 5 minutes. But beyond that, you have to be flexible with the scope of the experiment.” 

He continues, “I had an ex-colleague who had all these wacky ideas and when we tried them, they always came through. The point is, even though ideally you’d want to test one variable at a time, you also have to realize that the impact of a small change will be small. And if you want quick results, you have to think bigger. So I’m all for experimenting with wacky ideas.”

And with that, the verdict is in: think big when you’re experimenting.

3. They’re run as split tests

The final precondition for a good experiment is that it’s run as a split test. This means you’re testing one version against another, with each shown to a separate slice of your audience.

Mikko explains, “Good marketing tests are usually split tests. You split the audience (website visitors, advertising audience) into two or more groups. Then you offer different treatments to different groups — and you keep some percentage of the audience separately as a control group. This way, you can really compare the effectiveness of different treatments.”

Mikko also emphasizes that even if you don’t have a ton of media budget or website traffic, you can still run experiments. “With low website traffic, the methods just aren’t as scientific as with high traffic.” The point is: don’t let constraints like low traffic stop you from experimenting.
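To make the mechanics concrete, here’s a minimal sketch of a weighted split, ours rather than Mikko’s: two treatment groups plus a 10% holdout control. The group names and weights are illustrative assumptions.

```python
import random

# Illustrative split: two treatments plus a 10% holdout control group.
# (Group names and weights are assumptions for this sketch.)
GROUPS = ["variant_a", "variant_b", "control"]
WEIGHTS = [0.45, 0.45, 0.10]

def assign_group() -> str:
    """Randomly assign one visitor to a treatment or the control group."""
    return random.choices(GROUPS, weights=WEIGHTS, k=1)[0]

# Sanity check: assignment proportions should roughly match the weights.
counts = {g: 0 for g in GROUPS}
for _ in range(10_000):
    counts[assign_group()] += 1
print(counts)  # e.g. {'variant_a': 4506, 'variant_b': 4497, 'control': 997}
```

The control group’s experience never changes, which is what lets you compare each treatment against a true baseline.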

Bonus: They may or may not have a hypothesis — depending on who you ask

As a bonus criterion for running good experiments, let’s look at the one word that always comes up when we’re talking about experiments: hypothesis.

So, do you need one?

Mikko and his team at Hopkins, for one, are strong believers in setting a hypothesis before running an experiment. Mikko says, “Good marketing experiments start from a hypothesis you try to validate or refute. Actually, without a hypothesis I wouldn’t even call something an experiment. For example, it’s easy to add a couple of more ad versions to an ad set or ad group. Most marketers don’t follow any logic here, they just add some random ad versions. Doing this might improve the results, but they wouldn’t know why.”

He continues, “A hypothesis forces you to think about the experiment: Why do I expect something to change for the better if I change something else? Why would people behave differently?”

Andy, on the other hand, would go easy on the hypothesis setting. He explains, “In my opinion, really analytical marketers like to make experimentation into rocket science and it doesn’t have to be that. I’m data-driven but purely from a revenue perspective. I don’t tend to get too deep into the grass, the weeds, and the bushes. You’re only going to end up in a rabbit hole. If it’s working, it’s working — and that’s all I care about.”

And that’s why, rather than spending a lot of time forming hypotheses, Andy likes to tie the Leadfeeder marketing team’s experiments into their quarterly OKRs. 

For example, if a key result is to increase Leadfeeder’s tracker install rate by 10%, the team will simply come up with a number of changes to get there.

To conclude, whether or not you should set a hypothesis for your experiment depends on this question: will you benefit from knowing the contribution of each individual change?

If the answer is no, you’re in team Andy.

And if the answer is yes, well… Welcome to team Mikko.

And now that we got that out of our systems, let’s look at the steps you need to take to actually run an experiment.

How to run a marketing experiment: step-by-step instructions

Even though there are clearly some things our expert panelists disagree about, the actual experimentation process all four of them follow is pretty uniform.

Step 1: Start by setting (or checking) your goal

The very first step in the experimentation process comes down to understanding what KPI you’re trying to influence. 

For example, if, like Andy’s team at Leadfeeder, you’re using OKRs, you can use your key results as the goals for your experiments.

So for him, a goal would look something like “increase our tracker installation rate by 10% within the next 3 months.”

Like Andy, you’ll want to make your goal unambiguous and give it a clear timeframe.

Step 2: Analyze historical data

Once you understand what needle you’re trying to move, it’s time to analyze your existing data. Mari suggests that at this point, you “analyze where you are and how you got there”. 

Similarly, Mikko says that for his team, this step involves “looking at existing data from our ad platforms and web analytics tools.”

Step 3: Come up with ideas

Equipped with your analysis of historical performance, you can probably list a dozen (or more) things that might move the metric you’re trying to influence.

At this point, your only job is to write those ideas down.

Step 4: Prioritize your ideas

Mari suggests that you prioritize the ideas you came up with based on “resource efficiency, success probability, and scalability.”

Alternatively, you can use a scoring system like ICE, which stands for impact, confidence, and ease.

After this step, you should have a clear idea of which experiments to go after.
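If you’d like to make the scoring mechanical, here’s a minimal sketch of ICE prioritization. The ideas and 1-10 scores are invented, and multiplying impact, confidence, and ease together is one common convention (averaging the three is another).

```python
# A minimal ICE-scoring sketch. Ideas and 1-10 scores are invented;
# multiplying impact * confidence * ease is one common convention
# (some teams average the three numbers instead).
ideas = [
    {"idea": "Redesign the pricing page", "impact": 8, "confidence": 5, "ease": 4},
    {"idea": "Show the price in ads",     "impact": 7, "confidence": 6, "ease": 9},
    {"idea": "Launch a webinar series",   "impact": 6, "confidence": 7, "ease": 5},
]

for idea in ideas:
    idea["ice"] = idea["impact"] * idea["confidence"] * idea["ease"]

# Highest score first: that's the experiment to run next.
for idea in sorted(ideas, key=lambda i: i["ice"], reverse=True):
    print(f"{idea['ice']:>4}  {idea['idea']}")
```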

Step 5: Run the experiment(s)

This one’s a bit of a no-brainer. Now that you know the expected impact of your experiments, it’s time to run the one(s) you think will have the biggest impact.

But can you run multiple experiments at once? Yes and no.

I’ll refer you back to the discussion we had earlier about hypotheses: if you absolutely need to know which tactic was the most successful, you should isolate your experiments and run one at a time.

If, on the other hand, you’re just trying to move the needle as quickly as possible and don’t really care about figuring out correlation and causation, feel free to run multiple experiments at the same time.

Step 6: Measure success

Whether it’s while your experiment is still running or after it has ended, it’s time to look at data. Did the experiment drive the expected results?

Is there anything you can do to optimize it (if it’s still running)?

Psst! This is where Supermetrics for Google Sheets comes in handy: you can automate data refreshes and email alerts to cut down on the time you’d otherwise spend manually collecting data from multiple platforms.

Step 7: Rinse and repeat

Depending on the scope and results of your experiment, you might want to start from the very beginning, or simply go back to Step 4 and choose new experiments to run off the back of your results.

And finally, if you need any inspiration for your upcoming experiments, keep reading. Because in the next section Mari, Michael, and Andy will spill awesome examples of successful experiments they’ve run.

4 examples of successful marketing experiments

To get you in the mood for planning your own experiments, here are quick examples from our experts.

Freska: experimenting both offline and online

When I asked Mari about the most memorable experiments she ran at Freska, a modern home cleaning service, she came back with two examples.

Mari starts with an offline experiment: “At Freska our hypothesis was that people who have expensive hobbies that consume lots of time would buy home cleaning services. We tested this by going to a boat expo instead of the usual “baby expos” cleaning companies go to. And we ended up getting surprisingly good results.”

The second experiment that has stayed with her was of the online variety. 

Mari says, “At Freska, our original hypothesis was that people are afraid of seeing the price of home cleaning services in ads because it feels kind of expensive. But when we conducted a deeper analysis with offline surveys, people actually thought that home cleaning services are way more expensive (over 1000 €/month) and our price is actually a pleasant surprise (150 €/month for a biweekly 2-3-room apartment). So we tested showing the price in ads and it actually increased conversions.”

Growth Genie: building the perfect outbound cadence one iteration at a time

Michael’s favorite experiment to date is of a more persistent kind. Over time, his team has perfected the art and science of cold outreach, one iteration at a time.

Michael says, “It’s all about cadences here at Growth Genie; how many calls, how many emails, and how many LinkedIn messages does it take — and in which order — to book a call with a prospect who’s never heard of you before.”

He continues, “We’ve learned what works simply by experimenting and iterating. For example, instead of asking for a meeting in the first few touchpoints, we quickly noticed that we can get much better results by giving away these valuable content snippets and only then asking for a meeting.”

Psst! If you’re interested in absorbing the TL;DR version of Michael’s learnings, check out his recent LinkedIn post.

And if you want to swipe Growth Genie’s “ultimate outbound sales cadence”, you can access it here.

Leadfeeder: pivoting lead generation for the “new normal”

When COVID-19 hit Europe and the US in March 2020, the Leadfeeder team needed to quickly cut their marketing budget by a third to increase runway.

For Andy, that meant figuring out a channel that would quickly generate pipeline without taking a ton of time or budget upfront.

Andy says, “We started pushing up webinars and those exploded. We quickly got 11,000 leads by only spending something like $1,000. But as everyone started doing webinars, the numbers began to drop. And that’s why we decided to start recycling the webinars into short 10-15 minute videos, rebranded as “The B2B Rebellion” series on YouTube.”

He continues, “The webinars were a test and because they were working, we doubled down, and eventually moved into the video concept. The video concept has been working nicely, and now we’re experimenting with new speakers and distribution channels.”

Overall, this constantly evolving experiment has allowed the Leadfeeder team to maintain their pre-pandemic lead volume at a third of the cost.

Andy says, “Without constant experimentation, you’re not going to win and your marketing will go stale.”

Over to you!

What are some of the most successful (or surprising) marketing experiments you’ve run? 

Let me know on Twitter or LinkedIn!



26 Marketing Experiments That Brands Can Use To Unlock New Insights

Ross Simmonds • Apr 4, 2019

Marketing experiments differentiate innovative brands from the bland ones.

What is a Marketing Experiment?

A marketing experiment is a strategic approach used by businesses to test different marketing tactics, strategies, and activities in order to discover the most effective ways to reach their target audience, engage potential customers, and improve overall marketing performance. This process involves setting up controlled tests where various elements of marketing campaigns are modified – such as messaging, digital advertising platforms, content formats, or audience segments – to evaluate their impact on defined metrics like engagement rates, conversion rates, or return on investment (ROI).

By systematically analyzing the outcomes of these experiments, marketers can gather valuable insights that inform data-driven decisions, leading to optimized marketing strategies that drive better results. Essentially, marketing experimentation embodies the principle of learning through trial and error and is fundamental for brands aiming to stay ahead in the rapidly evolving digital landscape.

Experimentation is what leads to breakthroughs—allowing your company to stand out while your competitors blend in.

Yet marketers often make the mistake of assuming that experimenting is for things like email subject lines, button colors and display copy. And sure, those experiments are all fine and good, but if that’s all you can envision, you’re limiting yourself.

In this article, I’ve got a list of impactful marketing experiments you may have never even considered. Use these to ramp up your own experimentation and improve your campaigns.


Title Formats For Your Content

The titles of your landing pages, blog posts, resources, and other assets play a massive role in search engine optimization, from the position of your page in the SERP to the likelihood that searchers will click.

After Googling the phrase “marketing advice,” here are the titles you may see:

[Screenshot: Google search results for “marketing advice”]

If you already have a page ranking for this keyword and want to move up the ladder, experiment with your title. Try adding a date. Try adding the phrase “UPDATED”. Try adding “NEW”. Try using phrases that make the content seem more recent.

You may find that one of these variations increases your click-through rate and propels your page higher up the search results—maybe even to the #1 spot.

Emojis In Your Email Subject Lines

Rocket ships. Money bags. Smiley faces.

I’ve seen them all in my inbox.

As you evaluate your email marketing efforts, consider emojis. Yes, emojis.

Marketers debate whether brands should use emojis, but I’m a firm supporter—why not go for it if emojis increase your open rate or response rate?

Even if you’re skeptical, at least experiment with emojis and see how your audience responds. Experian examined email subject lines with emoji and found that including an emoji in the subject line led to a 56% higher open rate compared to text-based subject lines.

Here are the top emojis used in email subject lines:

[Image: top emojis used in email subject lines]

Related reading: Should B2B Brands Use Emojis In Their Content Marketing?

People In Your Live Chat

Studies have shown that people respond differently to women and men in live chats. Women are often met with crude comments and more trolling in comparison to men. When Julia Enthoven, the co-founder of Kapwing, experimented with different photos on their chatbot, the results were unfortunate. She tried 4 different images and tracked the trolling.

Here’s how it turned out:

[Image: results of Kapwing’s chat photo experiment]

Their informal study showed bias against women when it came to their name and photo. This is a sad reality… but it’s still a reality, and we have to recognize it as one of the many biases that influence our customers’ actions, as horrible as it may be. In fact, Julia wrapped up the blog post with this exact takeaway:

“If you’re a female with a chat box, you might want a pseudonym: Hiding behind Kapwing’s logo and company name on our messenger makes customer support easier and less discouraging. With the “Team Kapwing” name, people tend to assume that the person responding to them is a dude.”

It’s a shame. I 100% agree…

But Sarah Betts of Olark found a very similar situation when she ran an experiment using a male name (Samuel) for online support rather than her own. She found that her advice was taken more seriously by customers as Samuel, and her suggestions for fixing complicated technical issues were more likely to be accepted.

That said, when the data was complete and she ran the numbers, it turned out that Sarah was rated higher by visitors. On the flip side, Sarah frequently had to escalate cases to engineers while Samuel had almost no escalated chat cases. This is why it’s not a bad idea to experiment with your own live chat window. Can you test a different name? Can you use photos of people with different genders? Can you analyze the data surrounding your customer support team based on gender? Can you try giving everyone the logo as their profile picture? All of these different things are worth testing!

You might end up disappointed with society.

But you might also unlock a powerful insight that improves your live chat experience.

Onboarding Emails

A quality onboarding experience can be the difference between a customer who sticks around for years and a customer who churns in a month.

But the only way to really understand how your messaging affects new customers is through experimentation.

As new users sign up for your product, send half of them your existing onboarding email and the other half a completely different email. This means you’ll have to craft two different onboarding emails; look at examples online if you need inspiration. Run the test until you have enough data to determine which onboarding email worked best. If you notice that one segment is using more key features or sticking around longer after a free trial, use this info to make adjustments.

Even if the difference is negligible, you will have learned that your current email holds its own. So create another email and keep trying new onboarding content to see if things improve.

If you determine that your first email was and is the best welcome email you can send, congratulations—you’re a legend.

Sequences For Different Types Of Users

Not every prospect is the same. Not every customer is the same.

So why do we think it’s a good idea to interact the exact same way with every individual who signs up for our products or services?

Rather than trying to fit every customer into the same marketing sequence, develop an array of emails that target different customers (based on data) and create design experiences (landing pages or remarketing ads) that differ depending on their needs, preferred features, use cases, etc.

Experiment With New Content & Visuals

How many blog posts do you have on your site? Do you have some pages or articles that haven’t been doing so hot lately?

Give them a refresh.

Update these assets with new graphics, links, videos (generated with a text-to-video AI tool, as an example), and stats that make the content feel fresh and relevant.

You may even want to hire a designer—the sites below are great for finding designers (or templates, if a designer is not in the budget) that can give your tired assets a facelift:

[Image: sites for finding designers and templates]

A good designer can turn a tired, dated asset into something more engaging and on-brand.

All of this will make your content more appealing to the reader. The more appealing the content, the more time the reader will spend on your site.

The same rejuvenation efforts can be applied to the written content on your site. If it’s been a few years since you published that in-depth guide on your blog, rewriting the post for today’s audience could help the page rank higher and deliver more value to readers. You can even use a sentence-rewriting tool to freshen up your content and make it more engaging.

Some simple things to update:

  •       References to old content
  •       Links going out to 404 pages
  •       References to old dates
  •       Outdated stats
  •       Ineffective writing (e.g., bad ledes)

All of these things can have a negative impact on your content. But what if you have a lot of content on your site? Which pages should you update first? Look for…

  •       Content that has tons of links but isn’t generating much traffic
  •       Content that used to generate traffic but is quickly losing its SERP position
  •       Content that has high engagement but has never ranked for keywords

Revisit your assets to look for these red flags. You can make changes in-house or hire a content writer to give it a tuneup and apply SEO best practices.

Related reading: Safe Or Risky SEO: How Dangerous Is It REALLY To Change Your Article Dates?

Tip: A close companion to content is any tool that helps more people see it. For instance, if you run a small business, experiment with adding local business schema markup to help push your business higher in search results when locals are looking.
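To illustrate, here’s a minimal sketch that generates schema.org LocalBusiness markup as JSON-LD. The business details are placeholders, and the output would go inside a <script type="application/ld+json"> tag on your pages.

```python
import json

# Placeholder business details; swap in your own before publishing.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Dental Care",
    "telephone": "+1-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "addressRegion": "IL",
        "postalCode": "62701",
    },
    "openingHours": "Mo-Fr 09:00-17:00",
}

# Paste the printed JSON-LD into a <script type="application/ld+json"> tag.
print(json.dumps(local_business, indent=2))
```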

Experiment With Paid Media Opportunities

Sponsored podcasts. eSports sponsorships. Online communities. Sponsored posts. Twitter media. Influencer marketing.

All of these opportunities over-index for audience attention and content consumption yet under-index for paid media spend. This is where you might find a real opportunity that’s likely being overlooked by your competition.

Take podcasts, for example—over the last decade, the growth of podcasts in the US has been massive:

[Chart: podcast listener growth in the US over the last decade]

This growth is an opportunity for brands to experiment with a medium that has lots of engagement but is still relatively cheap because the market overlooks its potential.

It’s the same for influencer marketing, which is so widely viewed as a B2C opportunity that B2B brands are blind to the potential for their audience.


Funnels For Paid Media

Paid media efforts can burn through your budget very quickly and, if you’re not careful, result in very few leads and very little ROI.

Obviously that’s not good for business. (Maybe not good for anybody.)

One of the best ways to counter this common failure is to create experimental funnels that change for every ad set you create. Rather than launching one marketing funnel for a series of keywords, create funnels based on specific problems and let users go through each until you start getting data for key metrics like time to close, conversion rate, LTV, and CAC. Send the right people to the right content.

Experiment With Local Search Optimization

In addition to paid media, local optimization also presents a fair opportunity for you to experiment and build your brand presence in the local market.

Since paid media can also burn through your marketing budget rapidly, local optimization helps get free engagement for your brand and increases revenue over time. It’s also another way to counter increased ad spending.

Experimenting with local search optimization includes creating business profiles on Google, Yelp, and other listing pages. Moreover, you can experiment with creating local content, including location-specific keywords, and building more authority through reviews.

Keeping local search ranking factors in mind at the time of optimization will help boost your brand’s presence and awareness in the local market and contribute to your marketing, too. For example, if you run a self-storage facility or a dental practice, people seeking storage options or dental care in your area will discover your website more easily if it’s optimized for local SEO.

Visuals On Your Homepage

In a recent study, we found that 56% of SaaS companies use photos of real people on their homepage and 70% of SaaS companies use custom illustrations. If you’re a SaaS company (heck, even if you’re not a SaaS company), experiment with these two options and see what your customers are more drawn to when it comes to visuals.

Value Propositions

If you’re early in the product development stage, you might have yet to hit on the most effective value proposition. At this stage, your SaaS product management team is still in the process of getting feedback from beta users and working to tailor the product development roadmap accordingly.

Rather than playing a guessing game with your value proposition and crossing your fingers that it aligns with your customers’ needs and wants, experiment with the message and use user feedback to help define your message.

With Google Optimize, Unbounce, or the landing page optimizer/creator of your choice, test a few different value propositions with site visitors and see how signups are influenced.

For example, HubSpot could experiment with two different value propositions on their homepage. One might say “There’s a better way to grow” kinda like this…

[Screenshot: HubSpot homepage with the headline “There’s a better way to grow”]

And another might say “Generate more leads and close more deals.” Comparing the two would reveal what resonates with their customers and how HubSpot should tell their story moving forward.

While you’re at it, you could even test to see whether including keywords in this value proposition improves your search ranking position.

Salutations In Chat

Hey! Hi! Hello! What’s Up? Yo! Sup? Howdy! Bonjour!

You get the idea.

And sure, they all mean roughly the same thing. But these salutations may land differently depending on the industry. If you’re using a service like Drift or Intercom to acquire potential customers, you want to reduce friction as much as possible, so test out a range of chat greetings to find out which ones are most effective.

Sales Calls

One of my favorite technologies on the market today is the AI-driven phone service—you know, services like Chorus, Dialpad, ExecVision and Gong that analyze your voice calls using artificial intelligence and machine learning. Marketing and sales teams can use tools like these to track what their salespeople are saying to prospects and craft conversations that are more likely to convert.

I have a friend who manages a SaaS sales team. He listened to his team’s calls and found that when salespeople shared the story behind the company’s founding, prospects were 33% more likely to close. That’s huge! So he encouraged his team to inject the founding story into every pitch they could, and as a result the entire team saw a sustained lift in their close rates.

Long-Form vs. Short-Form

There’s so much debate in the industry about whether you should use long-form or short-form content that I’ve decided the only answer is…

Experiment.

You knew I was going to say that, didn’t you?

The best way to truly understand whether your customers, prospects and leads would prefer short content over long content is to test ’em both.

One of the best blog posts I’ve read on the topic of long-form vs. short-form content is this piece from Joanna Wiebe about landing page length. She outlines how important it is for brands to think about the awareness level of their audience at the time they’re reading the copy, and how that level of awareness can influence how much you need to explain.

Here’s a graphic she created to showcase the thinking behind it:

[Graphic: copy length vs. audience awareness, by Joanna Wiebe]

Brilliant right?

High awareness = Say less…

Low awareness = Say more…

No matter the content, whether it’s a landing page, onboarding email, drip campaign, or cold email outreach, experiment with message length until you have the data you need.

Pricing Page Layout & Design

One of the pages that matters most yet often gets very little love is your pricing page. This page is a key point in the buying cycle—the point where your customer decides whether your product or service is worth it.

Your pricing page is a gold mine for experimentation:

  •       Should you highlight your most popular plan?
  •       In what order should you list your product features?
  •       Testimonials or no testimonials?
  •       Should the default be annual or monthly pricing?
  •       For enterprise solutions, is it better to include a price or just a “Contact Us” button?
  •       Does the entire layout need a redesign?
  •       Should you run remarketing ads on the page?
  •       Do you need an exit intent popup?

All of these little experiments can add up to massive increases in revenue for your business. But don’t look at this list and assume you have to do ALL the things.

Identify which of these experiments make the most sense for your pricing page and your goals, like achieving a higher LTV or a lower CAC.

Social Media Display Copy

Every modern-day marketer has experimented with email subject lines. You can do something similar on social media by sharing two posts with the same link but different copy to drive folks to the content.

Let’s say you want to promote your blog post about wearable safety alerts. Your first post could answer an FAQ, while your second post could repurpose a customer review as a graphic, with both linking to your blog.

Whatever you originally titled your blog post doesn’t have to be what goes in your tweet. Try a new variation of that title, reshare it with your followers and see the reaction.

The same thing works on Facebook, Instagram and even Reddit.

Messaging Through Paid Media

Not 100% sure how best to broadcast your value proposition?

You’re not alone.

Tons of startups struggle with communicating clearly what they do and how they do it. Rather than leaving this to chance and a dream, invest in a paid media campaign on a channel where your audience spends time and showcase your message. Communicate a few variations of your value proposition on both the ad and the landing page. Does one value proposition resonate more? Is one value proposition generating more clicks and sign-ups? If so, it’s likely that you can use this insight to guide your approach to communicating what it is you do.

New Channels For Distribution

A lot of brands stick to one marketing and distribution channel until the well runs dry. I get it—you need to fish where the fish are. But there are many other ponds and lakes out there you may be overlooking.

We often make the mistake of assuming that our audience is singular in their channel usage. We think the people who use Instagram only use Instagram, and the people who use LinkedIn only use LinkedIn. In reality, these people also frequent Quora, Medium, Reddit, Yelp, G2 Crowd, Capterra, Facebook, Twitter, YouTube, Vimeo…

So how do you experiment on these channels?

Start with a hypothesis. For example: If we create long-form content on Medium and distribute it to the ABC publication, we will be able to drive their audience to our site through a lead magnet.

Not sure where to start? Try using assets or topics that have worked well elsewhere. For example, if a blog post worked really well on your own site, it’ll likely do well on Medium.

The next step: Test it.

Create the content, publish it and gauge whether the effort was a success based on the response you get.

Blog Post Titles

When it comes to gaining initial traction for your blog posts, the title is often the most important element. Some people use their titles purely as clickbait, but the best marketers recognize that the title is where you should communicate the value of the blog post.

But figuring out how to communicate that can be tough. So… test it!

Experiment with multiple titles. One way to do this is by sharing the post on social media. Let’s say you’ve published this blog post:

[Screenshot: a blog post about B2B vs. B2C eCommerce]

Rather than assuming that this title is the one, share the article on Twitter with different headlines:

  • Ending the Debate: What’s Better—B2B or B2C eCommerce?
  • What’s the Difference Between B2B and B2C eCommerce in 2020? 
  • B2B vs B2C: Let’s End the Debate Once and for All

See which title generates the most likes, retweets and clicks, and run with that title going forward.

Related reading: A Scientific Guide to Writing Great Headlines on Twitter, Facebook, and Your Blog

On-Site Personalization

[Image: example of a homepage personalized for the visitor]

Your jaw hits the floor—hard. Like fractured-in-half hard.

This is the power of personalization—if it’s done right. Testing personalization at scale has never been easier thanks to services like Clearbit, Optimizely and more. Cara Harshman talks about this approach in her post The Homepage is Dead: A Story of Website Personalization. A user’s IP address can tell you what company they work for or where they’re browsing from, allowing you to deliver a personalized online experience.

Native vs. Embedded Video On Social Media


Videos uploaded directly to Facebook perform better than videos uploaded to YouTube and then shared on Facebook. Videos uploaded to LinkedIn directly perform better than videos uploaded to YouTube and then shared on LinkedIn.

This is intentional. Both LinkedIn and Facebook recognize that if they send people away to YouTube, they’re losing eyeballs (and therefore advertising revenue) to Google.

Run a test where you share a video natively and via a YouTube link, then compare the results.

Meta Descriptions

When your landing pages, blog posts and other assets are ranking well, start testing ways to increase your click-through rate. One of the most underrated game-changers is the meta description. This is a meta description:

[Screenshot: a meta description in Google search results]

PS: Notice the cool “Volume” & “CPC” feature under the search bar? Check out this video on my favourite Chrome extensions to see how I did it. If Google is a highway for traffic, the title and meta description are the billboard that convinces your audience to take the next exit.

For example, if you run an ecommerce site, see how potential visitors respond when you put the price directly in the meta description. Don’t be afraid to try emojis, too.

Case Studies & Testimonials


The case studies and testimonials on your site have a real influence on potential buyers—demonstrating credibility and trust is crucial for conversion.

You can make assumptions about what you think will resonate with your audience… or you can run experiments.

Take a look at your testimonials—do you have quotes from clients in other roles or other industries? Switch things up! Can you swap out a case study for one about another company? Try it!

You’ll never know until you try.

One of the coolest case studies I’ve come across is this one from Slack:

[Screenshot: Slack’s case study featuring NASA]

There’s so much to unpack here. But if NASA trusts Slack, shouldn’t you?

That’s the message your own case studies and testimonials ought to send.

Traditional Advertising

“today I downloaded an app after seeing a subway ad and I’m sending cosmic vibes to the marketer who will never be able to attribute it.” — Kelly Eng (@boomereng), March 16, 2019

I’m a digital marketer by trade. I dream in pixels.

So do most marketers who are reading this post, I would imagine. In fact, a lot of marketers have turned their backs on traditional marketing channels.

Radio ads. Television ads. Billboards. Magazine ads. Brochures.

Most of us consider these formats old-fashioned and ineffective. But how many of us have actually tried traditional channels lately?

You might discover that traditional media isn’t as lifeless as you thought. Or your campaign might fail—and that’s the point of experimentation. Either you’ll unlock a new growth channel or you’ll walk away with a lesson you can apply in the future.

Graphics For Social Sharing


People connect with people—we know that. But don’t assume you can slap any old image of a person on your content and call it a day.

In one of its many content experiments, Netflix compared how people responded to faces showing complex emotions vs. stoic or benign expressions. The results were clear: People connect with emotions.

Recapping the experiment, Netflix wrote:

Seeing a range of emotions actually compels people to watch a story more. This is likely due to the fact that complex emotions convey a wealth of information to members regarding the tone or feel of the content, but it is interesting to see how much members actually respond this way in testing…

Their testing also found that more visible, recognizable characters (especially polarizing ones) resulted in more engagement.

These results should be arresting for any marketer creating content.

We live in a world of social sharing. Millions of links are shared on Twitter, LinkedIn and Facebook every single day. When links are shared, they are typically accompanied by a graphic like this:

[Image: example social sharing card with title and featured image]

Experiment with the image. Use polarizing photos. Use people showing emotion. Try illustrations. Eventually you’ll figure out which graphics resonate with your audience and drive consistent engagement.

Outreach Times & Days

There’s a lot of debate online about the best time to send marketing emails.

Some people say the weekend. Some say the evening. And some say it really doesn’t matter—just share content when it makes sense for you.

What’s my suggestion?

There are literally tons of studies on this topic:

  • MailChimp on send time optimization.
  • Customer.io on the best day to send emails.
  • GetResponse on the best day to send email.
  • HubSpot’s best time to send a business email report.
  • MailerMailer’s report on email marketing metrics.

And they all say different things. Some say send an email at 8AM on Tuesday. Some say send an email at 5PM on the weekend. The key is simple… Experiment. Test different time slots. Test different days. The only way to really get a clear idea of what works is to run some tests.

Wrapping This Up

The next time you think you’ve run all the marketing tests you can, think again.

There are so many opportunities for experimenting—that is, if you work in an environment where experimentation isn’t frowned upon. You need to surround yourself with people who have a growth mindset and believe that experimentation leads to breakthroughs.

But don’t just leave this post feeling inspired. Pick an experiment from the list to test out in the next few months.

Which experiment will you be running? Leave a comment or send me a tweet @TheCoolestCool—I’d love to hear from you.

PS: My latest experiment is using YouTube. Check out my channel and subscribe !

Running Marketing Experiments That Work

Learn all about successful marketing experiments, then design and run your own research to generate traffic, boost sales, and grow your business.

There are so many ways to market your business that it's hard to know where to focus your limited time and resources. Should you send out text updates? Can you expand your social media presence? Which headlines are most likely to get customers to read a blog post and generate more sign-ups for your newsletter?

Rather than taking a wild guess and hoping for the best, conduct a few well-designed marketing experiments and then make your next marketing move with confidence.

What are marketing experiments?

Marketing experimentation is a process to test marketing messages and approaches on a small scale to see what generates the best response. A well-designed marketing experiment can validate existing marketing strategies or help you understand where you can refocus your marketing budget.

Even a small increase in your ability to target potential buyers can make a big difference in your bottom line, making marketing experiments important as a way to optimize your reach, sales prospects, and customer follow-through.

Marketing experiments vs. market research

While marketing experiments are focused efforts to answer specific questions, market research is a broader study of your market in general. It can provide information on a specific market segment, the key metrics that you want to target, and even insights from other industries that may be useful for your own business.

Market research is an opportunity to get information about your market and generate ideas for marketing experiments. It often uses methods like data analysis, surveys, and focus groups to gather and synthesize information.

Most marketers use market research to generate ideas about which areas of their marketing plan could be boosted by more effective marketing campaigns, subject lines, or landing pages. For example, if a market research focus group reveals that customers find your website too dense, you may ask your marketing content team to streamline your copy and measure the responses to see whether the new version is more effective at increasing sign-ups.

Marketing experimentation, on the other hand, tests one or more variables for a specific hypothesis. It is designed to generate specific data that you can use to help guide your marketing plan.

The elements of a marketing experiment

Whether you're testing different headlines, social media platforms, or your website's visual style, every marketing experiment includes a few basic components.

Experimental subjects

Your subjects are the members of your target audience. Depending on what you're testing, they may be customers, sales prospects, or job applicants. In many marketing experiments, subjects are divided into groups to compare the results of different experimental conditions.

Experimental conditions

The conditions of your experiment are the variables that you change in order to see which one works best. These may include an email's subject line, the location of a search bar on your website, the graphic style of a social media post, or the price of a product.

For example, does offering a one-month free account option to your subscription service increase paid sign-ups after a free trial is over? You can gain valuable insights by comparing the data from two different months in which only that variable is changed.

Although there are many variables you can test, it's usually a good idea to keep it simple and change just one thing in each experiment to make the results more clear. If you run an experiment with more than one variable, it can be hard to know which variable caused the difference in responses.

Experimental effects

The results you get from your experiment are called the effects. Depending on what you're testing, the effects may be sales, newsletter sign-ups, engagement, or customer feedback. In a good marketing experiment, the effects are measurable, allowing you to apply the results of your experiment to drive your marketing decisions.

The benefits of marketing experiments

Marketing experiments can do more than just show you which copy works best in a message. Here are some ways that experimentation can help your organization.

Develop new ideas

There's rarely a bad idea for a marketing experiment because you're gathering data rather than committing to a new, long-term marketing tactic. Sometimes a creative idea for your email marketing campaign ends up yielding much better results than your existing approach.

Test new strategies

If you're ready to change up your marketing plan, try out more effective marketing messages, or figure out the best digital channels for your product, marketing experimentation allows you to test the waters before committing fully to something new.

Save time and money

It's always great when you have an idea of what will succeed before you invest your business's valuable time and money. Running experiments gives you some insight into what messages customers respond to or what promotion schedule is most effective. Then you can focus your marketing message in the way your audience responds to best, making the most of your resources.

Optimize marketing campaigns

Even the most well-designed marketing campaigns sometimes fall short of expectations. Maybe newsletter sign-ups are lagging behind projections or your sales conversion rate isn't as high as you would like it to be.

There are many reasons why a campaign might not be working—a marketing experiment is a great way to isolate different factors and determine which changes will improve your marketing metrics most effectively.

If you find that your campaigns generate very few leads, you can test different approaches on a small scale to find something that's more effective. Once you have enough data, you can launch a new approach with confidence.

Types of marketing experiments

Depending on what you want to test, there are different types of experiments you can run. Understanding the benefits of each one will allow you to choose the option that's right for you.

A/B testing

One of the most common kinds of marketing experiments is A/B testing. This kind of experiment allows you to test two (or sometimes more) versions of a single variable. Your target audience receives a random version of your message, with one element changed.

For example, in A/B testing you may send out an email to your subscriber list with the same body copy for everyone, but two different versions of your subject line. Or a link in a social media post may lead people to one of two different landing pages.

It's important to make sure that the material your subjects see is delivered as randomly as possible to avoid accidentally influencing their behavior. It's impossible to control every possible condition, of course, but the more you can randomize who receives your messages, the better.

If you show one landing page to the first 100 people who click through your social media post and a different one to the next 100 people, you can't be sure that their responses weren't influenced by something other than the page itself. Perhaps those who view your post in the morning are already more eager consumers of your brand than those who don't interact with it until the evening. Making the test as random as possible helps to eliminate that influence.
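One way around that time-of-day bias, sketched here under our own assumptions rather than prescribed by any particular tool, is to bucket users by hashing a stable identifier: every user lands in the same variant on every visit, and assignment has nothing to do with when they arrive.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user: same user + same experiment -> same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The assignment is stable across visits and uncorrelated with arrival time.
print(assign_variant("user-42", "landing-page-test"))
print(assign_variant("user-43", "landing-page-test"))
```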

Multivariate testing

Sometimes it makes sense to test more than one variable at a time through multivariate testing rather than A/B testing. This works best when you want to test different variables that go together, like updated copy and a fresh visual style to give your brand a more contemporary, casual feel. Testing a major revamp of your website or a new pricing structure might also be a good time to consider multivariate testing.

Multivariate testing can help you optimize more than one variable while running fewer tests. However, it often requires a larger sample size and a way to analyze the data that can single out the effects of individual variables in a statistically significant way.
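To see why the sample-size requirement grows, here's a tiny sketch that enumerates the full grid of combinations for two hypothetical variables; three headlines and two hero images already produce six cells, and each cell needs enough traffic on its own.

```python
from itertools import product

# Hypothetical variables for a multivariate landing-page test.
headlines = [
    "There's a better way to grow",
    "Generate more leads",
    "Close more deals",
]
hero_images = ["photo_of_people.png", "custom_illustration.png"]

cells = list(product(headlines, hero_images))
print(f"{len(cells)} combinations to test:")  # 3 x 2 = 6 cells
for headline, image in cells:
    print(f"  headline={headline!r}, image={image!r}")
```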

The 5 steps of a great marketing experiment

Running a marketing test should be simple and straightforward if you follow these 5 easy steps.

Step #1: Decide what to test

What do you want to learn from your experiment? There are many elements you can measure. Some ideas include how many people click through to your site from an email newsletter or social media post, how long visitors stay on your website, the percentage of site visitors who make a purchase, or how many new people you reach with online ads.

You may want to analyze historical data for your business to see what has worked well for you in the past. If you find that shorter and more conversational subject lines result in a higher conversion rate for your email marketing campaigns, you might decide to experiment with subject lines that take your casual tone even further.

If your sales tend to drop off in the summer, testing different discount offerings at that time of the year can help you determine the one that boosts your sales the most. Maybe customer feedback indicates that they love the blog content on your website. Trying different post lengths—from short, snappy updates to long-form articles—will give you a sense of how in-depth your audience wants to go.

One tip here is to think about your value proposition—the benefits that you offer to your customers with your product or service. How can the benefits that are part of your value proposition inspire ways to reach out?

Step #2: Make a hypothesis

It may be tempting to run an experiment and just see what happens. But you'll get better results if you think ahead about what you expect to happen (and why) and then compare the results to your hypothesis.

For example, if your historical data shows that discount offers emailed to your most loyal customers result in a low response rate, try to brainstorm some reasons why that may be the case. If those offers are usually sent at the beginning of the week, you may suspect that your customers have busy schedules that distract them. Thus, your hypothesis may be that the later in the week you send the email, the more successful it will be.

Make sure to use a measurable hypothesis so you have data that you can test. After you've run the experiment, you can see which schedule produces the best results and compare it to your hypothesis, giving you valuable insight into hypotheses you can test in future experiments.

Step #3: Design the research

One of the most important things when designing a marketing experiment is to keep it simple. Choose your independent variable—the variable you change—and try to keep the other aspects of your different versions the same.

If you're sending a discount code via email and want to test two different discount amounts to see which one generates the most revenue, make sure to send both versions of the message at the same time to randomly generated groups of your customers.

In addition, think about the length of time the experiment will run. You want to make sure you get enough data to analyze, but you should also have a clear beginning and end to the experiment.

Step #4: Run the experiment

Here's where you get to put your hypothesis to the test! Create and launch your experiment by sending emails, publishing the two different versions of your landing page, or offering different discounts.

Follow your design parameters and collect research data that you'll use in your next step. Make sure to let the experiment run its course and collect all the data you require before analyzing the results.

Step #5: Analyze your results

Before you make any changes, it's important to think about how to measure success. What are you measuring—the number of people who make a purchase? Sign-ups for your newsletter? Click-throughs to your website?

Sometimes the results will be simple to understand: if one call-to-action line generates significantly more click-throughs than another, that one is clearly the winner. But if an experiment uses multiple variables or there are other factors that may affect the results, you may need to use a statistical analysis tool to help you isolate the data you need.

There are numerous software programs and online tools to assist with data collection and statistical analysis. Popular options include Minitab, Segment, and Google Analytics.
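If you'd rather run a quick check yourself, here's a minimal sketch of a two-proportion z-test on made-up numbers; it asks whether the gap between two conversion rates is larger than chance alone would explain.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up results: 120 of 2,000 visitors converted on A, 158 of 2,000 on B.
z, p = two_proportion_z_test(120, 2000, 158, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real difference
```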

Does the data match your hypothesis and expected results? If so, great! Now you know one way you can boost your marketing efforts. If not, that's valuable information, too, because it will allow you to change course, run future experiments, or give other options a try before you commit time and resources. Successful experiments and those that don't turn out as predicted both have valuable insight to offer.

What can marketing experimentation test?

There are many things that marketing experimentation can test. Popular options are different methods of communication or the messages themselves. But experiment ideas are endless, and with a well-designed experiment you can also test things like communication frequency, product prices and discounts, and the use of social media influencers versus real customers.

Marketing messages

Any way you deliver a marketing message is an opportunity for an experiment. Digital messages are often easiest to test since you can get real-time data about which ad sales copy brings more people to your site. But no matter how you market your product or service—whether it's paid media, in-person sales, or even skywriting—it's possible to design a marketing experiment for it.

Subject lines

When customers and others on your mailing list get your emails, you want them to open the messages and read the valuable information within. Subject lines can affect the appeal of your email marketing messages and your click-through rates. By testing different subject lines, you can discover which ones resonate better with your audience and optimize your email performance.

Landing pages

A landing page is where someone lands when they follow a link from somewhere else—a social media post, an email, or paid media like a digital ad. By designing two or more versions of a landing page and directing your audience randomly to one of the versions, you can see which one converts more people to registered users or revenue-generating customers.

Search engine results

Making sure that potential customers can find you is one of the most important steps you can take to grow your business. If your business isn't coming up near the top of search results, your digital marketing isn't doing all it could.

There are several factors that determine how well your website performs on a search engine: search engine optimization keywords, title tags, and your site's content. You might want to use one title tag on some product pages and a different one on another random set of pages. Seeing which tag brings more traffic will help you find the most effective way to attract customers.

Social media posts

Social media is a great place to run a marketing experiment. It has become an increasingly large part of many companies' marketing plans.

According to a survey done by the data-driven portfolio website Visual Objects, social media is the most successful digital marketing tool for small businesses. The continuous flow of new content on most social media sites means that there are a lot of opportunities to present fresh information to your target audience.

You can test different copy text, hashtags, or posting times. If your company offers business services, your hypothesis may be that you'll get more sales by offering a discount code on your social media accounts at the end of a quarter, when customers are looking to stretch their budget.

Customer service experiences

Providing great customer service is a factor in customer loyalty. Consider running a marketing experiment on your customer service process. For example, when customers contact you with a question or concern about an order, you might randomize their first contact with either a live agent or a chatbot.

By analyzing how efficiently their issue was handled or how satisfied they were with their experience, you will gain valuable data about what works best.

It may be necessary to rely on customer feedback to determine how successful your experiment is. However, you can get lots of measurable data if you analyze responses by category (positive, neutral, or negative) or ask customers to rank their experience on a numerical scale.

Visual style

Are your customers likely to spend more time on your website if it has a sleek, modern design or if it features colorful, eye-catching graphics? Trying out different visual styles in a small, focused experiment may give you surprising insights about whether your company's visual style could use a refresh or whether you're already on the right track.

Communication frequency

Finding the right frequency for your company's emails, social media posts, and text updates can boost your marketing campaigns and influence outcomes like purchasing behavior. You can use marketing tests to experiment with different frequencies.

Does a weekly email about product updates generate more click-throughs per month than a message once a month? Study your existing data and look for opportunities to change how often you get in touch.

Tips for successful marketing experiments

Marketing experiments can be a powerful tool! You probably want to start deploying them immediately to develop new marketing strategies and boost your business growth.

Read on for some things to keep in mind to make sure every experiment is a successful marketing experiment.

Keep it simple

It can be tempting to try to get as much data as possible from an experiment, but trying to test too many variables at once is a bad idea. It can leave you with inconclusive results or sample sizes for each variable that don't provide enough information.

Instead, run experiments that focus on your most important or impactful options and then design another experiment to test another hypothesis.

Get creative

This is your chance to take a chance. Because you're testing multiple options, you're not yet committed to any one choice. If you think potential customers might respond more to humor in your sponsored social media posts, you can run two versions of the post and measure which one earns more click-throughs.

Use the right sample size

Deciding how many people you need to reach to test your marketing hypothesis is important. If your sample size is too small, your results might not be accurate. But if the sample size is too large, you might be spending more time and effort than you need to.

The right sample size will be determined by your total number of users, how precise you want the results to be, and how confident you want to be in the results.
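Power-analysis libraries can do this calculation for you. Here's a minimal sketch using Python's statsmodels, where the 10% baseline rate and two-point lift are illustrative assumptions, not benchmarks:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10  # assumed current conversion rate
target = 0.12    # smallest lift worth detecting

# Standardized effect size for comparing two proportions
effect = proportion_effectsize(target, baseline)

n_per_variant = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,  # accept a 5% false-positive risk
    power=0.80,  # 80% chance of detecting a real lift
    ratio=1.0,   # equally sized groups
)
print(f"Roughly {n_per_variant:,.0f} users per variant")  # ~3,800 with these inputs
```

The smaller the lift you want to detect, the larger the sample you need, which is why testing tiny changes on low-traffic channels often isn't worth the wait.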

Know what you can control

Even if you run a well-designed marketing experiment with an appropriate sample size, a single variable, and a clear hypothesis, it's impossible for it to exist in a vacuum.

For example, if you test an email that asks users to sign up for membership at your outdoor aquatic club on an unseasonably cold day, your results will probably reflect that circumstance, which you can't control.

Tools and resources for your marketing team

Luckily, when it comes to running successful marketing experiments, some of the work has already been done for you. There are several options to help you test different messages and digital channels so you can focus on developing the best ideas for your business and putting the winning growth strategies into practice.

If you're considering running a marketing experiment on your email newsletter or social media, Mailchimp has tools to help you set up, send, and publish your content and then gather the data so you can study the effects and use the results to guide your marketing.

Wynter is a platform focused on testing business-to-business (B2B) messages. Wynter has tools to help you design and run marketing experiments with templates and guidelines to test which subject line, blog post, or paid media content resonates most with your target audience.

You may want to use a service to collect the website data you need for an effective marketing experiment. Hotjar analyzes and tracks user interactions and experiences with your website and other online products. This can give you valuable insight into which marketing copy, images, or pages are attracting the most interaction and click-throughs.

Userpilot offers tools to help you run marketing experiments and personalize your user interface, focusing on things like user retention and how many users with a free trial become paid subscribers.

When it comes to experiment ideas, the sky is the limit. Whether you use a tool or design your own experiments from scratch, marketing experimentation can help you make smarter business decisions and reach your target audiences more effectively.

Start coming up with new ideas today to reach your customers and optimize your marketing efforts!

Marketing Experiments in SaaS: Different Experiment Ideas To Try and How To Run Them

10 min read


Marketing experiments give you an insight into how well marketing strategies will perform before implementing them. But how should you run them?

In this article, we’ll start by exploring how to conduct a marketing experiment and give you different experiment ideas to try in your SaaS business.

So let’s jump right in.

A marketing experiment is a type of market research to validate the effectiveness of an existing marketing strategy or discover a new one.

  • Marketing experiments help you to innovate new ideas, try new marketing strategies, identify marketing mistakes, make smarter business decisions, and optimize campaigns to maximize performance.
  • Marketing testing is a technique used by businesses to evaluate the viability of their strategies before implementing them.
  • There are seven steps to conducting a marketing experiment: brainstorm experiment ideas, analyze existing data, make a hypothesis, choose the audience, select metrics to measure success, run the experiment, and analyze the results.

  • Here are five marketing experiment types you can try in your business: in-app onboarding experiments, product messaging experiments, email marketing experiments, paid media experiments, and case study experiments.

  • Tools like Userpilot, Wynter, and Hotjar can help you run successful marketing experiments.

What is a marketing experiment?

Why are marketing experiments important?

Imagine spending days with your team trying to come up with a marketing campaign to increase sales and awareness.

Fortunately, you and your team land on a campaign idea that looks great on paper, and you're sure it's the one.

So what’s next? Implementation, right?

You and your team decide it's time to implement the campaign, and you pour money and time into turning the strategy into real-life actions, only to see that your potential customers are not responding well to it.

All that money and effort down the drain. Yikes!

However, imagine if you had tested your marketing campaign first. This would have saved you time, money, and effort on an ineffective campaign since you would have known what your target audience wanted.

Marketing experiments extend beyond validating existing campaigns. They can also be used to innovate, test brand-new ideas, and make smarter business decisions.

Marketing experiments vs. marketing tests

Now that we've covered the importance of marketing experiments, let's look at the difference between marketing experiments and marketing tests.

A marketing test is a technique businesses use to evaluate the viability of their products, services, or strategies before launching them on a large scale.

Marketing tests often include the following:

  • A previous champion
  • A challenger (the product, service, or campaign)
  • A set budget and known timeframe
  • An expectation that there will be a winner

On the other hand, a marketing experiment is a type of marketing research with one of two aims: to discover new strategies or validate existing ones. Marketing experiments include the following:

  • Two or more challengers
  • A set budget but an unknown time frame
  • An expectation that it might be successful or it might fail

How to conduct a marketing experiment?

So you’re ready to start conducting marketing experiments, but how do you go about it? Here are seven steps to conducting successful marketing experiments:

The very first step in the marketing experimentation process is brainstorming ideas. What do you want to experiment with and why?

Don’t know where to start with brainstorming ideas? Have team meetings in your department, exchange ideas, and take daily notes. Then sum up the notes and prioritize the ideas based on your company objectives.

Once you identify your goals and the KPIs you’re trying to influence, the next step is to analyze historical data. What strategies did you use in the past? What worked and what didn’t?

Use analytics tools to dig into your historical data and identify potential reasons why some of your strategies didn’t work.

Then, analyze where you are in marketing, how you got there, and what you’re trying to achieve.

The next step is to create a hypothesis. Your hypothesis is what you’re trying to prove, and it should be testable.

When creating a hypothesis, most marketers fail to be specific. Suppose you said, “We need a new CTA copy for our website.”

But what happens next, and how do you know that you really need a new CTA copy? What’s wrong with the last one?

Here the hypothesis is subjective and not testable.

A better hypothesis would be, “Changing the CTA copy from ‘Submit’ to ‘Join Our Community’ will increase sign-ups by 5%.”

You can test this hypothesis by changing the CTA copy. If you get more sign-ups, your hypothesis is correct and users respond to the clearer call to action. If you get fewer sign-ups, or no change at all, your hypothesis is incorrect.

Using that data, you can follow up with another test like changing the CTA copy to something more basic like “Get Started” to see how users react.
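Once the results are in, a quick significance check tells you whether the difference in sign-ups is real or just noise. Here's a minimal sketch with Python's statsmodels, using invented visitor and sign-up counts:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: sign-ups out of visitors for each CTA copy
signups = [200, 250]    # "Submit" vs. "Join Our Community"
visitors = [5000, 5000]

z_stat, p_value = proportions_ztest(signups, visitors)  # two-sided test
lift = signups[1] / visitors[1] - signups[0] / visitors[0]

print(f"Lift: {lift:+.2%}, p-value: {p_value:.3f}")
# A p-value below 0.05 suggests the new copy really changed sign-ups
```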

You don’t have to run the experiment on all your customers if it’s not necessary. You can create customer segments based on your specific goals and run marketing experiments with them.

Don’t know how to get started with segmentation?

Userpilot offers many practical ways to segment users: NPS score, user attributes, experience flows, demographics, in-app behavior, and custom events.

Just create the segments in the tool and run your experiments on those users only.


The next step is to select the proper metrics that will indicate if your hypothesis is true or not.

Your metrics can be anything, including traffic, engagement, click-through rate (CTR), reach, conversion rate, and more.

Once your hypothesis is ready, and you’ve selected your metrics, the next step is to conduct your experiments.

When conducting marketing experiments, you should use A/B testing. Why?

A/B testing lets you compare the original version of an item against a new version, which helps you determine whether your hypothesis is correct by measuring your change against the original.

Userpilot’s A/B testing feature makes it easy for your team to make informed decisions about a marketing campaign and measure the impact of different strategies on your growth goals with higher accuracy.


The final step is to analyze your results. You don’t have to wait for the experiment to end before checking its results. Look at the different metrics you set for your campaign; they are the indicators that will determine the validity of your hypothesis.

Did the experiment drive the expected results?

If not, is there anything you can do to optimize it? If the answer is yes, you should take action to optimize the campaigns.

Different types of marketing experiments

Now that you know how to run marketing experiments, it’s time to figure out what type of marketing experiment you want to do. Here are five examples of different marketing experiments to try out in your business:

In-app onboarding is crucial, but most businesses fail to do it right.

According to research by Upland, 25% of users abandon an app after one use.

This points to one thing: marketers are not paying enough attention to in-app onboarding. There are several reasons why users will abandon your app after one use:

  • The app isn’t relevant to their needs.
  • The user experience is bad.
  • They don’t know how to use your app.

A good in-app onboarding experience gives users a clear picture of what your app does and how to use it.

Your experiments and testing should be aimed at helping your users understand your app’s value and reach milestones faster. On that note, here are a few in-app onboarding experiments you can run:

Test welcome screens

In-app welcome screens can significantly impact your user activation, satisfaction level, churn, and retention rate. Here is an example of how you can conduct a marketing experiment and segment users into two groups.

Don’t use a welcome screen for the first group; welcome them with a simple dashboard and leave them to figure it out on their own. On the other hand, welcome another group with a welcome screen that provides helpful “getting started” information or nudges them to take an interactive tour.


Then watch closely to see how differently each group behaves.

Test onboarding flows

Onboarding flows are a way to introduce brand-new users to your product or to new features. The goal of every onboarding flow is to shorten the time to value and get users to the ‘Aha!’ moment as quickly as possible.

You want to identify which onboarding flow version leads users to an ‘Aha!’ moment the quickest and keeps them engaged.

For this, you can show slide-outs to half of your users and tooltips to the other half. You can also experiment with different CTAs, in-app experiences, and progress bars to determine which version works better.


With Userpilot, you can create different UI patterns for your onboarding flows. Choose from modals, slideouts, tooltips, banners, and checklists, or combine them.

In the example below, we tested whether adding an in-app checklist would have an impact on our feature adoption goal. Based on the preliminary results, our goal completion rose from 29% to 60%.


You can also use Userpilot’s feature tagging to tag any UI element and record when users click on it. This way, you can see how users engage with your UI patterns.


Product messaging has a big influence on your customers’ purchase decisions.

You can display different versions of your product messaging to see which resonates the most with your users. Be sure to test the design and copy length. You could also test different formats – for example, if feature adoption is higher in the group that got a pop-up announcement rather than introductory tooltips, then you’ll know which approach works best.

Segment users based on jobs they need to complete, milestones they’ve reached, and other characteristics, then show relevant messages according to each segment:


Email marketing is one of the best digital channels to grow and nurture your leads. So it’s worth running experiments to see what your audience will respond best to.

Test the subject lines

Your subject line is the first thing users see before they decide to open your email. If your subject line doesn’t resonate with your users, they won’t open it, and you’ll miss out on potential conversions.

When you run marketing experiments, you should try different subject lines for your email to see what works. For example, you could personalize one email version by adding the recipient’s name to the subject line and see how your users respond.

You could also get creative by experimenting with emojis in your subject lines to see whether they improve open rates.


Test the content

If your users are opening your emails but aren’t doing anything after that, there might be an issue with your email content.

You should experiment with the email’s headline, CTA, content, and images and see how your audience responds. However, it’s important to focus on one aspect at a time, so don’t try to experiment with everything all at once; pick one focus and experiment with it.

Using paid media incorrectly can cause you to lose money while generating few leads and little return on investment.

You don’t want to experience this with paid ads.

You need to experiment with your ads to make the most of them. Rather than going with gut feelings and launching one marketing campaign, create two or more.

Then add different headlines, CTAs, and copy to those campaigns and show the variations to different potential customer segments until you start getting data for key metrics like conversion rate, CAC, and so on. Once you have this data, you can identify which marketing campaign works and which doesn’t.

Here is a paid media example from Datasine, which A/B tested two different visuals.


The case studies on your site influence your customers’ purchase decisions, as they help you demonstrate credibility and authority.

Take a look at existing testimonials instead of making assumptions about what your audience might like.

Include quotes from clients in various roles rather than just those in executive management. Or it might be necessary to replace your case study with an updated one.


Best tools to run successful marketing experiments

Let’s take a look at some of the best tools to run successful experiments:

Userpilot – for in-app onboarding experiments


Userpilot is a code-free product experience platform that helps product teams deliver personalized in-app experiences to increase growth metrics at every user journey stage.

Userpilot’s special ingredients for marketing experiments are its ability to analyze in-app customer behavior and to run A/B tests on in-app experiences.

With Userpilot, marketers can segment users based on specific attributes or in-app events and create onboarding flows that get users to the ‘Aha!’ moment.

Userpilot has three pricing packages: $249/mo (Traction), $499/mo (Growth), and $1,000/mo (Enterprise).

Wynter – for messaging & copywriting experiments


Wynter is a message research tool for getting customer feedback on how well your messaging resonates.

It conducts message testing for ad, website, and email copy. As a result, you can determine where your messaging falls short and fix it to attract more customers. Wynter helps you determine your message-market fit, get internal consensus, identify your customers’ pain points and desires, and validate your messaging.

Pricing: $899/test (Wynter Starter), $1489/mo (Wynter Monthly), $19,000/yr (Wynter Pro).

Hotjar – for product experience experiments


Hotjar is a user feedback and behavior analytics tool that provides insights into how users behave on your site and why.

It packs in a range of essential features, including heatmaps, conversion funnels, feedback polls, session recordings, surveys, form analysis, tester recruitment, and incoming feedback.

Pricing: The tool comes in two different plans:

Observe: Pricing starts at $0 (Basic), $39/mo (Plus), and $99/mo (Business).

Ask: It starts at $0 (Basic), $59/mo (Plus), and $79/mo (Business).

Marketing experiments are a great way to produce the best strategy for your business. We shared five different experiment ideas you can try for your SaaS.

Are you looking to run in-app experiments code-free? Book a demo call with our team and get started!



How to Plan & Test Marketing Experiments

Learn how to run marketing experiments the right way in 2022. We show you how every marketing strategy can be a learning strategy.


Today’s marketing leaders are using marketing tests and experiments to increase insight and action. And organisations committed to marketing experiments are transforming their businesses with increased sales. 

How? Because experiments are the best way to create new knowledge systematically and to learn more about your customers and your audience.

They force you to question your own ideas, beliefs, and the best practices you’ve read so much about. 

Continually testing and learning is crucial to making future marketing decisions that are based on proven results rather than opinions. 

After all, at the pace customer behaviours are evolving, it’s becoming even more challenging for marketers to stay on top of how brands and customers can connect. 

Why Run Marketing Experiments?

In the short and long run, marketing experiments are pivotal for success. On a deeper level, testing what works and analysing your data makes it easier for you to determine your ROI and make data-driven decisions. 

Here are 3 of the top reasons why you should prioritise marketing experiments:

1) Improved customer experience: Marketing experiments help you understand customer needs, and how you can deliver more personalised customer experiences. 

2) Better decision making: Marketing experiments help you make better-informed, data-driven decisions. By conducting marketing experiments, you’ll be able to get a snapshot of what’s working and what’s not working.

3) Understand customers more deeply: Marketing experiments allow you to easily learn and predict customer behaviour based on data and make course corrections along the way. 

Get Inspired: 21 Marketing Experiments & Test Ideas

The Planning and Testing Process

Before setting up a marketing experiment, make sure you plan for maximum effectiveness. We’ve pulled together our top tips for planning and testing your marketing experiments.

Start with a goal and hypothesis: Good marketing experiments start with a clear goal to validate. A hypothesis forces you to think about the experiment. Ask yourself questions about how and why you think outcomes will occur.

Analyse where you are now: Once you understand what needle you’re trying to move, look at existing data in your marketing management tool to see where you are, so you can know when you reach your goal.

Brainstorm ideas: Equipped with insight on your starting point, you can meet with your team to come up with the highest-impact ideas to get where you want to be with marketing and sales.

Prioritise your ideas: Use the ICE (Impact, Confidence, Ease) method to rank the abundance of ideas you’ve created before you need them, so you can respond rapidly to performance data; a minimal scoring sketch follows these steps. After this step, you should have a clear idea of which experiments to go after.

Run the experiments: Start small and prove the value of experimentation with small wins. Allow campaigns of at least 3 weeks to gather info about what works best to optimise your results.

Measure campaign success: And last (but definitely not least) in your marketing experiment process, measurement solutions are crucial when it comes to understanding impact, and making any needed real-time adjustments.
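Here's what that ICE scoring might look like in practice: a minimal sketch in Python, assuming a multiplied Impact × Confidence × Ease score on a 1–10 scale (the ideas and scores are invented for illustration):

```python
# Hypothetical backlog of experiment ideas, each dimension scored 1-10
ideas = [
    {"name": "Personalised email subject lines", "impact": 7, "confidence": 8, "ease": 9},
    {"name": "Redesigned pricing page", "impact": 9, "confidence": 5, "ease": 3},
    {"name": "End-of-quarter social discount", "impact": 6, "confidence": 6, "ease": 8},
]

for idea in ideas:
    # Multiply the three ICE dimensions into a single priority score
    idea["ice"] = idea["impact"] * idea["confidence"] * idea["ease"]

# Highest ICE score first: these are the experiments to run next
for idea in sorted(ideas, key=lambda i: i["ice"], reverse=True):
    print(f"{idea['ice']:4d}  {idea['name']}")
```

Some teams average the three scores instead of multiplying them; either way, the point is a consistent, lightweight ranking rather than a precise forecast.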

TrueNorth allows you to test assumptions about your messaging, audience, and ad creatives in real time. You can cut down on the amount of time you’d have to spend manually collecting data from multiple platforms.

Final Marketing Experiment Tips 

Most marketing experiments will have one of two objectives: optimising performance or unlocking growth.

To optimise performance, you can start by testing out different tactics, tools, and strategies. Try new content types or ad formats.

In terms of looking for growth opportunities, try something completely different. Throw your existing landing page out the window, and try something new. If the fresh version works better, use that as your benchmark going forward. 

Ideally, you’d want to test one variable at a time, but you also have to realise that the impact of a small change will be small.

The final precondition for a good marketing experiment is that it’s run as a split test. This way, you can really compare the effectiveness of different ideas.

Bonus point: Build a culture of data-led experimentation. Encourage testing through ideation sessions with your team. Prioritise discussing ideas and reviewing results in order to develop learning for further growth.

Whether the results are good or not so good, every marketing experiment is an opportunity to learn.

Are you ready to start your first marketing experiment?

In an environment that’s always evolving, constant learning becomes crucial to ensure we stay ahead of the curve. 

Experimentation makes us think about what works, why it works, and what might be done differently. This is part of what makes it an important tool for marketers to keep a finger on the pulse of the evolving nature of marketing.

TrueNorth brings all of your data together to run marketing experiments at scale. It helps by continually surfacing insights to help you course-correct as you go. 

With marketing experiments software, there’s no more need to log into multiple tools to track important marketing metrics.

You can even automate ideation alerts and goal projections, and respond rapidly to drops in performance to stay on track.

Get started with your first marketing experiments for free with TrueNorth.

Marcus Taylor

Marcus is the CEO of TrueNorth, a growth marketing platform that helps marketing teams focus, align and track marketing in one place.


How to Do A/B Testing: 15 Steps for the Perfect Split Test

Planning to run an A/B test? Bookmark this checklist for what to do before, during, and after to get the best results.



Published: 05/23/24

So, you want to discover what truly works for your audience, and you’ve heard about this mythical form of marketing testing. But you have questions like: “What is A/B testing in marketing, anyway?” and “Why does it matter?”

Don’t worry! You’ll get all the answers to your burning questions. I’ll even tell you the second answer straight away…


When marketers like us create landing pages , write email copy, or design call-to-action buttons, it can be tempting to use our intuition to predict what will make people click and connect.

But as anyone who’s been in marketing for a minute will tell you, always expect the unexpected. So, instead of basing marketing decisions on a “feeling,” you’re much better off running an A/B test to see what the data says.

Keep reading to learn how to conduct the entire A/B testing process before, during, and after data collection so you can make the best decisions based on your results.


Table of Contents

  • What is A/B testing?
  • History of A/B testing
  • Why is A/B testing important?
  • How does A/B testing work?
  • A/B testing in marketing
  • What does A/B testing involve?
  • A/B testing goals
  • How to design an A/B test
  • How to conduct A/B testing
  • How to read A/B testing results
  • A/B testing examples
  • 10 A/B Testing Tips from Marketing Examples

What Is A/B Testing?


A/B testing, also known as split testing, is a marketing experiment wherein you split your audience to test variations on a campaign and determine which performs better. In other words, you can show version A of a piece of marketing content to one half of your audience and version B to another.

A/B testing is helpful for comparing two versions of a webpage, email newsletter, subject line, design, app, and more to see which is more successful.

Split testing takes the guesswork out of discerning how your digital marketing materials should look, operate, and be distributed. I'll walk you through everything you need to know about split testing; if you're a visual learner, the video below covers the same ground.

It’s hard to track down the “true” origins of A/B testing. However, in terms of marketing, A/B testing — albeit in its initial and imperfect form — arguably started with American advertiser and author Claude Hopkins.

Hopkins tested his ad campaigns using promotional coupons.

Still, Hopkins’ “Scientific Advertising” process didn’t include the key principles we use in A/B testing today. We have 20th-century biologist Ronald Fisher to thank for those.

Fisher, who defined statistical significance and developed the null hypothesis, helped to make A/B testing more reliable.

That said, the marketing A/B testing we know and love today took shape in the 1960s and ‘70s, when it was used to test direct-response campaign methods. Another key marketing moment came in 2000.

At this time, Google engineers ran their first A/B test. (They wanted to know the best number of results to display on the search engine results page.)

Why is A/B testing important?

A/B testing has many benefits for a marketing team, depending on what you decide to test. The list of items you can test to determine the overall impact on your bottom line is practically limitless.

But you shouldn’t sleep on using A/B testing to find out exactly what your audience responds best to either. Let’s learn more.

You Can Find Ways To Improve Your Bottom Line

Let’s say you employ a content creator with a $50,000/year salary. This content creator publishes five articles weekly for the company blog, totaling 260 articles per year.

If the average post on the company’s blog generates 10 leads, you could say it costs just over $192 to generate 10 leads for the business ($50,000 salary ÷ 260 articles = $192 per article). That’s a solid chunk of change.

Now, if you ask this content creator to spend two days developing an A/B test on one article, instead of writing two posts in that time, you might burn $192, as you’re publishing fewer articles.

But, if that A/B test finds you can increase conversion rates from 10 to 20 leads, you just spent $192 to potentially double the number of customers your business gets from your blog.

… in a Low Cost, High Reward Way

If the test fails, of course, you lost $192 — but now you can make your next A/B test even more educated. If that second test succeeds, you ultimately spent $384 to double your company’s revenue.

No matter how many times your A/B test fails, its eventual success will almost always outweigh the cost of conducting it.

You can run many types of split tests to make the experiment worth it in the end. Above all, these tests are valuable to a business because they’re low in cost but high in reward.
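To make the arithmetic above concrete, here's the same back-of-the-envelope calculation in Python (the salary, publishing rate, and lead counts are the illustrative figures from this example, not benchmarks):

```python
salary = 50_000          # content creator's annual salary, in dollars
articles_per_year = 260  # five posts a week
leads_per_article = 10

cost_per_article = salary / articles_per_year  # ~$192: also the cost of one A/B test
cost_per_lead = cost_per_article / leads_per_article

# If the test doubles leads per article, the effective cost per lead is halved
improved_cost_per_lead = cost_per_article / (leads_per_article * 2)

print(f"Cost per article: ${cost_per_article:.2f}")
print(f"Cost per lead: ${cost_per_lead:.2f} before, ${improved_cost_per_lead:.2f} after")
```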

You Can Find Out What Works for Your Audience

A/B testing can be valuable because different audiences behave, well, differently. Something that works for one company may not necessarily work for another.

Let’s take an unlikely B2B marketing tactic as an example. I was looking through HubSpot’s 2024 Industry Trends Report data for an article last week.

I noticed that 10% of B2B marketers planned to decrease their investment in NFTs as part of their strategy in 2024.

My first thought was, “Huh, NFTs in B2B?”

Then it hit me. To have that decrease, B2B marketers must’ve been using NFTs in the first place. Even more surprising than this revelation was that 34% of marketers plan to increase investment in NFTs as part of their B2B strategy.

That’s just one example of why conversion rate optimization (CRO) experts hate the term “best practices.” Because that “best practice”? Well, it may not actually be the best practice for you.

But, this kind of testing can be complex if you’re not careful. So, let’s review how A/B testing works to ensure you don’t make incorrect assumptions about what your audience likes.

To run an A/B test, you need to create two different versions of one piece of content, with changes to a single variable.

Then, you’ll show these two versions to two similarly sized audiences and analyze which one performed better over a specific period. Remember, the testing period should be long enough for you to draw accurate conclusions from your results.

An image showing an A/B test with a control and variation group


A/B testing helps marketers observe how one version of a piece of marketing content performs alongside another. Here are two types of A/B tests you might conduct to increase your website’s conversion rate.

Example 1: User Experience Test

Perhaps you want to see if moving a certain call-to-action (CTA) button to the top of your homepage instead of keeping it in the sidebar will improve its click-through rate.

To A/B test this theory, you’d create another, alternative web page that uses the new CTA placement.

The existing design with the sidebar CTA — or the “control” — is version A. Version B, with the CTA at the top, is the “challenger.” Then, you’d test these two versions by showing each to a predetermined percentage of site visitors.

Ideally, the percentage of visitors seeing either version is the same.

If you want more information on how to easily perform A/B testing on your website, check out HubSpot’s Marketing Hub or our introductory guide.

Example 2: Design Test

Perhaps you want to find out if changing the color of your CTA button can increase its click-through rate.

To A/B test this theory, you’d design an alternative CTA button with a different button color that leads to the same landing page as the control.

If you usually use a red CTA button in your marketing content, and the green variation receives more clicks after your A/B test, this could merit changing the default color of your CTA buttons to green from now on.

Here are some elements you might decide to test in your marketing campaigns:

  • Subject lines.
  • Fonts and colors.
  • Product images.
  • Blog graphics.
  • Navigation.
  • Opt-in forms.

Of course, this list is not exhaustive. Your options are countless and differ depending on the type of marketing campaign you’re A/B testing. (Blog graphics, for example, typically won’t apply to email campaigns, but product images can apply to both email and blog testing.)

An image showing the results of A/B website testing

But let’s say you wanted to test how different subject lines impacted an email marketing campaign’s conversion rates. What would you need to get started?

Here’s what you’ll need to run a successful A/B test.

  • A campaign: You’ll need to pick a marketing campaign (i.e., a newsletter, landing page, or email) that’s already live. We’re going with email.
  • What you want to test: You’ll need to pick the element(s) you wish to A/B test. In this case, that would be the subject line used in an email marketing campaign. But you can test all manner of things, even down to font size and CTA button color. Remember, though, if you want accurate measurements, only test one element at a time.
  • Your goals: Are you testing for the sake of it? Or do you have well-defined goals? Ideally, your A/B testing should link to your revenue goals. (So, discovering which campaign has a better impact on revenue success.) To track success, you’ll need to select the right metrics. For revenue, you’d track metrics like sales, sign-ups, and clicks.

A/B testing can tell you a lot about how your intended audience behaves and interacts with your marketing campaign.

Not only does A/B testing help determine your audience’s behavior, but the results of the tests can help determine your next marketing goals.

Here are some common goals marketers have for their business when A/B testing.

Increased Website Traffic

You’ll want to use A/B testing to help you find the right wording for your website titles so you can catch your audience’s attention.

Testing different blog or web page titles can change the number of people who click on that hyperlinked title to get to your website. This can increase website traffic.

Provided it’s relevant, an increase in web traffic is a good thing! More traffic usually means more sales.

Higher Conversion Rate

Not only does A/B testing help drive traffic to your website, but it can also help boost conversion rates.

Testing different locations, colors, or even anchor text on your CTAs can change the number of people who click these CTAs to get to a landing page.

This can increase the number of people who fill out forms on your website, submit their contact info to you, and “convert” into a lead.

Lower Bounce Rate

A/B testing can help determine what’s driving traffic away from your website. Maybe the feel of your website doesn’t vibe with your audience. Or perhaps the colors clash, leaving a bad taste in your target audience’s mouth.

If your website visitors leave (or “bounce”) quickly after visiting your website, testing different blog post introductions, fonts, or featured images can retain visitors.

Perfect Product Images

You know you have the perfect product or service to offer your audience. But, how do you know you’ve picked the right product image to convey what you have to offer?

Use A/B testing to determine which product image best catches the attention of your intended audience. Compare the images against each other and pick the one with the highest sales rate.

Lower Cart Abandonment

E-commerce businesses see an average of 70% of customers leave their website with items in their shopping cart. This is known as “shopping cart abandonment” and is, of course, detrimental to any online store.

Testing different product photos, check-out page designs, and even where shipping costs are displayed can lower this abandonment rate.

Now, let’s examine a checklist for setting up, running, and measuring an A/B test.

Designing an A/B test can seem like a complicated task at first. But, trust us — it’s simple.

The key to designing a successful A/B test is to determine which elements of your blog, website, or ad campaign can be compared and contrasted against a new or different version.

Before you jump into testing all the elements of your marketing campaign, check out these A/B testing best practices.

Test appropriate items.

List elements that could influence how your target audience interacts with your ads or website. Specifically, consider which elements of your website or ad campaign influence a sale or conversion.

Be sure the elements you choose are appropriate and can be modified for testing purposes.

For example, you might test which fonts or images best grab your audience’s attention in a Facebook ad campaign. Or, you might pilot two pages to determine which keeps visitors on your website longer.

Pro tip: Choose appropriate test items by listing elements that affect your overall sales or lead conversion, and then prioritize them.

Determine the correct sample size.

The sample size of your A/B test can have a large impact on the results — and sometimes, that is not a good thing. A sample size that is too small will skew the results.

Make sure your sample size is large enough to yield accurate results. Use a tool like a sample size calculator to figure out how many visitors, interactions, or campaign participants you need to obtain a reliable result.
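Most of those calculators implement the same classic two-proportion formula, so you can sanity-check them with a few lines of Python. A sketch, assuming a 10% baseline rate and a two-point lift as example inputs:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Classic sample-size formula for comparing two proportions
    (normal approximation, two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a lift from a 10% to a 12% conversion rate
print(sample_size_per_variant(0.10, 0.12))  # ~3,839 visitors per variant
```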

Check your data.

A sound split test will yield statistically significant and reliable results. In other words, your A/B test results are not influenced by randomness or chance. But how can you be sure your results are statistically significant and reliable?

Just like determining sample size, tools are available to help verify your data.

Tools such as Convertize’s AB Test Significance Calculator allow users to plug in traffic data and conversion rates of variables and select the desired level of confidence.

The higher the statistical significance achieved, the less you can expect the data to occur by chance.

Pro tip: Ensure your data is statistically significant and reliable by using tools like A/B test significance calculators.
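For a sense of what these calculators typically do under the hood, here's a minimal two-proportion z-test in plain Python; the visitor and conversion counts are invented inputs:

```python
from math import sqrt
from statistics import NormalDist

def ab_significance(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test (normal approximation), the standard
    calculation behind most A/B significance calculators."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

z, p = ab_significance(120, 2400, 150, 2400)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 would clear a 95% confidence bar
```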


Schedule your tests.

When comparing variables, keeping the rest of your controls the same is important — including when you schedule to run your tests.

If you’re in the ecommerce space, you’ll need to take holiday sales into consideration.

For example, if you run an A/B test on the control during a peak sales time, the traffic to your website and your sales may be higher than the variable you tested in an “off week.”

To ensure the accuracy of your split tests, pick a comparable timeframe for both tested elements. Run your campaigns for the same length of time to get the best, most accurate results.

Pro tip: Choose a timeframe when you can expect similar traffic to both portions of your split test.

Test only one element.

Each variable of your website or ad campaign can significantly impact your intended audience’s behavior. That’s why looking at just one element at a time is important when conducting A/B tests.

Attempting to test multiple elements in the same A/B test will yield unreliable results. With unreliable results, you won’t know which element had the biggest impact on consumer behavior.

Be sure to design your split test for just one element of your ad campaign or website.

Pro tip: Don’t try to test multiple elements at once. A good A/B test will be designed to test only one element at a time.

Analyze the data.

As a marketer, you might have an idea of how your target audience behaves with your campaign and web pages. A/B testing can give you a better indication of how consumers really interact with your sites.

After testing is complete, take some time to thoroughly analyze the data. You might be surprised to find that what you thought was working for your campaigns was less effective than you initially thought.

Pro tip: Accurate and reliable data may tell a different story than you first imagined. Use the data to help plan or change your campaigns.

To get a comprehensive view of your marketing performance, use our robust analytics tool, HubSpot's Marketing Analytics software .

Follow along with our free A/B testing kit, which includes everything you need to run A/B testing: a test tracking template, a how-to guide for instruction and inspiration, and a statistical significance calculator to determine whether your tests were wins, losses, or inconclusive.


Before the A/B Test

Let’s cover the steps to take before you start your A/B test.

1. Pick one variable to test.

As you optimize your web pages and emails, you’ll find there are many variables you want to test. But to evaluate effectiveness, you’ll want to isolate one independent variable and measure its performance.

Otherwise, you can’t be sure which variable was responsible for changes in performance.

You can test more than one variable for a single web page or email — just be sure you’re testing them one at a time.

To determine your variable, look at the elements in your marketing resources and their possible alternatives for design, wording, and layout. You may also test email subject lines, sender names, and different ways to personalize your emails.

Pro tip: You can use HubSpot’s AI Email Writer to write email copy for different audiences. The software is built into HubSpot’s marketing and sales tools.

Keep in mind that even simple changes, like changing the image in your email or the words on your CTA button, can drive big improvements. In fact, these sorts of changes are usually easier to measure than the bigger ones.

Note: Sometimes, testing multiple variables rather than a single variable makes more sense. This is called multivariate testing.

If you’re wondering whether you should run an A/B test versus a multivariate test, here’s a helpful article from Optimizely that compares the processes.

2. Identify your goal.

Although you’ll measure several metrics during any one test, choose a primary metric to focus on before you run the test. In fact, do it before you even set up the second variation.

This is your dependent variable , which changes based on how you manipulate the independent variable.

Think about where you want this dependent variable to be at the end of the split test. You might even state an official hypothesis and examine your results based on this prediction.

If you wait until afterward to think about which metrics are important to you, what your goals are, and how the changes you’re proposing might affect user behavior, then you may not set up the test in the most effective way.

3. Create a 'control' and a 'challenger.'

You now have your independent variable, your dependent variable, and your desired outcome. Use this information to set up the unaltered version of whatever you’re testing as your control scenario.

If you’re testing a web page, this is the unaltered page as it exists already. If you’re testing a landing page, this would be the landing page design and copy you would normally use.

From there, build a challenger — the altered website, landing page, or email that you’ll test against your control.

For example, if you’re wondering whether adding a testimonial to a landing page would make a difference in conversions, set up your control page with no testimonials. Then, create your challenger with a testimonial.

4. Split your sample groups equally and randomly.

For tests where you have more control over the audience — like with emails — you need to test with two or more equal audiences to have conclusive results.

How you do this will vary depending on the A/B testing tool you use. Suppose you’re a HubSpot Enterprise customer conducting an A/B test on an email, for example.

HubSpot will automatically split traffic to your variations so that each variation gets a random sampling of visitors.
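If your tool doesn't split traffic for you, one common pattern is to assign variants deterministically by hashing each user's ID together with the experiment name, so the split is random across users but stable for any given user. A minimal sketch (the IDs and experiment name are placeholders):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Hash the user ID with the experiment name and map it to a variant.
    The same user always gets the same variant for a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user-42", "homepage-cta-test"))  # stable across calls
```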

5. Determine your sample size (if applicable).

How you determine your sample size will also vary depending on your A/B testing tool, as well as the type of A/B test you’re running.

If you’re A/B testing an email, you’ll probably want to send an A/B test to a subset of your list large enough to achieve statistically significant results.

Eventually, you’ll pick a winner to send to the rest of the list. (See “The Science of Split Testing” ebook at the end of this article for more.)

If you’re a HubSpot Enterprise customer, you’ll have some help determining the size of your sample group using a slider.

It’ll let you do a 50/50 A/B test of any sample size — although all other sample splits require a list of at least 1,000 recipients.

HubSpot’s slider for sample size grouping

If you’re testing something that doesn’t have a finite audience, like a web page, then how long you keep your test running will directly affect your sample size.

You’ll need to let your test run long enough to obtain a substantial number of views. Otherwise, it will be hard to tell whether there was a statistically significant difference between variations.

6. Decide how significant your results need to be.

Once you’ve picked your goal metric, think about how significant your results need to be to justify choosing one variation over another.

Statistical significance is a super important part of the A/B testing process that’s often misunderstood. If you need a refresher, I recommend reading this blog post on statistical significance from a marketing standpoint.

The higher the percentage of your confidence level, the more sure you can be about your results. In most cases, you’ll want a confidence level of 95% minimum, especially if the experiment was time-intensive.

However, sometimes, it makes sense to use a lower confidence rate if the test doesn’t need to be as stringent.

Matt Rheault, a senior software engineer at HubSpot, thinks of statistical significance like placing a bet.

What odds are you comfortable placing a bet on? Saying, “I’m 80% sure this is the right design, and I’m willing to bet everything on it,” is similar to running an A/B test to 80% significance and then declaring a winner.

Rheault also says you’ll likely want a higher confidence threshold when testing for something that only slightly improves the conversion rate. Why? Because random variance is more likely to play a bigger role.

“An example where we could feel safer lowering our confidence threshold is an experiment that will likely improve conversion rate by 10% or more, such as a redesigned hero section,” he explained.

“The takeaway here is that the more radical the change, the less scientific we need to be process-wise. The more specific the change (button color, microcopy, etc.), the more scientific we should be because the change is less likely to have a large and noticeable impact on conversion rate,” Rheault says.

7. Make sure you're only running one test at a time on any campaign.

Testing more than one thing for a single campaign can complicate results.

For example, if you A/B test an email campaign that directs to a landing page while you’re A/B testing that landing page, how can you know which change increased leads?

During the A/B Test

Let's cover the steps to take during your A/B test.

8. Use an A/B testing tool.

To do an A/B test on your website or in an email, you’ll need to use an A/B testing tool.

If you’re a HubSpot Enterprise customer, the HubSpot software has features that let you A/B test emails (learn how here), CTAs (learn how here), and landing pages (learn how here).

For non-HubSpot Enterprise customers, other options include Google Analytics, which lets you A/B test up to 10 full versions of a single web page and compare their performance using a random sample of users.

9. Test both variations simultaneously.

Timing plays a significant role in your marketing campaign’s results, whether it’s the time of day, day of the week, or month of the year.

If you were to run version A for one month and version B a month later, how would you know whether the performance change was caused by the different design or the different month?

When running A/B tests, you must run the two variations simultaneously. Otherwise, you may be left second-guessing your results.

The only exception is if you’re testing timing, like finding the optimal times for sending emails.

Depending on what your business offers and who your subscribers are, the optimal time for subscriber engagement can vary significantly by industry and target market.

10. Give the A/B test enough time to produce useful data.

Again, you’ll want to make sure that you let your test run long enough to obtain a substantial sample size. Otherwise, it’ll be hard to tell whether the two variations had a statistically significant difference.

How long is long enough? Depending on your company and how you execute the A/B test, getting statistically significant results could happen in hours... or days... or weeks.

A big part of how long it takes to get statistically significant results is how much traffic you get — so if your business doesn’t get a lot of traffic to your website, it’ll take much longer to run an A/B test.

Read this blog post to learn more about sample size and timing .

11. Ask for feedback from real users.

A/B testing has a lot to do with quantitative data... but that won’t necessarily help you understand why people take certain actions over others. While you’re running your A/B test, why not collect qualitative feedback from real users?

A survey or poll is one of the best ways to ask people for their opinions.

You might add an exit survey on your site that asks visitors why they didn’t click on a certain CTA or one on your thank-you pages that asks visitors why they clicked a button or filled out a form.

For example, you might find that many people clicked on a CTA leading them to an ebook, but once they saw the price, they didn’t convert.

That kind of information will give you a lot of insight into why your users behave in certain ways.

After the A/B Test

Finally, let's cover the steps to take after your A/B test.

12. Focus on your goal metric.

Again, although you’ll be measuring multiple metrics, focus on that primary goal metric when you do your analysis.

For example, if you tested two variations of an email and chose leads as your primary metric, don’t get caught up on click-through rates.

You might see a high click-through rate and poor conversions, in which case you might choose the variation that had a lower click-through rate in the end.

13. Measure the significance of your results using our A/B testing calculator.

Now that you’ve determined which variation performs the best, it’s time to determine whether your results are statistically significant. In other words, are they enough to justify a change?

To find out, you’ll need to conduct a test of statistical significance. You could do that manually, or you could just plug the results from your experiment into our free A/B testing calculator. (The calculator comes as part of our free A/B testing kit.)

You’ll be prompted to input your result into the red cells for each variation you tested. The template results are for either “Visitors” or “Conversions.” However, you can customize these headings for other types of results.

You’ll then see a series of automated calculations based on your inputs. From there, the calculator will determine statistical significance.

An image showing HubSpot’s free A/B testing calculator

14. Take action based on your results.

If one variation is statistically better than the other, you have a winner. Complete your test by disabling the losing variation in your A/B testing tool.

If neither variation is significant, the variable you tested didn’t impact results, and you’ll have to mark the test as inconclusive. In this case, stick with the original variation or run another test.

You can use failed data to help you figure out a new iteration on your new test.

While A/B tests help you impact results on a case-by-case basis, you can also apply the lessons you learn from each test to future efforts.

For example, suppose you’ve conducted A/B tests in your email marketing and have repeatedly found that using numbers in email subject lines generates better clickthrough rates. In that case, consider using that tactic in more of your emails.

15. Plan your next A/B test.

The A/B test you just finished may have helped you discover a new way to make your marketing content more effective — but don’t stop there. There’s always room for more optimization.

You can even try conducting an A/B test on another feature of the same web page or email you just did a test on.

For example, if you just tested a headline on a landing page, why not do a new test on the body copy? Or a color scheme? Or images? Always keep an eye out for opportunities to increase conversion rates and leads.

You can use HubSpot’s A/B Test Tracking Kit to plan and organize your experiments.

An image showing HubSpot’s free A/B Test Tracking Kit


How to Read A/B Testing Results

As a marketer, you know the value of automation. Given this, you likely use software that handles the A/B test calculations for you — a huge help. But after the calculations are done, you need to know how to read your results. Let’s go over how.

1. Check your goal metric.

The first step in reading your A/B test results is looking at your goal metric, which is usually conversion rate.

After you’ve plugged your results into your A/B testing calculator, you’ll get two results for each version you’re testing. You’ll also get a statistical significance result for each of your variations.

2. Compare your conversion rates.

By looking at your results, you’ll likely be able to tell if one of your variations performed better than the other. However, the true test of success is whether your results are statistically significant.

For example, say variation A had a 16.04% conversion rate, variation B had a 16.02% conversion rate, and your required confidence level for statistical significance is 95%.

Variation A has a higher conversion rate, but the results are not statistically significant, meaning that variation A won’t significantly improve your overall conversion rate.
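To see why, you can run the numbers yourself. The example above gives only the rates, so this sketch assumes 10,000 visitors per variation; even at that volume, a 0.02-point gap is indistinguishable from noise:

```python
from math import sqrt
from statistics import NormalDist

# Assumed sample sizes -- the example gives only the conversion rates
n_a, n_b = 10_000, 10_000
p_a, p_b = 0.1604, 0.1602

# Same two-proportion z-test sketched earlier
p_pool = (p_a * n_a + p_b * n_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.2f}")  # z ~ 0.04, p ~ 0.97: nowhere near 0.05
```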

3. Segment your audiences for further insights.

Regardless of significance, it’s valuable to break down your results by audience segment to understand how each key area responded to your variations. Common variables for segmenting audiences are:

  • Visitor type, or which version performed best for new visitors versus repeat visitors.
  • Device type, or which version performed best on mobile versus desktop.
  • Traffic source, or which version performed best based on where traffic to your two variations originated.

A/B Testing Examples

We’ve discussed how A/B tests are used in marketing and how to conduct one — but how do they actually look in practice? Let’s go over some examples of A/B experiments you could run for your business.

As you might guess, we run many A/B tests to increase engagement and drive conversions across our platform. Here are five examples of A/B tests to inspire your own experiments.

1. Site Search

Site search bars help users quickly find what they’re after on a particular website. HubSpot found from previous analysis that visitors who interacted with its site search bar were more likely to convert on a blog post.

So, we ran an A/B test to increase engagement with the search bar.

In this test, search bar functionality was the independent variable, and views on the content offer thank-you page were the dependent variable. We used one control condition and three challenger conditions in the experiment.

The search bar remained unchanged in the control condition (variant A).

In variant B, the search bar was larger and more visually prominent, and the placeholder text was set to “search by topic.”

Variant C appeared identical to variant B but only searched the HubSpot Blog rather than the entire website.

In variant D, the search bar was larger, but the placeholder text was set to “search the blog.” This variant also searched only the HubSpot Blog.

AB testing example: variant D of the hubspot blog search blog AB test

We found variant D to be the most effective: It increased conversions by 3.4% over the control and increased the percentage of users who used the search bar by 6.5%.

2. Mobile CTAs

HubSpot uses several CTAs for content offers in our blog posts, including ones in the body of the post as well as at the bottom of the page. We test these CTAs extensively to optimize their performance.

We ran an A/B test for our mobile users to see which type of bottom-of-page CTA converted best.

We altered the design of the CTA bar for our independent variable. Specifically, we used one control and three challengers in our test. For our dependent variables, we used pageviews on the CTA thank you page and CTA clicks.

The control condition included our normal placement of CTAs at the bottom of posts. In variant B, the CTA had no close or minimize option.

In variant C, mobile readers could close the CTA by tapping an X icon. Once it was closed out, it wouldn’t reappear.

In variant D, we included an option to minimize the CTA with an up/down caret.

variant D of the hubspot blog mobile CTA AB test

Our tests found all variants to be successful. Variant D was the most successful, with a 14.6% increase in conversions over the control. This was followed by variant C with an 11.4% increase and variant B with a 7.9% increase.

3. Author CTAs

In another CTA experiment, HubSpot tested whether adding the word “free” and other descriptive language to author CTAs at the top of blog posts would increase content leads.

Past research suggested that using “free” in CTA text would drive more conversions and that text specifying the type of content offered would help SEO.

In the test, the independent variable was CTA text, and the main dependent variable was conversion rate on content offer forms.

In the control condition, the author CTA text was unchanged (see the orange button in the image below).

In variant B, the word “free” was added to the CTA text.

In variant C, descriptive wording was added to the CTA text in addition to “free.”

variant C of the hubspot blog CTA AB test

Interestingly, variant B saw a loss in form submissions, down by 14% compared to the control. This was unexpected, as including “free” in content offer text is widely considered a best practice.

Meanwhile, form submissions in variant C outperformed the control by 4%. It was concluded that adding descriptive text to the author CTA helped users understand the offer and thus made them more likely to download.

4. Blog Table of Contents

To help users better navigate the blog, HubSpot tested a new Table of Contents (TOC) module. The goal was to improve user experience by presenting readers with their desired content more quickly.

We also tested whether adding a CTA to this TOC module would increase conversions.

The independent variable of this A/B test was the inclusion and type of TOC module in blog posts. The dependent variables were conversion rate on content offer form submissions and clicks on the CTA inside the TOC module.

The control condition did not include the new TOC module — control posts either had no table of contents or a simple bulleted list of anchor links within the body of the post near the top of the article (pictured below).

In variant B, the new TOC module was added to blog posts. This module was sticky, meaning it remained onscreen as users scrolled down the page. Variant B also included a content offer CTA at the bottom of the module.

variant B of the hubspot blog chapter module AB test

Variant C included an identical module to variant B but with the CTA removed.

variant C of the hubspot blog chapter module AB test

Neither variant B nor variant C increased the conversion rate on blog posts. The control condition outperformed variant B by 7% and performed on par with variant C.

Also, few users interacted with the new TOC module or the CTA inside the module.

5. Review Notifications

To determine the best way of gathering customer reviews, we ran a split test of email notifications versus in-app notifications.

Here, the independent variable was the type of notification, and the dependent variable was the percentage of those who left a review out of all those who opened the notification.

In the control, HubSpot sent a plain text email notification asking users to leave a review. In variant B, HubSpot sent an email with a certificate image including the user’s name.

For variant C, HubSpot sent users an in-app notification.

variant C of the hubspot notification AB test

Ultimately, both emails performed similarly and outperformed the in-app notifications. About 25% of users who opened an email left a review, versus 10.3% of those who opened an in-app notification.

Users also opened emails more often.

10 A/B Testing Tips From Marketing Experts

I spoke to nine marketing experts from across disciplines to get their tips on A/B testing.

1. Clearly define your goals and metrics first.

“In my experience, the number one tip for A/B testing in marketing is to clearly define your goals and metrics before conducting any tests,” says Noel Griffith, CMO at SupplyGem.

Griffith explains that this means having a solid understanding of what you want to achieve with your test and how you will measure its success. This matters because, without clear goals, it’s easy to get lost in the data and draw incorrect conclusions.

For example, Griffith says, if you’re testing two different email subject lines, your goal could be to increase open rates.

“By clearly defining this goal and setting a specific metric to measure success (e.g., a 10% increase in open rates), you can effectively evaluate the performance of each variant and make data-driven decisions,” says Griffith.

Aside from helping you focus your testing efforts, Noel explains that having clear goals also means you can accurately interpret the results and apply them to improve your marketing strategies.

2. Test only ONE thing during each A/B test.

“This is the most important tip for A/B marketing from my perspective... Always decide on one thing to test for each individual A/B test,” says Hanna Feltges, growth marketing manager at Niceboard.

For example, when A/B testing button placement in emails, Feltges makes sure the only difference between these two emails is the button placement.

No difference should be in the subject line, copy, or images, as this could skew the results and make the test invalid.

Feltges applies the same principle to metrics by choosing one metric to evaluate test results.

“For emails, I will select a winner based on a predefined metric, such as CTR, open rate, reply rate, etc. In my example of the button placement, I would select CTR as my deciding metric and evaluate the results based on this metric,” Feltges says.

3. Start with a hypothesis to prove or disprove.

Another similarly important tip for A/B testing is to start with a hypothesis. The goal of each A/B test is then to prove the hypothesis right or wrong, Feltges notes.

For example, Feltges poses testing two different subject lines for a cold outreach email. Her hypothesis here is:

“Having a subject line with the prospect’s first name will lead to higher open rates than a subject line without the prospect’s first name,” she says.

Now, she can run multiple tests with the same hypothesis and can then evaluate if the statement is true or not.

Feltges explains that the idea here is that marketers often draw quick conclusions from A/B tests, such as “Having the first name in the subject line performs better.” But that is not 100% true.

A/B tests are all about being precise and specific in the results.

4. Track key test details for accurate planning and analysis.

“I keep a running log of how long my A/B tests for SEO took, and I make sure to track critical metrics like the statistical significance rate that was reached,” says NamePepper Founder Dave VerMeer.

VerMeer explains that the log is organized in a spreadsheet that includes other columns for things like:

  • The type of test.
  • Details about what was tested.

“If I notice any factors that could have influenced the test, I note those as well,” he adds. Other factors could be a competitor having a special event or something that happened in the news and caused a traffic spike.

“I check the log whenever I’m planning a series of A/B tests. For example, it lets me see trends and forecast how the seasonality may affect the test period lengths. Then I adjust the test schedule accordingly,” VerMeer says.

According to VerMeer, this form of tracking is also helpful for setting realistic expectations and providing clues as to why a test result did or didn’t match up with past performance.

5. Test often…

When I spoke to Gabriel Gan, head of editorial for In Real Life Malaysia, for my guide on running an email marketing audit, he set out two main rules for A/B testing.

When A/B testing emails, Gan recommends setting email A as the incumbent and email B as the contender.

Like Hanna, Gabriel emphasizes changing only one variable at a time. “For example, in email B, when testing open rates, only tweak the subject line and not the preview,” says Gan.

That’s because if you have more than one variable changed from the old email, “it’s almost impossible to determine which new addition you made has contributed to the improvement in OPR/CTR.”

Aside from only changing one variable at a time, Gan recommends testing often until you find out what works and what doesn’t.

“There’s a perception that once you set up your email list and create a template for your emails, you can ‘set it and forget it.’” Gan says. “But now, with the power of A/B testing, with just a few rounds of testing your headlines, visuals, copy, offer, call-to-action, etc., you can find out what your audience loves, do more of it, and improve your conversion rates twofold or threefold.”

6. …But don’t feel like you need to test everything.

“My top tip for A/B testing is only to use it strategically,” says Joe Kevens, director of demand generation at PartnerStack and the founder of B2B SaaS Reviews.

Kevens explains that “strategically” means that only some things warrant an A/B test due to the time and resources it consumes.

“I’ve learned from experience that testing minor elements like CTA button colors can be a waste of time and effort (unless you work at Amazon or some mega-corporation that gets a gazillion page visits, and a minor change can make a meaningful impact),” Kevens says.

Kevens recommends that instead, it’s more beneficial to concentrate on high-impact areas such as homepage layouts, demo or trial pages, and high-profile marketing messages.

That’s because these elements have a better shot to impact conversion rates and overall user experience.

Kevens reminds us that “A/B testing can be powerful, but its effectiveness comes from focusing on changes that can significantly impact your business outcomes.”

7. Use segmentation to micro-identify winning elements.

“When using A/B testing in marketing, don’t limit your target audience to just one set of parameters,” says Brian David Crane, founder and CMO of Spread Great Ideas.

Crane recommends using criteria like demographics, user behavior, past interactions, and buying history to experiment with A/B testing of these different segments. You can then filter the winning strategy for each segment.

“We use core metrics like click-through rates, bounce rates, and customer lifetime value to identify the combination that converts the most,” explains Crane.


8. Leverage micro-conversions for granular insights.

“I know that it’s common to focus on macro-conversions, such as sales or sign-ups, in A/B testing. However, my top tip is to also pay attention to micro-conversions,” says Laia Quintana, head of marketing and sales at TeamUp.

Quintana explains that micro-conversions are smaller actions users take before completing a macro-conversion.

They could be actions like clicking on a product image, spending a certain amount of time on a page, or watching a promotional video.

But why are these micro-conversions important? Quintana states, “They provide granular insights into user behavior and can help identify potential roadblocks in the conversion path.”

For example, if users spend a lot of time on a product page but do not add items to their cart, there might be an issue with the page layout or information clarity.

By A/B testing different elements on the page, you can identify and rectify these issues to improve the overall conversion rate.

“Moreover, tracking micro-conversions allows you to segment your audience more effectively. You can identify which actions are most indicative of a user eventually making a purchase and then tailor your marketing efforts to encourage those actions. This level of detail in your A/B testing can significantly enhance the effectiveness of your marketing strategy,” says Quintana.

9. Running LinkedIn Ads? Start with five different versions and A/B test them.

“A best practice when running LinkedIn Ads is to start a campaign with five different versions of your ad,” says Hristina Stefanova, head of marketing operations at Goose’n’Moose.

Stefanova reminds us that it’s important to tweak just one variable at a time across each version.

For a recent campaign, Stefanova started with five ad variations — four using different hero images and three having the CTA tweaked.

“I let the campaign run with all five variations for a week. At that point, there were two clearly great performing ads, so I paused the other three and continued running the campaign with the two best-performing ones,” says Stefanova.

According to Stefanova, those two ads also had the lowest CPC. The A/B testing exercise helped not only the specific campaign but also helped her better understand what attracts her target audience.

So what’s next? “Images with people in them are better received, so for upcoming campaigns, I am focusing right away on producing the right imagery. All backed up by real performance data thanks to A/B testing,” Stefanova says.

10. Running SEO A/B tests? Do this with your test and control group URLs.

“Given that the SEO space is constantly evolving, it’s getting increasingly difficult to run any sort of experiments and get reliable and statistically significant results. This is especially true when running SEO A/B tests,” says Ryan Jones, marketing manager at SEOTesting.

Luckily, Jones explains that you can do things to mitigate this and make sure that any SEO A/B tests you run now — and in the future — are reliable. You can then use the tests as a “North Star” when making larger-scale changes to your site.

“My number one tip would be to ensure that your control group and test group of URLs contain as identical URLs as you can make them. For example, if you’re running an A/B test on your PLP pages as an ecommerce site, choose PLPs from the same product type and with the same traffic levels. This way, you can ensure that your test data will be reliable,” says Jones.

Why does this matter? “Perhaps the number one thing that ‘messes’ with A/B test data is control and variant groups that are too dissimilar.

But by ensuring you are testing against statistically similar URLs, you can mitigate this better than anything else,” Jones says.

Start A/B Testing Today

A/B testing allows you to get to the truth of what content and marketing your audience wants to see. With HubSpot’s Campaign Assistant, you’ll be able to generate copy for landing pages, emails, or ads that can be used for A/B testing.

Learn how to best carry out some of the steps above using the free ebook below.

Editor's note: This post was originally published in May 2016 and has been updated for comprehensiveness.


Don't forget to share this post!

Related articles.

How to Determine Your A/B Testing Sample Size & Time Frame

How to Determine Your A/B Testing Sample Size & Time Frame

How The Hustle Got 43,876 More Clicks

How The Hustle Got 43,876 More Clicks

What Most Brands Miss With User Testing (That Costs Them Conversions)

What Most Brands Miss With User Testing (That Costs Them Conversions)

Multivariate Testing: How It Differs From A/B Testing

Multivariate Testing: How It Differs From A/B Testing

How to A/B Test Your Pricing (And Why It Might Be a Bad Idea)

How to A/B Test Your Pricing (And Why It Might Be a Bad Idea)

11 A/B Testing Examples From Real Businesses

11 A/B Testing Examples From Real Businesses

15 of the Best A/B Testing Tools for 2024

15 of the Best A/B Testing Tools for 2024

These 20 A/B Testing Variables Measure Successful Marketing Campaigns

These 20 A/B Testing Variables Measure Successful Marketing Campaigns

How to Understand & Calculate Statistical Significance [Example]

How to Understand & Calculate Statistical Significance [Example]

What is an A/A Test & Do You Really Need to Use It?

What is an A/A Test & Do You Really Need to Use It?

Learn more about A/B and how to run better tests.

Marketing software that helps you drive revenue, save time and resources, and measure and optimize your investments — all on one easy-to-use platform

A Tactical Guide to Marketing Experiments

A Tactical Guide to Marketing Experiments

When was the first time you used the scientific method?

One of my earliest memories of using the scientific method was in elementary school. My 5th-grade teacher assigned my class a project to display at the science fair at the end of the year; the type of science fair where you had to get the huge tri-fold cardboard. She gave us the liberty to choose whatever topic we wanted to do our project on, as long as we used the scientific method we learned in class.

One night, I saw my dad cleaning the table with a paper towel and noticed that the towel continuously ripped as he cleaned. This inspired me to figure out which brand of paper towel was better for cleaning wet spots for my science project.

  • I asked a question — “which brand of paper towel would be the strongest when cleaning wet spots?”
  • I looked into industry data — about paper towels and who claimed to be the best.
  • I set a hypothesis based on those insights — I think Brawny is better than Bounty because of xyz.
  • I tested my hypothesis — I used a paper towel from each brand to soak up 235 ml of water and determined which one held up longer.
  • I analyzed & communicated the result.

The growth process, in a nutshell, is the scientific method. You ask a question, analyze an existing occurrence, set a hypothesis, then test an experiment to justify your hypothesis, analyze the results, and communicate the findings.

In this article, I’ll go through a step-by-step overview of how the growth process works. This process is an accumulation of things I learned from reading hundreds of articles by people such as John Egan, Andrew Chen, Brian Balfour, Sean Ellis, Morgan Brown, and Susan Su, combined with my own personal insights.

The Growth Process

I’ll be going through the basics of the growth process and using an example with fake numbers to demonstrate each step in context.

1. Ask a Question

The first step in the growth process is to ask a question and justify it. Many questions can be formulated by identifying both the key output (metric you’re trying to grow) and seeing what inputs make that output grow.

For instance, Airbnb’s key output is nights booked, so they want to see what features or behaviors (inputs) increase the number of nights booked. Amazon would be items purchased. Facebook would be daily active users.

Example: Let’s say the Airbnb growth team assumes that people who add to wishlists tend to be retained long term and book more trips with Airbnb because they have interesting places saved to come back to.

The next step is justifying the assumption by diving into your growth models or analytics. In our example, one thing the Airbnb growth team could do is run a correlation analysis to see whether adding to a wishlist actually increases the number of nights booked within a year.

Growth model: a growth model is an estimated projection of inputs to outputs that is established by existing baselines. This is extremely helpful in identifying levers and figuring out where to focus your growth team’s energy. This is something I learned going through the Reforge program.

Here is an example growth model

Correlation analysis: correlation is the relationship between sets of variables, used to describe or predict information. What we’re looking for is the correlation coefficient, which measures the degree to which a set of variables is related (represented by “r”).

r = [n(∑xy) − (∑x)(∑y)] / (√[n(∑x²) − (∑x)²] × √[n(∑y²) − (∑y)²])

  • -1 to 0 = negative correlation
  • 0 to 1 = positive correlation
  • closer to 0 = no correlation

Fortunately, there are tools to automate the correlation analysis for us, so we don’t have to plug it into a calculator each time.
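For instance, numpy computes Pearson’s r in one call. Here’s a minimal sketch with hypothetical per-user data (both arrays are invented for illustration):

```python
import numpy as np

# Hypothetical per-user data: homes saved to a wishlist vs. nights booked
wishlist_saves = np.array([0, 2, 5, 1, 8, 3, 0, 6, 4, 7])
nights_booked = np.array([2, 1, 6, 3, 9, 2, 1, 5, 6, 8])

# Pearson correlation coefficient -- the "r" in the formula above
r = np.corrcoef(wishlist_saves, nights_booked)[0, 1]
print(f"r = {r:.2f}")  # about 0.9 for this data: a strong positive relationship
```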

Example: Back to the Airbnb example. After diving into the data, the Airbnb growth team determined there was a strong correlation between nights booked and adding to a wishlist, with an r of .79.

Framing the question: Adding to a wishlist is strongly correlated with nights booked. How can we increase the number of homes added to a wishlist per person?

2. Brainstorm an Idea

The second step in the process is to brainstorm ideas that you’ll base your hypotheses on. The question we just framed is our new output, and the ideas we brainstorm to answer it will be our inputs. We can make this our new output because we already determined that adding to a wishlist is positively correlated with nights booked.

(inputs) idea 1 | idea 2 | idea 3 = Increase # of saved homes in a wishlist per person (outputs)

A good way to come up with ideas is to see what other products outside your immediate space do already, go off natural psychological behavior, associate two ideas together, or even ask why people are doing the behavior in your product itself.

Let’s dive back into the Airbnb example:

The question we asked: How can we increase the number of saved homes in a wishlist per person?

The Airbnb growth team comes up with 4 different ideas:

  • Change the “star” saved icon to a “heart” icon.
  • Use existing customer data to send users a recommendation wishlist while they’re browsing the app or desktop.
  • Every time a user goes back to a listing more than two times, automatically add it to a wishlist.
  • Create a pop-up module prompting the user to add a listing to their wishlist if they stay on it for more than 30 seconds.

As you come up with ideas, you’ll set a hypothesis for each one. For the sake of brevity, we’ll choose the first idea to show how a hypothesis should be set.

Keep in mind that when you set a hypothesis, the justification might be a bit of a gut feeling in the beginning if you have no data to go off of; that’s why we test the hypothesis in the first place. You may even use qualitative input, like customer interviews, to make a justification.

Example: If successful, the number of homes added to a wishlist will increase by 30% if the “star” saved button is changed to a “heart,” because qualitatively a heart appeals to the emotional psychology of loving something, whereas a star is more arbitrary/logical.

Here is a template of how to set a hypothesis: If successful, I predict [metric you’re testing] will increase by [% or units of the metric you’re testing] because [initial assumptions].

3. Prioritize

Every company has limited resources, so testing all the experiments might not be a viable option. The next step in the growth process is to prioritize which of these ideas you want to experiment with. When we prioritize ideas, we choose based on the knowledge that each is a minimum viable test, not the whole feature built out.

Thinking about whether each test will work is just one part of prioritizing. We also have to determine whether the test is worth the effort, how big its impact could be, and our level of confidence in it. For this, we can use a framework that’ll help us determine which test we should prioritize.

This is what I’ve learned by combining Sean Ellis and Brian Balfour’s prioritization framework. With every idea you have for an experiment, there are three decision criteria to help you prioritize: impact, confidence and effort.

We ultimately score each idea on impact, confidence, and effort from 1 to 10 and take the average. Most of the time, we’ll test the ideas with the highest average score.

1. Impact or upside: if the experiment is successful, will it impact the North Star metric?

  • Figure out the reach (how many people this experiment will touch).
  • Estimate the variable’s impact (the probability that the metric we test will move the North Star metric).

2. Confidence: what is the probability that this experiment will be successful at moving the metric we’re testing? You can gauge this by how much domain experience you have with the particular idea.

  • Score 1–3 if it’s a completely new idea that you have barely any knowledge of.
  • Score 3–5 if it’s a subject you’re somewhat familiar with but not entirely confident in.
  • Score 6–10 if it’s an area you’ve experimented in before or have deep domain knowledge of.

3. Effort: how much time, energy, manpower, and money will it take to execute a minimum viable test? The higher the score, the easier the experiment is to execute.

For prioritization, you don’t have to be exact; estimating the scores is fine. As you conduct more experiments, your estimates will become more accurate over time.

Example: Let’s score one of the wishlist ideas. Idea #1: changing the “star” saved icon to a “heart” icon.

Impact: 8. Airbnb has a large user base, and they’ll roll this feature out to 20% of it for the test. From the growth team’s correlation analysis, they already justified that increasing the number of homes saved to a wishlist per user would increase the number of nights booked, so this idea gets a high upside score.

Confidence: 5. Airbnb has done many button A/B tests before, so the growth team knows that button changes don’t just offer a small marginal change on large features. However, they haven’t done many tests on this particular feature, so they’ll give it a medium score.

Ease: 8. Only two people, a designer and an engineer, are needed to execute this test. Both estimate it might take a day to complete, so it’ll be on the easier end.

Average score: 7.0

The growth team decides to test this experiment since it had the highest score out of all 4 ideas.
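In code, this prioritization is just an average and a sort. Here’s a minimal sketch, using the scores from the worked example for idea #1; the scores for the other three ideas are invented for illustration:

```python
# ICE-style scoring: impact, confidence, and ease, each from 1 to 10
ideas = {
    "Star icon -> heart icon":       {"impact": 8, "confidence": 5, "ease": 8},
    "Recommendation wishlist":       {"impact": 7, "confidence": 4, "ease": 4},
    "Auto-add after repeat visits":  {"impact": 6, "confidence": 3, "ease": 5},
    "Wishlist pop-up after 30 secs": {"impact": 5, "confidence": 4, "ease": 7},
}

def avg_score(scores):
    return sum(scores.values()) / len(scores)

# Rank ideas by average score and test the highest-scoring one first
for name, scores in sorted(ideas.items(), key=lambda kv: avg_score(kv[1]), reverse=True):
    print(f"{avg_score(scores):.1f}  {name}")
# Prints 7.0 for the star-to-heart idea, matching the worked example's average
```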

4. Experiment

The next part of the growth process is to execute, implement, and track the experiment. If you’re familiar with agile and product management, it’s essentially the same process. You’ll have growth sprints and set a cadence of a set number of experiments per sprint.

Each experiment should be designed to run for at least one week and should include a control group (assuming there is a large enough user base). However, some experiments will run over two weeks; you can read about why this happens in this Airbnb blog post on their experimentation process.
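How long “at least one week” actually needs to be depends on your traffic and the size of the lift you expect. Before launching, you can estimate the required sample size per variant; here’s a minimal sketch of the standard two-proportion formula (the 5% baseline and 6% target conversion rates are made up):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_baseline, p_expected, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect the lift at the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for a 95% confidence level
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_baseline - p_expected) ** 2)

# e.g., detecting a lift from a 5% to a 6% conversion rate
print(sample_size_per_variant(0.05, 0.06))  # 8155 visitors per variant
```

Dividing that number by your weekly traffic per variant tells you roughly how many weeks the test needs to run.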

At my company, UpKeep, we set our sprints for two weeks and use Jira to keep track of all our experiments. We attempt to run an experiment every week, because experimenting is about building the velocity to learn what will help you grow, whether the experiments succeed or not.

Some companies could run more than 700 experiments per week. Companies figure out their experiment cadence depending on how many resources they have, how mature the company is, and if there are any other similar experiments running that might influence yours.

A good rule of thumb is to do an experiment a week per engineer you have on your team in the early stages of a company; 1 engineer = 1 test/week, 2 engineers = 2 tests/week, etc. Later stage companies can automate a lot of their experiments, which helps them scale their experiment cadence 10x.

For tracking experiments, you can use Trello, Jira, Pipefy, Basecamp, etc. When a company becomes large enough, it will most likely build its own system to manage growth with extreme customization.

Airbnb’s custom dashboard:

5. Analyze

Analyzing the experiment is the most important part of the process, and documenting your analysis is what will guide growth in your organization moving forward. The first part of the analysis starts with looking at the hypothesis and diving into the data to see if the experiment actually moved the metric.

Looking back to our example, the Airbnb growth team ran the star-to-heart button experiment with a hypothesis that stated the experiment would increase the number of homes saved to a wishlist per user by 30 percent. After going through the analytics, the growth team sees that the number of homes saved to a wishlist had a 70 percent increase compared to the control group.

Before we call this a success, we have to ask why the experiment resulted in positive movement and what the potential reasons for this result were. After seeing which metrics were affected, we’ll check whether any confounding issues might have influenced the experiment. In most cases, there is at least one, which is why there should always be a discussion of the findings to sort out whether the conflicts actually affected the test.

Types of confounding issues:

  • Another experiment running on the same user set (both the control and experimental groups).
  • An outside event, such as a huge press release or a holiday, that could have affected the results.
  • Testing too many variables between the control group and the experimental group of users.
  • Not running the test long enough to see an accurate result.
  • A sample size that was too small. (Earlier-stage companies should work on growing and retaining their user base before starting to experiment.)

In our Airbnb example, the growth team discovered that a press release launched at the same time but determined that it didn’t affect the experiment.

Now we can call the experiment a success and start discussing why we think there was such a huge lift in the number of homes saved to a wishlist. The discussion will probably also investigate why there was such a large discrepancy compared to the hypothesis.

6. Systemize

We’re not done just yet; the final part of the growth process is to systemize. After the test has been determined a success or failure, you communicate the next steps when you log the results into your project management system.

If successful:

  • Determine how to double down on this experiment. (Do you release it to the whole user base, or conduct the same experiment with a larger sample size?)
  • See if this experiment provides more insight for other experiments, and readjust the ICE scores on other tickets to reprioritize ideas. Does it offer insight into other behaviors or funnels in other parts of the product?
  • Move it to the playbook.

If it failed:

  • Record learnings on how to run this test next time with another hypothesis.
  • See if this experiment provides more insight for other experiments, and readjust the ICE scores on other tickets to reprioritize ideas. Does it offer insight into other behaviors or funnels in other parts of the product?
  • Move it to Failed Experiments.

For the Airbnb example, we’ll wait to see whether the increase in the number of homes saved to a wishlist positively impacts the number of nights booked as we collect more data over the next few months.

The last step is to repeat this process over and over again.

This was an overview of how to run effective growth experiments. At first, the process can feel a little tedious, but it has massive payoffs as you continue to test. You’ll have more failures than successes, but every failure is an opportunity to learn more about your product and how people use it. More tests = more learning.

The Airbnb example was actually a real test the company ran back in 2011; you can read more about it in this co.design article.

If you have any insight, questions, or feedback, feel free to reach out. I’m always trying to learn better ways to conduct growth experiments myself.

Fun fact: For my paper towel experiment, Brawny turned out to be the better brand.

Arjun Patel is a Growth Marketing Manager for UpKeep Maintenance Management. Arjun has been the founder and CEO of a previously funded startup and is now helping change the way maintenance teams streamline their workflow by running growth initiatives at UpKeep. UpKeep is asset and maintenance management — done the right way. We take a mobile-first approach to traditional desktop-based enterprise software. We keep technicians out in the field working on the most pressing issues — saving valuable time and money. Tracking the costs of assets over their lifetime is now easier than ever. We keep businesses more efficient and streamline their workflow by eliminating paper work orders.


Daniel Burstein

Marketing Experimentation: Answers to marketers’ and entrepreneurs’ questions about marketing experiments

Here are answers to some chat questions from last week’s ChatGPT, CRO and AI: 40 Days to build a MECLABS SuperFunnel. I hope they help with your own marketing efforts. And feel free to join us on any Wednesday to get your marketing questions answered as well.

Am I understanding the message correctly that the main value at first isn’t in more conversions or $$$ but in deeper understanding of the customer mindset?

This questioner is asking about the value of marketing experimentation. And it reminds me of this great movie quote…

Alfred: Why do we fall, Master Wayne?

The (future) Batman: So we can learn to pick ourselves back up again.

Similarly, we might say…

Marketer: Why do we run experiments, Flint?

Flint (the real Batman) McGlaughlin: “The goal of a test is not to get a lift, but rather to get a learning.”

So when you see marketing experiments in articles or videos (and we are guilty of this as well), they usually focus on the conversion rate increase. It is a great way to get marketers’ attention. And of course we do want to get performance improvements from our tests.

But if you’re always getting performance improvements, you’re doing it wrong. Here’s why…

Marketing is essentially communicating value to the customer in the most efficient and effective way possible so they will want to take an action (from Customer Value: The 4 essential levels of value propositions ).

So if you’re always getting performance improvements, you’re probably not pushing the envelope hard enough. You’re probably not finding the most efficient and effective way; you’re only fixing the major problems in your funnel. Which, of course, is helpful as well.

In other words, don’t feel bad about getting a loss in your marketing experiment. Little Bruce Wayne didn’t become Batman by always doing the obvious, always playing it safe. He had to try new things, fall down from time to time, so he could learn how to pick himself back up.

While that immediate lift feels good, and you should get many if you keep at it, the long-term, sustainable business improvement comes from actually learning from those lifts and losses to do one of the hardest things any marketer, nay, any person can do – get into another human being’s head. We just happen to call those other human beings customers.

Which leads us to some questions about how to conduct marketing experiments…

Do we need a control number?

In the MEC300 LiveClass, we practiced using a calculator to determine if results from advertisement tests are statistically significant.

The specific tool we practiced with for test analysis was the AB+ Test Calculator by CXL.

I’m guessing this questioner may think the ‘control number’ comes from previous performance. And when we conducted pre-test analysis, we did use previous performance to help us plan (for more on pre-test planning and why you should calculate statistical significance for your advertising experiments, you can read Factors Affecting Marketing Experimentation: Statistical significance in marketing, calculating sample size for marketing tests, and more).

But once you’ve run an advertising experiment, your ‘control number’ – really two numbers, users or sessions and conversions – will be data from your ads’ or landing pages’ performance.

You may be testing your new ideas against an ad you have run previously by splitting your budget between the old and new ads. In this case, you would usually label the incumbent ad the control, and the new ad idea would be the treatment or variation.

If both ads are new, technically they would both be treatments or variations because you do not have any standard control that you are comparing against. For practical purposes in using the test analysis tool, it is usually easier to put the lower-performing ad’s numbers in the control, so you are dealing with a positive lift.

Remember, what you are doing with the test calculator is ensuring you have enough samples to provide a high likelihood that the difference in results you are seeing is not due to random chance. So for the sake of the calculator, it does not matter which results you put in the control section.

Labeling a version a ‘control’ is most helpful when actually analyzing the results: it reminds you which ad you had originally been running and what your hypothesis was for making a change.

Which brings us to what numbers you should put in the boxes in the test calculator…

Users or sessions, would that be landing page views? I’m running on Facebook.

In this specific test calculator, it asks for two numbers for the control and variation – ‘users or sessions’ and ‘conversions.’

What the calculator is basically asking for is – how many people saw it, and how many people acted on it – to get the conversion rate.

What you fill into these boxes will depend on the primary KPI for successes in the experiment (for more on primary KPI selection, you can read Marketing Experimentation: How to get real-world answers to questions about a company’s marketing efforts).

If your primary KPI is a conversion on a landing page, then yes, you could use landing page views or, even better, unique pageviews – conversion rate would be calculated by dividing the conversion actions (like a form fill or button click) by unique pageviews.

However, if your primary KPI is clickthrough on a Facebook ad, then the conversion rate would be calculated by dividing the ad’s clicks by its impressions.
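The arithmetic is the same in both cases; only the definitions of “saw it” and “acted on it” change. A tiny sketch with made-up numbers:

```python
# Landing-page KPI: conversions divided by unique pageviews (made-up counts)
lp_conversion_rate = 320 / 4_100   # form fills / unique pageviews

# Facebook ad KPI: clicks divided by impressions (made-up counts)
ad_ctr = 1_150 / 96_000            # ad clicks / ad impressions

print(f"{lp_conversion_rate:.2%}, {ad_ctr:.2%}")  # 7.80%, 1.20%
```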

Which brings us to the next question, since this tool allows you to add in a control and up to five variations of the control (so six total treatments)…

Can you confirm the definition of a variation really fast? Is it a change in copy/imagery or just different size of ad?

Remember what we are doing when we’re running a marketing experiment – we are trying to determine that there is a change in performance because of our change in messaging. For example, a headline about our convenient location works better than a headline about our user-friendly app, so we’ll focus our messaging about our convenient location.

When there are no validity threats and we just make that one change, we can be highly confident that the one change is the reason for the difference in results.

But when there are two changes – well, which change caused the difference in results?

For this reason, every change is a variation.

That said, when testing in a business context with a necessarily limited budget and the need for reasonably quick action, it could make sense to group variations.

So in the question asked, each ad size should be a variation, but you can group those into Headline A ads and Headline B ads.

Then you can see the difference in performance between the two headlines. But you also have the flexibility to get more granular and see if there are any differences among the sizes themselves. There shouldn’t be, right? But by having the flexibility to zoom in and see what’s going on, you might discover that small space ads for Headline B are performing worst. Why? Maybe Headline B works better overall, but it is longer than Headline A, and that makes the small space ads too cluttered.

Ad size is a change unrelated to the hypothesis. But for other changes, this is where a hypothesis helps guide your testing. Changing two unrelated things would result in multiple variations (two headlines and two images would create four variations). However, if your experimentation is guided by a hypothesis, all of the changes you make should tie into that hypothesis.

So if you were testing what color car is most likely to attract customers, and you tested a headline of “See our really nice red car” versus “See our really nice blue car,” it would make no sense to have a picture of a red car in both ads. In this case, if you didn’t change the image, you wouldn’t really be testing the hypothesis.

For a real-world example, see Experiment #1 (a classic test from MarketingExperiments) in No Unsupervised Thinking: How to increase conversions by guiding your audience. The team was testing a hypothesis that the original landing page had many objectives competing in an unorganized way that may have been creating friction. Testing this hypothesis necessitated making multiple changes, so they didn’t create a variation for each. However, when making a new variation would be informative (namely, how far should they go in reducing distractions), they created a new variation.

So there were three variations total. The control (original), treatment #1 which tested simplifying by making multiple changes, and treatment #2 which tested simplifying even further.

When we discuss testing, we usually talk about splitting traffic in half (or thirds, in the case above) and sending an equal amount of traffic to each variation to see how they perform. But what if your platform is fighting you on that…

One thing I’m noticing is that Google isn’t showing my ads evenly – very heavily skewed. If it continues, should I pause the high impression group to let the others have a go?

It really comes down to a business decision for you to make – how much transparency, risk, and reward are you after? Here are the factors to consider.

On the one hand, this could seem like a less risky approach. Google is probably using a statistical model (probably Bayesian) paired with artificial intelligence to skew towards your better-performing ad to make you happy – i.e., to keep you buying Google ads because you see they are working. This is similar to multi-armed bandit testing, a methodology that emphasizes the higher performers while a test is running. You can see an example in case study #1 in Understanding Customer Experience: 3 quick marketing case studies (including test results).

So you could view trusting Google to do this well as less risky. After all, you are testing in a business context (not a perfect academic environment for a peer-reviewed paper). If one ad is performing better, why pay money to get a worse-performing ad in front of people?

And you can still reach statistical significance on uneven sample sizes. The downside I can see is that Google is doing this in a black box, and you essentially just have to trust Google. It’s up to you how comfortable you are doing that.
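For the curious, the multi-armed bandit idea itself is easy to sketch. One common implementation is Thompson sampling: for each impression, draw a plausible click-through rate for each ad from a Beta distribution over its observed clicks and misses, and serve the ad with the highest draw. This is only an illustration of the concept with invented counts, not Google’s actual (black box) model:

```python
import random

# Hypothetical running totals of [clicks, non-clicks] for two ads;
# a real bandit would update these after every impression
stats = {"ad_a": [20, 180], "ad_b": [35, 165]}

def pick_ad():
    """Thompson sampling: sample a plausible CTR per ad, serve the best draw."""
    draws = {
        ad: random.betavariate(1 + clicks, 1 + misses)
        for ad, (clicks, misses) in stats.items()
    }
    return max(draws, key=draws.get)

# Over many impressions, the stronger ad gets served more and more often
shown = [pick_ad() for _ in range(1_000)]
print(shown.count("ad_b") / len(shown))  # typically well above 0.5
```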

When you go to validate your test, you could get a Sample Ratio Mismatch warning in a test calculator, warning you that you don’t have a 50/50 split. But read the warning carefully (my emphasis added), “SRM-alert: if you intended to have a 50% / 50% split, we measured a possible Sample Ratio Mismatch (SRM). Please check your traffic distribution.”

This warning is likely meant to warn you of a difference that isn’t obvious to the naked eye if you intended to run a 50/50 split. This could be due to validity threats like instrumentation effect and selection effect. Let’s say your splitter wasn’t working properly, and some notable social media accounts shared a landing page link but it only went to one of the treatments. That could threaten the validity of the experiment. You are no longer randomly splitting the traffic.
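The SRM check itself is straightforward: a chi-square test comparing the observed traffic split against the split you intended. Here’s a minimal sketch using scipy, with made-up visitor counts:

```python
from scipy.stats import chisquare

# Observed visitors per arm vs. the intended 50/50 split (counts are made up)
observed = [5_210, 4_790]
expected = [sum(observed) / 2] * 2

stat, p_value = chisquare(observed, expected)
print(f"p = {p_value:.5f}")  # a tiny p-value (here ~0.00003) flags a likely SRM
```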

On the flip side, if you want more control, you could evenly split the impressions or traffic, use a frequentist (z-test) statistical methodology, and choose the winner after the test has run. That way, you aren’t trusting that Google is picking the right winner or giving up on an initially underperforming treatment too soon.

I can’t personally say that one approach is definitely right or wrong, it comes down to what you consider more risky, and how much control and transparency you would like to have.

And if you would like to get deeper into these different statistical models, you can read A/B Testing: Why do different sample size calculators and testing platforms produce different estimates of statistical significance?

Flint, do I remember right? 4 words to action in the headline?

This question is a nice reminder that we’ve been answering questions about testing methodology – the infrastructure that helps you get reliable data – but you still need to craft powerful treatments for your tests.

The teaching from Flint McGlaughlin that you are referring to is actually about four words to value in the headline, not action. Four words to value. To give you ideas for headline testing, you can watch Flint teach this concept in the free FastClass – Effective Headlines: How to write the first 4 words for maximum conversion.

Hi, how would one gain access to this? Looks fascinating.

How can I join a cohort?

How do you join a cohort on 4/26?

Are the classes in the cohort live or self-paced? I’m based in Australia so there’s a big-time difference.

Oh, I would be very interested in joining that cohort. Do I email and see if I can join it?

At the end of the LiveClass, we stayed on the Zoom and talked directly to the attendees who had questions about joining the MECLABS SuperFunnel Cohort. If you are thinking of joining, or just looking for a few good takeaways for your own marketing, RSVP now for a Wednesday LiveClass.

Here are some quick excerpt videos to give you an idea of what you can experience in a LiveClass:

Single variable testing vs variable cluster testing

What is an acceptable level of statistical confidence?

Paul Good talks about the need for the MECLABS cohort

About Daniel Burstein

Daniel Burstein, Senior Director of Editorial Content, MECLABS. Daniel oversees all content and marketing coming from the MarketingExperiments and MarketingSherpa brands while helping to shape the editorial direction for MECLABS – digging for actionable information while serving as an advocate for the audience. Daniel is also a speaker and moderator at live events and on webinars. Previously, he was the main writer powering the MarketingExperiments publishing engine – from Web clinics to Research Journals to the blog. Prior to joining the team, Daniel was Vice President of MindPulse Communications – a boutique communications consultancy specializing in IT clients such as IBM, VMware, and BEA Systems. Daniel has 18 years of experience in copywriting, editing, internal communications, sales enablement and field marketing communications.



How to plan a successful virtual event: Tips from industry experts


Bursting onto the events marketing scene in a big way in 2020, virtual events are now the de facto stage for connecting with your audience.

Why? Because they’re effective. According to a TechReport survey, 93.2% of respondents considered their virtual event a success in 2023.

With so many online events competing for attention, how do you make yours stand out? Whether you’re hosting a small workshop or a global conference, the secret to success lies in mastering the details. From choosing the right platform to engaging your audience in real time, the difference between a forgettable event and one that leaves a lasting impact is all in the planning.

Here are some expert tips to transform your next event into an unforgettable experience.

Pre-event planning

Define your objectives and audience

Before diving into the logistics, it’s crucial to understand your event’s goals clearly. Are you looking to boost brand awareness, provide education, or network with industry peers? Once you define your objectives, identify your target audience. Knowing exactly who you’re speaking to will guide every decision you make—from content and presentation style to the platform you’ll use.

Select the right platform and tools

One of the most important steps is choosing the right virtual event platform. When selecting a platform, consider your audience size, required engagement tools (polls, live chats, Q&A sessions), and the technical support provided.

Look for a platform that offers powerful tools for seamless event execution, such as scheduling, live engagement features, and analytics. Ideally, the platform should support multiple event types and let you customize content, navigation, and interactions to match your branding.

Create a detailed event plan and timeline

Every successful event starts with a solid plan. Create a timeline that includes key milestones such as content development, speaker rehearsals, and marketing promotions. Allocate your budget efficiently, considering both technology needs and promotional efforts. Ensure everyone involved knows their responsibilities and deadlines.
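If you keep your plan in a structured format, even a tiny script can flag slipping milestones. Here is a minimal, purely illustrative sketch in plain Python; the milestone names, owners, and dates are hypothetical placeholders, not a feature of any particular events platform:

```python
# A minimal sketch of tracking event milestones and deadlines.
# All names, owners, and dates below are hypothetical examples.
from dataclasses import dataclass
from datetime import date

@dataclass
class Milestone:
    name: str
    owner: str  # who is responsible
    due: date   # deadline

def overdue(milestones: list[Milestone], today: date) -> list[Milestone]:
    """Return the milestones whose deadline has already passed."""
    return [m for m in milestones if m.due < today]

plan = [
    Milestone("Finalize speaker lineup", "Events lead", date(2024, 8, 12)),
    Milestone("Publish registration page", "Marketing", date(2024, 8, 26)),
    Milestone("Full technical rehearsal", "Production", date(2024, 9, 16)),
]

for m in overdue(plan, date.today()):
    print(f"OVERDUE: {m.name} (owner: {m.owner}, due {m.due})")
```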

Technical setup and rehearsal


Ensure reliable technology and connectivity

Technology can make or break a virtual event. Select reliable hardware (microphones, cameras, and computers) to avoid glitches and ensure a stable internet connection. Always have backup equipment on hand in case something goes wrong. Consider conducting a thorough test of your internet speed and streaming capabilities to ensure a smooth event day.
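As part of that test, you can script a quick bandwidth sanity check. The sketch below assumes the third-party speedtest-cli Python package (`pip install speedtest-cli`), and the 10 Mbps upload threshold is an illustrative assumption rather than an official streaming requirement:

```python
# A rough pre-event bandwidth check, assuming the third-party
# speedtest-cli package. Treat the threshold as a hypothetical
# rule of thumb, not a guarantee of streaming quality.
import speedtest

MIN_UPLOAD_MBPS = 10  # illustrative threshold for stable HD streaming

st = speedtest.Speedtest()
st.get_best_server()
download_mbps = st.download() / 1_000_000  # bits/sec -> Mbps
upload_mbps = st.upload() / 1_000_000

print(f"Download: {download_mbps:.1f} Mbps, Upload: {upload_mbps:.1f} Mbps")
if upload_mbps < MIN_UPLOAD_MBPS:
    print("Warning: upload bandwidth may be too low for smooth streaming.")
```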

Conduct dry runs and technical rehearsals

No matter how well-prepared you think you are, conducting rehearsals is essential. A dry run will help you troubleshoot technical issues and let your speakers become comfortable with the platform. This is also an excellent opportunity to practice transitions between segments, test multimedia elements, and familiarize your team with the event’s flow.

Engaging your audience during the event

Incorporate interactive features and engagement strategies

Engagement is the cornerstone of any successful virtual event. Use interactive features such as live polls, Q&A sessions, and breakout rooms to keep attendees engaged and active throughout the event. These features encourage real-time participation, leading to more meaningful experiences for attendees.

Leverage multimedia and visual content

Utilizing a variety of multimedia content is a great way to hold your audience’s attention. Incorporate videos, dynamic presentations, and eye-catching visuals to keep the event lively. For example, short video clips between presentations can break up the monotony and keep attendees engaged.

Post-event follow-up and analysis

Gather feedback and analyze data

The event doesn’t end once the virtual doors close. Collecting feedback through post-event surveys is crucial for understanding what worked well and where improvements can be made. Analyze event data such as attendance numbers, engagement levels, and platform performance to gauge overall success and areas for improvement.
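Even simple arithmetic over the exported numbers turns raw attendance and survey data into headline metrics. Here is a minimal sketch using only Python’s standard library; the counts and scores are made-up sample values:

```python
# A minimal post-event analysis sketch. The registration count,
# attendee count, and survey scores are hypothetical sample data.
from statistics import mean

registrations = 480
attendees = 312
survey_scores = [5, 4, 4, 3, 5, 4]  # e.g., 1-5 satisfaction ratings

attendance_rate = attendees / registrations
avg_satisfaction = mean(survey_scores)

print(f"Attendance rate: {attendance_rate:.0%}")
print(f"Average satisfaction: {avg_satisfaction:.1f} / 5")
```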

Build long-term relationships

Successful virtual events are more than just one-off experiences. Keep the conversation going by maintaining relationships with your attendees. Sending post-event follow-up emails, sharing additional content, or providing exclusive access to future events can help build long-term engagement and brand loyalty.

Achieving success with expert planning and RingCentral Events

Virtual events require meticulous planning, reliable technology, and engaging content to be truly effective. By focusing on pre-event planning, rehearsals, and audience interaction, you’ll create an experience that resonates with attendees.

Ready to plan your next virtual event with confidence? Discover how RingCentral Events can streamline your planning process and enhance audience engagement. Schedule a free demo today and unlock the potential of your virtual events.

Originally published Sep 23, 2024
