
A/B testing calculator

Are your results statistically significant? Calculate statistical significance with our calculator.

[Interactive A/B testing calculator. Example inputs: variant A conversion rate 1.00%; variant B conversion rate 1.14%. Example result: variant B’s conversion rate (1.14%) was 14% higher than variant A’s conversion rate (1.00%); you can be 95% confident that variant B will perform better than variant A. p-value: 0.0157.]

Statistical significance is important when running A/B tests because it tells you whether the difference you observe is likely real rather than the result of random chance.

Gather quick answers using the SurveyMonkey A/B testing calculator above.

A/B testing, or split testing, compares the performance of two versions—such as a product concept or ad creative—to identify which variant is more appealing to your target audience.

Researchers, CX professionals, and marketing experts use A/B testing to test small changes, like a new website button or homepage design. It provides direct feedback and data to guide decisions on which variant to choose. 

In A/B tests, statistical significance measures the likelihood that the difference between the control and test versions is genuine and not due to error or random chance.

For example, if your test reaches significance at the 95% confidence level, you can be 95% confident that the observed difference is genuine rather than random noise.

In experiments, statistical significance tells you whether a change genuinely affected your business’s conversion rates. In surveys, it gives you confidence that the differences you see in your results are real.

For example, if you asked whether people preferred ad concept A or ad concept B in a survey, you’d want to ensure the difference in their results was statistically significant before deciding which ad concept to use.
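As an illustration, here is a minimal Python sketch of one way to check that, using SciPy’s exact binomial test; the respondent counts are hypothetical, and SurveyMonkey’s own analysis tools may use a different method.

```python
from scipy.stats import binomtest

# Hypothetical survey results (illustrative numbers only)
prefer_a = 230  # respondents who chose ad concept A
prefer_b = 270  # respondents who chose ad concept B

# Null hypothesis: respondents are equally split between the concepts (p = 0.5)
result = binomtest(prefer_b, prefer_a + prefer_b, p=0.5)

print(f"p-value = {result.pvalue:.3f}")  # roughly 0.08: not significant at 0.05
```

Because that p-value sits above the usual 0.05 threshold, you would not yet commit to either concept on this evidence alone.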

Let us do the math for you. Get automated statistical significance with an Advantage plan. See pricing.

First, you must form a hypothesis. For any experiment, there is a null hypothesis, which states there’s no relationship between the two things you’re comparing, and an alternative hypothesis.

An alternative hypothesis typically tries to prove that a relationship exists and supports the statement you’re trying to make. 

For example, if you are doing conversion rate A/B testing, your hypotheses may be:

  • Null hypothesis (H₀): Adding a new button to the webpage does not affect conversion rates.
  • Alternative hypothesis (H₁): Adding a new button to the webpage increases conversion rates.

After formulating null and alternative hypotheses, you’ll evaluate them with two statistics: a z-score and a p-value.

A z-score measures how far your observed result falls from what the null hypothesis predicts, expressed in standard errors. A p-value is the probability of seeing a result at least as extreme as yours if the null hypothesis were true; the smaller the p-value, the stronger the evidence against the null hypothesis.

Next, decide whether your test will be one-sided or two-sided (sometimes called one-tailed or two-tailed). A one-sided test assumes the effect runs in a single direction, while a two-sided test allows for the possibility that the change could move your results in either direction, as the sketch after the list below shows.

For instance, in the conversion rate A/B testing example, your test could be:

  • One-sided: Assumes the effect will be in one direction (e.g., an increase in conversion rates).
  • Two-sided: Assumes the effect could be in either direction (e.g., an increase or decrease in conversion rates).
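To make the distinction concrete, here is a minimal Python sketch using SciPy’s standard normal distribution; the z-score is just a placeholder value (how to compute it from your data comes later in this walkthrough).

```python
from scipy.stats import norm

z = 2.15  # placeholder z-score; see the worked example below

# One-sided: only counts evidence of an effect in one direction
p_one_sided = norm.sf(z)           # ≈ 0.0158

# Two-sided: counts evidence of an effect in either direction
p_two_sided = 2 * norm.sf(abs(z))  # ≈ 0.0316

print(f"one-sided p = {p_one_sided:.4f}, two-sided p = {p_two_sided:.4f}")
```

Note that the two-sided p-value is double the one-sided one, which is why two-sided tests are the more conservative choice.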

Next, you will collect the results from your A/B test, including the relevant metrics for both the control (A) and test (B) versions. 

In our example, the results of the A/B test could be: 

  • Variant A: Out of 50,000 visitors, 500 users converted. Conversion rate of 1.00%.
  • Variant B: Out of 50,000 visitors, 570 users converted. Conversion rate of 1.14%.

Then, you will calculate the z-score, which measures how far the observed results are from the null hypothesis, to determine whether the difference between A and B is statistically significant.

Additionally, you will calculate the p-value, which indicates the probability that the observed difference is due to random chance. A smaller p-value suggests stronger evidence against the null hypothesis. 

In our example:

  • z-score: approximately 2.15
  • p-value: 0.0157 (one-tailed)
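To reproduce these numbers, here is a minimal sketch of a pooled two-proportion z-test in Python, one standard way to run this calculation (the calculator above may differ slightly in its exact method):

```python
from math import sqrt
from scipy.stats import norm

# Observed results from the example above
visitors_a, conversions_a = 50_000, 500  # variant A: 1.00% conversion
visitors_b, conversions_b = 50_000, 570  # variant B: 1.14% conversion

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Pool the rates under the null hypothesis that A and B convert equally
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
standard_error = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))

z = (rate_b - rate_a) / standard_error  # ≈ 2.15
p_one_tailed = norm.sf(z)               # ≈ 0.0157

print(f"z = {z:.2f}, one-tailed p = {p_one_tailed:.4f}")
```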

To determine statistical significance, set a significance level (alpha). This is commonly set at 0.05 (5%), representing the acceptable risk level for incorrectly rejecting the null hypothesis.

Next, compare your p-value to the alpha level. If the p-value is less than the alpha level, reject the null hypothesis and conclude the difference is statistically significant. 

In our example, the p-value (0.0157) is less than the alpha level (0.05), meaning the observed 14% relative difference is statistically significant.
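In code, that comparison is a single line; a sketch continuing the example above:

```python
alpha = 0.05           # significance level chosen before the test
p_one_tailed = 0.0157  # p-value from the worked example above

if p_one_tailed < alpha:
    print("Statistically significant: reject the null hypothesis.")
else:
    print("Not significant: fail to reject the null hypothesis.")
```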

Now, it’s time to interpret the results. If you receive significant results, it indicates that the observed difference is unlikely due to chance, providing evidence supporting the alternative hypothesis. Non-significant results indicate insufficient evidence to reject the null hypothesis, meaning the observed difference could be due to random variations.

For the most efficient process, use calculation tools such as:

  • Calculator: Utilize the A/B testing calculator at the top of the page for quick and accurate results.
  • Statistical software: For more complex analyses, consider using statistical modeling software.

In summary, statistical significance validates your A/B testing results and gives you a sound basis for making informed decisions from them.

Check out the calculator at the top of the page to automatically calculate the significance of your survey results.


Discover our toolkits, designed to help you leverage feedback in your role or industry.


Enhance your survey response rates with 20 free email templates. Engage your audience and gather valuable insights with these customizable options!


Leverage our p-value calculator to find your p-value. Plus, learn how to calculate p-value and how to interpret p-values with our step-by-step guide.


Invite survey collaborators, with or without a SurveyMonkey account, to review surveys for better collaboration.

Try sending a survey to your customers to find out what they’re looking for.