

Putting our Audience survey panel quality to the test

Here’s how our research team ensures your surveys receive thoughtful, consistent responses.

From December 2015 through February 2016, we sent the same monthly survey to respondents in our SurveyMonkey Audience panel. Each survey ran for 1 week, included 25 questions, and received more than 1,000 responses.

We compared the results for each wave, weighted for age and gender, to see whether we got meaningful data back from our respondents. And we built our survey to test for 3 respondent behaviors that can negatively impact data quality:

  • Straight lining: Behavior in which respondents select the same answer over and over again in a set of matrix questions
  • Open-end response validity: Whether respondents write invalid responses, such as gibberish, foul language, or otherwise unhelpful comments
  • Trap question: Completion of a picture-identification task, ensuring the respondent is paying attention
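The first of these checks, straight lining, can be sketched in a few lines of code. The example below is a hypothetical illustration of how such a check might work on raw matrix answers; SurveyMonkey's actual quality checks are internal, and the data shapes here are invented.

```python
def is_straightlining(matrix_answers):
    """Flag a respondent who gave the identical answer to every
    row of a matrix (grid) question."""
    return len(matrix_answers) > 1 and len(set(matrix_answers)) == 1

# Hypothetical respondents: each "matrix" holds one answer per grid row.
responses = [
    {"id": 1, "matrix": [3, 3, 3, 3, 3]},  # same choice on every row
    {"id": 2, "matrix": [4, 2, 5, 3, 4]},  # differentiated answers
]

flagged = [r["id"] for r in responses if is_straightlining(r["matrix"])]
print(flagged)  # -> [1]
```

A respondent who varies answers across rows passes; one who picks the same column every time is flagged for review.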

In addition to monitoring data quality, we wanted to make sure results were consistent from one wave to the next. Why? Even if members cycle in and out of Audience, we always want to be sure we maintain a panel that’s reliably representative of the general population.

Some of the questions we asked were included specifically because their responses should not change much over time, such as questions around:

  • Physical characteristics: Different physical characteristics (e.g., green eyes, red hair, right-handedness)
  • Purchasing and financial behaviors: Whether respondents own and use debit and credit cards, and how often they shop for certain items
  • Opinions: What respondents think about the morality of alcohol use and homosexuality
  • Demographics

We also checked to make sure the amount of time respondents took to complete the survey did not change significantly from wave to wave.

As we mentioned before, to determine whether our panel yields quality data, we measured 3 key metrics: straight lining, open-ended responses, and a trap (picture-identification) question. Here’s what we found:

  • 97% didn’t straightline responses
  • 97% gave valid open-ended responses
  • 94% passed the trap question

As you can see, 97% of the respondents in the U.S. passed our straight lining test. This means almost all respondents carefully differentiated their answers across the rows of a matrix (grid) question. Similarly, only about 2–3% of the open-ended responses were invalid or gibberish, while the large majority of respondents provided valid, meaningful answers.

We also included a trap question in the survey to test whether respondents paid close attention, and found that 94% of respondents passed this quality check as well. The question appeared more than halfway through the survey, suggesting that most respondents were still reading instructions carefully and picking the correct answer as they neared the end of the survey.

Next, we examined whether the 3 surveys over the course of 3 months provided consistent findings. We tested for this by asking questions whose answers should stay proportionally stable over time: for example, a sample should always include roughly the same percentage of people who say they have green eyes.
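One standard way to check that kind of wave-to-wave stability is a two-proportion z-test. The sketch below uses only Python's standard library, and the counts are made up for illustration; the article only tells us each wave received "more than 1,000 responses," and the exact statistical test the research team used isn't specified.

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided z-test for equal proportions across two survey waves.
    x = number of respondents choosing the option, n = wave size."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical "I have green eyes" counts in two consecutive waves.
z, p_value = two_prop_ztest(180, 1000, 195, 1050)
consistent = p_value > 0.05  # no significant change between waves
```

With similar proportions in both waves, the test fails to reject equality, which is exactly the "no significant change" outcome the check marks in the table below represent.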

In the table below, we’ve included responses to these questions. For one question, respondents could “select all that apply,” so we show each of those response options separately. A check mark indicates no significant change over the 3 waves.

Most of the questions show a consistent pattern over time, suggesting the data collected from Audience are reliable and trustworthy. Of the 23 indicators we tracked, 20 did not change significantly from one wave to the next.

Survey question                                   Consistency
I have green eyes                                 ✓
I have naturally red hair                         ✓
My birthday is in February                        ✓
I have a twin                                     ✓
I am right-handed                                 ✓
I am less than 6 feet tall                        ✓
I wear corrective lenses (glasses or contacts)    ✓
None of the above                                 ✗
Own a credit card                                 ✓
Own a debit card                                  ✓
Used a credit card in the past year               ✓
Used a debit card in the past year                ✓
Moral acceptability of alcohol use                ✗
Moral acceptability of homosexuality              ✓
Picture verification question                     ✓
Education level                                   ✓
Race/ethnicity                                    ✓
Income                                            ✓
Motivation for taking the survey                  ✓
Rating of survey quality                          ✓
Open-ended question                               ✓
Time to complete                                  ✗
Device completed on                               ✓

Quality: The 3 data quality indicators we examined all indicate SurveyMonkey Audience respondents provide thoughtful, high-quality responses. Only very small percentages of respondents failed our data quality checks.

Reliability: Over the 3 survey waves we ran, our results did not change substantially, indicating SurveyMonkey Audience provides consistent survey results representative of your target demographic regardless of when you’re conducting your survey.

This means you can survey away with confidence knowing you’ll get results you can trust! And remember: Follow Audience’s survey design guidelines to increase the likelihood of receiving thoughtful answers.

Get reliable feedback from qualified people in minutes.