If you must send out a survey, avoid these common mistakes
You might suspect I’m not the biggest fan of surveys, and I certainly have my criticisms of them as a tool. My main gripe is that I see them used in completely the wrong way, with the results then used to justify major decisions. We’re building our houses on sand!
Surveys are a great tool when we want to explore a simple topic, with discrete and simple answers, from a large(ish) group of participants. For example, I might want to understand which beverages my colleagues in the office prefer to drink, and use those results to decide which brands we should buy in.
Surveys are not a good tool when we’re exploring hypotheticals — like asking people what they would do, or would buy, or how much they would pay. People are terrible at predicting their own behaviour, and they have a tendency to tell you what they think you want to hear.
If you want to hear more about the risks and limitations of conducting a survey, Erika Hall covers them well here.
Surveys can be useful to gather feedback, identify areas to improve, or measure the positive or negative impact of something, but they must be well written and delivered strategically to give a reliable result. I know many of you will be planning a supporter survey, an impact assessment, or an employee feedback survey right about now.
With that in mind, here are my top 6 tips for improving surveys.
1) Ask people things they know the answer to
Don’t require them to guess, or predict, or really even think too hard. Try to root your questions in their experience, so that all we’re asking them to do is remember and then tell us. For example, instead of asking people “What is the most you would pay for a coffee?”, it is more reliable to ask “What is the most you remember paying for a coffee?”
If you’re asking for some form of feedback, give respondents the choice to complete only the fields that apply to them. For example, if you’re asking people to share positive experiences and negative experiences, don’t mandate that they fill in both fields. That forces people to make things up, or to enter something they wouldn’t otherwise have felt strongly about.
2) Open questions to explore, and closed questions to quantify
Are you trying to understand something, or to measure something? If you want to understand how a participant feels about something, or their mental models, attitudes or beliefs, then you need to use open questions. These often begin with ‘How’ or ‘Why’, but sometimes those terms can be a little direct for participants and end up limiting answers. I recommend a version of the TEDW questions:
Tell us about…
Explain…
Describe…
Walk us through…
For example, “Tell us about the things you consider when choosing a coffee”, or “Walk us through how you pick your brand of coffee beans”.
If you’re exploring how respondents feel about your product or service, these open questions are generally a better choice than something like “On a scale of 1–10, how much do you like our coffee?”. What can you really do with answers to that question? It’s only useful as a measure over time. You’d learn more from having the participant talk about what they like and dislike about your coffee, or about drinking coffee in general.
Closed questions give us a means to measure and compare. They’re generally simple, yes/no or numerical answers, and they are best used when you want to measure something across your audience base e.g. the proportion of coffee drinkers, or the average amount they pay for a coffee.
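To make the contrast concrete, here’s a minimal sketch in Python of the kind of quantitative summary closed questions support. The response data and field names are entirely made up for illustration; none of this comes from a real survey tool.

```python
# Hypothetical responses to two closed questions: "Do you drink coffee?"
# and "How much did you last pay for a coffee?" (numeric, optional).
responses = [
    {"drinks_coffee": True, "last_paid_gbp": 4.50},
    {"drinks_coffee": True, "last_paid_gbp": 5.00},
    {"drinks_coffee": False, "last_paid_gbp": None},
    {"drinks_coffee": True, "last_paid_gbp": 4.00},
]

# Proportion of coffee drinkers across the audience base.
drinkers = [r for r in responses if r["drinks_coffee"]]
proportion = len(drinkers) / len(responses)

# Average amount paid, among those who gave a figure.
prices = [r["last_paid_gbp"] for r in drinkers if r["last_paid_gbp"] is not None]
average_price = sum(prices) / len(prices)

print(f"{proportion:.0%} drink coffee, paying £{average_price:.2f} on average")
```

Notice that an open question like “how much do you like our coffee?” gives you nothing you could summarise this way, while these two closed questions do.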
3) Only speak to your target audience
You should know exactly who you want to talk to, and be able to define your audience so that you can recruit and screen for them. Imagine I’m investigating how much London-based office workers are happy to regularly pay for a coffee. I might define my audience as:
Lives in London
Works at least 1 day per week in an office
Drinks at least 1 coffee most days
I’ve sent out the survey through our company social pages and to our email database, but I haven’t had many responses.
I could get more by asking my colleagues — but they work for a coffee company, so that might bias their opinion. They probably value coffee more than the average Joe. I could ask my friends — they all fit the criteria, but because they’re my friends they might answer based on what they think I want to hear, to help me out. My family would fill it out, but they aren’t big coffee drinkers and they live up North and think London prices are bonkers.
I understand the temptation to share with friends, family members and other connections. But not only is this data useless to you, it’s actively harmful: it contaminates your real responses with false data.
Imagine we get 10 responses from our actual target audience, who are happy to regularly pay around £4–6 for their daily coffee. My colleagues might have been happier to pay around £5–7, and my family might have said £2–4. If I included responses from either group it would falsely skew my price range and average.
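As a rough illustration of that skew, here’s a quick sketch using hypothetical willingness-to-pay figures in those ranges. The individual numbers are invented for the example; only the £4–6, £5–7 and £2–4 ranges come from the scenario above.

```python
# Hypothetical willingness-to-pay figures (£) for each group.
target_audience = [4.0, 4.5, 5.0, 5.5, 6.0, 4.0, 5.0, 6.0, 4.5, 5.5]  # ~£4-6
colleagues      = [5.0, 6.0, 7.0, 6.5, 5.5]                           # ~£5-7
family          = [2.0, 3.0, 4.0, 2.5]                                # ~£2-4

def mean(xs):
    return sum(xs) / len(xs)

print(f"Target audience alone: £{mean(target_audience):.2f}")           # £5.00
print(f"With colleagues added: £{mean(target_audience + colleagues):.2f}")  # £5.33
print(f"With family added:     £{mean(target_audience + family):.2f}")      # £4.39
```

The average drifts up or down depending on who contaminated the pool, even though the true target-audience figure hasn’t changed at all.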
This applies equally to qualitative questions: we can’t trust the sentiments, emerging themes or feedback of respondents who are not our audience and who have a vested interest in the outcome of the survey. If our target audience feel pretty ‘meh’ about our product, but there are 5 responses saying it’s amazing, that’s going to skew our perception and we’re going to think we’ve got a base of superfans. If those fans were all our family members, we’re going to be in trouble.
This is a challenge for all user researchers. It is usually hard to find unbiased people from your target group who will complete your survey for free. People just won’t.
4) Give people a reason to complete your survey
An important and highly contentious point. Desperate all-staff emails or pleading social media shares are a shoddy tool for engaging participants. The sad fact is, nobody completes a survey without a reason.
Have the reason for completing your survey be that the participant gets something out of it, too! Offering an incentive values your participants’ time and gives them a less biased reason to participate. For a <20 minute survey, I recommend a £10 Amazon voucher or similar. If your budget is small, you can enter participants into a draw for one cash prize. If you can’t offer a monetary incentive, what other value can you offer? A discount, offer or exchange of services? Free merch? A limerick written in their honour?
When offering an incentive, it’s important to be clear that participants will receive it regardless of their responses (criticism is encouraged just as much as praise!), and if you’re sharing publicly rather than in a closed group, I recommend you screen participants…
5) Screen your participants
Interpreting your results is a fragile and delicate art, and it can be led horribly astray if your participant pool includes folks outside your target audience or, at worst, malicious actors. For example, if I’m gathering information to help me price a new breast pump offering, responses from folk who have not and would not breastfeed are going to muddy the results. If I shared this survey on Twitter, or another social platform, I could also attract the attention of trolls and end up bombarded with unhelpful and antagonistic responses. Equally, if I’m offering an incentive for participation, some individuals might falsely represent themselves in order to receive it.
One of the best ways to ensure our data is clean and usable is to screen out participants outside our target audience.
When writing your screening questions, I recommend you try to disguise the characteristics you’re recruiting for. For example, in my breast pump pricing screener I’m looking to recruit mothers who breastfed or expressed milk. I could simply ask ‘Did you breastfeed your child?’, but that’s a little on the nose for a screener, so instead I might ask:
How did you most often feed your baby?
- bottle-fed using formula
- bottle-fed using expressed milk
- breast-fed
And screen out folk that selected formula (see the sketch below). If you want to learn more about writing screeners and how they work, SurveyMonkey have a handy guide.
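If your survey tool lets you export screener answers, the screen-out rule is just a filter applied before anyone sees the full survey. Here’s a minimal sketch, with hypothetical field names and response text; adapt it to whatever your tool actually exports.

```python
# Hypothetical screener export; "feeding_method" holds the answer to the
# disguised multiple-choice question above.
candidates = [
    {"email": "a@example.com", "feeding_method": "bottle-fed using formula"},
    {"email": "b@example.com", "feeding_method": "breast-fed"},
    {"email": "c@example.com", "feeding_method": "bottle-fed using expressed milk"},
]

# Screen in anyone who breastfed or expressed milk; formula-only is screened out.
screened_in = [
    c for c in candidates
    if c["feeding_method"] in ("breast-fed", "bottle-fed using expressed milk")
]

for c in screened_in:
    print("Invite to full survey:", c["email"])
```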
6) Test it!
Like all aspects of a good design process, user research should be iterative. Send out your survey to a small number of people, and check what kind of responses you’re getting. Are people giving you the kind of answers you can actually use? Are they answering your open-text questions? Are they leaving fields incomplete, or giving minimal answers? Are they dropping off entirely?
If people are not giving the kind of responses you need for your analysis, you can go back and refine your questions, or consider whether you need to improve your screening or recruitment. Then test again!
When you’re happy that the responses you get are giving you what you need, you can scale up recruitment.
If you’d like to learn more about writing surveys, and the alternative methods you can use to capture audience insight, you can learn directly from me on our Doing Good User Research course.