A good survey works like a therapy session. You don’t begin by asking for deep truths; you guide the person gently through context, emotion, and interpretation. When done in the right sequence, your questions help people articulate thoughts they didn’t even realize they had.

Most UX surveys fall short not because users hold back, but because the design doesn’t help them get there. They capture behavior and preferences but often miss the emotional drivers, unmet expectations, and mental models behind them. In cognitive psychology, we understand that thoughts and feelings exist at different levels. Some answers come automatically, while others require reflection and reconstruction. If a survey jumps straight to asking why someone was frustrated, without first helping them recall the situation or how it felt, it skips essential cognitive steps. This often leads to vague or inconsistent data.

When I design surveys, I use a layered approach grounded in models like Levels of Processing, schema activation, and emotional salience. It starts with simple, context-setting questions like “Which feature did you use most recently?” or “How often do you use this tool in a typical week?” These may seem basic, but they activate memory networks and help situate the participant in the experience. Visual prompts or brief scenarios can support this further.

Once context is active, I move into emotional or evaluative questions, still gently, asking things like “How confident did you feel?” or “Was anything more difficult than expected?” These help surface emotional traces tied to memory. Using sliders or response ranges allows participants to express subtle variations in emotional intensity, which matters because emotion often turns small usability issues into lasting negative impressions.

After emotional recall, we move into the interpretive layer, where users start making sense of what happened and why. I ask questions like “What did you expect to happen next?” or “Did the interface behave the way you assumed it would?” to uncover the mental models guiding their decisions. At this stage, responses become more thoughtful and reflective. While we sometimes use AI-powered sentiment analysis to identify patterns in open-ended responses, the real value comes from the survey’s structure, not the tool.

Only after guiding users through context, emotion, and interpretation do we include satisfaction ratings, prioritization tasks, or broader reflections. When asked too early, these tend to produce vague answers. But after a structured cognitive journey, feedback becomes far more specific, grounded, and actionable. Adaptive paths or click-to-highlight elements often help deepen this final stage.

So, if your survey results feel vague, the issue may lie in the pacing and flow of your questions. A great survey doesn’t just ask, it leads. And when done right, it can uncover insights as rich as any interview. I’ve shared an example structure in the comment section.
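As a rough illustration of the layered sequence described above (not the author’s actual template from the comments), here is one way the context → emotion → interpretation → evaluation ordering might be encoded. All layer names, questions, and the rendering function are hypothetical:

```python
# Hypothetical encoding of the layered survey flow: context questions
# first, then emotional recall, then interpretation, and only then
# evaluation. Purely illustrative; not any survey platform's API.
SURVEY_LAYERS = [
    ("context", [
        "Which feature did you use most recently?",
        "How often do you use this tool in a typical week?",
    ]),
    ("emotion", [
        "How confident did you feel? (0-100 slider)",
        "Was anything more difficult than expected?",
    ]),
    ("interpretation", [
        "What did you expect to happen next?",
        "Did the interface behave the way you assumed it would?",
    ]),
    ("evaluation", [
        "Overall, how satisfied are you with this feature? (1-7)",
        "Which improvement would matter most to you?",
    ]),
]

def render(layers):
    """Print questions in the fixed layer order, never jumping ahead."""
    for layer, questions in layers:
        print(f"--- {layer} ---")
        for question in questions:
            print(f"  {question}")

render(SURVEY_LAYERS)
```

The point of the structure is simply that evaluation questions cannot appear before the earlier layers have run, mirroring the cognitive sequencing the post argues for.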
Guidelines For Effective Surveys
Explore top LinkedIn content from expert professionals.
-
Remember that bad survey you wrote? The one that resulted in responses filled with blatant bias and caused you to doubt whether your respondents even understood the questions? Creating a survey may seem like a simple task, but even minor errors can result in biased results and unreliable data. If this has happened to you before, it's likely due to one or more of these common mistakes in your survey design:

1. Ambiguous Questions: Vague wording like “often” or “regularly” leads to varied interpretations among respondents. Be specific: use clear options like “daily,” “weekly,” or “monthly” to ensure consistent and accurate responses.

2. Double-Barreled Questions: Combining two questions into one, such as “Do you find our website attractive and easy to navigate?”, can confuse respondents and lead to unclear answers. Break these into separate questions to get precise, actionable feedback.

3. Leading/Loaded Questions: Questions that push respondents toward a specific answer, like “Do you agree that responsible citizens should support local businesses?”, can introduce bias. Keep your questions neutral to gather unbiased, genuine opinions.

4. Assumptions: Assuming respondents have certain knowledge or opinions can skew results. For example, “Are you in favor of a balanced budget?” assumes understanding of its implications. Provide necessary context to ensure respondents fully grasp the question.

5. Burdensome Questions: Asking complex or detail-heavy questions, such as “How many times have you dined out in the last six months?”, can overwhelm respondents and lead to inaccurate answers. Simplify these questions or offer multiple-choice options to make them easier to answer.

6. Handling Sensitive Topics: Sensitive questions, like those about personal habits or finances, need to be phrased carefully to avoid discomfort. Use neutral language, provide options to skip or anonymize answers, or employ tactics like a Randomized Response Survey (RRS) to encourage honest, accurate responses (a minimal sketch of how an RRS estimate works follows this post).

By being aware of and avoiding these potential mistakes, you can create surveys that produce precise, dependable, and useful information.

Art+Science Analytics Institute | University of Notre Dame | University of Notre Dame - Mendoza College of Business | University of Illinois Urbana-Champaign | University of Chicago | D'Amore-McKim School of Business at Northeastern University | ELVTR | Grow with Google - Data Analytics

#Analytics #DataStorytelling
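Since point 6 names the Randomized Response Survey, a minimal sketch of one common design (the “forced response” variant) may help. The coin-flip protocol, probabilities, and simulation below are illustrative assumptions, not the only way to run an RRS:

```python
# Forced-response RRS sketch: each respondent privately flips two coins.
# On HH they answer "yes", on TT "no", otherwise they answer truthfully.
# No individual answer is interpretable, but in aggregate
# P(observed yes) = 0.5 * true_prevalence + 0.25, which inverts cleanly.
import random

def respond(true_answer: bool) -> bool:
    r = random.random()
    if r < 0.25:           # HH: forced "yes"
        return True
    if r < 0.50:           # TT: forced "no"
        return False
    return true_answer     # truthful the remaining half of the time

def estimate_prevalence(observed_yes_rate: float) -> float:
    """Invert P(yes) = 0.5*p + 0.25 to recover the true rate p."""
    return 2 * (observed_yes_rate - 0.25)

# Simulate 10,000 respondents with a true prevalence of 30%.
responses = [respond(random.random() < 0.30) for _ in range(10_000)]
observed = sum(responses) / len(responses)
print(f"observed yes rate: {observed:.3f}")
print(f"estimated true prevalence: {estimate_prevalence(observed):.3f}")  # ~0.30
```

The privacy guarantee is what encourages honesty: respondents know that a “yes” can always be blamed on the coins, yet the analyst still recovers an accurate population estimate.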
-
User experience surveys are often underestimated. Too many teams reduce them to a checkbox exercise: a few questions thrown in post-launch, a quick look at average scores, and then back to development. But that approach leaves immense value on the table. A UX survey is not just a feedback form; it’s a structured method for learning what users think, feel, and need at scale, and a design artifact in its own right.

Designing an effective UX survey starts with a deeper commitment to methodology. Every question must serve a specific purpose aligned with research and product objectives. This means writing questions with cognitive clarity and neutrality, minimizing effort while maximizing insight. Whether you’re measuring satisfaction, engagement, feature prioritization, or behavioral intent, the wording, order, and format of your questions matter. Even small design choices, like using semantic differential scales instead of Likert items, can significantly reduce bias and enhance the authenticity of user responses.

When we ask users, “How satisfied are you with this feature?” we might assume we’re getting a clear answer. But subtle framing, mode of delivery, and even time of day can skew responses. Research shows that midweek deployment, especially on Wednesdays and Thursdays, significantly boosts both response rate and data quality. In-app micro-surveys work best for contextual feedback after specific actions, while email campaigns are better for longer, reflective questions, if properly timed and personalized.

Sampling and segmentation are not just statistical details; they’re strategy. Voluntary surveys often over-represent highly engaged users, so proactively reaching less vocal segments is crucial. Carefully designed incentive structures (that don’t distort motivation) and multi-modal distribution (like combining in-product, email, and social channels) offer more balanced and complete data.

Survey analysis should also go beyond averages. Tracking distributions over time, comparing segments, and integrating open-ended insights lets you uncover both patterns and outliers that drive deeper understanding. One-off surveys are helpful, but longitudinal tracking and transactional pulse surveys provide trend data that allows teams to act on real user sentiment changes over time.

The richest insights emerge when we synthesize qualitative and quantitative data. An open comment field that surfaces friction points, layered with behavioral analytics and sentiment analysis, can highlight not just what users feel, but why.

Done well, UX surveys are not a support function; they are core to user-centered design. They can help prioritize features, flag usability breakdowns, and measure engagement in a way that’s scalable and repeatable. But this only works when we elevate surveys from a technical task to a strategic discipline.
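To make the “go beyond averages” point concrete, here is a small illustration with invented 1–5 ratings: two segments can share an identical mean while their distributions tell very different stories. The segment names and numbers are made up:

```python
# Why averages alone mislead: a polarized segment and a lukewarm segment
# can produce the same mean satisfaction score. Data is invented.
from collections import Counter
from statistics import mean

power_users = [1, 1, 1, 5, 5, 5, 5, 1, 3, 3]   # polarized: love it or hate it
new_users   = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]   # uniformly lukewarm

for name, ratings in [("power users", power_users), ("new users", new_users)]:
    distribution = dict(sorted(Counter(ratings).items()))
    print(f"{name}: mean={mean(ratings)} distribution={distribution}")

# Both means are 3.0, but only the distribution reveals the dissatisfied
# subgroup hiding inside the power-user segment.
```

Tracking the full distribution per segment over time is what surfaces these hidden subgroups before they churn.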
-
Writing your own survey? Stop making these survey mistakes… I’ve reviewed dozens of surveys from brands and consultants who are taking a DIY approach to survey-based research. While I love seeing more companies using data and original insights in their content, there are some common pitfalls with surveys that can undermine your efforts. Here are the biggest mistakes I see, and how to avoid them:

1️⃣ Too many open-ended questions
While open-ended questions can be valuable, overusing them can overwhelm respondents and make it harder to extract actionable insights. Many of these could easily be reworked as multi-select options, which are quicker to answer and easier to analyze.

2️⃣ Not tailoring questions to respondents
Failing to properly segment your audience or filter questions (e.g., asking irrelevant questions to people outside a specific group) frustrates respondents and skews your data. Make sure your survey flows logically and adapts based on responses (a small sketch of this kind of skip logic follows this post).

3️⃣ Using jargon or acronyms
Don’t assume your audience speaks the same language as your internal team. Spell out acronyms and avoid industry jargon; it ensures clarity and a better response rate.

4️⃣ Combining ideas in one question or response option
Questions or responses like “Do you think A and B?” are problematic because a respondent might agree with one but not the other. Keep questions and responses focused on one idea at a time to get accurate answers.

5️⃣ Making surveys too long
Long surveys lead to drop-offs or rushed responses. Respect your respondents’ time: focus on what you really need to know and keep it concise.

6️⃣ No narrative structure, just a dump of internal questions
One of the most common mistakes I see is surveys that lack a clear story arc. Instead of building around a strong theme or hypothesis, it’s just a long list of random questions from different stakeholders. The result? Disconnected data that’s hard to turn into compelling content. When designing your survey, think about the story you want to tell. Build your questions to support that narrative.

Key Takeaway: Thoughtful design makes a huge difference in the quality of your insights, and ultimately, the impact of your content.

Have you seen any survey mistakes that drive you nuts? Or tips for improving them?

#SurveyTips #OriginalResearch #ContentStrategy

Hi, I'm Becky. 👋 My clients have garnered 80+ media mentions, 2-3X the leads, and over 250K in free advertising from branded research💰 Interested in branded original research to boost your marketing KPIs? DM me and we'll talk. 🙂
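On point 2️⃣, here is a minimal sketch of response-based skip logic. The roles, questions, and function are hypothetical, not any survey tool’s actual API:

```python
# Illustrative skip logic: only ask follow-ups that apply to the
# respondent, based on answers already given. Everything here is
# a made-up example, not a real survey platform's branching API.
def next_question(answers: dict) -> str | None:
    if "role" not in answers:
        return "What best describes your role? (marketer / engineer / other)"
    if answers["role"] == "marketer" and "channels" not in answers:
        return "Which channels do you currently run campaigns on? (select all)"
    if answers["role"] == "engineer" and "tools" not in answers:
        return "Which analytics tools do you work with? (select all)"
    return None  # no further questions apply to this respondent

answers = {"role": "marketer"}
print(next_question(answers))  # marketers never see the engineering question
```

The design intent is simply that no respondent ever sees a question that was written for a different segment.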
-
People often say what they think they should say. I had a great exchange with 👋 Brandon Spencer, who highlighted the challenges of using qualitative user research. He suggested that qual responses are helpful, but you have to read between the lines more than you do when watching what people do. People often say what they think they should be saying and do what they naturally would. I agree. Based on my digital experiences, there are several reasons for this behavior. People start with what they know or feel, filtered by their long-term memory.

Social bias ↳ People often say what they think they should be saying because they want to present themselves positively, especially in social or evaluative situations.

Jakob's Law ↳ Users spend most of their time on other sites, meaning they expect your site/app to behave like the sites they already know.

Resolving these issues in UX research requires a multi-faceted approach that considers what users say (user wants) and what they do (user needs) while accounting for biases and user expectations. Here’s how we tackle these issues:

1. Combine qualitative and quantitative research
We use Helio to pull qualitative insights to understand the “why” behind user behavior, but validate these insights with quantitative data (e.g., structured behavioral questions). This helps to balance what users say with what they do.

2. Test baselines against your competitors
Compare your design with common patterns users are already familiar with. Knowing this information reduces cognitive load and makes it easier for users to interact naturally with your site on common tasks.

3. Allow anonymity
Allow users to provide feedback anonymously to reduce the pressure to present themselves positively. Helio does this automatically while still creating targeted audiences. We also don’t do video. This can lead to more honest and authentic responses.

4. Neutral questioning
We frame questions to reduce the likelihood of leading or socially desirable answers. For example, ask open-ended questions that don’t imply a “right” answer.

5. Natural settings
Engage with users in their natural environment and on their own devices to observe their real behavior and reduce the influence of social bias. Helio is a remote platform, so people can respond wherever they want.

The last thing we have found is that by asking more in-depth questions and increasing participants, you can gain stronger insights by cross-referencing data.

→ Deeper: When users give expected or socially desirable answers, ask follow-up questions to explore their true thoughts and behaviors.

→ Wider: Expand your sample size (we test with 100 participants) and keep testing regularly. We gather 10,000 customer answers each month, which helps create a broader and more reliable data set.

Achieving a more accurate and complete understanding of user behavior is possible, leading to better design decisions.

#productdesign #productdiscovery #userresearch #uxresearch
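As a rough companion to the “wider” point, here is a back-of-the-envelope look at what a 100-participant sample can resolve. The formula is the standard normal-approximation margin of error for a proportion; the sample sizes compared are illustrative:

```python
# What sample size buys you: 95% margin of error for an observed
# proportion, using the usual normal approximation. Numbers below
# are illustrative, not from the post's data.
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% confidence half-width for proportion p with n respondents."""
    return z * sqrt(p * (1 - p) / n)

for n in (100, 400, 1600):
    print(f"n={n:>5}: +/- {margin_of_error(0.5, n):.1%}")

# n=100 resolves differences of roughly 10 points; each quadrupling of
# the sample halves the margin, which is why repeated, wider testing
# and cross-referencing compound into stronger insights.
```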
-
A LinkedIn contact recently DMed me to get some advice. She was struggling to get people to answer her survey. This is one of the most common frustrations I hear from marketers who are conducting survey-based research. While there is no “easy button” you can push to get the right people to spend time on your survey, here are a few practical things I would suggest.

👉 Make it clear why you’re doing the survey
If you're asking your own audience to take your survey, they need to understand why they should invest their time. Your survey intro should clearly share:
- Why you are conducting this research
- How long it will take
- How you’ll use the results to help the participant

👉 Evaluate your screening criteria
I recently worked with a client on a niche B2B survey. As we tested the survey, we saw that a high percentage of respondents were getting disqualified. Our issue: the screening criteria were too restrictive. Our solution was to think about who could answer our survey even if they weren't our ideal audience. We simplified and broadened our criteria and had a much easier time getting people to respond. If you have screening/disqualification logic in place, consider whether you can broaden it.

👉 Make the first questions easy to answer
The most common place people drop off on surveys is at the beginning. I’m not sure why this is, but I suspect it’s because people are wondering:
- Do I have the right knowledge to answer this question?
- How difficult/time-consuming will this be?
To get people invested, make sure the initial questions are super easy to answer. Said another way: don’t do what you see in this screenshot (which comes from my "survey questions gone wrong" swipe file). As you can see, this is the first question they asked, which is ridiculously hard to answer.

What other advice would you share with someone who is trying to get more responses for their #contentmarketing survey?
-
HOW TO GET MORE SURVEYS + MAKE YOUR NEXT EVENT BETTER?!

Event pros, we know how it goes - you have a GREAT event (with some hiccups!) and you leave the event with all these ideas for the future... and then you get swallowed up by your inbox, your next event, and LIFE... and those good ideas/changes you want to make just fall further and further down the to-do list. I get it! I have been part of a lot of event post-mortems with my clients (and as a former meetings + events pro) and here is a list of things I've seen clients do that were impactful/good ideas:

STRATEGIES TO IMPROVE # OF SURVEY RESPONSES:
1. Get each speaker to include a QR code for the survey at the end of their session / right before Q+A and give attendees 1 minute to fill it out in real time (a quick code-generation sketch follows this post).
2. Have the main stage emcee give an allotted 1-2 minutes during mainstage time at the closing session of each day to let folks "vote with their surveys" for their favorite session of the day.
3. Partner with a local charity to donate $1 per survey (for example, a client is supporting Second Harvest Food Bank of Central Florida for an upcoming event in Orlando, where $1 gives 4 meals).
4. Send multiple follow-up emails with a time deadline to "get in their surveys," with a possible incentive for responding (a discount to a future event?).

STRATEGIES TO IMPROVE QUALITY OF SURVEYS:
1. Be clear on what your leadership wants ROI/KPIs on, and then focus on questions that can deliver those metrics/insights.
2. Give the survey to your event partners - they often have a lot of experience and some ideas for improvements/feedback that you either didn't see, didn't know about, or wouldn't have thought of. (And yes, they could upsell you here, but great partners will give, give, give to help you and trust you may return to them if they did a great job.)
3. If you have sponsors/rely on sponsors for event income, call them personally and follow up about the survey to hear their feedback (and have a special sponsor/partner survey, if possible) so you can understand their experience, act on their feedback, and improve sponsorship sales for next year.

STRATEGIES TO MAXIMIZE POST-EVENT MEETINGS:
1. Have a quick on-site post-event meeting celebrating the wins! Nothing super heavy here, but share the joy! You all did it!!! Wins will be fresh in the mind - record it on a voice note and transcribe later, if possible.
2. Send out an internal survey ahead of the meeting for "braindumping," allowing those who think-then-talk to prepare accordingly. Give open-ended questions, and make it anonymous if your org's culture may benefit from that.
3. Ask everyone to pick something they will "Champion" - which means they'll see it through the stages of research, proposal, planning, and execution with the support of others for the next event. Co-Champions are good, too, but either way - leave the meeting with folks being excited about a new idea and with expectations set on how to convert from "Great idea" to "Awesome reality!"

#meetingsandevents #associations #eventprofs
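For strategy 1 under response counts, here is a tiny sketch of generating a session-specific survey QR code for a closing slide. It assumes the third-party Python `qrcode` package (with Pillow) is installed, and the survey URL is a placeholder:

```python
# Generate a per-session QR code that a speaker can drop on their
# closing slide. Requires: pip install qrcode[pil]
# The URL below is a placeholder, not a real survey link.
import qrcode

survey_url = "https://example.com/survey?session=keynote-day1"  # hypothetical
img = qrcode.make(survey_url)        # returns a PIL-backed image of the code
img.save("keynote-day1-survey.png")  # drop this PNG onto the slide
```

Tagging the URL per session (as in the query string above) also lets you attribute responses to the session that collected them.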
-
Had an interesting conversation with students on how to get more responses when doing a security knowledge assessment or security culture survey. My favorite answer is: share the results! For example, let's say you want to do an annual Security Culture survey that is optional. The first time you do it you get only 20% participation. What is going on, and how do you improve that? This is how I would proceed.

1. Realize that people are "surveyed" out; you are not the only person pushing surveys. This is often why HR will say no when you want to do a survey: they are protecting the workforce.

2. Keep your surveys short, no more than 15-20 questions / 8 minutes to complete. Anything longer and you will frustrate people. One idea: instead of doing your own culture survey, ask HR if you can add security-related questions to your company's annual Engagement Survey.

3. Once you get the results, share them with the company. Where are you strong, where are you weak? What did people think about the security team or security policies? I know this sounds odd at first, but it does several things. First, it lets people know you are actually listening. There is nothing more frustrating than taking a survey and then silence; you have no idea what the result was. Second, let people know what actions you are going to take as a result of the survey, what impact their feedback had, and how they made a difference.

What tricks have you learned to improve engagement when doing security assessments / surveys?

SANS LDR433 Human Risk course - sans.org/ldr433
SANS LDR521 Security Culture course - sans.org/ldr521

#securityawareness #securityculture SANS Security Leadership
-
It’s arguably never been easier to run surveys, but that doesn’t mean we’re getting better insights from them. Often this stems from two main issues: 1) lack of clear objectives and hypotheses upfront, and 2) poorly written questions.

✅ Start with a list of hypotheses you want to investigate. Think of these as statements you believe to be true and want to confirm. This should not be a list of stats you’d like to generate from the survey. Instead, what are the ideal “headlines” you’d love to report on? For example, rather than seeking a stat like “60% of Gen Z discover new products on social media compared to 20% of Gen X”, think of the overall insight you want to gain, like “the shopping experience has changed and brands need to adapt their marketing strategy: a majority of Gen Z now use social media to discover new products, while a minority of Gen X shoppers discover products this way”.

⁉️ Now, what questions help you get to these insights? One of the most frequent question pitfalls I see is asking two questions in one. Don’t ask a question with “or” in the middle; each question should have a single point. E.g., “Which of the below channels do you use for product discovery?” If you also want to learn about the channels they are more likely to convert from, ask that in a separate question.

Define all terms you are using. What do you mean by “discovery”? Are all the channels you list easily understood? Questions should be as simple and specific as possible: as few words as possible, no fancy vocab. Then test your questions with a few users. Do they all understand and interpret the questions in the same way? If people report multiple meanings, make the question simpler and more specific.

To put these points together, add a sentence to your survey draft above each question (or in some cases, a set of questions) with the headline you ideally want to share.

💡 To summarize: before running a survey, what insights do you want to take from it? And do you have the right questions to get you there?
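As a sketch of how the example headline could be checked once responses are in, here is a hypothetical two-proportion comparison. The counts are invented, and the statsmodels dependency is an assumption on my part, not something the post prescribes:

```python
# Turning the example headline ("a majority of Gen Z discover products
# on social media, while a minority of Gen X do") into a testable
# comparison. Counts are invented. Requires: pip install statsmodels
from statsmodels.stats.proportion import proportions_ztest

gen_z_yes, gen_z_n = 120, 200   # hypothetical: 60% of Gen Z respondents
gen_x_yes, gen_x_n = 40, 200    # hypothetical: 20% of Gen X respondents

stat, p_value = proportions_ztest([gen_z_yes, gen_x_yes], [gen_z_n, gen_x_n])
print(f"z = {stat:.2f}, p = {p_value:.4g}")

# A small p-value supports reporting the headline as a real difference
# between generations rather than sampling noise.
```

Writing the hypothesis as a headline first, then checking it with a test like this, keeps the survey honest: the stat either supports the story or it doesn’t.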