How to Use Surveys to Improve Training Sessions

Summary

Using surveys to improve training sessions involves crafting thoughtful questions that gather actionable feedback on the content, delivery format, and overall experience. A well-designed survey not only collects data but guides participants through their thoughts and emotions to generate meaningful insights.

  • Start with context-setting questions: Begin by asking simple, specific questions that help participants recall their experience, such as which tools they used or their frequency of use, as this activates memory and encourages accurate responses.
  • Focus on clarity and relevance: Include questions about actionable topics like what participants found exciting, unclear, or challenging, avoiding vague prompts that won’t provide practical insights.
  • Respond to feedback: Share how the feedback is being used to improve future sessions, as this reassures participants that their input is valued and encourages them to contribute honestly in the future.
Summarized by AI based on LinkedIn member posts.

  • Mohsen Rafiei, Ph.D.

    UXR Lead | Assistant Professor of Psychological Science

    A good survey works like a therapy session. You don't begin by asking for deep truths; you guide the person gently through context, emotion, and interpretation. When done in the right sequence, your questions help people articulate thoughts they didn't even realize they had.

    Most UX surveys fall short not because users hold back, but because the design doesn't help them get there. They capture behavior and preferences but often miss the emotional drivers, unmet expectations, and mental models behind them. In cognitive psychology, we understand that thoughts and feelings exist at different levels. Some answers come automatically, while others require reflection and reconstruction. If a survey jumps straight to asking why someone was frustrated, without first helping them recall the situation or how it felt, it skips essential cognitive steps. This often leads to vague or inconsistent data.

    When I design surveys, I use a layered approach grounded in models like Levels of Processing, schema activation, and emotional salience. It starts with simple, context-setting questions like "Which feature did you use most recently?" or "How often do you use this tool in a typical week?" These may seem basic, but they activate memory networks and help situate the participant in the experience. Visual prompts or brief scenarios can support this further.

    Once context is active, I move into emotional or evaluative questions (still gently), asking things like "How confident did you feel?" or "Was anything more difficult than expected?" These help surface emotional traces tied to memory. Using sliders or response ranges allows participants to express subtle variations in emotional intensity, which matters because emotion often turns small usability issues into lasting negative impressions.

    After emotional recall, we move into the interpretive layer, where users start making sense of what happened and why. I ask questions like "What did you expect to happen next?" or "Did the interface behave the way you assumed it would?" to uncover the mental models guiding their decisions. At this stage, responses become more thoughtful and reflective. While we sometimes use AI-powered sentiment analysis to identify patterns in open-ended responses, the real value comes from the survey's structure, not the tool.

    Only after guiding users through context, emotion, and interpretation do we include satisfaction ratings, prioritization tasks, or broader reflections. When asked too early, these tend to produce vague answers. But after a structured cognitive journey, feedback becomes far more specific, grounded, and actionable. Adaptive paths or click-to-highlight elements often help deepen this final stage.

    So, if your survey results feel vague, the issue may lie in the pacing and flow of your questions. A great survey doesn't just ask, it leads. And when done right, it can uncover insights as rich as any interview.

    *I've shared an example structure in the comment section.
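
    To make that sequencing concrete, here is a minimal sketch of the layered flow as plain Python data. The stage order (context, emotion, interpretation, evaluation) follows the post, and most question wording is taken from its examples; the evaluation questions, the response-type labels, and the render helper are illustrative assumptions, not any particular survey tool's API.

    ```python
    # Layered survey flow: context -> emotion -> interpretation -> evaluation.
    # NOTE: response-type labels (e.g., "slider_0_100") are hypothetical.
    SURVEY_FLOW = [
        {"stage": "context",
         "purpose": "Activate memory and situate the participant",
         "questions": [
             ("Which feature did you use most recently?", "single_choice"),
             ("How often do you use this tool in a typical week?", "single_choice"),
         ]},
        {"stage": "emotion",
         "purpose": "Surface emotional traces tied to the recalled experience",
         "questions": [
             ("How confident did you feel?", "slider_0_100"),
             ("Was anything more difficult than expected?", "open_text"),
         ]},
        {"stage": "interpretation",
         "purpose": "Uncover expectations and mental models",
         "questions": [
             ("What did you expect to happen next?", "open_text"),
             ("Did the interface behave the way you assumed it would?", "yes_no_explain"),
         ]},
        {"stage": "evaluation",
         "purpose": "Only now ask for ratings, priorities, and reflections",
         "questions": [
             ("Overall, how satisfied were you?", "rating_1_5"),
             ("Which issue should we fix first?", "ranking"),
         ]},
    ]

    def render(flow):
        """Print the survey in its fixed stage order, context first."""
        for stage in flow:
            print(f"--- {stage['stage'].upper()}: {stage['purpose']}")
            for text, kind in stage["questions"]:
                print(f"  [{kind}] {text}")

    render(SURVEY_FLOW)
    ```

    Keeping the flow as data makes the pacing explicit and easy to review before a single participant sees it.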

  • Lisa Friscia

    Strategic Advisor & Fractional Chief People Officer for Small and Growing Orgs | Systems & Learning Nerd | I Help Founders & CEOs Scale Culture, Develop Leaders & Build Organizations That Last

    One of my biggest learnings from leading summer professional development for teachers? If you want a culture of feedback, you have to build it intentionally.

    The first step is short and sweet surveys (daily for summer PD, weekly thereafter). Most leaders do this. But to ensure the survey truly builds a culture of feedback and continuous improvement, I've learned three things:

    ✅ Ask focused questions. Simply put, we get the data we ask for. Ask both about the content and the general format of PD. For content, a few questions can be: What is one practice you are excited to try? What is one thing you remain unclear on? What is one thing you know you will need further support on? For format, a simple Keep-Start-Stop can be super helpful.

    ✅ Review the data with your leadership team. This will allow you to process the feedback, add any additional color based on observations, and design a game plan. That can include differentiating groups, shifting a summer PD schedule, or changing up future case studies and role plays to better address where the team is at. During the year, it will help you focus your observations.

    ✅ Respond to the feedback. It's not enough to make changes to the day based on the feedback. If you are giving people surveys, you must discuss the trends you saw and address them so that folks know they are being heard. Articulate how you are shifting things, or, if you can't, say where concerns or confusions will be addressed. When folks hear how their feedback is being acted on, they are more likely to be honest in the future. For concerns or feedback that only one or two folks raise? Follow up individually.

    The time invested early on will pay dividends later. I know these tips don't only apply to school leaders, though summer PD is definitely top of my mind. What are your tips and 1% solutions for building a culture of feedback and continuous improvement?
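
    As one small illustration of the "trends versus one or two folks" triage, here is a minimal sketch in Python. It assumes each Keep-Start-Stop response has already been tagged with a theme (say, in a quick manual pass); the triage function, the threshold of three, and the sample data are all hypothetical.

    ```python
    # Split tagged Keep-Start-Stop feedback into group-level trends
    # (discuss with everyone) and one-offs (follow up individually).
    # NOTE: the threshold and sample data are illustrative assumptions.
    from collections import Counter

    def triage(responses, group_threshold=3):
        """responses: list of (bucket, theme, respondent) tuples,
        where bucket is "keep", "start", or "stop"."""
        counts = Counter((bucket, theme) for bucket, theme, _ in responses)
        trends = {key: n for key, n in counts.items() if n >= group_threshold}
        one_offs = [r for r in responses
                    if counts[(r[0], r[1])] < group_threshold]
        return trends, one_offs

    feedback = [
        ("stop", "too much lecture", "A. Smith"),
        ("stop", "too much lecture", "B. Jones"),
        ("stop", "too much lecture", "C. Lee"),
        ("start", "more role plays", "D. Kim"),
        ("keep", "daily exit survey", "E. Diaz"),
    ]

    trends, one_offs = triage(feedback)
    print("Raise with the whole group:", trends)
    print("Follow up individually:", [(name, theme) for _, theme, name in one_offs])
    ```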

  • Nick Lawrence

    Outcomes, Outputs, & Obstacles || Enabling reps to achieve outcomes and produce outputs by removing obstacles @ Databricks

    Don't ask your trainees to rate how confident they feel:

    "After the training, I feel confident to perform my job."
    1) Strongly Disagree
    2) Disagree
    3) Neither Agree nor Disagree
    4) Agree
    5) Strongly Agree

    You'll end up with an average of 3.9 (or something like that). But what are you supposed to do with a 3.9? What decisions should you make? What specific actions should be taken? It's impossible to know.

    Instead, ask questions that reveal insights about the effectiveness of the training:

    "How confident are you when applying this training to real work situations? (Select all that apply)"
    A) I AM CONFIDENT I can successfully perform because I PERFORMED REAL WORK during the training and received HANDS-ON COACHING
    B) I AM CONFIDENT because the training challenged me WITH AMPLE PRACTICE on WORK-RELATED TASKS
    C) I'M NOT FULLY CONFIDENT because the training DID NOT PROVIDE ENOUGH practice on WORK-RELATED TASKS
    D) I AM NOT CONFIDENT because the training DID NOT challenge me with practice on WORK-RELATED TASKS
    E) I HAVE ZERO CONFIDENCE that I can successfully perform because the training DID NOT REVIEW WORK-RELATED TASKS

    One look at results from a survey that gauges the effectiveness of the training itself will leave you with immediate decisions and actions to take.

    #salesenablement #salestraining

    PS - "Confidence to apply" is only one important factor to assess. Read Will Thalheimer's "Performance-Focused Learner Surveys" for the other pillars of training effectiveness.
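
    To see why the second format is actionable in a way a 3.9 average is not, here is a minimal tallying sketch in Python. The option-to-action mapping and the sample responses are illustrative assumptions read off the option wording above, not anything from Thalheimer's instrument.

    ```python
    # Each diagnostic option implies a concrete course decision, so a
    # plain tally of a "select all that apply" question is enough to act on.
    # NOTE: the ACTIONS mapping and sample data below are hypothetical.
    from collections import Counter

    ACTIONS = {
        "A": "Keep: real work plus hands-on coaching is landing",
        "B": "Keep: practice volume on work-related tasks is sufficient",
        "C": "Fix: add more practice reps on work-related tasks",
        "D": "Fix: make practice on work-related tasks more challenging",
        "E": "Redesign: the training must actually cover work-related tasks",
    }

    # One list of selected options per respondent (multi-select).
    responses = [["A"], ["C"], ["C", "D"], ["B"], ["C"], ["E"]]

    tally = Counter(option for picks in responses for option in picks)
    for option, count in tally.most_common():
        print(f"{option} x{count}: {ACTIONS[option]}")
    ```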
