Creating Surveys That Get Real Training Feedback
Explore top LinkedIn content from expert professionals.
Summary
Creating surveys that get real training feedback involves designing questions and processes that move beyond surface-level impressions to gather actionable insights about content comprehension, skill application, and overall impact.
- Ask purposeful questions: Design questions that focus on specific aspects of learning, such as areas needing more clarity or skills learners are confident to apply in real scenarios.
- Review and analyze data: Regularly examine feedback with your team to identify trends, adjust training methods, and address any gaps in knowledge or execution.
- Communicate changes: Share how feedback has been implemented or addressed to ensure participants feel heard and are encouraged to provide honest input in the future.
-
One of my biggest learnings from leading summer professional development for teachers? If you want a culture of feedback, you have to build it intentionally. The first step is short and sweet surveys (daily for summer PD, weekly thereafter). Most leaders do this. But to ensure the survey truly builds a culture of feedback and continuous improvement, I've learned three things:

✅ Ask focused questions. Simply put, we get the data we ask for. Ask about both the content and the general format of PD. For content, a few questions can be: What is one practice you are excited to try? What is one thing you remain unclear on? What is one thing you know you will need further support on? For format, a simple Keep-Start-Stop can be super helpful.

✅ Review the data with your leadership team. This will allow you to process the feedback, add color based on your own observations, and design a game plan. That can include differentiating groups, shifting the summer PD schedule, or changing upcoming case studies and role plays to better meet the team where it is. During the year, it will help you focus your observations.

✅ Respond to the feedback. It's not enough to quietly make changes based on what you read. If you are giving people surveys, you must discuss the trends you saw and address them so that folks know they are being heard. Articulate how you are shifting things, or if you can't, say where concerns and confusions will be addressed. When folks hear that their feedback lands, they are more likely to be honest in the future. For concerns or feedback that only one or two folks raise? Follow up individually.

The time invested early on will pay dividends later. I know these tips don't only apply to school leaders, though summer PD is definitely top of mind for me. What are your tips and 1% solutions for building a culture of feedback and continuous improvement?
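To make the Keep-Start-Stop review concrete, here is a minimal Python sketch of the kind of tally a leadership team might glance at before designing a game plan. The response data, theme normalization, and the one-to-two-person follow-up flag are illustrative assumptions, not part of the original post.

```python
# Minimal sketch: tally daily Keep-Start-Stop responses so the
# leadership team reviews trends, not a pile of raw comments.
# The response format and sample data are hypothetical.
from collections import Counter

responses = [  # one dict per survey submission (hypothetical data)
    {"keep": "role plays", "start": "more planning time", "stop": "long lectures"},
    {"keep": "role plays", "start": "grade-level groups", "stop": "long lectures"},
    {"keep": "case studies", "start": "more planning time", "stop": "rushed closings"},
]

for field in ("keep", "start", "stop"):
    counts = Counter(r[field].strip().lower() for r in responses)
    print(f"\n{field.upper()} themes:")
    for theme, n in counts.most_common(3):
        line = f"  {n}x  {theme}"
        if n <= 2:
            # mirrors the post's advice: low-frequency concerns get 1:1 follow-up
            line += "  (only 1-2 people: follow up individually)"
        print(line)
```

With real free-text responses you would normalize or hand-code themes first; the point is that the team reviews counts and trends together rather than raw comments.
-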
Smile Sheets: The Illusion of Training Effectiveness.

If you're investing ~$200K per employee to ramp them up, do you really want to measure training effectiveness based on whether they liked the snacks? 🤨

Traditional post-training surveys, AKA "Smile Sheets," are great for checking if the room was the right temperature but do little to tell us if knowledge was actually transferred or if behaviors will change. Sure, logistics and experience matter, but as a leader, what I really want to know is:
✅ Did they retain the knowledge?
✅ Can they apply the skills in real-world scenarios?
✅ Will this training drive better business outcomes?

That's why I've changed the way I gather training feedback. Instead of a one-and-done survey, I use quantitative and qualitative assessments at multiple intervals:
📌 Before training to gauge baseline knowledge
📌 Midway through for real-time adjustments
📌 Immediately post-training for immediate insights
📌 Strategic follow-ups tied to actual product usage & skill application

But the real game-changer? Hard data. I track real-world outcomes like product adoption, quota achievement, adverse events, and speed to competency. The right metrics vary by company, but one thing remains the same: Smile Sheets alone don't cut it.

So, if you're still relying on traditional post-training surveys to measure effectiveness, it's time to rethink your approach. How are you measuring training success in your organization? Let's compare notes. 👇

#MedDevice #TrainingEffectiveness #Leadership #VentureCapital
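As a rough illustration of the multi-interval approach above, here is a Python sketch that compares baseline and post-training assessment scores and reports speed to competency. The record fields, the 80% competency bar, and the data are assumptions for the sketch, not the author's actual instruments.

```python
# Minimal sketch: pre/post knowledge gain plus speed to competency.
# Thresholds, field names, and data are illustrative assumptions.
from dataclasses import dataclass

COMPETENCY_BAR = 0.80  # assumed pass bar for "competent"

@dataclass
class TraineeRecord:
    name: str
    baseline: float          # pre-training assessment score (0-1)
    post: float              # immediately-post-training score (0-1)
    days_to_competent: int   # days until first field evaluation >= bar

records = [
    TraineeRecord("rep_a", 0.55, 0.85, 21),
    TraineeRecord("rep_b", 0.60, 0.70, 45),
]

for r in records:
    gain = r.post - r.baseline
    status = "at bar" if r.post >= COMPETENCY_BAR else "needs follow-up"
    print(f"{r.name}: gain {gain:+.0%}, post {r.post:.0%} ({status}), "
          f"{r.days_to_competent} days to competency")
```

The same shape extends to the post's other hard-data outcomes (product adoption, quota achievement): each is just another per-trainee field compared against a bar the business agrees on.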
-
🤔 How Do You Actually Measure Learning That Matters?

After analyzing hundreds of evaluation approaches through the Learnexus network of L&D experts, here's what actually works (and what just creates busywork).

The Uncomfortable Truth: "Most training evaluations just measure completion, not competence," shares an L&D Director who transformed their measurement approach. Here's what actually shows impact:

The Scenario-Based Framework: "We stopped asking multiple choice questions and started presenting real situations," notes a Senior ID whose retention rates increased 60%.

What Actually Works:
→ Decision-based assessments
→ Real-world application tasks
→ Progressive challenge levels
→ Performance simulations

The Three-Point Check Strategy: "We measure three things: knowledge, application, and business impact."

The Winning Formula:
- Immediate comprehension
- 30-day application check
- 90-day impact review
- Manager feedback loop

The Behavior Change Tracker: "Traditional assessments told us what people knew. Our new approach shows us what they do differently."

Key Components:
→ Pre/post behavior observations
→ Action learning projects
→ Peer feedback mechanisms
→ Performance analytics

🎯 Game-Changing Metrics: "Instead of training scores, we now track:
- Problem-solving success rates
- Reduced error rates
- Time to competency
- Support ticket reduction"

From our conversations with thousands of L&D professionals, we've learned that meaningful evaluation isn't about perfect scores - it's about practical application.

Practical Implementation:
- Build real-world scenarios
- Track behavioral changes
- Measure business impact
- Create feedback loops

Expert Insight: "One client saved $700,000 annually in support costs because we measured the right things and could show exactly where training needed adjustment."

#InstructionalDesign #CorporateTraining #LearningAndDevelopment #eLearning #LXDesign #TrainingDevelopment #LearningStrategy
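A minimal sketch of the Three-Point Check timing described above, assuming only a course-completion date is known. The checkpoint names come from the post; the dates, offsets, and code structure are illustrative.

```python
# Minimal sketch: given a completion date, compute when the 30-day
# application check and 90-day impact review are due.
from datetime import date, timedelta

CHECKPOINTS = {
    "immediate comprehension": 0,
    "30-day application check": 30,
    "90-day impact review": 90,
}

def evaluation_schedule(completed: date) -> dict[str, date]:
    """Return the due date for each checkpoint after completion."""
    return {name: completed + timedelta(days=offset)
            for name, offset in CHECKPOINTS.items()}

for name, due in evaluation_schedule(date(2024, 6, 3)).items():
    print(f"{name}: due {due.isoformat()}")
```

In practice each checkpoint would trigger a different instrument (a quiz, a manager observation, a metrics pull), but the scheduling skeleton is this simple.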
-
Don't ask your trainees to rate how confident they feel:

"After the training, I feel confident to perform my job."
1) Strongly Disagree
2) Disagree
3) Neither Agree nor Disagree
4) Agree
5) Strongly Agree

You'll end up with an average of 3.9 (or something like that). But what are you supposed to do with a 3.9? What decisions should you make? What specific actions should be taken? It's impossible to know.

Instead, ask questions that reveal insights about the effectiveness of the training:

"How confident are you when applying this training to real work situations? (Select all that apply)"
A) I AM CONFIDENT I can successfully perform because I PERFORMED REAL WORK during the training and received HANDS-ON COACHING
B) I AM CONFIDENT because the training challenged me WITH AMPLE PRACTICE on WORK-RELATED TASKS
C) I'M NOT FULLY CONFIDENT because the training DID NOT PROVIDE ENOUGH practice on WORK-RELATED TASKS
D) I AM NOT CONFIDENT because the training DID NOT challenge me with practice on WORK-RELATED TASKS
E) I HAVE ZERO CONFIDENCE that I can successfully perform because the training DID NOT REVIEW WORK-RELATED TASKS

One look at survey results that gauge the effectiveness of the training will leave you with immediate decisions and actions to take.

#salesenablement #salestraining

PS: "Confidence to apply" is only one important factor to assess. Read Will Thalheimer's "Performance-Focused Learner Surveys" for the other pillars of training effectiveness.
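To see the difference in actionability, here is a small Python sketch contrasting the averaged Likert item with a tally of the diagnostic options. The sample responses and the option-to-action mapping are invented for illustration; only the A-E option labels come from the post.

```python
# Minimal sketch: a mean of Likert codes collapses everything into
# one number, while tallying the diagnostic options maps each answer
# straight to an action. All data here is illustrative.
from collections import Counter
from statistics import mean

likert = [4, 5, 3, 4, 4, 3, 5, 3]             # 1-5 agreement codes
print(f"Likert average: {mean(likert):.1f}")  # prints 3.9 -> now what?

# Each diagnostic option implies a concrete fix (assumed mapping).
ACTIONS = {
    "A": "keep the hands-on coaching",
    "B": "keep the practice volume",
    "C": "add more practice on work-related tasks",
    "D": "redesign practice to be more challenging",
    "E": "rebuild training around real work tasks",
}

answers = ["A", "C", "C", "B", "C", "D", "A", "C"]  # select-all responses
for option, n in Counter(answers).most_common():
    print(f"{n}x option {option}: {ACTIONS[option]}")
```

The tally reads like a to-do list: a cluster of C responses points directly at adding work-related practice, which is exactly the decision the 3.9 average could never surface.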