My AI lesson of the week: The tech isn't the hard part…it's the people!

During my prior work at the Institute for Healthcare Improvement (IHI), we talked a lot about how any technology, whether a new drug, a new vaccine, or a new information tool, would face challenges integrating into the complex human systems that are always at play in healthcare. As I get deeper and deeper into AI, I am not surprised to see that those same challenges exist with this class of technology as well.

It's not the tech that limits us; the real complexity lies in driving adoption across diverse teams, workflows, and mindsets. And it's not implementation alone that will get us to real ROI from AI—it's the changes to our workflows that will generate the value.

That's why we are thinking differently about how to approach change management. We're approaching workflow integration with the same discipline and structure as any core system build. Our framework is designed to reduce friction, build momentum, and align people with outcomes from day one.

Here's the 5-point plan for how we're making that happen with health systems today:

🔹 AI Champion Program: We designate and train department-level champions who lead adoption efforts within their teams. These individuals become trusted internal experts, reducing dependency on central support and accelerating change.

🔹 An AI Academy: We produce concise, role-specific training modules to deliver just-in-time knowledge that helps all users get the most out of the gen AI tools their systems are provisioning. 5-10 minute modules ensure relevance and reduce training fatigue.

🔹 Staged Rollout: We don't go live everywhere at once. Instead, we begin with a few locations/teams, refine based on feedback, and expand with proof points in hand. This staged approach minimizes risk and maximizes learning.

🔹 Feedback Loops: Change is not a one-way push. We host regular forums to capture insights from frontline users, close gaps, and refine processes continuously. Listening and modifying is part of the deployment strategy.

🔹 Visible Metrics: Transparent team- or department-based dashboards track progress and highlight wins. When staff can see measurable improvement—and their role in driving it—engagement improves dramatically.

This isn't just workflow mapping. This is operational transformation—designed for scale, grounded in human behavior, and built to last.

Technology will continue to evolve. But real leverage comes from aligning your people behind the change. We think that's where competitive advantage is created—and sustained.

#ExecutiveLeadership #ChangeManagement #DigitalTransformation #StrategyExecution #HealthTech #OperationalExcellence #ScalableChange
Implementing a Learning Management System
Explore top LinkedIn content from expert professionals.
-
📈 Unlocking the True Impact of L&D: Beyond Engagement Metrics 🚀

I am honored to once again be asked by the LinkedIn Talent Blog to weigh in on this important question. To truly measure the impact of learning and development (L&D), we need to go beyond traditional engagement metrics and look at tangible business outcomes.

🌟 Internal Mobility: Track how many employees advance to new roles or get promoted after participating in L&D programs. This shows that our initiatives are effectively preparing talent for future leadership.

📚 Upskilling in Action: Evaluate performance reviews, project outcomes, and the speed at which employees integrate their new knowledge into their work. Practical application is a strong indicator of training's effectiveness.

🔄 Retention Rates: Compare retention between employees who engage in L&D and those who don't (see the code sketch after this post). A higher retention rate among L&D participants suggests our programs are enhancing job satisfaction and loyalty.

💼 Business Performance: Link L&D to specific business performance indicators like sales growth, customer satisfaction, and innovation rates. Demonstrating a connection between employee development and these outcomes shows the direct value L&D brings to the organization.

By focusing on these metrics, we can provide a comprehensive view of how L&D drives business success beyond just engagement. 🌟

🔗 Link to the blog along with insights from other incredible L&D thought leaders (list of thought leaders below): https://lnkd.in/efne_USa

What other innovative ways have you found effective in measuring the impact of L&D in your organization? Share your thoughts below! 👇

Laura Hilgers Naphtali Bryant, M.A. Lori Niles-Hofmann Terri Horton, EdD, MBA, MA, SHRM-CP, PHR Christopher Lind
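To make the retention comparison above concrete, here is a minimal pandas sketch (assuming a hypothetical HR extract; the column names and figures are made up for illustration):

```python
import pandas as pd

# Hypothetical HR extract: one row per employee, flagging L&D participation
# and whether the employee was still with the company 12 months later.
df = pd.DataFrame({
    "employee_id":    [1, 2, 3, 4, 5, 6],
    "ld_participant": [True, True, True, False, False, False],
    "retained_12mo":  [True, True, False, True, False, False],
})

# Mean of a boolean column = share of True, i.e. the retention rate per cohort.
retention = df.groupby("ld_participant")["retained_12mo"].mean()
print(retention)
```

A gap in favor of participants is the signal described above, though it is correlational; controlling for role, tenure, and performance makes the comparison far more credible.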
-
Every enablement team has the same problem:
- Reps say they want more training.
- You give them a beautiful deck.
- They ghost it like someone who matched with Keith on Tinder.

These folks don't have a content problem as much as they have a consumption problem. Think of it this way: if no one's using the enablement you built, it might as well not exist.

Here's the really scary part: The average org spends $2,000 - $5,000 per rep per year on enablement tools, programs, and L&D support. But fewer than 40% (!!!) of reps consistently complete assigned content OR apply it in live deals.

So what happens?
- You build more content.
- You launch new certifications.
- You roll out another LMS.

And your top reps ignore it all because they're already performing, while your bottom reps binge it and still miss quota. 🕺

We partner with some of the best enablement leaders in the game here at Sales Assembly. Here's how they measure what matters:

1. Time-to-application > Time-to-completion.
Completion tells you who checked a box. Application tells you who changed behavior.
Track:
- Time from training to first recorded usage in a live deal (see the sketch after this post).
- % of reps applying new language in Gong clips.
- Manager feedback within 2 weeks of rollout.
If you can't prove behavior shift, you didn't ship enablement. You shipped content.

2. Manager reinforcement rate.
Enablement that doesn't get reinforced dies fast.
Track:
- % of managers who coach on new concepts within 2 weeks.
- # of coaching conversations referencing new frameworks.
- Alignment between manager deal inspection and enablement themes.
If managers aren't echoing it, reps won't remember it. Simple as that.

3. Consumption by role, segment, and performance tier.
Your top reps may skip live sessions. Fine. But are your mid-performers leaning in?
Slice the data:
- By tenure: Is ramp content actually shortening ramp time?
- By segment: Are enterprise reps consuming the right frameworks?
- By performance: Who's overconsuming vs. underperforming?
Enablement is an efficiency engine...IF you track who's using the gas.

4. Business impact > Feedback scores.
"Helpful" isn't the goal. "Impactful" is.
Track:
- Pre/post win rates by topic.
- Objection handling improvement over time.
- Change in average deal velocity post-rollout.
Enablement should move pipeline...not just hearts. 🥹

tl;dr = if you're not measuring consumption, you're not doing enablement. You're just producing marketing collateral for your own team.

The best programs aren't bigger. They're measured, inspected, and aligned to revenue behavior.
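Time-to-application is straightforward to compute once training completions and first live usage are logged. A minimal sketch, assuming hypothetical event logs (the rep names, dates, and the notion of a "first recorded usage" timestamp are all illustrative):

```python
import pandas as pd

# Hypothetical logs: when each rep finished the training, and when the new
# framework first appeared in a recorded live deal (e.g., a call clip).
training = pd.DataFrame({
    "rep": ["ana", "ben", "cam"],
    "completed": pd.to_datetime(["2024-03-01", "2024-03-01", "2024-03-02"]),
})
first_use = pd.DataFrame({
    "rep": ["ana", "cam"],
    "first_applied": pd.to_datetime(["2024-03-06", "2024-03-20"]),
})

# Left-join so reps who completed training but never applied it stay visible (NaT).
m = training.merge(first_use, on="rep", how="left")
m["days_to_application"] = (m["first_applied"] - m["completed"]).dt.days
print(m[["rep", "days_to_application"]])
print("median days to application:", m["days_to_application"].median())
```

The left join matters: a rep with no application event is exactly the "checked a box, changed nothing" case the post warns about.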
-
Throwing AI tools at your team without a plan is like giving them a Ferrari without driving lessons. AI only drives impact if your workforce knows how to use it effectively.

After:
1 - defining objectives
2 - assessing readiness
3 - piloting use cases with a tiger team

Step 4 is about empowering the broader team to leverage AI confidently.

Boston Consulting Group (BCG) research and Gilbert's Behavior Engineering Model show that high-impact AI adoption is 80% about people, 20% about tech. Here's how to make that happen:

1️⃣ Environmental Supports: Build the Framework for Success
- Clear Guidance: Define AI's role in specific tasks. If a tool like Momentum.io automates data entry, outline how it frees up time for strategic activities.
- Accessible Tools: Ensure AI tools are easy to use and well-integrated. For tools like ChatGPT, create a prompt library so employees don't have to start from scratch (a sketch of one follows after this post).
- Recognition: Acknowledge team members who make measurable improvements with AI, like reducing response times or boosting engagement. Recognition fuels adoption.

2️⃣ Empower with Tiger Team Champions
- Use Tiger/Pilot Team Champions: Leverage your pilot team members as champions who share workflows and real-world results. Their successes give others confidence and practical insights.
- Role-Specific Training: Focus on high-impact skills for each role. Sales might use prompts for lead scoring, while support teams focus on customer inquiries. Keep it relevant and simple.
- Match Tools to Skill Levels: For non-technical roles, choose tools with low-code interfaces or embedded automation. Keep adoption smooth by aligning with current abilities.

3️⃣ Continuous Feedback and Real-Time Learning
- Pilot Insights: Apply findings from the pilot phase to refine processes and address any gaps. Updates based on tiger team feedback benefit the entire workforce.
- Knowledge Hub: Create an evolving resource library with top prompts, troubleshooting guides, and FAQs. Let it grow as employees share tips and adjustments.
- Peer Learning: Champions from the tiger team can host peer-led sessions to show AI's real impact, making it more approachable.

4️⃣ Just-in-Time Enablement
- On-Demand Help Channels: Offer immediate support options, like a Slack channel or help desk, to address issues as they arise.
- Use AI to enable AI: Create custom GPTs that are task- or job-specific to lighten workload and cognitive load. Leverage NotebookLM.
- Troubleshooting Guide: Provide a quick-reference guide for common AI issues, empowering employees to solve small challenges independently.

AI's true power lies in your team's ability to use it well. Step 4 is about support, practical training, and peer learning led by tiger team champions. By building confidence and competence, you're creating an AI-enabled workforce ready to drive real impact.

Step 5 coming next ;)

PS: With my next podcast guest, we talk about what happens when AI does a lot of what humans used to do… Stay tuned.
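One lightweight way to start the prompt library mentioned in point 1 is as structured data that any team can search. A minimal sketch (the entry fields and example prompt are hypothetical, not tied to any particular tool):

```python
from dataclasses import dataclass, field

@dataclass
class PromptEntry:
    title: str
    role: str                         # intended audience, e.g. "sales"
    task: str                         # the job it supports
    template: str                     # prompt text with {placeholders}
    tips: list[str] = field(default_factory=list)

library = [
    PromptEntry(
        title="Lead qualification summary",
        role="sales",
        task="lead scoring",
        template=("Score this lead 1-5 against our criteria: {criteria}. "
                  "Lead notes: {notes}. Explain the score briefly."),
        tips=["Paste raw CRM notes; messy text is fine."],
    ),
]

def find_prompts(role: str, keyword: str) -> list[PromptEntry]:
    """Return entries for a role whose task mentions the keyword."""
    return [p for p in library if p.role == role and keyword in p.task]

for p in find_prompts("sales", "scoring"):
    print(p.title, "->", p.template)
```

Even a shared spreadsheet with these same fields works; the point is that employees never start from a blank prompt.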
-
Of course your onboarding program failed. You built it to serve you — not your customer.

I see it all the time. Companies over-engineer onboarding to hit internal milestones, check off boxes, and declare victory when they say it's "done."

But here's the truth: Just because you wrapped onboarding doesn't mean your customer is ready. They're still dealing with:
- Misalignment
- Confusion
- Internal resistance
- A tool they don't fully understand how to use, let alone adopt.

Here's what's actually going wrong:
1️⃣ You treat onboarding like a training event, not a change process
2️⃣ You deliver the same training to every user, regardless of role
3️⃣ You never define what success actually looks like
4️⃣ You don't empower internal champions
5️⃣ You abandon them the second onboarding is "over"
6️⃣ You think "Go-Live" means "Mission Accomplished"
7️⃣ You ignore resistance to change
8️⃣ You don't communicate enough (or clearly)
9️⃣ You overload them with info
🔟 You never got executive buy-in

Want to fix it? Here's where to start — tomorrow:

✅ Build a post-onboarding success plan
Pre-populate it with the customer's goals and share it before onboarding ends.

✅ Identify and empower champions early
Find them at kickoff. Equip them to lead. Keep them close.

✅ Reinforce the WHY
Stop talking about features. Start connecting usage to business impact.

✅ Monitor early signals and take action
Don't just measure adoption. Share it. Explain it. Adjust as needed.

✅ Keep the learning going
Enablement isn't one-and-done. Build ongoing learning paths and resources that scale.

Let's stop designing onboarding for our own convenience. And start designing it for customer success. Put them back in the driver's seat.

____________________

📣 If you liked my post, you'll love my newsletter. Every week I share learnings, advice and strategies from my experience going from CSM to CCO. Join 12k+ subscribers of The Journey and turn insights into action. Sign up on my profile.
-
The EDPB recently published a report on AI Privacy Risks and Mitigations in LLMs. This is one of the most practical and detailed resources I've seen from the EDPB, with extensive guidance for developers and deployers.

The report walks through privacy risks associated with LLMs across the AI lifecycle, from data collection and training to deployment and retirement, and offers practical tips for identifying, measuring, and mitigating risks.

Here's a quick summary of some of the key mitigations mentioned in the report:

For providers:
• Fine-tune LLMs on curated, high-quality datasets and limit the scope of model outputs to relevant and up-to-date information.
• Use robust anonymisation techniques and automated tools to detect and remove personal data from training data.
• Apply input filters and user warnings during deployment to discourage users from entering personal data, as well as automated detection methods to flag or anonymise sensitive input data before it is processed (a sketch of this idea follows after this post).
• Clearly inform users about how their data will be processed through privacy policies, instructions, warnings or disclaimers in the user interface.
• Encrypt user inputs and outputs during transmission and storage to protect data from unauthorised access.
• Protect against prompt injection and jailbreaking by validating inputs, monitoring LLMs for abnormal input behaviour, and limiting the amount of text a user can input.
• Apply content filtering and human review processes to flag sensitive or inappropriate outputs.
• Limit data logging and provide configurable options to deployers regarding log retention.
• Offer easy-to-use opt-in/opt-out options for users whose feedback data might be used for retraining.

For deployers:
• Enforce strong authentication to restrict access to the input interface and protect session data.
• Mitigate adversarial attacks by adding a layer for input sanitisation and filtering, and by monitoring and logging user queries to detect unusual patterns.
• Work with providers to ensure they do not retain or misuse sensitive input data.
• Guide users to avoid sharing unnecessary personal data through clear instructions, training and warnings.
• Educate employees and end users on proper usage, including the appropriate use of outputs and phishing techniques that could trick individuals into revealing sensitive information.
• Ensure employees and end users avoid overreliance on LLMs for critical or high-stakes decisions without verification, and ensure outputs are reviewed by humans before implementation or dissemination.
• Securely store outputs and restrict access to authorised personnel and systems.

This is a rare example where the EDPB strikes a good balance between practical safeguards and legal expectations. Link to the report included in the comments.

#AIprivacy #LLMs #dataprotection #AIgovernance #EDPB #privacybydesign #GDPR
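As one illustration of the "automated detection methods to flag or anonymise sensitive input data" mitigation, here is a minimal Python sketch. The regexes are deliberately crude placeholders; real deployments typically rely on dedicated PII-detection libraries or NER models:

```python
import re

# Illustrative patterns only -- production systems use far more robust detectors.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_input(text: str) -> tuple[str, list[str]]:
    """Replace likely personal data with placeholders before the prompt is
    sent to the model; return the redacted text and the categories found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, found

prompt = "Email jane.doe@example.com or call +1 (555) 010-9999 about the claim."
clean, hits = redact_input(prompt)
print(clean)                            # placeholders instead of raw contact data
if hits:
    print("warn user: detected", hits)  # pairs with the user-warning mitigation
```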
-
✳ Integrating AI, Privacy, and Information Security Governance ✳

Your approach to implementation should:

1. Define Your Strategic Context
Begin by mapping out the internal and external factors impacting AI ethics, security, and privacy. Identify key regulations, stakeholder concerns, and organizational risks (ISO42001, Clause 4; ISO27001, Clause 4; ISO27701, Clause 5.2.1). Your goal should be to create unified objectives that address AI's ethical impacts while maintaining data protection and privacy.

2. Establish a Multi-Faceted Policy Structure
Policies need to reflect ethical AI use, secure data handling, and privacy safeguards. Ensure that policies clarify responsibilities for AI ethics, data security, and privacy management (ISO42001, Clause 5.2; ISO27001, Clause 5.2; ISO27701, Clause 5.3.2). Your top management must lead this effort, setting a clear tone that prioritizes both compliance and integrity across all systems (ISO42001, Clause 5.1; ISO27001, Clause 5.1; ISO27701, Clause 5.3.1).

3. Create an Integrated Risk Assessment Process
Risk assessments should cover AI-specific threats (e.g., bias), security vulnerabilities (e.g., breaches), and privacy risks (e.g., PII exposure) simultaneously (ISO42001, Clause 6.1.2; ISO27001, Clause 6.1; ISO27701, Clause 5.4.1.2). By addressing these risks together, you can ensure a more comprehensive risk management plan that aligns with organizational priorities (see the sketch after this post for one way to record such risks).

4. Develop Unified Controls and Documentation
Documentation and controls must cover AI lifecycle management, data security, and privacy protection. Procedures must address ethical concerns and compliance requirements (ISO42001, Clause 7.5; ISO27001, Clause 7.5; ISO27701, Clause 5.5.5). Ensure that controls overlap, such as limiting access to AI systems to authorized users only, ensuring both security and ethical transparency (ISO27001, Annex A.9; ISO42001, Clause 8.1; ISO27701, Clause 5.6.3).

5. Coordinate Integrated Audits and Reviews
Plan audits that evaluate compliance with AI ethics, data protection, and privacy principles together (ISO42001, Clause 9.2; ISO27001, Clause 9.2; ISO27701, Clause 5.7.2). During management reviews, analyze the performance of all integrated systems and identify improvements (ISO42001, Clause 9.3; ISO27001, Clause 9.3; ISO27701, Clause 5.7.3).

6. Leverage Technology to Support Integration
Use GRC tools to manage risks across AI, information security, and privacy. Integrate AI for anomaly detection, breach prevention, and privacy safeguards (ISO42001, Clause 8.1; ISO27001, Annex A.14; ISO27701, Clause 5.6).

7. Foster an Organizational Culture of Ethics, Security, and Privacy
Training programs must address ethical AI use, secure data handling, and privacy rights simultaneously (ISO42001, Clause 7.3; ISO27001, Clause 7.2; ISO27701, Clause 5.5.3). Encourage a mindset where employees actively integrate ethics, security, and privacy into their roles (ISO27701, Clause 5.5.4).
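To illustrate point 3, here is a minimal sketch of what a unified risk-register entry might look like, with one record scoring AI, security, and privacy dimensions together (the field names and scoring rule are hypothetical; the standards do not mandate any particular format):

```python
from dataclasses import dataclass

@dataclass
class IntegratedRisk:
    risk_id: str
    description: str
    ai_impact: int        # e.g., bias or misuse severity, 1-5
    security_impact: int  # e.g., breach severity, 1-5
    privacy_impact: int   # e.g., PII exposure severity, 1-5
    likelihood: int       # 1-5

    def score(self) -> int:
        # Worst dimension times likelihood; organizations pick their own method.
        return max(self.ai_impact, self.security_impact,
                   self.privacy_impact) * self.likelihood

risks = [
    IntegratedRisk("R-01", "Training data contains unreviewed PII", 3, 2, 5, 4),
    IntegratedRisk("R-02", "Model outputs biased recommendations", 5, 1, 2, 3),
]

# Review the register worst-first, so one assessment covers all three lenses.
for r in sorted(risks, key=IntegratedRisk.score, reverse=True):
    print(r.risk_id, r.score(), r.description)
```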
-
Data privacy might seem like a box to tick, but it's much more than that. It's the backbone of trust between you and your users. Here are a few ways to stay on top of it:

+ Encrypt sensitive data from day one to prevent unauthorized access (see the sketch after this post).
+ Audit your data storage and access systems regularly to catch vulnerabilities before they become issues.
+ Be transparent about how you collect, store, and use data. Clear privacy policies go a long way in building user confidence.
+ Stay compliant with regulations like GDPR and CCPA. It's not optional - it's mandatory.
+ Train your team on the importance of data security, ensuring everyone from developers to support staff understands their role in safeguarding information.

It's easy to overlook these tasks when you're focused on growth. But staying proactive with data privacy isn't just about following laws - it's about protecting your reputation and building long-term relationships with your users. Don't let what seems monotonous now turn into a crisis later. Stay ahead.

#DataPrivacy #AppSecurity #GDPR #Trust #DataProtection #StartupTips #TechLeaders #CyberSecurity #UserTrust #AppDevelopment
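For the first point, encrypting at rest can start very small. A minimal sketch using the widely used `cryptography` package (the data and key handling here are illustrative; in production the key lives in a secrets manager, never in code):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # illustration only: load from a secrets manager
f = Fernet(key)

# Encrypt before the value ever touches disk or the database...
token = f.encrypt(b"user_email=jane@example.com")

# ...and decrypt only at the point where the plaintext is actually needed.
plaintext = f.decrypt(token)
assert plaintext == b"user_email=jane@example.com"
print("stored form:", token[:24], b"...")
```

Fernet provides authenticated encryption, so tampered ciphertext fails loudly on decrypt instead of silently returning garbage.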
-
Two of the biggest problems I hear about in leadership development:
1/ "Learning doesn't stick."
2/ "We don't have a culture of learning."

BOTH of these problems can be solved. The key is to create a "learning ecosystem."

I'm not saying it's easy... It's certainly not something you can do overnight. But these 7 tactics can go a long way:

1/ Hold a monthly community of practice
Get your audience together each month (on Zoom). Use the call to:
- reinforce key learnings
- forge peer connections
- give everyone a chance to ask Qs & share challenges
- facilitate practice

2/ Create a Resource Vault
Store learning resources in one live folder. Keep your docs updated in real time:
- Insert new examples
- Take & apply real-time feedback from learners
- Create new resources based on what learners need
The goal here is to make the vault a place your learners return to often.

3/ Send Weekly Behavioral Nudges
Weekly behavioral nudges:
- are a simple way to double or triple the value of an existing assessment or training program
- can take a one-and-done program/assessment and add a year-long tail of exercises and key insights
Nudges = STICKY learning

4/ Give Every Learner Access to a REAL Coach
Use message-based coaching to:
- expand the number of employees you can offer coaching to
- meet employees at the exact moment that they need help

5/ Create a Peer Learning Network
Peer learning tech enables collaboration in new ways. (And in ways that in-person can't.)
Example: One leadership development team at a big tech company used a simple Google doc where learners shared questions, insights, and examples from over a dozen locations. As their doc grew…
- themes emerged
- ideas intersected
- they had a running record of key info

6/ Deliver Microlearning in the Flow of Work
Microlearning:
- makes learning available on-demand (open book test)
- helps increase repetition to build habits
- brings learning into the flow of work

7/ Trigger Organic Conversations
You might:
- use conversational guides (between peers or between learners & managers)
- use prompts in your peer learning network
- hold breakouts in your community of practice
The idea is that over time, your learners will naturally use the language and ideas from your learning in their daily conversations.

____

Apply these 7 tactics (or even just a few) and you'll be well on your way to creating a learning ecosystem. One that will:
1/ take in new topics and spit out behavior change
2/ generate more feedback than you can collect
3/ solidify a culture of learning

What other components do you include in your programs?

#leadershipdevelopment
-
You're not just delivering a project. You're delivering a behavior shift.

A new system, process, or tool means nothing if no one uses it. Yet most project plans stop at launch, not adoption.

If you're a PM, you're also a change manager. Here are 3 tips to build for behavior AND delivery:

☝ Define what's changing for the end user
Every project introduces friction. New steps. New tools. New habits. Map the real impact. Not just the shift in duties, but the human change.

✌ Bring people in early
Change lands smoother when people see themselves in the solution. Co-design communications + plans with users. This will make them champions rather than critics.

🤟 Reinforce even after launch
The project isn't done at go-live. Change management doesn't just happen at the end either. It's a living process, so plan for training, support, feedback loops, and follow-ups. That's where real adoption happens.

Deliverables don't manage change. People do. Make sure to build behavior change into your projects so they're successful. 🤙