Top-performing AI users are twice as likely to quit. Here's why, and how to stop it:

While AI tools are dramatically increasing productivity, they're also dramatically increasing burnout. According to new findings from The Upwork Research Institute, 88% of top AI performers report burnout, and they are 2x as likely to leave their jobs.

The reality is that AI is quietly eroding
↳ Human connection
↳ Trust
↳ Purpose
all of which are critical to long-term success.

But it doesn't have to be that way. Proactive leaders can take advantage of AI's massive contributions while still supporting motivated, engaged, and thriving teams. Here's how to start:

1. Make it okay to log off
↳ AI tools are always available, which makes people feel like they should be too
↳ Ex: Say, "You don't need to respond outside work hours," and back it up

2. Make time for real connection
↳ People now say they trust AI more than coworkers, putting connection at risk
↳ Ex: Start team meetings with shoutouts and appreciation, not just metrics

3. Notice the good stuff out loud
↳ AI delivers quick results, but people need to feel appreciated to stay engaged
↳ Ex: Ask, "What's something you're proud of this week?" in 1:1s

4. Give people time to think
↳ AI increases speed, but without space to think, quality suffers
↳ Ex: Block two hours daily for deep work across the team

5. Remind people why their work matters
↳ AI can make work feel transactional, draining the meaning that fuels motivation
↳ Ex: Share a customer story to remind people what they're part of

6. Make it safe to speak up
↳ AI moves fast, but without safety, speed leads to silence, not innovation
↳ Ex: Encourage questions even when things are moving quickly

7. Share the full picture
↳ AI tools give fast answers, but people need context to feel connected
↳ Ex: Share how and why key decisions were made

8. Celebrate what only humans can do
↳ AI optimizes for output; humans thrive when their unique gifts are seen
↳ Ex: Recognize creativity, empathy, or leadership, not just speed or output

9. Show it's okay to struggle
↳ AI can make it seem like everyone's cruising, making people hide their own challenges
↳ Ex: Share your confusion or frustrations so others feel safe doing the same

10. Slow down when it matters
↳ AI can answer fast, but it takes human judgment to ask the right questions
↳ Ex: Praise someone who pauses to think critically instead of rushing

11. Find moments to build trust
↳ Trust is built through action, but AI has made human follow-through less visible
↳ Ex: Send the message, make the call, follow up. Show them they can count on you

AI's short-term gains are real and impressive. But only the leaders who invest in people will make those gains last.

Are you seeing burnout rise as AI use increases?

---
♻️ Repost to help more teams thrive with AI. And for even more tips on how to strengthen workplaces in the AI era, check out Upwork's complete report: http://spr.ly/GeorgeStern

#UpworkPartner #FutureOfWork #AI
How to Use AI While Maintaining Engagement
Explore top LinkedIn content from expert professionals.
Summary
Using AI while maintaining engagement combines technology’s capabilities with genuine human interaction to ensure productivity and connection coexist in workplaces. It's about leveraging AI to enhance tasks while preserving trust, motivation, and emotional intelligence within teams.
- Create human connections: Balance AI’s efficiencies with regular check-ins and team-building activities that prioritize trust and recognition among colleagues.
- Define AI's role clearly: Specify where AI ends and human judgment begins to ensure transparency and foster confidence in decision-making processes.
- Use AI to support, not replace: Delegate routine, repetitive tasks to AI, freeing time for meaningful work that requires creativity, empathy, and critical thinking.
-
Most AI implementations can be technically flawless, yet fundamentally broken. Here's why:

Consider this scenario: A company implemented a fully automated AI customer service system and reduced ticket resolution time by 40%. What happens to the satisfaction scores? If they drop by 35%, is the reduction in response times worth celebrating?

This exemplifies the trap many leaders fall into: optimizing for efficiency while forgetting that business, at its core, is fundamentally human. Customers don't always just want fast answers; they want to feel heard and understood.

The jar metaphor I often use with leadership teams: Ever tried opening a jar with the lid screwed on too tight? No matter how hard you twist, it won't budge. That's exactly what happens when businesses pour resources into technology but forget about the people who need to use it.

The real key to progress isn't choosing between technology OR humanity. It's creating systems where both work together, responsibly.

So, here are 3 practical steps for leaders and businesses:

1. Keep customer interactions personal: Automation is great, but ensure people can reach humans when it matters.
2. Let technology do the heavy lifting: AI should handle repetitive tasks so your team can focus on strategy, complex problems, and relationships.
3. Lead with heart, not just data (and I'm a data person saying this 🤣): Technology streamlines processes, but it can't build trust or inspire people.

So, your action step this week: Identify one process where technology and human judgment intersect. Ask yourself:
- Is it clear where AI assistance ends and human decision-making begins?
- Do your knowledge workers feel empowered or threatened by technology?
- Is there clear human accountability for final decisions?

The magic happens at the intersection. Because a strong culture and genuine human connection will always be the foundation of a great organization.

What's your experience balancing tech and humanity in your organization?
-
I'm knee-deep this week putting the finishing touches on my new Udemy course, "AI for People Managers: Lead with confidence in an AI-enabled workplace."

After working with hundreds of managers cautiously navigating AI integration, here's what I've learned: the future belongs to leaders who can thoughtfully blend AI capabilities with genuine human wisdom, connection, and compassion.

Your people don't need you to be the AI expert in the room; they need you to be authentic, caring, and completely committed to their success. No technology can replicate that. And no technology SHOULD.

The managers who are absolutely thriving aren't necessarily the most tech-savvy ones. They're the leaders who understand how to use AI strategically to amplify their existing strengths while keeping clear boundaries around what must stay authentically human: building trust, navigating emotions, making tough ethical calls, having meaningful conversations, and inspiring people to bring their best work.

Here's the most important takeaway: as AI handles more routine tasks, your human leadership skills become MORE valuable, not less. The economic value of emotional intelligence, empathy, and relationship building skyrockets when machines take over the mundane stuff.

Here are 7 principles for leading humans in an AI-enabled world:

1. Use AI to create more space for real human connection, not to avoid it
2. Don't let AI handle sensitive emotions, ethical decisions, or trust-building moments
3. Be transparent about your AI experiments while emphasizing that human judgment (that's you, my friend) drives your decisions
4. Help your people develop uniquely human skills that complement rather than compete with technology. (Let me know how I can help. This is my jam.)
5. Own your strategic decisions completely. Don't hide behind AI recommendations when things get tough
6. Build psychological safety so people feel supported through technological change, not threatened by it
7. Remember your core job hasn't changed. You're still in charge of helping people do their best work and grow in their careers.

AI is just a powerful new tool to help you do that job better, and to help your people do theirs better. Make sure it's the REAL you showing up as the leader you are.

#AI #coaching #managers
-
⚙️ AI is transforming the way we work. But leadership? That still starts with people.

We're in the midst of an AI revolution. Tech is moving fast. Automation is accelerating. And leaders are being pushed to integrate these tools, fast.

But here's what's also happening:
Teams are unsure where they fit.
Burnout is creeping in.
Human connection is thinning.

Leaders today face a unique dual mandate: embrace AI, upskill teams, and stay competitive, while leading with empathy, care, and adaptability.

Here are 8 steps I use with my executive clients to lead through this kind of change with clarity and confidence:

1. Acknowledge the Disruption: Start by naming the shift. Teams need to know you see the change and are leading through it, not avoiding it.
2. Lead with Empathy: Check in with your team to see how they are coping. Emotional clarity builds trust and resilience.
3. Upskill, Don't Just Automate: Invest in reskilling. AI isn't here to replace people; it's here to enhance them.
4. Model AI Literacy: Be the first to learn and try new tools. Your curiosity sets the tone.
5. Encourage Dialogue: Let teams ask questions, explore new tools, and even fail. Innovation needs room to breathe.
6. Communicate Transparently: Share what you know, and what you're still figuring out. Clarity over certainty builds credibility.
7. Balance Performance with Well-Being: Don't just measure output. Pay attention to energy, burnout signals, and team cohesion.
8. Stay Anchored to Purpose: Remind people why the work matters. AI can improve outcomes, but it's human meaning that drives real engagement.

💡 The tools may be new, but the best leadership is still rooted in trust, communication, and clarity of purpose.

If you're navigating this kind of landscape, I support leaders and teams to adapt with purpose and performance in mind.

📩 To learn more, email me at mc@mccoachingnyc.com.

#AIleadership #executivecoaching #changemanagement #futureofwork #wellbeing #digitaltransformation #peoplefirst
-
Interoperability. Augmentation. Human-in-the-Loop. The AI Trifecta Most Teams Miss.

AI isn't replacing us. It's redefining how we lead, build, and solve, with humans at the center.

As a CTO, I've seen firsthand that the most transformative AI solutions don't sideline people; they supercharge them. We've deployed GenAI copilots, real-time analytics, and predictive systems across global teams. But not to chase buzzwords. We did it to reduce churn, accelerate insights, and empower decision-makers, from engineers to execs.

And here's what I've learned: the best AI design is grounded in 3 non-negotiables:

🧩 Interoperability: If your systems don't speak to each other, and to humans, you're not scaling. Open APIs, clean data, and integration-first thinking are essential.

🚀 Augmentation: AI should be your team's copilot, not their replacement. Done right, it boosts productivity, speeds up feedback loops, and elevates performance across the board.

🧠 Human-in-the-loop: No substitute for context and judgment exists, especially in high-stakes environments. Keep humans engaged where it matters most.

I say this often: AI without human context is just a hammer in search of a nail. Let's design systems that make us more human, not less.

✅ Rethink your roadmap.
✅ Are you building for augmentation or automation?
✅ Do all 3 pillars show up in your AI strategy?

Which pillar do you think gets overlooked most in practice? Let's challenge each other to build better.

#CTOThoughts #AILeadership #DigitalStrategy #HumanCenteredAI #GenAI #EnterpriseInnovation #EthicalAI #FutureOfWork #HumanInTheLoop #Interoperability #AugmentedIntelligence #TechLeadership #ProductStrategy #DigitalTransformation
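The human-in-the-loop pillar above is often implemented as confidence-based routing: the model acts alone only when it is sure, and defers to a person otherwise. A minimal sketch, assuming a hypothetical `Prediction` type and an illustrative 0.85 threshold (neither comes from the post):

```python
from dataclasses import dataclass


@dataclass
class Prediction:
    """A model output with its confidence score (illustrative type)."""
    label: str
    confidence: float  # in [0, 1]


def route(pred: Prediction, threshold: float = 0.85) -> str:
    """Auto-apply confident predictions; escalate the rest to a human.

    The 0.85 threshold is an assumed example value; in practice it is
    tuned per use case and per the cost of a wrong automated decision.
    """
    if pred.confidence >= threshold:
        return f"auto:{pred.label}"          # AI acts alone
    return f"human_review:{pred.label}"      # human stays in the loop


# High-stakes or ambiguous cases fall through to a reviewer:
print(route(Prediction("approve", 0.95)))  # handled automatically
print(route(Prediction("approve", 0.60)))  # queued for human review
```

The design choice worth noting: the threshold is a policy knob, not a model property, which keeps the "where humans stay engaged" decision in the hands of leadership rather than buried in the model.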
-
If you've found yourself caught in the swirl of catastrophic headlines:

"AI will kill critical thinking."
"Screens are ruining childhood."
"Teachers will be replaced by 2030."

Take a breath. Get above the silo.

The truth is: education isn't ruined, it's being rewritten. And the best way to shape what's next isn't panic. It's purpose.

You don't need to overhaul everything overnight. But you do need to start taking intentional steps now. Here are five actions you can take today to design for balance, equity, and human connection before reaction becomes policy.

Problem → Purpose → Solution: Don't Let Curiosity Be Collateral

Problem: We're fast-tracking AI into schools without asking: Whose dreams are we designing for? Too often, we focus on teaching how to use tools before we've given students the space to imagine why they might need them.

Purpose: To ensure that the tools we adopt amplify curiosity, not replace it. To remember that the spark begins with a question, not an answer.

Solution: Actions That Protect Curiosity and Build Capacity

1. 𝐀𝐮𝐝𝐢𝐭 𝐲𝐨𝐮𝐫 𝐜𝐮𝐫𝐫𝐢𝐜𝐮𝐥𝐮𝐦 𝐟𝐨𝐫 𝐞𝐱𝐩𝐥𝐨𝐫𝐚𝐭𝐢𝐨𝐧 𝐠𝐚𝐩𝐬
→ Identify where students are being asked to consume vs. create.
→ Integrate inquiry-based learning models where students investigate real-world careers and questions before applying AI tools.

2. 𝐈𝐧𝐭𝐫𝐨𝐝𝐮𝐜𝐞 “𝐃𝐫𝐞𝐚𝐦-𝐭𝐨-𝐓𝐨𝐨𝐥” 𝐦𝐚𝐩𝐩𝐢𝐧𝐠 𝐚𝐜𝐭𝐢𝐯𝐢𝐭𝐢𝐞𝐬
→ Have students first identify a career or passion, then explore how AI might enhance their journey.
→ Reinforces purpose-first learning rather than tool-first exposure.

3. 𝐑𝐞𝐝𝐞𝐬𝐢𝐠𝐧 𝐝𝐢𝐠𝐢𝐭𝐚𝐥 𝐥𝐢𝐭𝐞𝐫𝐚𝐜𝐲 𝐩𝐫𝐨𝐠𝐫𝐚𝐦𝐬 𝐭𝐨 𝐜𝐞𝐧𝐭𝐞𝐫 𝐬𝐭𝐮𝐝𝐞𝐧𝐭 𝐠𝐨𝐚𝐥𝐬
→ Move beyond "how to use AI" to "how to use AI with intention."
→ Frame tech skills within a context of self-awareness, ethics, and ambition.

4. 𝐇𝐨𝐬𝐭 𝐬𝐭𝐮𝐝𝐞𝐧𝐭-𝐥𝐞𝐝 𝐬𝐡𝐨𝐰𝐜𝐚𝐬𝐞𝐬 𝐨𝐟 𝐟𝐮𝐭𝐮𝐫𝐞 𝐜𝐚𝐫𝐞𝐞𝐫 𝐩𝐚𝐭𝐡𝐬
→ Let students present how they'd use AI in the job of their dreams, whether it's an astronaut, artist, or activist.
→ Support them with mentorship and interdisciplinary exploration.

5. 𝐄𝐬𝐭𝐚𝐛𝐥𝐢𝐬𝐡 𝐬𝐭𝐮𝐝𝐞𝐧𝐭 𝐚𝐝𝐯𝐢𝐬𝐨𝐫𝐲 𝐠𝐫𝐨𝐮𝐩𝐬 𝐨𝐧 𝐭𝐞𝐜𝐡 𝐚𝐝𝐨𝐩𝐭𝐢𝐨𝐧
→ Involve students in reviewing and giving feedback on AI tools your school is considering.
→ Teach civic engagement that ensures AI decisions are grounded in lived experience.

Protect open-ended inquiry in curriculum design. Center student voice in AI adoption strategies.

#EducationalLeadership #AIinEducation #EthicalAI #FutureofEducation #Superintendents #Teachers #Edtech #Strategy #Implementation #Purpose #BrightMinds
-
🤖 AI can enhance your coaching, but it can't replace conversation, context, or courage.

Yes, it can analyze talk time, word choice, sentiment. It can spot when a rep speaks too much or avoids pricing.

But here's what AI can't do:
It can't feel tension in a rep's voice.
It can't notice a shift in posture during tough feedback.
It can't sit in silence when a rep says, "I don't think I'm good enough anymore."

Let's break it down.

🔎 Where AI helps:
* Surfacing trends in talk tracks
* Highlighting rep behavior patterns at scale
* Speeding up feedback loops for repetitive issues

Coach: "AI shows you're avoiding direct language during budget talks. Let's dissect that moment together."

But that's just the first 10%. The rest is human coaching.

💬 Where AI hurts:
* Coaching becomes transactional: "Fix the red box."
* Reps start performing for the tool instead of selling with intent
* Emotional nuance is missed completely

Coach: "The data says this deal is fine. But you sound checked out. What's really going on?"

🧠 Framework to integrate AI into coaching without losing humanity:
1. Use AI to spot patterns, not as the final answer
2. Ground every insight in a real conversation
3. Prioritize emotion, energy, and context over checklists
4. Ask better questions, not just provide faster answers

Coach: "Why do you think you drop confidence in second meetings?"
Rep: "That's where I start questioning myself."
Coach: "That's not a scripting issue. That's an identity edge we're going to strengthen."

AI doesn't build trust. It doesn't challenge limiting beliefs. It doesn't remind someone who they are when they forget. Only a great coach does that.

Follow me for more B2B sales insights. Repost if this resonates.

Subscribe to my B2B Sales Sorcery Newsletter here: https://lnkd.in/dgdPAd3h
Explore free B2B sales playbooks: https://lnkd.in/dg2-Vac6
-
Stop asking AI tools like ChatGPT, Gemini, or Claude to edit and rewrite your marketing copy, emails, or other assets. Instead, use them as collaborative partners to help you improve the quality of your work. Here's how 👇

Ask your AI tool to review your work as the editor you want it to be. Are you looking for copy edits for grammar? Changes to stay on brand? Adaptation for a specific vertical? The perspective of your target persona? Give it specific guidance and the skills to be that exact editor.

Then provide all of the appropriate context needed to do a great job. Share your goals, audience, brand guidelines, purpose, and/or whatever else a human would need to know to do a good job on the edits.

Now comes the magic: request that the AI review your copy for suggested changes. Ask it to give you three things for every edit it suggests:
- The original copy you wrote.
- Its suggested revisions.
- The reasoning behind each change it suggested.

This method works so much better than just asking the AI to rewrite your copy and make it better, because when you edit using my before/after/why framework you'll get...

1️⃣ Higher-quality edits
When the AI is required to explain its suggestions, it avoids making changes just for the sake of making changes. This leads to more thoughtful, meaningful, high-quality improvements.

2️⃣ YOU stay connected
Applying the AI's suggestions yourself keeps you actively involved. You won't accidentally become complacent (it's so easy with AI!) and blindly accept poor edits that degrade rather than enhance the quality of the work.

3️⃣ Critical thinking helps a lot
Understanding the reasoning behind a suggestion helps you decide if you agree with the logic. Even if you don't love the execution, you can adopt the thinking behind the suggestion and adjust the execution to fit your voice and goals.

4️⃣ The AI may catch edits you might overlook
AI can flag things you didn't notice, giving you the chance to refine them in your own way.

This approach works especially well with tools like Gemini in Google Docs, Copilot in Word, or ChatGPT and Claude in a chatbot environment. While it might take a little longer to apply the suggestions, the payoff in quality is well worth it. You'll get higher-quality results and a deeper understanding of your own work.

We talk a lot about AI efficiency gains, but AI isn't just about saving time. One of the biggest reasons to build AI skills is that they improve the quality, not just the speed, of work. In fact, CMOs whose marketing teams I've trained in AI skills over the last 2 years frequently tell me post-training that they can really see who is actively using AI because of the dramatic increase in the quality of their work (and how much better it is than other people's now)!

So if you've been asking ChatGPT to rewrite your copy for you, try this method on your next project instead, and see how much better it is!
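The before/after/why workflow above boils down to a structured review prompt. A minimal sketch of how such a prompt might be assembled; the template wording, function name, and parameters are illustrative assumptions, not the author's exact prompt:

```python
def build_edit_prompt(copy_text: str, editor_role: str, context: str) -> str:
    """Assemble a review prompt asking for original / revision / reasoning.

    Illustrative template: the exact phrasing is an assumption, but it
    encodes the three-part structure the post describes.
    """
    return (
        f"Act as {editor_role}.\n"
        f"Context: {context}\n\n"
        "Review the copy below. Do NOT rewrite it wholesale. "
        "For every edit you suggest, give three things:\n"
        "1. The original passage.\n"
        "2. Your suggested revision.\n"
        "3. The reasoning behind the change.\n\n"
        f"Copy to review:\n{copy_text}"
    )


# Example usage with hypothetical inputs; the result is pasted into
# (or sent to) whichever AI tool you use:
prompt = build_edit_prompt(
    copy_text="Our tool help teams ship faster.",
    editor_role="a copy editor focused on grammar and brand voice",
    context="B2B SaaS landing page aimed at engineering managers",
)
print(prompt)
```

Keeping role, context, and the three required outputs as separate slots mirrors the post's advice: tell the AI who to be, give it the context a human editor would need, then constrain the output format so every suggestion arrives with its reasoning attached.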
-
When it comes to using AI to match candidates with jobs, more accurate/predictive AI is better, right? Not necessarily. One data-driven study suggests the answer is no.

I recently read Co-Intelligence: Living and Working with AI by Ethan Mollick, which I highly recommend. In the book, Ethan features a study by Fabrizio Dell'Acqua titled "Falling Asleep at the Wheel: Human/AI Collaboration in a Field Experiment on HR Recruiters," in which 181 experienced recruiters were hired to collectively review nearly 8,000 resumes for a software engineering position. Of note: the recruiters were incentivized to be as accurate as possible.

The recruiters received algorithmic recommendations about the job candidates, but the quality of these AI recommendations was randomized between 1) perfectly predictive AI; 2) high-performing AI; 3) lower-performing AI; and 4) no AI. Of critical importance to the study, recruiters were aware of the type of AI assistance they would be receiving.

Key findings include:

1. Recruiters with higher-quality AI performed worse in their assessments of candidates in relation to the job than those using lower-quality AI. They spent less time and effort on their evaluations of each candidate, and they tended to blindly trust the AI recommendations.

2. Recruiters with lower-quality AI "exerted more effort and spent more time evaluating the resumes, and were less likely to automatically select the AI-recommended candidate. The recruiters collaborating with low-quality AI learned to interact better with their assigned AI and improved their performance."

These findings suggest that when users have access to high-quality AI (or at least believe they do), they are indeed in danger of "falling asleep at the wheel": becoming overly reliant on the AI and reducing their attention, effort, and critical thinking, which can negatively impact outcomes for all involved.

As we increasingly integrate AI into work, it's important to maintain a balance between technological support and human skill and expertise. Instead of aiming for (or claiming to have!) "perfect" AI, perhaps our goal should be to develop systems that enhance human decision-making and keep users actively engaged and thinking critically. What do you think?

Check out the full details of the study here: https://lnkd.in/eGaTmTEi

#AI #matching #criticalthinking #futureofwork