Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output.

Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains.

You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

Here's code intended for task X: [previously generated code]
Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it.

Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements.

This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing code, writing text, and answering questions. And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases, or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement.

Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses.

Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications' results. If you're interested in learning more about reflection, I recommend:
- Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
- Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
- CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)

[Original text: https://lnkd.in/g4bTuWtU ]
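To make the generate/critique/rewrite loop concrete, here is a minimal Python sketch. It assumes a placeholder `llm(prompt)` helper standing in for whatever chat-completion API you use, and the prompt strings simply paraphrase the wording above; none of this code is from the original post.

```python
def llm(prompt: str) -> str:
    """Placeholder: wire this up to your chat-completion API of choice."""
    raise NotImplementedError

def reflect_and_refine(task: str, rounds: int = 2) -> str:
    # Step 1: generate a first draft directly.
    draft = llm(f"Write code to carry out the following task:\n{task}")
    for _ in range(rounds):
        # Step 2: ask the model to critique its own output.
        critique = llm(
            f"Here's code intended for this task:\n{draft}\n\n"
            "Check the code carefully for correctness, style, and efficiency, "
            "and give constructive criticism for how to improve it."
        )
        # Step 3: ask the model to rewrite the code using that feedback.
        draft = llm(
            f"Task: {task}\n\nPrevious code:\n{draft}\n\n"
            f"Reviewer feedback:\n{critique}\n\n"
            "Rewrite the code, addressing the feedback."
        )
    return draft
```

The same loop extends naturally to the tool-assisted variant: replace the critique prompt with the output of unit tests or a web search, then feed those results back as the feedback.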
Evaluating Workflows for Efficiency
-
Over the last year, I’ve seen many people fall into the same trap: They launch an AI-powered agent (chatbot, assistant, support tool, etc.)… But only track surface-level KPIs — like response time or number of users.

That’s not enough. To create AI systems that actually deliver value, we need 𝗵𝗼𝗹𝗶𝘀𝘁𝗶𝗰, 𝗵𝘂𝗺𝗮𝗻-𝗰𝗲𝗻𝘁𝗿𝗶𝗰 𝗺𝗲𝘁𝗿𝗶𝗰𝘀 that reflect:
• User trust
• Task success
• Business impact
• Experience quality

This infographic highlights 15 𝘦𝘴𝘴𝘦𝘯𝘵𝘪𝘢𝘭 dimensions to consider:
↳ 𝗥𝗲𝘀𝗽𝗼𝗻𝘀𝗲 𝗔𝗰𝗰𝘂𝗿𝗮𝗰𝘆 — Are your AI answers actually useful and correct?
↳ 𝗧𝗮𝘀𝗸 𝗖𝗼𝗺𝗽𝗹𝗲𝘁𝗶𝗼𝗻 𝗥𝗮𝘁𝗲 — Can the agent complete full workflows, not just answer trivia?
↳ 𝗟𝗮𝘁𝗲𝗻𝗰𝘆 — Response speed still matters, especially in production.
↳ 𝗨𝘀𝗲𝗿 𝗘𝗻𝗴𝗮𝗴𝗲𝗺𝗲𝗻𝘁 — How often are users returning or interacting meaningfully?
↳ 𝗦𝘂𝗰𝗰𝗲𝘀𝘀 𝗥𝗮𝘁𝗲 — Did the user achieve their goal? This is your north star.
↳ 𝗘𝗿𝗿𝗼𝗿 𝗥𝗮𝘁𝗲 — Irrelevant or wrong responses? That’s friction.
↳ 𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗗𝘂𝗿𝗮𝘁𝗶𝗼𝗻 — Longer isn’t always better — it depends on the goal.
↳ 𝗨𝘀𝗲𝗿 𝗥𝗲𝘁𝗲𝗻𝘁𝗶𝗼𝗻 — Are users coming back 𝘢𝘧𝘵𝘦𝘳 the first experience?
↳ 𝗖𝗼𝘀𝘁 𝗽𝗲𝗿 𝗜𝗻𝘁𝗲𝗿𝗮𝗰𝘁𝗶𝗼𝗻 — Especially critical at scale. Budget-wise agents win.
↳ 𝗖𝗼𝗻𝘃𝗲𝗿𝘀𝗮𝘁𝗶𝗼𝗻 𝗗𝗲𝗽𝘁𝗵 — Can the agent handle follow-ups and multi-turn dialogue?
↳ 𝗨𝘀𝗲𝗿 𝗦𝗮𝘁𝗶𝘀𝗳𝗮𝗰𝘁𝗶𝗼𝗻 𝗦𝗰𝗼𝗿𝗲 — Feedback from actual users is gold.
↳ 𝗖𝗼𝗻𝘁𝗲𝘅𝘁𝘂𝗮𝗹 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝗶𝗻𝗴 — Can your AI 𝘳𝘦𝘮𝘦𝘮𝘣𝘦𝘳 𝘢𝘯𝘥 𝘳𝘦𝘧𝘦𝘳 to earlier inputs?
↳ 𝗦𝗰𝗮𝗹𝗮𝗯𝗶𝗹𝗶𝘁𝘆 — Can it handle volume 𝘸𝘪𝘵𝘩𝘰𝘶𝘵 degrading performance?
↳ 𝗞𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹 𝗘𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝗰𝘆 — This is key for RAG-based agents.
↳ 𝗔𝗱𝗮𝗽𝘁𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗦𝗰𝗼𝗿𝗲 — Is your AI learning and improving over time?

If you're building or managing AI agents — bookmark this. Whether it's a support bot, GenAI assistant, or a multi-agent system — these are the metrics that will shape real-world success.

𝗗𝗶𝗱 𝗜 𝗺𝗶𝘀𝘀 𝗮𝗻𝘆 𝗰𝗿𝗶𝘁𝗶𝗰𝗮𝗹 𝗼𝗻𝗲𝘀 𝘆𝗼𝘂 𝘂𝘀𝗲 𝗶𝗻 𝘆𝗼𝘂𝗿 𝗽𝗿𝗼𝗷𝗲𝗰𝘁𝘀? Let’s make this list even stronger — drop your thoughts 👇
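A few of these dimensions are easy to compute once interactions are logged. Below is a small, hedged sketch covering task completion rate, error rate, average latency, and cost per interaction; the `Interaction` fields are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Interaction:
    # Illustrative fields only; adapt to whatever your agent actually logs.
    task_completed: bool   # did the agent finish the full workflow?
    error: bool            # was the response wrong or irrelevant?
    latency_s: float       # end-to-end response time in seconds
    cost_usd: float        # tokens and tool calls priced out per interaction

def summarize(logs: list[Interaction]) -> dict:
    n = len(logs)
    return {
        "task_completion_rate": sum(i.task_completed for i in logs) / n,
        "error_rate": sum(i.error for i in logs) / n,
        "avg_latency_s": mean(i.latency_s for i in logs),
        "cost_per_interaction_usd": mean(i.cost_usd for i in logs),
    }

# Example: two logged interactions.
print(summarize([
    Interaction(True, False, 1.8, 0.004),
    Interaction(False, True, 3.2, 0.007),
]))
```

The softer dimensions (user trust, satisfaction, adaptability) still need surveys or human review; this only covers what can be derived directly from logs.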
-
Evaluating LLMs is hard. Evaluating agents is even harder.

This is one of the most common challenges I see when teams move from using LLMs in isolation to deploying agents that act over time, use tools, interact with APIs, and coordinate across roles. These systems make a series of decisions, not just a single prediction. As a result, success or failure depends on more than whether the final answer is correct.

Despite this, many teams still rely on basic task success metrics or manual reviews. Some build internal evaluation dashboards, but most of these efforts are narrowly scoped and miss the bigger picture.

Observability tools exist, but they are not enough on their own. Google’s ADK telemetry provides traces of tool use and reasoning chains. LangSmith gives structured logging for LangChain-based workflows. Frameworks like CrewAI, AutoGen, and OpenAgents expose role-specific actions and memory updates. These are helpful for debugging, but they do not tell you how well the agent performed across dimensions like coordination, learning, or adaptability.

Two recent research directions offer much-needed structure. One proposes breaking down agent evaluation into behavioral components like plan quality, adaptability, and inter-agent coordination. Another argues for longitudinal tracking, focusing on how agents evolve over time, whether they drift or stabilize, and whether they generalize or forget.

If you are evaluating agents today, here are the most important criteria to measure:
• 𝗧𝗮𝘀𝗸 𝘀𝘂𝗰𝗰𝗲𝘀𝘀: Did the agent complete the task, and was the outcome verifiable?
• 𝗣𝗹𝗮𝗻 𝗾𝘂𝗮𝗹𝗶𝘁𝘆: Was the initial strategy reasonable and efficient?
• 𝗔𝗱𝗮𝗽𝘁𝗮𝘁𝗶𝗼𝗻: Did the agent handle tool failures, retry intelligently, or escalate when needed?
• 𝗠𝗲𝗺𝗼𝗿𝘆 𝘂𝘀𝗮𝗴𝗲: Was memory referenced meaningfully, or ignored?
• 𝗖𝗼𝗼𝗿𝗱𝗶𝗻𝗮𝘁𝗶𝗼𝗻 (𝗳𝗼𝗿 𝗺𝘂𝗹𝘁𝗶-𝗮𝗴𝗲𝗻𝘁 𝘀𝘆𝘀𝘁𝗲𝗺𝘀): Did agents delegate, share information, and avoid redundancy?
• 𝗦𝘁𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗼𝘃𝗲𝗿 𝘁𝗶𝗺𝗲: Did behavior remain consistent across runs or drift unpredictably?

For adaptive agents or those in production, this becomes even more critical. Evaluation systems should be time-aware, tracking changes in behavior, error rates, and success patterns over time. Static accuracy alone will not explain why an agent performs well one day and fails the next.

Structured evaluation is not just about dashboards. It is the foundation for improving agent design. Without clear signals, you cannot diagnose whether failure came from the LLM, the plan, the tool, or the orchestration logic.

If your agents are planning, adapting, or coordinating across steps or roles, now is the time to move past simple correctness checks and build a robust, multi-dimensional evaluation framework. It is the only way to scale intelligent behavior with confidence.
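As a rough illustration (not taken from the post or any specific framework), here is what a per-run, multi-dimensional evaluation record might look like, with the criteria above scored on a simple rubric and timestamped so behavior can be tracked over time.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentRunEval:
    run_id: str
    timestamp: str             # ISO time, so drift can be tracked across runs
    task_success: bool         # task completed and outcome verifiable
    plan_quality: int          # 1-5 rubric: was the initial strategy reasonable and efficient?
    adaptation: int            # 1-5: retries, fallbacks, escalation on tool failures
    memory_usage: int          # 1-5: was memory referenced meaningfully?
    coordination: Optional[int] = None  # 1-5 for multi-agent systems, else None

# Aggregating records over time is what makes the evaluation time-aware.
evals = [
    AgentRunEval("run-001", "2025-05-01T10:02:00Z", True, 4, 3, 4),
    AgentRunEval("run-002", "2025-05-08T09:41:00Z", False, 2, 1, 3),
]
success_rate = sum(e.task_success for e in evals) / len(evals)
print(f"task success rate: {success_rate:.0%}")
```

Trace data from tools like ADK telemetry or LangSmith can feed these fields, but the scoring rubric itself is something each team has to define.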
-
You've built your AI agent... but how do you know it's not failing silently in production?

Building AI agents is only the beginning. If you’re thinking of shipping agents into production without a solid evaluation loop, you’re setting yourself up for silent failures, wasted compute, and eventually broken trust.

Here’s how to make your AI agents production-ready with a clear, actionable evaluation framework:

𝟭. 𝗜𝗻𝘀𝘁𝗿𝘂𝗺𝗲𝗻𝘁 𝘁𝗵𝗲 𝗥𝗼𝘂𝘁𝗲𝗿
The router is your agent’s control center. Make sure you’re logging:
- Function Selection: Which skill or tool did it choose? Was it the right one for the input?
- Parameter Extraction: Did it extract the correct arguments? Were they formatted and passed correctly?
✅ Action: Add logs and traces to every routing decision. Measure correctness on real queries, not just happy paths.

𝟮. 𝗠𝗼𝗻𝗶𝘁𝗼𝗿 𝘁𝗵𝗲 𝗦𝗸𝗶𝗹𝗹𝘀
These are your execution blocks: API calls, RAG pipelines, code snippets, etc. You need to track:
- Task Execution: Did the function run successfully?
- Output Validity: Was the result accurate, complete, and usable?
✅ Action: Wrap skills with validation checks. Add fallback logic if a skill returns an invalid or incomplete response.

𝟯. 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗲 𝘁𝗵𝗲 𝗣𝗮𝘁𝗵
This is where most agents break down in production: taking too many steps or producing inconsistent outcomes. Track:
- Step Count: How many hops did it take to get to a result?
- Behavior Consistency: Does the agent respond the same way to similar inputs?
✅ Action: Set thresholds for max steps per query. Create dashboards to visualize behavior drift over time.

𝟰. 𝗗𝗲𝗳𝗶𝗻𝗲 𝗦𝘂𝗰𝗰𝗲𝘀𝘀 𝗠𝗲𝘁𝗿𝗶𝗰𝘀 𝗧𝗵𝗮𝘁 𝗠𝗮𝘁𝘁𝗲𝗿
Don’t just measure token count or latency. Tie success to outcomes. Examples:
- Was the support ticket resolved?
- Did the agent generate correct code?
- Was the user satisfied?
✅ Action: Align evaluation metrics with real business KPIs. Share them with product and ops teams.

Make it measurable. Make it observable. Make it reliable. That’s how enterprises scale AI agents. Easier said than done.
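Here is a hedged Python sketch of points 1-3: log every routing decision, wrap skills with validation and fallback logic, and enforce a maximum step count per query. The `router`, `skills`, and `validators` callables are illustrative placeholders, not the API of any particular framework.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

MAX_STEPS = 6  # threshold for "too many hops" per query

def run_agent(query: str, router, skills: dict, validators: dict) -> str:
    """router(query, history) returns (skill_name, params), or ("final", answer)."""
    history, steps = [], 0
    while True:
        steps += 1
        if steps > MAX_STEPS:
            log.warning("step budget exceeded for query=%r", query)  # 3. evaluate the path
            return "Sorry, I couldn't complete that request."

        skill_name, params = router(query, history)
        log.info("route step=%d skill=%s params=%s", steps, skill_name, params)  # 1. instrument the router
        if skill_name == "final":
            return params

        start = time.time()
        result = skills[skill_name](**params)  # 2. execute the skill
        is_valid = validators.get(skill_name, lambda r: r is not None)(result)
        log.info("skill=%s valid=%s latency=%.2fs", skill_name, is_valid, time.time() - start)
        if not is_valid:
            result = {"error": f"{skill_name} returned an invalid or incomplete result"}  # fallback signal

        history.append((skill_name, params, result))  # keeps the full path reconstructable from logs
```

Point 4 (business-level success metrics) lives outside this loop: tie the logged run IDs back to outcomes like ticket resolution or user satisfaction in your analytics layer.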
-
AI scribes promise to save doctors from burnout. Yet this newly published NEJM AI article suggests we temper our expectations.

Investigators compared EHR use and the financial performance of 112 Atrium Health PCPs who used DAX Copilot with 103 PCP controls who did not. Shockingly, active DAX users and high DAX users did NOT spend less time in the EHR and did not generate more revenue per visit. They concluded that DAX did not make clinicians more efficient.

MY TAKE:

1️⃣ 𝐀𝐫𝐭𝐡𝐮𝐫 𝐂. 𝐂𝐥𝐚𝐫𝐤𝐞 𝐟𝐚𝐦𝐨𝐮𝐬𝐥𝐲 𝐬𝐭𝐚𝐭𝐞𝐝, “𝐀𝐧𝐲 𝐬𝐮𝐟𝐟𝐢𝐜𝐢𝐞𝐧𝐭𝐥𝐲 𝐚𝐝𝐯𝐚𝐧𝐜𝐞𝐝 𝐭𝐞𝐜𝐡𝐧𝐨𝐥𝐨𝐠𝐲 𝐢𝐬 𝐢𝐧𝐝𝐢𝐬𝐭𝐢𝐧𝐠𝐮𝐢𝐬𝐡𝐚𝐛𝐥𝐞 𝐟𝐫𝐨𝐦 𝐦𝐚𝐠𝐢𝐜.” 𝐁𝐮𝐭 𝐀𝐈 𝐢𝐬𝐧’𝐭 𝐦𝐚𝐠𝐢𝐜. 𝐖𝐞 𝐦𝐮𝐬𝐭 𝐞𝐱𝐚𝐦𝐢𝐧𝐞 𝐢𝐭 𝐜𝐫𝐢𝐭𝐢𝐜𝐚𝐥𝐥𝐲. These investigators concluded that “the hype and novelty of ambient-listening AI tools have outpaced the evidence to support or refute claims that these tools are transformational in terms of time savings and efficiency.”

2️⃣ 𝐀𝐬𝐬𝐞𝐬𝐬𝐢𝐧𝐠 𝐀𝐈’𝐬 𝐢𝐦𝐩𝐚𝐜𝐭 𝐨𝐧 𝐜𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐩𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐯𝐢𝐭𝐲 𝐢𝐬 𝐜𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐢𝐧𝐠. Because RCTs are often impractical, researchers typically perform cohort studies that cannot adjust for certain confounding factors (particularly the willingness to use the new technology). Also, complex statistics make it hard to explain the results. Additionally, EHR meta-data does not precisely reflect actual EHR usage. For example, while EHR use logs did not differ, most PCPs who used DAX qualitatively reported that it eased their cognitive burden and reduced their documentation time.

3️⃣ 𝐓𝐨𝐝𝐚𝐲’𝐬 𝐀𝐈 𝐬𝐜𝐫𝐢𝐛𝐞𝐬 𝐡𝐚𝐯𝐞 𝐥𝐢𝐦𝐢𝐭𝐞𝐝 𝐟𝐮𝐧𝐜𝐭𝐢𝐨𝐧𝐚𝐥𝐢𝐭𝐲. Conversations inform only a portion of notes. And note-writing is only part of the workflow. To yield major benefits, AI scribes may need to move into adjacent/downstream activities — such as summarizing records, pending orders, and suggesting diagnostic and billing codes.

4️⃣ 𝐄𝐟𝐟𝐢𝐜𝐢𝐞𝐧𝐜𝐲 𝐠𝐚𝐢𝐧𝐬 𝐦𝐚𝐲 𝐝𝐞𝐩𝐞𝐧𝐝 𝐦𝐨𝐫𝐞 𝐨𝐧 𝐭𝐡𝐞 𝐮𝐬𝐞𝐫𝐬 𝐭𝐡𝐚𝐧 𝐭𝐡𝐞 𝐭𝐞𝐜𝐡𝐧𝐨𝐥𝐨𝐠𝐲. If AI saves clinicians time, some may see additional patients, whereas others may spend more time with patients, attend to other non-revenue-generating clinical activities, or leave work sooner.

5️⃣ 𝐅𝐮𝐥𝐥𝐲 𝐡𝐚𝐫𝐧𝐞𝐬𝐬𝐢𝐧𝐠 𝐀𝐈 𝐥𝐢𝐤𝐞𝐥𝐲 𝐫𝐞𝐪𝐮𝐢𝐫𝐞𝐬 𝐞𝐧𝐭𝐢𝐫𝐞𝐥𝐲 𝐧𝐞𝐰 𝐰𝐨𝐫𝐤𝐬𝐭𝐫𝐞𝐚𝐦𝐬. Henry Ford famously stated, "If I had asked people what they wanted, they would have said faster horses.” Initially, we use new technologies to automate what we already do. But rather than using AI to write (mostly crappy) notes faster, we should reconsider clinical documentation entirely.

I believe AI scribes will yield many benefits (along with some costs). But let's not expect too much, too soon. https://lnkd.in/gQcwRgE7
-
The legal technology landscape has never been more promising, yet I continue to observe fundamental disconnects between our tools and our workflows. After years of working at the intersection of law and technology, I've identified six critical gaps that deserve our collective attention as opportunities for meaningful improvement:

1. The Workflow Integration Challenge
We invest in sophisticated AI-powered contract review platforms while basic document intake remains fragmented. The most advanced technology loses its impact when the foundation isn't solid. Success requires us to examine and optimize entire workflows, not just individual tools.

2. Demonstrating Value Beyond Intuition
"It feels more efficient" doesn't satisfy stakeholders who need concrete metrics. We must develop better frameworks for measuring and communicating the value of legal technology investments. This means tracking not just time saved, but quality improvements, risk reduction, and business enablement.

3. Resource Allocation Reality
Legal departments face increasing demands with static resources. The solution isn't always more headcount or bigger budgets—it's reimagining how work flows through our departments. Strategic automation and process optimization can multiply our impact without multiplying costs.

4. Empowering Business Partners
The most effective legal departments enable self-service for routine matters. When business partners can confidently handle standard NDAs or basic contract questions independently, legal professionals can focus on higher-value strategic work. This shift requires intentional design and change management, but the payoff is substantial.

5. Measuring What Truly Matters
While cost per contract provides one data point, we need metrics that capture our true business impact. Are we accelerating deal velocity? Reducing friction in the sales process? Enabling better business decisions? These outcomes matter more than pure efficiency metrics.

6. Meeting Users Where They Work
The best legal technology integrates seamlessly into existing workflows. Rather than creating another portal or login, we should focus on bringing legal intelligence into the tools our colleagues already use daily. Invisible technology often provides the most visible results.

Each of these challenges represents an opportunity to strengthen the connection between legal and business teams while leveraging technology more effectively. The path forward requires honest assessment, collaborative problem-solving, and a willingness to challenge traditional approaches. We have the tools and knowledge. Now we need to align them with how people actually work.

Let's continue building a legal tech community that transforms these disconnects into opportunities for innovation and growth.

#legaltech #innovation #law #business #learning
-
How GCC Leaders Can Improve Work Execution to Drive Employee Experience, Productivity, and Quality

Most GCCs focus on scaling operations and cost efficiencies, but the best leaders go beyond that. They rethink how work gets done—removing inefficiencies, empowering employees, and ensuring quality outcomes. Here’s what truly moves the needle:

1. Fix Process Inefficiencies and Automate the Obvious
Too many GCCs still replicate HQ processes instead of optimizing for agility. Identify bottlenecks, eliminate redundant approvals, and automate manual tasks—especially in IT, HR, and finance. Workflow automation can cut task times in half.

2. Align Teams Across Time Zones with Outcome-Based Execution
Global teams struggle with coordination, leading to handover gaps and rework. Instead of micromanaging, rely on real-time dashboards and clear outcome ownership. Focus on customer-impacting outcomes, not effort.

3. Empower Employees with the Right Tools and Autonomy
A poor employee experience leads to low engagement and productivity loss. Give teams self-service analytics, knowledge bases, and low-code/no-code tools to solve problems independently. Cut meeting overload and encourage deep work time.

4. Prioritize Learning, Growth, and Cross-Functional Expertise
GCCs shouldn’t just execute work—they should drive innovation. Invest in technical upskilling, global mobility programs, and leadership rotations to create a future-ready workforce.

5. Governance Without Bureaucracy
Traditional governance models slow down execution. Instead of rigid top-down approvals, implement agile decision-making frameworks and RACI models that balance control with speed.

GCC leaders must shift from process execution to work transformation—optimizing workflows, leveraging AI, and making employee experience a top priority. The results can be significant:
• 15-30% productivity gains by automating and streamlining workflows.
• 10-25% cost savings through elimination of redundant processes, process efficiencies, and automation.
• 20-40% improvement in employee engagement by reducing friction in daily work.
• 20-50% faster execution of key projects by reducing delays and dependencies.
• 25-50% fewer errors through improved governance and automation.
-
Anything CAN be automated. The real question is what SHOULD be automated.

Instead of going automation-crazy, there’s often space to take a step back and see if you have messy/unnecessary automations that are wasting more time than they save.

Case in point: Our client in the healthcare staffing industry had set up their own Pipedrive and things were a mess. Their automated workflow fed the team tasks that were unnecessary, duplicate, or long since completed. Every day was a new avalanche of 100s of “urgent” overdue tasks. Stressful, right?

Our goal when working with them wasn’t to add anything new, but rather to reorganize the way they used their CRM. We dove deep into their procedures, and then…
➡️ streamlined their task list
➡️ created new KPIs
…and the craziest one:
➡️ switched daily to-do lists back to manual.

That’s right, no more automation for this. Team members now create their own to-do lists. It might sound odd, but that’s what this business needed to flourish.

I’d like to encourage you to start thinking in this direction, too. Instead of blindly trying to automate everything possible, take a step back. Look under the hood of your business. See what your process REALLY looks like, how your automations support those workflows, and how things would be different if you tweaked them.

You might need more aspects automated. You might need to get rid of some automated workflows. And you might simply need to restructure some elements of your workflow. Approach it with an open mind — you just might be surprised at what you discover.

--
Hi, I’m Nathan Weill, a business process automation expert. ⚡️ At Flow Digital, we help business owners like you unlock the power of automation with customized solutions so you can run your business better, faster, and smarter.

#crm #automation #business #automationtiptuesday #automation #workflow
-
Are 80% of your meetings effective? Do people have at least four 2+ hour blocks of focus time every week?

Scaling effective meetings, asynchronous collaboration, and time for "deep work" across thousands of employees is challenging. Too many leaders shrug and give up: "it's just the way things are."

⭐ It might be hard, but it's totally possible to scale better use of time:

📅 Dropbox employees say 69% of meetings are effective, impressive compared with Future Forum research in which both executives and employees said that ~50% of all meetings should be eliminated entirely.

🕖 Dropbox also got to >80% compliance with core collaboration hours around the globe -- a massive win, especially when you realize "one size doesn't fit all" on almost any work practice.

💪 Atlassian saw a 31% increase in progress against weekly goals when combining better calendar management with weekly goal-setting.

🔎 Slack got to 85% of employees saying Focus Fridays and No Meeting Weeks were a significant benefit to them -- higher than many monetary or services benefits.

What's the secret sauce?

1️⃣ Aligned Executives: in each case, the executive suite from CEO on down understood that excessive meetings and a lack of time for deep work were leading to burnout and lower-quality work.

2️⃣ Pilot then Expand: We experimented with No Meeting Weeks in the Product, Design & Eng team at Slack, refined it, then partnered with functional leaders to translate specific meeting types and workflows in order to roll it out.

3️⃣ Measure Progress: A quarterly pulse survey with results by function and Spotify's meetings cost calculator are examples of pretty straightforward ways to measure progress. Tools like Microsoft Viva also help!

4️⃣ Reinforce Regularly: Discuss survey results in exec staff quarterly, build reinforcement into leadership conversations, All Hands meetings, and comms. A cross-functional task force can bring ownership closer to functions.

❓ What practices have you scaled in your organization? Where have you seen programs fail to take hold?

🏗️ Dig deeper: 🔗 Links to Atlassian's time boxing and goal setting experiments by Molly Sands, PhD and team, Dropbox's virtual-first toolkit by Allison Vendt, Melanie Rosenwasser and Alastair Simpson, and the Slack Focus Friday and Maker Week content I did with Christina Janzer and Kristen Swanson in comments. Would also recommend Kasia Triantafelo's collection of insights from the Running Remote community, linked as well.

This is Part 2 of a series on 2025 Resolution: Make Better Use of Time. Thanks Karrah Phillips, Dave O'Neill, the folks listed above and Kevin Delaney, Tim Glowa (IBDC.D, GCB.D) & Nick Petrie for inspiring me to pick this back up!

#Meetings #Productivity #Focus #DeepWork #FocusTime #Collaboration #Leadership #ChangeManagement #EmployeeExperience #EX
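On the "Measure Progress" point: a back-of-the-envelope meeting cost estimate is easy to script. This is just the generic attendees x duration x loaded hourly rate arithmetic (a sketch, not Spotify's actual calculator), and the figures below are illustrative.

```python
def meeting_cost(attendees: int, duration_hours: float, loaded_hourly_rate: float,
                 occurrences_per_year: int = 1) -> float:
    """Rough annual cost of a meeting: people x hours x loaded rate x recurrences."""
    return attendees * duration_hours * loaded_hourly_rate * occurrences_per_year

# Example: a weekly 1-hour staff meeting, 8 attendees, $120/hr loaded rate, ~48 weeks/year.
print(f"${meeting_cost(8, 1.0, 120, occurrences_per_year=48):,.0f} per year")  # -> $46,080
```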
-
Anyone else suffer from meeting overload? It’s a big deal. Simply put, too many meetings mean less time available for actual work, constantly attending meetings can be mentally draining, and often a meeting simply isn’t required to accomplish the agenda items. At the same time, sometimes meetings are unavoidable.

No matter where you are in your career, here are a few ways that I tackle this topic so that I can be my best and hold myself accountable for how my time is spent.

I take 15 minutes every Friday to look at the week ahead and what is on my calendar. I follow these tips to ensure that what is on the calendar should be there and that I’m prepared. It ensures that I have a relevant and focused communications approach, and enables me to focus on optimizing productivity, outcomes, and impact.

1. Review the meeting agenda. If there’s no agenda, send an email asking for one so you know exactly what you need to prepare for and can ensure your time is correctly prioritized. You may discover you’re actually not the correct person to even attend. If it’s your meeting, set an agenda because accountability goes both ways.

2. Define desired outcomes. What do you want/need from the meeting to enable you to move forward? Be clear about it with participants so you can work collaboratively towards the goal in the time allotted.

3. Confirm you need the meeting. Meetings should be used for difficult or complex discussions, relationship building, and other topics that can get lost in text-based exchanges. Often, though, we schedule meetings we don’t actually need to accomplish the task at hand. Give yourself and others back that time and get the work done without the meeting.

4. Shorten the meeting duration. Can you cut 15 minutes off your meeting? How about 5? I cut 15 minutes off some of my recurring meetings a month ago. That’s 3 hours back in a week I now have to redirect to high-impact work. While you’re at it, do you even need all those recurring meetings? It’s never too early for a calendar spring cleaning.

5. Use meetings for discussion topics, not FYIs. I save a lot of time here. We don’t need to speak to go through FYIs (!)

6. Send a pre-read. The best meetings are when we all prepare for a meaningful conversation. If the topic is a meaty one, send a pre-read so participants arrive with a common foundation on the topic and you can all jump straight into the discussion and objectives at hand.

7. Decline a meeting. There’s nothing wrong with declining. Perhaps you’re not the right person to attend, or there is already another team member participating, or you don’t have bandwidth to prepare. Whatever the reason, saying no is ok.

What actions do you take to ensure the meetings on your calendar are where you should spend your time? It’s a big topic that we can all benefit from, please share your tips in the comments ⤵️

#careertips #productivity #futureofwork