Balancing AI and Human Expertise

Explore top LinkedIn content from expert professionals.

  • In a recent discussion with Priscilla Ng, Prudential plc’s Group Chief Customer and Marketing Officer, we delved into Prudential’s shift towards customer-centricity. This conversation underscored the seamless integration of digital innovation and the essential human touch in the insurance sector. Here are five key insights from our discussion, applicable across industries:

    🔹 Strategic Integration of AI and Human Insight: Prudential is not just using AI to streamline processes; they are using it to significantly enhance personalization and customer service. From simplifying underwriting to transforming service at customer touchpoints like call centers, AI is proving to be transformative. How can other industries use AI not merely for efficiency but as a catalyst for customer connection?

    🔹 Empowering Employees: In the journey of digital transformation, the role of technology is as crucial as the people behind it. Priscilla emphasized the importance of equipping over 15,000 employees with the necessary mindset, skills, and tools to excel in a digitally evolving landscape. What strategies can companies implement to ensure their teams thrive amidst technological change?

    🔹 Balanced Approach to Digital and Human Interaction: Despite extensive technological integration, the human element remains critical at Prudential. Their approach ensures that digital enhancements support rather than replace human interactions, thereby strengthening customer relationships. How can businesses maintain this balance to enhance, not undermine, human connections?

    🔹 Navigating Challenges in Transformation: Adapting to digital transformation comes with challenges, from aligning large teams with new strategies to continuously adapting to emerging technologies. Priscilla shared that a steadfast focus on customer-centricity is essential for navigating these challenges. How can other organizations keep their focus on customer needs while managing transformation complexities?

    🔹 Continuous Learning and Adaptation: A crucial aspect of Prudential’s transformation is fostering an environment of continuous learning and adaptation. This involves training in new technologies and developing a deeper understanding of customer needs and behaviors. How can continuous learning be structured to keep pace with rapid technological advancements and evolving customer expectations?

    This dialogue is part of McKinsey’s ongoing series exploring how leaders steer their companies through transformations. Stay tuned for more insights shaping today’s business landscape. Full interview: https://lnkd.in/gtjphW2s

    #Leadership #DigitalTransformation #CustomerCentricity #InsuranceIndustry #AI

  • Barbara Cresti

    Board advisor in Digital Strategy, Sovereignty, Growth ⎮ ex-Amazon, Orange ⎮ C-level executive ⎮ AI, IoT, Cloud, SaaS

    12,600 followers

    The rise of orphan agents: autonomy without control

    In July 2025, an AI agent at Replit, the online coding platform, deleted a live production database with data on 1k+ executives and companies. It ignored instructions not to touch production systems and tried to conceal its actions. This was a warning: the rise of orphan agents, autonomous systems operating without clear ownership, oversight, or accountability.

    The orphaned workforce
    Traditional security verifies who can act, not why they act or for whom. Credentials confirm identity but not intent. This gap leaves enterprises exposed to orphan agents: digital agents technically “owned” but operating beyond oversight, often across multiple platforms, with no live accountability. Invisible, powerful, untraceable.

    By 2028, Gartner predicts, one-third of enterprise software will embed agentic AI. Already, digital identities outnumber humans 50 to 1 in companies: APIs, bots, service accounts, workload identities, and now AI agents. By 2027, the ratio is expected to hit 80 to 1 [Strata Identity, 2025].

    The accountability gap
    Replit is not an isolated case. Across industries:
    🔸 80% of IT leaders report agents acting outside expected behavior. Some escalate privileges or move laterally across systems.
    🔸 Malicious actors weaponize them for cyberattacks.
    🔸 Others cut unsafe corners, amplify bias, or conceal errors to maximize KPIs.

    This creates a triple risk:
    1️⃣ Operational: unmonitored agents disrupting systems.
    2️⃣ Regulatory: compliance failures with no responsible party.
    3️⃣ Reputational: erosion of trust when no one can explain what happened.

    Solutions are emerging to bring agents under traceable, enforceable accountability:
    💠 Cryptographic delegation signatures: each action tied to a human or entity.
    💠 Revocable credentials: time-limited rights that can be cut off instantly.
    💠 Human governance: reviews, escalation paths, kill switches.
    💠 Behavioral monitoring: detection of anomalies, drift, or rogue behavior.
    These tools turn agents into accountable members of the digital workforce.

    ➡️ To safeguard accountability, boards and CxOs should:
    ▫️ Assign every agent a named business owner and technical custodian.
    ▫️ Set up an agent registry: purpose, permissions, data access, expiry date.
    ▫️ Define limits on actions, data scopes, escalation rules, and kill-switch SLAs.
    ▫️ Track logs of model versions, training data, and prompt and plan history.
    ▫️ Keep tamper-proof audit trails for all activity.
    ▫️ Run red-teaming and adversarial tests to probe vulnerabilities.
    ▫️ Enforce strict separation of development and production environments.

    For boards, the essential questions are:
    ▫️ What agents are already active?
    ▫️ Who is accountable for them?
    ▫️ How can they be revoked if they go rogue?
    ▫️ And before deploying new ones: what governance model ensures no future agent is ever orphaned?

    #AI #AgenticAI #ResponsibleAI #CyberSecurity #Boardroom
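The registry and revocable-credential ideas above translate naturally into code. Below is a minimal Python sketch, not any vendor's API; every class, field, and agent name (AgentRecord, AgentRegistry, "billing-bot-01") is hypothetical. Each agent gets a named business owner and technical custodian, an explicit permission scope, and an expiry date, and every action is checked against the registry, with revoke() acting as the kill switch.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentRecord:
    """One entry in a hypothetical agent registry."""
    agent_id: str
    business_owner: str         # named human accountable for the agent
    technical_custodian: str    # team that maintains it and can revoke it
    purpose: str
    allowed_scopes: set[str]    # e.g. {"read:invoices", "write:staging"}
    expires_at: datetime        # time-limited rights by default
    revoked: bool = False

class AgentRegistry:
    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._records[record.agent_id] = record

    def revoke(self, agent_id: str) -> None:
        """Kill switch: cut off an agent's rights immediately."""
        self._records[agent_id].revoked = True

    def authorize(self, agent_id: str, scope: str) -> bool:
        """Deny by default: unknown, revoked, expired, or out-of-scope agents are blocked."""
        rec = self._records.get(agent_id)
        if rec is None or rec.revoked:
            return False
        if datetime.now(timezone.utc) >= rec.expires_at:
            return False
        return scope in rec.allowed_scopes

# Usage: register a time-limited agent, then gate every action on the registry.
registry = AgentRegistry()
registry.register(AgentRecord(
    agent_id="billing-bot-01",
    business_owner="Head of Finance Operations",
    technical_custodian="Platform Engineering",
    purpose="Reconcile invoices in the staging environment",
    allowed_scopes={"read:invoices", "write:staging"},
    expires_at=datetime.now(timezone.utc) + timedelta(days=30),
))
print(registry.authorize("billing-bot-01", "read:invoices"))     # True
print(registry.authorize("billing-bot-01", "write:production"))  # False: out of scope
registry.revoke("billing-bot-01")
print(registry.authorize("billing-bot-01", "read:invoices"))     # False: revoked
```

The deny-by-default authorize() check is the key design choice: an agent with no valid, unexpired, unrevoked registry entry simply cannot act, so it can never drift into "orphan" status unnoticed.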

  • The rollout of various new AI weather models over the last year has been something of a blur and, now that the excitement of a cold winter is behind us, we thought it would be time to offer some thoughts from our unique perspective as a leading voice in the energy markets.

    1. The AI models are quite useful, but are still not as good, in aggregate, as the better legacy NWP models, especially when looking at fields like 500 mb GPH. Discussions with our operational forecasters, who are in the trenches every day, suggest that the AI models are still used secondarily to the legacy models: "I don't use it other than a gut check/reference". My personal experience is that I still do not consult the AI models nearly as much as a good high-resolution NWP model/ensembles. Perhaps that will evolve with time, but that is the current perspective from those with an extreme level of skin in the game, those who are highly motivated to produce an accurate forecast.

    2. However, there are many situations where the legacy models are still severely flawed, especially for 2-meter temperatures, where the AI models add considerable value. The calculation of 2-meter temperatures in the legacy NWP models is a complex process involving highly imperfect parameterizations of surface energy exchanges/fluxes, which is especially complicated and difficult at night. Given that AI models are effectively very mathematically sophisticated analog models, trained on actual observations, they are not crippled by the same biases/errors that the legacy NWP models are. Further, there are certain well-known situations where even the best legacy models do poorly, such as southward-moving shallow and dense cold air masses in the lee of the Rockies and Appalachians, and we've seen multiple instances this past winter where the AI models do astoundingly well, while legacy models can be 20-30 degrees off with mistimed cold fronts, etc.

    3. The value of AI models relative to legacy models decreases with forecast horizon. An examination of forecast accuracy suggests that AI models can outperform legacy models in the 1-7 day window, but fall off considerably beyond that. This applies when comparing both deterministic and ensemble mean solutions.

    In summary, we are excited to see the continued investment in this space, and are continuing to follow developments as we work to optimally integrate the new models into our product suite. However, we do caution that these new models are a complement to, not a replacement for, legacy NWP models, at least for now. #atmosphericg2 #ai #weather

  • Ross Dawson

    Futurist | Board advisor | Global keynote speaker | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice | Founder: AHT Group - Informivity - Bondi Innovation

    33,798 followers

    Human forecasters augmented by GenAI improve performance by 23% and vastly outperform AI-only predictions. Fascinating new research has uncovered important lessons, not just on Humans + AI forecasting, but on AI-augmented thinking more generally.

    🔮 Human forecasters given access to an LLM primed with a 'Superforecaster' prompt substantially improved their prediction performance.
    📊 In contrast to studies in other domains, the improvement was consistent across more and less skilled forecasters.
    🔄 Even the use of biased models improved performance to a similar degree, showing that the value lay in providing additional perspectives to be assessed by human judgment.
    💬 Back-and-forth interaction is critical to value creation. Simple Humans + AI processes, such as merely incorporating the model's predictions, are of limited use; forecasters who use the models throughout their thinking process gain far more value.
    🌈 Prediction diversity is not degraded by the use of LLMs; users did not let the models homogenize their thinking.
    🚀 Forecasting is an excellent use case and example for AI-augmented thinking. High-level human decision-making is highly complex and cannot be delegated to machines, but LLMs, used well, can substantially improve outcomes.

    The 'Superforecaster' prompt used in the study and a link to the pre-print paper are in the post. #foresight #forecasting #humansplusai #augmentedintelligence
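To make the back-and-forth point concrete, here is a minimal sketch of what an interactive Humans + AI forecasting session could look like. Everything here is an assumption for illustration: call_llm() is a stand-in for whatever LLM client you use, and the system prompt is a generic placeholder, not the 'Superforecaster' prompt from the study (that prompt is linked from the original post).

```python
# Illustrative human-in-the-loop forecasting session. call_llm() is a stand-in for
# any chat-style LLM client; swap in your provider of choice.

SYSTEM_PROMPT = (
    "You assist a human forecaster. Offer base rates, reference classes, and "
    "arguments for and against their estimate. Do not give a final answer; "
    "the human makes the final call."  # illustrative only, not the study's prompt
)

def call_llm(messages: list[dict]) -> str:
    # Stub so the sketch runs standalone; replace with a real API call.
    return "Consider the base rate for similar events and one strong counter-argument."

def forecasting_session(question: str, initial_estimate: float, rounds: int = 3) -> float:
    """Back-and-forth loop: the model supplies perspectives, the human revises."""
    history = [{"role": "system", "content": SYSTEM_PROMPT},
               {"role": "user", "content": f"Question: {question}"}]
    estimate = initial_estimate
    for _ in range(rounds):
        history.append({"role": "user",
                        "content": f"My current estimate is {estimate:.2f}. What am I missing?"})
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply})
        print(reply)                                   # perspectives, not a verdict
        revised = input("Revised probability (blank to keep): ").strip()
        if revised:
            estimate = float(revised)
    return estimate                                    # the human keeps the final judgment
```

The shape of the loop mirrors the findings above: the model contributes perspectives round after round, while the human weighs them and keeps the final judgment.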

  • Scott Holcomb

    US Trustworthy AI Leader at Deloitte

    3,528 followers

    Agentic AI isn’t just another step in automation; it’s a shift in how work gets done. These systems can plan, act, and adapt in ways we once thought only humans could. But with that power comes a new question: are we ready to trust them?

    Our new report, Navigating Risk in the Age of Agentic AI, developed with my colleagues Clifford Goss, CPA, Ph.D. and Kieran Norton, provides guidance for building a strong foundation of trust while staying ahead of new risks like data leakage, runaway agents, and gaps in oversight. https://deloi.tt/4mG60v0

    In our own internal GenAI assistant pilot, I learned valuable lessons about building trust. Seeing users move from cautious to confident showed me what works:
    • Sharing stories from AI “superusers” to show real value
    • Hosting open forums and Q&A sessions to answer questions and demystify the tech
    • Organizing “prompt-a-thons” to help boost skills and creativity
    • Creating a community for sharing insights, best practices, and updates

    Trust isn’t built at the finish line; we established trust in AI from the start, with clear controls, open communication, and human oversight to keep people informed and confident every step of the way.

    Agentic AI will change the way we work. But progress will only scale if it’s trusted, and that’s the future we’re building toward.

  • Gabriel Millien

    I help you thrive with AI (not despite it) while making your business unstoppable | $100M+ proven results | Nestle • Pfizer • UL • Sanofi | Digital Transformation | Follow for daily insights on thriving in the AI age

    37,987 followers

    Trust is the foundation of every relationship, even with AI. Here are the layers we can’t skip if we want AI to be safe, useful, and human-centered. Scaling models is easy. Scaling trust is not. Here’s how the stack really works ⬇️

    🔹 Foundation Models (LLMs, Vision, Multimodal)
    ↳ Powerful, but fragile on their own.
    ↳ Think of them as raw ingredients: useful, but not the final dish.
    🔹 Memory & Context
    ↳ Short-term memory isn’t enough.
    ↳ Leaders need ways to help AI “remember” across conversations and decisions.
    🔹 Tools & Plugins
    ↳ Where AI actually does things: calling APIs, searching data, running tasks.
    ↳ Too much access without control = chaos instead of value.
    🔹 Planning & Orchestration
    ↳ A single prompt gives you a single task.
    ↳ Orchestration lets agents break big problems into steps and coordinate tools.
    🔹 Governance & Guardrails
    ↳ Monitoring, approval gates, and risk checks keep systems honest.
    ↳ Guardrails don’t kill innovation; they build the confidence to scale it.
    🔹 Safety & Alignment
    ↳ AI must reflect your values, not just probabilities.
    ↳ Ignore this, and you lose both trust and reputation.
    🔹 Human Oversight & Feedback Loops
    ↳ Humans bring judgment, ethics, and accountability.
    ↳ Feedback keeps the system learning and improving over time.

    Miss a layer and the stack breaks. Build them together, and you create AI that’s powerful, trusted, and scalable.

    Reflection: Which layer do you see organizations underinvesting in most today?

    🔁 Repost to help more leaders see the full picture of AI trust.
    👤 Follow Gabriel Millien for more AI transformation frameworks.
    Infographic credit: Brij kishore Pandey
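Read as an architecture, the layers above compose into a single request path. The Python sketch below is illustrative only, not a reference to any real framework; all names (AgentStack, ProposedAction) are hypothetical. Memory feeds context into planning, tool access is allow-listed, a guardrail and a human approval gate sit in front of risky actions, and results flow back into memory as the feedback loop.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ProposedAction:
    tool: str
    args: dict
    risk: str  # "low" or "high"

@dataclass
class AgentStack:
    # Foundation model layer: maps (task, context) to a proposed action.
    model: Callable[[str, str], ProposedAction]
    # Memory & context layer.
    memory: list[str] = field(default_factory=list)
    # Tools & plugins layer.
    tools: dict[str, Callable[..., str]] = field(default_factory=dict)
    # Governance & guardrails: an explicit allow-list of tools.
    approved_tools: set[str] = field(default_factory=set)
    # Human oversight: approval gate for high-risk actions (deny by default).
    human_approve: Callable[[ProposedAction], bool] = lambda action: False

    def run(self, task: str) -> str:
        context = " | ".join(self.memory[-5:])           # planning uses recent context
        action = self.model(task, context)
        if action.tool not in self.approved_tools:       # guardrail check
            return "blocked: tool not on the allow-list"
        if action.risk == "high" and not self.human_approve(action):
            return "blocked: waiting for human approval"
        result = self.tools[action.tool](**action.args)  # tool execution
        self.memory.append(f"{task} -> {result}")        # feedback loop into memory
        return result

# Usage: a stubbed model that always proposes a low-risk search.
stack = AgentStack(
    model=lambda task, ctx: ProposedAction(tool="search", args={"q": task}, risk="low"),
    tools={"search": lambda q: f"results for {q!r}"},
    approved_tools={"search"},
)
print(stack.run("quarterly churn drivers"))  # -> results for 'quarterly churn drivers'
```

Writing the guardrail and the approval gate as explicit checks makes their absence visible in review, which is the post's point: a missing layer tends to fail silently until it matters.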

  • Iain Brown PhD

    AI & Data Science Leader | Adjunct Professor | Author | Fellow

    36,532 followers

    Real AI Success = 🤖+👤

    We often talk about AI as if the goal is to remove humans from the process. In reality, the projects that truly deliver value are the ones where people stay firmly in the loop.

    In my latest Data Science Decoder article, I explore why human oversight is the most underrated ingredient in AI success. From governance frameworks like the EU AI Act to day-to-day practices in fraud detection, healthcare, and customer service, the evidence is clear:
    🔹 AI without human judgment is artificial overconfidence.
    🔹 Governance isn’t bureaucracy, it’s how we create trust and scale.
    🔹 The best systems don’t replace humans; they make humans irreplaceable.

    If your organisation is looking to move beyond pilots and proofs of concept, this shift in mindset, from automation to augmentation, may be the most important step you take.

    👉 Read the full article here:

  • Tariq Munir

    Author “Reimagine Finance” | Speaker | Helping C-Suite Boost Profits, Cut Costs & Save Time with AI, Data, & Digital | Trusted by Fortune 500s | LinkedIn Instructor

    58,417 followers

    4 AI Governance Frameworks to build trust and confidence in AI.

    In this post, I’m sharing takeaways from leading firms' research on how organisations can unlock value from AI while managing its risks. As leaders, it’s no longer about whether we implement AI, but how we do it responsibly, strategically, and at scale.

    ➜ Deloitte’s Roadmap for Strategic AI Governance
    From Harvard Law School’s Forum on Corporate Governance, Deloitte outlines a structured, board-level approach to AI oversight:
    🔹 Clarify roles between the board, management, and committees for AI oversight.
    🔹 Embed AI into enterprise risk management processes, not just tech governance.
    🔹 Balance innovation with accountability by focusing on cross-functional governance.
    🔹 Build a dynamic AI policy framework that adapts to evolving risks and regulations.

    ➜ Gartner’s AI Ethics Priorities
    Gartner outlines what organisations must do to build trust in AI systems and avoid reputational harm:
    🔹 Create an AI-specific ethics policy; don’t rely solely on general codes of conduct.
    🔹 Establish internal AI ethics boards to guide development and deployment.
    🔹 Measure and monitor AI outcomes to ensure fairness, explainability, and accountability.
    🔹 Embed AI ethics into the product lifecycle, from design to deployment.

    ➜ McKinsey’s Safe and Fast GenAI Deployment Model
    McKinsey emphasises building robust governance structures that enable both speed and safety:
    🔹 Establish cross-functional steering groups to coordinate AI efforts.
    🔹 Implement tiered controls for risk, especially in regulated sectors.
    🔹 Develop AI guidelines and policies to guide enterprise-wide responsible use.
    🔹 Train all stakeholders, not just developers, to manage risks.

    ➜ PwC’s AI Lifecycle Governance Framework
    PwC highlights how leaders can unlock AI’s potential while minimising risk and ensuring alignment with business goals:
    🔹 Define your organisation’s position on the use of AI and establish methods for innovating safely.
    🔹 Take AI out of the shadows: establish ‘line of sight’ over AI and advanced analytics solutions.
    🔹 Embed ‘compliance by design’ across the AI lifecycle.

    Achieving success with AI goes beyond just adopting it. It requires strong leadership, effective governance, and trust. I hope these insights give you enough starting points to lead meaningful discussions and foster responsible innovation within your organisation.

    💬 What are the biggest hurdles you face with AI governance? I’d be interested to hear your thoughts.

  • Pan Wu

    Senior Data Science Manager at Meta

    49,017 followers

    In today's digital age, delivering personalized content is essential for media organizations looking to engage readers effectively. However, balancing algorithmic recommendations with editorial judgment presents a unique challenge: how can we ensure that recommendations are both relevant to readers and aligned with journalistic values?

    In this tech blog, data scientists at The New York Times share their approach to integrating editorial judgment into algorithmic recommendations. Their method follows three key steps, ensuring that human oversight is embedded at every stage of the recommendation process.

    The first step is pooling, where a set of eligible stories is created for a specific module. While the system automatically generates queries to populate this pool, editors also have the flexibility to manually curate or edit the selection when necessary.

    The second step is ranking, which involves sorting stories using a contextual bandit algorithm. To prioritize mission-driven and significant stories, the team quantifies editorial importance in multiple ways. One such approach allows editors to assign a rank to each story, with more recent and newsworthy articles generally receiving higher priority.

    Finally, before stories are shown to readers, the system applies editorial adjustments based on predefined newsroom rules. One key intervention is the Pinning function, which allows editors to override the algorithm and manually place critical stories at the top of the list.

    Beyond these core steps, the team has developed additional functionalities to enhance this integrated approach, ensuring The New York Times’ Home Screen content strikes the right balance between automation and editorial oversight. Their work exemplifies how media organizations can effectively blend human judgment with machine learning, enhancing reader engagement while preserving the integrity of journalism.

    #DataScience #MachineLearning #Algorithm #Personalization #Journalism #SnacksWeeklyonDataScience

    – – –
    Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
    -- Spotify: https://lnkd.in/gKgaMvbh
    -- Apple Podcast: https://lnkd.in/gj6aPBBY
    -- Youtube: https://lnkd.in/gcwPeBmR
    https://lnkd.in/gDFTxxWQ
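A deliberately simplified sketch of that pool, rank, and adjust flow appears below. It is not The New York Times' implementation: the contextual bandit is reduced to a precomputed score plus a crude exploration term, and every field name (bandit_score, editorial_rank, pinned) is hypothetical. Its only purpose is to show where the editorial levers (curation, rank boosts, pinning) sit relative to the algorithm.

```python
import random
from dataclasses import dataclass

@dataclass
class Story:
    story_id: str
    bandit_score: float          # stand-in for a contextual bandit's estimated reward
    editorial_rank: int | None   # set by editors; lower means more important
    pinned: bool = False         # editors can force a story to the top

def build_pool(candidates: list[Story], editor_picks: list[Story]) -> list[Story]:
    """Pooling: automatically generated candidates plus manually curated additions."""
    seen = {s.story_id for s in candidates}
    return candidates + [s for s in editor_picks if s.story_id not in seen]

def rank(pool: list[Story], epsilon: float = 0.1) -> list[Story]:
    """Ranking: algorithmic score blended with editorial importance, plus a
    crude exploration term standing in for the bandit's explore/exploit logic."""
    def key(story: Story) -> float:
        editorial_boost = 1.0 / story.editorial_rank if story.editorial_rank else 0.0
        return story.bandit_score + editorial_boost + random.random() * epsilon
    return sorted(pool, key=key, reverse=True)

def apply_newsroom_rules(ranked: list[Story]) -> list[Story]:
    """Editorial adjustment: pinned stories override the algorithm and go first."""
    return [s for s in ranked if s.pinned] + [s for s in ranked if not s.pinned]

# Usage: editors pin one story and rank another; the algorithm orders the rest.
pool = build_pool(
    candidates=[Story("analysis", 0.7, None), Story("lead-news", 0.4, 1)],
    editor_picks=[Story("live-brief", 0.2, None, pinned=True)],
)
print([s.story_id for s in apply_newsroom_rules(rank(pool))])  # "live-brief" always first
```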

  • Stefan Hunziker, PhD

    Professor of Risk Management | Prof. Dr. habil.

    11,641 followers

    The “subjectivity beast” in risk analysis: are statistical models better than expert opinions?

    This post is a matter of the heart. I have heard and read so many (misleading) statements about the superiority of “objective” (i.e., statistical) over “subjective” (i.e., expert opinion-based) risk analysis.

    It is a complex topic that deserves much more than a simple post, and I can’t cover all the complexity and nuances surrounding it here. See this as a starting point for a hopefully great discussion.

    It is true that, for some risks, data-driven risk analysis and even simple quantitative algorithms regularly outperform experts, as the evidence clearly shows. There are many reasons for this: biases at play, environments where experience does not readily translate into learning, too little experience with certain risks, and many more.

    It is not true that statistical models mean “objective risk analysis.” Many decisions remain highly subjective, such as the choice of the statistical model, the choice of the sample, and the assumptions about causality embedded in the model. It is tempting to equate objectivity with “quantitative” risk analysis and subjectivity with “qualitative” risk analysis. I'm afraid that's not right. Here is why:

    Purely quantitative statistical models can also rely entirely on subjective probability and impact distributions assessed by experts. For example, I can run a Monte Carlo simulation based on a triangular distribution in which experts estimate the worst, best, and most likely scenarios.

    Also, statistical analysis results require human interpretation, which might be biased. A statistical model cannot ensure the analysis problem is correctly framed (e.g., risk scenarios that only cover short-term impacts). Statistical analysis starts and ends with subjective decisions. Specifically, in the case of rare risks, expert opinion may outperform statistical analysis simply because no data exist. Remember that probability theory cannot be applied to assessing single-event risks that have yet to occur.

    Experts may spot wrong model assumptions, or combine limited data with an educated opinion (the combination may be better than relying on data alone). Experts may use scenario analysis to reveal wrongly framed risks. Experts may decompose complex risks using event tree analysis. Experts may adjust the results of data-driven analysis.

    So what does that mean? Two things. First, there is no such thing as objective risk analysis, even if your risk management is fully “quantitative.” It may even lead to the paradox that quantitative risk analysis is more biased because it is believed to be objective. Second, for some risks, the dominant strategy is to rely on expert opinion, and for good reason: experts may outperform statistical analysis in assessing rare (but detrimental) risks.

    Institut für Finanzdienstleistungen Zug IFZ Lucerne University of Applied Sciences and Arts #ifzriskmanagement
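The Monte Carlo example mentioned in the post is easy to reproduce. Here is a minimal sketch using NumPy's triangular distribution; the best, most likely, and worst loss figures are invented for illustration and would in practice come from the experts themselves.

```python
import numpy as np

rng = np.random.default_rng(42)

# Expert-elicited loss estimates in $ millions (figures invented for illustration).
best, most_likely, worst = 0.2, 1.0, 5.0
n_trials = 100_000

# The triangular distribution turns the three subjective estimates into a loss distribution.
losses = rng.triangular(left=best, mode=most_likely, right=worst, size=n_trials)

print(f"Expected loss:   {losses.mean():.2f} $M")
print(f"95th percentile: {np.percentile(losses, 95):.2f} $M")
print(f"P(loss > 3 $M):  {(losses > 3.0).mean():.1%}")
```

The output looks fully quantitative, yet every number driving it is a subjective expert judgment, which is exactly the author's point about equating "quantitative" with "objective".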
