How do financial services firms put agentic AI to work in real-world scenarios? Clayton Pilat, head of AI, ANZ at Synechron, shares actionable steps in his latest article. For direct insights and recommendations, read the full piece here: https://lnkd.in/dBquCbVi
How to use AI in financial services: ANZ's Clayton Pilat shares tips
More Relevant Posts
-
Stop Trusting AI. Start Measuring It. It starts innocently enough. You open a new analytics dashboard that promises to summarise client activity and highlight emerging risks. In seconds, it produces a clean, confident paragraph - the kind of summary you might have spent half an hour assembling from spreadsheets and notes. You scan it, nod and drop it straight into your briefing pack. It looked right. But did you know it was right? That tiny moment of hesitation captures a much bigger challenge. The question is: when is that trust deserved, how do you verify it, and where do you draw the line? In this new landscape, trust is a discipline we must build. Continuing the series with Ash Mondal. https://lnkd.in/e7F-k8QE
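A minimal sketch of what "measuring it" can look like in practice: before an AI-generated summary goes into a briefing pack, check that every figure it quotes actually appears in the underlying records, and hold anything unverifiable for a human. The function names and data shapes below are invented for illustration, not taken from the post.

```python
import re

def extract_figures(text: str) -> set:
    """Pull numeric claims (e.g. '42', '3.5%', '1,200') out of a piece of text."""
    return set(re.findall(r"\d[\d,]*(?:\.\d+)?%?", text))

def verify_summary(summary: str, source_records: list) -> dict:
    """Flag any figure in the AI summary that cannot be found in the source data."""
    source_figures = extract_figures(" ".join(source_records))
    unverified = [f for f in extract_figures(summary) if f not in source_figures]
    return {"verified": not unverified, "unverified_figures": unverified}

# Hypothetical usage: anything unverified goes back to a human reviewer.
summary = "Client activity rose 12% this quarter, driven by 1,450 new accounts."
records = ["Q3 activity: +12% vs Q2", "New accounts opened: 1,450"]
result = verify_summary(summary, records)
if not result["verified"]:
    print("Hold for review:", result["unverified_figures"])
else:
    print("All quoted figures trace back to the source records.")
```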
-
What is agentic AI and what role could it play in financial services? Why the next stage in AI is about adding automation to multi-step processes, helping financial professionals scale their expertise. Read more: https://lnkd.in/eAMrNs-G Partner Content by Moody’s #ai #innovation #financialservices Moody's Analytics
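One way to picture "adding automation to multi-step processes" is a pipeline in which each step is an explicit function and low-confidence results are escalated to a person rather than acted on. This is a hedged sketch under those assumptions; the step names and the confidence threshold are invented, not drawn from the article.

```python
# A toy multi-step workflow: retrieve -> assess -> draft, with a human escalation gate.
def retrieve_filings(company: str) -> list:
    return [f"{company} 2024 annual report excerpt", f"{company} Q3 credit note"]

def assess_risk(documents: list) -> dict:
    # Placeholder heuristic; an agentic system would call a model here.
    return {"rating": "watchlist", "confidence": 0.62, "evidence": documents}

def draft_memo(assessment: dict) -> str:
    return f"Recommend {assessment['rating']} status based on {len(assessment['evidence'])} sources."

def run_pipeline(company: str, confidence_floor: float = 0.8) -> str:
    assessment = assess_risk(retrieve_filings(company))
    if assessment["confidence"] < confidence_floor:
        return "Escalated to analyst: confidence below threshold."
    return draft_memo(assessment)

print(run_pipeline("ExampleCo"))
```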
-
Leslie Norman, the chief technology officer at Dynasty Financial Partners, said that advisors with her firm have been rushing to make use of AI. “What we tell people is this is not a space that you should be navigating alone,” Norman said. https://lnkd.in/eWCKb6Tg
-
🔍 You can’t trust what you can’t see. That thought has stayed with me lately.
If you’ve been following my recent posts, you know I’ve been exploring the evolving world of AI agents — not just their capabilities, but the trust they demand from us. I’ve written about data quality, culture, and the very human confidence that underpins every meaningful AI transformation.
This post continues that conversation. Because trust, as foundational as it is, doesn’t survive on belief alone. It has to be demonstrated, measured, and sustained through practice. And that’s where observability comes in.
If trust is the promise, observability is the proof. It’s what allows us to move from assuming our AI systems are doing the right thing to knowing they are — consistently, transparently, and in alignment with both our business principles and ethical intent.
In traditional systems, monitoring has always been straightforward. We track uptime, latency, and performance. But Agentic AI changes the equation. These systems don’t just run — they reason. They make inferences, collaborate, and adapt in ways that make simple monitoring feel obsolete. The question is no longer, “Is it working?” It’s, “Do we understand what it’s doing, why it’s doing it, and whether those actions align with our goals and responsibilities?”
That’s the heart of observability. Observability isn’t about more dashboards or bigger logs. It’s about understanding the story behind every decision — where the data came from, how it was processed, what logic was applied, and when human judgment entered the loop. When that story becomes visible, confidence follows. Leaders can explain outcomes. Engineers can improve reasoning. And organizations can move from reaction to reflection — from putting out fires to preventing them.
Observability turns autonomy into accountability. It transforms AI from a black box into a transparent partner that earns trust instead of eroding it.
I don’t see observability as surveillance or control. I see it as situational awareness — the ability to understand what’s happening between humans and machines in real time. It’s how innovation moves fast without losing integrity, and how autonomy evolves without drifting beyond oversight.
Because at the end of the day, trust isn’t built in code. It’s built in how we see, understand, and explain what that code does. That’s why I keep coming back to this truth: ✨ You can’t trust what you can’t observe.
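To make "the story behind every decision" concrete, here is a minimal sketch of what a per-step decision trace for an agent might capture: data provenance, the reasoning applied, the action taken, and whether a human entered the loop. The record shape, field names, and the requires_review check are illustrative assumptions, not a reference to any particular observability tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    """One observable step in an agent run: what it used, what it did, and why."""
    agent: str
    action: str
    data_sources: list          # where the inputs came from
    reasoning_summary: str      # the logic the agent applied at this step
    human_approved: bool = False  # did a person sign off on this step?
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def requires_review(trace: DecisionTrace, sensitive_actions: set) -> bool:
    """Flag steps that touched sensitive operations without human sign-off."""
    return trace.action in sensitive_actions and not trace.human_approved

# Illustrative usage: surface the traces a reviewer should look at first.
traces = [
    DecisionTrace("risk-agent", "summarise_exposure", ["positions_db"],
                  "Aggregated open positions by sector"),
    DecisionTrace("risk-agent", "adjust_limit", ["positions_db", "policy_doc"],
                  "Breach threshold inferred from policy wording"),
]
for t in traces:
    if requires_review(t, sensitive_actions={"adjust_limit"}):
        print(f"Needs human review: {t.agent} -> {t.action} ({t.reasoning_summary})")
```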
-
I just read a piece that shifted how I'm thinking about AI governance conversations. Usually, these debates go nowhere. One camp says AGI is just around the corner, the other says AI is overhyped, and nobody listens. But this article in Asterisk Magazine actually maps out where both sides agree on next steps. https://lnkd.in/dDZmDghS
Here's what struck me: whether you think AGI will soon be among us or believe AI will just be another useful technology soon to be unmasked for what it really is, both sides converge on some practical steps. The surprising consensus:
- We don't actually know how to align AI systems with human values yet. This isn't solved, and both camps admit it.
- Current AI shouldn't be autonomously running critical infrastructure. Not power grids, not financial systems, not military operations. The disagreement is about when that might be safe, not whether it's safe now (it's not).
- Transparency and auditing matter more than most companies want to admit. You can't govern what you can't see.
What I found most useful: this isn't about picking sides between "AI will change everything next Tuesday" and "AI is just fancy autocomplete." It's about acknowledging we're navigating genuine uncertainty, and doing so requires governance frameworks that work across wildly different scenarios. Finding shared ground first, then working from there. This rarely happens in practice, and reading the piece made me hopeful.
-
As governments around the world expand their use of Artificial Intelligence, one question becomes even more important: Are we using AI in a way that builds trust or weakens it? I recently came across a great article from The Brookings Institution titled “How can governments use AI systems better?” What stood out to me most was a simple truth: smart transformation only succeeds when it is guided by governance and transparency. AI should never be an end in itself, but rather a tool to enhance fairness, efficiency, and accountability in public services. In government environments, the real challenge lies in balancing innovation with responsibility, ensuring that every step forward in automation and analytics is matched with integrity and trust. That’s where true smart transformation begins. https://lnkd.in/dG5QnNBG
-
The $440,000 AI Mistake That Cost a Reputation.
The integrity of government contracting is under threat. In October 2025, Deloitte Australia was forced to refund a portion of an AU$440,000 policy report after submitting a document riddled with over 20 AI-generated fabrications—a phenomenon known as "hallucination".
What Went Wrong? Deloitte used Azure OpenAI GPT-4o to "fill documentation gaps" but critically confused AI augmentation with AI delegation, skipping mandatory human verification. The result was "AI slop" that constituted a severe failure of professional diligence in a high-stakes environment. The shocking errors included:
• Invented quotes falsely attributed to Federal Court judges, misstating core legal findings.
• Non-existent legal precedents and fabricated academic books attributed to real professors.
• Over a dozen bogus references, leading to a $97,000 partial refund and widespread condemnation.
The scandal occurred during a critical government review mandated after the catastrophic Robodebt scheme, proving that the quality assurance mechanism itself failed due to unchecked automation. As one analysis concluded: "GPT-4o did not malfunction. Deloitte's process did".
The Solution: Human Verification is Non-Negotiable
This case study validates Aliff Capital's human-centered proposal methodology, which prevents AI fabrication and maintains epistemic integrity. Learn how Aliff Capital guarantees zero AI hallucinations by:
1. Treating AI as Augmentation Only: AI drafts initial content based on human research, but never serves as the final authority.
2. Implementing Pink-Red-Gold Quality Gates: Every proposal goes through three mandatory human review stages managed by a 6-expert team (including Compliance Analysts and Technical SMEs) designed specifically to catch fabricated citations, phantom legal precedents, and invented technical specifications.
3. Ensuring Accountability: The Gold Review requires executive leadership sign-off with "zero tolerance for fabrication," protecting clients from False Claims Act exposure and reputational damage.
In government contracting, fabricated citations don't just cost you the contract. They cost you your reputation.
Discover the proven methodology that delivers results:
• 22% Win Rate (vs. industry average 10-15%).
• $47M+ in contracts won.
• Zero AI hallucinations in submitted proposals.
Don't risk your firm becoming the next case study in "AI slop". Watch the full video to understand the true cost of unchecked AI and how compliance maturity—including steps like DCAA-compliant accounting and CMMC readiness—serves as the ultimate competitive differentiator in the $773 billion GovCon market.
---
🔗 Resources & Next Steps:
• Learn more about Aliff Capital's Methodology: https://lnkd.in/dqcHF6JH
https://lnkd.in/djfpbD2p
GOVCON in the Age of AI
https://www.youtube.com/
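A generic illustration of the kind of automated pre-check that can sit in front of human review gates like those described in the post above: block a draft whenever it cites something that does not resolve against a vetted, human-maintained reference list. This is not Aliff Capital's methodology; the citation format, reference names, and helper functions are invented for the example.

```python
import re

# Hypothetical vetted reference list maintained by human researchers.
VETTED_REFERENCES = {
    "Smith v. Commonwealth (2019)",
    "Department of Finance, Annual Procurement Review 2023",
}

def extract_citations(draft: str) -> list:
    """Pull bracketed citations like [Smith v. Commonwealth (2019)] out of a draft."""
    return re.findall(r"\[([^\]]+)\]", draft)

def screen_draft(draft: str) -> list:
    """Return citations that do not resolve against the vetted reference list."""
    return [c for c in extract_citations(draft) if c not in VETTED_REFERENCES]

draft = (
    "The obligation is settled law [Smith v. Commonwealth (2019)] and was "
    "reaffirmed in [Jones v. Treasury (2024)]."  # second citation is invented for the demo
)
unresolved = screen_draft(draft)
if unresolved:
    print("Blocked pending human verification:", unresolved)
```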
-
RAG isn’t enough. LLMs generally use RAG to do the following: search some docs, summarize information, answer a question. It’s powerful, but fundamentally reactive. It can only tell you what’s already written down. For organizations that operate in high-stakes, dynamic environments, that’s not enough. They need AI that can reason across fragmented systems, capture relationships between entities, incorporate human expertise that isn’t neatly documented, and support decisions in real time — not just retrieve passages from a knowledge base. This is where Knowledge Engines come in. They unify data across silos, build a representation of an organization's ground truth, and provide a foundation for Expert AI Agents that don’t just read, but can generate net new insights. In short: RAG reads. Knowledge Engines reason. Learn more: https://lnkd.in/eciEWbqV
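As a rough illustration of the "reads versus reasons" distinction, here is a minimal sketch contrasting passage retrieval with a query over a tiny graph of entities and typed relationships merged from different systems. The entities, relation names, and the find_exposure helper are all invented for the example; a real knowledge engine is far richer than this.

```python
# A toy "knowledge engine": entities and typed relationships unified from
# separate source systems, answered with multi-hop traversal rather than retrieval.
relations = [
    # (subject, relation, object), merged from e.g. CRM, trading, and news feeds
    ("ClientA", "holds_position_in", "SupplierCo"),
    ("SupplierCo", "depends_on", "PortOfX"),
    ("PortOfX", "affected_by", "StrikeEvent"),
]

def neighbours(entity):
    """Direct relationships for an entity."""
    return [(rel, obj) for subj, rel, obj in relations if subj == entity]

def find_exposure(entity, event, path=None):
    """Multi-hop traversal: is `entity` connected to `event` through any chain?"""
    path = (path or []) + [entity]
    for rel, obj in neighbours(entity):
        if obj == event:
            return path + [event]
        found = find_exposure(obj, event, path)
        if found:
            return found
    return None

# RAG could retrieve a passage mentioning the strike; the graph explains why it matters:
print(find_exposure("ClientA", "StrikeEvent"))
# -> ['ClientA', 'SupplierCo', 'PortOfX', 'StrikeEvent']
```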
-
The latest McKinsey #StateOfAI report shows how organizations are actually using AI. Interestingly, few respondents in banking report at-scale implementation of AI agents: 3-7% for most functions (with the highest score in Risk, Legal, and Compliance), while it's over 15% for key functions in insurance and other industries. Check out the latest survey results here: https://mck.co/StateOfAI
-
NSW’s new AI guidance tackles the tricky question of responsibility when agents act on their own — and who answers when they don’t.