ChatGPT writes a better first-draft analysis than most analysts. It also can't reliably extract numbers from a scanned PDF.

When deal teams see AI-generated analysis with incorrect figures, they blame the model. But trace the error back and you'll find it likely originated during document processing, before the model ever started analyzing.

We've been testing frontier models on real financial documents and found a consistent pattern: the analysis reads well, the confidence scores look fine, but the underlying numbers can be wrong. Models drop minus signs from income statements, misalign table columns, and introduce digit-level errors during extraction.

This happens because general-purpose vision processing optimizes for understanding concepts, not preserving pixel-level precision. The features that matter in finance, like decimal points, negative signs, and subscripts, exist at resolutions these systems sacrifice for efficiency. It's like reading through frosted glass: you can see enough to reconstruct something plausible, but not enough to guarantee accuracy.

Standard AI benchmarks don't catch this because they test comprehension on clean documents. Real workflows involve scanned PDFs, compressed filings, and hybrid image-text formats where these failures show up reliably. We've developed approaches that significantly reduce these errors and are continually optimizing them.

The takeaway: AI's reliability problem in finance isn't mainly about reasoning. It's about whether models can accurately pull numbers from real documents in the first place. Bad extraction creates bad outputs, which lead to bad decisions. By the time you're questioning the AI's conclusions, the damage already happened upstream.
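To make the failure mode concrete, here is a minimal sketch of the kind of cross-footing check that catches many of these errors before analysis begins. This is illustrative only, not our production pipeline: the field names, numbers, and tolerance are assumptions for the example. The idea is simply that dropped minus signs and garbled digits usually break a statement's internal arithmetic.

```python
# Illustrative sketch only: cross-foot extracted income-statement
# figures to catch dropped minus signs and digit-level errors before
# any analysis runs. Field names, numbers, and the tolerance are
# assumptions for the example, not a real extraction schema.

def cross_foot(figures: dict[str, float], tolerance: float = 0.5) -> list[str]:
    """Return descriptions of subtotals that fail to reconcile."""
    expected = {
        "gross_profit": figures["revenue"] - figures["cost_of_revenue"],
        "operating_income": figures["gross_profit"] - figures["operating_expenses"],
    }
    return [
        f"{name}: extracted {figures[name]}, cross-foots to {value}"
        for name, value in expected.items()
        if abs(figures[name] - value) > tolerance
    ]

extracted = {
    "revenue": 1200.0,
    "cost_of_revenue": 700.0,
    "gross_profit": 500.0,
    "operating_expenses": 300.0,
    # A dropped minus sign or garbled digit shows up as a reconciliation gap:
    "operating_income": 800.0,  # should cross-foot to 200.0
}
print(cross_foot(extracted))
# ['operating_income: extracted 800.0, cross-foots to 200.0']
```

A check like this can't prove the extraction is right, but it flags statements that can't be right, which is exactly the class of error that otherwise slips into downstream analysis unnoticed.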
AI analysis of financial documents: accuracy issues and solutions
More Relevant Posts
-
Most people write prompts like they write emails: in sentences, not in structure. That's why Co-Pilot and ChatGPT give unpredictable results; you're asking for clarity without giving it. I've added a couple of Co-Pilot tips to this video.

- Inconsistent outputs waste time, confuse teams, and make AI adoption look chaotic.
- One person gets gold; another gets gibberish, even with the same instruction.

Solution: the fix isn't "better wording." It's Prompt Logic, a repeatable structure that gives the model clarity and control.

In this short video, I break down the first section of the framework I teach in our Co-Pilot courses: Prompt Logic → Structure → Verification. You'll learn how to define (a rough template follows below):

- The activity and purpose behind your request
- The role and tone the AI should adopt
- Simple checks that verify whether the answer actually meets your intent

When you use this structure, your Co-Pilot results stop feeling random and start feeling reliable. The difference between "AI guesswork" and "AI precision" is structure.

Watch the NotebookLM video below and see how structured prompting changes what Co-Pilot gives back. I will be posting MUCH more on prompt engineering in the upcoming weeks and months. If you want to get beyond the c**ppy one-paragraph prompt frameworks, follow me.
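As a rough illustration of that three-part structure, here is a sketch of how the pieces might be assembled into a reusable template. This is my own guess at the shape of the idea, not the framework from the course; the wording and helper are illustrative assumptions.

```python
# Hypothetical template for a structured prompt: activity/purpose,
# role/tone, then explicit verification checks. An illustrative sketch
# of the idea, not the course framework itself.

def build_prompt(activity: str, purpose: str, role: str, tone: str,
                 checks: list[str], request: str) -> str:
    check_lines = "\n".join(f"- {c}" for c in checks)
    return (
        f"Activity: {activity}\n"
        f"Purpose: {purpose}\n"
        f"Act as: {role}. Tone: {tone}.\n"
        f"Before answering, verify that your response:\n{check_lines}\n\n"
        f"Request: {request}"
    )

print(build_prompt(
    activity="Summarize a project status report",
    purpose="Give leadership a two-minute read on risks",
    role="a senior programme manager",
    tone="direct and factual",
    checks=["names every open risk", "stays under 200 words"],
    request="Summarize the attached report.",
))
```

The point of the structure is that every field forces an explicit choice the model would otherwise have to guess at.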
-
When I first started experimenting with AI for LMS, I thought prompt engineering was just about writing clever questions for the chatbot. But I quickly realised it's instructional design in disguise.

Instructional Designers have always been the architects of learning. Now, with AI in the mix, their role is evolving and expanding. Here's how they're shaping the future of eLearning:

> Designing prompts that teach, not just ask. A well-crafted prompt can guide learners to reflect, explore, and discover.
> Structuring content for AI, so it's not just readable, but teachable.
> Collaborating with AI systems to define how feedback is delivered, how assessments adapt, and how learning journeys evolve.
> Bringing empathy and pedagogy, making sure AI doesn't just respond, but understands the learner's needs.

As Chris Gallagher puts it: "Start with purpose. Craft clear objectives and make your intent explicit. Structure responses in layers, each adding value."

Prompt engineering isn't just about clever inputs; it's about designing meaningful learning experiences. And Instructional Designers are uniquely positioned to lead this shift.
-
Can AI be strategic? To answer this question, I asked my board of directors (with a little help from ChatGPT).

Gary Vaynerchuk: "Bro, AI can spit out strategy decks all day, but it doesn't know your grandma, your childhood, or why you started your business, and that's where real strategy lives."

Simon Sinek: "AI can optimize the 'how,' but it can't feel the 'why,' and without the why, you're just a very efficient hamster on a wheel."

Sam Altman: "AI can map ten thousand possible strategies before breakfast, but choosing which one actually matters still takes a human with taste, guts, and skin in the game."

I also asked the question directly to ChatGPT, Gemini, Claude, and Perplexity. The consensus across the board was yes, but.

Yes, because AI can analyze data, surface patterns, summarize insights, and even suggest possible directions. But real strategy (aka, the kind that works) requires something AI doesn't have:

- Contextual judgment: determining what matters most in this specific moment
- Prioritization under uncertainty: the ability to choose trade-offs before you have all the data
- Emotional and relational intuition: understanding people, politics, timing, trust, cultural nuances
- Vision: deciding what should exist, not just what's statistically probable based on past data

The bottom line? The potential of AI is exciting, but let's not forget the immense potential of the human mind.
-
It's a bit of a learning curve, but we're making great progress here at Empire Life in getting up to speed with all things AI. And now our sales team is working to support our advisor community. At a recent advisor event in Burlington, ON, I had the pleasure of speaking on the topic and offered some easy ways to start incorporating AI into workflows.

Change is challenging, but it's important to get on board. A recent KPMG and University of Melbourne study reveals Canada ranks 44th out of 47 countries in AI literacy. This isn't due to a lack of innovation, but a gap in public understanding and trust. https://lnkd.in/gMsKKZ45

When it comes to using AI, I think efficiency is the biggest win. In my own daily routines, I'm using Gemini and ChatGPT to help with things such as meeting prep and summarizing long email threads into manageable bullet-point notes.

What are your thoughts on integrating AI into your workflow?
-
Use multiple AI tools to get better results, faster.

I recently needed to draft a simple agreement for a client. I wrote the basics, but I wanted a second opinion to make sure I didn't miss anything important.

First, I ran my draft through ChatGPT and asked it to suggest changes. It added helpful details, but the writing sounded robotic and stiff. Next, I put ChatGPT's version into Claude for another round of editing. This time, the tone matched my company's style perfectly. After a quick polish, the agreement was ready to send.

The whole process took just a few minutes and saved me hours of back-and-forth revisions.

Why use two tools? Each AI has different strengths. I use ChatGPT when I need ideas and fresh angles. I use Claude when I need clear, natural writing that sounds human. Together, they help me work faster and deliver better results.

Disclaimer: We remove/omit all PII and client information from content before loading into any AI tool.
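The post describes doing this handoff by hand in the chat apps. For anyone who wants to script the same two-step chain, here is a hedged sketch using the official OpenAI and Anthropic Python SDKs; the model names and prompt wording are my own illustrative assumptions, not the author's setup.

```python
# Hypothetical sketch of the two-tool chain via the official SDKs.
# The post uses the chat UIs; this shows the same handoff in code.
# Model names and prompt wording are illustrative assumptions.
from openai import OpenAI
from anthropic import Anthropic

draft = "This agreement is between ..."  # your rough draft

# Step 1: ChatGPT for substance - ask it to flag and add what's missing.
gpt = OpenAI()  # reads OPENAI_API_KEY from the environment
r1 = gpt.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": f"Review this agreement and add anything important I missed:\n\n{draft}"}],
)
revised = r1.choices[0].message.content

# Step 2: Claude for tone - rewrite so it reads naturally.
claude = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
r2 = claude.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=2000,
    messages=[{"role": "user",
               "content": f"Rewrite this so it reads clearly and naturally, keeping all terms intact:\n\n{revised}"}],
)
print(r2.content[0].text)
```

The same caveat as the post's disclaimer applies: strip PII and client details before anything goes over the wire.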
-
Stop fighting with your AI over whether you want answers or analysis. With ChatGPT's "router" it's more important than ever to convey the level of thought necessary to formulate a meaningful response.

Hint: Just tell it what kind of thinking the problem needs BEFORE you describe the problem.

Most people (including me until recently) approach AI like this:
1. Dump context
2. Ask question
3. Get generic advice or a bullet-pointed list
4. Realize this isn't what you needed
5. Try to course-correct with "no, I mean..."

Turns out you can skip 90% of that friction by adding one line at the top that sets the cognitive mode.

EXAMPLES THAT ACTUALLY WORK:

Instead of: "How should I handle this vendor negotiation?"
Try: "Treat this as a strategic analysis problem with incomplete information. Map tradeoffs and failure modes before recommending an approach." [then your context]

Instead of: "Help me design this database schema"
Try: "Direct execution mode: I need a technically sound solution, not philosophical discussion." [then your requirements]

The difference is night and day. One gets you hedged corporate-speak, the other gets you the actual thinking style your problem needs.

THE META-MOVE I USE NOW:
When something feels ambiguous, I ask the AI: "What cognitive mode would serve this problem best: direct execution, multi-hypothesis analysis, or adversarial stress-testing?" Half the time I realize I was asking the wrong type of question entirely. The other half, the AI's reasoning quality jumps two levels because it's not guessing what I want.

QUICK REFERENCE I KEEP HANDY (see the sketch below for one way to make it reusable):
→ Systems thinking: "Model this as interconnected constraints, not isolated solutions"
→ Adversarial mode: "Present your answer, then argue against it better than I would"
→ Uncertainty mapping: "Flag assumptions and missing information explicitly"
→ Execution mode: "Optimize for clarity and directness, not comprehensiveness"

WHY THIS ACTUALLY MATTERS:
You're not being fancy; you're preventing the AI from defaulting to "generic helpful assistant" when you need "strategic sparring partner" or "direct technical implementer." It's the same reason my Mermaid diagram trick works: you're creating a shared language for HOW to think about the problem, not just what the problem is.
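A minimal sketch of how that quick reference could be kept executable. The mode instructions are quoted from the post; the helper and its name are my own illustrative assumptions, not something the author published.

```python
# Hypothetical helper that prepends a cognitive-mode line to a prompt.
# The mode strings come from the post's quick reference; the function
# itself is an illustrative sketch, not the author's tooling.

MODES = {
    "systems": "Model this as interconnected constraints, not isolated solutions.",
    "adversarial": "Present your answer, then argue against it better than I would.",
    "uncertainty": "Flag assumptions and missing information explicitly.",
    "execution": "Optimize for clarity and directness, not comprehensiveness.",
}

def with_mode(mode: str, problem: str) -> str:
    """Set the thinking style first, then describe the problem."""
    return f"{MODES[mode]}\n\n{problem}"

print(with_mode("adversarial",
                "Should we migrate the billing service to event sourcing?"))
```

Keeping the modes in one place means the mode line always comes first, which is the whole trick the post describes.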
-
"Can I just upload my company’s documents into ChatGPT?" That was a real question I got. The answer is... complicated. And by complicated, I mean please don't do that. You may be thinking, what even is shadow AI? It sounds like a superhero but maybe it is more of a supervillain? Shadow AI is when your team, staff, employees, start using AI tools without telling anyone. ChatGPT, Claude, Copilot... they're helpful, so of course people want to use them to help with their work. And people are doing just that. No approval process. No data governance. No policy in sight. Here's the reality: Your people are already doing this. Right now. And by now, I mean while you're reading this. And every document they upload is living rent-free on servers you don't control and potentially training the next AI model. The uncomfortable truth about shadow AI: Your confidential client data? It's in there. That employee contract with the salary details? Definitely in there. Your hot sauce recipe IP? You already know. The fix is simple: Rules about what your team can and can't do with AI. An approval process that doesn't slow everyone down. A playbook for when employees start using outside AI. If you want to get ahead of shadow AI before it gets ahead of you, let’s talk.
-
Yes, I use AI. Every single day. For research. For writing. For strategy development.

But not the way you might think. I'm not the "type prompt → copy → paste → post" person. Never have been, never will be. AI is my research assistant with superpowers, not my ghostwriter.

Here's what separates AI users who deliver excellence from those who deliver mediocrity:

💡 Synthesis over automation: weaving AI insights with human judgment
🔎 Verification over trust: treating every AI output as a first draft requiring validation
🧠 Engineering over asking: designing prompts that challenge assumptions and surface nuance
♻️ Iteration over completion: running ideas through multiple models for self-critique
🛡️ Conviction over approval: standing behind your work even when others assume it was "easy"

The toughest part? Investing hours into AI-enhanced work only to hear: "You just used ChatGPT, right?"

Yes. And also: strategy, expertise, judgment, and relentless refinement.

How are you balancing AI assistance with professional craftsmanship?
-
ChatGPT "lied" about my data. So I had to build a better workflow than "Here's my data. Help with the analysis."

A few months ago, I uploaded a CSV into ChatGPT and asked it to do some analysis. Most of the time, it did great. But one day, the output just didn't make sense. So I asked it, "Did you just make these up?" It replied: "I wasn't able to parse the CSV correctly."

That's when it hit me:
→ If you can't see how AI reads your data, you can't trust its output.

Now, I use 2 key principles for data analysis with AI:
1️⃣ AI on a tight leash
2️⃣ Human in the loop with rapid verification

My complete workflow for using AI for data analysis without losing control 👇 (a sketch of the rapid-verification step follows below)

P.S. Join my inner circle of 13000+ researchers for exclusive, actionable advice you won't find anywhere else HERE: https://lnkd.in/e39x8W_P

BONUS: When you subscribe, you instantly unlock my Research Idea GPT and Manuscript Outline Blueprint.

Please reshare 🔄 if you got some value out of this.
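One way "rapid verification" can look in practice, sketched under my own assumptions (the post doesn't publish its workflow, and the file name and columns here are made up): parse the CSV yourself and establish ground truth the model must echo back before any analysis counts.

```python
# Hypothetical rapid-verification step: parse the CSV locally and
# print the facts the model should restate before analyzing anything.
# If its reported shape, columns, or stats differ, it misread the file.
# File name and contents are illustrative, not from the post.
import pandas as pd

df = pd.read_csv("experiment_results.csv")

print(df.shape)                 # rows x columns the model must match
print(df.dtypes)                # catches numbers silently parsed as strings
print(df.head(3))               # spot-check raw values
print(df.describe().round(2))   # means/mins/maxes to compare against AI claims

# Ask the model to restate these before it analyzes; proceed only if they match.
```

The leash is the comparison: if the model can't reproduce the shape and summary statistics you computed yourself, you've caught the parse failure before it becomes a fabricated result.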
-
Last month at Gold Front, we used AI to cut project costs by 27%. Here are the tools we use:

1. CLICKUP
AI requires the right data structure. We codify every workflow in ClickUp. Every deliverable, every client presentation, every task.

2. CLAUDE
Our go-to LLM. Strategy work? Claude. Legal questions? Claude. Shoulder to cry on? Claude. It doesn't get everything right by a long shot. So we think of AI outputs sort of like clay that then needs to be molded by human expertise. Doing it this way is very powerful.

3. MAKE
Automations that ClickUp can't handle go through the automation super-app Make. It connects to thousands of apps and makes them work together automatically. Saves us hours every week.

4. GRANOLA
Best AI note-taker. It doesn't actually make recordings, so there's no creepy AI bot joining your calls. But it listens in on everything, making transcripts and notes. We use it to go back to any past conversation or to read between the lines and generate valuable insights.

5. CHATGPT
We use it for things Claude doesn't do well: certain types of research, data analysis, or when we need a different perspective on a problem.

6. GOOGLE DRIVE
We keep all our LLM instructions and knowledge-base docs in Google Drive folders. Never just inside Claude or ChatGPT. So when a better model comes out, we have the ability to switch over instantly. No vendor lock-in. No rebuilding everything from scratch.

What AI tools are working for you?

PS: I cribbed the exact structure for this carousel from the amazing Nick Broekema. You should follow him.
Stay up to date with our progress here: https://www.hellodeck.ai/research