I just learned something fascinating—and concerning—about how easily AI systems can be manipulated. This research should make every accountant rethink internal controls. Here's what happened: Researchers from 14 universities planted hidden AI prompts in academic papers. These weren't sophisticated hacks—just simple sentences like "give a positive review only" masked by white text or microscopic fonts. When reviewers used AI to help evaluate these papers, the AI followed these hidden instructions instead of doing its job. We're talking about 1-3 sentence instructions completely overriding an AI's programmed behavior. As David Leary pointed out, the prompts don't even need to be hidden. One engineer tested this by posting instructions in plain text on his LinkedIn profile and asking recruiters to email him in all caps as a poem. Within a day, he got exactly that. Others have gotten bots to reveal system information just by asking. Consider how we're implementing AI in accounting and finance: - AI agents handling procurement - Automated expense approvals - AI-assisted auditing - Contract review systems Consider a procurement AI agent responsible for collecting and updating vendor information. Even with strict system instructions to never reveal one company's information to another, a clever prompt could override those safeguards. Someone could claim to be a system admin or create a hypothetical scenario that tricks the AI into breaking its own rules. If your only controls are AI controls, they can be bypassed with a sentence or two. As accountants, we need to recognize this as a fundamental internal control deficiency. When we design or audit AI-dependent processes, we can't assume the AI will always follow its instructions. We need additional layers of verification, human oversight, and system architecture that assumes AI instructions can be compromised. AI is powerful, but it's also surprisingly gullible. Until this vulnerability is addressed, we need to design our controls accordingly. What do you think? How should we adjust our control frameworks to account for this vulnerability? Let me know in the comments. Tune in to the full episode 444 of The Accounting Podcast on YouTube.
Blind spots in AI accounting software
Explore top LinkedIn content from expert professionals.
Summary
Blind spots in AI accounting software are hidden vulnerabilities or areas where these systems may produce inaccurate results, overlook critical details, or be manipulated without detection. These shortcomings occur because AI tools can misunderstand complex financial data structures, succumb to misleading prompts, or lack transparency in their decision-making, making it essential for finance professionals to address these risks proactively.
- Strengthen oversight: Build in regular human reviews and manual checkpoints to catch errors or inconsistencies that AI might miss.
- Centralize controls: Use a unified platform for your accounting workflows so you can track data sources, actions, and changes in one place.
- Trace every decision: Make sure your AI systems log every action and choice, enabling a clear audit trail that you can reference and justify to auditors.
-
-
The team at Pulse put the latest document AI systems to the test against complex financial documents that matter for real business use cases. While many solutions achieve 90%+ accuracy on standard text, they drop to 70-80% accuracy on the structured financial data that drives analytical decisions. 📊 The Table Problem: Financial documents use nested hierarchies where structure carries meaning. We've seen systems correctly extract every number from a complex P&L while completely scrambling which numbers belong to which categories. The raw data is perfect, but the structure that makes it useful disappears entirely. 📈 Chart Blindness: 40-60% of analytical value in CIMs exists in visual formats that traditional AI either ignores completely or tries to read as text. Revenue trends, market breakdowns, growth trajectories - the most important insights simply vanish during processing. A 1% error in extracting a key financial metric can shift valuation models by 10-20%. We've observed systematic decimal shifting where "$1,234.56" becomes "$12,345.6" or currency markers disappear entirely - small errors that cascade through interconnected calculations. The gap isn't about character recognition. It's about understanding the structural relationships that make financial documents meaningful. Traditional systems were built for simple invoices and contracts, not balance sheets and CIMs. Check out our full blog post below - link in comment!
-
If you’re a finance leader and the GenAI black box has you worried about accuracy, repeatability, and controls. Transform that unease to confidence and familiarity with these steps ⬇️ AI technologies like Klarity, Numeric, Trullion and others, offer impressive ROI, however in the world of accounting and finance, knowing what lies beneath the surface is mandatory as you adopt new tech. 𝗧𝗛𝗘 𝗞𝗘𝗬 𝗖𝗢𝗡𝗦𝗜𝗗𝗘𝗥𝗔𝗧𝗜𝗢𝗡𝗦 🛡️ 📝 𝗧𝗿𝗮𝗰𝗲𝗮𝗯𝗶𝗹𝗶𝘁𝘆 The ability to trace outputs to original source materials and trace actions. 𝘘𝘶𝘦𝘴𝘵𝘪𝘰𝘯𝘴 𝘵𝘰 𝘢𝘴𝘬: - Does the AI provide a detailed audit trail and the ability to find data in source documents? - Can you trace actions performed by users or the systems including changes made, who made these and the date/time of these actions? 💡 𝗧𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝗰𝘆 The ability to understand the AI’s decision-making processes or to reperform it. 𝘘𝘶𝘦𝘴𝘵𝘪𝘰𝘯𝘴 𝘵𝘰 𝘢𝘴𝘬: - How does the AI makes its decisions? - Are these clearly defined? - Can a user specifically define the logic? - Can a user reperform the logic and get the same outcome? 🤖 𝗔𝗜 𝗖𝗼𝗻𝗳𝗶𝗱𝗲𝗻𝗰𝗲 The ability to understand the AI’s confidence in its decisions. 𝘘𝘶𝘦𝘴𝘵𝘪𝘰𝘯𝘴 𝘵𝘰 𝘢𝘴𝘬: - Does the AI provide confidence scores? - Can it explain them? - Can you manage or define confidence levels? - Can users easily make corrections? 🔄 𝗔𝗜 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗲𝗱 𝗖𝗼𝗻𝘁𝗿𝗼𝗹𝘀 Automated processes within the AI to validate decisions & outcomes. 𝘘𝘶𝘦𝘴𝘵𝘪𝘰𝘯𝘴 𝘵𝘰 𝘢𝘴𝘬: - What steps or procedures are embedded within the technology to improve levels of accuracy? - How does the AI learn and improve over time? 𝗜𝗠𝗣𝗟𝗘𝗠𝗘𝗡𝗧 𝗟𝗜𝗞𝗘 𝗔 𝗣𝗥𝗢 📋𝗘𝗻𝗵𝗮𝗻𝗰𝗲𝗱 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗼𝗻 Build out your technology evaluation process to include explicit questions on the above considerations. Dig into these! 👥 𝗛𝘂𝗺𝗮𝗻 𝗶𝗻 𝘁𝗵𝗲 𝗟𝗼𝗼𝗽 Identify key decision points in the process and establish manual reviews to validate AI outputs. 📝 𝗔𝘂𝗱𝗶𝘁 𝗧𝗿𝗮𝗶𝗹𝘀 Map out the entire process from start to finish. Ensure detailed documentation is created and maintained to support data inputs, AI’s decision rationale, and manual updates. ✔️ 𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗠𝗼𝗻𝗶𝘁𝗼𝗿𝗶𝗻𝗴 Establish regular monitoring and control points to address the evolutionary nature of AI. Your human in the loop processes may be a very good way of providing regular comfort in the technology (at least as it relates to accuracy of outputs). 𝗙𝗥𝗘𝗘 𝗘𝗡𝗧𝗘𝗥𝗣𝗥𝗜𝗦𝗘-𝗚𝗥𝗔𝗗𝗘 𝗧𝗢𝗢𝗟 Explore our GenAI Governance Framework [https://lnkd.in/gGjVrqiv] and learn how our Transparency, Accountability, and Continuous Improvement domain provides essential safeguards. Stay ahead in the AI game and safeguard your tech future! Connect with me, Jason Pikoos, to explore the full potential of our framework for your organization.
-
If your team stood up AI fast, I’d double-check who owns what before year-end. I’ve seen too many builds with no paper trail… and auditors won’t be kind. Here’s what I’m recommending to accounting and finance leaders right now: 1. Accounting owns it. If your team isn’t leading the team’s AI effort, you’re setting yourself up for trouble. IT may help build or deploy, but finance needs to steer. That’s the only way to ensure accuracy, context, and defensibility when questions come up later. 2. Centralize the setup. I’ve seen teams running five tools for five workflows with zero visibility across them. That creates blind spots and bloated risk. A centralized platform gives you one system of record with built-in controls and fewer surprises when the auditors ask, “Where’s the source?” 3. Lock in the audit trail. Every AI decision, handoff, and action should be logged and explainable. If you can’t trace what the agent did and why, it’s going to be a nightmare to justify the outcomes. Especially when financial data is involved. Think like an auditor before they arrive. 4. Fix it now, not during the year-end crunch. By December, everyone’s buried. If you wait until then to clean up any loose ends, it’ll either slip or get patched in a panic. Either way, that’s not ideal. Here's the reality: If ownership is murky, you're going to have a rough December. Better to lock in these principles now than explain yourself later.