I’ve had the chance to work across several #EnterpriseAI initiatives esp. those with human computer interfaces. Common failures can be attributed broadly to bad design/experience, disjointed workflows, not getting to quality answers quickly, and slow response time. All exacerbated by high compute costs because of an under-engineered backend. Here are 10 principles that I’ve come to appreciate in designing #AI applications. What are your core principles? 1. DON’T UNDERESTIMATE THE VALUE OF GOOD #UX AND INTUITIVE WORKFLOWS Design AI to fit how people already work. Don’t make users learn new patterns — embed AI in current business processes and gradually evolve the patterns as the workforce matures. This also builds institutional trust and lowers resistance to adoption. 2. START WITH EMBEDDING AI FEATURES IN EXISTING SYSTEMS/TOOLS Integrate directly into existing operational systems (CRM, EMR, ERP, etc.) and applications. This minimizes friction, speeds up time-to-value, and reduces training overhead. Avoid standalone apps that add context-switching or friction. Using AI should feel seamless and habit-forming. For example, surface AI-suggested next steps directly in Salesforce or Epic. Where possible push AI results into existing collaboration tools like Teams. 3. CONVERGE TO ACCEPTABLE RESPONSES FAST Most users have gotten used to publicly available AI like #ChatGPT where they can get to an acceptable answer quickly. Enterprise users expect parity or better — anything slower feels broken. Obsess over model quality, fine-tune system prompts for the specific use case, function, and organization. 4. THINK ENTIRE WORK INSTEAD OF USE CASES Don’t solve just a task - solve the entire function. For example, instead of resume screening, redesign the full talent acquisition journey with AI. 5. ENRICH CONTEXT AND DATA Use external signals in addition to enterprise data to create better context for the response. For example: append LinkedIn information for a candidate when presenting insights to the recruiter. 6. CREATE SECURITY CONFIDENCE Design for enterprise-grade data governance and security from the start. This means avoiding rogue AI applications and collaborating with IT. For example, offer centrally governed access to #LLMs through approved enterprise tools instead of letting teams go rogue with public endpoints. 7. IGNORE COSTS AT YOUR OWN PERIL Design for compute costs esp. if app has to scale. Start small but defend for future-cost. 8. INCLUDE EVALS Define what “good” looks like and run evals continuously so you can compare against different models and course-correct quickly. 9. DEFINE AND TRACK SUCCESS METRICS RIGOROUSLY Set and measure quantifiable indicators: hours saved, people not hired, process cycles reduced, adoption levels. 10. MARKET INTERNALLY Keep promoting the success and adoption of the application internally. Sometimes driving enterprise adoption requires FOMO. #DigitalTransformation #GenerativeAI #AIatScale #AIUX
Addressing Trust Deficit in the AI Industry
Explore top LinkedIn content from expert professionals.
Summary
Building trust in the AI industry is critical for its adoption and success. Addressing the "trust deficit" requires creating systems that prioritize security, transparency, and reliability while integrating seamlessly into existing workflows.
- Focus on transparency: Clearly document and communicate the processes behind your AI models, including data sources, algorithms, and governance measures, to build confidence with users and stakeholders.
- Implement robust safeguards: Design systems with fail-safe mechanisms, secure data governance, and clear escalation paths to prevent misuse and ensure accountability.
- Embed AI thoughtfully: Ensure AI solutions integrate into existing workflows and tools without creating friction, making their adoption seamless for users.
-
-
This entrepreneur, on the heels of ChatGPT Mania, launched a military-grade solution to bring SLMs to enterprises: While working at Hugging Face, Mark McQuade encountered challenges helping enterprise customers adopt GenAI. Some companies resisted closed-source AI APIs due to a lack of transparency. Meanwhile, they avoided open source models over security concerns. Mark realized that overcoming a trust deficit was the primary obstacle for enterprise GenAI adoption. Inspired to find a solution, he teamed up with Jacob Salowetz and Brian Benedict to build Small Specialized Language Models (SLMs). Mark, Jacob, and Brian launched Arcee backed by $5.5M in early funding. What makes Arcee stand out is its ability to train, deploy, and monitor GenAI models within a customer’s own cloud environment. This ensures data privacy while granting full model ownership. I got to know Brian while he was working at another Flybridge portco a few years back. He had a great understanding of enterprise sales and what drives leaders at Fortune 1000 companies. Arcee allows companies to host models in their Virtual Private Cloud from pre-training to post-development, ensuring that the data never leaves the organization. These models are more secure and can be up to 50% less expensive to train. The greatest advantage is that this reduction in size and cost does not come at the expense of reduced performance. They can be even more effective since they are tailored to a particular need. Arcee has a US patent model which showed a 50% improvement over baseline models. Acree gives companies higher ownership, control, and customization over their models, and avoids vendor lock-ins. We expect this will massively drive enterprise adoption, which has been lagging in recent years. My firm Flybridge participated in their seed round given three compelling factors: 1. Massive growth projected for enterprise AI spend 2. Team with clear understanding of market needs 3. Strong solution addressing gap in market Key lessons from Acree's story: • Overcoming the trust deficit is the primary barrier for enterprise GenAI adoption. Arcee directly addresses security concerns through its encrypted training and deployment system. • Talent with intimate market knowledge is invaluable when building solutions tailored to industry needs. Arcee's founders have the expertise to create technology addressing enterprises' pain points. • First mover advantage will go to GenAI platforms securing significant funding upfront. With $5.5 million raised already, Arcee can expand its workforce to seize the expansive market opportunity.
-
We keep talking about model accuracy. But the real currency in AI systems is trust. Not just “do I trust the model output?” But: • Do I trust the data pipeline that fed it? • Do I trust the agent’s behavior across edge cases? • Do I trust the humans who labeled the training data? • Do I trust the update cycle not to break downstream dependencies? • Do I trust the org to intervene when things go wrong? In the enterprise, trust isn’t a feeling. It’s a systems property. It lives in audit logs, versioning protocols, human-in-the-loop workflows, escalation playbooks, and update governance. But here’s the challenge: Most AI systems today don’t earn trust. They borrow it. They inherit it from the badge of a brand, the gloss of a UI, the silence of users who don’t know how to question a prediction. Until trust fails. • When the AI outputs toxic content. • When an autonomous agent nukes an inbox or ignores a critical SLA. • When a board discovers that explainability was just a PowerPoint slide. Then you realize: Trust wasn’t designed into the system. It was implied. Assumed. Deferred. Good AI engineering isn’t just about “shipping the model.” It’s about engineering trust boundaries that don’t collapse under pressure. And that means: → Failover, not just fine-tuning. → Safeguards, not just sandboxing. → Explainability that holds up in court, not just demos. → Escalation paths designed like critical infrastructure, not Jira tickets. We don’t need to fear AI. We need to design for trust like we’re designing for failure. Because we are. Where are you seeing trust gaps in your AI stack today? Let’s move the conversation beyond prompts and toward architecture.
-
Need to build trust as an AI-powered company? There is a lot of hype - and FUD. But just as managing your own supply chain to ensure it is secure and compliant is vital, companies using LLMs as a core part of their business proposition will need to reassure their own customers about their governance program. Taking a proactive approach is important not just from a security perspective, but projecting an image of confidence can help you to close deals more effectively. Some key steps you can take involve: 1/ Documenting an internal AI security policy. 2/ Launching a coordinated vulnerability disclosure or even bug bounty program to incentivize security researchers to inspect your LLMs for flaws. 3/ Building and populating a Trust Vault to allow for customer self-service of security-related inquiries. 4/ Proactively sharing methods through which you implement the best practices like NIST’s AI Risk Management Framework specifically for your company and its products. Customers are going to be asking a lot of hard questions about AI security considerations, so preparation is key. Having an effective trust and security program - tailored to incorporate AI considerations - can strengthen both these relationships and your underlying security posture.
-
I was interviewed at length for today's The Wall Street Journal article on what exactly went so wrong with Grok. Here's what's critical for any leader considering enterprise-grade AI: Great article by Steve Rosenbush breaking down exactly how AI safety can fail, and why raw capability isn't everything. AI tools need to be trusted by enterprises, by parents, by all of us. Especially as we enter the age of agents, we're looking at tools that won't just answer offensively, they'll take action as well. That's when things really get out of hand. ++++++++++ WHAT WENT WRONG? From the article: "So while the risk isn't unique to Grok, Grok's design choices, real-time access to a chaotic source, combined with reduced internal safeguards, made it much more vulnerable," Grennan said. In other words, this was avoidable. Grok was set up to be "extremely skeptical" and not trust mainstream sources. But when it searched the internet for answers, it couldn't tell the difference between legitimate information and harmful/offensive content like the "MechaHitler" meme. It treated everything it found online as equally trustworthy. This highlights a broader issue: Not all LLMs are created equal, because getting guardrails right is hard. Most leading chatbots (by OpenAI, Google, Microsoft, Anthropic) do NOT have real-time access to social media precisely because of these risks, and they use filtering systems to screen content before the model ever sees it. +++++++++++ WHAT DO LEADERS NEED TO KNOW? 1. Ask about prompt hierarchies in vendor evaluations. Your AI provider should clearly explain how they prioritize different sources of information. System prompts (core safety rules) must override everything else, especially content pulled from the internet. If they can't explain this clearly, that's a red flag. 2. Demand transparency on access controls. Understand exactly what your AI system can read versus what it can actually do. Insist on read-only access for sensitive data and require human approval for any actions that could impact your business operations. 3. Don't outsource responsibility entirely. While you leaders aren't building the AI yourselves, you still own the risk. Establish clear governance around data quality, ongoing monitoring, and incident response. Ask hard questions about training data sources and ongoing safety measures. Most importantly? Get fluent. If you understand how LLMs work, even at a basic level, these incidents will be easier to guard against. Thanks again to Steve Rosenbush for the great article! Link to article in the comments! +++++++++ UPSKILL YOUR ORGANIZATION: When your organization is ready to create an AI-powered culture—not just add tools—AI Mindset can help. We drive behavioral transformation at scale through a powerful new digital course and enterprise partnership. DM me, or check out our website.