I’ve had the chance to work across several #EnterpriseAI initiatives, especially those with human-computer interfaces. Common failures can be attributed broadly to poor design and user experience, disjointed workflows, slow convergence to quality answers, and slow response times. All exacerbated by high compute costs because of an under-engineered backend. Here are 10 principles that I’ve come to appreciate in designing #AI applications. What are your core principles?

1. DON’T UNDERESTIMATE THE VALUE OF GOOD #UX AND INTUITIVE WORKFLOWS
Design AI to fit how people already work. Don’t make users learn new patterns — embed AI in current business processes and gradually evolve the patterns as the workforce matures. This also builds institutional trust and lowers resistance to adoption.

2. START WITH EMBEDDING AI FEATURES IN EXISTING SYSTEMS/TOOLS
Integrate directly into existing operational systems (CRM, EMR, ERP, etc.) and applications. This minimizes friction, speeds up time-to-value, and reduces training overhead. Avoid standalone apps that add context-switching. Using AI should feel seamless and habit-forming. For example, surface AI-suggested next steps directly in Salesforce or Epic. Where possible, push AI results into existing collaboration tools like Teams.

3. CONVERGE TO ACCEPTABLE RESPONSES FAST
Most users are accustomed to publicly available AI like #ChatGPT, where they reach an acceptable answer quickly. Enterprise users expect parity or better — anything slower feels broken. Obsess over model quality, and fine-tune system prompts for the specific use case, function, and organization.

4. THINK ENTIRE WORK INSTEAD OF USE CASES
Don’t solve just a task - solve the entire function. For example, instead of resume screening, redesign the full talent acquisition journey with AI.

5. ENRICH CONTEXT AND DATA
Use external signals in addition to enterprise data to create better context for the response. For example: append LinkedIn information for a candidate when presenting insights to the recruiter.

6. CREATE SECURITY CONFIDENCE
Design for enterprise-grade data governance and security from the start. This means avoiding rogue AI applications and collaborating with IT. For example, offer centrally governed access to #LLMs through approved enterprise tools instead of letting teams go rogue with public endpoints.

7. IGNORE COSTS AT YOUR OWN PERIL
Design for compute costs, especially if the app has to scale. Start small, but plan for future cost.

8. INCLUDE EVALS
Define what “good” looks like and run evals continuously so you can compare different models and course-correct quickly.

9. DEFINE AND TRACK SUCCESS METRICS RIGOROUSLY
Set and measure quantifiable indicators: hours saved, hires avoided, process cycles reduced, adoption levels.

10. MARKET INTERNALLY
Keep promoting the success and adoption of the application internally. Sometimes driving enterprise adoption requires FOMO.

#DigitalTransformation #GenerativeAI #AIatScale #AIUX
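Principle 8 ("include evals") can be sketched in a few lines. This is a minimal, hedged Python sketch, not a production harness: the two model functions are hypothetical stubs standing in for real LLM endpoints, and the keyword-overlap scorer is the crudest possible definition of "good" — swap in your own golden set and scoring logic.

```python
# Minimal eval loop: score candidate models against a small golden set
# so regressions surface before users see them. Model functions are stubs.

def model_a(question: str) -> str:
    canned = {
        "What is our PTO policy?": "Employees accrue 20 days of PTO per year.",
        "Who approves travel expenses?": "Your direct manager approves travel expenses.",
    }
    return canned.get(question, "I don't know.")

def model_b(question: str) -> str:
    return "I don't know."

def keyword_score(answer: str, required_keywords: list[str]) -> float:
    """Fraction of required keywords present in the answer (crude but cheap)."""
    hits = sum(1 for kw in required_keywords if kw.lower() in answer.lower())
    return hits / len(required_keywords)

# The golden set encodes what "good" looks like for this use case.
GOLDEN_SET = [
    {"question": "What is our PTO policy?", "keywords": ["20 days", "PTO"]},
    {"question": "Who approves travel expenses?", "keywords": ["manager"]},
]

def run_eval(model) -> float:
    scores = [keyword_score(model(case["question"]), case["keywords"])
              for case in GOLDEN_SET]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    print(f"model_a: {run_eval(model_a):.2f}")  # 1.00 on this toy set
    print(f"model_b: {run_eval(model_b):.2f}")  # 0.00
```

Run on every model or prompt change; a dropping score is the course-correction signal the principle calls for.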
How to Combine AI and Human Expertise in Enterprise Technology
Summary
Combining AI and human expertise in enterprise technology means designing systems where artificial intelligence complements human decision-making, amplifies productivity, and respects the nuances of human judgment. This approach balances the speed and scale of AI with the creativity, empathy, and critical thinking that only humans provide.
- Embed AI into workflows: Integrate AI features into existing tools and processes instead of creating standalone systems, ensuring seamless adoption and minimizing disruptions.
- Balance decision-making: Use AI for data processing and pattern recognition while relying on human insights for context, judgment, and critical reasoning.
- Design for trust: Prioritize transparency by showing how AI reaches conclusions and maintaining a human-in-the-loop approach to manage sensitive decisions.
Harvard Business Review just found that executives using GenAI for stock forecasts made less accurate predictions. The study found that:
• Executives consulting ChatGPT raised their stock price estimates by ~$5.
• Those who discussed with peers lowered their estimates by ~$2.
• Both groups were too optimistic overall, but the AI group performed worse.

Why? Because GenAI encourages overconfidence. Executives trusted its confident tone and detail-rich analysis, even though it lacked real-time context or intuition. In contrast, peer discussions injected caution and a healthy fear of being wrong.

AI is a powerful resource. It can process massive amounts of data in seconds, spot patterns we’d otherwise miss, and automate manual workflows – freeing up finance teams to focus on strategic work. I don’t think the problem is AI. It’s how we use it. As finance leaders, it’s on us to ensure that we, and our teams, use it responsibly.

When I was a finance leader, I always asked for the financial model alongside the board slides. It was important to dig in and review the work, and to understand key drivers and assumptions, before sending the slides to the board. My advice is the same for finance leaders integrating AI into their day-to-day: lead with transparency and accountability.

𝟭/ 𝗔𝗜 𝘀𝗵𝗼𝘂𝗹𝗱 𝗯𝗲 𝗮 𝘀𝘂𝗽𝗲𝗿𝗽𝗼𝘄𝗲𝗿, 𝗻𝗼𝘁 𝗮𝗻 𝗼𝗿𝗮𝗰𝗹𝗲. AI should help you organize your thoughts and analyze data, not replace your reasoning. Ask it why it predicts what it does – and how it might be wrong.

𝟮/ 𝗖𝗼𝗺𝗯𝗶𝗻𝗲 𝗔𝗜 𝗶𝗻𝘀𝗶𝗴𝗵𝘁𝘀 𝘄𝗶𝘁𝗵 𝗵𝘂𝗺𝗮𝗻 𝗱𝗶𝘀𝗰𝘂𝘀𝘀𝗶𝗼𝗻. AI is fast and thorough. Peers bring critical thinking, lived experience, and institutional knowledge. Use both to avoid blind spots.

𝟯/ 𝗧𝗿𝘂𝘀𝘁, 𝗯𝘂𝘁 𝘃𝗲𝗿𝗶𝗳𝘆. Treat AI like a member of your team. Have it create a first draft, but always check its work, add your own conclusions, and never delegate final judgment.

𝟰/ 𝗥𝗲𝘃𝗲𝗿𝘀𝗲 𝗿𝗼𝗹𝗲𝘀 - 𝘂𝘀𝗲 𝗶𝘁 𝘁𝗼 𝗰𝗵𝗲𝗰𝗸 𝘆𝗼𝘂𝗿 𝘄𝗼𝗿𝗸. Use AI for what it does best: challenging assumptions, spotting patterns, and stress-testing your own conclusions – not dictating them.

We provide extensive AI within Campfire – for automations and reporting, and in our conversational interface, Ember. But we believe that AI should amplify human judgment, not override it. That’s why in everything we build, you can see the underlying data and logic behind AI outputs. Trust comes from transparency, and from knowing final judgment always rests with you.

How are you integrating AI into your finance workflows? Where has it helped vs. where has it fallen short? Would love to hear in the comments 👇
-
AI won't fix broken product decisions. But it can amplify good ones.

I met a product leader recently who had spent months using AI tools to build new product ideas, but still couldn't answer a simple question: "Which features should we prioritize next?"

This isn't uncommon. We're all overloaded by tools now, and it's common for product teams to have strong AI capabilities that don't translate into better decisions. After helping numerous product and UX leaders navigate this challenge, here's what separates success from failure:

𝟭. 𝗦𝘁𝗮𝗿𝘁 𝘄𝗶𝘁𝗵 𝘁𝗵𝗲 𝗱𝗲𝗰𝗶𝘀𝗶𝗼𝗻, 𝗻𝗼𝘁 𝘁𝗵𝗲 𝗱𝗮𝘁𝗮
Define which specific product decisions you need to improve first. A CPO I work with narrowed their focus to just user-onboarding decisions. This clarity made their AI implementation 3x more effective than their competitor's broader approach.

𝟮. 𝗖𝗿𝗲𝗮𝘁𝗲 𝗱𝗲𝗰𝗶𝘀𝗶𝗼𝗻 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸𝘀 𝗳𝗶𝗿𝘀𝘁
Document your current decision-making process before implementing AI. What criteria matter most? What trade-offs are acceptable? These guardrails ensure AI serves your product strategy rather than replacing critical thinking.

𝟯. 𝗠𝗮𝗶𝗻𝘁𝗮𝗶𝗻 𝘁𝗵𝗲 𝗵𝘂𝗺𝗮𝗻 𝗳𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝗹𝗼𝗼𝗽
The best product teams use AI to expand options, not narrow them. They still validate AI recommendations through direct customer conversations. AI can spot patterns but can't understand the "why" behind user behaviors.

𝟰. 𝗕𝘂𝗶𝗹𝗱 𝗔𝗜 𝗹𝗶𝘁𝗲𝗿𝗮𝗰𝘆 𝘀𝗲𝗹𝗲𝗰𝘁𝗶𝘃𝗲𝗹𝘆
Your entire team doesn't need to become AI experts. But product managers should understand enough to critically assess AI outputs. Focus training on interpretation skills, not just tool mechanics.

𝟱. 𝗔𝘂𝗴𝗺𝗲𝗻𝘁 𝗯𝗲𝗳𝗼𝗿𝗲 𝘆𝗼𝘂 𝗮𝘂𝘁𝗼𝗺𝗮𝘁𝗲
Instead of replacing human judgment, first use AI to enhance it. Look for places where your team is constrained by time or resources, not expertise.

Flexible consulting partnerships can be more effective than massive AI investments or new full-time hires, depending on your timeline, budget, and executive buy-in. The right external partner can help you integrate AI incrementally while preserving your team's core decision-making strengths.

What's your biggest challenge in integrating AI into product decisions? Has your team found the right balance?
-
Why I Keep Showing This Slide

Whether in an exec strategy session or a team review, you’ve probably seen this slide: Knowledge > Code. I use it because it captures something I’ve learned over 15+ years building in the Data and AI space: technology commoditizes; knowledge compounds.

We’ve all seen it: frameworks, models, vector DBs, orchestration layers - they keep getting cheaper, faster, easier to assemble. What doesn’t? Capturing and operationalizing the human judgment, policies, and domain edge cases that actually run your business. (This goes beyond IDP and RPA.)

A Real Example
Take something simple: extracting the “effective date” from contracts. You might think it’s the signature date, but that’s often wrong. One client sets it 3 days after signing. Another uses the last signature in multi-party deals. Get it wrong, and sure, your AI agent might generate the perfect summary slide or email, or update your ERP system - but with the wrong logic behind it.

My POV, Distilled

Knowledge > Code. The enduring moat isn’t your next agent framework - it’s how you encode domain logic, policies, exceptions, and organizational judgment so they can be reused and improved over time.

Human-in-the-loop by design. Confidence thresholds aren’t optional. Auto-approve the 80% you're sure about, escalate the 20% that matters - just like in insurance claims, underwriting, or loan approvals.

Systems that learn from correction. When someone manually fixes that “effective date,” log why. Was it a federal clause? A new client rule? That correction should make your system smarter the next time around.

Governance is the operating system. Without rule capture and audit trails, every AI deployment becomes a one-off, bespoke integration. That doesn’t scale, and it doesn’t build enterprise trust.

The Real Shift
We’re moving from technology as hero to knowledge as product. The companies that figure out how to systematically capture and compound their operational intelligence are the ones building 10x defensible value.

Oakie.ai was built to make sure the knowledge that runs your business doesn’t get lost in the automation wave. We focus on the core business processes that matter to your business, and we empower today’s knowledge workers, who will use AI agents in the near future. Your end user is not the AI agent; it is the human who supervises and operates the AI agent.

If you're wrestling with this same shift, ChatGPT Agent and I are all ears/eyes ;-)

"Behind every great AI Agent, there is a greater human."
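The "auto-approve the 80%, escalate the 20%" pattern and the correction log described above can be sketched together. This is an illustrative Python sketch only — the threshold value, `Extraction` fields, and `Router` names are assumptions, not any specific product's API.

```python
# Confidence-threshold routing with a human-in-the-loop escalation queue,
# plus a correction log so human overrides can be turned into rules later.
from dataclasses import dataclass, field

EXTRACTION_THRESHOLD = 0.90  # illustrative; tune per field and per risk level

@dataclass
class Extraction:
    field_name: str
    value: str
    confidence: float

@dataclass
class Router:
    approved: list = field(default_factory=list)
    review_queue: list = field(default_factory=list)
    corrections: list = field(default_factory=list)  # fuel for future rules

    def route(self, ex: Extraction) -> str:
        # High confidence: straight through. Low confidence: a human decides.
        if ex.confidence >= EXTRACTION_THRESHOLD:
            self.approved.append(ex)
            return "auto-approved"
        self.review_queue.append(ex)
        return "escalated"

    def record_correction(self, ex: Extraction, corrected: str, reason: str) -> None:
        # Log *why* a human overrode the model, so the rule can be encoded.
        self.corrections.append({"field": ex.field_name, "was": ex.value,
                                 "now": corrected, "reason": reason})

router = Router()
print(router.route(Extraction("effective_date", "2024-03-01", 0.97)))  # auto-approved
print(router.route(Extraction("effective_date", "2024-03-04", 0.62)))  # escalated
router.record_correction(router.review_queue[0], "2024-03-07",
                         "client sets effective date 3 days after signing")
```

The `reason` string is the knowledge being captured: each logged override is a candidate rule for the next version of the extractor.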
-
Yesterday, I posted a conversation between two colleagues - we're calling them Warren and Jamie - about the evolution of CX and AI integration.

Warren argued that the emphasis on automation and efficiency is making customer interactions more impersonal. His concern is valid. In contexts where customer experience benefits significantly from human sensitivity and understanding — areas like complex customer service issues or emotionally charged situations — it makes complete sense. Warren's perspective underscores a critical challenge: ensuring that the drive for efficiency doesn't erode the quality of human interactions that customers value.

On the other side of the table, Jamie countered by highlighting the potential of AI and technology to enhance and personalize the customer experience. His argument was grounded in the belief that AI can augment human capabilities and allow for personalization at scale - a key factor as businesses grow, or look for growth, and customer bases diversify. Jamie suggested that AI can handle routine tasks, thereby freeing up humans to focus on interactions that require empathy and deep understanding. This would, potentially, enhance the quality of service where it truly matters.

Moreover, Jamie believes that AI can increase the surface area for frontline staff to be more empathetic and focus on the customer. It does this by doing the work of the person on the front lines, delivering it to them in real time and in context, so they can focus on the customer. You see this in whisper coaching technology, for example.

My view at the end of the day? After reflecting on this debate, both perspectives are essential. Why? They each highlight the need for a balanced approach in integrating technology with human elements in CX.
So if they're both right, then the optimal strategy involves a combination of both views: leveraging technology to handle routine tasks and data-driven personalization, while reserving human expertise for areas that require empathy, judgment, and deep interpersonal skills.

PS - I was Jamie in that original conversation. #customerexperience #personalization #artificialintelligence #technology #future
-
In a world where access to powerful AI is increasingly democratized, the differentiator won’t be who has AI, but who knows how to direct it. The ability to ask the right question, frame the contextual scenario, or steer the AI in a nuanced direction is a critical skill that’s strategic, creative, and ironically human.

My engineering education taught me to optimize systems with known variables and predictable theorems. But working with AI requires a fundamentally different cognitive skill: optimizing for unknown possibilities. We're not just giving instructions anymore; we're co-creating with an intelligence that can unlock potential.

What separates AI power users from everyone else is that they've learned to think in questions they've never asked before. Most people use AI like a better search engine or a faster typist. They ask for what they already know they want. But the real leverage comes from using AI to challenge your assumptions, synthesize across domains you'd never connect, and surface insights that weren't on your original agenda.

Consider the difference between these approaches:
- "Write a marketing plan for our product" (optimization for known variables)
- "I'm seeing unexpected churn in our enterprise segment. Act as a customer success strategist, behavioral economist, and product analyst. What are three non-obvious reasons this might be happening that our internal team would miss?" (optimization for unknown possibilities)

The second approach doesn't just get you better output; it gets you output that can shift your entire strategic direction. AI needs inputs that are specific rather than vague, that provide context, guide output formats, and expand our thinking.

This isn't just about prompt engineering; it’s about developing collaborative intelligence - the ability to use AI not as a tool, but as a thinking partner that expands your cognitive range. The companies and people who master this won't just have AI working for them.
They'll have AI thinking with them in ways that make them fundamentally more capable than their competition. What are your pro-tips for effective AI prompts? #AppliedAI #CollaborativeIntelligence #FutureofWork
-
How can we ensure the empathetic aspect of customer service doesn't get lost in the AI mix? I was asked this question in a recent podcast. My answer? It's all about balance. Here’s my simple formula:

🎯 Enhance the customer experience: Empower your support professionals with AI tools. This allows them to focus more on the quality of communication they send back to the customer.

🎯 Engage human-to-human: With the valuable time saved by AI, your team can be intentional about their responses, ensuring they are as human and relatable as possible.

🎯 Prioritize valuable tasks: Let's not waste human potential on repetitive tasks that a computer can handle. Instead, let's focus on what humans do best – empathize, understand, and connect.

The goal isn't to replace humans with AI but to enhance our abilities and improve our jobs.
-
𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜 𝗱𝗼𝗲𝘀𝗻’𝘁 𝗿𝗲𝗽𝗹𝗮𝗰𝗲 𝗲𝘅𝗽𝗲𝗿𝘁𝘀; 𝗶𝘁 𝗮𝗺𝗽𝗹𝗶𝗳𝗶𝗲𝘀 𝘁𝗵𝗲𝗶𝗿 𝗲𝘅𝗽𝗲𝗿𝘁𝗶𝘀𝗲!

👉 It’s about harnessing AI to enhance our human capabilities, not replace them.

🙇♂️ Let me walk you through my realization. As a healthcare practitioner deeply involved in integrating AI into our systems, I've learned it's not about tech for tech's sake. It's about the synergy between human intelligence and artificial intelligence. Here’s how my perspective evolved after deploying Generative AI in various sectors:

𝐇𝐞𝐚𝐥𝐭𝐡𝐜𝐚𝐫𝐞: "I 𝐧𝐞𝐞𝐝 AI to analyze complex patient data for personalized care." - But first, we must understand the unique healthcare challenges and data intricacies.

𝐄𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧: "I 𝐧𝐞𝐞𝐝 AI to tailor learning to each student's needs." - Yet identifying those needs requires human insight and empathy that AI alone can't provide.

𝐀𝐫𝐭 & 𝐃𝐞𝐬𝐢𝐠𝐧: "I 𝐧𝐞𝐞𝐝 AI to push creative boundaries." - And yet, the creative spark starts with a human idea.

𝐁𝐮𝐬𝐢𝐧𝐞𝐬𝐬: "I 𝐧𝐞𝐞𝐝 AI for precise market predictions." - But truly understanding market nuances comes from human experience and intuition.

The Jobs-to-be-Done are complex, and time is precious. We must focus on:
✅ Integrating AI into human-led processes.
☑ Using AI to complement, not replace, human expertise.
✅ Combining AI-generated data with human understanding for decision-making.
☑ Ensuring AI tools are user-friendly for non-tech experts.

Finding the right balance is key:
A. AI tools must be intuitive and supportive.
B. They require human expertise to interpret and apply their output effectively.
C. They must fit into the existing culture and workflows.

For instance, using AI to enhance patient care requires clinicians to interpret data with a human touch. Or in education, where AI informs but teachers inspire.

𝐌𝐚𝐭𝐜𝐡𝐢𝐧𝐠 𝐀𝐈 𝐰𝐢𝐭𝐡 𝐭𝐡𝐞 𝐫𝐢𝐠𝐡𝐭 𝐫𝐨𝐥𝐞𝐬 is critical. And that’s where I come in. 👋 I'm Umer Khan, here to help you navigate the integration of Generative AI into your world, ensuring it's done with human insight at the forefront.
Let's collaborate to create solutions where technology meets humanity. 👇 Feel free to reach out for a human-AI strategy session. #GenerativeAI #HealthcareInnovation #PersonalizedEducation #CreativeSynergy #BusinessIntelligence
-
I brainstormed a list of things I ask myself about when designing for Human-AI interaction and GenAI experiences. What's on your list?

• Does this person know they are interacting with AI?
• Do they need to know?
• What happens to the user’s data?
• Is that obvious?
• How would someone do this if a human was providing the service?
• What parts of this experience are improved through human interaction?
• What parts of this experience are improved through AI interaction?
• What context does someone have going into this interaction?
• What expectations?
• Do they have a specific goal in mind?
• If they do, how hard is it for them to convey that goal to the AI?
• If they don't have a goal, what support do they need to get started?
• How do I avoid the blank canvas effect?
• How do I ensure that any hints I provide on the canvas are useful? Relevant? Do those mean the same thing in this context?
• What is the role of the AI in this moment?
• What is its tone and personality?
• How do I think someone will receive that tone and personality?
• What does the user expect to do next?
• Can the AI proactively anticipate this?
• What happens if the AI returns bad information?
• How can we reduce the number of steps/actions the person must take?
• How can we help the person trace their footprints through an interaction?
• If the interaction starts to go down a weird path, how does the person reset?
• How can someone understand where the AI's responses are coming from?
• What if the user wants to have it reference other things instead?
• Is AI necessary in this moment? If not, why am I including it? If yes, how will I be sure?
• What business incentive or goal does this relate to?
• What human need does this relate to?
• Are we putting the human need before the business need?
• What would this experience look like if AI wasn't in the mix?
• What model are we using?
• What biases might the model introduce?
• How can the experience counteract that?
• What additional data and training does the AI have access to?
• How does that change for a new user? For an established user? By the user's location? Industry? Role?
• What content modalities make sense here?
• Should this be multi-modal?
• Am I being ambitious enough against the model's capabilities?
• Am I expecting too much of the users?
• How can I make this more accessible?
• How can I make this more transparent?
• How can I make this simpler? Easier? More obvious? More discoverable?
• How can I make this more adaptive? More personalized?
• What if I'm wrong?

------------
♻️ Repost if this is helpful
💬 Comment with your thoughts
💖 Follow if you find it useful
Visit shapeofai.substack.com and subscribe!

#artificialintelligence #ai #productdesign #aiux #uxdesign