AI won't fix broken product decisions. But it can amplify good ones.

I met a product leader recently who had spent months using AI tools to build new product ideas, but still couldn't answer a simple question: "Which features should we prioritize next?"

This isn't uncommon. We're all overloaded by tools now, and it's common for product teams to have strong AI capabilities that don't translate into better decisions. After helping numerous product and UX leaders navigate this challenge, here's what separates success from failure:

𝟭. 𝗦𝘁𝗮𝗿𝘁 𝘄𝗶𝘁𝗵 𝘁𝗵𝗲 𝗱𝗲𝗰𝗶𝘀𝗶𝗼𝗻, 𝗻𝗼𝘁 𝘁𝗵𝗲 𝗱𝗮𝘁𝗮
Define which specific product decisions you need to improve first. A CPO I work with narrowed their focus to just user onboarding decisions. This clarity made their AI implementation 3x more effective than their competitor's broader approach.

𝟮. 𝗖𝗿𝗲𝗮𝘁𝗲 𝗱𝗲𝗰𝗶𝘀𝗶𝗼𝗻 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸𝘀 𝗳𝗶𝗿𝘀𝘁
Document your current decision-making process before implementing AI. What criteria matter most? What trade-offs are acceptable? These guardrails ensure AI serves your product strategy rather than replacing critical thinking.

𝟯. 𝗠𝗮𝗶𝗻𝘁𝗮𝗶𝗻 𝘁𝗵𝗲 𝗵𝘂𝗺𝗮𝗻 𝗳𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝗹𝗼𝗼𝗽
The best product teams use AI to expand options, not narrow them. They still validate AI recommendations through direct customer conversations. AI can spot patterns, but it can't understand the "why" behind user behaviors.

𝟰. 𝗕𝘂𝗶𝗹𝗱 𝗔𝗜 𝗹𝗶𝘁𝗲𝗿𝗮𝗰𝘆 𝘀𝗲𝗹𝗲𝗰𝘁𝗶𝘃𝗲𝗹𝘆
Your entire team doesn't need to become AI experts. But product managers should understand enough to critically assess AI outputs. Focus training on interpretation skills, not just tool mechanics.

𝟱. 𝗔𝘂𝗴𝗺𝗲𝗻𝘁 𝗯𝗲𝗳𝗼𝗿𝗲 𝘆𝗼𝘂 𝗮𝘂𝘁𝗼𝗺𝗮𝘁𝗲
Instead of replacing human judgment, first use AI to enhance it. Look for places where your team is constrained by time or resources, not expertise. Flexible consulting partnerships can be more effective than massive AI investments or new full-time hires; which route is right depends on your timeline, budget, and executive buy-in.
The right external partner can help you integrate AI incrementally while preserving your team's core decision-making strengths. What's your biggest challenge in integrating AI into product decisions? Has your team found the right balance?
Tips for Balancing AI in Product Features
Summary
Balancing AI in product features means ensuring artificial intelligence complements human capabilities and aligns with both user needs and business objectives. The goal is to integrate AI thoughtfully, supporting decision-making and improving user experiences without overshadowing human judgment or empathy.
- Start with clarity: Define the specific product decisions or tasks where AI can provide value, ensuring every implementation has a clear purpose and benefits the overall strategy.
- Keep human oversight: Use AI to expand possibilities but validate its outputs through customer feedback or human expertise to maintain trust and relevance.
- Educate selectively: Focus resources on teaching product teams how to understand and critically assess AI’s outputs rather than requiring everyone to be technical experts.
-
Claire Vo and Joff Redfern are on the leading edge of 'AI for Product' globally.

I spoke to hundreds of CPOs & PMs and realised they want to be more aggressive on AI and had so many unanswered questions. I couldn't think of two better product leaders to join us at Sauce's 'AI for Product' fireside in SF to answer them.

Claire Vo is the CPO of LaunchDarkly ($3B), former CPO at Color & Optimizely, and building the PM AI CoPilot, ChatPRD. Joff Redfern is a Partner at Menlo Ventures, former CPO at Atlassian ($42B) and former VP Product at LinkedIn & Yahoo.

👇 Here are my top 6 lessons (full video in comments)

1. Upskill your product teams to be 'GenAI fluent'
Claire: "Be aggressive with the tasks your teams offload to AI and normalise it as an executive. For example, on everything I generate with an AI tool, I put a prompt at the bottom and attribute it with an author to say 'I'm the Chief Product Officer, I use AI tools and you should too'. Empower your team with budget and training."

2. Embrace paradigm shifts by starting with a small team
Joff: "At LinkedIn we took a small team and said 'this is our mobile team.' Then this gets moved to a platform team… then eventually it gets diffused throughout every team in the organisation. I see the same pattern emerging with AI. At Atlassian it started with a small team doing a spike which focused on learning and experimentation."

3. AI enables you to achieve more with less
Claire: "The cost of building is collapsing and the speed of building is accelerating. Ask yourself: am I shipping as fast as I can? I'm building ChatPRD with 1.5 employees; I'm actually 0.5 of an employee, as I do this at nights outside my day job as a CPO. I'm coding at nights and doing support after 7pm… I truly believe there will be a 1-person, $5 billion company."

4. Reimagine workflows from the ground up
Joff: "Many startups are trying to speed up one step of a workflow. But I think the better answer is to step back and ask: now that the marginal cost of reasoning with AI is trending to zero, what does the world look like if we were to reimagine what we're doing today from the ground up?"

5. Use AI hack weeks to get leadership bought in
Claire: "As the CPO of Color I'd run AI hack weeks every 6 weeks. I gave our exec team pre-reading on how AI works, then they had to automate something (e.g. generate a product marketing video our CCO had previously been waiting 6 months for). This opened their eyes to what's possible; they understood the impact of AI."

6. Generalists will win in the AI world
Joff: "9 years ago I wrote an article called the 'PM craft triangle', which says a PM can be a General Manager, Scientist or Artist. It was difficult to be the best at all three, so I advised going deep on one of these corners. But now AI allows you to be at the centre of the triangle and smooth your weaker corners with CoPilots. Generalist skill sets will do well in the AI world."

Full video in comments 👇
-
I brainstormed a list of things I ask myself when designing for Human-AI interaction and GenAI experiences. What's on your list?

• Does this person know they are interacting with AI?
• Do they need to know?
• What happens to the user's data?
• Is that obvious?
• How would someone do this if a human was providing the service?
• What parts of this experience are improved through human interaction?
• What parts of this experience are improved through AI interaction?
• What context does someone have going into this interaction?
• What expectations?
• Do they have a specific goal in mind?
• If they do, how hard is it for them to convey that goal to the AI?
• If they don't have a goal, what support do they need to get started?
• How do I avoid the blank canvas effect?
• How do I ensure that any hints I provide on the canvas are useful? Relevant?
• Do those mean the same thing in this context?
• What is the role of the AI in this moment?
• What is its tone and personality?
• How do I think someone will receive that tone and personality?
• What does the user expect to do next?
• Can the AI proactively anticipate this?
• What happens if the AI returns bad information?
• How can we reduce the number of steps/actions the person must take?
• How can we help the person trace their footprints through an interaction?
• If the interaction starts to go down a weird path, how does the person reset?
• How can someone understand where the AI's responses are coming from?
• What if the user wants to have it reference other things instead?
• Is AI necessary in this moment?
• If not, why am I including it?
• If yes, how will I be sure?
• What business incentive or goal does this relate to?
• What human need does this relate to?
• Are we putting the human need before the business need?
• What would this experience look like if AI wasn't in the mix?
• What model are we using?
• What biases might the model introduce?
• How can the experience counteract that?
• What additional data and training does the AI have access to?
• How does that change for a new user?
• How does that change for an established user?
• How does that change by the user's location? Industry? Role?
• What content modalities make sense here?
• Should this be multi-modal?
• Am I being ambitious enough against the model's capabilities?
• Am I expecting too much of the users?
• How can I make this more accessible?
• How can I make this more transparent?
• How can I make this simpler?
• How can I make this easier?
• How can I make this more obvious?
• How can I make this more discoverable?
• How can I make this more adaptive?
• How can I make this more personalized?
• What if I'm wrong?

------------
♻️ Repost if this is helpful
💬 Comment with your thoughts
💖 Follow if you find it useful
Visit shapeofai.substack.com and subscribe!

#artificialintelligence #ai #productdesign #aiux #uxdesign
-
Yesterday, I posted a conversation between two colleagues we're calling Warren and Jamie, about the evolution of CX and AI integration.

Warren argued that the emphasis on automation and efficiency is making customer interactions more impersonal. His concern is valid, and in contexts where customer experience benefits significantly from human sensitivity and understanding, like complex customer service issues or emotionally charged situations, it makes complete sense. Warren's perspective underscores a critical challenge: ensuring that the drive for efficiency doesn't erode the quality of human interactions that customers value.

On the other side of the table, Jamie countered by highlighting the potential of AI and technology to enhance and personalize the customer experience. His argument was grounded in the belief that AI can augment human capabilities and allow for personalization at scale, a key factor as businesses grow (or look for growth) and customer bases diversify. Jamie suggested that AI can handle routine tasks, freeing up humans to focus on interactions that require empathy and deep understanding. This would, potentially, enhance the quality of service where it truly matters.

Moreover, Jamie believes that AI can increase the surface area for frontline staff to be more empathetic and focus on the customer. It does this by doing the work of the person on the front lines, delivering it to them in real time and in context, so they can focus on the customer. You see this in whisper-coaching technology, for example.

My view at the end of the day? After reflecting on this debate, both perspectives are essential. Why? They each highlight the need for a balanced approach to integrating technology with human elements in CX.

So if they're both right, the optimal strategy combines both views: leveraging technology to handle routine tasks and data-driven personalization, while reserving human expertise for areas that require empathy, judgement, and deep interpersonal skills.

PS - I was Jamie in that original conversation.

#customerexperience #personalization #artificialintelligence #technology #future
-
𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜 𝗱𝗼𝗲𝘀𝗻’𝘁 𝗿𝗲𝗽𝗹𝗮𝗰𝗲 𝗲𝘅𝗽𝗲𝗿𝘁𝘀; 𝗶𝘁 𝗮𝗺𝗽𝗹𝗶𝗳𝗶𝗲𝘀 𝘁𝗵𝗲𝗶𝗿 𝗲𝘅𝗽𝗲𝗿𝘁𝗶𝘀𝗲!

👉 It’s about harnessing AI to enhance our human capabilities, not replace them.

🙇♂️ Let me walk you through my realization. As a healthcare practitioner deeply involved in integrating AI into our systems, I've learned it's not about tech for tech's sake. It's about the synergy between human intelligence and artificial intelligence. Here’s how my perspective evolved after deploying Generative AI in various sectors:

𝐇𝐞𝐚𝐥𝐭𝐡𝐜𝐚𝐫𝐞: "I 𝐧𝐞𝐞𝐝 AI to analyze complex patient data for personalized care." - But first, we must understand the unique healthcare challenges and data intricacies.

𝐄𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧: "I 𝐧𝐞𝐞𝐝 AI to tailor learning to each student's needs." - Yet identifying those needs requires human insight and empathy that AI alone can't provide.

𝐀𝐫𝐭 & 𝐃𝐞𝐬𝐢𝐠𝐧: "I 𝐧𝐞𝐞𝐝 AI to push creative boundaries." - And yet, the creative spark starts with a human idea.

𝐁𝐮𝐬𝐢𝐧𝐞𝐬𝐬: "I 𝐧𝐞𝐞𝐝 AI for precise market predictions." - But truly understanding market nuances comes from human experience and intuition.

The Jobs-to-be-Done are complex, and time is precious. We must focus on:
✅ Integrating AI into human-led processes.
✅ Using AI to complement, not replace, human expertise.
✅ Combining AI-generated data with human understanding for decision-making.
✅ Ensuring AI tools are user-friendly for non-tech experts.

Finding the right balance is key:
A. AI tools must be intuitive and supportive.
B. They require human expertise to interpret and apply their output effectively.
C. They must fit into the existing culture and workflows.

For instance, using AI to enhance patient care requires clinicians to interpret data with a human touch. Or in education, where AI informs but teachers inspire.

𝐌𝐚𝐭𝐜𝐡𝐢𝐧𝐠 𝐀𝐈 𝐰𝐢𝐭𝐡 𝐭𝐡𝐞 𝐫𝐢𝐠𝐡𝐭 𝐫𝐨𝐥𝐞𝐬 is critical. And that’s where I come in.

👋 I'm Umer Khan, here to help you navigate the integration of Generative AI into your world, ensuring it's done with human insight at the forefront.
Let's collaborate to create solutions where technology meets humanity. 👇 Feel free to reach out for a human-AI strategy session. #GenerativeAI #HealthcareInnovation #PersonalizedEducation #CreativeSynergy #BusinessIntelligence
-
Staying in the A.I. race and picking a path to win: the past week has once again seen breakneck announcements of A.I. capabilities. The major players in the tech sector are showing their latest, from #Microsoft #CoPilot's general release date, to #Salesforce #EinsteinAI, to #Google #Bard's new integrations, #Amazon's $4B investment in #Anthropic, and #OpenAI's just-announced audio and video 'prompt' capabilities. All are influencing business choices and transforming corporate strategies and models in a race for new leadership.

The fundamental question I ask (and get asked) often: "How can I leverage this to my business's benefit, ethically?" In short: how and where are the sweet spots for use, and why? The answers are as diverse as the industries and models they rush to replace. From transportation and route optimization, to airline rewards program scenarios (try that as a prompt to come up with a new model for both flyers and providers), to medical claims processing: all are data-rich (and complex) business models, and each will have a different set of scenarios, data, conditions and outcomes.

So what are the fundamentals? My best answers (so far):

- Know your business / customer and what you are fundamentally providing to them (a service, a product, a solution, an experience). Decide if you want to enhance, replace, or transform that with more insights, anticipated for the customer. While this is almost too fundamental to mention, A.I. helps you expedite what you do already.

- Identify the key data elements, and the sources of that data, so you can continuously check them, educate the A.I. on them, and verify the fidelity of that information. Bad data leads to hallucinations, which result in bad (and risk-filled) outcomes.

- Focus on outcomes, and the evolution of those outcomes, for your customers (internal or otherwise). If you want a customer to experience or utilize your product, or engage your service differently, focus on the outcome scenarios and build guardrails within the A.I. (LLM or otherwise) to limit it to those.

- Commit to a journey from exploration to commercialization (and rinse and repeat). The process of design thinking is a good conceptual framework for approaching A.I. business transformations. Do not let 'we have always done it that way' or 'budget allows XYZ' scenarios be the limits of potential change. New ways of serving customers (and revenue potential / margin enhancement) come as you set aside the historic. This is not a one-and-done journey; it is continuous (if you allow it to be). A.I. speeds up the journey from ideation to commerce, but does not replace the important steps in between.

- Use A.I. as a time machine: probably the best advice I heard lately was to think of your A.I. capabilities as a time-saving solution, helping team members focus on the most important things customers need: insights, ideas, and outcomes.

(photo from Dall-E, produced this morning via prompt)
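The guardrail idea in the third fundamental above, limiting the AI to a defined set of outcome scenarios, can be sketched as a simple allow-list check on the model's output. This is a minimal illustration under my own assumptions; the outcome names and the `apply_guardrail` function are hypothetical, not from the post.

```python
# Hypothetical sketch of an outcome guardrail: the LLM is asked to propose
# one of a fixed set of outcome scenarios, and anything outside that set
# falls back to a safe default instead of reaching the customer.

ALLOWED_OUTCOMES = {"rebook_flight", "issue_refund", "escalate_to_agent"}
SAFE_FALLBACK = "escalate_to_agent"

def apply_guardrail(model_output: str) -> str:
    """Accept the model's proposed outcome only if it is on the allow-list."""
    outcome = model_output.strip().lower()
    return outcome if outcome in ALLOWED_OUTCOMES else SAFE_FALLBACK

print(apply_guardrail("issue_refund"))       # on the allow-list: passes through
print(apply_guardrail("offer a free poem"))  # out of scope: falls back
```

The same pattern generalizes: whatever form your guardrail takes (allow-lists, schema validation, confidence floors), the point is that the business defines the acceptable outcomes first, and the AI is constrained to them.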
-
Balancing Human Creativity & AI Efficiency: 6 Tips for Entrepreneurs...

How I develop cutting-edge business strategies daily:

1. Encourage Wild Ideas: Don't let your ego stop your creative potential. Push yourself to think beyond what is "acceptable." Having said that... our brains can only go so far. AI then takes these ideas, no matter how wild, and explores their potential, connecting dots I didn't even see.

2. Overcome Bias: AI introduces ideas and points of view that I'd never think of. It enables me to break free from my usual patterns and creative biases.

3. Break Expertise Barriers: AI helps me venture beyond my comfort zone. It's a creative PARTNER, suggesting ways of tackling scenarios and strategies outside my existing knowledge. AI is the sage; I'm the creative director.

4. Refine for Real-World Use: As entrepreneurs, we can all get carried away with BIG IDEAS. AI assesses these ideas for practicality, helping me refine them into actionable strategies. It can also perform market research in minutes, which is fundamental when building products/services for your audience.

5. Enhance Decision Making: A million ideas. ONE DECISION. Using AI, I evaluate options based on data, not just intuition. I have all the cards laid out to execute my goals efficiently.

6. Accelerate Development Cycles: Perfectionism is the Achilles' heel of creativity. Repeated self-editing often causes more harm than good. AI's speed in processing and iterating on ideas shortens my development time. It also closes the door before imposter syndrome can creep in.

I make this my ritual for creativity. When it's time to execute, my strategies are not just creative but also AI-optimized for today's dynamic market.

Let AI be your co-pilot in business. Marry creativity with technology for groundbreaking results.

P.S. How do you use AI for creativity?
-
When building AI features you never have full confidence that "it will work." There's an inherent risk: without building the feature to see the result, you can't know if an AI result is good. But you don't want to build the feature only to discover the AI response isn't good. So how do you determine if it's worth building?

Last week we launched AI Assist to the world. Over the past few months, we went from low-confidence early AI ideas to launching AI Assist, our new tools for building AI-powered in-product assistance into your product. Building AI Assist was unlike any other feature I've built in my career. It felt much riskier than non-AI features.

By navigating the challenges of building AI Assist, we learned 3 product lessons for mitigating risk when building AI features:

Lesson 1️⃣: It's too easy to dream. Get grounded with technical research.
Make time for designers and developers to research what's possible. It may end up being a sunk cost (you'll have to be comfortable with that risk), but research is the best input for creating AI solutions that are grounded in reality.

Lesson 2️⃣: Quantify risk to select opportunities.
Unlike the traditional value-vs-effort prioritization of non-AI product work, AI requires a sense of the probability of success across a set of bets (like a Growth team!). Quantifying the risk helps you decide where to invest time.

Lesson 3️⃣: Building is the only way to know what to build.
With non-AI products, you have high certainty that a feature will work after discovery and definition. Not so with AI features. With AI features, you must iteratively build to increase the probability of success, then either continue or decide to cut your losses and move on.

I wrote a case study about building AI Assist and dug deeper into these lessons on our blog; link in the comments!
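Lesson 2's idea of quantifying risk across a set of bets can be made concrete with a simple risk-adjusted scoring pass. This is a minimal sketch: the formula (value times probability of success, divided by effort) and the example bets are my own illustration, not the team's actual model.

```python
# Hypothetical sketch: prioritizing AI feature bets by risk-adjusted value.
# Unlike plain value/effort scoring, each bet carries a probability of
# success that discounts its expected payoff.

def risk_adjusted_score(value: float, effort: float, p_success: float) -> float:
    """Expected value per unit of effort, discounted by probability of success."""
    return (value * p_success) / effort

bets = [
    {"name": "AI onboarding hints", "value": 8, "effort": 3, "p_success": 0.7},
    {"name": "Auto-generated docs", "value": 6, "effort": 2, "p_success": 0.4},
    {"name": "Full AI assistant", "value": 10, "effort": 8, "p_success": 0.3},
]

# Highest risk-adjusted score first: a high-value but low-probability bet
# can rank below a modest bet that is likely to work.
ranked = sorted(
    bets,
    key=lambda b: risk_adjusted_score(b["value"], b["effort"], b["p_success"]),
    reverse=True,
)

for b in ranked:
    score = risk_adjusted_score(b["value"], b["effort"], b["p_success"])
    print(f"{b['name']}: {score:.2f}")
```

Even a rough scoring pass like this forces the team to state a probability of success for each bet, which is exactly the conversation the lesson is pointing at.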
-
Here's a lesson I learned before releasing any AI model, algorithm, capability or solution to production! 🚀

2023 was exciting: it felt like everyone had some sort of AI use case in POC (proof of concept). 2024 is the year we push these capabilities to production. While many things need to be considered, today's post is about SETTING THRESHOLDS and guardrails for what is 'good enough' and production-worthy.

In the race to harness AI's transformative power, it's crucial that we set expectations. AI won't be perfect, but that shouldn't stop us from moving forward if we do the following:

✍ Answer this: is the error rate of your AI solution better than your human error rate? If you don't have the answer, measure the human error rate first and create a baseline metric.
Example: Data entry for product master data has a 78% accuracy rate, and our AI solution reaches 86%. You should move forward and push your solution to production.

✍ Before diving into design, set goals and metrics for what GOOD and GREAT should look like. This groundwork ensures your AI solution aligns with your business needs and doesn't set unrealistic expectations.
Example: At Watson, we had a threshold of a 70% confidence rate for any answer the knowledge store provided for ontology use cases. If you set your customer service knowledge store's threshold at 80%, then knowing that data retrieval must pass an 80% confidence level to be considered 'acceptable' gives the entire team a clear view of what must be accomplished before pushing to production. The key, however, is that you MUST take the time to measure the current output to determine whether the AI output is better.

In summary:
🙌 Define acceptability thresholds: What's the minimum performance level you're willing to accept? Setting this threshold is vital to ensure the AI's outputs meet your basic requirements.
🙌 Assess confidence levels: Determine the confidence level that suits your risk appetite. Not all scenarios require 99.9% accuracy. Find the sweet spot that aligns with your company's risk tolerance.
🙌 Align with your company's risk exposure: The potential of AI must be weighed against the backdrop of your company's risk profile. The goal is to innovate responsibly, without exposing your enterprise to undue risk.

Before the first line of code is written, take a step back. Evaluate whether your AI ambitions are not just visionary but also grounded in reality. It's not just about leveraging AI; it's about leveraging AI right.

#lessonsinleadership #lessonslearned #artificialintelligence #ai #productdesign #pitfalls #data
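The thresholding advice above reduces to two small checks: ship only when the model beats the measured human baseline, and surface an answer only when it clears the agreed confidence level. A minimal sketch using the example numbers from the post; the function names and the fallback behavior are my own illustration.

```python
# Hypothetical sketch of production gates for an AI feature:
# 1) a ship/no-ship gate against a measured human baseline, and
# 2) a per-answer confidence gate with a safe fallback.

HUMAN_BASELINE_ACCURACY = 0.78  # e.g. measured human data-entry accuracy
CONFIDENCE_THRESHOLD = 0.80     # minimum confidence to surface an answer

def ready_for_production(model_accuracy: float,
                         baseline: float = HUMAN_BASELINE_ACCURACY) -> bool:
    """Ship only if the model beats the measured human baseline."""
    return model_accuracy > baseline

def answer_or_escalate(answer: str, confidence: float,
                       threshold: float = CONFIDENCE_THRESHOLD) -> str:
    """Return the AI answer above the threshold; otherwise escalate."""
    if confidence >= threshold:
        return answer
    return "Escalating to a human agent."

# The post's example: 78% human accuracy vs 86% AI accuracy means ship.
print(ready_for_production(0.86))  # True
print(ready_for_production(0.75))  # False: worse than humans, keep iterating
```

The hard part is not the gating logic but the measurement behind it: both numbers only mean something if the human baseline and the model accuracy were measured on the same task with the same data.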
-
AI cannot do what experts are doing today. But an army of AI + domain experts is a lethal combo.

We saw this first-hand from an AI startup at Upekkha: the story of "Grunt Work as a Service."

This is the story of a startup we worked with. What do they do? They manage a marketplace of experts.

In the beginning, they built a product: experts log in and deliver their services. But then they realized that for this to become valuable, they needed a marketplace to showcase these experts to the right customers. Good learning.

But here's the fun part. After a few conversations with experts, they started to notice a pattern: none of the experts wanted to do the grunt work that comes along with their service. They wanted to stick to the cool parts:
- Sharing their expertise
- Real-world scenarios
- Practical things
And they wished the grunt work would just magically disappear.

You may have guessed it by now: the startup decided to offer "Grunt Work as a Service." While this was a huge struggle in their initial days, ChatGPT & LLMs made their way into the world, and they figured out a way to get the grunt work done with AI. It was much faster and much more systematic.

The experts are happy, because on any other platform they would have to do all the grunt work themselves. And the whole offering changed, from a managed marketplace to an end-to-end platform. An army of AI plus the experts came together, because the AI cannot do what these experts are doing: bringing in very specific, domain-specific smarts.

A whole spectrum of things will change with AI, more than any of us can even imagine.

I believe AI + Humans = Future of SaaS. What do you think?