Last week, I talked about the potential of AI to make work easier. This week, I want to share a concrete example of how we are doing that at HubSpot.

We're focused on helping our customers grow, so naturally we take customer support seriously. Whether it's a product question or a business challenge, we want inquiries answered efficiently and thoughtfully. We knew AI could help, but we didn't know quite what that would look like!

We first deployed AI in website and support chat. To mitigate any growing pains, we had a customer rep standing by who could quickly take the baton if things went sideways. And sometimes they did. But we didn't panic. We listened, we improved, and we kept testing. The more data the AI collects, the better it gets.

Today, 83% of the chat on HubSpot's website is AI-managed, and our chatbot is digitally resolving about 30% of incoming tickets. That's an enormous gain in productivity! Our customer reps have more time to focus on complex, high-touch questions. AI also helps us quickly identify trends (questions or issues that are being raised more frequently) so we can intervene early. In other words, AI has not just transformed our customer support. It has elevated it.

So, here is what we learned:
1. Don't panic if customer experience gets worse initially. It will improve as your data evolves.
2. Evolve your KPIs and how you measure success: if AI resolves the typical questions and your team resolves the tricky ones, your team will need more time per ticket.
3. Use AI to elevate your team's efforts.

How are you using AI in support? What are you learning?
AI For Enhancing User Experience
Explore top LinkedIn content from expert professionals.
-
Over the last year, I've seen many people fall into the same trap: they launch an AI-powered agent (chatbot, assistant, support tool, etc.) but only track surface-level KPIs, like response time or number of users.

That's not enough. To create AI systems that actually deliver value, we need holistic, human-centric metrics that reflect:
• User trust
• Task success
• Business impact
• Experience quality

This infographic highlights 15 essential dimensions to consider:
↳ Response Accuracy — Are your AI answers actually useful and correct?
↳ Task Completion Rate — Can the agent complete full workflows, not just answer trivia?
↳ Latency — Response speed still matters, especially in production.
↳ User Engagement — How often are users returning or interacting meaningfully?
↳ Success Rate — Did the user achieve their goal? This is your north star.
↳ Error Rate — Irrelevant or wrong responses? That's friction.
↳ Session Duration — Longer isn't always better; it depends on the goal.
↳ User Retention — Are users coming back after the first experience?
↳ Cost per Interaction — Especially critical at scale. Budget-wise agents win.
↳ Conversation Depth — Can the agent handle follow-ups and multi-turn dialogue?
↳ User Satisfaction Score — Feedback from actual users is gold.
↳ Contextual Understanding — Can your AI remember and refer to earlier inputs?
↳ Scalability — Can it handle volume without degrading performance?
↳ Knowledge Retrieval Efficiency — This is key for RAG-based agents.
↳ Adaptability Score — Is your AI learning and improving over time?

If you're building or managing AI agents, bookmark this. Whether it's a support bot, GenAI assistant, or a multi-agent system, these are the metrics that will shape real-world success.

Did I miss any critical ones you use in your projects? Let's make this list even stronger — drop your thoughts 👇
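To make a few of these dimensions concrete, here is a minimal Python sketch that aggregates task completion rate, error rate, latency, and cost per interaction from raw interaction logs. The `Interaction` schema and its field names are hypothetical, purely for illustration; real pipelines would pull these from analytics events.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One logged agent interaction (hypothetical schema for illustration)."""
    user_id: str
    task_completed: bool
    error: bool
    latency_ms: float
    cost_usd: float

def agent_kpis(logs):
    """Aggregate a few of the holistic agent metrics from raw logs."""
    n = len(logs)
    return {
        "task_completion_rate": sum(i.task_completed for i in logs) / n,
        "error_rate": sum(i.error for i in logs) / n,
        "avg_latency_ms": sum(i.latency_ms for i in logs) / n,
        "cost_per_interaction": sum(i.cost_usd for i in logs) / n,
    }

logs = [
    Interaction("u1", True, False, 420.0, 0.002),
    Interaction("u2", False, True, 910.0, 0.004),
    Interaction("u1", True, False, 500.0, 0.002),
]
kpis = agent_kpis(logs)
```

The point is less the arithmetic than the habit: every dimension in the list above should map to a field you actually log.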
-
We have ChatGPT for writing and Cursor for coding, but we really need an AI for data analysis. That's why today's launch of Amplitude AI agents is interesting. It has at least 3 really promising use cases for PMs:

1. Feature Adoption
Normally, when you launch a feature, there's a rush in the 24 hours after to watch session replays and look at the funnel. The best teams even ship small UX improvements in this window. But an AI agent could change the game. It could identify who's engaged vs. who's stuck. It could find dropoff points in your flow. And it could even create guides for users who are struggling. All that automation would save you time digging through dashboards and let you focus on taking action. Talk about creating leverage for PMs.

2. Product Monitoring
The current state of things breaking is: something ships or something external happens, then 2-3 days later someone notices, and finally there's a scramble to fix it. A data analysis AI agent can monitor this 24/7. That's the promise of Amplitude's agent. The moment conversion dips, it can analyze session replays, cross-reference recent changes, alert you, present options, and then act on what you approve. In a competitive market where inches matter, shaving 2-3 days off response time can be a game changer.

3. Monetization Upgrades
Growth teams know there's a perfect moment to show upgrade prompts. Too early and you annoy users. Too late and they've already formed habits around the free version. An AI agent can learn the behavioral signals that indicate readiness to pay: usage patterns, feature engagement, time spent, all the data points humans can't track at scale. You can then steer agents to ship high-value upgrade flows and spend less time dwelling on the next pricing plan test.

If you want to check out the beta (like I am), find the link in the comments. What's your take: is this an AI agent launch worth watching?
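To illustrate the product-monitoring idea in miniature, here is a toy conversion-dip detector: it compares each day's conversion rate against a trailing-window baseline and raises an alert on a sharp drop. The window size and threshold are arbitrary assumptions for the sketch; Amplitude's actual agent is not described at this level of detail.

```python
from collections import deque

def make_dip_monitor(window=7, drop_threshold=0.15):
    """Return a checker that flags when today's conversion rate falls
    more than drop_threshold below the trailing-window average.
    (Illustrative defaults; tune both to your traffic.)"""
    history = deque(maxlen=window)

    def check(conversion_rate):
        alert = False
        if len(history) == history.maxlen:  # only alert once baseline is full
            baseline = sum(history) / len(history)
            if baseline > 0 and (baseline - conversion_rate) / baseline > drop_threshold:
                alert = True
        history.append(conversion_rate)
        return alert

    return check

monitor = make_dip_monitor(window=3, drop_threshold=0.15)
readings = [0.30, 0.31, 0.29, 0.30, 0.24]  # final day dips ~20% below baseline
alerts = [monitor(r) for r in readings]
```

An agent would layer the interesting work on top of this trigger: pulling session replays, diffing recent releases, and proposing fixes. The trigger itself is the easy part.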
-
Product managers & designers working with AI face a unique challenge: designing a delightful product experience that cannot fully be predicted.

Traditionally, product development followed a linear path. A PM defines the problem, a designer draws the solution, and the software teams code the product. The outcome was largely predictable, and the user experience was consistent.

With AI, however, the rules have changed. Non-deterministic ML models introduce uncertainty & chaotic behavior. The same question asked four times produces different outputs. Asking the same question in different ways, even with just an extra space in the question, elicits different results. How does one design a product experience in the fog of AI?

The answer lies in embracing the unpredictable nature of AI and adapting your design approach. Here are a few strategies to consider:

1. Fast feedback loops: Great machine learning products elicit user feedback passively. Just click on the first result of a Google search and come back to the second one. That's a great signal for Google that the first result is not optimal, without the user typing a word.

2. Evaluation: Before products launch, it's critical to run the machine learning systems through a battery of tests to understand how the LLM will respond in the most likely use cases.

3. Over-measurement: It's unclear what will matter in product experiences today, so measure as much as possible in the user experience, whether it's session times, conversation topic analysis, sentiment scores, or other numbers.

4. Couple with deterministic systems: Some startups are using large language models to suggest ideas that are then evaluated with deterministic or classic machine learning systems. This design pattern can quash some of the chaotic, non-deterministic nature of LLMs.

5. Smaller models: Smaller models that are tuned or optimized for specific use cases will produce narrower output, controlling the experience.
The goal is not to eliminate unpredictability altogether but to design a product that can adapt and learn alongside its users. Just as much as the technology has changed products, our design processes must evolve as well.
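One simple way to start evaluating under non-determinism is to send the same prompt repeatedly and measure how consistently the system answers. A minimal sketch, with the model call stubbed out so the example is reproducible; `model_fn` stands in for any real LLM client:

```python
from collections import Counter

def consistency_check(model_fn, prompt, runs=5):
    """Call a (possibly non-deterministic) model repeatedly with one
    prompt and report how often its most common answer appears."""
    answers = [model_fn(prompt) for _ in range(runs)]
    top_answer, top_count = Counter(answers).most_common(1)[0]
    return {"top_answer": top_answer, "agreement": top_count / runs}

# Stub model: canned responses so the sketch is deterministic;
# a real LLM call would go in its place.
responses = iter(["42", "42", "forty-two", "42", "42"])
result = consistency_check(lambda p: next(responses), "What is 6 * 7?", runs=5)
```

A low agreement score on a high-stakes prompt is exactly the kind of signal an evaluation battery should surface before launch.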
-
In the world of AI, most products today lean toward pull-based experiences: you ask a question, and the system responds. These experiences feel intuitive, empowering users to be in control. But while they create a solid foundation for usability, the real wow factor emerges when AI shifts to push-based use cases.

Imagine AI anticipating your needs: suggesting edits to your document as you write, proposing new paragraphs to enhance clarity, or offering tailored deals as you browse a product on your shopping list, going beyond standard recommendations. Push-based AI doesn't wait to be called upon; it's there, actively delivering value in real time.

This proactive intelligence becomes feasible with agentic AI that works across systems. These agents not only automate tasks but also enhance user workflows by making smart decisions on the user's behalf. For instance, writing ad copy becomes seamless when AI not only generates ideas but also conducts market research, optimizes for SEO, and aligns with the latest trends, all in the background. It's no longer about searching for insights but having them delivered at the right moment. The value is in timing and relevance, making AI feel more like a trusted assistant than a tool.

This shift from pull to push is why agentic systems are gaining so much momentum. It's not just a race for computing power; it's a race for attention. By meeting users where they are and anticipating their needs, AI applications can elevate user experiences and redefine expectations. The future of AI isn't just about solving problems when asked; it's about solving problems before you even realize they exist.

#ExperienceFromTheField #WrittenByHuman #EditedByAI
-
Predicting user behavior is key to delivering personalized experiences and increasing engagement. In mobile gaming, anticipating a player's next move, like which game table they'll choose, can meaningfully improve the user journey.

In a recent tech blog, the data science team at Hike shares how transformer-based models can help forecast user actions with greater accuracy. The blog details the team's approach to modeling behavior in the Rush Gaming Universe. They use a transformer-based model to predict the sequence of tables a user is likely to play, based on factors like player skill and past game outcomes. The model relies on features such as game index, table index, and win/loss history, which are converted into dense vectors with positional encoding to capture the order and timing of events. This architecture enables the system to auto-regressively predict what users are likely to do next.

To validate performance, the team ran an A/B test comparing this model with their existing statistical recommendation system. The transformer-based model led to a ~4% increase in Average Revenue Per User (ARPU), a meaningful lift in engagement.

This case study showcases the growing power of transformer models in capturing sequential user behavior and offers practical lessons for teams working on personalized, data-driven experiences.

#DataScience #MachineLearning #Analytics #Transformers #Personalization #AI #SnacksWeeklyonDataScience

Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain the concepts discussed in this and future posts in more detail:
-- Spotify: https://lnkd.in/gKgaMvbh
-- Apple Podcast: https://lnkd.in/gj6aPBBY
-- Youtube: https://lnkd.in/gcwPeBmR

https://lnkd.in/gJR88Rnp
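For intuition on how per-game features and positional encoding combine into dense vectors, here is a pure-Python toy. The sinusoidal encoding is the standard transformer formulation; the feature vector is a crude stand-in (a real model, such as the one Hike describes, would use learned embedding tables rather than raw scalars):

```python
import math

def positional_encoding(position, d_model=8):
    """Standard sinusoidal positional encoding for one sequence position,
    marking the order of a player's past games."""
    vec = []
    for i in range(0, d_model, 2):
        angle = position / (10000 ** (i / d_model))
        vec.append(math.sin(angle))
        vec.append(math.cos(angle))
    return vec

def encode_event(pos, game_idx, table_idx, won, d_model=8):
    """Toy dense vector: raw features padded to d_model, plus the
    positional encoding. Purely illustrative feature layout."""
    features = [float(game_idx), float(table_idx), float(won)] + [0.0] * (d_model - 3)
    pe = positional_encoding(pos, d_model)
    return [f + p for f, p in zip(features, pe)]

# Hypothetical history: (game_index, table_index, won) triples for one player.
history = [(0, 3, 1), (1, 5, 0), (2, 3, 1)]
encoded = [encode_event(pos, g, t, w) for pos, (g, t, w) in enumerate(history)]
```

The transformer then attends over these position-stamped vectors to predict the next table auto-regressively, which is why event order and timing survive the encoding step.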
-
Exciting AI + accessibility news for the blind community! Be My Eyes has partnered with OpenAI/ChatGPT to create a groundbreaking accessibility tool that uses AI. Users can point their phone at the scene in front of them, and the phone will provide a visual description and speak back to them in real time for tasks such as hailing a taxi, reading a menu, or describing a monument. This could be a gamechanger for many blind people, enhancing independence and making the world more accessible for them.

As a deafblind woman, it excites me to see a new accessibility tool emerging. This innovation holds great promise, and I'm eager to witness how it empowers the blind community by offering real-time descriptions of their surroundings. Imagine the freedom and confidence this could instill in daily life for blind people, from navigating new places to simply enjoying the beauty of nature.

However, blindness varies widely, so this tool might be more suitable for some people than for others. For example, there are still limitations for the deafblind community. As blindness is a spectrum, many blind people still have remaining vision. If they're deafblind like me, they need captions to have full access when receiving auditory information. I'm curious about what blind users will think of the tool once they start to adopt it. While this is a fantastic advancement, there's always a need for continued improvements and iteration. I also care deeply about preventing the harmful impacts of AI, so I hope that this is being thought about as well.

Accessibility technology is crucial for the disability community. It not only enhances our ability to engage with the world but also promotes independence and equity. What are your thoughts on this new development?

P.S. Here's a cool video on it: https://lnkd.in/etfHehCh

#Accessibility #AI #DisabilityInclusion
-
For years, companies have been leveraging artificial intelligence (AI) and machine learning to provide personalized customer experiences. One widespread use case is showing product recommendations based on previous data. But there's so much more potential in AI; we're just scratching the surface.

One of the most important things for any company is anticipating each customer's needs and delivering predictive personalization. Understanding customer intent is critical to shaping predictive personalization strategies. This involves interpreting signals from customers' current and past behaviors to infer what they are likely to need or do next, and then dynamically surfacing that through a platform of their choice. Here's how:

1. Customer Journey Mapping: Understanding the various stages a customer goes through, from awareness to purchase and beyond. This helps in identifying key moments where personalization can have the most impact. This doesn't have to be an exercise on a whiteboard; in fact, I would counsel against that. Journey analytics software can get you there quickly and keep journeys "alive" in real time, changing dynamically as customer needs evolve.

2. Behavioral Analysis: Examining how customers interact with your brand, including what they click on, how long they spend on certain pages, and what they search for. You will need analytical resources here, and hopefully you have them on your team. If not, find them in your organization; my experience has been that they find this type of exercise interesting and will want to help.

3. Sentiment Analysis: Using natural language processing to understand customer sentiment expressed in feedback, reviews, social media, or even case notes. This provides insights into how customers feel about your brand or products. As with journey analytics, technology and analytical resources will be important here.

4. Predictive Analytics: Employing advanced analytics to forecast future customer behavior based on current data. This can involve machine learning models that evolve and improve over time.

5. Feedback Loops: Continuously incorporating customer signals (not just survey feedback) to refine and enhance personalization strategies. Set these up through your analytics team.

Predictive personalization is not just about selling more; it's about enhancing the customer experience by making interactions more relevant, timely, and personalized. This customer-led approach leads to increased revenue and reduced cost-to-serve.

How is your organization thinking about personalization in 2024? DM me if you want to talk it through.

#customerexperience #artificialintelligence #ai #personalization #technology #ceo
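Predictive analytics for personalization can start as simply as a propensity score over behavioral signals. A toy sketch, with hand-picked weights standing in for what a trained model (logistic regression or similar) would learn from historical conversions; the signal names are invented for illustration:

```python
import math

def propensity(signals, weights, bias=-3.0):
    """Toy propensity score: weighted behavioral signals squashed
    through a sigmoid. Weights are illustrative, not learned."""
    z = bias + sum(weights[k] * v for k, v in signals.items())
    return 1 / (1 + math.exp(-z))

weights = {
    "sessions_last_7d": 0.3,       # frequent use suggests forming a habit
    "premium_feature_clicks": 0.8,  # interest in paid functionality
    "days_since_signup": -0.05,     # very old free users are harder to convert
}

casual = {"sessions_last_7d": 1, "premium_feature_clicks": 0, "days_since_signup": 40}
engaged = {"sessions_last_7d": 6, "premium_feature_clicks": 4, "days_since_signup": 10}

low, high = propensity(casual, weights), propensity(engaged, weights)
```

The personalization decision (when to show an offer, which channel to use) then keys off the score crossing a threshold, which is where the feedback loops in step 5 come in: observed outcomes retrain the weights.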
-
Memory & personalization might be the real moat for AI we've been looking for. But where that moat forms is still up for grabs:
• App level
• Model level
• OS level
• Enterprise level
Each has very different dynamics. 🧵

1. App-level personalization
Apps build their own memory & context for users. Examples:
• Harvey remembering firm-specific legal knowledge for law firms
• Abridge capturing patient conversations & generating notes for doctors
• Perplexity building long-term search profiles for individual users
➡️ Most likely in vertical applications with focused use cases and domain-specific data. This is where Eniac Ventures is currently doing most of our investing.

2. Model-level personalization
The model itself becomes personalized and portable across apps. Examples:
• ChatGPT memory & custom instructions
• Meta's LLaMa fine-tuned on personal embeddings
➡️ Most likely in general-purpose assistants and broad horizontal use cases where user context needs to travel across apps.

3. OS-level personalization
Personalization happens at the OS level, shared across apps & devices. Examples:
• Google Gemini native to Android
• Apple (maybe) embedding Claude via Anthropic
➡️ Most likely in consumer devices and mobile ecosystems where platforms control distribution.

4. Enterprise-level personalization
Each enterprise owns and controls its own personalization layer for employees & customers. Examples:
• Microsoft Copilot trained on company data
• OSS models (LLaMa, Mistral) deployed on private infra with platforms like TrueFoundry
• OpenAI GPTs fine-tuned & hosted in secure enterprise environments
➡️ Most likely in highly regulated industries (healthcare, financial services) where data privacy and compliance are critical.

Why it matters: where memory & personalization "land" may define who captures AI value. Different layers may win in different sectors. Where AI memory lives may reshape who captures the next decade of value.
-
A Director of UX at a SaaS company recently shared a painful calculation with me: their team of 3 researchers spent 75% of their time on manual analysis. At an average salary of $150K, that's nearly $300K annually spent on analyzing data. But the bigger cost? Critical product decisions made without insights because "we can't wait for research."

Most UX and product teams are trapped in a costly cycle of inefficiency: conduct user interviews → spend 30+ hours manually analyzing → create a report → make decisions based on gut feeling before the report is ready.

After watching UX teams struggle with this for years, I've identified the core problem: research insights are treated as artifacts, not conversations. This is why we built AI Wizard into Looppanel, a conversational research companion that transforms how teams extract value from user research.

Instead of static reports and manual analysis, AI Wizard lets anyone simply ask:
"What pain points did users mention about the onboarding process?"
"Summarize the key recommendations users suggested for improving the checkout flow."
"What were the main differences in how novice users versus power users approached this task?"

You start by selecting from templates like Pain Points, Recommendations, or Summary. AI Wizard instantly analyzes your project data and engages in a natural conversation, complete with follow-up questions to dig deeper into specific areas.

The way I see it, AI Wizard helps solve 3 critical problems:

1. The speed-to-decision problem
Waiting weeks for analysis means missing decision windows. AI Wizard delivers TLDR overviews in seconds, not days.

2. The iteration problem
No more re-analyzing the data because of a follow-up question. Answer unexpected stakeholder questions on the spot instead of scheduling another week of analysis.

3. The tailored communication problem
Automatically format the same insights for different audiences: executives get metrics, designers get details, all without rebuilding presentations.

With AI Wizard, your team can:
→ Start conversations with templates like Pain Points, Recommendations, or Summary
→ Ask follow-up questions to dig deeper
→ Get insights from across your entire research repository in seconds
→ Democratize access to insights throughout your organization

Will your team be leading this transformation or catching up to it? If you want to make the shift, sign up for a personalized demo here: https://bit.ly/42PEOlX