From my new Harvard Business Review article, here’s how to create the last of four pillars that innovative organizations need – Innovation Communities: Innovations often happen at intersections, yet many companies lack ways for innovators to connect informally and see where conversations go. This can also make innovation a lonely endeavor. It doesn’t cost much or take a lot of time to give people with common innovation interests a way to connect and exchange ideas. At the very least, it’ll help keep them motivated. At best, it may trigger new kinds of cross-disciplinary collaborations that open up previously unseen vectors for change. Don’t be Atari, which was abandoned in frustration by an ambitious innovator: Steve Jobs. What to do instead? Cultivate community. Take the German life sciences company, Bayer. Bayer has created an internal community of 700 innovators around the world who use common resources, join competitions against one another, and nominate local representatives to participate in an annual meeting. These connections then enable discussions about ways to cross-apply methods, business models, and other capabilities that can translate across business units. For instance, the program helped create agricultural finance options that are now offered around the world, stemming in part from an idea that originated in Bayer’s corporate finance and marketing departments in Greece. (How have you built innovation communities? Please share your approaches in the comments!)
Collaborative Innovation Tools
-
Over the years, I've discovered the truth: Game-changing products won't succeed unless they have a unified vision across sales, marketing, and product teams. When these key functions pull in different directions, it's a death knell for go-to-market execution. Without alignment on positioning and buyer messaging, we fail to communicate value and create disjointed experiences. So, how do I foster collaboration across these functions? 1) Set shared goals and incentivize unity towards that North Star metric, be it revenue, activations, or retention. 2) Encourage team members to work closely together, building empathy rather than skepticism of other groups' intentions and contributions. 3) Regularly conduct cross-functional roadmapping sessions to cascade priorities across departments and highlight dependencies. 4) Create an environment where teams can constructively debate assumptions and strategies without politics or blame. 5) Provide clarity for sales on target personas and value propositions to equip them for deal conversations. 6) Involve all functions early in establishing positioning and messaging frameworks. Co-create when possible. By rallying together around customers’ needs, we block and tackle as one team towards product-market fit. The magic truly happens when teams unite towards a shared mission to delight users!
-
One critical skill of great Product Managers is that they can take an immense amount of information and make sense out of it to find a path forward. Your job isn’t just to get the data, it’s to create action out of that data. But this is where many people get paralyzed. For product managers who struggle with this, I find tools like Affinity Mapping extremely helpful for organizing your thoughts. Affinity Mapping is a basic facilitation and collaboration tool, but it’s extremely powerful. Put simply, it’s a practical way to sort through different pieces of data, group them into common themes, and discover valuable insights. Whether you're dealing with complicated user research or trying to get everyone on the same page, this method helps you focus and find your way forward. Here's how to run an Affinity Mapping session that's not just productive, but also a bit of fun: 1️⃣ Gather Your Data: Start with all the raw data you have – post-its from brainstorming, customer feedback, interview notes, you name it. Get it all on the table. Literally. 2️⃣ Invite the Right People: Bring together a diverse group from your team. Yes, diversity! You want different perspectives – designers, developers, marketers, and especially those who are often quiet but have brilliant thoughts simmering under the surface. 🧠 3️⃣ Create a Safe Space: Before diving in, set the stage for open collaboration. Remind everyone that every idea is valuable and we're here to discover, not judge. This is about finding patterns, not picking favorites. 4️⃣ Sort and Cluster: Now, get sticky! Start placing related ideas together. Don't overthink it. Go with your gut. You'll see themes start to emerge as you cluster similar thoughts. It's like a puzzle where the picture becomes clearer with each piece. 🧩 5️⃣ Label the Themes: Once you have your clusters, give each one a name that captures the essence of the ideas within it. These labels will be your guideposts for action later on. 
6️⃣ Reflect and Discuss: Take a step back. What do you see? Any surprises? Discuss as a group and make sure everyone's voice is heard. This is where the magic happens – insights start to bubble up to the surface. 7️⃣ Prioritize and Act: Finally, decide what's most important. Which themes align with your goals? Which insights are game-changers? Make a plan to act on these priorities. Affinity mapping is not just about organizing thoughts; it's about unlocking the collective wisdom of your team. It's a powerful way to build consensus and ensure everyone's voice is heard. So, next time you're grappling with data overload, grab some sticky notes and start mapping! What else have you used to help organize your thoughts and data? #ProductManagement #UserResearch #Collaboration #AffinityMapping
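If you later want to re-run the same sort-and-cluster logic on digital notes (say, exported from a whiteboard tool), it can be sketched in a few lines of code. This is a minimal illustration only – the notes and theme keywords below are hypothetical, and a real session relies on human judgment rather than keyword matching:

```python
from collections import defaultdict

# Hypothetical raw notes gathered in step 1
notes = [
    "Signup form asks for too many fields",
    "Users love the weekly summary email",
    "Password reset link often expires",
    "Email digest is the main reason I return",
    "Onboarding tour was confusing",
]

# Hypothetical theme keywords a team might converge on (steps 4-5)
themes = {
    "Onboarding friction": ["signup", "onboarding", "password"],
    "Email engagement": ["email", "digest"],
}

# Cluster each note under the first theme whose keywords match it
clusters = defaultdict(list)
for note in notes:
    lowered = note.lower()
    label = next(
        (t for t, kws in themes.items() if any(k in lowered for k in kws)),
        "Unsorted",
    )
    clusters[label].append(note)

for theme, items in clusters.items():
    print(f"{theme}: {len(items)} notes")
```

Anything that lands in "Unsorted" is a good candidate for group discussion in step 6.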
-
Traditional usability tests often treat user experience factors in isolation, as if different factors like usability, trust, and satisfaction are independent of each other. But in reality, they are deeply interconnected. By analyzing each factor separately, we miss the big picture - how these elements interact and shape user behavior. This is where Structural Equation Modeling (SEM) can be incredibly helpful. Instead of looking at single data points, SEM maps out the relationships between key UX variables, showing how they influence each other. It helps UX teams move beyond surface-level insights and truly understand what drives engagement. For example, usability might directly impact trust, which in turn boosts satisfaction and leads to higher engagement. Traditional methods might capture these factors separately, but SEM reveals the full story by quantifying their connections. SEM also enhances predictive modeling. By integrating techniques like Artificial Neural Networks (ANN), it helps forecast how users will react to design changes before they are implemented. Instead of relying on intuition, teams can test different scenarios and choose the most effective approach. Another advantage is mediation and moderation analysis. UX researchers often know that certain factors influence engagement, but SEM explains how and why. Does trust increase retention, or is it satisfaction that plays the bigger role? These insights help prioritize what really matters. Finally, SEM combined with Necessary Condition Analysis (NCA) identifies UX elements that are absolutely essential for engagement. This ensures that teams focus resources on factors that truly move the needle rather than making small, isolated tweaks with minimal impact.
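The mediation idea at the heart of this – usability driving trust, which drives satisfaction – can be illustrated with a toy path analysis. The sketch below uses simulated data and plain least-squares regressions for the two paths; a real SEM analysis would use dedicated tooling (e.g. lavaan or semopy) with latent variables and fit statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated survey scores (illustrative only): usability drives trust,
# and trust in turn drives satisfaction (a simple mediation chain)
usability = rng.normal(size=n)
trust = 0.6 * usability + rng.normal(scale=0.5, size=n)
satisfaction = 0.7 * trust + rng.normal(scale=0.5, size=n)

def ols_slope(x, y):
    """Least-squares slope of y on x (with an intercept term)."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

a = ols_slope(usability, trust)        # usability -> trust path
b = ols_slope(trust, satisfaction)     # trust -> satisfaction path
indirect = a * b                       # mediated effect of usability
print(f"a={a:.2f}, b={b:.2f}, indirect effect={indirect:.2f}")
```

The product of the two path coefficients is the indirect (mediated) effect – the quantity SEM estimates and tests formally, and the kind of number that answers "is it trust or satisfaction doing the work?".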
-
Decision-making by consensus is the biggest killer of good product marketing. Case in point: You develop messaging by seeking buy-in from every stakeholder. As a result, the messaging is a kitchen sink of what everyone wants to say. It is diluted and ineffective, leaving no one satisfied with the final output. And it probably doesn’t resonate with customers. Yet, too often, Martina Lauchengco and I have seen companies make product marketing decisions by consensus. This doesn't work because it ignores that clarity comes from saying no to important things. So, what do you do, especially if you work in a consensus-driven culture? Here are specific tips from Martina: 💡 Shift from consensus to collaboration. The former means everyone’s wishes are included. The latter means everyone’s wishes are considered, but a decision-maker decides what is best. Active collaboration is about expanding on and incorporating others’ ideas, with a focus on creating a potentially new idea that re-frames and is better than any individual’s idea at reaching a goal. Conversely, consensus rushes to a concept closest to what everyone wants, and then everyone piles more of what they want into the concept. It doesn’t necessarily enrich the idea; it polishes it. But the problem is that you might be polishing a turd. 💩 Here’s what strong collaboration with others looks like: 1. Openly explore the landscape of possibilities. Without judgment, let ideas fly. Explicitly say when you’re in ‘open’ mode just trying to get more inputs. Using a collaboration tool like Miro could help people open up to sharing. 2. Have strong ideas, loosely held. Good teammates have strong opinions. But when collaborating effectively, people show they remain open. 3. Know your why. Is what you’re creating for customers or internal? Anchor collaboration around the why. It helps people separate their personal agendas from what’s actually helpful for an intended purpose. 4. Acknowledge what’s been said. 
Even if you don’t agree, you can still acknowledge what was said so everyone feels what they’re saying is considered. 5. Suggest options that incorporate what was discussed. A good collaborator shows evidence of other people’s ideas and presents a potential way forward. Again, it doesn’t have to be a blend of everyone’s ideas, but pick and choose what threads the needle of ‘the why’. 6. Be clear on who owns the decision. If this isn’t clear in a room, ask, “Who gets to make the final call on this?” That helps you calibrate what you own vis-a-vis your collaborators and reminds people there are certain things PMM is best equipped to own. Strong collaboration improves the ideas considered. Consensus tends to water ideas down. ❓ Do you have examples of when this happened to you - good or bad? Share it in the comments! P.S. I am SO honored to be collaborating with the legendary Martina Lauchengco - more content to come! #productmarketing #growth #career #leadership #tech
-
Earlier this year, I facilitated a strategy session where one person’s voice dominated while quiet team members retreated into their shells. Halfway through, I paused, put everyone into small groups, and gave them roles to pick up. Here's how it works: 1️⃣ Assign Roles: Each small group had a Questioner, Connector, and Synthesizer. - Questioner: Probes deeper and asks clarifying, “why?” and “how?” questions. - Connector: Links ideas across people, points out overlaps and sparks “aha” moments. - Synthesizer: Distills discussion into concise insights and next-step recommendations. 2️⃣ Clarify Focus: Groups tackled one critical topic (e.g., “How might we streamline on-boarding?”) for 10 minutes. 3️⃣ Reconvene & Share: Each group’s Synthesizer distilled insights in 60 seconds. The result? Silent participants suddenly spoke up, ideas flowed more freely, and we landed on three actionable priorities in our timebox. Next time you sense a lull in your meeting/session/workshop, try role-based breakouts. #Facilitation #Breakouts #TeamEngagement #ActiveParticipation Sutey Coaching & Consulting --------------------------------------------- ☕ Curious to dive deeper? Let’s connect. https://lnkd.in/gGJjcffw
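For recurring sessions, the grouping-and-role-assignment step can even be automated so nobody gets the same role twice in a row by habit. A minimal sketch (participant names are made up):

```python
import random

ROLES = ["Questioner", "Connector", "Synthesizer"]

def assign_breakouts(participants, seed=None):
    """Shuffle participants into groups of three and give each
    member one of the three discussion roles."""
    rng = random.Random(seed)
    people = list(participants)
    rng.shuffle(people)
    groups = []
    for i in range(0, len(people), 3):
        trio = people[i:i + 3]
        # zip pairs roles with members; a short final group
        # simply gets fewer roles
        groups.append(dict(zip(ROLES, trio)))
    return groups

team = ["Ana", "Ben", "Chi", "Dev", "Eve", "Fay"]
for group in assign_breakouts(team, seed=42):
    print(group)
```

Re-running with a different seed reshuffles both the groupings and who holds each role.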
-
AI changes how we measure UX. We’ve been thinking and iterating on how we track user experiences with AI. In our open Glare framework, we use a mix of attitudinal, behavioral, and performance metrics. AI tools open the door to customizing metrics based on how people use each experience. I’d love to hear who else is exploring this. To measure UX in AI tools, it helps to follow the user journey and match the right metrics to each step. Here's a simple way to break it down: 1. Before using the tool Start by understanding what users expect and how confident they feel. This gives you a sense of their goals and trust levels. 2. While prompting Track how easily users explain what they want. Look at how much effort it takes and whether the first result is useful. 3. While refining the output Measure how smoothly users improve or adjust the results. Count retries, check how well they understand the output, and watch for moments when the tool really surprises or delights them. 4. After seeing the results Check if the result is actually helpful. Time-to-value and satisfaction ratings show whether the tool delivered on its promise. 5. After the session ends See what users do next. Do they leave, return, or keep using it? This helps you understand the lasting value of the experience. We need sharper ways to measure how people use AI. Clicks can’t tell the whole story. But getting this data is not easy. What matters is whether the experience builds trust, sparks creativity, and delivers something users feel good about. These are the signals that show us if the tool is working, not just technically, but emotionally and practically. How are you thinking about this? #productdesign #uxmetrics #productdiscovery #uxresearch
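One way to make the journey-stage breakdown above operational is a simple lookup from stage to candidate metrics, which an analytics pipeline or research plan can then draw from. The stage and metric names below are illustrative, not part of the Glare framework itself:

```python
# Hypothetical mapping of AI-tool journey stages to candidate UX metrics,
# following the five steps described above
JOURNEY_METRICS = {
    "before_use": ["expectation rating", "confidence / trust baseline"],
    "while_prompting": ["prompt effort (edits per prompt)", "first-result usefulness"],
    "while_refining": ["retry count", "output comprehension", "delight moments"],
    "after_results": ["time-to-value", "satisfaction rating"],
    "after_session": ["return rate", "continued usage"],
}

def metrics_for(stage: str) -> list[str]:
    """Look up candidate metrics for a journey stage."""
    return JOURNEY_METRICS.get(stage, [])

print(metrics_for("while_refining"))
```

The point of the structure is that each stage gets its own signals, rather than one blanket satisfaction score stretched across the whole experience.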
-
A proposed qualitative evaluation framework for Generative AI writing tools: This post is my first draft of an evaluation framework for assessing generative AI tools (e.g. Claude, ChatGPT, Gemini). It's something I’ve been working on with Ryan Low — originally in the interest of selecting the best option for Rotational. At some point we realized sharing these ideas might help us and others out there trying to pick the best AI solution for your company's writing needs. We want to be clear that this is not another LLM benchmarking tool. It's not about picking the solution that can count the r's in strawberry or repeatably do long division. This is more about the everyday human experience of using AI tools for our jobs, doing the kinds of things we do all day solving our customers' problems 🙂. We're trying to zoom in on things that directly impact our productivity, efficiency, and creativity. Do these resonate with anyone else out there? Has anyone else tried to do something like this? What other things would you add? Proposed Qualitative Evaluation Criteria 1 - Trust and Accuracy Do I trust it? How often does it say things that I know to be incorrect? Do I feel safe? Do I understand how my data is being used when I interact with it? 2 - Autonomous Capabilities How much work will it do on my behalf? What kinds of research and summarization tasks will it do for me? Will it research candidates for me and draft targeted emails? Will it read documents from our corporate document drive and use the content to help us develop proposals? Will it review a technical paper, provided a URL? 3 - Context Management and Continuity How well does the tool maintain our conversation context? Not to sound silly, but does the tool remember me? Is it caching stuff? Is there a way for me to upload information about myself into the user interface so that I don’t have to continually reintroduce myself? Does it offer a way to group our conversations by project or my train of thought? 
Does it remember our past conversations? How far back? Can I get it to understand time from my perspective? 4 - User Experience Does the user interface feel intuitive? 5 - Images How does it do with images? Is it good at creating the kind of images that I need? Can the images it generates be used as-is or do they require modification? 6 - Integrations Does it integrate with our other tools (e.g. for project management, for video conferences, for storing documents, for sales, etc)? 7 - Trajectory Is it getting better? Does the tool seem to be improving based on community feedback? Am I getting better at using it?
-
Ever been in a meeting where ideas spiral, voices clash, loud dominates, and progress stalls? Frustrating, right? Here’s the fix: Dr. Edward de Bono’s Six Thinking Hats technique. This method doesn’t tell teams WHAT to think—it focuses them on HOW to think. By separating perspectives and directing focus, it transforms chaos into clarity, ensuring every voice is heard and team alignment is built. 🎩 How it works: ------------------ The Six Hats method assigns specific modes of thinking to each “hat,” creating structure and psychological safety in discussions. Everyone works on the same perspective at a time, avoiding disconnected, adversarial debates and fostering collaboration. Key features: 🔹 One facilitator (Blue Hat): Manages the flow, ensures focus, and guides the group through each thinking mode. 🔹 Everyone else wears metaphorical hats: Encouraged to think from specific angles—facts, emotions, risks, benefits, creativity—one at a time. 💡 Why it works: --------------------- 🔹 Ego-free conversations: Separates personal agendas from productive collaboration. 🔹 Deep exploration: Each hat creates space for substance over surface-level chatter. 🔹 Parallel thinking: Aligns the group around solving problems instead of debating them. 🧢👒⛑️ The Hats in Action:🪖🎩👷🏻♀️👩🏽🍳 ------------------------------------ 🧢 Blue (facilitator): This hat is worn by the session leader, who guides the group through each thinking mode, keeps the conversation focused, and synthesizes insights to ensure actionable outcomes. Then the group dons these hats to discuss each of these perspectives one at a time: 👩🏽🍳 White (facts): What do we know? ⛑️ Red (emotions): How does this feel? 🎩 Black (risks): What could go wrong? 👷🏻♀️ Yellow (benefits): What’s the upside? 🪖 Green (creativity): How can we innovate? Imagine meetings where every perspective is explored, every voice is valued, and decisions move forward—without the drama. That’s the power of Six Thinking Hats. 
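For facilitators who run this often, the Blue Hat's sequence can be captured as a small agenda generator. The ordering and timings below are one common choice, not de Bono's fixed prescription:

```python
# Hat order and guiding prompts, as described above
HATS = [
    ("White", "What do we know?"),
    ("Red", "How does this feel?"),
    ("Black", "What could go wrong?"),
    ("Yellow", "What's the upside?"),
    ("Green", "How can we innovate?"),
]

def agenda(minutes_per_hat=5):
    """Yield (hat, prompt, minutes) steps for the Blue Hat
    facilitator to walk the group through, one mode at a time."""
    for hat, prompt in HATS:
        yield hat, prompt, minutes_per_hat

for hat, prompt, mins in agenda():
    print(f"{hat} hat ({mins} min): {prompt}")
```

The key property the code makes explicit: everyone is in the same mode at the same time, which is what keeps the discussion parallel rather than adversarial.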
More details in my full post and book excerpt! 👉 Ready to revolutionize your meetings? Share your thoughts below! ----- Follow Mistere Advisory for actionable insights! https://lnkd.in/gXMVhVDG #strategy #meetings #brainstorming
-
Throwing AI tools at your team without a plan is like giving them a Ferrari without driving lessons. AI only drives impact if your workforce knows how to use it effectively. After (1) defining objectives, (2) assessing readiness, and (3) piloting use cases with a tiger team, step 4 is about empowering the broader team to leverage AI confidently. Boston Consulting Group (BCG) research and Gilbert’s Behavior Engineering Model show that high-impact AI adoption is 80% about people, 20% about tech. Here’s how to make that happen: 1️⃣ Environmental Supports: Build the Framework for Success -Clear Guidance: Define AI’s role in specific tasks. If a tool like Momentum.io automates data entry, outline how it frees up time for strategic activities. -Accessible Tools: Ensure AI tools are easy to use and well-integrated. For tools like ChatGPT, create a prompt library so employees don’t have to start from scratch. -Recognition: Acknowledge team members who make measurable improvements with AI, like reducing response times or boosting engagement. Recognition fuels adoption. 2️⃣ Empower with Tiger Team Champions -Use Tiger/Pilot Team Champions: Leverage your pilot team members as champions who share workflows and real-world results. Their successes give others confidence and practical insights. -Role-Specific Training: Focus on high-impact skills for each role. Sales might use prompts for lead scoring, while support teams focus on customer inquiries. Keep it relevant and simple. -Match Tools to Skill Levels: For non-technical roles, choose tools with low-code interfaces or embedded automation. Keep adoption smooth by aligning with current abilities. 3️⃣ Continuous Feedback and Real-Time Learning -Pilot Insights: Apply findings from the pilot phase to refine processes and address any gaps. Updates based on tiger team feedback benefit the entire workforce. -Knowledge Hub: Create an evolving resource library with top prompts, troubleshooting guides, and FAQs. 
Let it grow as employees share tips and adjustments. -Peer Learning: Champions from the tiger team can host peer-led sessions to show AI’s real impact, making it more approachable. 4️⃣ Just in Time Enablement -On-Demand Help Channels: Offer immediate support options, like a Slack channel or help desk, to address issues as they arise. -Use AI to enable AI: Create custom GPTs that are task- or job-specific to lighten workload and cognitive load. Leverage NotebookLM. -Troubleshooting Guide: Provide a quick-reference guide for common AI issues, empowering employees to solve small challenges independently. AI’s true power lies in your team’s ability to use it well. Step 4 is about support, practical training, and peer learning led by tiger team champions. By building confidence and competence, you’re creating an AI-enabled workforce ready to drive real impact. Step 5 coming next ;) P.S. With my next podcast guest, we talk about what happens when AI does a lot of what humans used to do… Stay tuned.
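The role-specific prompt library mentioned in step 1 can start as something this small. All entries here are illustrative placeholders, not real templates from any team:

```python
# Starter structure: prompts grouped by role and task, with {placeholders}
# filled in at use time (all entries are hypothetical examples)
PROMPT_LIBRARY = {
    "sales": {
        "lead_scoring": "Rank these leads by fit against our ideal "
                        "customer profile: {leads}",
    },
    "support": {
        "inquiry_draft": "Draft an empathetic reply to this customer "
                         "inquiry, citing our docs: {inquiry}",
    },
}

def get_prompt(role: str, task: str, **kwargs) -> str:
    """Fetch a role/task prompt template and fill in its placeholders."""
    template = PROMPT_LIBRARY[role][task]
    return template.format(**kwargs)

print(get_prompt("support", "inquiry_draft", inquiry="Where is my invoice?"))
```

Even a flat file like this gives new users a starting point instead of a blank prompt box, and the tiger team champions can grow it as winning prompts emerge.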