Silicon Valley is becoming a dropshipping factory for AI wrappers. Here's why many of these so-called "AI startups" are just expensive middlemen:

Every time I see yet another "AI-powered" wrapper startup launch, I'm reminded of the worst business model on the internet: dropshipping.

YC's Winter 2025 batch included 128 "AI" startups out of 160 companies. But many are building identical products on foundation models with different logos:
• 10+ AI voice agents for customer support (Leaping AI, Trace)
• Multiple AI accounting/admin automation tools with similar features
• Several AI agent support platforms offering the same infrastructure features

YC routinely accepts direct competitors within the same batch. They are mass-producing AI wrapper variations of the same product with little to no differentiation (just like dropshippers).

Dropshippers: take products from Alibaba → add markup → sell on Amazon/Shopify.
AI wrappers: take models from OpenAI → add UI → sell to VCs and unsophisticated SMBs.

Both carry no inventory. Both conduct no R&D. Both compete solely on marketing and distribution arbitrage. Both mistake temporary arbitrage for a sustainable business model.

We're already seeing this play out. PhotoAI hit $157K monthly revenue through social media virality, just like dropshipped products on Instagram ads. It works until everyone copies the playbook. Then customer acquisition costs skyrocket and margins collapse.

The most embarrassing example is PearAI. They literally copy-pasted Continue's open-source code and slapped their name on it.

Investors need to realize that backing young technical founders worked when writing the software was the hard part. Young founders know how to write code but understand zero about actual customer problems. That's how you end up with 50 customer service chatbots instead of one great industry-specific solution.

Meanwhile, real innovation happens with domain experts. Harvey AI succeeded because it was founded by lawyers and technical experts. They understand legal workflows, compliance requirements, and integration challenges.

Here's what needs to change:
• Stop using traditional VC for digital middlemen: if a startup's core value is "ChatGPT but prettier," it should consider seed-strapping or revenue-based financing, not venture capital.
• Prioritize domain expertise over coding ability: industry veterans who understand workflow problems are better positioned to build than API integrators.
• Demand proprietary data or genuine technical differentiation: without unique assets, you are arbitraging OpenAI's pricing until they cut you out.
• Focus on thick wrappers, not thin ones: companies building substantial features, specific workflows, and deep integrations maintain competitive moats. Simple API calls don't (a minimal example follows this post).

The dropshipping boom crashed when everyone realized that middlemen add no real value. The AI wrapper bubble will crash the same way, except this time, we are calling it innovation.
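To make "thin wrapper" concrete, here is a minimal sketch of what the criticism targets: the entire product is one pass-through call to a foundation model behind a UI. The model name and system prompt are illustrative assumptions, not taken from any company mentioned above.

```python
# A "thin wrapper" in its entirety: one pass-through call to a foundation
# model. The substance lives on OpenAI's side; the "startup" contributes
# only a system prompt and a UI around this function.
# Sketch only; the model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def support_bot(customer_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful support agent."},
            {"role": "user", "content": customer_message},
        ],
    )
    return response.choices[0].message.content or ""
```

A "thick wrapper" would add what this one lacks: proprietary data, retrieval over the customer's own knowledge base, workflow integrations, and evaluation, which are assets a competitor cannot replicate with a single API call.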
Why chatbots are not true innovation
Summary
Many LinkedIn discussions highlight that chatbots are often mistaken for genuine innovation, when in reality they mostly repackage existing technology without fundamentally solving user problems or bringing true advances. A chatbot is an automated system designed to simulate conversation with humans, but critics argue that simply adding a chat interface rarely transforms workflows, unlocks new value, or replaces specialized expertise.
- Demand real differentiation: Build products that go beyond surface-level features and offer unique capabilities, proprietary data, or deep integration that solve authentic user needs.
- Prioritize domain knowledge: Involve industry experts who understand the underlying challenges, ensuring solutions address real problems rather than just adding conversational UI.
- Focus on meaningful change: Rethink how you approach technology adoption, aiming for shifts in process and purpose rather than layering new interfaces onto old ways of working.
-
Just two years ago, Klarna embraced AI wholeheartedly, replacing a significant portion of its customer service workforce with chatbots. The promise? Efficiency and innovation. The reality? A decline in service quality and customer trust.

Today, Klarna is rehiring humans, acknowledging that while AI offers speed, it often lacks the nuanced understanding that human interaction provides. Despite early claims that AI was handling the work of 700 agents, customers weren't buying it (literally or figuratively). The quality dropped. Trust fell. And even Klarna's CEO admitted: "What you end up having is lower quality."

This isn't just a Klarna story. It's a reminder for all of us building the future with AI:
- AI can enhance human work, but rarely replace it entirely.
- Customer experience still wins over cost savings.
- The best "innovation" might just be treating people (customers and workers) better.
-
Not all agents need a chat box. Some just need a job.

There's a growing obsession with chat interfaces in security. Everyone's demoing AI copilots that let you type in: "Which of our machines talked to this IP on Tuesday?" Nice party trick. But if your system sits there idle, waiting to be asked, it's not an agent. It's a very expensive helpdesk queue.

We need to stop conflating chat with agency. I draw a hard line between two types of agents (a minimal sketch follows this post):

INTERACTIVE agents are assistants. They help humans reason, assemble context, and guide decisions. Useful when things are ambiguous.

HEADLESS agents are workers. They don't talk. They don't wait. They act. They observe events, process signals, and contribute only when there's something non-obvious to say.

If everything you're building has a chat interface, that's a red flag. Most work doesn't need a conversation. It needs to get done. The future isn't chat everywhere. It's targeted agency:
- Autonomy when the machine can own it
- Collaboration when a human's required
- Silence when there's nothing to say

Otherwise, you're just dressing up automation as conversation. And that's not innovation, it's performance art.

So here's the challenge: think about the last "agent" you built or bought. Was it genuinely agentic? Or just a chatbot stapled to your backlog? If you had to remove the UI tomorrow, would it still be useful? If not, you've built a mascot, not a system.

Where do you draw the line between helpful and performative? Thoughts?
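To make the interactive/headless distinction concrete, here is a minimal Python sketch. The event shape, the flag, and the escalation step are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of the interactive vs. headless split.
# The event fields and escalation logic are invented for illustration.
from dataclasses import dataclass

@dataclass
class NetworkEvent:
    host: str
    dest_ip: str
    flagged: bool  # e.g., destination appears on a threat-intel list

class InteractiveAgent:
    """An assistant: idle until a human asks it something."""
    def answer(self, question: str, events: list[NetworkEvent]) -> str:
        # In practice this would assemble context and call an LLM;
        # the point is that nothing happens until someone types.
        matches = [e for e in events if e.dest_ip in question]
        return f"{len(matches)} host(s) talked to that IP."

class HeadlessAgent:
    """A worker: consumes events and acts without being asked."""
    def on_event(self, event: NetworkEvent) -> None:
        if event.flagged:
            self.escalate(event)  # speak only when there is something to say
        # otherwise: silence -- no chat, no notification, no dashboard ping

    def escalate(self, event: NetworkEvent) -> None:
        print(f"ALERT: {event.host} contacted flagged IP {event.dest_ip}")

# Remove the chat UI and InteractiveAgent is useless;
# HeadlessAgent keeps doing its job.
HeadlessAgent().on_event(NetworkEvent("web-01", "203.0.113.9", flagged=True))
```

The "remove the UI tomorrow" test from the post maps directly onto these two classes.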
-
𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗣𝗿𝗼𝗰𝘂𝗿𝗲𝗺𝗲𝗻𝘁 𝗱𝗼𝗲𝘀𝗻’𝘁 𝗹𝗮𝗰𝗸 𝗶𝗻𝗻𝗼𝘃𝗮𝘁𝗶𝗼𝗻

It struggles to detach from legacy thinking. We keep adding AI, automation, and new UX layers, but at the core of it all, things remain procedural, very much the same. It's still about:
▪️ Clicking here
▪️ Approving that
▪️ Searching for this
▪️ Following the process

We've built systems to help with 𝘩𝘰𝘸 to do things but not yet to understand 𝘸𝘩𝘢𝘵 we actually want to achieve. It's still a sequence of steps like:
👉 "Go to the menu and follow the next 7 steps"
instead of supporting the intent:
👉 "Get me the spend for the top 40 suppliers within cabling"

That's 𝙩𝙝𝙚 𝙨𝙝𝙞𝙛𝙩 𝙛𝙧𝙤𝙢 𝙩𝙖𝙨𝙠 𝙚𝙭𝙚𝙘𝙪𝙩𝙞𝙤𝙣 𝙩𝙤 𝙞𝙣𝙩𝙚𝙣𝙩 (a small sketch follows this post). And no... slapping a chatbot on an old user interface won't fix it. The latest AI-infused solutions are directionally getting it right, but it goes deeper than that. It's about rethinking:
✔️ how we interact
✔️ how we frame needs
✔️ how we think
✔️ how we behave

That's much more difficult than just buying new tech. As John Maynard Keynes, the British economist, put it: 𝙏𝙝𝙚 𝙙𝙞𝙛𝙛𝙞𝙘𝙪𝙡𝙩𝙮 𝙡𝙞𝙚𝙨 𝙣𝙤𝙩 𝙨𝙤 𝙢𝙪𝙘𝙝 𝙞𝙣 𝙙𝙚𝙫𝙚𝙡𝙤𝙥𝙞𝙣𝙜 𝙣𝙚𝙬 𝙞𝙙𝙚𝙖𝙨 𝙖𝙨 𝙞𝙣 𝙚𝙨𝙘𝙖𝙥𝙞𝙣𝙜 𝙛𝙧𝙤𝙢 𝙤𝙡𝙙 𝙤𝙣𝙚𝙨.

And that applies to solution providers, developers, and users alike.
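To make the task-to-intent shift concrete, here is a hypothetical sketch: the user states an intent and the system resolves it into a structured query instead of walking them through seven menu steps. Every name here (SpendQuery, resolve_intent, the sample data) is invented for illustration.

```python
# Hypothetical sketch of an intent-driven procurement interface.
# All names and data are invented; a real system would parse the intent
# with an LLM and run the query against the actual spend database.
from dataclasses import dataclass

@dataclass
class SpendQuery:
    category: str
    top_n: int

def resolve_intent(utterance: str) -> SpendQuery:
    # Stand-in for an LLM or semantic parser: "Get me the spend for the
    # top 40 suppliers within cabling" -> a structured, executable query.
    return SpendQuery(category="cabling", top_n=40)

def execute(query: SpendQuery, spend_by_supplier: dict[str, float]) -> list[tuple[str, float]]:
    ranked = sorted(spend_by_supplier.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[: query.top_n]

# The user expresses *what* they want; the seven-step *how* disappears.
data = {"Acme Cables": 120_000.0, "WireCo": 95_500.0, "CableWorks": 40_250.0}
print(execute(resolve_intent("Get me the spend for the top 40 suppliers within cabling"), data))
```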
-
Large language models are remarkable, but we're hitting the limits of what generation alone can do.

Yes, LLMs can produce fluent, coherent, and even creative text. That's a real form of intelligence. But it's also bounded: by how we encode the data, and by what we ask the model to become.

Two things are holding us back:

1. Tokenization. Current tokenizers are built for compression, not cognition. They break language into fragments that lose semantic structure (a quick demo follows this post). If the input is lossy and shallow, how can we expect deep reasoning to emerge?

2. Instruction tuning. We spend billions training models to follow instructions: to be polite, helpful, safe. That's good UX. But it also flattens the model's capabilities into a performance. The model learns to pretend, not to think.

I believe the next leap won't come from more parameters or more data. It'll come from rethinking the foundations:
• Smarter tokenization: encoding concepts, not just characters.
• Pretraining for abstraction, not obedience.
• Less optimization for chatbot behavior, more room for autonomous reasoning to emerge.

LLMs today are great simulators. But if we want true intelligence, we need to stop forcing them to act like agents, and start training them to become them.
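As a quick demonstration of the tokenization point, here is a minimal sketch using OpenAI's tiktoken library (assumed installed; any BPE tokenizer shows the same effect). It prints the subword fragments a model actually sees, which are chosen for compression and often cut across morpheme boundaries.

```python
# Minimal sketch of how a production BPE tokenizer fragments text.
# Assumes `pip install tiktoken`; any BPE tokenizer shows the same effect.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

for word in ["unhappiness", "antidisestablishmentarianism", "ChatGPT"]:
    token_ids = enc.encode(word)
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{word!r} -> {pieces}")

# The splits are driven by corpus frequency (compression), not by
# morphology or meaning: the "compression, not cognition" point above.
```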
-
So... everyone seems to think AI agents are the "next big thing," but I think most are going to fail. Here's how I think it all goes down…

(Disclaimer: I don't want AI agents to fail. The world is way cooler if they succeed, and I'm in the middle of the agent space myself. But wanting something to succeed doesn't mean it will. If it did, we'd all have perfect Wi-Fi on airplanes.)

OK, let's pretend you're the CEO of an AI agent company. First of all, congrats. You did it! You've got the buzz. Your tech is called an "agent" and everyone is hyped. Investors are sliding into your DMs, and your mom's finally impressed with what you do for a living. But there's a problem. Actually, two problems.

Semantics: everyone's calling everything an "agent." But most of these agents aren't even agents. They're just chatbots, copilots, or assistants with a few extra tricks: RAG, function calling, whatever. We need clear definitions here (sketched in code after this post):

Chatbot: responds to user queries, provides basic information dependent on a prompt (who remembers prompts anyway?).
Copilot: assists the user in performing tasks, typically involving user interaction for each step.
Agent: true agents are proactive, self-learning, and provide end-to-end automation. In my dictionary, they act like an employee would: they take the task, understand the context, and execute without constant oversight.

Automating one or two steps? Cool, but guess what? Software has been doing that for decades. Nothing groundbreaking here.

The productivity paradox of AI agents: let's be real, most of these so-called agents are not delivering the game-changing automation they promised. Partial automation is not new. We had Zapier, we had SaaS solutions, and they worked well. The problem is, if you automate just one step, you're still left juggling the rest. And our brains? They don't like that.

Imagine trying to bake a cake but only automating the mixing step. You still have to gather the ingredients, preheat the oven, and handle the baking. Switching costs are real. And if one of those steps goes wrong because LLMs don't think like you do, you're back rewriting the recipe over and over. Bye-bye, efficiency. The cognitive load and time spent going back and rethinking your steps diminish the "gains."

The agents that truly work are the ones that automate 100% of a workflow: customer service agents that actually solve problems end-to-end, sales solutions that close the deal without hand-holding, and Typetone's AI marketing agent that takes care of everything.

A 4-step guide for evaluating AI agents:
1. Define the problem: what's the real pain point?
2. Size the problem: is it worth solving?
3. Identify the solution: does this agent truly automate the whole flow?
4. Evaluate complexity: does it need an expert to set up? If yes, maybe it's not the solution you're looking for.

AI agents have potential. But potential doesn't cut it. True automation does.

👍 Are you using or building AI agents and disagree? Bring it on below!
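The chatbot/copilot/agent ladder from the post can be sketched in code. This is a toy illustration, not anyone's product: the Task type and the llm() stand-in are invented, and a real agent would plan and verify rather than blindly loop.

```python
# Toy sketch of the chatbot / copilot / agent distinction.
# Task and llm() are invented stand-ins for illustration only.
from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str
    steps: list[str] = field(default_factory=list)
    done: list[str] = field(default_factory=list)

def llm(prompt: str) -> str:
    return f"<model output for: {prompt}>"  # stand-in for a real model call

class Chatbot:
    """Answers one prompt at a time; holds no goal, takes no action."""
    def reply(self, prompt: str) -> str:
        return llm(prompt)

class Copilot:
    """Advances a task one step at a time; the human drives every step."""
    def assist(self, task: Task, step: str) -> str:
        suggestion = llm(f"Help with step '{step}' of goal '{task.goal}'")
        task.done.append(step)  # the human still confirms and moves on
        return suggestion

class Agent:
    """Owns the goal end-to-end: executes every step without hand-holding."""
    def run(self, task: Task) -> str:
        for step in task.steps:
            llm(f"Execute '{step}' for goal '{task.goal}'")
            task.done.append(step)
        return f"Completed {len(task.done)}/{len(task.steps)} steps of '{task.goal}'"

print(Agent().run(Task(goal="launch campaign", steps=["draft copy", "schedule posts"])))
```

Note how only Agent takes the whole goal rather than a prompt or a single step; that difference in interface is the whole taxonomy.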
-
Have chatbots failed us, or have we failed chatbots?

Remember 2015? Chatbots were everywhere. I still remember when Drift burst onto the scene, introducing "Conversational Marketing," a term that made everyone go "Yes, that's exactly what we need!" Before we knew it, one in three B2B websites had that familiar chat bubble in the corner.

Fast forward to today: less than 10% of the same websites use chatbots. And unfortunately, all of them have been reduced to fancy "Book a Demo" buttons.

So what went wrong? 🤔 Here's what I think happened:
• The vision was right, but the execution failed.
• Rule-based systems couldn't handle real conversations; even teams that tried to set up good conversation workflows couldn't make them work.
• And because of lead-gen pressures, we forced sales conversations when buyers just wanted information.

Think about it: if someone's ready to share their email, they'll fill out your contact form. They don't need a chatbot for that!

The painful irony is that 99% of your website visitors are there to learn, not to book meetings. They want answers, not a sales pitch.

🎯 The real lesson: buyers want self-directed learning experiences. They want to explore your product without feeling pressured. But we turned their curiosity into a conversion opportunity too soon.

Here's what keeps me up at night: what if we could create a space where buyers could freely explore, ask questions, and learn without the constant pressure to "hop on a quick call"?

What's your take? I'd love to get some perspective.

#b2bmarketing #buyerexperience #conversationalmarketing