The internet as we know it was built for humans. Individually rendered pages, visual interfaces, content designed to be read, not reasoned over—it’s a system made and optimized for human eyes, not for intelligence itself. But as Parag Agrawal told us nearly two years ago, the web’s next user isn’t human. AI agents are beginning to browse, research, and synthesize information at a scale and speed no human ever could. And yet the infrastructure of the web hasn’t evolved to meet them.

Since our initial investment in Parallel Web Systems in early 2024, that insight has only grown more prescient. AI systems are quickly becoming the primary consumers of the web, which means they need infrastructure that reflects how machines think, reason, and learn. Parag and his team are re-architecting the web for this new era, building systems that allow AIs to retrieve, reason over, and act on information at machine speed and scale. Their platform already powers deep web research and structured enrichment for some of the world’s most sophisticated AI builders and leading Fortune 100 companies.

We’re thrilled to partner once again with Parag and the Parallel team, co-leading their $100M Series A as they build the web’s next chapter. Read more from Shardul Shah on the Index blog 📝 (link in comments)
Index Ventures co-leads $100M Series A in Parallel Web Systems with Parag Agrawal
More Relevant Posts
-
Here's a question nobody's asking yet: what happens when there are 10,000 AI agents and you need to find the right one?

We're rushing to build specialized AI agents for everything: scheduling, data analysis, customer support, code review. But we're recreating the pre-Google internet problem. Agents are scattered across:
- GitHub repos you'll never find
- Proprietary marketplaces you have to search separately
- Internal catalogs with no standards
- Chat interfaces with keyword search that doesn't understand "I need help analyzing sales pipeline data"

It's the same problem that killed UDDI in the early 2000s, and the same fragmentation we see today with ChatGPT plugins vs. Claude tools vs. local agents.

So for my weekend project I built MADL (Metis Agent Discovery Layer). It's not a marketplace. It's open infrastructure, like DNS for domains or OpenAPI for REST APIs:
- Standardized format for describing any agent (OASP protocol)
- Semantic search that understands intent, not just keywords
- Works across all platforms (OpenAI, Anthropic, MCP, custom APIs)
- Decentralized trust (no gatekeepers)
- MIT licensed (anyone can run it, fork it, build on it)

The AI agent economy is coming. The question is whether we build it on open foundations or let it fragment into walled gardens.
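To make the idea concrete, here is a rough sketch of intent-based agent lookup. The descriptor fields, agent entries, and endpoints below are invented for illustration, not the actual OASP schema, and the embedding model is just one plausible choice:

```python
# Illustrative only: a hypothetical OASP-style registry searched by intent.
# The real OASP schema and MADL APIs live in the project; nothing here is
# taken from them.
from sentence_transformers import SentenceTransformer, util

# Hypothetical agent descriptors (all fields invented for this sketch)
registry = [
    {"name": "pipeline-analyst",
     "description": "Analyzes CRM sales pipeline data and forecasts revenue",
     "endpoint": "https://example.com/agents/pipeline-analyst"},
    {"name": "code-reviewer",
     "description": "Reviews pull requests for style, bugs, and security issues",
     "endpoint": "https://example.com/agents/code-reviewer"},
]

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus = model.encode([a["description"] for a in registry])

# Intent-based lookup: matches meaning, not keywords
query = model.encode("I need help analyzing sales pipeline data")
best = util.cos_sim(query, corpus).argmax().item()
print(registry[best]["name"])  # -> pipeline-analyst
```

Keyword search would miss this query unless the descriptor happened to share its exact words; embedding similarity is what lets "help analyzing sales pipeline data" find a "CRM revenue forecasting" agent.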
-
I’ve been studying how to build a RAG demo using LangChain, handling every step manually: loading files, chunking documents, generating embeddings, managing a vector database, and running retrieval. Then… boom 🚀 Google releases the File Search tool, which does all of that automatically. Now you can focus on building great apps without the backend weight. This tool truly changes how we integrate generative AI with knowledge. Learn more: https://lnkd.in/dQ3eW3j3 #AI #RAG #LangChain #GoogleAI #GenAI
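For context, the manual pipeline described above looks roughly like this in LangChain. A minimal sketch; the file name, chunk sizes, and the FAISS/OpenAI choices are stand-ins you would swap for your own:

```python
# A minimal sketch of the manual RAG steps: load, chunk, embed, index, retrieve.
# "notes.txt", the chunk sizes, and FAISS + OpenAI embeddings are placeholders.
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

docs = TextLoader("notes.txt").load()                       # 1. load files
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)                     # 2. chunk documents
store = FAISS.from_documents(chunks, OpenAIEmbeddings())    # 3. embed + index
hits = store.similarity_search("What do my notes say about RAG?", k=3)  # 4. retrieve
```

Every one of those four steps is something you own, tune, and maintain, which is exactly the weight the File Search tool takes off your hands.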
-
Google just dropped something huge for AI builders — the new File Search Gemini API. It’s an end-to-end RAG framework baked right into Gemini: embeddings, chunking, vector search, citations — all handled natively. You bring your files; Gemini handles the retrieval intelligence. This could seriously reduce the friction in building AI products that reason over your own data — no custom pipelines, no maintenance overhead, just focused innovation. Excited to see how this changes the landscape for AI product teams. #AI #RAG #GeminiAPI #Google #ProductManagement #Innovation https://lnkd.in/gupHyWA2
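For anyone who wants to see the shape of it, here is a minimal sketch based on the google-genai Python SDK: create a store, index a file, then ask a grounded question. The model name, store name, and file are placeholders, and field names may shift as the API evolves, so check the current docs:

```python
# Sketch of Gemini File Search: create a store, index a file, query it.
# Based on the google-genai Python SDK; verify signatures against current docs.
import time
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

store = client.file_search_stores.create(config={"display_name": "demo-store"})

# Upload + index a document (chunking and embeddings are handled for you)
op = client.file_search_stores.upload_to_file_search_store(
    file="handbook.pdf",  # placeholder file
    file_search_store_name=store.name,
)
while not op.done:
    time.sleep(3)
    op = client.operations.get(op)

# Ask a question grounded in the indexed file; citations arrive in metadata
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What does the handbook say about onboarding?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(file_search=types.FileSearch(
            file_search_store_names=[store.name]))],
    ),
)
print(response.text)
```

Note there is no splitter, no embedding model, and no vector database in sight: the retrieval side of RAG has become a config option on the generate call.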
-
Big news for devs: Google's just dropped File Search in the #Gemini API—a fully managed #RAG tool that supercharges your apps with semantic search over your docs! Upload files, get cited responses, and say goodbye to manual indexing hassles. Build smarter bots & assistants in minutes. Dive in: https://lnkd.in/gvia_Pry #GeminiAPI #AI #DevTools
-
Large Language Models face a fundamental limitation: because they are trained on static datasets, they cannot retrieve real-time information on their own. In this article, Tamar Stern shows how Function Calling and AI Agents overcome this limitation, turning passive LLMs into active, intelligent systems that can search, calculate, and interact with live data sources. 🌐⚙️ Learn how to connect your AI apps to the web, build reasoning agents, and supercharge your workflows with real-time insights. 👉 Read the full article on JavaScript Magazine and start building smarter AI-powered apps today! 🔗 https://lnkd.in/dURJ_Xuv
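The core mechanic is simple: you hand the model a function it may call, and it decides when to call it. The article works in JavaScript; for consistency with the other sketches here is the same idea in Python via the google-genai SDK, with a made-up weather stub standing in for a real data source:

```python
# Function calling in a nutshell: the model invokes your function when the
# prompt needs live data. get_weather is a hypothetical stub; in practice
# you would wire it to a real weather API.
from google import genai
from google.genai import types

def get_weather(city: str) -> dict:
    """Return current weather for a city (invented data for this sketch)."""
    return {"city": city, "temp_c": 21, "conditions": "clear"}

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What's the weather in Lisbon right now?",
    config=types.GenerateContentConfig(tools=[get_weather]),  # automatic function calling
)
print(response.text)  # answer grounded in get_weather's return value
```

The same pattern scales from one function to a toolbox of them, which is the seed of an agent: the model chains calls, reasons over the results, and folds live data into its answer.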
-
This is a huge signal 🚀. Grounding the Gemini API in Google Maps means AI agents are officially moving from philosophical novelty to practical, location-aware tools. The future of AI isn't building a general chatbot; it's engineering discrete, reliable agents for specific problems, like real-time itinerary planning or hyper-local property search. This validates our approach at InsidePartners.ai: We specialize in optimizing operations by integrating these advanced tools (like the Maps API) into secure, custom Agentic workflows. Stop theorizing about AI and start using it for mission-critical, grounded tasks. #AgenticAI #GoogleMaps #GenAI #WorkflowAutomation https://lnkd.in/gDNF3D8z
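For the curious, grounding a request in Maps data looks roughly like this with the google-genai Python SDK. The tool name follows the launch announcement and the prompt is invented, so treat it as a sketch and verify against the current API reference:

```python
# Sketch of Gemini grounding with Google Maps: the model can pull
# place-aware context (locations, hours, reviews) into its answer.
# Tool/field names follow the launch docs; confirm with the current reference.
from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Plan a half-day itinerary near the Eiffel Tower, ending with lunch.",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_maps=types.GoogleMaps())],
    ),
)
print(response.text)
```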
-
We are entering the era of Generative Web Applications, where the browser itself becomes the new platform for intelligence — not just a window to content, but a space for execution, reasoning, and interaction.

In the mobile age, the number and quality of apps on the Play Store determined the winners. In this new era, it will be the ecosystem of generative agents and web-native apps that defines leadership.

OpenAI is taking a platform-centric route — building the model infrastructure, context protocols, and compute stack that allow developers to create autonomous, data-aware applications on top of its foundation models. With initiatives like Atlas and MCP, OpenAI is turning its models into a programmable ecosystem where knowledge, tools, and user context can interact seamlessly.

Perplexity, by contrast, is taking the experience-first path — transforming the web itself into a conversational and executable interface. Its strategy blends contextual search, personalized reasoning, and direct action, turning every query into a task that can be solved, not just answered.

Both are racing toward the same destination: a world where AI systems live inside the browser, connecting knowledge, action, and context in real time. The true competition is no longer about bigger models, but about smarter ecosystems — those that can orchestrate intelligence across web, data, and users.

As this shift unfolds, the companies that will lead are those that can engineer context, enable interoperability, and deliver intelligence at the point of interaction — not in isolation, but in motion.
-
🌟 New Blog Just Published! 🌟 📌 AI Browser Sidebars: Hidden Data Risks Exposed 🚀 ✍️ Author: Hiren Dave 📖 The rise of AI-enabled browsers has transformed how users interact with the web. Modern browsers now ship built-in language-model assistants that live in a persistent sidebar. These sidebar… 🕒 Published: 2025-10-24 📂 Category: AI/ML 🔗 Read more: https://lnkd.in/dE3bCvRa 🚀✨ #aibrowsersidebars #datacollection #llmrisks
-
🚀 Perplexity Comet: The Browser That's Changing Everything

I've been testing Perplexity Comet, and I'm not exaggerating when I say this is a genuine game-changer for how we work online.

What makes Comet different?

🤖 AI-Native Architecture
Unlike bolt-on AI features in traditional browsers, Comet is built from the ground up with AI at its core. It doesn't just help you search—it understands context, automates workflows, and anticipates your needs.

⚡ Real-World Impact:
• Research that used to take 2 hours? Now 20 minutes.
• Repetitive web tasks? Automated with simple commands.
• Information synthesis across multiple sources? Instant.
• Context switching between tabs and tools? Minimized dramatically.

The "Aha" Moments:
✨ Ask complex questions and get synthesized answers with sources—no tab hopping
✨ Automate form filling, data extraction, and research workflows
✨ Navigate websites conversationally instead of clicking through menus
✨ Get intelligent summaries of lengthy articles and documents instantly

Why this matters now:
We're drowning in information but starving for insight. Traditional browsers were designed for the early internet—static pages, simple navigation. Today's work demands more:
• Cross-referencing multiple sources
• Extracting actionable insights quickly
• Automating repetitive online tasks
• Making sense of information overload
Comet addresses these modern challenges directly.

The Bigger Picture:
This isn't just a better browser—it's a glimpse into how AI will fundamentally change our relationship with information. We're moving from:
❌ Searching → Finding → Reading → Synthesizing
✅ Asking → Understanding → Acting
For professionals spending 6+ hours daily in browsers, this efficiency gain compounds into weeks of recovered time annually.

Early Days, Big Potential:
Comet is still evolving, but the foundation is solid. If you're in:
• Research-heavy roles
• Content creation
• Data analysis
• Any information-intensive work
You owe it to yourself to try it.

The web browsing paradigm that's dominated for 30 years is finally evolving. Comet is leading that evolution.

Have you tried Perplexity Comet yet? What's your biggest productivity bottleneck that AI could solve?

#PerplexityComet #ArtificialIntelligence #ProductivityTools #AIBrowser #FutureOfWork #TechInnovation #DigitalProductivity
-
We used our own AI Persona to power the Anam pricing page. It now answers live questions from visitors, gathers feedback, and helps us understand how people actually want to interact with a product.

Every day, people visit our pricing page to determine which plan best suits them. Some are founders testing integrations. Others are developers comparing SDK options. A few are just curious what “real-time AI Persona” really means. Before, that curiosity disappeared into bounce rates and form fills. Now, users can ask directly.

Meet Mia, our on-page AI persona. Mia lives right inside the pricing page. She speaks in real time, answers technical and product questions, and collects live feedback that feeds directly back to our team. This is what we mean when we say Anam brings the internet to life.

Talk to Mia yourself. Link in the comments.
-
More details from Index Partner Shardul Shah on why we're excited to double down on Parallel: https://www.indexventures.com/perspectives/parallels-100m-series-a/