Multimodal data can be difficult to analyze together. At #SnowflakeBUILD, Senior Developer Advocate James Cha-Earley will show you how Snowflake Cortex AISQL solves this by letting you analyze text, audio, images, and documents, all with SQL. In this hands-on lab, you'll learn to build a production-grade pipeline that powers AI use cases from customer service data:
- Multimodal Processing: Use Cortex AISQL functions for transcription, document parsing, classification, and entity extraction to process voice calls, PDFs, chat logs, and support tickets.
- Intelligent Workflows: Create pipelines that surface actionable intelligence for both customers and support agents.
Register today: https://lnkd.in/gaB-wgcv
-
We built Softprobe.ai, an AI-native runtime data engine. It captures, replays, and transforms full-stack runtime context — enabling deterministic debugging, faster analytics, and AI-ready data. https://lnkd.in/eqPBfbqd
-
From my own experience, learning rapidly advancing technologies in the AI space gets easier with access to quality information. I read a new piece from Google by Kimberly Milam and Antonio Gulli, and I really liked their approach to building agentic systems.

The core point: an agent isn't "intelligent" because you wrote a clever system prompt; it's intelligent because of the context you assemble around it every single time it responds.

The way they explain it is straightforward and practical. Every agent needs its own setup before it can think:
• A base instruction (the stable part)
• A rotating mix of history, retrieved knowledge, tool outputs, and long-term user info (the dynamic part)

The magic isn't in the prompt; it's in how you structure and refresh the context.

They also draw an important separation that most teams blur:
• RAG is how an agent learns about the world.
• Memory is how an agent learns about you.

Those two together turn an LLM from a one-off chatbot into something closer to a real assistant that adapts with every interaction.

And yes, they even address the messy part: when long conversations start to dilute model reasoning. Their compaction strategies to prevent "context decay" are certainly worth considering.

If you're building anything agentic, this framework is worth your time.

#AgenticAI #LLM #AIEngineering #ContextDesign #RAG #Memory #AIAgents #MachineLearning #StayCurious
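The assembly step described in the post can be sketched in a few lines of Python. This is an illustrative sketch only; the function name `build_context` and its parameters are my own, not from the Google whitepaper:

```python
def build_context(base_instruction, history, retrieved_docs,
                  tool_outputs, user_memory, max_turns=10):
    """Assemble the full context an agent sees before each response.

    The base instruction is the stable part; everything else is
    refreshed on every turn. Names here are illustrative.
    """
    parts = [base_instruction]                      # stable part
    if user_memory:                                 # long-term user info (Memory)
        parts.append("What you know about the user:\n" + "\n".join(user_memory))
    if retrieved_docs:                              # world knowledge (RAG)
        parts.append("Relevant knowledge:\n" + "\n".join(retrieved_docs))
    if tool_outputs:                                # fresh tool results
        parts.append("Tool results:\n" + "\n".join(tool_outputs))
    # Only the most recent turns of the conversation are included.
    parts.append("Conversation so far:\n" + "\n".join(history[-max_turns:]))
    return "\n\n".join(parts)

context = build_context(
    base_instruction="You are a support assistant.",
    history=["user: my order is late", "agent: let me check"],
    retrieved_docs=["Shipping SLA: 3-5 business days"],
    tool_outputs=["order #123: shipped yesterday"],
    user_memory=["Prefers email follow-ups"],
)
```

The point the sketch makes: the prompt string handed to the model is rebuilt from scratch each turn, so the "intelligence" lives in what you choose to pull into it.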
-
We keep talking about agents for code, but how do they handle databases? How do we prevent creative but unsupervised AI agents from causing chaos in our databases? Rotem Tamir from honeybadge labs will demonstrate two powerful open-source tools: MCP Toolbox for Databases (DML guardrail) and Atlas (DDL change management). This is a hands-on guide to building production-ready agents you can actually trust with your data. Register: ainativedev.io/devcon Use the code GUYPO25 for 25% off! #AINativeDevCon
-
🚀 Day 2: "Agent Tools & Interoperability with Model Context Protocol (MCP)" (AI Agents Intensive Course from Google)

Here are the key takeaways from today:
🛠️ Tools: how to define and create custom tools, the difference between function tools and agent tools, the tool types available in ADK, and agent vs. sub-agent implementations.
🤖 MCP: how it works and how to use it, connecting tools with MCP, human-in-the-loop, handling events, workflows, and much more.

Whitepaper: https://lnkd.in/dyHKrXbj
The recorded livestream is below.

#AIAgents #Google #AI #MCP #Tools #GenerativeAI
-
80% of enterprise data is unstructured, locked in PDFs, reports, and diagrams that traditional tools can’t parse or govern. Introducing ai_parse_document, state-of-the-art document intelligence on Databricks. With a single SQL command, teams can now turn any document into structured, queryable data, unlocking context that agents can finally reason over at up to 5× lower cost. UNREAL STUFF ‼️ Now in Public Preview! https://lnkd.in/gQRfaqSj
-
The era of simple Prompt Engineering is evolving into the new paradigm of Context Engineering. This shift is about more than optimizing a single input; it's about managing sophisticated, long-term state for AI interactions. Context Engineering rests on two critical pillars: Sessions (managing dialogue history and state) and Memory (organized, long-term recall, extraction, and consolidation of information). Mastering Context Engineering is crucial for building robust, production-ready, multi-agent systems and enabling complex, informed conversational flows. Read the full white paper here: https://lnkd.in/g3AkV2ay #ContextEngineering #PromptEngineering #LLMs #Memory #DataScience
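The two pillars can be made concrete with a toy sketch. This is not the white paper's API; the class names and the preference-extraction rule are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Dialogue state for one conversation: every turn recorded in order."""
    events: list = field(default_factory=list)

    def add_turn(self, role: str, text: str) -> None:
        self.events.append((role, text))

@dataclass
class Memory:
    """Long-term store: facts extracted from sessions, kept across them."""
    facts: set = field(default_factory=set)

    def consolidate(self, session: Session) -> None:
        # Toy extraction rule: keep user statements that declare a preference.
        # A real system would use an LLM to extract and deduplicate facts.
        for role, text in session.events:
            if role == "user" and "prefer" in text.lower():
                self.facts.add(text)

session = Session()
session.add_turn("user", "I prefer metric units")
session.add_turn("agent", "Noted!")

memory = Memory()
memory.consolidate(session)   # the fact outlives the session
```

The design point is the separation of lifetimes: a `Session` is disposable per conversation, while `Memory` accumulates only the consolidated residue.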
-
Out of the box, AI agents can't access your private data or execute the multi-step, expert-level analysis your business relies on. The CARTO MCP Server changes this. It securely packages your CARTO Workflows into 'tools' that your AI agent can call on demand. This 1-minute demo shows just one use case. We turn a complex 'NYC collision hotspots' workflow into a simple tool. A user just asks a question, and the AI runs the analysis, delivering a visualized answer on the map. Imagine this same power for site selection, risk analysis, logistics, and beyond. The MCP Server bridges the gap between your data science team and your business users. Learn more here: https://lnkd.in/dSZxmRNr
Using MCP Tools to spot NYC collisions
-
This guy literally built an entire data stack with one command. Watch Tim Delisle & Chris Crane from fiveonefour explain how one line of code spins up a full dev environment: database, APIs, and pipelines, instantly.

Here's what they discussed:
• Why data engineering is being rebuilt for AI
• How MooseStack creates full infrastructure locally
• Why "local-first" is the future of AI copilots

Watch the full conversation here: https://lnkd.in/dx-e2fHp
-
While going through tutorials and posts to understand the semantic layer, I realized that if you are on dbt Core and define model properties (tests, descriptions, metrics, i.e. essentially metadata) in properties.yml, you already have a semantic layer ready. You just need to expose those properties.yml files to AI to get the benefits. There are, of course, two parts to this:
1. Exposing properties.yml for data modelling: exposing these property files to VS Code Copilot would do wonders when creating data models.
2. Exposing it to the BI tool: this could be harder; I haven't tried it yet, but will soon.
#dataengineering #dataanalytics #dbt
-
Today was day 3 of Google and Kaggle's 5-Day AI Agents Intensive Course. I went through the white paper on Sessions and Memory, and it finally clicked how real agent systems stay organized across long conversations. The diagrams on page 10 and page 30 show the whole loop, from fetching context to storing it again.

A few things I learnt:
1. A session is the workbench for the current conversation, with every turn stored as events.
2. Memory is the filing cabinet that keeps only the useful parts, so the agent is not stuck with the full chat forever.
3. Long conversations are managed with strategies like truncation and summarization.
4. RAG handles external facts, while memory handles user-specific context. Retrieval timing matters, so not every turn needs to pull memory unless the task requires it.

I like how this framework keeps agents fast while still feeling personal.

Course link: https://lnkd.in/ey5WzqTX
Whitepaper link: https://lnkd.in/ekb8kGWA

#llm #ai #rag #contextengineering #google #kaggle #memory #agenticai
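Point 3 above, managing long conversations, can be sketched in a few lines. This is my own minimal sketch, not code from the course; in a real system the `summarize` stub would call an LLM:

```python
def compact(events, max_events=4,
            summarize=lambda evs: f"[summary of {len(evs)} earlier turns]"):
    """Compact a long conversation: keep the most recent turns verbatim
    and replace everything older with a single summary event.

    `summarize` is a stub standing in for an LLM summarization call.
    """
    if len(events) <= max_events:
        return events                       # short enough: pure truncation window
    head, tail = events[:-max_events], events[-max_events:]
    return [summarize(head)] + tail         # summarized head + verbatim tail

events = [f"turn {i}" for i in range(10)]
compacted = compact(events)
# 10 turns shrink to 1 summary event plus the 4 most recent turns
```

Truncation alone drops old context entirely; adding the summary step keeps a compressed trace of it, which is the "compaction" idea from the whitepaper.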
We can't wait for this hands-on lab! 👏