You can't fix what you can't see. Most data teams discover Snowflake overspend after the bill hits. The warehouses have already been running, the budgets already blown. But what if you could see a cost breakdown in real time, while queries are still running? That's the game-changer our latest piece dives into: how real-time Snowflake cost visibility helps data teams stay in control, not just react. 🧠 Insight by Amit Yahalom, worth the read if cost predictability keeps you up at night. https://lnkd.in/dzV4-JSb #Snowflake #FinOps #DataEngineering #CloudCost #DataPlatform #CostOptimization
How real-time Snowflake cost visibility can save you from overspend.
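If you want a rough, do-it-yourself approximation of that "while queries are still running" view, Snowflake's documented INFORMATION_SCHEMA.QUERY_HISTORY table function can list what is executing right now. A minimal sketch, not anything from the article; the RESULT_LIMIT value is an arbitrary choice:

```sql
-- List queries that are still running, longest-running first.
-- QUERY_HISTORY here is the INFORMATION_SCHEMA table function, so it reflects
-- recent activity without the lag of ACCOUNT_USAGE views.
SELECT
    warehouse_name,
    user_name,
    query_id,
    total_elapsed_time / 1000 AS elapsed_seconds
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(RESULT_LIMIT => 1000))
WHERE execution_status = 'RUNNING'
ORDER BY total_elapsed_time DESC;
```

Pair that with per-warehouse credit rates and you get a crude but useful picture of spend as it accrues, rather than after the invoice.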
Snowflake cost problems don't start with the invoice - they start with a lack of observability. Without full visibility into what's running, who's using what, and where costs come from, optimization becomes guesswork. Every data leader knows the pain: Snowflake bills that grow faster than usage, "focus weeks" spent chasing query logs, and endless back-and-forth with finance to explain why spend doubled… again. The truth? It's not a cost issue. It's an observability issue. When you can't see how your data assets connect across ingestion, transformation, and BI, you can't control how they consume resources. Observability changes that. It lets you trace every dollar of compute to a warehouse, model, or user, expose inefficiencies in real time, and finally link data spend to business value. Modern FinOps isn't about cutting spend - it's about making every query accountable. That starts with stack-wide observability. 👉 Read more about it - https://lnkd.in/d6rGuVbs #DataOps #Snowflake #DataEngineering #CDO #DataOptimization #ModernDataStack #CloudData #DataEfficiency #FinOps #DataInfrastructure #observability
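A concrete first step toward "tracing every dollar of compute to a warehouse" is the metering view that ships with every Snowflake account. A minimal sketch, with an arbitrary 30-day window:

```sql
-- Credits consumed per warehouse per day over the last 30 days.
-- ACCOUNT_USAGE views can lag real time by up to a few hours.
SELECT
    warehouse_name,
    DATE_TRUNC('day', start_time) AS usage_day,
    SUM(credits_used)             AS credits
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY 1, 2
ORDER BY credits DESC;
```

From there, attributing spend to a model or user means joining query-level history on the same warehouses, which is where full stack-wide observability tooling earns its keep.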
One of our Snowflake warehouses quietly doubled its monthly credits with no change in workload and no extra users. The culprit? Hidden inefficiencies we couldn't see until we built proper observability. This post walks through how I created a simple, reliable monitoring framework inside Snowflake using only native system views, no external tools required. Track what actually matters: 1) Slowest and most expensive queries 2) Idle or overused warehouses 3) User-level compute consumption 4) Real-time cost dashboards & alerts All powered by ACCOUNT_USAGE and INFORMATION_SCHEMA views, with working SQL and Streamlit examples you can use today. #Snowflake #DataEngineering #CostOptimization #DataObservability #DataOps #SQL #Analytics #SnowflakeTips
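The post's own SQL isn't reproduced here, but item 1 (slowest and most expensive queries) can start from the documented SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY view. A minimal sketch; the 7-day window and LIMIT are arbitrary:

```sql
-- Heaviest queries of the last 7 days, by elapsed time and data scanned.
SELECT
    query_id,
    user_name,
    warehouse_name,
    total_elapsed_time / 1000      AS elapsed_seconds,
    bytes_scanned / POWER(1024, 3) AS gb_scanned
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND execution_status = 'SUCCESS'
ORDER BY total_elapsed_time DESC
LIMIT 20;
```

Swap the ORDER BY for bytes_scanned, or aggregate by user_name, and the same view also covers item 3 (user-level compute consumption).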
How to Reduce Snowflake Costs by 40% in Just Two Weeks Most Snowflake environments waste money — not because of bad tech, but because no one’s watching the right things. At Data Prophits, we’ve helped teams cut spend by 30–40% without touching performance. Here’s what actually works: * Identify silent spenders. Unused warehouses, over-provisioned compute, and forgotten clones are the usual suspects. * Automate idle suspension. Use Snowflake Tasks + Policies to pause warehouses when query volume drops. * Spot query bloat. Dashboards showing longest-running or highest-credit queries instantly expose inefficiencies. * Right-size storage tiers. Historical data can live cheaper — automate cold-data archiving to S3 or Iceberg. * Monitor everything. Our “Snowflake Cost Control Pack” alerts you before the bill spikes. Data should power growth, not burn budget. Want the 2-week playbook? Drop a 💬 or DM me — I’ll send the checklist. #Snowflake #DataEngineering #CostOptimization #DataProphits #Domo #CloudEfficiency datprophits.com
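On the "automate idle suspension" point: the post mentions Tasks + Policies, but the simplest built-in lever is each warehouse's own AUTO_SUSPEND setting. A minimal sketch; MY_WH and the 60-second threshold are placeholders, not recommendations:

```sql
-- Stop an idle warehouse from burning credits between query bursts.
ALTER WAREHOUSE MY_WH SET
    AUTO_SUSPEND = 60     -- suspend after 60 seconds of inactivity
    AUTO_RESUME  = TRUE;  -- wake automatically on the next query
```

Silent spenders usually show up as warehouses with high credits but long idle stretches, which the WAREHOUSE_METERING_HISTORY view makes easy to spot.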
🚀 Snowflake introduces Storage Lifecycle Policies! Managing old or infrequently accessed data just got a lot easier. A Storage Lifecycle Policy is a schema-level object in Snowflake that automatically archives or expires table rows based on conditions you define, such as data age or compliance rules. Snowflake runs these policies daily using shared compute, so you don't have to manage them manually. ✨ Why it matters: 💸 Reduce storage costs: Move older data to COOL or COLD archival tiers automatically. 🧾 Stay compliant: Define retention or expiry rules aligned with governance standards. ⚙️ Simplify management: No more manual cleanups — Snowflake handles it for you. 🔍 Retrieve with ease: Bring back archived data precisely when you need it. 📦 Archive tiers: COOL: Fast retrieval, min. 90 days. COLD: 4x cheaper, slower retrieval (up to 48 hrs), min. 180 days. Snowflake's storage lifecycle automation = smarter data management + lower costs. #Snowflake #DataEngineering #CloudData #DataLifecycle #DataManagement #SnowflakeTips
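The lifecycle policy DDL itself isn't shown here (it's a newer feature, so check the current Snowflake docs for exact syntax), but the long-standing TABLE_STORAGE_METRICS view is a safe way to shortlist which tables are worth a policy. A minimal sketch with an arbitrary 100 GB cutoff:

```sql
-- Large, still-active tables: candidates for archival or expiry rules.
SELECT
    table_catalog,
    table_schema,
    table_name,
    active_bytes      / POWER(1024, 3) AS active_gb,
    time_travel_bytes / POWER(1024, 3) AS time_travel_gb
FROM SNOWFLAKE.ACCOUNT_USAGE.TABLE_STORAGE_METRICS
WHERE deleted = FALSE
  AND active_bytes > 100 * POWER(1024, 3)
ORDER BY active_bytes DESC;
```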
🚨 Most Snowflake warehouses aren’t slow – they’re misconfigured. We’ve seen data teams boost performance and cut spend by up to 50% - just by mastering a few key tuning practices. In our new on-demand session, we break down: ✅ Smart vertical scaling tactics that actually save credits ✅ Gen1 vs Gen2 - the truth about which performs better ✅ Real tuning examples you can apply directly to your Snowflake setup 🎥 Watch on-demand (link in comments) - and let’s talk: what’s the toughest Snowflake challenge your team faces right now? #Snowflake #DataEngineering #WarehouseOptimization
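Not knowing the session's exact examples, one tuning check that routinely pays off is hunting for queries that spill to local or remote storage, a classic sign of an undersized warehouse. A minimal sketch over the last 7 days:

```sql
-- Queries that spilled out of memory, worst offenders first.
SELECT
    query_id,
    warehouse_name,
    warehouse_size,
    bytes_spilled_to_local_storage  / POWER(1024, 3) AS local_spill_gb,
    bytes_spilled_to_remote_storage / POWER(1024, 3) AS remote_spill_gb
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND (bytes_spilled_to_local_storage > 0 OR bytes_spilled_to_remote_storage > 0)
ORDER BY remote_spill_gb DESC, local_spill_gb DESC
LIMIT 50;
```

Persistent remote spill usually argues for sizing up (or rewriting the query); no spill but long queue times usually argues the opposite way.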
🔥 Snowflake just made the data world a lot more open ❄️ By embracing Apache Iceberg, Snowflake isn’t just adding a feature — it’s changing the game. Data no longer has to live in silos or be locked into one platform. This move signals a new era: Open formats + enterprise performance = true data freedom. The lakehouse is no longer a buzzword — it’s becoming the default. And those who design for openness today will lead tomorrow. #Snowflake #ApacheIceberg #DataEngineering #OpenData #Innovation
Snowflake just rolled out a major upgrade to its Snowpipe Streaming architecture and it’s a big deal for any business that depends on near real-time data to drive decisions. The new system makes it faster and more efficient to stream data into Snowflake, with less engineering overhead and more predictable costs. For organizations that rely on fresh insights like financial services, retail, or logistics, this means shorter time-to-decision and better operational agility. One standout example from a global market data provider: they’re using the new architecture to stream over 100 TB of data and nearly 200 billion rows per day while keeping query times under 30 seconds. That level of performance used to require massive infrastructure investment. Now it’s built into Snowflake. If your business runs on up-to-the-minute insights, this upgrade could translate into: – Lower infrastructure and dev costs – Faster decision-making – More scalable data operations – Better ROI on data investments We’re excited about what this means for our clients who want to move faster and smarter with near real-time data. Curious how your organization could benefit? Let’s talk. #dataanalytics #realTimeData #businessintelligence #Snowflake #DataSolutions #datainfrastructure #decisionmaking #DSC
Ever wondered how Snowflake lets you share live data securely — without moving or copying it? Here's a quick breakdown: 🔗 Direct Share: Share data instantly with other Snowflake accounts. 👥 Reader Account: Give access to users without Snowflake accounts. 🛍️ Snowflake Marketplace: Monetize or publish curated datasets. 🏢 Private Data Exchange: Build your own internal data marketplace. ✨ Share once → Consume anywhere → Always up to date. Check out my quick visual deck on "Snowflake Data Sharing" to learn how businesses can collaborate, monetize, and innovate — securely and at scale. #Snowflake #DataSharing #DataEngineering #CloudData #DataExchange #DataMonetization
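For the Direct Share path, the standard DDL pattern looks roughly like this; every object and account name below is a placeholder:

```sql
-- Expose a table to another Snowflake account without copying data.
CREATE SHARE sales_share;
GRANT USAGE  ON DATABASE analytics               TO SHARE sales_share;
GRANT USAGE  ON SCHEMA   analytics.public        TO SHARE sales_share;
GRANT SELECT ON TABLE    analytics.public.orders TO SHARE sales_share;

-- Add the consumer account (organization.account_name form).
ALTER SHARE sales_share ADD ACCOUNTS = my_org.consumer_acct;
```

Reader Accounts consume the same kind of share object, just through a provider-managed account, so these grants are the common foundation for most sharing setups.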
3 Ways to Connect Multiple Data Sources Without Slowing Down Snowflake Tech Tip from Agilityx: Insight doesn’t come from data alone. Snowflake delivers incredible scale and speed — but combining multiple data sources can quickly lead to slow queries and bottlenecks. The difference between frustration and actionable insight often comes down to how you connect your data. Here are three strategies to make Snowflake work smarter, not harder: 1. Start with questions, not queries: What insights do you truly need before writing a single query? Let curiosity drive your data, not the other way around. 2. Integrate with intention: Are you moving data efficiently, or just moving it? Use staging tables, smart clustering, and optimized joins to reduce unnecessary data movement. 3. Automate where it counts: Let Snowflake handle repetitive transformations while your team focuses on interpretation and decisions. Data is powerful, but only when it’s connected thoughtfully. Snowflake helps you get answers faster, but a strategic approach ensures those answers actually move the needle. #Snowflake #techtip #dataanalytics
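For point 3 (automate where it counts), here is a minimal sketch of a native Snowflake task that runs a repetitive transformation on a schedule; all names and the hourly cron are illustrative:

```sql
-- Refresh a reporting table every hour so nobody re-runs it by hand.
CREATE OR REPLACE TASK refresh_daily_sales
    WAREHOUSE = transform_wh
    SCHEDULE  = 'USING CRON 0 * * * * UTC'   -- top of every hour
AS
    INSERT OVERWRITE INTO reporting.daily_sales
    SELECT order_date, SUM(amount) AS total_amount
    FROM raw.orders
    GROUP BY order_date;

-- Tasks are created suspended; resume to start the schedule.
ALTER TASK refresh_daily_sales RESUME;
```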
A materialized view in Snowflake has one key limitation - it can reference only a single table. To address this, Snowflake introduced Dynamic Tables, which allow more flexibility. However, they can't be refreshed instantly; you must define a refresh rate (for example, every 10 minutes or once an hour). Think of Dynamic Tables as a combination of a Materialized View + Data Pipeline + Scheduler - all in one. The real advantage? They can be used to build end-to-end data pipelines directly within Snowflake. I'm curious — what are some use cases of Dynamic Tables you've implemented in your projects? #Snowflake #DynamicTables #DataEngineering #ETL #DataPipelines #CloudData #DataTransformation #DataAnalytics #SnowflakeCortex #ModernDataStack #SQL #DataAutomation #RealTimeData #IncrementalRefresh #DataOps Image credit: Snowflake Documentation
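For anyone who hasn't tried them, here is a minimal sketch of a Dynamic Table joining two sources (something a materialized view can't do); all names are placeholders and the 10-minute lag mirrors the example above:

```sql
-- A self-refreshing table that combines two sources into one pipeline step.
CREATE OR REPLACE DYNAMIC TABLE customer_orders
    TARGET_LAG = '10 minutes'
    WAREHOUSE  = transform_wh
AS
    SELECT
        c.customer_id,
        c.customer_name,
        COUNT(o.order_id)  AS order_count,
        SUM(o.order_total) AS lifetime_value
    FROM raw.customers c
    JOIN raw.orders    o ON o.customer_id = c.customer_id
    GROUP BY c.customer_id, c.customer_name;
```

Chaining dynamic tables on top of each other is what turns them into the end-to-end pipelines the post describes.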
One reason I'm excited to be a part of Yuki! Helping teams see Snowflake spend before it "snowballs". 😁