I've lost count of projects that shipped gorgeous features but relied on messy data assets. The cost always surfaces later as inevitable firefights, expensive backfills, and credibility hits to the data team. This is a major reason why I argue we need to incentivize SWEs to treat data as a first-class citizen before they merge code. Here are five ways you can help SWEs make this happen:

1. Treat data as code, not exhaust
Data is produced by code (regardless of whether you are the 1st-party producer or ingesting from a 3rd party). Many software engineers have minimal visibility into how their logs are used (even the business-critical ones), so you need to make it easy for them to understand their impact.

2. Automate validation at commit time
Data contracts enable checks during the CI/CD process when a data asset changes. A failing test should block the merge just like any unit test. Developers receive instant feedback instead of hearing their data team complain about the hundredth data issue with minimal context.

3. Challenge the "move fast and break things" mantra
Traditional approaches often postpone quality and governance until after deployment, because shipping fast feels safer than debating data schemas at the outset. Instead, early negotiation shrinks rework, speeds onboarding, and keeps your pipeline clean when the feature's scope changes six months in. Having a data perspective when creating product requirement documents can be a huge unlock!

4. Embed quality checks into your pipeline
Track DQ metrics such as null ratios, referential breaks, and out-of-range values on trend dashboards. Observability tools are great for this, but even a set of scheduled SQL queries can provide value (see the sketch after this post).

5. Don't boil the ocean; focus on protecting tier 1 data assets first
Your most critical but volatile data asset is your top candidate for trying these approaches. Ideally, it should change meaningfully as your product or service evolves, but that change can lead to chaos. Making a case for mitigating risk on critical components is an effective way to get SWEs to pay attention.

If you want to fix a broken system, you start at the source of the problem and work your way forward. Not doing this is why so many data teams I talk to feel stuck.

What's one step your team can take to move data quality closer to SWEs? #data #swe #ai
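To make point 4 concrete, here is a minimal sketch of pipeline-embedded quality checks run as plain SQL from Python. The tables, columns, and thresholds are hypothetical; in practice you would point this at your own warehouse and persist the results to a metrics table you can trend.

```python
import sqlite3

# Hypothetical example: a tiny in-memory "orders" table standing in for a warehouse asset.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 10, 25.0), (2, 11, 999.0), (3, NULL, 40.0);
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY);
    INSERT INTO customers VALUES (10), (11);
""")

# Each check returns a single number that can be trended on a dashboard or alerted on.
checks = {
    # Null ratio: share of rows missing a business-critical key.
    "null_ratio_customer_id":
        "SELECT AVG(CASE WHEN customer_id IS NULL THEN 1.0 ELSE 0.0 END) FROM orders",
    # Referential breaks: orders pointing at customers that don't exist.
    "orphaned_orders":
        """SELECT COUNT(*) FROM orders o
           LEFT JOIN customers c ON o.customer_id = c.customer_id
           WHERE o.customer_id IS NOT NULL AND c.customer_id IS NULL""",
    # Out-of-range values: amounts outside the expected band.
    "out_of_range_amounts":
        "SELECT COUNT(*) FROM orders WHERE amount NOT BETWEEN 5 AND 100",
}

for name, sql in checks.items():
    value = conn.execute(sql).fetchone()[0]
    print(f"{name}: {value}")  # In practice, write these to a metrics table and trend them.
```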
How to Address Data Integrity Issues
Explore top LinkedIn content from expert professionals.
Summary
Addressing data integrity issues involves implementing practices to ensure data remains accurate, consistent, and reliable throughout its lifecycle. Organizations can protect critical data assets by integrating robust validation, communication, and monitoring processes.
- Set clear data standards: Define and enforce rules around schema consistency, data constraints, and business logic to prevent errors and maintain data quality.
- Implement automated checks: Use tools for anomaly detection, range checks, and data validation at multiple stages, including data pipelines and CI/CD workflows, to catch issues early.
- Foster communication among stakeholders: Ensure data producers, engineers, and consumers communicate about schema changes or dependencies to minimize disruptions and improve collaboration.
The only way to prevent data quality issues is by helping data consumers and producers communicate effectively BEFORE breaking changes are deployed.

To do that, we must first acknowledge the reality of modern software engineering:
1. Data producers don't know who is using their data and for what
2. Data producers don't want to cause damage to others through their changes
3. Data producers do not want to be slowed down unnecessarily

Next, we must acknowledge the reality of modern data engineering:
1. Data engineers can't be a part of every conversation for every feature (there are too many)
2. Not every change is a breaking change
3. A significant number of data quality issues CAN be prevented if data engineers are involved in the conversation

What these six points imply is the following: if data producers, data consumers, and data engineers are all made aware that something will break before a change is deployed, data quality issues can be resolved through better communication, without slowing anyone down, while also building more awareness across the engineering organization.

We are not talking about more meaningless alerts. The most essential piece of this puzzle is CONTEXT, communicated at the right time and place.

Data producers: should understand when they are making a breaking change, who they are impacting, and the cost to the business
Data engineers: should understand when a contract is about to be violated, the offending pull request, and the data producer making the change
Data consumers: should understand that their asset is about to be broken, how to plan for the change, or escalate if necessary

The data contract is the technical mechanism to provide this context to each stakeholder in the data supply chain, facilitated through checks in the CI/CD workflow of source systems. These checks can be created by data engineers and data platform teams, just as security teams create similar checks to ensure Eng teams follow best practices! Data consumers can subscribe to contracts, just as software engineers can subscribe to GitHub repositories in order to be informed if something changes. But instead of being alerted on an arbitrary code change in a language they don't know, they are alerted on breaking changes to the metadata, which can be easily understood by all data practitioners.

Data quality CAN be solved, but it won't happen through better data pipelines or computationally efficient storage. It will happen by aligning the incentives of data producers and consumers through more effective communication. Good luck! #dataengineering
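As a rough illustration of the CI/CD check described above (a generic sketch, not any particular vendor's implementation), the job below compares a proposed schema against a contract and exits non-zero so the merge is blocked. The contract fields and the JSON schema export are assumptions.

```python
import json
import sys

# Hypothetical contract: the fields downstream consumers depend on, with their types.
CONTRACT = {"order_id": "int", "customer_id": "int", "amount": "float"}

def load_proposed_schema(path: str) -> dict:
    """Load the schema produced by the change under review (assumed to be exported as JSON)."""
    with open(path) as f:
        return json.load(f)

def find_breaking_changes(contract: dict, proposed: dict) -> list:
    problems = []
    for field, expected_type in contract.items():
        if field not in proposed:
            problems.append(f"field '{field}' was removed but is under contract")
        elif proposed[field] != expected_type:
            problems.append(
                f"field '{field}' changed type from {expected_type} to {proposed[field]}"
            )
    return problems

if __name__ == "__main__":
    schema = load_proposed_schema(sys.argv[1])
    breaks = find_breaking_changes(CONTRACT, schema)
    for b in breaks:
        print(f"CONTRACT VIOLATION: {b}")
    # A non-zero exit code fails the CI job, blocking the merge until
    # producers and consumers have talked about the change.
    sys.exit(1 if breaks else 0)
```

In practice the same job would also post the list of violations and the subscribed consumers back to the pull request, which is where the context described above actually gets delivered.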
-
This week, I want to talk about something that might not be the most exciting or sexy topic; it might even seem plain boring to some of you. It is very impactful, yet even in many large and complex organizations with tons of data challenges, this foundational data process simply doesn't exist: the Data Issue Management Process.

Why is this so critical? Because #data issues, such as data quality problems, pipeline breakdowns, or process inefficiencies, can have real business consequences. They cause manual rework, compliance risks, and failed analytical initiatives. Without a structured way to identify, analyze, and resolve these issues, organizations waste time duplicating efforts, firefighting, and dealing with costly disruptions.

The image I've attached outlines my take on a standard end-to-end data issue management process, broken down below:

📝 Logging the Issue – Make it simple and accessible for anyone in the organization to log an issue. If the process is too complicated, people will bypass it, leaving problems unresolved.
⚖️ Assessing the Impact – Understand the severity and business implications of the issue. This helps prioritize what truly matters and builds a case for fixing the problem.
👤 Assigning Ownership – Ensure clear accountability. Ownership doesn't mean fixing the issue alone; it means driving it toward resolution with the right support and resources.
🕵️ Analyzing the Root Cause – Trace the problem back to its origin. Most issues aren't caused by systems, but by process gaps, manual errors, or missing controls.
🛠️ Resolving the Issue – Fix the data AND the root cause. This could mean improving data quality controls, updating business processes, or implementing technical fixes.
👀 Tracking and Monitoring – Keep an eye on open issues to ensure they don't get stuck in limbo. Transparency is key to driving resolution.
🏁 Closing the Issue and Documenting the Resolution – Ensure the fix is verified, documented, and lessons are captured to prevent recurrence.

Data issue management might not be flashy, but it can be very impactful. Giving business teams a place to flag issues and actually be heard transforms endless complaints (because yes, they do love to complain about "the data") into real solutions. And when organizations step back to identify and fix thematic patterns instead of just one-off issues, the impact can go from incremental to game-changing.

For the full article ➡️ https://lnkd.in/eWBaWjbX

#DataGovernance #DataManagement #DataQuality #BusinessEfficiency
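As a rough illustration of the workflow above (the field names and status values are my own assumptions, not part of the original process diagram), a data issue record can be modeled as a small structure that carries ownership, severity, and root cause through its lifecycle:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Status(Enum):
    LOGGED = "logged"
    IMPACT_ASSESSED = "impact_assessed"
    ASSIGNED = "assigned"
    ROOT_CAUSE_FOUND = "root_cause_found"
    RESOLVED = "resolved"
    CLOSED = "closed"

@dataclass
class DataIssue:
    title: str
    reported_by: str
    severity: str = "unassessed"          # e.g. low / medium / high after impact assessment
    owner: Optional[str] = None           # accountable for driving resolution, not fixing alone
    root_cause: Optional[str] = None      # process gap, manual error, missing control, ...
    resolution: Optional[str] = None      # what was fixed, including the root cause
    status: Status = Status.LOGGED
    history: list = field(default_factory=list)

    def advance(self, new_status: Status, note: str) -> None:
        """Move the issue forward and keep an audit trail for tracking and monitoring."""
        self.status = new_status
        self.history.append(f"{datetime.now(timezone.utc).isoformat()} {new_status.value}: {note}")

# Hypothetical usage: log, assess, assign, then close with a documented resolution.
issue = DataIssue(title="Duplicate customer IDs in CRM export", reported_by="finance team")
issue.advance(Status.IMPACT_ASSESSED, "blocks monthly revenue reconciliation (high)")
issue.advance(Status.ASSIGNED, "owner: data platform team")
issue.advance(Status.CLOSED, "dedup step added upstream; fix verified and documented")
```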
-
It took me 10 years to learn about the different types of data quality checks; I'll teach it to you in 5 minutes:

1. Check table constraints
The goal is to ensure your table's structure is what you expect:
* Uniqueness
* Not null
* Enum check
* Referential integrity
Ensuring the table's constraints is an excellent way to cover your data quality base.

2. Check business criteria
Work with the subject matter expert to understand what data users check for:
* Min/max permitted value
* Order-of-events check
* Data format check, e.g., check for the presence of the '$' symbol
Business criteria catch data quality issues specific to your data/business.

3. Table schema checks
Schema checks ensure that no inadvertent schema changes happened:
* Using an incorrect transformation function leading to a different data type
* Upstream schema changes

4. Anomaly detection
Metrics change over time; ensure it's not due to a bug.
* Check percentage change of metrics over time
* Use simple percentage change across runs
* Use standard deviation checks to ensure values are within the "normal" range
Detecting value deviations over time is critical for business metrics (revenue, etc.).

5. Data distribution checks
Ensure your data size remains similar over time.
* Ensure the row counts remain similar across days
* Ensure critical segments of data remain similar in size over time
Distribution checks ensure you don't silently lose or duplicate data due to faulty joins/filters.

6. Reconciliation checks
Check that your output has the same number of entities as your input.
* Check that your output didn't lose data due to buggy code

7. Audit logs
Log the number of rows input and output for each "transformation step" in your pipeline.
* Having a log of the number of rows going in and coming out is crucial for debugging
* Audit logs can also help you answer business questions
Debugging data questions? Look at the audit log to see where data duplication/dropping happens.

DQ warning levels: make sure your data quality checks are tagged with appropriate warning levels (e.g., INFO, DEBUG, WARN, ERROR). Based on the criticality of the check, you can block the pipeline.

Get started with the business and constraint checks, adding more only as needed. Before you know it, your data quality will skyrocket! Good luck!

Like this thread? Read about the types of data quality checks in detail here 👇
https://lnkd.in/eBdmNbKE

Please let me know what you think in the comments below. Also, follow me for more actionable data content.

#data #dataengineering #dataquality
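A minimal sketch of two of these check types in Python: a constraint check (uniqueness and not-null) and a standard-deviation anomaly check on a daily metric. The rows, metric history, and thresholds are made up purely for illustration.

```python
import statistics

# --- 1. Constraint checks (uniqueness, not null) on a batch of rows ---
rows = [
    {"order_id": 1, "customer_id": 10},
    {"order_id": 2, "customer_id": 11},
    {"order_id": 2, "customer_id": None},   # duplicate id and a null, both should be flagged
]

ids = [r["order_id"] for r in rows]
if len(ids) != len(set(ids)):
    print("ERROR: uniqueness check failed, duplicate order_id values")
if any(r["customer_id"] is None for r in rows):
    print("ERROR: not-null check failed on customer_id")

# --- 4. Anomaly detection: flag today's metric if it falls outside 3 standard deviations ---
daily_revenue = [10_200, 9_800, 10_500, 10_100, 9_900, 10_300]  # recent history (hypothetical)
today = 4_200

mean = statistics.mean(daily_revenue)
stdev = statistics.stdev(daily_revenue)
if abs(today - mean) > 3 * stdev:
    print(f"WARN: revenue {today} deviates from the normal range ({mean:.0f} +/- {3 * stdev:.0f})")
```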
-
ETL Testing: Ensuring Data Integrity in the Big Data Era

Let's explore the critical types of ETL testing and why they matter:

1️⃣ Production Validation Testing
• What: Verifies ETL process accuracy in the production environment
• Why: Catches real-world discrepancies that may not appear in staging
• How: Compares source and target data, often using automated scripts
• Pro Tip: Implement continuous monitoring for early error detection

2️⃣ Source to Target Count Testing
• What: Ensures all records are accounted for during the ETL process
• Why: Prevents data loss and identifies extraction or loading issues
• How: Compares record counts between source and target systems
• Key Metric: Aim for 100% match in record counts

3️⃣ Data Transformation Testing
• What: Verifies correct application of business rules and data transformations
• Why: Ensures data quality and prevents incorrect analysis downstream
• How: Compares transformed data against expected results
• Challenge: Requires deep understanding of business logic and data domain

4️⃣ Referential Integrity Testing
• What: Checks relationships between different data entities
• Why: Maintains data consistency and prevents orphaned records
• How: Verifies foreign key relationships and data dependencies
• Impact: Critical for maintaining a coherent data model in the target system

5️⃣ Integration Testing
• What: Ensures all ETL components work together seamlessly
• Why: Prevents system-wide failures and data inconsistencies
• How: Tests the entire ETL pipeline as a unified process
• Best Practice: Implement automated integration tests in your CI/CD pipeline

6️⃣ Performance Testing
• What: Validates ETL process meets efficiency and scalability requirements
• Why: Ensures timely data availability and system stability
• How: Measures processing time, resource utilization, and scalability
• Key Metrics: Data throughput, processing time, resource consumption

Advancing Your ETL Testing Strategy:
1. Shift-Left Approach: Integrate testing earlier in the development cycle
2. Data Quality Metrics: Establish KPIs for data accuracy, completeness, and consistency
3. Synthetic Data Generation: Create comprehensive test datasets that cover edge cases
4. Continuous Testing: Implement automated testing as part of your data pipeline
5. Error Handling: Develop robust error handling and logging mechanisms
6. Version Control: Apply version control to your ETL tests, just like your code

The Future of ETL Testing:
As we move towards real-time data processing and AI-driven analytics, ETL testing is evolving. Expect to see:
• AI-assisted test case generation
• Predictive analytics for identifying potential data quality issues
• Blockchain for immutable audit trails in ETL processes
• Increased focus on data privacy and compliance testing
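To make the source-to-target count test concrete, here is a minimal sketch. The connection objects and table names are placeholders, and the example is not tied to any particular ETL tool.

```python
import sqlite3

def count_rows(conn, table: str) -> int:
    """Count rows in a table via any DB-API connection."""
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def source_to_target_count_test(source_conn, target_conn, source_table: str, target_table: str) -> bool:
    """Return True only if every extracted record made it into the target (100% match)."""
    source_count = count_rows(source_conn, source_table)
    target_count = count_rows(target_conn, target_table)
    if source_count != target_count:
        print(f"FAIL: {source_table}={source_count} rows, {target_table}={target_count} rows")
        return False
    print(f"PASS: {source_count} rows in both source and target")
    return True

# Hypothetical usage with an in-memory SQLite database standing in for both systems:
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging_orders (id INTEGER);
    CREATE TABLE dw_orders (id INTEGER);
    INSERT INTO staging_orders VALUES (1), (2), (3);
    INSERT INTO dw_orders VALUES (1), (2);
""")
source_to_target_count_test(conn, conn, "staging_orders", "dw_orders")
```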
-
Ensuring data quality at scale is crucial for developing trustworthy products and making informed decisions. In this tech blog, the Glassdoor engineering team shares how they tackled this challenge by shifting from a reactive to a proactive data quality strategy.

At the core of their approach is a mindset shift: instead of waiting for issues to surface downstream, they built systems to catch them earlier in the lifecycle. This includes introducing data contracts to align producers and consumers, integrating static code analysis into continuous integration and delivery (CI/CD) workflows, and even fine-tuning large language models to flag business logic issues that schema checks might miss.

The blog also highlights how Glassdoor distinguishes between hard and soft checks, deciding which anomalies should block pipelines and which should raise visibility. They adapted the concept of blue-green deployments to their data pipelines by staging data in a controlled environment before promoting it to production. To round it out, their anomaly detection platform uses robust statistical models to identify outliers in both business metrics and infrastructure health.

Glassdoor's approach is a strong example of what it means to treat data as a product: building reliable, scalable systems and making quality a shared responsibility across the organization.

#DataScience #MachineLearning #Analytics #DataEngineering #DataQuality #BigData #MLOps #SnacksWeeklyonDataScience

– – –

Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
-- Spotify: https://lnkd.in/gKgaMvbh
-- Apple Podcast: https://lnkd.in/gj6aPBBY
-- Youtube: https://lnkd.in/gcwPeBmR

https://lnkd.in/gUwKZJwN
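The blue-green idea for pipelines, with hard checks that block and soft checks that only raise visibility, can be sketched roughly as below. This is a generic illustration of the pattern, not Glassdoor's actual implementation; the check logic and table names are assumptions.

```python
import sqlite3

def hard_checks_pass(conn, table: str) -> bool:
    """Hard checks: violations here block promotion (e.g., the staged table must not be empty)."""
    row_count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    return row_count > 0

def soft_checks(conn, table: str) -> list:
    """Soft checks: surfaced for visibility but do not block (e.g., unusually small batch)."""
    row_count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    return [f"{table} has only {row_count} rows"] if row_count < 100 else []

def promote(conn, staging: str, production: str) -> None:
    """Blue-green style promotion: load into staging, validate, then swap into production."""
    if not hard_checks_pass(conn, staging):
        raise RuntimeError(f"hard checks failed on {staging}; production left untouched")
    for warning in soft_checks(conn, staging):
        print(f"WARN: {warning}")
    conn.executescript(f"""
        DROP TABLE IF EXISTS {production};
        ALTER TABLE {staging} RENAME TO {production};
    """)
    print(f"promoted {staging} -> {production}")

# Hypothetical usage with an in-memory database:
conn = sqlite3.connect(":memory:")
conn.executescript("CREATE TABLE orders_staging (id INTEGER); INSERT INTO orders_staging VALUES (1), (2);")
promote(conn, "orders_staging", "orders")
```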
-
Data quality is one of the most essential investments you can make when developing your data infrastructure. If your data is "real-time" but it's wrong, guess what, you're gonna have a bad time.

So how do you implement data quality into your pipelines? On a basic level you'll likely want to integrate some form of checks, which could be anything from:

- Anomaly and range checks - These checks ensure that the data received fits an expected range or distribution. So let's say you only ever expect transactions of $5-$100 and you get a $999 transaction. That should set off alarms. In fact, I have several cases where the business added new products or someone made a large business purchase that exceeded expectations and was flagged because of these checks.

- Data type checks - As the name suggests, this ensures that a date field is a date. This is important because if you're pulling files from a 3rd party, they might send you headerless files and you have to trust they will keep sending you the same data in the same order.

- Row count checks - A lot of businesses have a pretty steady rate of rows when it comes to fact tables. The number of transactions follows some sort of pattern: many are lower on the weekends and perhaps steadily growing over time. Row checks help ensure you don't see 2x the number of rows because of a bad process or join.

- Freshness checks - If you've worked in data long enough, you've likely had an executive bring up that your data was wrong. And it's less that the data was wrong, and more that the data was late (which is kind of wrong). Freshness checks make sure you know the data is late first, so you can fix it or at least update those who need to know.

- Category checks - The first category check I implemented was to ensure that every state abbreviation was valid. I assumed this would be true because they must use a drop-down, right? Well, there were bad state abbreviations entered nonetheless.

As well as a few others.

The next question becomes how you would implement these checks, and the solutions there range from automated tasks that run during or after a table lands, to dashboards, to far more developed tools that provide observability into far more than just a few data checks. A minimal sketch of a freshness and row count check follows below.

If you're looking to dig deeper into the topic of data quality and how to implement it, I have both a video and an article on the topic.
1. Video - How And Why Data Engineers Need To Care About Data Quality Now - And How To Implement It https://lnkd.in/gjMThSxY
2. Article - How And Why We Need To Implement Data Quality Now! https://lnkd.in/grWmDmkJ

#dataengineering #datanalytics
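Here is a rough sketch of two of these checks, a freshness check and a row count check, as a scheduled task might run them after a table lands. The table, timestamp column, and thresholds are hypothetical.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Stand-in table: in practice this would be your warehouse fact table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (id INTEGER, loaded_at TEXT)")
stale = (datetime.now(timezone.utc) - timedelta(hours=30)).isoformat()
conn.executemany("INSERT INTO transactions VALUES (?, ?)", [(i, stale) for i in range(50)])

# Freshness check: alert if the newest row is older than the expected landing window.
latest = conn.execute("SELECT MAX(loaded_at) FROM transactions").fetchone()[0]
age = datetime.now(timezone.utc) - datetime.fromisoformat(latest)
if age > timedelta(hours=24):
    print(f"WARN: data is {age} old; expected a load within the last 24 hours")

# Row count check: alert if today's volume is far off the recent daily average.
expected_daily_rows = 1000          # e.g. a trailing average kept in a metrics table
today_rows = conn.execute("SELECT COUNT(*) FROM transactions").fetchone()[0]
if not 0.5 * expected_daily_rows <= today_rows <= 2 * expected_daily_rows:
    print(f"WARN: {today_rows} rows today vs ~{expected_daily_rows} expected")
```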