Key Principles for High-Quality Data

Summary

High-quality data is essential for building reliable systems and making informed decisions. In practice, that means ensuring data is accurate, consistent, and trustworthy, which prevents inefficiency, errors, and loss of credibility.

  • Build for reliability: Prioritize data quality checks such as validation, consistency, and anomaly detection during development to avoid costly fixes later.
  • Define ownership: Assign clear accountability for data management, including setting expectations for use, tracking changes, and maintaining trustworthiness.
  • Start with critical data: Focus on safeguarding the most important and frequently used datasets first to mitigate risks and maximize business value.

Summarized by AI based on LinkedIn member posts

  • 🎯 Mark Freeman II

    Data Engineer | Tech Lead @ Gable.ai | O’Reilly Author: Data Contracts | LinkedIn [in]structor (28k+ Learners) | Founder @ On the Mark Data

    63,144 followers

    I’ve lost count of projects that shipped gorgeous features but relied on messy data assets. The cost always surfaces later as inevitable firefights, expensive backfills, and credibility hits to the data team. This is a major reason why I argue we need to incentivize SWEs to treat data as a first-class citizen before they merge code. Here are five ways you can help SWEs make this happen:

    1. Treat data as code, not exhaust
    Data is produced by code (regardless of whether you are the 1st-party producer or ingesting from a 3rd party). Many software engineers have minimal visibility into how their logs are used (even the business-critical ones), so you need to make it easy for them to understand their impact.

    2. Automate validation at commit time
    Data contracts enable checks during the CI/CD process when a data asset changes. A failing test should block the merge just like any unit test. Developers receive instant feedback instead of hearing their data team complain about the hundredth data issue with minimal context.

    3. Challenge the "move fast and break things" mantra
    Traditional approaches often postpone quality and governance until after deployment, because shipping fast feels safer than debating data schemas at the outset. Instead, early negotiation shrinks rework, speeds onboarding, and keeps your pipeline clean when the feature's scope changes six months in. Having a data perspective when creating product requirement documents can be a huge unlock!

    4. Embed quality checks into your pipeline
    Track DQ metrics such as null ratios, referential breaks, and out-of-range values on trend dashboards. Observability tools are great for this, but even a set of SQL queries run on a trigger can provide value.

    5. Don't boil the ocean; focus on protecting tier 1 data assets first
    Your most critical but volatile data asset is your top candidate for trying these approaches. It should change meaningfully as your product or service evolves, but that change is also where the chaos comes from. Making a case for mitigating risk on critical components is an effective way to get SWEs to pay attention.

    If you want to fix a broken system, you start at the source of the problem and work your way forward. Not doing this is why so many data teams I talk to feel stuck.

    What’s one step your team can take to move data quality closer to SWEs?

    #data #swe #ai
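
    To make point 2 concrete, here is a minimal sketch of a commit-time contract check that a CI job could run. The contract file, the schema file produced by the build, and the get_current_schema() helper are illustrative assumptions, not any particular vendor's tooling.

        # check_contract.py - a sketch of validating a data asset against its contract in CI.
        import json
        import sys

        def get_current_schema(path: str) -> dict:
            # Load the column -> type mapping the code under review would produce
            # (hypothetical helper; in practice this comes from your build step).
            with open(path) as f:
                return json.load(f)

        def validate(contract: dict, schema: dict) -> list[str]:
            # Return a list of contract violations; an empty list means the merge can proceed.
            errors = []
            for column, expected_type in contract["columns"].items():
                actual_type = schema.get(column)
                if actual_type is None:
                    errors.append(f"missing column: {column}")
                elif actual_type != expected_type:
                    errors.append(f"{column}: expected {expected_type}, got {actual_type}")
            return errors

        if __name__ == "__main__":
            with open("contracts/orders.json") as f:                      # agreed expectations
                contract = json.load(f)
            schema = get_current_schema("build/orders_schema.json")       # produced by this commit
            violations = validate(contract, schema)
            for v in violations:
                print(f"CONTRACT VIOLATION: {v}")
            sys.exit(1 if violations else 0)  # a non-zero exit blocks the merge, like a failing unit test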

  • Chad Sanderson

    CEO @ Gable.ai (Shift Left Data Platform)

    89,477 followers

    Here are a few simple truths about Data Quality:

    1. Data without quality isn't trustworthy
    2. Data that isn't trustworthy isn't useful
    3. Data that isn't useful is low ROI

    Investing in AI while the underlying data is low ROI will never yield high-value outcomes. Businesses must put as much time and effort into the quality of their data as into the development of the models themselves.

    Many people see data debt as just another form of technical debt - it's worth it to move fast and break things, after all. This couldn't be more wrong. Data debt is orders of magnitude WORSE than tech debt. Tech debt results in scalability issues, but the core function of the application is preserved. Data debt results in trust issues, where the underlying data no longer means what its users believe it means.

    Tech debt is a wall, but data debt is an infection. Once distrust seeps into your data lake, everything it touches will be poisoned. The poison works slowly at first, and data teams might be able to keep up manually with hotfixes and filters layered on top of hastily written SQL. But over time, the spread of the poison becomes so great and so deep that it is nearly impossible to trust any dataset at all. A single low-quality dataset is enough to corrupt thousands of data models and tables downstream. The impact is exponential.

    My advice? Don't treat Data Quality as a nice-to-have, or something you can afford to 'get around to' later. By the time you start thinking about governance, ownership, and scale, it will already be too late and there won't be much you can do besides burning the system down and starting over. What seems manageable now becomes a disaster later on. The earlier you can get a handle on data quality, the better.

    If you even have a hunch that the business may want to use the data for AI (or some other operational purpose), then you should begin thinking about the following:

    1. What will the data be used for?
    2. What are all the sources for the dataset?
    3. Which sources can we control, and which can we not?
    4. What are the expectations of the data?
    5. How sure are we that those expectations will remain the same?
    6. Who should be the owner of the data?
    7. What does the data mean semantically?
    8. If something about the data changes, how is that handled?
    9. How do we preserve the history of changes to the data?
    10. How do we revert to a previous version of the data/metadata?

    If you can affirmatively answer all 10 of those questions, you have a solid foundation of data quality for any dataset and a playbook for managing scale as the use case or intermediary data changes over time. Good luck! #dataengineering
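
    One way to make those ten answers durable is to record them as structured metadata that lives next to the dataset itself. The sketch below uses a plain Python dataclass; the field names and example values are illustrative assumptions, not a standard contract format.

        # A sketch of recording the answers to the ten questions above alongside a dataset.
        from dataclasses import dataclass, field

        @dataclass
        class DatasetContract:
            name: str
            intended_uses: list[str]        # 1. what the data will be used for
            sources: list[str]              # 2. all sources feeding the dataset
            controlled_sources: list[str]   # 3. which sources we control (vs. ones we don't)
            expectations: dict[str, str]    # 4. column -> expectation (type, range, ...)
            expectation_stability: str      # 5. how likely those expectations are to change
            owner: str                      # 6. accountable owner
            semantics: str                  # 7. what the data means, in plain language
            change_process: str             # 8. how changes are proposed and handled
            history: list[str] = field(default_factory=list)  # 9. log of past changes
            version: str = "1.0.0"          # 10. pinned version so you can revert

        orders = DatasetContract(
            name="orders",
            intended_uses=["revenue reporting", "churn model features"],
            sources=["checkout service events", "payments vendor export"],
            controlled_sources=["checkout service events"],
            expectations={"order_id": "unique, not null", "amount_usd": "0 < amount_usd < 10000"},
            expectation_stability="amount ceiling likely to change as new products launch",
            owner="payments-data@company.example",
            semantics="one row per completed order, excluding refunds",
            change_process="schema changes require data team review in the producing repo",
        )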

  • Joseph M.

    Data Engineer, startdataengineering.com | Bringing software engineering best practices to data engineering.

    47,898 followers

    It took me 10 years to learn about the different types of data quality checks; I'll teach them to you in 5 minutes:

    1. Check table constraints
    The goal is to ensure your table's structure is what you expect:
    * Uniqueness
    * Not null
    * Enum check
    * Referential integrity
    Checking the table's constraints is an excellent way to cover your data quality bases.

    2. Check business criteria
    Work with the subject matter expert to understand what data users check for:
    * Min/max permitted value
    * Order-of-events check
    * Data format check, e.g., check for the presence of the '$' symbol
    Business criteria catch data quality issues specific to your data/business.

    3. Table schema checks
    Schema checks ensure that no inadvertent schema changes happened:
    * Using an incorrect transformation function, leading to a different data type
    * Upstream schema changes

    4. Anomaly detection
    Metrics change over time; ensure it's not due to a bug.
    * Check the percentage change of metrics over time
    * Use simple percentage change across runs
    * Use standard deviation checks to ensure values are within the "normal" range
    Detecting value deviations over time is critical for business metrics (revenue, etc.)

    5. Data distribution checks
    Ensure your data size remains similar over time.
    * Ensure the row counts remain similar across days
    * Ensure critical segments of data remain similar in size over time
    Distribution checks ensure you aren't silently losing data to faulty joins/filters.

    6. Reconciliation checks
    Check that your output has the same number of entities as your input.
    * Check that your output didn't lose data due to buggy code

    7. Audit logs
    Log the number of rows input and output for each "transformation step" in your pipeline.
    * Having a log of the number of rows going in and coming out is crucial for debugging
    * Audit logs can also help you answer business questions
    Debugging data questions? Look at the audit log to see where data duplication/dropping happens.

    DQ warning levels: Make sure your data quality checks are tagged with appropriate warning levels (e.g., INFO, DEBUG, WARN, ERROR). Based on the criticality of the check, you can block the pipeline.

    Get started with the business and constraint checks, adding more only as needed. Before you know it, your data quality will skyrocket! Good luck!

    Like this thread? Read about the types of data quality checks in detail here 👇
    https://lnkd.in/eBdmNbKE

    Please let me know what you think in the comments below. Also, follow me for more actionable data content.

    #data #dataengineering #dataquality
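
    As a quick illustration of checks 1 and 4, here is a minimal sketch: constraint checks expressed as SQL queries that should return zero rows, plus a standard-deviation check for anomalous metric values. The table and column names are illustrative, and run_query() is a stand-in for whatever warehouse client you use.

        # Constraint checks (each query returns rows only when the constraint is violated)
        # and a simple standard-deviation anomaly check.
        import statistics

        CONSTRAINT_CHECKS = {
            "orders.order_id is unique":
                "SELECT order_id FROM orders GROUP BY order_id HAVING COUNT(*) > 1",
            "orders.customer_id is not null":
                "SELECT 1 FROM orders WHERE customer_id IS NULL",
            "orders.status is a known enum value":
                "SELECT 1 FROM orders WHERE status NOT IN ('placed', 'shipped', 'cancelled')",
        }

        def run_constraint_checks(run_query) -> list[str]:
            # Return the names of failed checks; run_query(sql) should return the violating rows.
            return [name for name, sql in CONSTRAINT_CHECKS.items() if run_query(sql)]

        def is_anomalous(history: list[float], today: float, num_stddevs: float = 3.0) -> bool:
            # Flag today's metric if it falls outside the "normal" band of recent runs.
            mean = statistics.mean(history)
            stddev = statistics.stdev(history)
            return abs(today - mean) > num_stddevs * stddev

        # Example: daily revenue over the last week vs. today's suspiciously large value.
        assert is_anomalous([102.0, 98.5, 101.2, 99.8, 100.4, 97.9, 103.1], 250.0)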

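    The audit-log and warning-level ideas can be sketched in the same spirit: log rows in and out of each step, and only let ERROR-level failures block the run. The step names, thresholds, and severities below are illustrative assumptions.

        # Audit logging per transformation step, plus severity-based blocking of failed checks.
        import logging

        logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
        log = logging.getLogger("pipeline.audit")

        def audit_step(step_name: str, rows_in: int, rows_out: int) -> None:
            # Record rows in/out so duplication or drops can be traced back to a step.
            log.info("step=%s rows_in=%d rows_out=%d", step_name, rows_in, rows_out)

        def handle_check(name: str, passed: bool, severity: str) -> None:
            # ERROR-level failures stop the run; WARN-level failures are only reported.
            if passed:
                return
            if severity == "ERROR":
                raise RuntimeError(f"blocking data quality check failed: {name}")
            log.warning("non-blocking data quality check failed: %s", name)

        audit_step("dedupe_orders", rows_in=10_500, rows_out=10_342)
        handle_check("row count within 20% of yesterday", passed=True, severity="ERROR")
        handle_check("null ratio below 1% on orders.coupon_code", passed=False, severity="WARN")
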
  • Benjamin Rogojan

    Fractional Head of Data | Tool-Agnostic. Outcome-Obsessed

    181,279 followers

    Data quality is one of the most essential investments you can make when developing your data infrastructure. If your data is "real-time" but it's wrong, guess what, you're gonna have a bad time.

    So how do you implement data quality into your pipelines? On a basic level you'll likely want to integrate some form of checks, which could be anything from:

    - Anomaly and range checks - These checks ensure that the data received fits an expected range or distribution. So let's say you only ever expect transactions of $5-$100 and you get a $999 transaction. That should set off alarms. In fact, I have several cases where the business added new products or someone made a large business purchase that exceeded expectations, and these were flagged because of these checks.

    - Data type checks - As the name suggests, this ensures that a date field is a date. This is important because if you're pulling files from a 3rd party, they might send you headerless files, and you have to trust they will keep sending you the same data in the same order.

    - Row count checks - A lot of businesses have a pretty steady rate of rows when it comes to fact tables. The number of transactions follows some sort of pattern, often lower on the weekends and perhaps steadily growing over time. Row checks help ensure you don't see 2x the number of rows because of a bad process or join.

    - Freshness checks - If you've worked in data long enough, you've likely had an executive bring up that your data was wrong. And it's less that the data was wrong, and more that the data was late (which is kind of wrong). Freshness checks make sure you know the data is late first, so you can fix it or at least update those who need to know.

    - Category checks - The first category check I implemented was to ensure that every state abbreviation was valid. I assumed this would be true because they must use a drop-down, right? Well, there were bad state abbreviations entered nonetheless.

    As well as a few others. The next question is how you would implement these checks. The solutions range from setting up automated tasks that run during or after a table lands, to dashboards, to far more developed tools that provide observability into much more than just a few data checks.

    If you're looking to dig deeper into the topic of data quality and how to implement it, I have both a video and an article on the topic.

    1. Video - How And Why Data Engineers Need To Care About Data Quality Now - And How To Implement It
    https://lnkd.in/gjMThSxY

    2. Article - How And Why We Need To Implement Data Quality Now!
    https://lnkd.in/grWmDmkJ

    #dataengineering #datanalytics
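
    As a small illustration of two of these, here is a minimal sketch of a freshness check and a category check. The 6-hour lag threshold, the truncated state list, and the example inputs are assumptions for illustration only.

        # A freshness check (is the newest row recent enough?) and a category check
        # (do all values fall inside an allowed set, e.g. valid state abbreviations?).
        from datetime import datetime, timedelta, timezone

        VALID_STATES = {"AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DE", "FL", "GA"}  # truncated for brevity

        def check_freshness(latest_loaded_at: datetime, max_lag: timedelta = timedelta(hours=6)) -> bool:
            # Late data is "kind of wrong": fail if the newest row is older than the allowed lag.
            return datetime.now(timezone.utc) - latest_loaded_at <= max_lag

        def check_categories(values: set[str], allowed: set[str] = VALID_STATES) -> set[str]:
            # Return the values that fall outside the allowed category set.
            return values - allowed

        # Example usage with illustrative inputs:
        fresh = check_freshness(datetime.now(timezone.utc) - timedelta(hours=2))
        bad_states = check_categories({"CA", "FL", "XX"})
        print(f"fresh={fresh} bad_states={bad_states}")  # fresh=True bad_states={'XX'}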
