Guide to Employment Discrimination Compliance for Automated Decision Systems


Summary

Automated decision systems (ADS) are tools that use algorithms and data to assist in employment decisions such as hiring or promotions. This guide focuses on ensuring that these systems are used fairly, transparently, and in compliance with anti-discrimination laws, such as NYC Local Law 144 and related guidance.

  • Conduct regular bias audits: Work with independent auditors to assess your automated decision systems for potential discriminatory impacts across demographic categories, such as gender or ethnicity, and document the results transparently.
  • Provide clear and timely notifications: Inform candidates and employees in advance about the use of these tools, detailing the data being assessed and their rights to request accommodations under applicable laws.
  • Maintain accountability: Ensure your team has robust audit trails, clearly assigned responsibilities, and contractual safeguards with AI vendors to mitigate risks and comply with employment laws; a minimal audit-trail record is sketched just after this list.
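As a starting point for the audit trails mentioned above, here is a minimal, hypothetical sketch in Python of one decision record and an append-only log. Every field name and the log_decision helper are illustrative assumptions, not requirements drawn from any statute or guidance.

```python
# Illustrative sketch of an audit-trail record for an ADS-assisted decision.
# Field names and the helper below are hypothetical examples, not a legal standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class ADSDecisionRecord:
    candidate_id: str                    # internal identifier, not raw PII
    requisition_id: str                  # the posting the candidate applied to
    tool_name: str                       # which automated tool produced the score
    tool_version: str                    # model/version, so results can be reproduced
    score: float                         # raw output the tool returned
    qualifications_assessed: list[str]   # what the notice said would be assessed
    human_reviewer: str                  # who made or confirmed the final call
    final_decision: str                  # e.g. "advanced", "rejected", "manual_review"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: ADSDecisionRecord, path: str = "ads_audit_log.jsonl") -> None:
    """Append one decision record as a JSON line (append-only log)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


log_decision(ADSDecisionRecord(
    candidate_id="c-1042",
    requisition_id="req-2025-017",
    tool_name="resume_screener",
    tool_version="2.3.1",
    score=0.72,
    qualifications_assessed=["years_of_experience", "required_certifications"],
    human_reviewer="recruiter_88",
    final_decision="advanced",
))
```

An append-only, per-decision log like this is one way to support the "who decided what, with which tool and version" questions that auditors and counsel tend to ask.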
  • Odia Kagan

    CDPO, CIPP/E/US, CIPM, FIP, GDPRP, PLS, Partner, Chair of Data Privacy Compliance and International Privacy at Fox Rothschild LLP

    24,164 followers

    The NYC Department of Consumer and Worker Protection has issued an FAQ on NYC Local Law 144, which governs the use of automated employment decision tools (AEDTs). Key things to note:

    Using an AEDT "in the city" means:
    - The job location is an office in NYC, at least part time; OR
    - The job is fully remote but the location associated with it is an office in NYC; OR
    - The location of the employment agency using the AEDT is NYC or, if the employment agency is located outside NYC, one of the bullets above is true.

    Employment decision:
    - Using an AEDT to substantially help assess or screen candidates at any point in the hiring or promotion process.
    - NOT: assessing someone who is not an employee being considered for promotion and who has not applied for a specific position for employment.
    - NOT: scanning a resume bank (not of applicants to a particular position), conducting outreach to potential candidates, or inviting applications using an automated tool.

    Bias audit:
    - Responsibility for the bias audit rests with the employer, but the vendor can provide the bias audit to facilitate the employer's compliance with the obligation (and thus pave the way for the AEDT to be onboarded).
    - At a minimum, an independent auditor's evaluation must include calculations of selection or scoring rates and the impact ratio across sex categories, race/ethnicity categories, and intersectional categories.
    - While the AEDT law doesn't per se require you to take any action if the bias audit indicates disparate impact, there are other laws (federal, state, and NYC) prohibiting discrimination that you should be mindful of.
    - The law has no specific requirement about the historical data used for a bias audit. However, the summary of the results of a bias audit must include the source and an explanation of the data used to conduct it. If the historical data was limited in any way, including to a specific region or time period, the audit should explain why.
    - If you do not collect demographic data from applicants, or if you have minimal historical data from your use of an AEDT to conduct a statistically significant bias audit, test data can be used to conduct a bias audit.
    - You CANNOT impute demographic information to applicants or use algorithmic software to infer it.

    Notice:
    - You must notify employees and job candidates who are residents of New York City that you are using an AEDT and the job qualifications or characteristics the AEDT will assess.
    - The notice must include instructions for requesting a reasonable accommodation under other laws, and it must be provided 10 business days before using an AEDT.
    - Provide the notice in a job posting or by mail or email; for candidates, this can be in the employment section of the website; for employees, it can be part of a written policy and procedure.

    https://lnkd.in/ejSvuFji #dataprivacy #dataprotection #employment #AIregulation #AIprivacy #privacyFOMO
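To make the FAQ's bias-audit arithmetic concrete, here is a minimal sketch of selection rates and impact ratios across sex, race/ethnicity, and intersectional categories, assuming pandas is available. The column names and sample values are hypothetical, and this illustrates the calculation only; it is not a substitute for an independent auditor's bias audit.

```python
# Sketch: selection rates and impact ratios per demographic category,
# in the spirit of the NYC Local Law 144 bias-audit calculations.
# Column names and the sample data are hypothetical.
import pandas as pd

# Hypothetical historical data: one row per candidate assessed by the AEDT.
df = pd.DataFrame({
    "sex":       ["F", "F", "M", "M", "F", "M", "F", "M", "F", "M"],
    "ethnicity": ["A", "B", "A", "B", "A", "A", "B", "B", "A", "B"],
    "selected":  [1,   0,   1,   1,   0,   1,   0,   1,   1,   0],
})

def impact_ratios(data: pd.DataFrame, group_cols: list[str]) -> pd.DataFrame:
    """Selection rate per group and impact ratio vs. the highest-rate group."""
    rates = data.groupby(group_cols)["selected"].mean().rename("selection_rate")
    out = rates.to_frame()
    out["impact_ratio"] = out["selection_rate"] / out["selection_rate"].max()
    return out.reset_index()

print(impact_ratios(df, ["sex"]))               # sex categories
print(impact_ratios(df, ["ethnicity"]))         # race/ethnicity categories
print(impact_ratios(df, ["sex", "ethnicity"]))  # intersectional categories
```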

  • Deanna Shimota

    Cut through the noise.

    5,146 followers

    HR teams aren't slow on AI. They're rational. They're watching Workday get sued for age discrimination because its AI screening tool allegedly filtered out older workers. This isn't theoretical anymore.

    A year ago everyone was pushing AI-first messaging to win HR tech deals. But I kept seeing deals stall for the same reason: many HR leaders run the same nightmare scenario in their heads. Regulatory heat, potential lawsuits, and headlines. They see the risk. Vendors pretend it doesn't exist. If your strategy is leading with AI features, you've got an uphill battle.

    We're seeing a shift in what actually closes. HR tech companies need to lead with risk mitigation. Three principles:

    1. Lead with audit trails, not slogans. Workday's lawsuit made bias a material risk. Buyers now ask about NYC's law requiring bias audits before using AI in hiring. They want proof that you can track whether your tool discriminates against protected groups. If you can't produce impact-ratio reports, model cards, and subpoena-ready logs, you won't clear legal or procurement.

    2. No autonomous rejections. Shadow mode first. Run in parallel before go-live. Show selection rates by protected class and impact ratios before any automated decision touches candidates. Keep a human in the loop at the rejection line, with kill switches and drift/impact alarms that force manual review.

    3. Contractual risk transfer. If you want HR teams to trust your AI, carry part of the tail: algorithmic indemnity (within guardrails), bias-budget SLAs, third-party audits aligned to any legal requirements, and explicit audit rights. When Legal asks vendor-risk questions, let the contract do the talking.

    TAKEAWAY: HR leaders aren't anti-AI. They're anti-risk. Winners don't sell "AI." Winners solve problems and sell evidence that survives discovery. If your AI-first approach to sales is stalling, study NYC's law requiring bias audits for AI hiring tools. Track Colorado's AI Act, slated for June 30, 2026. Seek to understand why HR leaders are hesitating when it comes to AI tools. Your pipeline depends on it.
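The "shadow mode" pattern described in point 2 of the post above can be sketched in a few lines: the screener's would-be decisions are logged, no automated rejection is issued, and an impact-ratio check forces manual review. This is a hypothetical illustration; the 0.8 trigger below is the familiar four-fifths rule used only as an example threshold, not a legal safe harbor, and the field names are assumptions.

```python
# Sketch of a shadow-mode check: the AI screener runs in parallel, its
# would-be decisions are logged, and an impact-ratio alarm forces manual
# review instead of any automated rejection. Thresholds and field names
# are illustrative assumptions, not a compliance standard.
from collections import defaultdict

IMPACT_RATIO_ALARM = 0.8  # illustrative trigger (four-fifths rule), not a safe harbor


def shadow_mode_report(shadow_decisions: list[dict]) -> dict:
    """shadow_decisions: [{'group': 'F', 'would_select': True}, ...]"""
    totals, selected = defaultdict(int), defaultdict(int)
    for d in shadow_decisions:
        totals[d["group"]] += 1
        selected[d["group"]] += int(d["would_select"])

    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    report = {
        "selection_rates": rates,
        "impact_ratios": {g: r / best for g, r in rates.items()},
    }
    report["alarm"] = any(ir < IMPACT_RATIO_ALARM for ir in report["impact_ratios"].values())
    return report


# Example: review the report before any automated decision touches candidates.
report = shadow_mode_report([
    {"group": "F", "would_select": True},
    {"group": "F", "would_select": False},
    {"group": "M", "would_select": True},
    {"group": "M", "would_select": True},
])
if report["alarm"]:
    print("Impact ratio below threshold; keep humans at the rejection line:", report)
```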

  • Shea Brown

    AI & Algorithm Auditing | Founder & CEO, BABL AI Inc. | ForHumanity Fellow & Certified Auditor (FHCA)

    21,951 followers

    The Information Commissioner's Office conducted "consensual audit engagements" of providers and deployers of AI recruitment tools, providing detailed findings and recommendations. 👇 The focus was primarily on privacy and UK GDPR compliance, but bias and fairness issues were threaded throughout.

    Key Findings:
    - 📊 Audit scope: Focused on AI tools for recruitment, including sourcing, screening, and selection processes.
    - ⚠️ Privacy risks: Highlighted issues like excessive data collection, lack of lawful basis for data use, and bias in AI predictions.
    - 🔍 Bias and fairness: Some tools inferred characteristics like gender and ethnicity without transparency, risking discrimination.
    - 🔒 Data protection: Many providers failed to comply with data minimization and purpose limitation principles.
    - 📜 Transparency: Privacy policies were often unclear, leaving candidates uninformed about how their data was processed.

    Recommendations:
    - ✅ Fair processing: Ensure personal information is processed fairly, with measures to detect and mitigate bias.
    - 💡 Transparency: Clearly explain AI processing logic and ensure candidates are aware of how their data is used.
    - 🛡️ DPIAs: Conduct detailed Data Protection Impact Assessments (DPIAs) to assess and mitigate privacy risks.
    - 🗂️ Role clarity: Define controller vs. processor responsibilities in contracts.
    - 🕵️ Regular reviews: Continuously monitor AI accuracy, fairness, and privacy safeguards.

    Here are some of my hot takes (personal opinion, not those of BABL AI):
    1. There is a clear tension between the desire for data minimization and the need for data in AI training and bias testing. Most vendors have been conditioned to avoid asking for demographic data, but now they need it.
    2. Using k-fold cross-validation on smaller datasets to increase accuracy without needing larger datasets (pg 14) is not a practical recommendation unless you are very confident about your sampling methods.
    3. The use of inferences to monitor for bias was discouraged throughout the document, and several times it was stated that "inferred information is not accurate enough to monitor bias effectively". While it's true that self-declared demographic data is preferred, many vendors are limited in their ability to collect this information directly from candidates, and until they have such mechanisms in place, inferred demographics are their only option. Furthermore, using inferred demographic information to monitor for bias has been shown to be of real utility in cases where asking people to self-declare their demographic information is problematic or impractical. Reuse of this new special category data is still a big issue.

    Overall, this is a really great document with a wealth of information, which is typical of ICO guidance.

    #AIinRecruitment #ICO #privacy
    Khoa Lam, Ryan Carrier, FHCA, Dr. Cari Miller, Borhane Blili-Hamelin, PhD, Eloise Roberts, Aaron Rieke, EEOC, Keith Sonderling
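To make the sampling caveat in hot take 2 above concrete, here is a minimal sketch that runs k-fold cross-validation on a small, entirely synthetic dataset and reports the fold-to-fold spread of the metric. A wide spread on a small sample is exactly the situation where relying on cross-validation instead of more data is risky. The data, features, and model choice are illustrative assumptions, and scikit-learn availability is assumed.

```python
# Sketch: k-fold cross-validation on a small screening dataset, to see how
# much the estimated metric moves from fold to fold. The synthetic data,
# features, and model choice are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))                                      # small synthetic candidate features
y = (X[:, 0] + rng.normal(scale=0.5, size=120) > 0).astype(int)    # synthetic "selected" label

fold_scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    fold_scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))

# A wide fold-to-fold spread on a small sample is a sign to treat the estimate
# (and any bias metric computed the same way) with caution.
print(f"per-fold accuracy: {np.round(fold_scores, 3)}")
print(f"mean={np.mean(fold_scores):.3f}, std={np.std(fold_scores):.3f}")
```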

  • Rachel See

    Civil Rights Advocate and Consultant at the Intersection of Law, Technology, and Civil Rights

    2,609 followers

    The Department of Labor's latest guidance on AI in employment is a must-read for all employers using AI, not just federal contractors.

    Last week, the Department of Labor (OFCCP) issued "Promising Practices" to avoid unlawful bias in employment. They represent OFCCP's attempt to capture best practices for mitigating AI risks in employment, drawing heavily on concepts from the NIST AI Risk Management Framework. While many concepts in the guidance may be well known and evident to those experienced in AI risk management, OFCCP's inclusion of these concepts in last week's "Promising Practices" emphasizes their importance.

    Employers are responsible for their AI tools, whether they developed the tools themselves or are using an AI vendor. OFCCP unambiguously warns that federal contractors cannot delegate or avoid their nondiscrimination obligations when they use vendor-provided AI tools. Importantly, OFCCP also provides "Promising Practices" for choosing an AI vendor.

    While OFCCP's April 29 AI guidance is addressed to federal contractors, it reflects progress toward regulatory consensus on AI risk management, and its issuance invites both federal contractors and other employers to evaluate their existing AI risk management practices and to consider whether further proactive processes may be warranted or desirable.

    Also, have you seen that both the Colorado Senate and the Connecticut Senate have passed versions of AI legislation that broadly affect AI used in employment? The Department of Labor's "Promising Practices" are not mandatory, but aspects of them may soon become mandatory under pending state legislative efforts.

    Check out my latest #TeamSeyfarth (Seyfarth Shaw LLP) update with Annette Tyman for a deep dive into the AI guidance and what it means for employers.
