U.S. state lawmakers are increasingly addressing AI's impact through legislation, focusing on its use in consequential decisions that affect livelihoods, such as healthcare and employment. A new report by the Future of Privacy Forum, published September 13, 2024, highlights key trends in this wave of AI regulation.

U.S. state legislation frequently follows a "Governance of AI in Consequential Decisions" approach, regulating AI systems involved in decisions that have a material, legal, or similarly significant impact on an individual's life, particularly in areas such as education, employment, healthcare, housing, financial services, and government services. These high-stakes decisions are subject to stricter oversight to prevent harm, ensuring fairness, transparency, and accountability by setting responsibilities for developers and deployers, granting consumers rights, and mandating transparency and ongoing risk assessments for systems affecting life opportunities. Examples of key laws regulating AI in consequential decisions include Colorado SB 24-205 (entering into force in February 2026), California AB 2930, Connecticut SB 2, and Virginia HB 747 (all proposed).

* * *

This approach typically defines responsibilities for developers and deployers:

Developer: A developer is an individual or organization that creates or builds the AI system. They are responsible for tasks such as:
- Determining the purpose of the AI system.
- Gathering and preprocessing data.
- Selecting algorithms, training models, and evaluating performance.
- Ensuring the AI system is transparent, fair, and safe during the design phase.
- Providing documentation about the system's capabilities, limitations, and risks.
- Supporting deployers in integrating and using the AI system responsibly.

Deployer: A deployer is an individual or organization that uses the AI system in real-world applications. Their obligations typically include:
- Providing notice to affected individuals when AI is involved in decision-making.
- Conducting post-deployment monitoring to ensure the system operates as expected and does not cause harm.
- Maintaining a risk management program and testing the AI system regularly to ensure it aligns with legal and ethical standards.

* * *

U.S. state AI regulations often grant consumers rights when AI affects their lives, including:
1. Notice: Consumers must be informed when AI is used in decisions like employment or credit.
2. Explanation and Appeal: Individuals can request an explanation and challenge unfair outcomes.
3. Transparency: AI decision-making must be clear and accountable.
4. Ongoing Risk Assessments: Regular reviews are required to monitor AI for biases or risks.

Exceptions for certain technologies, small businesses, or public interest activities are also common to reduce regulatory burdens.

by Tatiana Rice, Jordan Francis, Keir Lamont
How AI is Regulated in Finance
Summary
As artificial intelligence (AI) continues to influence decisions in finance, laws and guidelines are emerging to ensure fairness, transparency, and accountability. These regulations are designed to address risks like bias, discrimination, and lack of oversight, particularly in high-stakes areas such as lending and credit decisions.
- Understand your responsibilities: Developers must ensure their AI systems are transparent, fair, and safe, while deployers must monitor their implementation and inform consumers when AI influences decisions.
- Prioritize consumer transparency: Provide clear notifications to consumers when AI is involved in important decisions and allow for explanations, appeals, and corrections of decisions based on inaccurate or biased data (one way to record these steps is sketched after this list).
- Ensure compliance with laws: Stay updated on state-specific and federal regulations for AI in finance to align your systems with requirements like risk assessments, algorithm testing, and data transparency standards.
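To make the notice, explanation, and appeal workflow more concrete, here is a minimal Python sketch of how a deployer might record one AI-assisted consequential decision together with the notice, explanation, data-correction, and appeal steps that laws such as Colorado SB 24-205 contemplate. The `ConsequentialDecisionRecord` class, its field names, and the example values are illustrative assumptions, not terms defined by any statute.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class ConsequentialDecisionRecord:
    """Illustrative record a deployer might keep for one AI-assisted decision."""
    consumer_id: str
    decision_type: str                        # e.g. "credit", "employment", "housing"
    outcome: str                              # e.g. "approved", "denied"
    ai_system_name: str                       # which high-risk AI system was involved
    notice_sent_at: Optional[datetime] = None
    explanation: Optional[str] = None         # plain-language reasons for the outcome
    data_correction_requests: list = field(default_factory=list)
    appeal_requested: bool = False
    human_reviewer: Optional[str] = None      # appeals generally involve human review

    def send_notice(self, explanation: str) -> None:
        """Record that the consumer was told AI was used and why the outcome occurred."""
        self.notice_sent_at = datetime.now()
        self.explanation = explanation

    def request_appeal(self, reviewer: str, corrections: list) -> None:
        """Record an appeal, the human reviewer assigned, and any data corrections."""
        self.appeal_requested = True
        self.human_reviewer = reviewer
        self.data_correction_requests.extend(corrections)

# Example usage with hypothetical identifiers and values.
record = ConsequentialDecisionRecord(
    consumer_id="C-1042",
    decision_type="credit",
    outcome="denied",
    ai_system_name="underwriting-model-v3",
)
record.send_notice("Denied due to limited repayment history in the supplied data.")
record.request_appeal(reviewer="loan-officer-17", corrections=["Updated income statement"])
```

The point of keeping such a record in one place is that notice, explanation, correction, and appeal are typically obligations on the same decision, so auditing them together is simpler than reconstructing the trail later.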
The #CFPB just issued guidance on the use of #artificialintelligence or #ai in lending, specifically to deny loan applications. The Equal Credit Opportunity Act and Regulation B require adverse action notices when denying a credit application, and those notices must include up to four of the specific reasons for taking adverse action. The goal is for consumers to understand why they were considered not creditworthy for the product so they can take steps to improve those factors in the future.

The Bureau's guidance builds on previously published opinions and circulars, serving as a reminder that Regulation B's model adverse action notices provide a non-exhaustive list of reasons why someone may not qualify for credit, and that more specific, AI-related reasons may be necessary, such as "behavioral spending data" rather than "purchasing history," or "profession" instead of "insufficient projected income." The level of specificity required means ensuring that the software systems used to generate adverse action notices can be customized significantly and can reflect key parts of an algorithm or AI tool's data set.

The CFPB has signaled for a while that lenders should understand what kind of data are used and how lending decisions are made with this kind of technology, which the Bureau has sometimes called a "black box." There will be a balance between the technology provider's desire to protect trade secrets and lenders' regulatory obligations.

Slipped into the press release, almost as an afterthought, is the announcement of an advisory opinion that adverse action notices apply with regard to changes to existing credit products.

The AI circular can be found here: https://lnkd.in/eCVHATZh
The advisory opinion on the application of #ECOA to existing credit accounts can be found here: https://lnkd.in/gGe7Mbax
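The customization point can be illustrated with a short sketch. Assuming, hypothetically, that a lender's underwriting model exposes per-applicant feature attributions, the code below maps the features that pushed hardest toward denial onto specific, plain-language reasons and keeps at most four, in the spirit of Regulation B's notice requirement. The feature names, reason wording, and the `top_adverse_action_reasons` helper are illustrative assumptions, not part of any CFPB-published specification.

```python
# Hypothetical sketch: turning model feature attributions into specific
# adverse action reasons. Feature names and reason wording are illustrative.
REASON_TEXT = {
    "behavioral_spending_data": "Spending patterns in recent account activity",
    "cash_flow_volatility": "Irregular month-to-month cash flow",
    "profession": "Occupation associated with variable income",
    "credit_utilization": "High utilization of existing credit lines",
}

def top_adverse_action_reasons(attributions: dict, limit: int = 4) -> list:
    """Return up to `limit` specific reasons, ordered by how strongly each
    feature pushed the model toward denial (larger attribution = more adverse)."""
    ranked = sorted(attributions.items(), key=lambda item: item[1], reverse=True)
    reasons = []
    for feature, weight in ranked:
        if weight <= 0:
            continue  # only include features that contributed to the denial
        # Fall back to the raw feature name if no plain-language text is mapped.
        reasons.append(REASON_TEXT.get(feature, feature.replace("_", " ")))
        if len(reasons) == limit:
            break
    return reasons

# Example: attributions for one denied applicant (made-up values).
print(top_adverse_action_reasons({
    "behavioral_spending_data": 0.42,
    "credit_utilization": 0.31,
    "profession": 0.12,
    "length_of_residence": -0.05,
}))
```

In practice the mapping table is the part that needs the most care, since it is where a model-level signal becomes a reason a consumer can actually act on.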
The first comprehensive law in the U.S. to regulate AI was signed this month in Colorado. It comes into effect in February 2026 and regulates artificial intelligence (AI) systems that make decisions that have a consequential impact, such as in employment, health care, essential government services, housing, and financial services (defined as high-risk AI systems under the act). The act sets forth stringent requirements for both developers (i.e., any person or entity that develops or intentionally and substantially modifies an AI system) and deployers (i.e., any person or entity that uses a high-risk AI system) and is intended to promote transparency and protect Colorado residents from algorithmic discrimination.

Key provisions include:

· Algorithmic Discrimination Prevention: Developers and deployers must use reasonable care to prevent discrimination based on age, color, disability, ethnicity, genetic information, language proficiency, national origin, race, religion, reproductive health, sex, veteran status, or other protected classifications.

· Documentation and Transparency: Developers are required to provide detailed documentation, including the purpose, intended benefits, and known risks of AI systems, to ensure deployers can conduct thorough impact assessments. Both developers and deployers must publicly disclose the types of high-risk AI systems they work with and how they manage potential risks.

· Consumer Notifications: Deployers must notify consumers when a high-risk AI system is being used to make consequential decisions affecting them. Such notice must provide clear information about the AI system and the decision-making process. In cases of adverse decisions, consumers must be given the opportunity to correct inaccurate data and appeal decisions involving human review.

· Enforcement and Compliance: The Colorado Attorney General holds exclusive enforcement authority. Compliance with nationally or internationally recognized AI risk management frameworks can serve as a defense against enforcement actions.

The law could impact a number of organizations that are developing or deploying AI systems that use geospatial information. Examples of geospatial AI (GeoAI) applications that could be considered high risk under the Colorado law include:

· Real Estate Valuation: AI systems estimating property values based on location data might inadvertently incorporate biases, affecting loan approvals and housing opportunities.

· Health Care Access: AI-driven analysis of geospatial data for health care services could prioritize resources inequitably if not carefully designed and monitored.

· Predictive Policing: Using AI to analyze geospatial data to predict crime hotspots could lead to over-policing in certain neighborhoods, disproportionately affecting minority communities.

#geoai #geoint #geospatiallaw
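The algorithmic discrimination and impact assessment provisions imply some form of recurring statistical testing by deployers. As a rough, hypothetical illustration (not a methodology prescribed by the Colorado act), the sketch below computes a simple adverse impact ratio, comparing approval rates for a protected group against a reference group. That ratio and the 0.8 "four-fifths" threshold are one metric commonly used in bias audits; the group labels, threshold, and data are assumptions for the example.

```python
# Hypothetical sketch of a recurring bias check a deployer might run as part of
# an impact assessment. The 0.8 threshold mirrors the common "four-fifths rule"
# heuristic; it is not a standard set by the Colorado act.
def adverse_impact_ratio(outcomes, protected_group, reference_group):
    """outcomes: iterable of (group_label, approved) pairs for one monitoring period."""
    def approval_rate(group):
        group_outcomes = [approved for label, approved in outcomes if label == group]
        if not group_outcomes:
            raise ValueError(f"No outcomes recorded for group {group!r}")
        return sum(group_outcomes) / len(group_outcomes)

    return approval_rate(protected_group) / approval_rate(reference_group)

# Example monitoring run with made-up decision data.
decisions = (
    [("group_a", True)] * 62 + [("group_a", False)] * 38
    + [("group_b", True)] * 45 + [("group_b", False)] * 55
)

ratio = adverse_impact_ratio(decisions, protected_group="group_b", reference_group="group_a")
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Ratio below 0.8 - flag the system for further review and documentation.")
```

A single ratio like this is only a screening signal; documenting how flagged results feed back into the risk management program is what the transparency provisions are aimed at.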
🚨 California's AG Just Raised the Bar For AI Accountability

Last week, California's Attorney General issued a legal advisory notice on the use of AI systems in various sectors including employment, healthcare, lending, housing, marketing, and elections. Here's what you need to know:

♦️ Consumer Protection Compliance: The advisory clarifies that entities developing, selling, or using AI systems must comply with California laws safeguarding consumers against unfair competition, false advertising, misinformation, and other discriminatory practices in finance and housing.

♦️ Testing & Governance: The advisory also requires testing, validation, and governance of AI systems to ensure they comply with these consumer protection laws.

♦️ Transparency: AI developers and users in California are now required to disclose detailed information about their AI systems, including data on training processes and explanations for any adverse decisions.

♦️ Privacy and Data Rights: California's strong privacy and data rights laws allow consumers to:
➡️ Understand what personal data is being collected
➡️ Access, correct, or request the deletion of their personal information
➡️ Opt out of data sharing
➡️ Restrict how their data is used

Is your organization ready to comply with the California AG's AI guidelines? FairPlay's AI compliance solutions help you navigate the evolving AI regulatory landscape, so you can innovate responsibly and stay ahead of the curve.

The California AG's legal advisory is here: https://shorturl.at/UxE9P