The complex regulatory landscape for artificial intelligence (AI) has become a pressing challenge for businesses. Governments are approaching AI through the same piecemeal lens as other emerging technologies such as autonomous vehicles, ride-sharing, and even data privacy. In the absence of cohesive federal guidelines, state and local governments have been forced to take the lead, leaving individual businesses to track which regulations they must comply with.
Today’s most compelling new Large Language Models (LLMs) have seemingly unlimited applications, each with its own risks. Managing those risks means focusing regulation on AI use cases, not just the models themselves. The multi-state governance burden falls on businesses, which must provide evidence of risk mitigation, fairness, and transparency in their specific AI applications. Compliance tools like standardized, pre-built model cards will not withstand this increased scrutiny.
Governance at the level required by new AI regulations — not just in Europe through the AI Act but in states like New York or Colorado — will not be easy. The compliance burden can be overwhelming, particularly for smaller businesses with limited resources to navigate complex regulatory frameworks. Meeting these requirements demands a deep understanding of the intricacies of AI algorithms, meticulous documentation, ongoing testing, and context-specific risk mitigation.
All of this will take time from the same teams trying to capture AI’s innovation possibilities. Companies need to proactively keep a pulse on where upcoming requirements are likely to land. Otherwise, they risk being caught flat-footed and forced to rectify regulatory issues down the line. With these issues in mind, here are some key themes we’re seeing in state and local government regulation of AI.
Industry-Specific Regulation in HR and Insurance
Industry-specific regulations are emerging in already highly regulated sectors like hiring and insurance. Concerns about potential biases and discrimination have prompted states to take proactive measures, placing the burden of proof for fairness and transparency on the companies using these AI tools.
New York City, for example, has embraced Local Law 144, which mandates disclosure and bias audits for automated employment decision tools. Violations of the law can result in civil penalties. Other legislation focused on AI in hiring exists in Illinois and Maryland. Notably, the party ultimately responsible for compliance is not the AI model provider but rather the company deploying the tool.
A similar story is surfacing in insurance. In 2021, Colorado introduced SB 21-169 to safeguard consumers against unfair discrimination in insurance rate-setting mechanisms. As it moves into the implementation phase, the Colorado Division of Insurance (DOI) has revised the regulation, requiring life insurers to provide additional documentation on their AI systems, conduct thorough testing of their algorithms and models, and establish “risk-based” governance for AI systems utilized in claims, ratemaking, and pricing. Other states like New Jersey, Virginia, and Washington (House and Senate) have proposed comparable laws, emphasizing the need for governance and transparency regarding AI systems in insurance.
In an era where AI is increasingly integrated into critical processes like hiring and insurance, regulators rightly emphasize the need to address potential biases and discrimination. This application-focused regulatory approach means legal liability will fall squarely on the companies applying the AI systems, not on their suppliers. In that same vein, we can expect additional sector-specific laws in financial services, healthcare, education, and other regulated industries over the next few years.
Targeting Underlying AI Tools
State regulations are also focusing on the underlying AI tools used in decision-making processes, regardless of whether they rely on simple rules-based logic or deep learning techniques. The crucial question is where human judgment – and human liability – is being displaced.
Legislation like California’s AB 331 or New Jersey’s A4909 regulates the use of automated decision tools that wield significant influence over people’s civil rights, employment, and essential services. California would require both developers and users of such automated decision-making tools to submit impact assessments and transparency disclosures. Customers would also have the right to opt out, a difficult feature for companies to add to AI products already in deployment.
The cost of compliance will only grow with time. California’s proposal even allows residents to file lawsuits, attaching a direct financial cost to noncompliance. The trend is clear: all parties involved in creating an AI system will be responsible for managing the risks they introduce.
Building on Privacy Foundations
Privacy regulations are also relevant to AI governance, as AI systems increasingly process personal data in ways that could result in unlawful discrimination. While the California Consumer Privacy Act (CCPA) has been in place since 2018, nine other states have comparable laws already enacted or in progress. These laws restrict the use of personal information and give users the right to access, correct, and control their personal data. AI-specific regulation will likely carry similar requirements.
However, simply adopting preconfigured privacy-specific toolkits will not be enough. The pathway for privacy compliance is bottom-up, protecting individual data points as they move through increasingly complex data systems. AI systems have a broader reach, and regulation is increasingly taking a top-down approach that is particularly concerned with the interactions between datasets, analytical systems, and end-user applications.
Requiring Responsible AI through Government Procurement
Government procurement regulations can set the stage for responsible AI practices. With the introduction of SB 1103, Connecticut offers comprehensive guidance on the development, utilization, and evaluation of AI systems within state agencies. The law mandates impact assessments before AI systems are deployed in order to prevent unlawful discrimination.
As a result, vendors who already perform these impact assessments are likely to have a compelling advantage in the selection process. And this internal procurement standard can easily be scaled across other localities seeking to quickly promote responsible AI practices within their jurisdictions.
Parting Thoughts
In an era where AI is integrated into our everyday lives — sometimes even without our knowledge or understanding — regulators across the U.S. are rightly emphasizing the need for responsible AI governance. However, the challenge lies in the intricate nature of compliance. Companies will need to showcase not only the effectiveness of their AI systems but also the measures they have taken to mitigate risks and harms across various jurisdictions. Unfortunately, in the near term, this means business owners will have to keep abreast of these rules themselves and tread cautiously through a maze of regulations, each with its own set of obligations and expectations.