State And Local Themes From Recent Legislation

By News Room · July 12, 2023 · 8 Min Read

The complex regulatory landscape for artificial intelligence (AI) has become a pressing challenge for businesses. Governments are approaching AI through the same piecemeal lens as other emerging technologies such as autonomous vehicles, ride-sharing, and data privacy. In the absence of a unified set of federal guidelines, state and local governments have taken the lead, leaving individual businesses to track which regulations apply to them.

Today’s most compelling new Large Language Models (LLMs) have seemingly unlimited applications, each with its own risks. Managing those harms means regulating AI use cases, not just the models themselves. The multi-state governance burden falls on businesses, which must provide evidence of risk mitigation, fairness, and transparency in their specific AI applications. Compliance tools like standardized, pre-built model cards will not be able to meet this increased scrutiny.
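To make the contrast concrete, here is a minimal sketch of what use-case-scoped documentation might look like, going beyond a generic model card. All field names and values here are hypothetical illustrations, not a real compliance schema or any regulator's required format.

```python
from dataclasses import dataclass, field, asdict

# Hypothetical sketch: a documentation record scoped to one AI use case,
# capturing the application-specific evidence (jurisdictions, mitigations,
# fairness results) that a generic, model-level card would not hold.
@dataclass
class UseCaseCard:
    model_name: str
    use_case: str                                       # the specific application being governed
    jurisdictions: list = field(default_factory=list)   # laws this deployment falls under
    risk_mitigations: list = field(default_factory=list)
    fairness_tests: dict = field(default_factory=dict)  # metric name -> measured result

card = UseCaseCard(
    model_name="resume-screener-v2",
    use_case="candidate ranking for software roles",
    jurisdictions=["NYC Local Law 144"],
    risk_mitigations=["annual independent bias audit"],
    fairness_tests={"impact_ratio_gender": 0.91},
)

# Serialize for an audit trail; one record per use case, not per model.
record = asdict(card)
print(record["use_case"])
```

The point of the sketch is structural: the same model would carry a separate record for each deployment, since each use case faces its own jurisdictions and evidentiary bar.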

Governance at the level required by new AI regulations — not just in Europe through the AI Act but in states like New York or Colorado — will not be easy. This compliance burden can be overwhelming, particularly for smaller businesses with limited resources to navigate complex regulatory frameworks. Compliance with these regulations requires a deep understanding of the intricacies of AI algorithms, meticulous documentation, ongoing testing, and context-specific risk mitigation.

All of this will take time from the same teams trying to capture AI’s innovation possibilities. Companies need to proactively keep a pulse on where upcoming requirements are likely to land. Otherwise, they risk being caught flat-footed and having to rectify regulatory issues after the fact. With these issues in mind, here are some key themes we’re seeing in state and local government regulation of AI.

Industry-Specific Regulation in HR and Insurance

Industry-specific regulations are emerging in already highly regulated sectors like hiring and insurance. Concerns about potential biases and discrimination have prompted states to take proactive measures, with the burden of proof for fairness and transparency on the companies using them.

New York City, for example, has embraced Local Law 144, which mandates disclosure and bias audits on automated employment decision tools. Violations of the law can result in civil penalties. Other legislation focused on AI in hiring exists in Illinois and Maryland. Notably, the ultimate party responsible for compliance is not the AI model provider but rather the companies deploying them.

A similar story is surfacing in insurance. In 2021, Colorado introduced SB 21-169 to safeguard consumers against unfair discrimination in insurance rate-setting mechanisms. As it moves into the implementation phase, the Colorado Division of Insurance (DOI) has revised the regulation, requiring life insurers to provide additional documentation on their AI systems, conduct thorough testing of their algorithms and models, and establish “risk-based” governance for AI systems utilized in claims, ratemaking, and pricing. Other states like New Jersey, Virginia, and Washington (House and Senate) have proposed comparable laws, emphasizing the need for governance and transparency regarding AI systems in insurance.

In an era where AI is increasingly integrated into critical processes like hiring and insurance, regulators rightly emphasize the need to address potential biases and discrimination. This application-focused regulatory approach means legal liability will fall squarely on the companies applying the AI systems, not on their suppliers. In the same vein, we can expect additional sector-specific laws in financial services, healthcare, education, and other regulated industries over the next few years.

Targeting Underlying AI Tools

State regulations are also focusing on underlying AI tools used in decision-making processes, regardless of whether they rely on simple rules-based or deep learning techniques. The crucial question is where human judgment – and human liability – is being displaced.

Legislation like California’s AB 331 and New Jersey’s A4909 regulates the use of automated decision tools that wield significant influence over people’s civil rights, employment, and essential services. California would require both developers and users of such automated decision-making tools to submit impact assessments and transparency disclosures. Customers would also have the right to opt out, a difficult feature for companies to add to AI products already in deployment.

This cost of compliance will only grow with time. California’s proposal even allows residents to file lawsuits, adding a direct financial cost to noncompliance. The trend is clear: all parties involved in creating an AI system will be responsible for managing the risks they introduce.

Building on Privacy Foundations

Privacy regulations are also relevant to AI governance, as AI systems increasingly process personal data with the potential to result in unlawful discrimination. While the California Consumer Privacy Act (CCPA) has been in place since 2018, nine other states have laws already enacted or in progress. These laws restrict the use of personal information and give users the right to access, correct, and control their personal data. AI-specific regulation will likely have similar requirements.

However, simply adopting preconfigured privacy-specific toolkits will not be enough. The pathway for privacy compliance is bottom-up, protecting individual data points as they move through increasingly complex data systems. AI systems are broader reaching, and regulation is increasingly taking a top-down approach, particularly concerned with the interactions between datasets, analytical systems, and end-user applications.

Requiring Responsible AI through Government Procurement

Government procurement regulations can set the stage for responsible AI practices. With the introduction of SB 1103, Connecticut offers comprehensive guidance on the development, utilization, and evaluation of AI systems within state agencies. The law mandates impact assessments before deploying AI systems to prevent any unlawful discrimination.

As a result, vendors who already perform these impact assessments are likely to hold a compelling advantage in the selection process. This internal procurement standard can thus be easily scaled across other localities seeking to quickly promote responsible AI practices within their jurisdictions.

Parting Thoughts

In an era where AI is integrated into our everyday lives — sometimes even without our knowledge or understanding — regulators across the U.S. are rightly emphasizing the need for responsible AI governance. However, the challenge lies in the intricate nature of compliance. Companies will need to showcase not only the effectiveness of their AI systems but also the measures they have taken to mitigate risks and harms across various jurisdictions. Unfortunately, in the near term, this means business owners will have to keep abreast of the rules themselves and tread cautiously through a maze of regulations, each with its own set of obligations and expectations.
