Artificial intelligence and geopolitics go hand in hand.
On Thursday, July 13, 2023, the Cyberspace Administration of China (CAC) released its “Interim Measures for the Management of Generative Artificial Intelligence Services.” In it, the Chinese government lays out rules for those who provide generative AI services to the public in China. While many provisions of the law focus on familiar AI safety concerns such as IP protection, transparency, and non-discrimination, other sections are unique to China, such as adherence to the values of socialism and a prohibition on generating content that incites action against the State.
Unsurprisingly, AI stands apart from other technologies in how it has pushed major countries to closely examine their national and geopolitical positions on its development.
Given these recent developments in China, let’s explore how these regulatory approaches differ from those in the United States and the European Union.
China
Key Theme: state control; economic dynamism
The Great (AI) Firewall
The Chinese government sees AI as a strategic technology that can help it achieve its economic and geopolitical goals, and it has been actively promoting AI’s development and adoption. However, China’s approach to AI also raises concerns about privacy and civil liberties, as the government has been known to use AI for surveillance, censorship, and social control. Generative AI poses risks to state control that go beyond those already presented by the internet.
Under these new regulations, firms must obtain a license to provide generative AI services to the public and must undergo a security assessment if their services have public opinion attributes or social mobilization capabilities. In China, generative AI providers must uphold the integrity of state power, refrain from inciting secession, safeguard national unity, preserve economic and social order, and ensure their products align with the country’s socialist values.
China has also been building bureaucratic toolkits that let it propose new AI governance rules quickly and iteratively, allowing it to adjust regulatory guidance as new uses of the technology are adopted.
AI as an Economic Tool
Despite the Chinese government’s concerns about generative AI applications, the country is deeply committed to investing in AI across sectors. China accounted for nearly one-fifth of global private AI investment funding in 2021, attracting $17 billion for AI start-ups. In research, China produced about one-third of both AI journal papers and AI citations worldwide in 2021. Researchers estimate that AI can create upwards of $600 billion in economic value annually for the country. Expect China to continue investing in AI to support its transportation, manufacturing, and defense sectors. The manufacturing and distribution of semiconductors will also play a critical role in AI development.
China will ensure that information generated by AI aligns with the interests of the Chinese Communist Party (CCP). At the same time, recognizing AI’s economic potential, China will strategically use the technology to advance its global commercial and technological goals.
United States
Key Theme: self-regulation; pro-innovation
Federal Approach is TBD
The United States Congress has taken a relatively hands-off approach to regulating AI thus far, though Democratic leadership has expressed its intent to introduce a federal law regulating AI, and Republicans will likely present their own version. We consider the likelihood of such a law passing Congress to be low. The country’s regulatory framework is largely based on voluntary guidelines, such as the NIST AI Risk Management Framework, and on self-regulation by industry.
However, US federal agencies are likely to step in and regulate within their jurisdictional authority. For example, the Federal Trade Commission (FTC) has been active in policing deceptive and unfair practices related to AI, particularly by enforcing statutes such as the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and the FTC Act. The agency has released publications outlining expectations for AI development and use: train AI on representative data sets, test AI before and after deployment to avoid bias, ensure AI outcomes are explainable, and establish accountability and governance mechanisms for fair and responsible use. In addition, certain sectors, such as healthcare and financial services, are subject to sector-specific regulations that touch on AI.
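To make the bias-testing guidance above concrete, here is a minimal sketch of the kind of pre-deployment check an organization might run on a model’s outputs. The metric (a demographic parity gap), the threshold, and the sample data are purely illustrative assumptions; the FTC does not prescribe any particular test.

```python
# Minimal sketch of a pre-deployment bias check (illustrative only).
# The metric, threshold, and sample data are assumptions, not FTC requirements.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Compute the positive-outcome rate for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical model decisions (1 = favorable outcome) and group labels.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    gap, rates = demographic_parity_gap(preds, groups)
    print(f"Selection rates: {rates}")
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.2:  # illustrative review threshold, not a regulatory figure
        print("Flag for review before deployment.")
```

The same check could be repeated on live decisions after deployment, which is the before-and-after testing pattern the FTC publications describe.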
While the US generally favors a “light touch” approach to regulation in order to foster innovation and growth in the AI industry, the country is starting to align with the EU on international cooperation on AI, though specifics remain unclear. Most of these initiatives revolve around trade, national security, and privacy.
State and Local Take the Lead
In a recent post, we outlined how to navigate the AI regulatory minefield at the US state and local level. In 2018, California adopted the California Consumer Privacy Act (CCPA) in response to the European Union’s General Data Protection Regulation (GDPR). We expect US states to enact their own AI legislation in the absence of federal action, creating a patchwork of state-level regulations for companies to comply with.
In New York City, Local Law 144 requires employers and employment agencies to obtain a bias audit of automated employment decision tools. Colorado’s SB21-169 protects consumers from unfair discrimination in insurance practices that use AI, and California’s AB 331 would require impact assessments from developers and deployers of automated decision tools. Moreover, state legislatures in Texas, Vermont, and Washington are introducing legislation requiring state agencies to inventory all AI systems being developed, used, or procured, which would likely push government contractors to disclose more clearly where AI is used in their public-sector contracts.
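For a sense of what a bias audit of an automated employment decision tool involves, here is a minimal sketch of an impact-ratio calculation, a metric commonly reported in such audits. The group names, counts, and output format are hypothetical assumptions for illustration, not the law’s actual reporting requirements.

```python
# Illustrative sketch of an impact-ratio calculation, the kind of metric
# commonly reported in bias audits of automated employment decision tools.
# Group names and counts are hypothetical; consult the law's published
# rules for the actual audit requirements.

def impact_ratios(selected_by_group, total_by_group):
    """Selection rate of each group divided by the highest group's rate."""
    rates = {g: selected_by_group[g] / total_by_group[g] for g in total_by_group}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    selected = {"Group 1": 40, "Group 2": 25, "Group 3": 30}   # candidates advanced
    applied = {"Group 1": 100, "Group 2": 100, "Group 3": 100}  # candidates screened
    for group, ratio in impact_ratios(selected, applied).items():
        print(f"{group}: impact ratio {ratio:.2f}")
```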
We expect US states and localities to continue introducing legislation to regulate AI in specific use cases.
European Union
Key Theme: consumer protection; fairness & safety
A Global Standard for AI Regulation
Much like GDPR before it, the EU’s AI Act is likely to become the global standard for AI regulation, changing how many machine learning engineers do their work. The proposal includes a ban on certain uses of AI, such as facial recognition in public spaces, as well as requirements for transparency and accountability in the use of AI. Most importantly, organizations must assign a risk category to each AI use case and conduct a risk assessment and cost-benefit analysis before implementing a new AI system, especially one that poses a “heightened risk” to consumers. Controls to mitigate those risks should be defined and integrated into the business units where the risk can arise. On enforcement, Europe has learned lessons from GDPR that it will likely apply to AI, such as member-state enforcement agencies and better incident response.
Risk assessments will likely become standard practice for AI implementations, helping organizations understand the cost-benefit trade-offs of an AI system and enabling them to provide transparency and explainability to impacted stakeholders. Our partners at the Responsible AI Institute are among the leading institutions helping organizations conduct these assessments.
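As a rough illustration of what assigning risk categories might look like internally, here is a minimal sketch of an AI use-case register keyed to the Act’s broad risk tiers. The field names, example entries, and mitigations are assumptions made for illustration; this is not a compliance tool or an official taxonomy.

```python
# Minimal sketch of an internal AI risk register keyed to the AI Act's
# broad risk tiers. Field names and example entries are illustrative
# assumptions, not an official taxonomy or compliance tool.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited uses
    HIGH = "high"                  # conformity assessment and documentation
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

@dataclass
class AIUseCase:
    name: str
    business_unit: str
    tier: RiskTier
    mitigations: list = field(default_factory=list)

register = [
    AIUseCase("Resume screening", "HR", RiskTier.HIGH,
              ["bias audit", "human review of rejections"]),
    AIUseCase("Marketing copy generation", "Marketing", RiskTier.MINIMAL),
]

for uc in register:
    print(f"{uc.name} ({uc.business_unit}): {uc.tier.value} risk, "
          f"mitigations: {uc.mitigations or 'none'}")
```

Note that the register is keyed to use cases rather than to models, mirroring the Act’s requirement to assess each use of AI in context.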
Conflicting Perspectives on AI Innovation
The proposed regulation has been criticized by some as overly burdensome, creating additional costs and administrative responsibilities for organizations already overwhelmed by regulatory complexity. The EU argues that these measures are necessary to protect individuals from the potential harms of AI.
Interestingly, according to a recent Accenture report, many organizations see regulatory compliance as an unexpected source of competitive advantage: 43% of respondents think it will improve their ability to industrialize and scale AI, and 36% believe it will create opportunities for competitive advantage and differentiation. Organizations in regulated sectors like healthcare and finance are wary of developing and deploying AI with few guardrails in place. Coherent AI regulations that clarify responsibilities and liabilities would allow these organizations to adopt AI with confidence. The EU is betting on this.
Parting Thoughts
It is clear that each model (US, EU, and China) reflects its region’s societal values and national priorities. These diverging requirements may also conflict with one another: complying with China’s mandate for socialist values could directly contradict US and EU standards. The result is a more complex regulatory environment for businesses to operate in.
Over the coming years, governments, businesses, and citizens will ask themselves fundamental questions about fairness, human values, and the economic trade-offs of AI. While each regulatory framework may be perceived as more or less innovative, fair, or safe, all of them will require organizations leveraging AI to document certain information about their systems. Transparency and explainability (at the organizational, use case, and model/data levels) are key to complying with emerging regulations and to fostering trust in the technology.