Generative AI gained significant public attention in 2022 and is anticipated to become one of the most regulated technologies by 2025. Countless industries, including entertainment, marketing, and social media, have turned to this technology for various purposes, sometimes drawing criticism. And companies spanning search engines, design tools, and even financial services are embedding it within their platforms or building their own tools on top of this growing technology.

As a result, jurisdictions around the world, including the US, the EU, Brazil, and China, are swiftly moving to establish regulations to manage and control its applications. Governments are grappling with what constitutes ethical use of Generative AI, how intellectual property should be protected when it is used to train large language models (LLMs), and how information security can be maintained.

With AI tools rapidly gaining popularity, companies should pay attention to how their respective governments choose to regulate this emerging technology.

European Union: The AI Act for Generative AI Regulation

The European Union continues to advance steadily in establishing foundational regulations for generative AI and stands to be a global frontrunner in this space. The AI Act, a pioneering legal framework initiated in 2021 and refined since, is in the concluding phases of ratification, with enforcement anticipated by 2025.

The emergence of tools like ChatGPT and DALL-E by OpenAI and the subsequent engagement of tech giants such as Meta, Google, and Microsoft underscores the urgency for robust regulatory safeguards. The AI Act mandates the explicit disclosure of AI-generated content and the datasets employed in training large language models (LLMs).

The AI Act is slated for completion by the close of 2023, with an interim 24-month transition phase preceding its full implementation. This regulatory measure is characterized by risk-based categorizations and prohibition of specific high-risk AI utilities, including real-time biometric monitoring in public domains. Furthermore, it enforces rigorous oversight and disclosure protocols for other high-risk AI applications. The legislative effort aligns with European ethical norms, legal standards, and civil liberties, introducing penalties for non-compliance that could reach 7% of global turnover or €40 million.

In a complementary development, on September 28, 2022, the European Commission unveiled the “AI Liability Directive.” This proposal aims to harmonize rules on AI-induced damages, safeguard affected individuals, and stimulate the AI industry by enhancing safety assurances. Key features of the directive include mandated disclosure of significant evidence linked to high-risk AI systems, explicit accountabilities for developers, and mechanisms for victims to seek redress. The directive has yet to be finalized, however, and deliberations are ongoing.

United States: The SAFE Innovation Framework 

In the United States, by contrast, AI regulation is progressing at a slower pace. An executive order is under development, signaling the country’s move towards bipartisan regulation. There is also a growing conversation around modifying US copyright law to account for the implications of generative AI on creative industries.

Congressional efforts to regulate AI continue to escalate. Recently, Senate Majority Leader Chuck Schumer announced the SAFE Innovation Framework, which seeks to guide AI legislation with a focus on security, accountability, and explainability while promoting innovation. Although its passage in 2023 is uncertain, Congress is expected to continue developing AI legislation and engaging the industry for insights.

A National AI Strategy

There is also support for establishing a federal agency dedicated to AI regulation, as well as an international AI regulatory body. The White House and the Office of Science and Technology Policy actively advance AI R&D and are formulating a National AI Strategy to ensure fairness and transparency. In the meantime, federal agencies, including the FTC, already enforce existing laws, which emphasize fairness and consumer protection, to address AI-related violations.

United Kingdom: Generative AI Whitepaper

The UK has adopted a cautious yet strategic approach to AI, aspiring to become a global leader without imposing restrictive regulations that could hamper innovation. The establishment of a regulatory sandbox exemplifies this approach: it serves as a testing ground for understanding AI’s progression and implications before detailed legislation is crafted.

A white paper released on March 29, 2023, underscores the UK’s commitment to fostering innovation in AI. The government focused on strengthening existing regulatory mechanisms rather than formulating new laws or creating a specialized AI regulatory entity. The paper identifies adaptivity and autonomy as the defining characteristics of AI and introduces five key principles, aligned with the OECD’s AI principles, for managing AI-related risks. These principles emphasize safety, transparency, fairness, accountability, and mechanisms for redress.

Initially, these principles are advisory. However, they have the potential to gain legal status, contingent on the future trajectory and societal influences of AI. To facilitate the exploration and assessment of emerging AI innovations, the UK government has invested £2 million in an AI sandbox. This offers a practical environment for testing and understanding the potential regulatory implications of new AI products and technologies.

In the aftermath of the white paper’s publication, the government’s focus will pivot towards operationalizing central functions. It will continue fine-tuning the AI regulatory framework and conducting continuous evaluations to gauge the efficacy of its regulatory approach.

China: Interim Administrative Measures for Generative Artificial Intelligence Service

China has already implemented regulations governing Generative AI. These regulations focus on the accuracy of LLMs and their training data, a move that could significantly limit the penetration of consumer-level generative AI in the country.

On July 13, 2023, the Cyberspace Administration of China (CAC) released the final version of the Interim Administrative Measures for Generative Artificial Intelligence Service (PRC AI Regulations), governing public AI services in China. These regulations exclude non-public service providers, such as businesses and research institutions involved in AI R&D. Providers of Generative AI services are mandated to monitor and control content, remove illegal content, act against users engaging in illegal activities, and report to authorities. In addition, generated content must be appropriately labeled, training data must come from legitimate sources, intellectual property rights must be respected, and consent must be obtained for processing personal information.

The regulations emphasize user privacy and prohibit the unlawful collection and sharing of identifiable data. China’s regulatory approach seems to be industry-oriented, with different departments overseeing AI services within their domains. The PRC AI Regulations build on the earlier Algorithm Provisions and Deep Synthesis Provisions, focusing on safety assessments and obligations for AI service providers and technical supporters concerning data security, transparency, content management, and labeling.

The Future of Generative AI Regulation

In essence, as generative AI continues to evolve, countries worldwide are fast-tracking their efforts to establish regulatory frameworks, trying to balance the innovation surge with ethical and security considerations. The impending regulations underscore the global urgency to manage the profound implications of these technologies for society.

That being said, compliance can become a competitive advantage for businesses that employ the right approach and the best tools for achieving and maintaining it. In this case, the problem also becomes the solution: AI-powered risk and compliance management can help businesses adhere to regulations and bolster their profits, especially if informed by emerging regulations around growing technologies.

Learn our perspective on Generative AI and its upcoming regulation by contacting us today to speak to one of our consultants.