AI Regulatory Landscape
Artificial intelligence is moving rapidly from research laboratories into everyday systems that influence hiring, healthcare, finance, policing, and online information. As these systems take on greater decision-making roles, governments are increasingly focused on how these systems should be governed.
For many years, AI oversight relied largely on voluntary ethics principles developed by technology companies, research institutions, and international organizations. These guidelines emphasized fairness, transparency, and accountability but were not legally enforceable.
That approach is now changing. Governments are beginning to translate these principles into formal laws, regulatory frameworks, and technical standards that define how AI systems can be developed, deployed, and monitored. The global regulatory landscape is evolving quickly, with different regions adopting different approaches to balancing innovation, risk management, and public accountability.
How Artificial Intelligence Is Regulated Worldwide
European Union: The World's First Comprehensive AI Law
The EU's Artificial Intelligence Act (2024) is the most significant AI regulation in the world to date. It takes a risk-based approach, meaning the greater the potential for harm, the stricter the rules. The Act is being phased in over several years, with different provisions taking effect on different timelines through 2026 and beyond.
Unacceptable Risk: Applications such as government social scoring and certain uses of real-time biometric identification in public spaces are prohibited outright.
High Risk: AI used in hiring decisions, credit scoring, medical devices, and criminal justice systems must meet strict requirements for risk assessment, human oversight, data governance, and documentation before deployment.
Limited Risk: Systems that interact directly with people, such as chatbots or synthetic media, must disclose that the user is interacting with AI.
Minimal Risk: Applications such as spam filters or AI used in video games have minimal regulatory requirements.
Source: https://eur-lex.europa.eu/eli/reg/2024/1689/oj
United States: A Decentralized Regulatory Approach
The United States does not currently have a single comprehensive federal AI law comparable to the broad regulatory frameworks adopted in some other jurisdictions. Instead, oversight is developing through a combination of federal policy initiatives, agency enforcement under existing laws, and increasing state-level legislation.
At the federal level, several policy initiatives and guidance frameworks have been introduced to address risks associated with advanced AI systems, focusing on areas such as safety testing, transparency, risk management, and responsible development practices. Federal agencies are also addressing AI through existing regulatory authority, applying consumer protection, civil rights, competition law, and data privacy regulations to AI-related issues such as misleading claims about AI capabilities, algorithmic discrimination, and the misuse of personal data. Agency guidance from bodies including the FTC, EEOC, and HHS addresses AI use within their respective areas of jurisdiction.
State governments are playing an increasing role as well. Illinois' Biometric Information Privacy Act imposes strict rules on the collection of biometric data, including facial recognition data. Colorado has passed legislation requiring impact assessments for high-risk AI systems used in consequential decisions. Other states are actively developing similar frameworks.
As a result, AI governance in the United States is evolving through a decentralized structure that combines federal policy direction, enforcement through existing regulatory frameworks, targeted legislation addressing specific risks, and a growing but uneven landscape of state-level regulation.
Deepfake and Synthetic Media Laws
Synthetic media, meaning AI-generated images, audio, and video that depict real people, has created a new category of legal concern. Governments at the state, federal, and international levels are introducing laws targeting nonconsensual AI-generated explicit imagery, synthetic media used to interfere with elections, and platform accountability for hosting or distributing harmful synthetic content. This area of law is developing quickly, and what conduct is covered and what penalties apply vary significantly by jurisdiction.
Examples of Emerging AI Regulation
AI governance is rapidly evolving from voluntary principles into enforceable law. The examples below illustrate how different jurisdictions are approaching oversight of AI systems, digital content, and the societal impacts of emerging technologies.
TAKE IT DOWN Act: The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act targets nonconsensual intimate imagery, including AI-generated deepfake content. Signed into law by President Donald Trump on May 19, 2025, the Act makes it illegal to knowingly publish these images and requires online platforms to remove reported content within 48 hours of receiving a valid complaint. Platforms have until May 19, 2026 to establish the required notice-and-removal process. Source: https://www.congress.gov/bill/119th-congress/senate-bill/146
Executive Order on AI: On January 23, 2025, President Donald Trump signed an executive order directing federal agencies to prioritize American leadership in artificial intelligence. The order revoked the previous administration's 2023 executive order on AI safety and directed the development of a new action plan to sustain and strengthen U.S. competitiveness in AI. The current federal approach emphasizes reducing regulatory barriers and promoting innovation rather than the safety testing and disclosure requirements established under the prior order. Source: https://www.federalregister.gov/documents/2025/01/31/2025-02172/removing-barriers-to-american-leadership-in-artificial-intelligence
California AI Transparency Act (SB 942, as amended by AB 853): This law requires large AI platforms and large online platforms to provide free tools that help identify or label AI-generated or manipulated digital content, with the goal of improving transparency around synthetic media and reducing the spread of deceptive content online. Originally signed in 2024, the law takes effect August 2, 2026. Source: https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB942
Global Organizations and Standards
Beyond government regulation, technical standards organizations also influence how AI governance is implemented in practice.
ISO/IEC develops international standards for AI management systems, risk governance, and lifecycle management.
IEEE develops technical and ethical standards addressing transparency, bias mitigation, and responsible AI design.
UNESCO developed the Recommendation on the Ethics of Artificial Intelligence (2021), the first global standard on AI ethics adopted by nearly 200 countries, outlining principles for human rights protection, transparency, environmental responsibility, and oversight of AI systems.
The OECD created the OECD AI Principles, one of the earliest internationally recognized policy frameworks for trustworthy AI, emphasizing human-centered values, transparency, robustness, and accountability.
Although these organizations do not create laws, their standards are frequently incorporated into corporate governance policies, certification programs, and government procurement requirements.
A Note on Scope: The examples above focus on the European Union and the United States, where the most widely referenced regulatory frameworks have been developed. Other governments are also actively developing AI oversight. The United Kingdom is pursuing a sector-based approach coordinated through existing regulators rather than a single AI law. China has introduced regulations targeting specific AI applications including generative AI services and algorithmic recommendation systems. This is a rapidly evolving area, and the frameworks described here represent a snapshot of the landscape as of the date this page was last reviewed.
Why Regulation Matters and What It Cannot Do Alone
Regulation aims to reduce bias and discrimination, protect privacy, ensure accountability, and increase transparency in AI systems. Legal frameworks create real incentives for companies to address risks they might otherwise ignore.
At the same time, regulation typically responds to problems after they are identified, not before. Laws are written in response to harms that have already occurred or been anticipated. This is why individual awareness, industry accountability, and ongoing public scrutiny remain important alongside legal frameworks.
Understanding the regulatory framework is important, but putting AI practices into action is key. Learn how to apply these principles on the Safe and Responsible AI Use page.
Last Reviewed: March 2026
Sources and Further Reading
EU AI Act full text: https://eur-lex.europa.eu/eli/reg/2024/1689/oj
TAKE IT DOWN Act: https://www.congress.gov/bill/119th-congress/senate-bill/146
Executive Order on Removing Barriers to American Leadership in AI: https://www.federalregister.gov/documents/2025/01/31/2025-02172/removing-barriers-to-american-leadership-in-artificial-intelligence
California AI Transparency Act (SB 942, as amended by AB 853): https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB942
UNESCO Recommendation on the Ethics of AI: https://unesdoc.unesco.org/ark:/48223/pf0000381137
OECD AI Principles: https://oecd.ai/en/ai-principles
NIST AI Risk Management Framework: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
ISO/IEC 42001 AI Management System: https://www.iso.org/standard/81230.html