AI acts, orders, and regulations

A good AI policy should guide an organization on AI uses that adhere to the necessary laws and regulations. With generative AI still new in many fields, from medicine to law, regulations are rapidly evolving. A landmark provisional deal on AI regulation was reached by the European Parliament and Council on December 8, 2023, with the EU AI Act. The guidelines laid out in the Act apply to AI regulation and use within the 27-member EU bloc, as well as to foreign companies that operate within the EU. The EU AI Act is likely to serve as a model for AI laws around the globe, for both individual countries and industries.

Countries outside of the EU are drafting their own laws, orders, and standards surrounding AI use, so you and your employees will need to do some research on what is and is not allowed in your local area. Always consult your legal counsel about the AI regulations that apply to you.

Disclaimer: The thoughts and ideas presented in this course are not a substitute for legal or ethical advice and are only meant to give you a starting point for gathering information about AI policy and regulations to consider.

The EU AI Act

According to EU policymakers involved in writing the AI Act, the goal of the Act is to regulate AI in order to limit its capacity to cause harm. The political agreement covers the use of AI in biometric surveillance (such as facial recognition), as well as guidance on regulations for LLMs. The EU AI Act divides AI-based products into levels based on how much risk each product might pose to things like data privacy and protection. Higher-risk products with a greater capacity to cause harm face more stringent rules and regulations. Most current AI uses (like systems that make recommendations) will not fall into this higher-risk category.

Final details are still being worked out, but we do know several important aspects of this Act.

  1. All content generated by AI must be clearly identified. Companies must also make it clear when customers are interacting with chatbots, emotion recognition systems, and models using biometric categorization.

  2. Developers of foundational models like GPT, as well as general-purpose AI (GPAI) systems, must create technical documentation and detailed summaries of the training data before these models can be released on the market.

  3. High-risk AI systems must undergo mandatory rights impact and mitigation assessments. Developers will also have to conduct model evaluations, assess and track possible cybersecurity risks, and report serious incidents and breaches to the European Commission. They will also have to use high-quality datasets and have better accompanying documentation.

  4. Open-source software is excluded from regulations, with some exceptions for software that is considered a high-risk system or is a prohibited application.

  5. The Act bans AI software used for manipulative strategies like deepfakes and automated disinformation campaigns, systems that exploit vulnerabilities, and the indiscriminate scraping of facial images from the internet or security footage to create facial recognition databases. Additional prohibited applications may be added later.

  6. There are exceptions to the facial scraping ban that allow law enforcement and intelligence agencies to use AI applications for facial recognition purposes.

The AI Act also lays out financial penalties for companies that violate these regulations, which can be between 1.5% and 7% of a company’s global revenue. More severe violations result in greater financial penalties.

This is still a provisional agreement. It must be approved by both the European Parliament and EU member states before becoming law. After approval, tech companies will have two years to implement the rules, though bans on prohibited AI uses take effect six months after the EU AI Act is ratified.

More information about the EU’s AI Act can be found in these sources.

MIT Technology Review

Reuters

EURACTIV.com

CIO Dive

Industry-specific policies

Some individual industries have already begun adopting policies about generative AI. They may also have long-standing policies in place about the use of other forms of AI, like machine learning. Some countries have also begun creating policies for specific industries and fields. When in doubt, always check with the experts within your organization about what AI policies exist for your industry.

We’ll discuss some specific examples of how different industries are approaching AI regulation in the next section.