
Recap of the European Parliament AI Proposal

Lawrence Lerner • Jun 06, 2023

On June 14, 2023, the European Parliament approved its amendments to the European Union (EU) Artificial Intelligence Act (AI Act or AIA), referred to here as the "Parliament Proposal." The EU's AI Act is the first comprehensive set of regulations for AI.


Why is this noteworthy? It marks an evolution in how regulators and policy-makers try to keep pace with technology. In the past, regulators had to play catch-up with the internet, social media, and data privacy. AI has galvanized the public, so politicians are taking action.


You may passively follow discussions of AI and LLMs (Large Language Models) such as ChatGPT as just another tech novelty. Worldwide, regulators are taking them very seriously. The European Parliament has worked on this AI Act for the past few years. British Prime Minister Rishi Sunak is calling for his country to be ground zero for AI regulation and is putting real pounds behind it.


The European Parliament adopted its negotiating position on the AI Act with 499 votes in favor, 28 against, and 93 abstentions. While the regulation is not yet law, it is likely to be one of the first formal sets of rules for AI. The latest amended 144-page version is the "Proposal for a regulation of the European Parliament and of the Council on harmonised rules on Artificial Intelligence" (May 9, 2023). The US and other government bodies are working on their own versions.


AI is more than the kind of headline-grabbing hype we saw during the initial blockchain craze, when a $24 million iced tea company said it was pivoting to blockchain and its stock jumped 200%. There are dozens of real-world implementations, such as computer vision, that are raising concerns. Here are the key provisions of the Parliament Proposal:


  1. The proposed regulation aims to create a legal framework for AI to ensure it is safe and respects EU laws and values (Chapter 1, Article 1).
  2. The regulation applies to AI systems placed on the market, put into service, or used in the EU, regardless of whether the provider is established within the EU (Chapter 1, Article 2).
  3. AI systems are categorized based on risk level, with high-risk AI systems subject to stricter requirements (Chapter 2, Article 6).
  4. Providers of high-risk AI systems must ensure that their systems comply with various obligations, including data and data governance, documentation, transparency, and human oversight (Chapter 3, Articles 10-15).
  5. Users of high-risk AI systems must ensure they use the system correctly and follow the instructions provided by the provider (Chapter 4, Article 29).
  6. The regulation establishes a European Artificial Intelligence Board to advise and assist the European Commission and member states in AI-related matters (Chapter 5, Article 56).
  7. National competent authorities will be responsible for monitoring and enforcing compliance with the regulation (Chapter 6, Article 59).
  8. Penalties for non-compliance with the regulation include fines of up to 6% of the provider's total annual worldwide turnover or €30 million, whichever is higher (Chapter 7, Article 71); a quick illustrative calculation follows this list.
  9. The regulation encourages voluntary codes of conduct for providers and users of AI systems that are not considered high-risk to promote responsible AI practices (Chapter 8, Article 76).
  10. The regulation promotes AI research and development by supporting a regulatory sandbox approach, allowing organizations to test AI systems in a controlled environment (Chapter 9, Article 77).
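To make the penalty ceiling in item 8 concrete, here is a minimal, purely illustrative Python sketch. The function name is mine, and the 6% and €30 million figures simply restate the May 2023 text summarized above; this is not legal advice.

```python
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Ceiling on the fine under the proposal's headline penalty:
    6% of total annual worldwide turnover or EUR 30 million, whichever is higher."""
    return max(0.06 * annual_worldwide_turnover_eur, 30_000_000)


# Example: a provider with EUR 1 billion in annual worldwide turnover faces a ceiling
# of EUR 60 million, since 6% of turnover exceeds the EUR 30 million floor.
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # -> 60,000,000
```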


The European Artificial Intelligence Act classifies AI systems into four levels of risk: unacceptable, high, limited, and minimal. Here are descriptions and examples of each risk level:


  1. Unacceptable risk: AI systems posing an unacceptable risk violate fundamental rights and are considered a clear threat to the safety or rights of individuals. These systems are banned or subject to strict limitations. Examples, drawn largely from the prohibited practices in Article 5 of the proposal, include:
     - AI systems that manipulate human behavior, exploit vulnerabilities, or cause physical or psychological harm.
     - AI systems that use real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, unless certain exceptions apply.
     - AI systems that evaluate the creditworthiness of individuals in ways that lead to social scoring by public authorities.
     - AI systems that enable governments to engage in indiscriminate surveillance, including the large-scale monitoring of individuals without their consent.
     - AI systems that use subliminal techniques to materially distort a person's behavior in a way that could cause harm or that targets a vulnerable group.
  2. High risk: High-risk AI systems are subject to strict legal and technical obligations, including data governance, documentation, transparency, and human oversight. These systems are typically used in critical sectors like healthcare, transportation, and law enforcement. Examples, drawn from the high-risk areas listed in Annex III, include:
     - Biometric identification and categorization of natural persons
     - Management and operation of critical infrastructure
     - Employment, workers management, and access to self-employment
     - Access to and enjoyment of essential private services and public services
     - Law enforcement
     - Administration of justice and democratic processes
  3. Limited risk: AI systems with limited risk have specific transparency obligations. Example: users should be informed when they are interacting with an AI system such as a chatbot.
  4. Minimal risk: Most AI systems fall into this category, and their use is encouraged for innovation and growth. Example: AI-based recommendation systems used by e-commerce platforms or streaming services, suggesting products, movies, or music based on user preferences and behavior.


Please note that the European Artificial Intelligence Act does not explicitly define these four levels of risk. The Act primarily focuses on high-risk AI systems, while the other risk levels are derived from the regulatory requirements and obligations imposed on AI systems with different risk profiles.
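As a way to visualize the tiered structure described above, here is a rough, non-authoritative Python sketch. The tier names mirror the list above, and the obligations attached to each tier are simplified summaries of the provisions discussed in this post, not the regulation's actual legal tests.

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g., social scoring by public authorities)
    HIGH = "high"                  # Annex III areas (e.g., critical infrastructure, law enforcement)
    LIMITED = "limited"            # transparency obligations (e.g., chatbots must identify themselves)
    MINIMAL = "minimal"            # everything else (e.g., recommendation engines)


# Simplified, illustrative summary of what each tier implies under the proposal.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned or subject to strict limitations"],
    RiskTier.HIGH: ["data governance", "technical documentation", "transparency", "human oversight"],
    RiskTier.LIMITED: ["inform users they are interacting with an AI system"],
    RiskTier.MINIMAL: ["voluntary codes of conduct encouraged"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the simplified obligation summary for a given risk tier."""
    return OBLIGATIONS[tier]


print(obligations_for(RiskTier.HIGH))
```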


There is much more to unpack and watch. As noteworthy developments unfold, I will share them.


What are your concerns or questions about AI? Do you think the US should follow the same path?


ABOUT THE AUTHOR


Lawrence


I translate CEO, Owner, and Board vision and goals into market-making products that generate $100M in new revenue by expanding into new geographies, industries, and verticals while adding customers.


Leaders engage me as their trusted advisor to crush their goals and grow, fix, or transition their businesses, with a cumulative impact of $1B.


👉🏼 Subscribe for retail industry news, unpacking trends and timely issues for leaders.

 

Ready to grow, address change, or transition your business? 👉🏼  Let's brainstorm
