The EU AI Act: What It Means for Companies Developing and Using AI

The European Union’s Artificial Intelligence Act (EU AI Act) is a recent regulatory framework set to reshape how AI is developed and deployed in Europe and beyond. Often compared to the GDPR in its scope and impact, the AI Act introduces a risk-based approach to govern AI systems with the goal of ensuring AI is safe, trustworthy, and aligned with fundamental rights. 

CIOs, CISOs, and compliance leads need to understand what the law requires, how it classifies AI, and when obligations start applying. Below, we break down what the EU AI Act is, the intentions behind it, what it promises to achieve, and an analysis of its pros and cons, especially focusing on practical implications for companies.

What Is the EU AI Act?

The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence, establishing harmonized rules across all EU member states. Instead of targeting a specific industry, it applies broadly to any AI system placed on the EU market or used in the EU, including providers and users outside Europe if their AI outputs reach EU users. 

It classifies AI systems by risk level. Unacceptable-risk AI uses are banned entirely, high-risk systems face strict compliance obligations, limited-risk applications require transparency measures, and minimal-risk AI is largely unregulated. A simple illustrative sketch of this tiering follows the list below.

  • Unacceptable Risk (Prohibited AI): At the top of the pyramid are AI practices deemed a clear threat to safety or fundamental rights, which will be outright banned in the EU. Examples include AI that manipulates vulnerable people (e.g. a toy encouraging dangerous behavior in children), social scoring of individuals by governments, mass surveillance via real-time facial recognition in public spaces, and AI that exploits human vulnerabilities for harm. Such systems may not be developed or used in the EU at all. (Notably, some narrow exceptions exist; for instance, law enforcement can use real-time biometric identification in public only for serious crimes and with judicial approval.)
  • High Risk: These are AI systems that may significantly affect health, safety, or fundamental rights of people. High-risk use cases span critical sectors – for example, AI in critical infrastructure (transport, energy grids), in education (scoring exams or university admissions), in employment and HR (CV-screening or worker monitoring tools), in essential services like credit scoring, in law enforcement or border control (risk assessment tools, crime-prediction systems), and in justice (AI assisting judicial decisions). Providers of high-risk AI must comply with strict legal requirements before and after such systems hit the market.
  • Limited Risk (Transparency-Obligated): This category covers AI systems that are generally not harmful by themselves but require certain transparency disclosures to users. The idea is to preserve user trust by informing people when AI is in use. For example, if a website uses an AI chatbot or virtual assistant, it must be made clear to users that they are interacting with a machine (not a human). Similarly, AI-generated content like deepfake images or videos must be clearly labeled as AI-generated when presented to the public. Providers of generative AI models also need to build in safeguards against illegal content generation and publish summaries of the copyrighted data used in training.
  • Minimal or No Risk: The vast majority of everyday AI systems fall in this bottom tier, which carries no additional legal requirements under the Act. This includes benign applications like AI-powered spam filters, recommendation algorithms, or AI in video games. Such AI can be developed and deployed freely, although the Act encourages voluntary codes of conduct for best practices even in low-risk scenarios. Companies should still monitor even low-risk AI for any changes or new features that might elevate its risk (for instance, adding a feature that interacts with human emotions or personal data might change an AI tool’s risk profile over time).
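
How a given use case maps onto these tiers is ultimately a legal judgment, but it helps to record the provisional classification explicitly in your AI inventory. The minimal Python sketch below is illustrative only; the tier labels and example systems are assumptions for demonstration, not legal determinations.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "prohibited"         # banned outright
    HIGH = "high-risk"                  # strict pre- and post-market obligations
    LIMITED = "transparency-obligated"  # disclosure duties (chatbots, deepfakes)
    MINIMAL = "minimal-risk"            # no additional obligations under the Act

# Illustrative, hypothetical mapping of internal use cases to provisional tiers.
# A real classification must be made case by case against the Act's criteria and annexes.
provisional_classification = {
    "cv_screening_tool": RiskTier.HIGH,
    "customer_support_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

for system, tier in provisional_classification.items():
    print(f"{system}: {tier.value}")
```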

Intentions Behind the EU AI Act

AI systems can have real-world consequences, yet remain opaque and unaccountable. The AI Act aims to fix that.

Key objectives and motivations include:

  • Safeguarding Fundamental Rights and Safety: The EU’s first priority is to make sure AI does not harm or discriminate against people. The Act explicitly aims to ensure AI systems are non-discriminatory, transparent, and under human oversight, so that outcomes like unfair bias or unsafe decisions are prevented. For example, requiring high-risk AI to use quality data and human oversight is about preventing algorithms from infringing on individuals’ rights or well-being. The inclusion of bans on manipulative and social scoring AI also reflects the EU’s values around human dignity and privacy.
  • Promoting Trustworthy AI and Transparency: The motto of the EU’s approach is “trustworthy AI.” Lawmakers intend for citizens to trust what AI has to offer because there are guardrails in place. By mandating transparency (like labeling AI content and bots) and accountability measures, the Act seeks to demystify AI decisions and enable scrutiny. This ties into the broader goal of fostering public confidence in AI-driven technologies so that adoption of AI will not be hampered by fear or scandals.
  • Balancing Innovation with Regulation: Policymakers have repeatedly stated they do not want to strangle innovation – rather, they aim to support AI development in a responsible way. The Act’s risk-based approach was designed to be proportionate: low-risk AI faces no new rules, and even for high-risk AI, the idea is to manage risks, not ban the tech. The EU included provisions to encourage innovation, such as AI “regulatory sandboxes” where startups and research projects can test AI systems in a controlled environment with reduced regulatory constraints.

When Does the EU AI Act Take Effect and How Is It Enforced?

The EU AI Act was politically approved in 2024 and is now on the path to implementation. It is a regulation (not a directive), meaning it is directly applicable in all EU member states without the need for national implementing legislation. There is a grace period before most of its provisions kick in, allowing companies time to prepare. The timeline is staged as follows:

  1. August 2024 – Entry into Force: The AI Act formally entered into force on 1 August 2024. From this date, the countdown begins, but the rules are not yet binding on companies (except for some early provisions).
  2. February 2, 2025 – Initial Obligations Begin: Six months after entry into force, certain provisions apply from 2 Feb 2025. Notably, prohibited AI practices become officially banned on this date. This means if any company were using (or planning to use) an AI system that falls under the banned category (unacceptable risk), it must cease by this date.
  3. August 2, 2025 – General-Purpose AI and Governance: One year after entry, rules related to General Purpose AI (GPAI) models apply from 2 Aug 2025. This means providers of large AI models (e.g. foundation models like GPT-type models) have to start complying with the transparency, data disclosure, and risk mitigation duties for GPAI by this date.
  4. August 2, 2026 – Main Requirements Become Enforceable: The bulk of the Act’s provisions – especially those on high-risk AI systems – become fully applicable 24 months after entry into force, i.e. by 2 Aug 2026. This is the date by which providers of high-risk AI must meet all the mandatory requirements (risk management, data quality, documentation, etc.), and high-risk systems will need to have undergone conformity assessments and be registered.
  5. August 2, 2027 – Extended Deadline for Certain AI in Products: There is an extra grace period until 2 Aug 2027 (36 months after entry into force) for certain high-risk AI that is embedded in products regulated by other EU safety laws.

Enforcement of the AI Act will be carried out by a combination of national authorities and a new central body:
  • National Supervisory Authorities: Each EU country will designate one or more authorities (existing or new) to supervise AI in their jurisdiction. These bodies will handle compliance monitoring, investigations, and can issue orders or fines to violators. They will inspect documentation, test systems if needed, and respond to complaints (the Act allows individuals to file complaints about AI systems).
  • European AI Office and Board: To ensure consistency, the Act establishes a European AI Office (an EU-level body) and a European Artificial Intelligence Board made up of representatives from member states. The AI Office (set up in 2024) is empowered to oversee general-purpose AI providers in particular.
  • Penalties: The AI Act introduces hefty fines for non-compliance, tiered by severity of the offense. Engaging in a prohibited AI practice (the worst violations) can draw fines up to €35 million or 7% of the company’s worldwide annual turnover, whichever is higher. For other violations (like not meeting high-risk requirements or transparency obligations), fines can go up to €15 million or 3% of global turnover. These ceilings exceed GDPR’s fines (which max out at 4% of global turnover), signaling how seriously the EU views AI regulation. A short worked example of the “whichever is higher” calculation follows this list.
  • Inspections and Audits: We can expect that, as the provisions become applicable, authorities will conduct both proactive audits (especially of registered high-risk systems) and reactive investigations (for example, if a high-risk AI causes an incident or is the subject of a complaint). Companies providing high-risk AI must be prepared to provide their technical documentation and logs to regulators on request and demonstrate how they comply. The Act also requires providers to report serious incidents or malfunctions, much as medical device makers must report adverse events.
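
To make the “whichever is higher” fine mechanics concrete, here is a small illustrative calculation in Python. The turnover figure is a made-up example; the caps and percentages are the ones described above.

```python
def fine_ceiling_eur(worldwide_turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Upper bound of a fine tier: the higher of a fixed amount or a share of worldwide turnover."""
    return max(fixed_cap_eur, worldwide_turnover_eur * turnover_share)

# Hypothetical company with €2 billion worldwide annual turnover.
turnover = 2_000_000_000

# Prohibited-practice tier: up to €35 million or 7% of turnover, whichever is higher.
print(fine_ceiling_eur(turnover, 35_000_000, 0.07))  # 140000000.0 -> the 7% figure dominates

# Other violations (e.g. unmet high-risk or transparency obligations): up to €15 million or 3%.
print(fine_ceiling_eur(turnover, 15_000_000, 0.03))  # 60000000.0
```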

Practical Implications for Companies and How to Prepare

Steps companies can take to prepare for the AI Act’s requirements:
  1. Inventory and Classify Your AI Systems: Begin with a thorough audit of all AI and automated systems in your products, services, or internal operations. Identify which systems might fall under the Act’s definitions and what risk category they belong to. For each AI use-case, ask: Does this system’s output impact people in ways the Act deems high-risk (e.g. affecting their safety or rights)? This inventory forms the foundation for compliance: you can’t manage risk or fill gaps if you don’t know what AI you’re using. (A minimal inventory sketch appears after this list.)
  2. Implement AI Governance and Policies: Treat AI systems with the same rigor as other regulated domains by establishing AI governance. This means updating or creating internal policies and standard operating procedures for AI development and deployment. Key elements include: AI design principles (aligned with the Act’s requirements – e.g. principles of transparency, fairness, human oversight), policies that translate these into practice (for data management, model validation, etc.), and standards or best practices to follow (you can leverage emerging frameworks like the ISO 42001 AI management system standard or NIST’s AI Risk Management Framework).
  3. Conduct a Gap Analysis and Remediation Plan: With your inventory and governance structures in hand, perform a gap analysis to see where current AI systems and processes diverge from what the AI Act will require. For each high-risk system identified, assess its current status: Do you have proper documentation of the training data? Is there a user manual with required disclosures? Has the model been tested for bias or risk, and is there a risk mitigation plan? Are you logging outputs and decisions for traceability?
  4. Monitor Regulatory Developments and Engage: Keep in mind that practical compliance will be aided by upcoming standards, guidelines, and possibly a Code of Conduct. Stay engaged with industry groups, standards organizations (like CEN/CENELEC, ISO, IEEE), and the official AI Act Service Desk.
  5. Integrate AI Act Compliance into Company Strategy: Given the potential fines and market impact, compliance isn’t just a legal issue – it’s a strategic one. Brief your C-suite and board about the AI Act’s implications (if they aren’t already aware) and ensure AI governance is backed from the top. 
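
As a concrete starting point for steps 1 and 3, the sketch below shows one way an AI inventory entry and its open compliance gaps could be recorded. It is a minimal illustration under assumed field names and checks, not an official checklist.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an internal AI inventory (illustrative fields only)."""
    name: str
    owner: str
    purpose: str
    risk_tier: str                        # e.g. "high-risk", "limited", "minimal"
    training_data_documented: bool = False
    user_disclosures_in_place: bool = False
    bias_and_risk_tested: bool = False
    output_logging_enabled: bool = False

    def open_gaps(self) -> list:
        """Return the gap-analysis items from step 3 that are still unresolved."""
        checks = {
            "training data documentation": self.training_data_documented,
            "required user disclosures": self.user_disclosures_in_place,
            "bias/risk testing and mitigation plan": self.bias_and_risk_tested,
            "output and decision logging": self.output_logging_enabled,
        }
        return [item for item, done in checks.items() if not done]

# Example: a CV-screening tool provisionally flagged as high-risk with most gaps still open.
record = AISystemRecord(
    name="cv_screening_tool",
    owner="HR Technology",
    purpose="Rank incoming job applications",
    risk_tier="high-risk",
    training_data_documented=True,
)
print(record.open_gaps())
```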


The Bottom Line

The EU AI Act introduces a structured set of obligations that companies will need to assess and integrate into their operations, especially those developing or deploying high-risk or general-purpose AI. Its impact will depend largely on how definitions are interpreted, how national authorities enforce the rules, and how supporting standards evolve.

For organizations placing AI systems on the EU market, the priority should be clarity: understanding how their systems are classified, what obligations apply, and where internal processes may need adjustment. As implementation proceeds, staying informed and preparing early can reduce disruption and help align AI efforts with emerging compliance expectations.
