AI Act: why 2026 will change everything for AI projects

Last updated:

December 16, 2025

5 minutes

On March 13, 2024, the European Union adopted the AI Act, the world's first comprehensive regulation dedicated to artificial intelligence, marking a historic turning point comparable to the GDPR. From 2026, any company developing, integrating or using AI systems in Europe will have to prove that its models are traceable, explainable and controlled. The aim is to put an end to AI that is opaque, uncontrolled and legally risky. Unsupervised automations, decisions that are impossible to explain, “black box” models: what was tolerated yesterday is becoming a major legal, financial and reputational risk. The AI Act thus redefines the rules of the game and poses a new strategic question to businesses: who will be ready in time, and who will discover the cost of non-compliance too late?

The key principle of the AI Act: regulation through risk

The AI Act does not regulate AI uniformly. It follows a clear and proportional logic:

the higher the potential impact of a system, the stricter the requirements.

The 4 levels of risk defined by the AI Act

1. Unacceptable risk — forbidden

Some uses are banned outright because they are considered incompatible with fundamental rights. Examples: social scoring, predicting criminal behavior through profiling, real-time facial recognition in public spaces, and emotion recognition in the workplace or in schools.

2. High risk — authorized but strictly supervised

These are AI systems used in areas where an error can have a major legal, financial or social impact: health, recruitment, credit, insurance, justice, critical infrastructure, migration. These systems must comply with strict regulatory obligations: data quality and governance, human oversight, traceability, technical documentation, cybersecurity.

3. Limited risk — transparency requirement

When an AI system interacts directly with a human, the user must be clearly informed. Examples: chatbots, synthetic voices, deepfakes.

4. Minimal risk — free use without specific regulatory obligations

Common uses with low impact (anti-spam filters, simple recommendation engines, video games) are not subject to specific constraints.
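To make this proportional logic concrete, here is a minimal sketch of how a company might triage its AI use cases into these four tiers when building an internal inventory. The mapping and names below are hypothetical simplifications for illustration, not legal advice; real classification requires case-by-case legal analysis.

```python
# Hypothetical triage of AI use cases into the AI Act's four risk tiers.
# This only illustrates the proportional logic of the regulation.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "forbidden"
    HIGH = "authorized, strictly supervised"
    LIMITED = "transparency required"
    MINIMAL = "no specific obligations"

# Assumed mapping, following the examples given in the article above.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "emotion_detection_at_work": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the presumed tier, defaulting to HIGH pending legal review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(triage("credit_scoring").value)  # "authorized, strictly supervised"
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: it forces a review rather than silently assuming a system is unregulated.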

Generative AIs and generalist models: the end of opacity

The AI Act introduces a specific framework for General Purpose AI (GPAI), such as large language models.

These models must:

  • publish a high-level summary of their training data,
  • respect copyright,
  • provide safety tests,
  • declare models presenting systemic risk (identified by very high training compute).

“Black box” models are becoming legally risky when used on a large scale or in sensitive contexts.

What the AI Act is really changing for businesses

1. Pure automation becomes risky without human supervision

Systems that make automated decisions (validation, rejection, scoring, control) must include:

  • human validation mechanisms,
  • explanatory abilities,
  • traceability of decisions.

The “automate everything, review later” model is no longer viable.
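As an illustration of what such supervision can look like in practice, here is a minimal sketch, with hypothetical names and thresholds, of a human-in-the-loop gate: low-confidence decisions are escalated to a reviewer, and every outcome is appended to an audit log for traceability.

```python
# A human-in-the-loop gate: auto-decide only when the model is confident,
# escalate ambiguous cases to a human, and log every decision for audit.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Decision:
    input_id: str
    outcome: str          # "approved", "rejected" or "pending_review"
    confidence: float
    decided_by: str       # "model" or a human-review queue
    explanation: str      # short human-readable rationale
    timestamp: float

def decide(input_id: str, score: float, threshold: float = 0.9) -> Decision:
    """Automate only clear-cut cases; route the ambiguous band to a human."""
    if score >= threshold:
        d = Decision(input_id, "approved", score, "model",
                     f"score {score:.2f} >= threshold {threshold:.2f}", time.time())
    elif score <= 1 - threshold:
        d = Decision(input_id, "rejected", score, "model",
                     f"score {score:.2f} <= threshold {1 - threshold:.2f}", time.time())
    else:
        # Ambiguous case: block the automatic path, hand off to a reviewer.
        d = Decision(input_id, "pending_review", score, "human_queue",
                     "ambiguous confidence band, human validation required",
                     time.time())
    # Traceability: append every decision to an append-only audit log.
    with open("decision_audit.jsonl", "a") as log:
        log.write(json.dumps(asdict(d)) + "\n")
    return d
```

The threshold value and the log format are design choices; what matters is that ambiguous cases never flow through unattended and that every decision leaves an explainable trace.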

2. Data quality and traceability are becoming central

Businesses will need to demonstrate:

  • where the data comes from,
  • how it is cleaned,
  • how biases are controlled,
  • how each decision can be audited.

This directly concerns:

  • OCR,
  • document automation,
  • document classification,
  • financial and purchasing workflows.
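For document pipelines specifically, one possible way to answer these audit questions is to attach a provenance record to every extracted value. The sketch below is a hypothetical illustration (all names and steps are assumptions), not a description of any particular product.

```python
# A provenance record attached to each value extracted from a document,
# so "where does the data come from" and "how was it cleaned" can be
# answered during an audit.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Provenance:
    source_file: str                 # original document
    page: int                        # location of the extracted value
    extractor: str                   # e.g. an OCR engine identifier
    cleaning_steps: List[str] = field(default_factory=list)

@dataclass
class ExtractedField:
    name: str
    raw_value: str
    value: str
    provenance: Provenance

def normalize_amount(f: ExtractedField) -> ExtractedField:
    """Example cleaning step: normalize a monetary amount and record it."""
    f.value = f.raw_value.replace(" ", "").replace(",", ".")
    f.provenance.cleaning_steps.append(
        "normalize_amount: stripped spaces, comma -> dot")
    return f

# Usage: every field carries its own audit trail.
f = ExtractedField(
    name="total_amount",
    raw_value="1 234,50",
    value="",
    provenance=Provenance("invoice_0042.pdf", page=1, extractor="ocr-engine-v3"),
)
f = normalize_amount(f)
print(f.value, f.provenance.cleaning_steps)
```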

3. Are you affected by the AI Act?

You are most likely affected if your AI:

  • automates a control or a decision,
  • processes financial, contractual or personal data,
  • feeds a critical business process,
  • works on a large scale without systematic human validation.

In these cases, compliance will not be optional.

From compliance to strategic advantage

The AI Act doesn't just impose legal compliance. It structurally favors certain types of AI architecture:

  • explainable systems,
  • documented pipelines,
  • traceable decisions,
  • native integration of human-in-the-loop controls.

Conversely, it weakens solutions that are opaque, hard to audit, or impossible to explain.

In the medium term, compliance becomes a trust signal for customers, partners and regulators.

Governance, timetable and sanctions

A European AI Office will oversee the application of the text. The timetable is phased:

  • 6 months after entry into force: prohibition of unacceptable-risk uses,
  • 12 months: obligations for GPAIs,
  • 24 to 36 months: compliance of high-risk systems.

Penalties can reach €35 million or 7% of global turnover, whichever is higher, which places the AI Act on par with the GDPR in terms of stakes.

In summary

The AI Act does not mark the end of AI in business. It marks the end of unmastered AI. Organizations that invest now in:

  • data structuring,
  • decision traceability,
  • integrated human controls,
  • responsible AI architectures,

will not only be compliant: they will gain a head start.

Move to document automation

With Koncile, automate your extractions, reduce errors and optimize your productivity in a few clicks thanks to AI OCR.

Jules Ratier
Author and Co-Founder at Koncile

Co-founder at Koncile - Transform any document into structured data with LLM - jules@koncile.ai

Jules leads product development at Koncile, focusing on how to turn unstructured documents into business value.

Koncile was named startup of the year by ADRA. The solution turns procurement documents into actionable data to detect savings, monitor at scale, and improve strategic decisions.
