
Koncile is elected startup of the year by ADRA. The solution turns procurement documents into actionable data to detect savings, monitor at scale, and improve strategic decisions.
News
Last updated:
December 16, 2025
5 min read
On March 13, 2024, the European Union adopted the AI Act, the world's first comprehensive regulation dedicated to artificial intelligence, marking a historic turning point comparable to that of the GDPR. From 2026, all companies developing, integrating or using AI systems in Europe will have to prove that their models are traceable, explainable and controlled. The aim is to put an end to AI that is opaque, uncontrolled and legally risky. Unsupervised automations, decisions that are impossible to explain, "black box" models: what was tolerated yesterday is becoming a major legal, financial and reputational risk. The AI Act thus redefines the rules of the game and poses a new strategic question to businesses: who will be ready in time, and who will discover the cost of non-compliance too late?
The European AI Act regulates AI according to risk. Here is what's changing for businesses, the penalties involved, and the decisions to make.
The AI Act does not regulate AI uniformly. It follows a clear, proportional logic:
the higher the potential impact of a system, the stricter the requirements.
1. Unacceptable risk — forbidden
Some uses are banned because they are considered incompatible with fundamental rights. Examples: social scoring, predictive policing based on profiling, untargeted real-time facial recognition, emotion recognition at work or school.
2. High risk — authorized but strictly supervised
These are AI systems used in areas where an error can have a major legal, financial or social impact: health, recruitment, credit, insurance, justice, critical infrastructure, migration. These systems must comply with strict regulatory obligations: data quality and governance, human oversight, traceability, technical documentation, cybersecurity.
3. Limited risk — transparency requirement
When AI interacts directly with a human, the user must be clearly informed. Examples: chatbots, synthetic voices, deepfakes.
4. Minimal risk — free use without specific regulatory obligations
Common uses with low impact (anti-spam filters, simple recommendation engines, video games) are not subject to specific constraints.
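The four-tier logic above can be pictured as a simple triage. The sketch below is purely illustrative, using only the example uses and domains named in this article; the keyword buckets, the `triage` function and its parameters are assumptions for the sake of the example, and a real classification requires legal analysis, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "forbidden"
    HIGH = "authorized but strictly supervised"
    LIMITED = "transparency required"
    MINIMAL = "no specific obligations"

# Non-exhaustive buckets drawn from the examples in this article.
BANNED_USES = {"social scoring", "real-time facial recognition",
               "emotion detection at work"}
HIGH_RISK_DOMAINS = {"health", "recruitment", "credit", "insurance",
                     "justice", "critical infrastructure", "migration"}
TRANSPARENCY_USES = {"chatbot", "synthetic voice", "deepfake"}

def triage(use_case: str, domain: str) -> RiskTier:
    """First-pass triage of an AI system against the AI Act's four tiers."""
    if use_case in BANNED_USES:          # incompatible with fundamental rights
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:      # strict obligations apply
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:    # users must be informed
        return RiskTier.LIMITED
    return RiskTier.MINIMAL              # free use

print(triage("chatbot", "e-commerce"))            # RiskTier.LIMITED
print(triage("document extraction", "recruitment"))  # RiskTier.HIGH
```

Note how the checks are ordered from strictest to most permissive: a banned use stays banned whatever the domain, and a sensitive domain outranks a mere transparency duty.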
The AI Act introduces a specific framework for general-purpose AI (GPAI), such as large language models.
These models must meet dedicated obligations, including technical documentation, transparency about training data, and respect for EU copyright law.
“Black box” models are becoming legally risky when used on a large scale or in sensitive contexts.
Systems that take automated decisions (validation, rejection, scoring, control) must include human oversight, traceability of each decision, and the ability to explain outcomes.
The "automate everything, then look at it" model is no longer viable.
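A minimal sketch of what supervised automation can look like in practice: every decision leaves an auditable record, and borderline cases are escalated to a human instead of being silently auto-decided. All names here (`decide`, the threshold, the model version string) are hypothetical illustrations, not a prescribed implementation.

```python
import time
from dataclasses import dataclass, asdict

@dataclass
class Decision:
    input_id: str
    outcome: str            # "approved", "rejected", or "needs_human_review"
    score: float
    model_version: str      # which model produced this decision (traceability)
    timestamp: float
    reviewed_by_human: bool

AUDIT_LOG: list[dict] = []  # in practice: append-only, durable storage

def decide(input_id: str, score: float, threshold: float = 0.8) -> Decision:
    """Automated decision with traceability and a human-review escape hatch."""
    if score >= threshold:
        outcome, human = "approved", False
    elif score <= 1 - threshold:
        outcome, human = "rejected", False
    else:
        # Ambiguous cases are escalated rather than auto-decided.
        outcome, human = "needs_human_review", True
    d = Decision(input_id, outcome, score, "v1.4.2", time.time(), human)
    AUDIT_LOG.append(asdict(d))  # every decision is logged, not just errors
    return d

print(decide("doc-001", 0.95).outcome)  # approved
print(decide("doc-002", 0.50).outcome)  # needs_human_review
```

The point is structural: the audit trail and the escalation path are built into the decision function itself, rather than bolted on after an incident.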
Businesses will need to demonstrate that their systems are traceable, explainable and under human control.
This directly concerns a wide range of businesses. You are most likely concerned if your AI makes or influences decisions about individuals, operates in a sensitive sector such as health, recruitment, credit or insurance, or interacts directly with users. In these cases, compliance will not be optional.
The AI Act doesn't just impose legal compliance. It structurally favors AI architectures that are traceable, explainable and supervised. Conversely, it weakens solutions that are opaque, difficult to audit or impossible to explain.
In the medium term, compliance becomes a trust signal for customers, partners and regulators.
A European AI Office will oversee the application of the text. The timeline is progressive: prohibitions apply from February 2025, obligations for general-purpose models from August 2025, and most remaining provisions from August 2026.
Penalties can reach 7% of global turnover, which places the AI Act on a par with the GDPR in terms of stakes.
The AI Act does not mark the end of AI in business. It marks the end of unmastered AI. Organizations that invest now in traceability, explainability and human oversight will not only comply: they will gain a head start.
Move to document automation
With Koncile, automate your extractions, reduce errors and optimize your productivity in a few clicks thanks to AI OCR.