<script type="application/ld+json">
{
 "@context": "https://schema.org",
 "@graph": [
   {
     "@type": "FAQPage",
     "@id": "https://www.koncile.ai/en/ressources/ai-act-will-change-everything-for-ai-projects#faq",
     "url": "https://www.koncile.ai/en/ressources/ai-act-will-change-everything-for-ai-projects",
     "name": "AI Act FAQ: compliance, risk levels, requirements",
     "mainEntity": [
       {
         "@type": "Question",
         "name": "What is the AI Act?",
         "acceptedAnswer": {
           "@type": "Answer",
           "text": "The AI Act is the EU regulation that governs artificial intelligence through a risk-based approach, with stricter obligations when a system’s potential impact is higher."
         }
       },
       {
         "@type": "Question",
         "name": "When does the AI Act apply?",
         "acceptedAnswer": {
           "@type": "Answer",
           "text": "Implementation is phased: prohibited practices are addressed first, then obligations for general-purpose AI models, and full compliance for high-risk systems between 2026 and 2027 depending on the use case."
         }
       },
       {
         "@type": "Question",
         "name": "Is my company affected by the AI Act?",
         "acceptedAnswer": {
           "@type": "Answer",
           "text": "You are likely affected if you develop, integrate, or use AI in Europe that automates decisions, processes sensitive data, or supports critical business workflows."
         }
       },
       {
         "@type": "Question",
         "name": "What are the four AI Act risk levels?",
         "acceptedAnswer": {
           "@type": "Answer",
           "text": "The AI Act defines four levels: unacceptable risk (prohibited), high risk (allowed with strict obligations), limited risk (transparency obligations), and minimal risk (free use)."
         }
       },
       {
         "@type": "Question",
         "name": "What is prohibited under the AI Act?",
         "acceptedAnswer": {
           "@type": "Answer",
           "text": "The regulation bans certain uses considered incompatible with fundamental rights, such as social scoring, crime prediction based solely on profiling, emotion recognition at work or in education (with limited exceptions), and certain biometric-related practices."
         }
       },
       {
         "@type": "Question",
         "name": "What is considered high-risk AI?",
         "acceptedAnswer": {
           "@type": "Answer",
           "text": "AI is considered high risk when used in sensitive domains (healthcare, recruitment, credit, insurance, justice, critical infrastructure, migration) where errors or bias can cause significant harm."
         }
       },
       {
         "@type": "Question",
         "name": "What obligations apply to high-risk AI systems?",
         "acceptedAnswer": {
           "@type": "Answer",
           "text": "Key obligations include data quality and bias mitigation, comprehensive technical documentation, human oversight, traceability through logging, cybersecurity safeguards, and in some cases registration in an EU database."
         }
       },
       {
         "@type": "Question",
         "name": "Are chatbots and synthetic content covered by the AI Act?",
         "acceptedAnswer": {
           "@type": "Answer",
           "text": "Yes, they typically fall under limited risk. The main requirement is transparency: users must be clearly informed they are interacting with AI or consuming AI-generated content."
         }
       },
       {
         "@type": "Question",
         "name": "What does “transparency obligation” mean for limited-risk AI?",
         "acceptedAnswer": {
           "@type": "Answer",
           "text": "It means clearly disclosing AI use in a way users can understand, to avoid misleading them, especially for chatbots, synthetic voices, and manipulated or generated media."
         }
       },
       {
         "@type": "Question",
         "name": "What does “human-in-the-loop” mean in practice?",
         "acceptedAnswer": {
           "@type": "Answer",
           "text": "It means meaningful human oversight: the ability to review, validate, correct, or stop automated decisions, especially when AI influences high-impact outcomes."
         }
       },
       {
         "@type": "Question",
         "name": "What is General-Purpose AI (GPAI)?",
         "acceptedAnswer": {
           "@type": "Answer",
           "text": "General-Purpose AI (GPAI) refers to AI models designed for many different tasks (e.g., large language models). The AI Act adds specific transparency, safety, and copyright-related obligations for these models."
         }
       },
       {
         "@type": "Question",
         "name": "What transparency is required for GPAI models?",
         "acceptedAnswer": {
           "@type": "Answer",
           "text": "GPAI providers must publish a sufficiently detailed summary of training data and enable legitimate rights claims (e.g., copyright and personal data), while they may protect certain information as trade secrets."
         }
       },
       {
         "@type": "Question",
         "name": "What are the penalties for non-compliance?",
         "acceptedAnswer": {
           "@type": "Answer",
           "text": "Fines can reach up to 7% of global annual turnover depending on severity. Companies also face legal, operational, and reputational risks."
         }
       },
       {
         "@type": "Question",
         "name": "How should companies start preparing for the AI Act?",
         "acceptedAnswer": {
           "@type": "Answer",
           "text": "Start by mapping AI use cases, classifying them by risk level, then implement documentation, data governance, human oversight, traceability, and security measures appropriate to each category."
         }
       }
     ]
   },
   {
     "@type": "HowTo",
     "@id": "https://www.koncile.ai/en/ressources/ai-act-will-change-everything-for-ai-projects#howto",
     "url": "https://www.koncile.ai/en/ressources/ai-act-will-change-everything-for-ai-projects",
     "name": "Quickly assess how the AI Act impacts an AI project",
     "description": "A 4-step path to identify risk level (prohibited, high risk, limited risk) and key obligations (transparency, traceability, human oversight).",
     "totalTime": "PT10M",
     "step": [
       {
         "@type": "HowToStep",
         "position": 1,
         "name": "Classify the system by risk level",
         "text": "Determine whether the use case falls under unacceptable, high, limited, or minimal risk based on context and potential impact on people."
       },
       {
         "@type": "HowToStep",
         "position": 2,
         "name": "Rule out prohibited practices",
         "text": "Check that the use case is not among prohibited practices (social scoring, crime prediction based solely on profiling, emotion recognition at work/education, certain biometric uses)."
       },
       {
         "@type": "HowToStep",
         "position": 3,
         "name": "Check whether the activity is sensitive",
         "text": "If AI is used in healthcare, recruitment, credit/insurance, justice, critical infrastructure, or migration, treat it as potentially high risk."
       },
       {
         "@type": "HowToStep",
         "position": 4,
         "name": "Apply the relevant obligations",
         "text": "High risk: data governance, documentation, human oversight, logging/traceability, cybersecurity. Limited risk: user-facing transparency (chatbots, synthetic voice/video)."
       }
     ]
   },
   {
     "@type": "BreadcrumbList",
     "@id": "https://www.koncile.ai/en/ressources/ai-act-will-change-everything-for-ai-projects#breadcrumbs",
     "itemListElement": [
       {
         "@type": "ListItem",
         "position": 1,
         "name": "Resources",
         "item": "https://www.koncile.ai/en/ressources/"
       },
       {
         "@type": "ListItem",
         "position": 2,
         "name": "AI Act: Why 2026 Will Change Everything for AI Projects",
         "item": "https://www.koncile.ai/en/ressources/ai-act-will-change-everything-for-ai-projects"
       }
     ]
   },
   {
     "@type": "WebPage",
     "@id": "https://www.koncile.ai/en/ressources/ai-act-will-change-everything-for-ai-projects#webpage",
     "url": "https://www.koncile.ai/en/ressources/ai-act-will-change-everything-for-ai-projects",
     "name": "AI Act: Why 2026 Will Change Everything for AI Projects",
     "inLanguage": "en",
     "isPartOf": {
       "@type": "WebSite",
       "@id": "https://www.koncile.ai/#website",
       "name": "Koncile",
       "url": "https://www.koncile.ai/"
     }
   }
 ]
}
</script>

AI Act: why 2026 will change everything for AI projects

Last updated:

December 22, 2025

5 minutes

On March 13, 2024, the European Union adopted the AI Act, the world's first comprehensive regulation dedicated to artificial intelligence, marking a historic turning point comparable to that of the GDPR. From 2026, all companies developing, integrating, or using AI systems in Europe will have to prove that their models are traceable, explainable, and controlled. The aim is to put an end to AI that is opaque, uncontrolled, and legally risky.

Unsupervised automations, decisions that are impossible to explain, “black box” models: what was tolerated yesterday is becoming a major legal, financial, and reputational risk. The AI Act thus redefines the rules of the game and imposes a new strategic question on businesses: who will be ready in time, and who will discover the cost of non-compliance too late?


The core principle of the AI Act: regulation based on risk

The AI Act does not regulate artificial intelligence in a uniform way. Instead, it is built on a simple but decisive principle: the higher the potential impact of an AI system on individuals, rights, or society, the stricter the regulatory requirements.

This marks a clear shift compared to previous technology regulations. The key question is no longer whether a company uses AI, but how, where, and with what consequences. In other words, the regulation focuses less on the technology itself and more on its real-world use and effects.

Many organizations talk about the AI Act without having actually read it. That is hardly surprising. The regulation is long, dense, and highly technical. Yet understanding its operational logic does not require reading every article line by line.

Even the “high-level summaries” published by the European Commission remain difficult to digest for product, engineering, or business teams. This is why a more practical approach is to reason through the AI Act using a series of structured questions, starting with risk classification.

The four risk levels defined by the AI Act

At the heart of the AI Act lies a four-tier risk classification. This is the starting point for any compliance analysis.

Unacceptable risk – prohibited
Certain AI uses are outright banned because they are considered incompatible with fundamental rights. These include social scoring, crime prediction based solely on profiling, real-time facial recognition without strict safeguards, and emotion recognition in the workplace or in education.

High risk – allowed but strictly regulated
High-risk systems are permitted, but only under strict conditions. They cover use cases where errors or biases could have serious legal, financial, or social consequences, such as healthcare, recruitment, credit, insurance, justice, critical infrastructure, or migration management. These systems are subject to strong governance and compliance obligations.

Limited risk – transparency obligations
When AI systems interact directly with humans, users must be clearly informed. This applies to chatbots, synthetic voices, and AI-generated or manipulated content.

Minimal risk – free use
Low-impact use cases such as spam filters, basic recommendation engines, or video games are not subject to specific regulatory constraints.
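As a mental model, the four tiers and their headline obligations fit in a few lines of code. Here is a minimal Python sketch (the tier names and obligation lists paraphrase the regulation's logic; they are illustrative, not an official taxonomy):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed under strict obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # free use

# Headline obligations per tier, paraphrased from the AI Act's logic.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["abandon or fundamentally redesign the system"],
    RiskTier.HIGH: [
        "data quality and bias mitigation",
        "comprehensive technical documentation",
        "meaningful human oversight",
        "logging and decision traceability",
        "cybersecurity safeguards",
        "EU database registration (in some cases)",
    ],
    RiskTier.LIMITED: ["clearly disclose AI use to users"],
    RiskTier.MINIMAL: [],  # no AI-Act-specific constraints
}

print(OBLIGATIONS[RiskTier.HIGH])
```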

Quickly identifying prohibited AI use cases

Before diving into complex compliance efforts, the AI Act allows organizations to rule out certain scenarios early on. If a system falls under prohibited practices, compliance is not an option; the system must be abandoned or fundamentally redesigned.

For most companies, this step is reassuring. If your AI does not score citizens, predict crimes based on personality traits, or infer employee emotions, you can move forward. This clarification helps reduce much of the anxiety surrounding the regulation.
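This early triage can be encoded as an explicit checklist. The sketch below uses hypothetical boolean flags to summarize the banned practices named above; real screening would of course involve legal review:

```python
# Hypothetical flags summarizing the prohibited practices described above.
PROHIBITED_FLAGS = [
    "scores_citizens",            # social scoring
    "predicts_crime_by_profile",  # crime prediction based solely on profiling
    "infers_emotions_at_work",    # emotion recognition at work or in education
    "realtime_remote_biometrics", # certain real-time biometric practices
]

def is_prohibited(use_case: dict) -> bool:
    """Return True if the use case matches any banned practice."""
    return any(use_case.get(flag, False) for flag in PROHIBITED_FLAGS)

# A typical back-office use case passes this screen.
invoice_ocr = {"scores_citizens": False, "infers_emotions_at_work": False}
assert not is_prohibited(invoice_ocr)
```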

Sensitive activities: where risk becomes real

The most critical zone begins when AI systems are deployed in so-called sensitive activities. In many real-world scenarios, these systems rely on large volumes of unstructured or semi-structured documents, such as medical records, identity documents, financial statements, or administrative files. Processing these inputs typically requires an OCR step to convert raw documents into usable data, which further amplifies the importance of accuracy, traceability, and human oversight under a regulatory framework like the AI Act.

Critical infrastructure, medical devices, and high-risk products expose individuals to physical harm. Systems used for student evaluation, recruitment, CV screening, credit, or insurance carry risks of discrimination and unequal treatment. AI applied to migration, justice, or democratic processes directly affects fundamental rights.

In these contexts, AI is no longer a simple optimization tool. It becomes a decision-making actor that must be tightly controlled.
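For document-heavy pipelines like these, one concrete control is routing low-confidence extractions to a human reviewer instead of letting them flow straight into a decision. A minimal sketch, assuming a hypothetical extraction step that returns a confidence score per field:

```python
CONFIDENCE_THRESHOLD = 0.90  # hypothetical cut-off; tune per field and per risk level

def route_extraction(fields: dict[str, tuple[str, float]]) -> dict:
    """Split extraction output into auto-accepted fields and a human-review queue."""
    accepted, needs_review = {}, {}
    for name, (value, confidence) in fields.items():
        if confidence >= CONFIDENCE_THRESHOLD:
            accepted[name] = value
        else:
            needs_review[name] = value  # a human validates before any decision is made
    return {"accepted": accepted, "needs_review": needs_review}

result = route_extraction({
    "iban": ("FR7612345678901234567890123", 0.99),
    "net_salary": ("2 340", 0.71),
})
print(result["needs_review"])  # {'net_salary': '2 340'}
```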

High-risk AI: reinforced and concrete obligations

When an AI system is classified as high risk, the AI Act leaves little room for interpretation. Compliance is based on clearly defined operational requirements.

Organizations must demonstrate data quality and bias mitigation, maintain comprehensive technical documentation, ensure meaningful human oversight, enable decision traceability through logging, protect systems against cyber threats, and, in some cases, register models in a European database.

These requirements fundamentally change how AI systems are designed. They favor structured, auditable, and governable architectures over opaque “black box” approaches. This shift aligns closely with the principles of intelligent document processing, which focus on traceability, explainability, and operational control across complex AI-driven workflows.
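Decision traceability, in particular, maps directly onto engineering practice: every automated decision should leave an auditable record. A minimal sketch of what such a log entry might contain (the field names are illustrative, not mandated by the regulation):

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    model_version: str           # which model produced the decision
    input_hash: str              # fingerprint of the input, not the raw data
    output: str                  # the decision or extraction result
    human_reviewer: str | None   # who validated it, if anyone
    timestamp: str

def log_decision(model_version: str, raw_input: bytes, output: str,
                 human_reviewer: str | None = None) -> str:
    """Serialize one auditable decision record; append to durable storage in practice."""
    entry = DecisionLogEntry(
        model_version=model_version,
        input_hash=hashlib.sha256(raw_input).hexdigest(),
        output=output,
        human_reviewer=human_reviewer,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(entry))

print(log_decision("extractor-v2.3", b"<invoice pdf bytes>", "total=1200.00 EUR"))
```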

Limited-risk AI: transparency as the key requirement

Not all AI projects fall into the high-risk category. Many common generative AI use cases are classified as limited risk.

In these situations, the core requirement is straightforward but non-negotiable: transparency. Users must be informed when they are interacting with an AI system or consuming AI-generated content. The goal is to preserve trust and prevent users from being misled, intentionally or not.
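In code, the obligation can be as simple as never returning a generated answer without an explicit disclosure. A minimal sketch, with a hypothetical generate_reply() standing in for the actual model call:

```python
AI_DISCLOSURE = "You are chatting with an AI assistant."

def generate_reply(message: str) -> str:
    # Placeholder for the real model call.
    return f"Echo: {message}"

def reply_with_disclosure(message: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn so users are never misled."""
    reply = generate_reply(message)
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

print(reply_with_disclosure("What are your opening hours?", first_turn=True))
```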

Generative AI and general-purpose models: the end of opacity

The AI Act also introduces a dedicated framework for General-Purpose AI (GPAI), including large language models. Providers of these models must publish sufficiently detailed summaries of their training data, respect copyright law, conduct safety testing, and declare models that pose systemic risk due to their scale or computational power.
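For teams building or fine-tuning their own models, the training-data summary can start life as structured metadata. A hypothetical manifest sketch; the official template is still being finalized, so every field below is an assumption:

```python
import json

# Hypothetical training-data summary manifest; the official EU template will differ.
TRAINING_DATA_SUMMARY = {
    "model": "acme-gpai-1",  # illustrative model name
    "data_sources": ["licensed news corpora", "filtered public web crawl"],
    "copyright_policy": "honors machine-readable opt-outs from rights holders",
    "personal_data": "PII filtered at ingestion; removal requests supported",
    "safety_testing": ["red-teaming", "bias and robustness evaluations"],
    "systemic_risk_declared": False,  # depends on the model's scale and compute
}

print(json.dumps(TRAINING_DATA_SUMMARY, indent=2))
```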

Although some implementation details are still being clarified, one principle is already clear: large-scale AI systems can no longer operate in total opacity within the European market.

Governance, timeline, and sanctions

Enforcement of the AI Act is overseen by the European AI Office, working alongside national supervisory authorities. Implementation follows a phased timeline: bans on prohibited practices come first, followed by obligations for general-purpose AI models, and finally full compliance for high-risk systems between 2026 and 2027.

Fines can reach up to 7% of global annual turnover, placing the AI Act on par with the GDPR in terms of financial and reputational impact.

In summary

The AI Act does not signal the end of AI in enterprise environments. It marks the end of uncontrolled AI. Organizations that invest early in data structuring, decision traceability, and human oversight will not merely achieve compliance. They will turn regulatory constraints into a durable strategic advantage.

FAQ — AI Act: what companies need to know
What is the AI Act?
The AI Act is the European regulation that governs the use of artificial intelligence through a risk-based approach. It aims to ensure that AI systems deployed in Europe are transparent, traceable, and properly controlled, especially when they impact people’s rights or safety.
When does the AI Act apply?
The AI Act was adopted in 2024, but its application is gradual. Prohibited practices are addressed first, followed by obligations for general-purpose AI models, and full compliance for high-risk systems between 2026 and 2027.
Is my company affected by the AI Act?
You are likely affected if you develop, integrate, or use AI systems in Europe that automate decisions, process sensitive data, or play a role in critical business workflows. Internal tools can also fall within scope if they influence human decisions.
What is the difference between high-risk and limited-risk AI?
High-risk AI is used in sensitive contexts such as healthcare, recruitment, credit, or justice, and must comply with strict governance requirements. Limited-risk AI, such as chatbots or synthetic content, remains allowed but must meet transparency obligations.
Are chatbots and generative AI covered by the AI Act?
Yes. Most chatbots and generative AI systems fall under the limited-risk category and must clearly inform users that they are interacting with AI. Large general-purpose models are subject to additional transparency and safety requirements.
What does “human-in-the-loop” mean in practice?
It means that a human must be able to supervise, validate, correct, or stop automated decisions. The AI Act challenges fully autonomous systems that operate without meaningful human oversight, especially in high-impact scenarios.
What happens if a company is not compliant?
Non-compliance can lead to fines of up to 7% of global annual turnover, depending on the severity of the breach. Beyond financial penalties, companies also face legal, operational, and reputational risks.
How should companies start preparing for the AI Act?
Start by mapping your AI use cases, classifying them by risk level, and putting in place documentation, data governance, human oversight, traceability, and security measures. Early preparation significantly reduces future compliance costs.

Move to document automation

With Koncile, automate your extractions, reduce errors, and boost your productivity in a few clicks with AI-powered OCR.

Author and Co-Founder at Koncile
Jules Ratier

Co-founder at Koncile - Transform any document into structured data with LLMs - jules@koncile.ai

Jules leads product development at Koncile, focusing on how to turn unstructured documents into business value.