Why the AI Act belongs on the executive agenda in 2026
If your company sells a product in which AI performs a function — a chatbot in customer support, a model that pre-sorts job applications, a recommender in your shop, a RAG system in a maintenance app, a classifier in quality control — then EU Regulation 2024/1689, the AI Act, defines what that product must satisfy to remain on the EU market. This is not a research-team detail. It is market access, liability and penalties of up to €35 million or 7 % of global annual turnover.
The AI Act entered into force on 1 August 2024 and applies in four stages. Prohibited AI practices (Article 5) and the AI literacy obligation for staff (Article 4) have been applicable since 2 February 2025. Obligations for providers of general-purpose AI models (GPAI, Articles 51–56) have been applicable since 2 August 2025. Most high-risk obligations under Annex III become applicable on 2 August 2026 — about three months from the publication of this article. The high-risk obligations under Annex I (products already covered by other EU safety law) follow on 2 August 2027. The decisions you make in the next few months will determine whether you ship on time or retrofit under pressure.
This article describes what the AI Act actually requires — based on the consolidated text in EUR-Lex and the supporting guidelines from the European Commission and the EU AI Office (status May 2026) — and which decisions belong on the table now. It is not a legal handbook. It is a decision aid for management, IT leadership and product owners — with enough technical depth to assess the effort estimates your engineering team will produce.
What is the AI Act? — In brief
The AI Act (officially: Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence) is the world's first comprehensive AI regulation. It was published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024. It is a regulation, not a directive — it applies directly in every EU Member State and does not need to be transposed into national law. National implementing legislation only designates which authority supervises and how penalties are imposed.
At its core the AI Act is risk-based. It distinguishes four levels: prohibited (AI practices so harmful they cannot be placed on the market), high-risk (AI systems with significant potential for harm — strict obligations), limited-risk (transparency obligations, e.g. for chatbots and deepfakes) and minimal-risk (no specific AI obligations, but other laws such as GDPR continue to apply). A separate track applies to providers of general-purpose AI models — the foundation models such as GPT, Claude, Gemini, Mistral or Llama that underpin most products built today.
Three points carry the whole regulation:
- Scope: Whoever places an AI system on the EU market, puts it into service or uses it in the EU is in scope — regardless of where the company is based. A US provider whose model is used in the EU is in scope.
- Obligations by role: Providers (the parties who develop or place an AI system on the market) carry most obligations. Deployers (those who use AI in a professional context) have fewer but specific obligations — particularly for high-risk systems.
- Penalties: Up to €35 million or 7 % of global annual turnover (prohibited practices), €15 million or 3 % (other obligations and GPAI), €7.5 million or 1 % (false information to authorities) — whichever is higher.
Risk classes — where does your AI sit?
The first question on every project: which risk class applies to the system we are building or deploying? The answer changes the effort by orders of magnitude.
Prohibited AI practices (Article 5) — applicable since 2 February 2025
Article 5 lists practices that may not be placed on the EU market — regardless of how economically attractive they would be. Examples:
- social scoring by public authorities or private companies that produces detrimental treatment in an unrelated life context
- biometric categorisation that infers sensitive attributes (political opinion, trade union membership, religion, sexual orientation)
- emotion recognition in workplace and educational settings (except for medical or safety reasons)
- subliminal manipulation or exploitation of vulnerabilities (e.g. of children or persons with disabilities)
- untargeted scraping of facial images from the internet to build biometric databases
- profile-based predictive policing
Building or deploying any of these triggers fines up to €35 million or 7 %. For most Mittelstand setups this is not an architecture question but a compliance check at project start: does the system do any of these things? If no, proceed. If yes, drop it.
High-risk AI Annex III — applicable from 2 August 2026
Annex III names eight areas in which AI is treated as high-risk. Most use cases that are commercially relevant in the DACH Mittelstand land here:
- employment (CV screening, applicant filtering, performance scoring, promotion or dismissal decisions, automated task allocation)
- education and vocational training (admission, assessment, exam monitoring)
- creditworthiness assessment of natural persons (excluding pure fraud detection)
- risk assessment in life and health insurance pricing
- critical infrastructure (transport, water, gas, electricity — safety functions)
- law enforcement, migration, justice administration, democratic processes
- access to essential private or public services
- biometric identification outside the prohibited practices
Anyone deploying AI in one of these areas falls under the strict obligations of Articles 8 to 15 — risk management system, data governance, technical documentation, logging, transparency, human oversight, accuracy and cybersecurity, conformity assessment, EU database registration. More on these below.
High-risk AI Annex I — applicable from 2 August 2027
Annex I covers AI used as a safety component in products that are already covered by EU sector-specific safety law — machinery, toys, recreational craft, lifts, radio equipment, pressure equipment, medical devices, in-vitro diagnostics, motor vehicles, civil aviation. Conformity assessment here is folded into the existing process under the relevant sector regulation. Build an AI-supported diagnostic model into a medical device, for example, and you run the MDR conformity process — extended by the AI-specific requirements of Articles 8 to 15. The deadline is later, the work is not less.
Limited-risk — transparency, not conformity
Systems that interact with natural persons (chatbots), generate synthetic content (deepfakes, AI-generated text and images) or perform emotion recognition outside the high-risk categories are subject to transparency obligations under Article 50: users must be informed that they are interacting with an AI system or that content is synthetic. The obligation likewise becomes applicable on 2 August 2026. In practice this means a clearly visible AI disclosure in the UI, machine-readable marking of synthetic media, clear labelling of AI-generated press copy. Architecturally manageable — but easy to forget.
General-purpose AI (GPAI, Articles 51–56) — applicable since 2 August 2025
A separate track covers providers of foundation models: OpenAI, Anthropic, Google, Mistral, Meta, Aleph Alpha and similar. They must maintain technical documentation, publish a training-data summary, demonstrate a copyright policy, and provide downstream providers with the information needed for their own compliance. Models with systemic risk (training compute above 10²⁵ FLOPs — as of May 2026, roughly five to fifteen models worldwide) carry additional obligations under Article 55 (model evaluations, adversarial testing, cybersecurity protections, serious-incident reporting). Most DACH Mittelstand companies will deploy GPAI models, not develop them. But the link is direct: as a downstream provider you need your GPAI provider's documentation to evidence your own Annex III compliance. More below.
Who's affected — and how
The AI Act distinguishes four roles with different obligations. Which one you carry is not always obvious — and most Mittelstand companies systematically underestimate it.
The provider is the natural or legal person that develops an AI system or places it on the market under its own name or trademark. The provider carries most of the load — conformity assessment, technical documentation, risk management system, EU database registration, CE marking. The deployer is the natural or legal person that uses an AI system in a professional context. Deployer obligations (Article 26) are narrower: use the system in line with provider instructions, maintain human oversight, retain logs for at least six months, inform affected employees, conduct a fundamental-rights impact assessment (FRIA, Article 27) where required. Importers and distributors have verification obligations — they must check that the product carries the required conformity markings before putting it on the EU market.
The practical trap: a deployer becomes a provider — and inherits all provider obligations — if under Article 25 they market an existing high-risk AI system under their own name, substantially change its intended purpose, or substantially modify it. Specifically: anyone who fine-tunes a GPAI model for a high-risk use case, rebrands a white-label AI, or uses an existing system for a purpose the original provider did not intend, becomes a provider themselves. A special rule applies to GPAI fine-tuning: only when the modification consumes more than one third of the original training compute does the fine-tuner become a GPAI provider in their own right. That threshold is high — most practical fine-tunes in the Mittelstand do not cross it. For a fine-tune intended for a high-risk application, however, the threshold is irrelevant: the Article 25 reclassification applies regardless of compute.
For many DACH Mittelstand companies the consequence is that they thought they were "just deployers" of a tool such as ChatGPT, Claude or a sector-specific AI service — and are in fact providers themselves the moment they adapt the system for a high-risk use or embed it in their own product. This classification belongs early in every AI project — before the first sprint, not after the audit.
What the AI Act actually requires
For high-risk AI systems under Annex III, the concrete obligations are set out in Chapter III, Section 2 (Articles 8–15). They are technology-neutral — the regulation specifies the outcome, not the tooling.
- Risk management system (Article 9): a continuous risk management process across the lifecycle — identification, assessment, mitigation, testing. Not a one-off risk analysis before market entry but an ongoing discipline.
- Data governance (Article 10): training, validation and test data must be relevant, representative, as error-free as practicable, and complete. Bias detection and mitigation are explicit.
- Technical documentation (Article 11): comprehensive documentation of the system architecture, training process, data provenance, performance metrics — in a form that is auditable.
- Record-keeping / logging (Article 12): automatic logging of relevant events across the operating life — for deployers retained for at least six months.
- Transparency and information to deployers (Article 13): deployers must understand clearly how the system works, its limitations, how to operate it.
- Human oversight (Article 14): the system must be designed so that effective human control is possible — including the ability to stop the system.
- Accuracy, robustness, cybersecurity (Article 15): in line with the state of the art, with documented performance values.
Before a system is placed on the market, it must pass conformity assessment. For most Annex III systems a self-assessment is possible, provided that harmonised standards or common specifications are applied. For certain biometric systems the regulation requires third-party assessment by a notified body. The result: an EU declaration of conformity and CE marking. The system is then registered in the EU database for high-risk AI systems under Article 71. After market placement, a post-market monitoring obligation applies: providers must actively observe whether their system continues to meet the requirements in the field — and report serious incidents to the competent authorities.
For GPAI providers a separate package applies under Article 53: technical model documentation, information for downstream providers, copyright policy for training-data sourcing, public training-data summary. For models with systemic risk (Article 55) additionally: model evaluations, adversarial testing, risk mitigation, cybersecurity protections, serious-incident reporting. The European Commission published the voluntary GPAI Code of Practice in July 2025 — signing it creates a presumption of compliance; not signing it requires another path of evidence.
Architecture implications — where this actually lands
The AI Act is not a compliance exercise that can be closed with a PDF at the end. It changes what a productive AI platform must do. Five points are decisive from an architecture perspective.
Risk management in code, not in a Word document
Article 9 requires continuous risk management across the lifecycle. If you ship an AI component through a CI/CD pipeline, this manifests in practice as follows: every model update goes through a risk re-evaluation; quality gates in the build test performance metrics against a baseline dataset; drift monitoring in production raises an alarm when the input distribution in the field systematically diverges from the training data; change management records a documented risk assessment per significant update. A risk management system that lives only as a document and not in the pipeline will not survive a serious audit.
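A minimal sketch of such a gate, assuming a scikit-learn-style model and a frozen baseline dataset — the metric names, the regression tolerance and the population-stability-index drift signal are illustrative choices, not anything Article 9 prescribes:

```python
# Hypothetical CI quality gate: block a release when the new model
# regresses against the frozen baseline dataset. Thresholds come from
# your own risk analysis -- the AI Act specifies the outcome, not the tooling.
import json
import sys

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

MAX_REGRESSION = 0.02  # tolerated drop per metric, documented in the risk file


def evaluate(model, X, y) -> dict:
    pred = model.predict(X)
    return {"accuracy": accuracy_score(y, pred), "f1": f1_score(y, pred)}


def quality_gate(model, X, y, baseline_path: str = "baseline_metrics.json") -> None:
    """Exit non-zero (failing the CI job) if any metric regresses too far."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    current = evaluate(model, X, y)
    for name, old in baseline.items():
        if current[name] < old - MAX_REGRESSION:
            print(f"GATE FAILED: {name} {current[name]:.3f} < baseline {old:.3f}")
            sys.exit(1)
    print("Quality gate passed:", current)


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index as a simple drift alarm for production
    telemetry; values above roughly 0.2 are commonly read as real drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e = np.clip(np.histogram(expected, edges)[0] / len(expected), 1e-6, None)
    a = np.clip(np.histogram(actual, edges)[0] / len(actual), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))
```

The point is not the metric zoo but the placement: the gate runs on every model update, and its result becomes a documented artefact of the risk management process.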
Data governance means: training lineage and a bias pipeline
Article 10 requires traceable training-data provenance, representativeness and bias handling. Architecturally that means a data catalogue that records data origin, licence status, anonymisation pipeline and processing steps per dataset. Bias detection as its own pipeline step — fairness metrics (e.g. demographic parity, equalised error rates between subgroups) tested against defined thresholds. When a new training run treats a subgroup worse than before, the quality gate fails. Reproducibility as engineering discipline: training code, dataset snapshot and hyperparameters versioned tightly enough that a model can be reproduced two years later.
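A sketch of the corresponding gate, with demographic parity as the example metric — the threshold and the choice of metric are use-case decisions your risk analysis has to justify, not defaults from the regulation:

```python
# Hypothetical bias gate as its own pipeline step: fail the training run
# when the gap in positive-prediction rates between subgroups exceeds a
# documented threshold. Metric and threshold are illustrative.
import numpy as np

MAX_PARITY_GAP = 0.10  # to be justified per use case in the risk file


def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [float(y_pred[group == g].mean()) for g in np.unique(group)]
    return max(rates) - min(rates)


def bias_gate(y_pred: np.ndarray, group: np.ndarray) -> None:
    gap = demographic_parity_gap(y_pred, group)
    if gap > MAX_PARITY_GAP:
        raise RuntimeError(
            f"Bias gate failed: demographic parity gap {gap:.3f} "
            f"exceeds threshold {MAX_PARITY_GAP}"
        )
```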
Audit trail: append-only event log for AI decisions
Article 12 requires automatic logging. The deployer retention obligation of at least six months (Article 26) makes that a storage problem. Architecturally that translates into an append-only event log per AI decision — input hash (pseudonymised in line with GDPR), model version, output, confidence score where applicable, human override and its rationale where applicable. The lifecycle data model with revision-proof audit trail that we build for industrial IoT platforms ports one-to-one to AI systems — the domain changes, the architecture pattern stays. Add a version registry that ties model, training snapshot, code revision and conformity documentation to each deployed release. When market surveillance asks during an incident which model version was active and how it was conformity-assessed, the answer must be available in minutes, not in weeks.
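A sketch of such an event record, assuming pseudonymisation happens upstream — the field names are ours; Article 12 defines what must be reconstructable, not the schema:

```python
# Hypothetical append-only decision log: one immutable JSON line per AI
# decision, never updated or deleted in place. In production this targets
# WORM storage or an append-only table rather than a local file.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class AIDecisionEvent:
    input_hash: str                      # pseudonymised reference to the input
    model_version: str                   # ties the decision to the version registry
    output: str
    confidence: float | None = None
    human_override: bool = False
    override_rationale: str | None = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def hash_input(payload: bytes, salt: bytes) -> str:
    """Salted hash so the raw input never enters the log (GDPR)."""
    return hashlib.sha256(salt + payload).hexdigest()


def log_decision(event: AIDecisionEvent, path: str = "decisions.log") -> None:
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")
```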
Transparency and human oversight in the UI
Articles 13 and 14 require that the system can be effectively supervised by a human — and that deployers know what they are dealing with. In the UI that means confidence indicators on every AI recommendation; clear escalation paths when confidence is low; an override function with a rationale field; a stop button for continuous systems. For limited-risk systems (chatbots, deepfakes) additionally the AI disclosure under Article 50: users must recognise that they are interacting with an AI or that content is synthetic. Machine-readable watermarks for AI-generated media are the state of the art (C2PA, SynthID and similar).
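On the backend, the escalation path reduces to a few lines. A sketch, assuming a model that returns a confidence score — the floor value and the review queue are placeholders for your own workflow:

```python
# Hypothetical confidence-based escalation (Article 14): below the floor,
# no automated decision is issued and a human reviewer takes the case.
CONFIDENCE_FLOOR = 0.85  # set and documented per use case


def decide(prediction: str, confidence: float, review_queue: list) -> str | None:
    if confidence < CONFIDENCE_FLOOR:
        review_queue.append({"prediction": prediction, "confidence": confidence})
        return None  # the human decision, with rationale, is logged as an override
    return prediction
```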
GPAI provider layer as architectural component
Embedding a foundation model such as GPT, Claude or Mistral in your own product creates an implicit supply-chain dependency. Pragmatically effective: a provider abstraction layer in the architecture — your application logic does not know "OpenAI", it knows "LLM provider with capability X". Concretely this buys you three things: the ability to switch providers when their obligations or pricing shift; a single place to centralise model cards and compliance documentation per provider; audit-trail entries that record which provider and model version answered a given request. In 2026 this separation is no longer optional — it is the precondition for keeping your compliance documentation up to date while models rotate at a half-yearly cadence and the obligations persist.
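A minimal sketch of that seam, assuming a chat-completion-style capability — the names are ours and mirror no specific vendor SDK:

```python
# Hypothetical provider abstraction: application logic depends on "an LLM
# provider", never on a vendor, and every call lands in the audit trail
# with provider and model version attached.
import hashlib
from typing import Protocol


class LLMProvider(Protocol):
    provider_name: str
    model_version: str

    def complete(self, prompt: str) -> str: ...


def answer(provider: LLMProvider, prompt: str, audit_log: list) -> str:
    output = provider.complete(prompt)
    audit_log.append({
        "provider": provider.provider_name,
        "model_version": provider.model_version,
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
    })
    return output
```

Swapping one vendor for another then touches one adapter class, not the application — and the audit trail keeps answering "which model said that" across the swap.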
One architecture — four regulations
The AI Act does not stand alone. It meets GDPR (especially Article 22 on the right not to be subject to an exclusively automated decision), the EU Data Act (applicable since 12 September 2025, giving users access to data their connected products generate), the Cyber Resilience Act (applicable from 11 December 2027), and the new Product Liability Directive 2024/2853, which from 9 December 2026 brings software, cloud functionality and AI systems within the strict-liability regime as products in their own right. Four regulations that touch the same data model — inputs, outputs, decisions, maintenance logs — and produce different obligations. A platform that bolts these on later builds technical debt. A platform in which data origin, purpose limitation, retention, access rights, model version and audit trail are first-class fields in the data model can answer all four without reinventing itself.
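What "first-class fields" means in practice, as a sketch — the field names are illustrative; the point is that each maps to an obligation in at least one of the four regulations:

```python
# Hypothetical governed record: compliance-relevant attributes live in the
# data model itself, not in a separate spreadsheet bolted on later.
from dataclasses import dataclass
from datetime import date


@dataclass
class GovernedRecord:
    payload_ref: str            # pointer to the actual data
    origin: str                 # provenance -- Data Act / AI Act data governance
    purpose: str                # purpose limitation -- GDPR
    retain_until: date          # retention -- GDPR / AI Act logging
    access_roles: list[str]     # access rights -- Data Act / GDPR
    model_version: str | None   # AI Act version binding, where AI touched it
    audit_trail_id: str         # link into the append-only event log
```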
Status of EU applicability and German implementation (May 2026)
The EU deadlines as of writing:
- 2 February 2025: Article 5 (prohibited practices) and Article 4 (AI literacy obligation) applicable — already in force.
- 2 August 2025: GPAI obligations under Chapter V applicable; Member States to designate national supervisory authorities; penalty regime active. The Commission published its Guidelines for providers of general-purpose AI models and the voluntary GPAI Code of Practice ahead of that date.
- 2 August 2026: high-risk obligations under Annex III and the transparency obligations under Article 50 applicable; Commission enforcement powers towards GPAI providers active.
- 2 August 2027: high-risk obligations under Annex I (safety components in products under EU sector law) applicable; GPAI models placed on the market before 2 August 2025 must be in full compliance.
The EU AI Office within DG CNECT in Brussels is the central enforcement body for GPAI obligations. Since 2025 it has issued guidelines on key concepts (the definition of "general-purpose AI", the interpretation of Articles 53/55, the training-data summary template). Its enforcement powers — requests for information, model access, fines up to €15 million or 3 % — become active on 2 August 2026.
German implementation is not yet in force as of May 2026. The KI-Marktüberwachungs- und Innovationsförderungsgesetz (KI-MIG) — the German national law implementing the AI Act — was approved by the Federal Cabinet on 11 February 2026, introduced into the Bundestag on 20 March 2026 and, after first reading, referred to the Committee for Digital Affairs and State Modernisation. The Bundesrat issued its statement on 1 April 2026 (BT-Drs. 21/5143) with several review requests. Neither chamber has finally adopted the law as of May 2026. The federal government is targeting an expedited process to bring the law into force before the EU deadline of 2 August 2026 — whether that holds is not certain.
Substantively, the KI-MIG designates the Bundesnetzagentur (BNetzA) as the central market surveillance authority. The BNetzA has built up the Coordination and Competence Centre for the AI Regulation (KoKIVO), which is already operational ahead of the formal legal basis. Since July 2025 the BNetzA has run a KI-Service Desk — an online tool with a compliance compass that helps SMEs in particular classify their AI systems; available at bundesnetzagentur.de/ki. For sector-specific cases (financial supervision, medical devices, data protection), the relevant sector authorities remain in charge — the BfDI for example for data-protection-relevant AI in federal authorities, the state data protection authorities for private-sector data processing. KoKIVO coordinates uniform interpretation across them.
In parallel with the EU level, the Commission runs the voluntary AI Pact — over 100 companies have committed to implementing core AI Act obligations earlier than legally required. From the DACH region, signatories include SAP and Aleph Alpha among others. Participation is voluntary and does not replace any legal obligation — it does signal serious engagement with the regulation to customers and partners.
Penalties
Article 99 sets a three-tier penalty regime — whichever of the two amounts is higher, measured against global annual turnover of the previous financial year:
- Up to €35 million or 7 % for breaches of the prohibited AI practices in Article 5
- Up to €15 million or 3 % for breaches of most other obligations — in particular Annex III high-risk obligations and GPAI obligations
- Up to €7.5 million or 1 % for incorrect, incomplete or misleading information to authorities or notified bodies
For SMEs and start-ups the lower of the two amounts applies — the inverse of the NIS2 and CRA construction, and a meaningful concession for Mittelstand setups. Member States designate the national supervisory and fining authority; in Germany the KI-MIG names the Bundesnetzagentur (with sector-specific exceptions).
What to do concretely in 2026
Three tasks belong on the table now — whether you build AI or deploy it.
First: build an AI inventory and classify by risk class. Which AI systems does your company use — self-built models, off-the-shelf tools, embedded LLM features in SaaS products? Which fall under Annex III? What role do you carry per system (provider, deployer, both)? The BNetzA compliance compass is a sensible starting point. A complete list — with risk class, role and next steps per system — is the basis of every further decision.
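Even a lightweight, structured inventory beats a wiki page. A sketch of what one entry could capture — the enum values follow the Act's risk tiers, the rest of the naming is ours:

```python
# Hypothetical inventory entry per AI system: enough structure to answer
# "which systems, which risk class, which role, what next" in one query.
from dataclasses import dataclass
from enum import Enum


class RiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"


class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    BOTH = "provider and deployer"


@dataclass
class AISystemEntry:
    name: str
    description: str
    risk_class: RiskClass
    role: Role
    annex_iii_area: str | None   # e.g. "employment"; None if not Annex III
    next_step: str               # e.g. "FRIA required", "Article 50 disclosure"
```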
Second: implement the AI literacy obligation (Article 4). The obligation has applied since 2 February 2025 — even without an explicit fine attached. Staff who deploy or operate AI systems need an appropriate understanding of how those systems work, their limits and their risks. "Appropriate" depends on context — a RAG system for internal research demands less than a model that filters job applicants. Documented training is the evidence. It also matters for civil liability: if an untrained employee causes harm with an AI system, the company is liable.
Third: for Annex III high-risk systems, use the three months left. By 2 August 2026 a high-risk system must be conformity-assessed, documented, registered in the EU database and CE-marked. Risk management system, data governance processes, logging architecture, audit trail, human oversight in the UI — these are not week-long tasks. Anyone without a plan in May 2026 must either prioritise hard or pull the system from the EU market by the deadline. For GPAI deployers add: collect the model cards, training-data summaries and compliance documentation from your GPAI provider in time — you need them for your own Annex III conformity case.
How we approach this
We don't sell "AI Act compliance as a service". Compliance is not a product — it is the consequence of a soundly built AI architecture. What we do: we build AI solutions in which the obligations from AI Act, GDPR, EU Data Act and CRA are not bolted on later but anchored in the data model, the auth layer and the audit trail. Append-only event log per AI decision, model-card pipeline with version binding, provider abstraction layer for GPAI vendors, human oversight in the UI with override and rationale path — as architecture patterns we ship in production.
Our AI development is set up as an architecture discipline: AI agents, RAG systems and custom machine-learning models integrated into the data model, authentication and logging — not as ChatGPT wrappers. EU and CH hosting, GDPR-compliant data handling, AI-Act-compliant architecture as the deliverable. Our B2B IoT project LITE BLOX shows the audit-trail pattern in industrial context: lifecycle data model with a birth snapshot per unit, append-only event log for security-relevant events, continuous telemetry. The same pattern carries AI use cases. If you are responsible for a comparable initiative — a greenfield AI project or a refactor of an existing ML system under AI Act obligations — let's talk about the architecture before the requirements document reaches the first sprint.
Resources
- EU Regulation 2024/1689 (AI Act), consolidated text: eur-lex.europa.eu/eli/reg/2024/1689
- European Commission, AI Act framework page: digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
- European Commission, AI Act Service Desk (official interpretive Q&A): ai-act-service-desk.ec.europa.eu
- European Commission, GPAI Code of Practice: digital-strategy.ec.europa.eu/en/policies/contents-code-gpai
- European Commission, Guidelines for providers of GPAI models: digital-strategy.ec.europa.eu/en/policies/guidelines-gpai-providers
- European Commission, AI Pact: digital-strategy.ec.europa.eu/en/policies/ai-pact
- Bundesnetzagentur, KI-Service Desk and compliance compass: bundesnetzagentur.de/ki
- BMDS, legislative process for the KI-MIG (KI-Marktüberwachungs- und Innovationsförderungsgesetz): bmds.bund.de — Gesetz zur Durchführung der KI-Verordnung
- Product Liability Directive (EU) 2024/2853: eur-lex.europa.eu/eli/dir/2024/2853
Status: May 2026. The German implementation (KI-MIG) is still in the parliamentary process at the time of writing — neither Bundestag nor Bundesrat has finally adopted it. At the EU level too, the interpretation of individual articles can still shift through implementing acts, harmonised standards, EU AI Office guidelines and decisions of national market surveillance authorities. For a legally binding assessment of your specific case, please consult qualified legal counsel.
IntegrIT Solutions
Your Partner for High-Quality Mobile Applications
Email: info@integritsol.de
About IntegrIT Solutions
IntegrIT Solutions is your specialized software agency for developing performant mobile applications. With solid experience in developing business apps for B2B clients, we combine technical competence with business understanding. Our apps are reliable, user-friendly, and deliver measurable business results.
