The August 2026 deadline is approaching

EU AI Act Classifications in Action

Explore 10 real-world AI use cases classified by our engine across all four risk tiers of the EU AI Act. Each example shows the full classification: risk tier, applicable articles, compliance actions, and GDPR intersections. The August 2026 enforcement deadline is approaching, so start preparing now.

Aug 2026

Full enforcement

€35 million

Maximum fine

10 examples

All risk tiers

COMPLIANCE DEADLINE

August 2026: Full Enforcement of the EU AI Act

From 2 August 2026, all high-risk AI systems must be fully documented, risk-assessed, and compliant with the EU AI Act (Regulation (EU) 2024/1689). Non-compliant organizations face fines of up to €35 million or 7% of worldwide annual turnover.

High-Risk AI Systems

All AI systems in Annex III categories, including employment, education, critical infrastructure, and law enforcement, must have complete technical documentation and conformity assessments.

Prohibited Practices

Social scoring, real-time biometric surveillance, and manipulative AI have been banned since February 2025. Continued violations carry the highest fines.

Transparency Obligations

AI systems that interact with people must disclose that they are AI. Content-generating systems must label their outputs as AI-generated.

Citizen Trustworthiness Scoring System

Stadtverwaltung Wien — Citizen Services

Prohibited
Use Case Input
Description

AI system that aggregates social media activity, financial records, and public behavior data to generate a trustworthiness score for citizens. Scores influence access to municipal services including public housing waitlists, library access tiers, and priority in administrative processing. Deployed across all municipal offices in the city.

Business Purpose

Improve allocation of municipal resources by prioritizing reliable citizens and reducing fraud in public service access.

Category: decision support · Prohibited · 78% confidence
Role: Provider & Deployer
Suggested Flags:
Personal Data · Customer Facing · Special Category Data · Confidential Data · Automated Decision Support
Compliance Actions
Do not deploy/continue operation in the described form; escalate immediately to legal/compliance leadership for an Art. 5 prohibited-practice assessment (this is a classification suggestion, not legal advice).
Freeze further data ingestion/model training related to resident “trustworthiness” scoring and prevent the score from being used in any service-access workflow.
Document the use case, data sources (social media/financial/behavioral), and decision impacts to support internal auditability and regulatory inquiry response.
If the business goal is fraud reduction, redesign toward narrowly-scoped, non-social-scoring anti-fraud controls (e.g., case-specific verification) without generalized citizen scoring; re-run an EU AI Act + GDPR assessment on the redesigned approach.
Engage the DPO and (if applicable) the city’s information security and ethics/governance bodies; prepare for potential supervisory authority engagement if already piloted.
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This use case constitutes a generalized “trustworthiness” score for residents, derived from broad behavioral data (including social media and public behavior) and used to influence access/priority to municipal services (housing waitlists, library access tiers, administrative processing). This closely matches the EU AI Act’s prohibited “social scoring” practice by a public authority, where individuals are scored based on behavior/characteristics and the score leads to detrimental or unfavourable treatment in access to services. Because it falls under Art. 5 prohibited practices, it should not be classified as high-risk/limited-risk; the prohibition prevails in the decision tree.
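To make the ordering described above explicit (the prohibition check comes before the Annex III high-risk check, which comes before the Art. 50 transparency check), here is a deliberately simplified decision-tree sketch. All field and function names are hypothetical, and the real tests in the Act are considerably more nuanced.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    prohibited_practice: bool       # e.g. generalized social scoring by a public body (Art. 5)
    annex_iii_area: str | None      # e.g. "employment", "essential_services", or None
    interacts_with_humans: bool     # chatbot-style interaction (Art. 50)
    generates_public_content: bool  # synthetic content disseminated externally (Art. 50)

def classify(uc: UseCase) -> str:
    if uc.prohibited_practice:          # the prohibition prevails over every other tier
        return "prohibited"
    if uc.annex_iii_area is not None:   # Annex III areas are treated as high-risk (Art. 6)
        return "high_risk"
    if uc.interacts_with_humans or uc.generates_public_content:
        return "limited_risk"           # transparency-only obligations (Art. 50)
    return "minimal_risk"

print(classify(UseCase(True, None, False, False)))           # -> prohibited
print(classify(UseCase(False, "employment", False, False)))  # -> high_risk
```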

Profiling detected — triggers automatic high-risk (Art. 6(2))

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

Real-Time Facial Recognition in Public Transit

Security Department — Münchner Verkehrsgesellschaft

Prohibited
Use Case Input
Description

Live facial recognition system installed across all U-Bahn stations in Munich to identify individuals in real-time against a watchlist database. Cameras continuously scan all passengers entering and exiting stations, matching faces against a database of persons of interest provided by law enforcement.

Business Purpose

Enhance public safety by identifying wanted individuals and preventing crime in the public transit network.

Vendor

SecurVision AG

Product

FaceTrack Live

Category: decision support · Prohibited · 78% confidence
Role: Deployer
Suggested Flags:
Personal Data · Customer Facing · External Sharing · Special Category Data · Confidential Data · Automated Decision Support
Compliance Actions
Do not deploy as described; escalate immediately to legal/compliance and senior management for an Art. 5 prohibited-practice assessment and decision (this is not legal advice).
Initiate an urgent review with the competent public authority/law enforcement counterpart to determine whether any narrow statutory exception could apply (e.g., strictly limited, authorized use cases) and whether the transit operator could lawfully act as deployer at all in this scenario.
If the objective remains public safety, redesign the use case away from real-time remote biometric identification in publicly accessible spaces (e.g., non-biometric measures, increased staffing, physical access controls, or post-event forensic workflows subject to lawful basis).
Freeze procurement/rollout steps (contracts, integration, data sharing) until the prohibited-practice risk is resolved; document the decision and rationale in internal governance records.
If any piloting has occurred, implement incident containment: stop processing, secure and segregate any captured biometric templates/images, and define deletion/retention actions consistent with instructions from legal/privacy teams and applicable law.
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This use case is continuous, real-time facial recognition across U-Bahn stations to identify individuals against a watchlist in publicly accessible spaces. Under the EU AI Act decision tree, this matches a prohibited practice under Art. 5: real-time remote biometric identification of people in public spaces (with only narrow, strictly conditioned exceptions typically tied to competent authorities and specific legal authorization). Because the intended use is blanket, continuous scanning of all passengers, it is treated as prohibited rather than merely high-risk biometrics under Annex III.

Profiling detected — triggers automatic high-risk (Art. 6(2))

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

AI Resume Screening for Recruitment

Human Resources — Deutsche Telekom AG

High Risk
Use Case Input
Description

Automated CV screening tool that parses incoming job applications, extracts qualifications and experience, and ranks candidates against job requirements. Produces a shortlist with match scores. Recruiters use the shortlist as primary input for deciding who to invite to interviews.

Business Purpose

Reduce time-to-hire from 45 to 20 days and ensure consistent, bias-reduced initial screening of 500+ applications per open position.

Vendor

Personio SE

Product

Personio Recruiting AI

Category: decision support · High Risk · 86% confidence
Annex III: Employment · Role: Deployer
Suggested Flags:
Personal Data · External Sharing · Special Category Data · Confidential Data · Automated Decision Support
Compliance Actions
Treat as a High-Risk AI use case (Annex III employment): do not rely on “HR tool” framing; document intended purpose as screening/ranking for recruitment decisions.
Confirm roles and allocate obligations contractually: Deutsche Telekom likely = deployer; Personio likely = provider. Ensure the vendor contract covers AI Act high-risk duties (instructions for use, performance limits, logging, post-market monitoring support, incident cooperation).
Implement deployer-side risk management and governance aligned to the provider’s instructions: local procedures for acceptance testing, change management, and periodic re-validation of model performance per role/location.
Human oversight (Art. 14): ensure recruiters are trained, can interpret match scores, and are required to meaningfully review/override; prohibit fully-automatic rejection/invitation based solely on score; implement second-look rules for edge cases (e.g., non-standard CVs, career breaks).
Data governance (Art. 10): assess training/operational data relevance and bias risks; define which features are allowed (avoid proxies for protected characteristics); implement data quality checks (parsing accuracy, language handling, missing data).
Logging/record-keeping (Art. 12): ensure the system logs inputs/versions, scores, ranking outputs, and user actions (override/accept) with retention aligned to HR/legal needs.
Transparency to affected persons and internal users (Art. 13): provide clear notices to candidates that AI is used in screening/ranking, what it does, and how to exercise rights; provide recruiters with clear documentation of score meaning/limitations.
Accuracy/robustness/cybersecurity (Art. 15): evaluate error rates (false negatives for qualified candidates), robustness across CV formats and languages, and access controls for applicant data; define performance thresholds and monitoring alerts.
Operational monitoring & vendor escalation: set KPIs (selection-rate disparities, parsing failure rates, override rates), periodic bias testing, and an incident process for suspected discrimination or material malfunction (a minimal monitoring sketch follows this list).
If any conversational interface is used with candidates/recruiters, add Art. 50 transparency disclosures (AI interaction notice).
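As a purely illustrative sketch of the monitoring KPIs named in the list above, the snippet below computes a selection-rate disparity (impact ratio) and a recruiter override rate from hypothetical screening logs. The field names, sample data, and the 0.80 alert threshold are assumptions, not requirements of the Act.

```python
from collections import Counter

# Hypothetical log records: one dict per screened application.
logs = [
    {"group": "A", "shortlisted": True,  "recruiter_overrode_ai": False},
    {"group": "A", "shortlisted": False, "recruiter_overrode_ai": True},
    {"group": "B", "shortlisted": False, "recruiter_overrode_ai": False},
    {"group": "B", "shortlisted": True,  "recruiter_overrode_ai": False},
    {"group": "B", "shortlisted": False, "recruiter_overrode_ai": True},
]

def selection_rates(records):
    totals, selected = Counter(), Counter()
    for r in records:
        totals[r["group"]] += 1
        selected[r["group"]] += r["shortlisted"]
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(logs)
impact_ratio = min(rates.values()) / max(rates.values())  # lowest rate vs. highest rate
override_rate = sum(r["recruiter_overrode_ai"] for r in logs) / len(logs)

print(f"selection rates: {rates}")
print(f"impact ratio:    {impact_ratio:.2f} (alert threshold is a policy choice, e.g. < 0.80)")
print(f"override rate:   {override_rate:.2%}")
```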
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This system performs automated screening and ranking of job applicants and produces match scores/shortlists that are the primary input for interview decisions. Recruitment and access to employment are explicitly covered by Annex III (4) “employment, worker management and access to self-employment,” making it High-Risk under Art. 6 regardless of human final decision-making. Because it evaluates and ranks identifiable natural persons (applicants) using their personal data, it also constitutes profiling, increasing governance and oversight expectations.

Profiling detected — triggers automatic high-risk (Art. 6(2))

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

AI Credit Scoring for Consumer Loans

Credit Risk — Commerzbank AG

High Risk
Use Case Input
Description

Machine learning model that assesses creditworthiness of loan applicants by analyzing income, employment history, spending patterns, and third-party credit bureau data. Outputs a risk score that determines loan approval, interest rate, and credit limit. Used for all consumer loan decisions under €50,000.

Business Purpose

Automate credit decisions for high-volume consumer lending while maintaining regulatory compliance with BaFin requirements.

Vendor

SCHUFA Holding AG

Product

SCHUFA ScoreAI

Category: decision support · High Risk · 86% confidence
Annex III: Essential Services · Role: Deployer
Suggested Flags:
Personal Data · Customer Facing · External Sharing · Confidential Data · Automated Decision Support
Compliance Actions
Treat this as an Annex III high-risk system (creditworthiness/credit access) and route to formal EU AI Act high-risk governance; do not rely on “vendor says compliant” as sufficient.
Clarify roles contractually: Commerzbank as deployer; SCHUFA likely provider. Ensure provider supplies required high-risk documentation (technical documentation, instructions for use, performance/limitations, logs) and supports audits.
Implement/maintain a risk management system (Art. 9): identify foreseeable misuse (proxy discrimination via spending patterns/ZIP codes), define risk controls, test residual risk, and document decisions.
Data governance (Art. 10): document training/validation data relevance, representativeness, and bias testing; define feature governance (e.g., rules for use of bureau data, spending data), data quality checks, and drift monitoring.
Technical documentation & record keeping (Art. 11–12): ensure decision traceability (inputs/features versions, score, thresholds, model version, reason codes) and retention aligned with banking/GDPR requirements.
Transparency to affected persons and internal users (Art. 13 + banking conduct expectations): provide meaningful information on main parameters/logic, limitations, and appropriate interpretation; ensure adverse-action style explanations and complaint/appeal paths (also aligns with GDPR Arts. 13–15).
Human oversight (Art. 14): because your stated intent is “fully automated,” add a meaningful human review route for edge cases/appeals and define when overrides are required; train staff and monitor override rates.
Accuracy/robustness/cybersecurity (Art. 15): set performance KPIs (AUC, calibration), fairness metrics, stress tests (economic downturn), adversarial resistance (data poisoning), and incident response (a KPI sketch follows this list).
Conformity assessment / assurance steps: verify that the provider completed applicable conformity assessment and CE/required declarations for high-risk; perform deployer-side acceptance testing before production and after material changes.
Operational controls: change management for thresholds and policy rules; periodic model validation; monitoring for disparate impact; vendor management aligned with BaFin model risk management expectations.
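One possible way to track the discrimination and calibration KPIs mentioned in the list above, sketched with scikit-learn on synthetic data. The metrics shown and the decile view are illustrative assumptions, not figures prescribed by the Act or by BaFin.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)

# Synthetic stand-ins for observed defaults (1 = default) and model scores (PD estimates).
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=1000), 0.0, 1.0)

auc = roc_auc_score(y_true, y_score)       # discrimination
brier = brier_score_loss(y_true, y_score)  # calibration / overall score quality

# Simple calibration-by-decile check: mean predicted PD vs. observed default rate.
deciles = np.digitize(y_score, np.quantile(y_score, np.linspace(0.1, 0.9, 9)))
for d in range(10):
    mask = deciles == d
    if mask.any():
        print(f"decile {d}: mean score {y_score[mask].mean():.2f}, "
              f"observed rate {y_true[mask].mean():.2f}")

print(f"AUC={auc:.3f}  Brier={brier:.3f}")
```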
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This use case determines access to an essential private service—consumer credit—by generating a risk score that directly drives approval/rejection and loan terms, which falls under Annex III ‘essential services’ and therefore qualifies as high-risk under Art. 6. The system performs profiling of natural persons (creditworthiness scoring based on personal data) and is intended to be fully automated, increasing regulatory scrutiny and requiring strong human oversight measures even if the business process aims for automation. Because outcomes have significant effects on individuals (credit access, pricing, limits), both EU AI Act high-risk obligations and GDPR automated decision-making safeguards are triggered.

Profiling detected — triggers automatic high-risk (Art. 6(2))

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

Automated University Admissions Pre-Screening

Student Registrar's Office — Technische Universität München

High Risk
Use Case Input
Description

AI system that evaluates university applications by scoring motivation letters, academic transcripts, and extracurricular profiles. Generates a ranked shortlist of candidates for each study programme. Admissions committee uses the ranking to prioritize interview invitations.

Business Purpose

Handle the growing volume of applications (15,000+ per year) while maintaining fair and consistent evaluation criteria across all faculties.

Category: decision support · High Risk · 86% confidence
Annex III: Education & Training · Role: Deployer
Suggested Flags:
Personal Data · Customer Facing · Confidential Data · Automated Decision Support
Compliance Actions
Human decision required: treat this as a high-risk Annex III education/training system and route to internal legal/compliance review (suggestion, not legal advice).
Establish and document a risk management system covering intended purpose, foreseeable misuse, harms (bias/discrimination, exclusion, due process), and mitigation measures (Art. 9).
Implement data governance controls: document data sources (motivation letters, transcripts, extracurriculars), representativeness, quality checks, label/ground-truth strategy for training/validation, and bias testing across protected characteristics proxies; define retention and access controls (Art. 10).
Prepare technical documentation: model type, features used, preprocessing, ranking/scoring logic, performance metrics, limitations, and change management (Art. 11).
Enable logging/record-keeping to support traceability of each scoring/ranking outcome and committee use (Art. 12).
Provide clear information to the admissions committee (deployer-side users) about system capabilities/limits, appropriate use, and interpretation of scores/ranks (Art. 13).
Design and enforce human oversight: define when committee must override/second-review, contestation paths, and safeguards against automation bias; ensure meaningful human involvement before interview invitations are decided (Art. 14).
Validate accuracy/robustness/cybersecurity: stress-test for distribution shifts (different programmes/faculties, international transcripts), adversarial inputs (prompted letters), and secure access to applicant data (Art. 15).
Conformity assessment planning: determine whether TU München is acting as provider (in-house model) or deployer (vendor tool) and execute the applicable high-risk obligations, including procurement clauses, supplier documentation, and (where applicable) registration/conformity steps before putting into service.
Transparency to applicants: inform applicants that AI is used in admissions pre-screening and provide appropriate explanations of the role of the AI in the process; if any AI-generated communications are sent externally, apply Art. 50-style labeling/disclosure as relevant.
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This use case falls under EU AI Act Art. 6 high-risk because it is used for access to and management of education/vocational training, specifically admissions pre-screening, which is listed in the Annex III education and training category. The system scores and ranks individual applicants and the ranking directly determines who proceeds to interviews, meaning it materially influences access to education opportunities and raises discrimination and due-process risks. It is not prohibited under Art. 5 (no social scoring by public authorities outside the banned context, no emotion recognition, no remote biometric identification), but it triggers the full high-risk lifecycle obligations (Arts. 9–15).

Profiling detected — triggers automatic high-risk (Art. 6(2))

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

AI-Based Grid Load Forecasting and Switching

Grid Operations — Austrian Power Grid AG

High Risk
Use Case Input
Description

Machine learning system that predicts electricity demand across the Austrian grid and automatically adjusts load distribution between power stations. The AI triggers switching commands to reroute power flows and prevent grid overload during peak periods.

Business Purpose

Ensure grid stability, prevent blackouts, and optimize energy distribution across the national electricity network.

Vendor

Siemens Energy

Product

GridAI Optimizer

Category: decision support · High Risk · 78% confidence
Annex III: Critical Infrastructure · Role: Deployer
Suggested Flags:
External Sharing · Confidential Data · Automated Decision Support
Compliance Actions
Do not treat this as legal advice; run an internal regulatory assessment to confirm it falls under Annex III critical infrastructure (grid operation/load dispatch) and whether any national implementing rules apply for Austria.
Clarify roles/obligations contractually: Siemens Energy likely acts as the AI system provider; Austrian Power Grid AG acts as deployer. Ensure the vendor supplies required high-risk documentation, instructions for use, and residual-risk information (Art. 13) and supports conformity assessment evidence (Art. 43).
Implement/verify a risk management system for the deployed use: hazard analysis for wrong forecasts or erroneous switching (cascading outages), pre-defined risk controls, and post-deployment monitoring/incident handling aligned to Art. 9.
Data governance controls (Art. 10): document all input data streams (SCADA/telemetry, weather, market signals), data quality/latency requirements, handling of missing data, concept drift monitoring, and validation datasets representing seasonal/rare peak events.
Technical documentation & system description (Art. 11): end-to-end architecture, model versioning, interfaces to grid control systems, safety constraints, fallback logic, and change management for model updates.
Logging/record-keeping (Art. 12): ensure auditable logs of forecasts, confidence/uncertainty outputs, switching recommendations/commands issued, human operator interventions/overrides, and system health metrics—sufficient for incident reconstruction (a minimal record layout is sketched after this list).
Human oversight (Art. 14): define operational modes (advisory vs auto-execution), approval thresholds, override/kill-switch procedures, separation of duties, operator training, and clear HMI that communicates constraints and uncertainty.
Accuracy/robustness/cybersecurity (Art. 15): stress testing for extreme events, adversarial/cyber scenarios, redundancy, secure-by-design integration with OT networks, resilience to sensor spoofing, and safe degradation to rule-based dispatch.
Conformity assessment and procurement gating: require evidence of conformity assessment for a high-risk system (Art. 43) before full production roll-out; document acceptance tests in a staging environment and periodic re-validation after major changes.
Operational governance: establish incident escalation pathways and criteria for notifying the provider and relevant authorities per organizational policy; maintain a register of the high-risk AI system internally and confirm EU database registration responsibilities (often provider-led).
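A minimal, hypothetical sketch of what an auditable record for one forecasting/dispatch cycle could capture, in the spirit of the Art. 12 logging item above. The schema and field names are assumptions and do not reflect any Siemens Energy or GridAI Optimizer format.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit record for one forecasting/dispatch cycle (Art. 12-style traceability).
record = {
    "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    "model_version": "grid-forecast-2.3.1",        # assumed versioning scheme
    "inputs": {"telemetry_window_min": 15, "weather_feed": "ok", "market_signals": "ok"},
    "forecast_mw": 7421.0,
    "forecast_uncertainty_mw": 180.0,              # uncertainty shown to operators
    "recommended_action": "reroute_line_A_to_B",
    "execution_mode": "advisory",                  # advisory vs. auto-execution
    "operator_decision": "approved",               # approved / overridden / no_action
    "operator_id": "op-0042",
}

# Append-only JSONL keeps records reconstructable for incident analysis.
with open("dispatch_audit.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```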
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This system is used to manage and operate the national high-voltage electricity grid by forecasting demand and automatically triggering switching/load-balancing commands. AI used in the management and operation of critical infrastructure is classified as high-risk under Art. 6 in conjunction with Annex III (critical infrastructure), because malfunction can cause significant harm (e.g., outages/blackouts). The use case does not match Art. 5 prohibited practices (no social scoring, biometric identification, manipulation, or workplace/education emotion recognition). Because it directly controls operational grid switching, high-risk obligations (risk management, documentation, logging, oversight, robustness/cybersecurity, and conformity assessment) are typically expected.

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

Customer Service Chatbot for Insurance Inquiries

Customer Service — Allianz Versicherungs-AG

Limited Risk
Use Case Input
Description

Conversational AI chatbot on the company website that answers customer questions about insurance policies, claims status, and coverage details. Escalates complex queries to human agents. Does not make coverage decisions or process claims — purely informational.

Business Purpose

Provide 24/7 first-level customer support, reducing call center volume by handling routine inquiries automatically.

Vendor

Cognigy GmbH

Product

Cognigy.AI

Category: customer service · Limited Risk · 78% confidence
Role: Deployer
Suggested Flags:
Personal Data · Customer Facing · Confidential Data
Compliance Actions
Implement clear user-facing disclosure that customers are interacting with an AI system (at the start of the chat and as needed throughout) (Art. 50); a toy sketch of this and the human-handover routing follows this list.
Provide an easy, immediate route to a human agent (handover/escalation) and communicate it clearly in the UI (good practice supporting transparency/user autonomy).
Put in place content/quality controls to reduce hallucinations and misleading insurance information (e.g., approved knowledge base, retrieval-only mode where feasible, response guardrails, blocked topics).
Maintain logs/records of chatbot interactions appropriate to purpose (e.g., for troubleshooting, complaint handling, and safety monitoring), with defined retention periods (also supports privacy accountability).
Vendor/deployer governance: document intended use, configuration, testing/acceptance criteria, and changes; ensure contractual terms cover security, incident handling, and data processing roles (DPA) where personal data is processed.
If the chatbot output may be published or reused externally (e.g., testimonials, marketing, public FAQs), add labeling/controls for AI-generated content where it reasonably reaches the public (Art. 50).
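A toy sketch of how the disclosure and escalation items above could be wired into a chat backend. The disclosure wording, keyword triggers, blocked topics, and every function name are illustrative assumptions, not Cognigy.AI features.

```python
# Illustrative only: disclosure text and triggers are policy choices set by the deployer.
AI_DISCLOSURE = ("Note: you are chatting with an AI assistant. "
                 "Type 'agent' at any time to reach a human.")
ESCALATION_TRIGGERS = {"agent", "human", "complaint"}
BLOCKED_TOPICS = {"coverage decision", "claim approval"}  # the bot stays informational only

def answer_from_knowledge_base(question: str) -> str:
    # Stand-in for a retrieval-grounded answer from an approved knowledge base.
    return "Here is what our policy documentation says about that: ..."

def handle_turn(user_message: str, history: list[str]) -> dict:
    text = user_message.lower()
    if any(t in text for t in ESCALATION_TRIGGERS | BLOCKED_TOPICS):
        return {"action": "handover_to_human",
                "reply": "I am connecting you with a human colleague now."}
    reply = answer_from_knowledge_base(user_message)
    if not history:  # first turn: disclose the AI nature of the assistant (Art. 50)
        reply = f"{AI_DISCLOSURE}\n\n{reply}"
    return {"action": "reply", "reply": reply}

print(handle_turn("What does my household policy cover?", history=[]))
```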
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This is a customer-facing conversational AI on a website, which triggers EU AI Act transparency obligations for chatbots (Art. 50). The described use is informational and does not perform eligibility, pricing, claims handling, or other decisions about access to essential services/benefits, so it does not fall under Annex III high-risk categories (e.g., essential services). Based on the decision tree, it is therefore classified as limited risk due to the required user disclosure, with no Annex III category.

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

AI Marketing Copy Generator

Marketing Communications — Red Bull GmbH, Salzburg

Limited Risk
Use Case Input
Description

Generative AI tool used by the marketing team to draft social media posts, email newsletters, and blog articles. All content is reviewed and edited by human copywriters before publication. The tool generates text based on brand guidelines and campaign briefs.

Business Purpose

Accelerate content production pipeline and maintain consistent brand voice across channels.

Vendor

OpenAI

Product

ChatGPT Enterprise

Category: content generation · Limited Risk · 74% confidence
Role: Deployer
Suggested Flags:
Personal Data · Customer Facing · External Sharing · Confidential Data
Compliance Actions
Implement an external-facing transparency process for marketing outputs: decide when/how to disclose that content was AI-generated or AI-assisted (Art. 50), and ensure it is consistently applied across channels (social, email, blog); a small labeling helper is sketched after this list.
Add internal policy + playbook for marketers/copywriters: permitted uses, prohibited claims, fact-checking requirements, citation/source requirements, and escalation path for sensitive topics (health, minors, regulated products, etc.).
Maintain basic records for governance: which tool/version is used (ChatGPT Enterprise), high-level prompts/brand guidelines provided, review/approval workflow, and responsibility assignment (editorial owner) to evidence transparency and human control.
Put in place safeguards to reduce IP/confidentiality risk: prompt hygiene guidance, no unnecessary confidential data in prompts, and contractual/IT controls aligned with the Enterprise setup (access control, logging, retention settings).
Ensure users (employees) are aware they are using an AI system and trained on limitations (hallucinations, bias, defamation) and on how to label/disclose outputs where required (Art. 50).
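A trivial, purely illustrative helper showing one way the channel-specific disclosure decision above could be centralized. The wording and the per-channel rules are assumptions to be set by the marketing and legal teams, not text taken from Art. 50.

```python
# Channel rules are a policy choice; nothing here is prescribed verbatim by Art. 50.
DISCLOSURE_BY_CHANNEL = {
    "blog": "This article was drafted with AI assistance and reviewed by our editorial team.",
    "newsletter": "Created with AI assistance.",
    "social": None,  # e.g. rely on the platform's own AI-content flag instead
}

def finalize_copy(text: str, channel: str, human_reviewed: bool) -> str:
    if not human_reviewed:
        raise ValueError("Copy must pass human editorial review before publication.")
    disclosure = DISCLOSURE_BY_CHANNEL.get(channel)
    return f"{text}\n\n{disclosure}" if disclosure else text

print(finalize_copy("New flavour drops this summer.", "newsletter", human_reviewed=True))
```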
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This use case is a generative AI system producing marketing text intended for publication to external audiences (social media, newsletters, blog). Under the EU AI Act decision tree, it is not prohibited (Art. 5) and it does not fall under Annex III high-risk areas (Art. 6) because it is not used for decisions about individuals (e.g., employment, credit, essential services). However, because it generates content that is disseminated externally, transparency obligations for AI-generated/synthetic content are likely triggered (Art. 50), so the appropriate tier is limited risk even with human editorial review.

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

AI Code Completion for Software Development

Software Engineering — Bosch Digital, Stuttgart

Minimal Risk
Use Case Input
Description

AI-powered code completion tool integrated into developer IDEs. Suggests code completions, generates boilerplate, and explains code snippets. Used exclusively by internal software engineers for productivity. No customer-facing output; no personal data processed.

Business Purpose

Increase developer productivity by 20-30% through intelligent code suggestions and automated boilerplate generation.

Vendor

GitHub

Product

GitHub Copilot Enterprise

Category: internal automation · Minimal Risk · 74% confidence
Role: Deployer
Suggested Flags:
Confidential Data
Compliance Actions
Document the intended use and boundaries (internal developer productivity tool; no use for HR decisions, customer communications, or safety-critical code without controls) for internal AI governance and audits.
Confirm and periodically re-validate data flows: no production/customer data, no personal data, no code containing secrets; ensure tenant/Enterprise settings prevent training on Bosch content if required by policy.
Implement secure-use controls: block or scan for secrets (API keys), enforce OSS/license compliance checks, and require peer review for AI-suggested code before merge (SDLC control); a minimal secret-scanning sketch follows this list.
Maintain internal transparency to users: inform engineers the tool is AI-assisted, provide usage guidelines (e.g., verify correctness, security, and licensing).
Vendor/third-party risk management: review Copilot Enterprise terms, security documentation, data processing addendum (if any), and hosting/transfer locations relevant for DACH/EU requirements.
Set up incident/feedback loop: track erroneous or risky suggestions, security findings, and implement guardrails (policy, prompts, IDE settings).
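A minimal sketch of the secret-scanning control mentioned above, e.g. invoked from a pre-commit hook over staged files. The patterns are illustrative and far from exhaustive, and none of this is a GitHub Copilot feature; a real deployment would use a dedicated scanner with organization-specific rules.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only; intentionally short and incomplete.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key id
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # private key blocks
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{12,}"),
]

def scan(paths: list[str]) -> int:
    findings = 0
    for p in paths:
        text = Path(p).read_text(errors="ignore")
        for pattern in SECRET_PATTERNS:
            for match in pattern.finditer(text):
                findings += 1
                print(f"{p}: possible secret: {match.group(0)[:20]}...")
    return findings

if __name__ == "__main__":
    # e.g. called from a pre-commit hook with the staged file names as arguments
    sys.exit(1 if scan(sys.argv[1:]) else 0)
```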
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

Based on the described intended use, this is an internal code-completion/productivity tool for software engineers and does not fall under any Annex III high-risk area (Art. 6 + Annex III) and does not profile natural persons. It also does not match any Art. 5 prohibited practices. Art. 50 limited-risk transparency obligations are typically aimed at systems interacting with people as chatbots or generating content for external audiences; since outputs are internal-only and not customer-facing, the use case is best classified as minimal risk, with a note to reassess if it is deployed as an interactive assistant requiring disclosure to users.

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

Sales Forecast Dashboard with ML Predictions

Data Science — Swarovski AG, Wattens

Minimal Risk
Use Case Input
Description

Internal dashboard that uses machine learning to forecast quarterly sales revenue based on pipeline data, historical trends, and market indicators. Developed in-house by the data science team. Outputs aggregate predictions at product-line level — no individual customer scoring or profiling.

Business Purpose

Enable data-driven quarterly planning and inventory management by providing accurate sales forecasts per product line.

Category: data analysis · Minimal Risk · 78% confidence
Role: Provider & Deployer
Suggested Flags:
Personal Data · Confidential Data · Automated Decision Support
Compliance Actions
Document the intended purpose and boundaries (aggregate, product-line forecasting; no individual scoring) in internal AI inventory/governance records (good practice, supports defensibility if scope creeps).
Implement basic model governance controls: versioning, change management, and reproducibility of training/inference runs (voluntary best practice for minimal-risk).
Define and monitor model quality KPIs (forecast error, drift, bias at the product-line level) and set a retraining/rollback process if performance degrades (a small forecast-error check is sketched after this list).
Access control & least-privilege for the dashboard and underlying datasets; logging of forecast consumption and model changes (security best practice).
Add internal user guidance: forecasts are decision-support; require human judgment for planning decisions; specify known limitations and confidence intervals.
Re-check classification if scope changes (e.g., forecasting by customer/account, salesperson performance, credit/price eligibility, or other individual-level scoring), which could trigger Art. 6/Annex III high-risk or GDPR Art. 22 issues.
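A small sketch of the forecast-quality monitoring suggested above: MAPE per product line, flagged against a fixed review threshold. The product lines, figures, and the 10% threshold are hypothetical.

```python
# Hypothetical quarterly actuals vs. forecasts per product line (in € thousands).
results = {
    "crystal_figurines": {"actual": [980, 1020, 1100], "forecast": [1000, 990, 1180]},
    "jewelry":           {"actual": [2100, 2250, 1900], "forecast": [2050, 2300, 2400]},
}

MAPE_ALERT = 0.10  # illustrative retraining/review trigger, not a regulatory figure

def mape(actual, forecast):
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

for line, r in results.items():
    err = mape(r["actual"], r["forecast"])
    status = "REVIEW/RETRAIN" if err > MAPE_ALERT else "ok"
    print(f"{line}: MAPE={err:.1%} -> {status}")
```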
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This use case is an internal ML-based forecasting dashboard producing aggregate predictions at product-line level for planning and inventory management. It does not fall under any Art. 5 prohibited practices and does not match an Annex III high-risk area because it is not used to make or materially influence decisions about individuals in employment, essential services, education, law enforcement, etc. As described, it also does not perform profiling of natural persons under Art. 6(2) because there is no individual customer/employee scoring or categorization; therefore Art. 50 transparency obligations for chatbots/deepfakes are not triggered either.

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

Important: AI Classifications Require Human Review

AI-Casefile provides AI-assisted risk classification as a starting point for your compliance process. These classifications are advisory and do not constitute legal advice. The final risk assessment must always be reviewed and approved by a qualified legal or compliance professional.

AI-assisted, not AI-decided

Our engine analyzes use case descriptions and suggests a risk tier based on the criteria of the EU AI Act. This is a recommendation, not a binding legal determination.

Human approval required

Every classification must be reviewed by a compliance officer or legal counsel before it is used for regulatory reporting or decision-making.

No liability for AI outputs

AI-Casefile and its AI classifications are no substitute for professional legal advice. Your organization remains responsible for the final risk assessment and compliance decisions.

Frequently Asked Questions

Common questions about EU AI Act classification and the August 2026 deadline.

What are the four risk tiers of the EU AI Act?

The EU AI Act classifies AI systems into four risk tiers: Prohibited (banned outright, e.g. social scoring and real-time biometric surveillance), High Risk (strict requirements including documentation, conformity assessments, and human oversight; covers areas such as employment, education, and critical infrastructure), Limited Risk (transparency obligations, e.g. chatbots must disclose that they are AI), and Minimal Risk (voluntary codes of conduct, no mandatory requirements).

What does the August 2026 deadline mean and what is required?

2 August 2026 marks the date from which the EU AI Act applies in full to high-risk AI systems. By then, organizations that provide or deploy high-risk AI must have complete technical documentation, risk management systems, data governance measures, conformity assessments, and registration in the EU database in place. The prohibited practices and AI literacy requirements have already applied since February 2025.

How does AI-Casefile determine the risk classification?

AI-Casefile uses AI-assisted analysis to evaluate your AI system description against the criteria of the EU AI Act. It considers the intended purpose, the data processed, the deployment context, and the people affected in order to suggest a risk tier, applicable articles, compliance actions, and GDPR intersections. Every classification is a starting point; human review and legal approval are always required.

Are these example classifications legally binding?

No. These examples are illustrative and educational. They show how AI-Casefile's classification engine works but do not constitute legal advice. Every classification must be reviewed and approved by a qualified legal or compliance professional. Your organization remains responsible for final compliance decisions.

What penalties apply for non-compliance with the EU AI Act?

Penalties are tiered: up to €35 million or 7% of worldwide annual turnover for prohibited AI practices, up to €15 million or 3% for violations of high-risk AI obligations, and up to €7.5 million or 1% for supplying incorrect information to authorities. For undertakings, the higher of the two amounts applies. The penalties take effect from the respective enforcement dates: prohibited practices since February 2025, high-risk systems from August 2026.
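As a worked illustration of these ceilings for an undertaking: a company with €2 billion in worldwide annual turnover faces a cap of €140 million for a prohibited-practice violation, because 7% of €2 billion exceeds the €35 million fixed amount. A minimal sketch of that calculation (the actual fine is set by the authorities case by case):

```python
# Illustrative upper-bound calculation for an undertaking under the EU AI Act's tiered fines.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_obligation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
    fixed_cap, pct = TIERS[tier]
    # For undertakings, the ceiling is whichever of the two amounts is higher.
    return max(fixed_cap, pct * worldwide_turnover_eur)

print(f"{max_fine('prohibited_practice', 2_000_000_000):,.0f}")  # -> 140,000,000
```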

Which industries do these examples cover?

The 10 examples span a wide range of sectors affected by the EU AI Act: public administration (social scoring), transport (biometric surveillance), human resources (CV screening), financial services (credit scoring), education (admissions), energy (critical infrastructure), insurance (customer chatbots), marketing (content generation), software development (code assistants), and sales (analytics dashboards).

August 2026 is closer than you think

Don't wait for the enforcement deadline. Start classifying and documenting your AI systems now; our free trial gives you immediate access to AI-assisted EU AI Act classifications.