Explore 10 real-world AI use cases classified by our engine across all four risk tiers of the EU AI Act. Each example shows the full classification: risk tier, applicable articles, compliance measures, and GDPR touchpoints. The August 2026 enforcement deadline is approaching, so start preparing now.
Aug 2026
Full enforcement
€35 million
Maximum fine
10 examples
All risk tiers
From 2 August 2026, all high-risk AI systems must be fully documented, risk-assessed, and compliant with the EU AI Act (Regulation (EU) 2024/1689). Non-compliant organizations face fines of up to €35 million or 7% of worldwide turnover.
All AI systems in Annex III categories (employment, education, critical infrastructure, law enforcement) must have complete technical documentation and conformity assessments.
Social scoring, real-time biometric surveillance, and manipulative AI have been banned since February 2025. Continued violations are subject to the highest fines.
AI systems that interact with humans must disclose their AI nature. Content-generation systems must label their outputs as AI-generated.
Security Division — Münchner Verkehrsgesellschaft
Live facial recognition system installed across all U-Bahn stations in Munich to identify individuals in real-time against a watchlist database. Cameras continuously scan all passengers entering and exiting stations, matching faces against a database of persons of interest provided by law enforcement.
Enhance public safety by identifying wanted individuals and preventing crime in the public transit network.
SecurVision AG
FaceTrack Live
This use case is continuous, real-time facial recognition across U-Bahn stations to identify individuals against a watchlist in publicly accessible spaces. Under the EU AI Act decision tree, this matches a prohibited practice under Art. 5: real-time remote biometric identification of people in public spaces (with only narrow, strictly conditioned exceptions typically tied to competent authorities and specific legal authorization). Because the intended use is blanket, continuous scanning of all passengers, it is treated as prohibited rather than merely high-risk biometrics under Annex III.
For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.
AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.
Human Resources — Deutsche Telekom AG
Automated CV screening tool that parses incoming job applications, extracts qualifications and experience, and ranks candidates against job requirements. Produces a shortlist with match scores. Recruiters use the shortlist as primary input for deciding who to invite to interviews.
Reduce time-to-hire from 45 to 20 days and ensure consistent, bias-reduced initial screening of 500+ applications per open position.
Personio SE
Personio Recruiting AI
This system performs automated screening and ranking of job applicants and produces match scores/shortlists that are the primary input for interview decisions. Recruitment and access to employment are explicitly covered by Annex III (4) “employment, worker management and access to self-employment,” making it High-Risk under Art. 6 regardless of human final decision-making. Because it evaluates and ranks identifiable natural persons (applicants) using their personal data, it also constitutes profiling, increasing governance and oversight expectations.
Credit Risk — Commerzbank AG
Machine learning model that assesses creditworthiness of loan applicants by analyzing income, employment history, spending patterns, and third-party credit bureau data. Outputs a risk score that determines loan approval, interest rate, and credit limit. Used for all consumer loan decisions under €50,000.
Automate credit decisions for high-volume consumer lending while maintaining regulatory compliance with BaFin requirements.
SCHUFA Holding AG
SCHUFA ScoreAI
This use case determines access to an essential private service—consumer credit—by generating a risk score that directly drives approval/rejection and loan terms, which falls under Annex III ‘essential services’ and therefore qualifies as high-risk under Art. 6. The system performs profiling of natural persons (creditworthiness scoring based on personal data) and is intended to be fully automated, increasing regulatory scrutiny and requiring strong human oversight measures even if the business process aims for automation. Because outcomes have significant effects on individuals (credit access, pricing, limits), both EU AI Act high-risk obligations and GDPR automated decision-making safeguards are triggered.
Admissions Office — Technische Universität München
AI system that evaluates university applications by scoring motivation letters, academic transcripts, and extracurricular profiles. Generates a ranked shortlist of candidates for each study programme. Admissions committee uses the ranking to prioritize interview invitations.
Handle the growing volume of applications (15,000+ per year) while maintaining fair and consistent evaluation criteria across all faculties.
This use case falls under EU AI Act Art. 6 high-risk because it is used for access to and management of education and vocational training, specifically admissions pre-screening, which is listed in the corresponding Annex III category. The system scores and ranks individual applicants, and the ranking directly determines who proceeds to interviews, meaning it materially influences access to education opportunities and raises discrimination and due-process risks. It is not prohibited under Art. 5 (no social scoring by public authorities outside the banned context, no emotion recognition, no remote biometric identification), but it triggers the full high-risk lifecycle obligations (Arts. 9–15).
Grid Operations — Austrian Power Grid AG
Machine learning system that predicts electricity demand across the Austrian grid and automatically adjusts load distribution between power stations. The AI triggers switching commands to reroute power flows and prevent grid overload during peak periods.
Ensure grid stability, prevent blackouts, and optimize energy distribution across the national electricity network.
Siemens Energy
GridAI Optimizer
This system is used to manage and operate the national high-voltage electricity grid by forecasting demand and automatically triggering switching/load-balancing commands. AI used in the management and operation of critical infrastructure is classified as high-risk under Art. 6 in conjunction with Annex III (critical infrastructure), because malfunction can cause significant harm (e.g., outages/blackouts). The use case does not match Art. 5 prohibited practices (no social scoring, biometric identification, manipulation, or workplace/education emotion recognition). Because it directly controls operational grid switching, high-risk obligations (risk management, documentation, logging, oversight, robustness/cybersecurity, and conformity assessment) are typically expected.
Customer Service — Allianz Versicherungs-AG
Conversational AI chatbot on the company website that answers customer questions about insurance policies, claims status, and coverage details. Escalates complex queries to human agents. Does not make coverage decisions or process claims — purely informational.
Provide 24/7 first-level customer support, reducing call center volume by handling routine inquiries automatically.
Cognigy GmbH
Cognigy.AI
This is a customer-facing conversational AI on a website, which triggers EU AI Act transparency obligations for chatbots (Art. 50). The described use is informational and does not perform eligibility, pricing, claims handling, or other decisions about access to essential services/benefits, so it does not fall under Annex III high-risk categories (e.g., essential services). Based on the decision tree, it is therefore classified as limited risk due to the required user disclosure, with no Annex III category.
Marketing Communications — Red Bull GmbH, Salzburg
Generative AI tool used by the marketing team to draft social media posts, email newsletters, and blog articles. All content is reviewed and edited by human copywriters before publication. The tool generates text based on brand guidelines and campaign briefs.
Accelerate content production pipeline and maintain consistent brand voice across channels.
OpenAI
ChatGPT Enterprise
This use case is a generative AI system producing marketing text intended for publication to external audiences (social media, newsletters, blog). Under the EU AI Act decision tree, it is not prohibited (Art. 5) and it does not fall under Annex III high-risk areas (Art. 6) because it is not used for decisions about individuals (e.g., employment, credit, essential services). However, because it generates content that is disseminated externally, transparency obligations for AI-generated/synthetic content are likely triggered (Art. 50), so the appropriate tier is limited risk even with human editorial review.
Software Engineering — Bosch Digital, Stuttgart
AI-powered code completion tool integrated into developer IDEs. Suggests code completions, generates boilerplate, and explains code snippets. Used exclusively by internal software engineers for productivity. No customer-facing output; no personal data processed.
Increase developer productivity by 20-30% through intelligent code suggestions and automated boilerplate generation.
GitHub
GitHub Copilot Enterprise
Based on the described intended use, this is an internal code-completion and productivity tool for software engineers; it does not fall under any Annex III high-risk area (Art. 6 plus Annex III) and does not profile natural persons. It also does not match any Art. 5 prohibited practice. Art. 50 limited-risk transparency obligations are aimed primarily at systems that interact with people (e.g., chatbots) or generate content for external audiences; since outputs are internal-only and not customer-facing, the use case is best classified as minimal risk, with a note to reassess if it is ever deployed as an interactive assistant requiring disclosure to users.
Data Science — Swarovski AG, Wattens
Internal dashboard that uses machine learning to forecast quarterly sales revenue based on pipeline data, historical trends, and market indicators. Developed in-house by the data science team. Outputs aggregate predictions at product-line level — no individual customer scoring or profiling.
Enable data-driven quarterly planning and inventory management by providing accurate sales forecasts per product line.
This use case is an internal ML-based forecasting dashboard producing aggregate predictions at product-line level for planning and inventory management. It does not fall under any Art. 5 prohibited practices and does not match an Annex III high-risk area because it is not used to make or materially influence decisions about individuals in employment, essential services, education, law enforcement, etc. As described, it also does not perform profiling of natural persons under Art. 6(2) because there is no individual customer/employee scoring or categorization; therefore Art. 50 transparency obligations for chatbots/deepfakes are not triggered either.
AI-Casefile provides AI-assisted risk classification as a starting point for your compliance process. These classifications are advisory and do not constitute legal advice. The final risk assessment must always be reviewed and approved by a qualified legal or compliance professional.
AI-assisted, not AI-decided
Our engine analyzes use case descriptions and suggests a risk tier based on EU AI Act criteria. This is a recommendation, not a binding legal determination.
Human approval required
Every classification must be reviewed by a compliance officer or legal counsel before it is used for regulatory reporting or decision-making.
No liability for AI outputs
AI-Casefile and its AI classifications do not replace professional legal advice. Your organization remains responsible for the final risk assessment and compliance decisions.
Frequently asked questions about EU AI Act classification and the August 2026 deadline.
The EU AI Act classifies AI systems into four risk tiers: Prohibited (banned outright, e.g., social scoring and real-time biometric surveillance), High-Risk (strict requirements including documentation, conformity assessments, and human oversight; covers areas such as employment, education, and critical infrastructure), Limited Risk (transparency obligations, e.g., chatbots must disclose their AI nature), and Minimal Risk (voluntary codes of conduct, no mandatory requirements).
2 August 2026 marks the full application of the EU AI Act to high-risk AI systems. By that date, organizations that provide or deploy high-risk AI must have complete technical documentation, risk management systems, data governance measures, conformity assessments, and registration in the EU database. The prohibited practices and AI literacy requirements have already applied since February 2025.
AI-Casefile uses AI-powered analysis to evaluate your AI system description against the criteria of the EU AI Act. It considers the intended purpose, the data processed, the deployment context, and the people affected in order to suggest a risk tier, applicable articles, compliance measures, and GDPR touchpoints. Every classification is a starting point; human review and legal sign-off are always required.
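The tier logic running through the examples above can be sketched as a simple decision tree: check Art. 5 prohibited practices first, then Annex III high-risk areas, then Art. 50 transparency triggers. The category names, `UseCase` fields, and `classify` helper below are illustrative assumptions for this page, not AI-Casefile's actual engine, and the Act's real criteria involve far more nuance.

```python
# Illustrative sketch of an EU AI Act risk-tier decision tree.
# All names here are hypothetical; a real assessment needs legal review.
from dataclasses import dataclass, field

PROHIBITED_PRACTICES = {"social_scoring", "realtime_biometric_id", "manipulative_ai"}
ANNEX_III_AREAS = {"employment", "education", "credit_scoring",
                   "critical_infrastructure", "law_enforcement"}

@dataclass
class UseCase:
    practices: set = field(default_factory=set)   # matched Art. 5 practices
    areas: set = field(default_factory=set)       # matched Annex III areas
    interacts_with_humans: bool = False           # chatbot-style interaction
    generates_public_content: bool = False        # externally published output

def classify(uc: UseCase) -> str:
    if uc.practices & PROHIBITED_PRACTICES:
        return "prohibited"        # Art. 5: banned outright
    if uc.areas & ANNEX_III_AREAS:
        return "high_risk"         # Art. 6 + Annex III obligations
    if uc.interacts_with_humans or uc.generates_public_content:
        return "limited_risk"      # Art. 50 transparency duties
    return "minimal_risk"          # voluntary codes of conduct

# Example: a CV-screening tool, cf. the recruiting case above
print(classify(UseCase(areas={"employment"})))  # → high_risk
```

Note how the order of the checks matters: the Munich facial-recognition case lands in "prohibited" even though biometrics also appear in Annex III, because the Art. 5 check runs first.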
No. These examples are illustrative and educational. They show how AI-Casefile's classification engine works, but they do not constitute legal advice. Every classification must be reviewed and approved by a qualified legal or compliance professional. Your organization remains responsible for final compliance decisions.
The fines are tiered: up to €35 million or 7% of worldwide annual turnover for prohibited AI practices, up to €15 million or 3% for high-risk violations, and up to €7.5 million or 1.5% for other infringements. These fines apply from the respective enforcement dates: prohibited practices since February 2025, high-risk systems from August 2026.
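For most organizations the cap in each tier is whichever is higher, the fixed amount or the turnover percentage, so for large companies the percentage dominates. A minimal sketch of that arithmetic (tier names are our own shorthand):

```python
# Tiered maximum fines under the EU AI Act (Art. 99): the cap is the
# higher of the fixed amount and the share of worldwide annual turnover.
FINE_TIERS = {
    "prohibited": (35_000_000, 0.07),   # Art. 5 violations
    "high_risk":  (15_000_000, 0.03),   # high-risk obligation breaches
    "other":      ( 7_500_000, 0.015),  # e.g. transparency breaches
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * annual_turnover_eur)

# A company with €2 billion turnover: 7% (€140M) exceeds the €35M floor
print(f"{max_fine('prohibited', 2_000_000_000):,.0f}")  # → 140,000,000
```

For a smaller company with, say, €100 million turnover, 7% is only €7 million, so the €35 million fixed cap applies instead.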
The 10 examples span a broad range of sectors affected by the EU AI Act: public administration (social scoring), transport (biometric surveillance), human resources (CV screening), financial services (credit scoring), education (admissions), energy (critical infrastructure), insurance (customer chatbots), marketing (content creation), software development (code assistants), and sales (analytics dashboards).
Don't wait for the enforcement deadline. Start classifying and documenting your AI systems now; our free trial gives you immediate access to AI-powered EU AI Act classifications.