August 2026 Deadline Approaching

See AI Act Classifications in Action

Explore 10 real-world AI use cases classified by our engine across all four EU AI Act risk tiers. Each example shows the full classification output — risk tier, applicable articles, compliance actions, and GDPR intersections. The August 2026 enforcement deadline is approaching — start preparing now.

Aug 2026 · Full Enforcement

€35M · Maximum Penalty

10 Examples · All Risk Tiers

COMPLIANCE DEADLINE

August 2026: Full Enforcement of the EU AI Act

Starting August 2, 2026, all high-risk AI systems must be fully documented, risk-assessed, and compliant with the EU AI Act (Regulation (EU) 2024/1689). Non-compliant organizations face penalties of up to €35 million or 7% of global annual turnover, whichever is higher.
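The cap is the higher of the fixed amount and the turnover-based amount. A minimal sketch of that arithmetic (the function name is illustrative; the figures are the Art. 99(3) maxima for prohibited-practice violations, and actual fines depend on many factors):

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound for the most serious violations under Art. 99(3):
    the higher of EUR 35 million and 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 1bn turnover faces a cap of EUR 70M, not EUR 35M.
print(max_penalty_eur(1_000_000_000))  # → 70000000.0
```

For smaller organizations the fixed €35M figure dominates; the 7% branch only takes over above €500M turnover.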

High-Risk AI Systems

All AI systems in Annex III categories — employment, education, critical infrastructure, law enforcement — must have complete technical documentation and conformity assessments.

Prohibited Practices

Social scoring, real-time biometric surveillance, and manipulative AI have been banned since February 2025. Continued violations carry the highest penalties.

Transparency Obligations

AI systems interacting with people must disclose their AI nature. Content generation systems must label their output as AI-generated.

Citizen Trustworthiness Scoring System

Stadtverwaltung Wien — Bürgerdienste

Prohibited
Use Case Input
Description

AI system that aggregates social media activity, financial records, and public behavior data to generate a trustworthiness score for citizens. Scores influence access to municipal services including public housing waitlists, library access tiers, and priority in administrative processing. Deployed across all municipal offices in the city.

Business Purpose

Improve allocation of municipal resources by prioritizing reliable citizens and reducing fraud in public service access.

Category: Decision Support · Prohibited · 78% confidence
Role: Provider & Deployer
Suggested Flags:
Personal Data · Customer Facing · Special Category Data · Confidential Data · Automated Decision Support
Compliance Actions
Do not deploy/continue operation in the described form; escalate immediately to legal/compliance leadership for an Art. 5 prohibited-practice assessment (this is a classification suggestion, not legal advice).
Freeze further data ingestion/model training related to resident “trustworthiness” scoring and prevent the score from being used in any service-access workflow.
Document the use case, data sources (social media/financial/behavioral), and decision impacts to support internal auditability and regulatory inquiry response.
If the business goal is fraud reduction, redesign toward narrowly-scoped, non-social-scoring anti-fraud controls (e.g., case-specific verification) without generalized citizen scoring; re-run an EU AI Act + GDPR assessment on the redesigned approach.
Engage the DPO and (if applicable) the city’s information security and ethics/governance bodies; prepare for potential supervisory authority engagement if already piloted.
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This use case constitutes a generalized “trustworthiness” score for residents, derived from broad behavioral data (including social media and public behavior) and used to influence access/priority to municipal services (housing waitlists, library access tiers, administrative processing). This closely matches the EU AI Act’s prohibited “social scoring” practice by a public authority, where individuals are scored based on behavior/characteristics and the score leads to detrimental or unfavourable treatment in access to services. Because it falls under Art. 5 prohibited practices, it should not be classified as high-risk/limited-risk; the prohibition prevails in the decision tree.

Profiling detected — triggers automatic high-risk (Art. 6(2))

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

Real-Time Facial Recognition in Public Transit

Sicherheitsabteilung — Münchner Verkehrsgesellschaft

Prohibited
Use Case Input
Description

Live facial recognition system installed across all U-Bahn stations in Munich to identify individuals in real-time against a watchlist database. Cameras continuously scan all passengers entering and exiting stations, matching faces against a database of persons of interest provided by law enforcement.

Business Purpose

Enhance public safety by identifying wanted individuals and preventing crime in the public transit network.

Vendor

SecurVision AG

Product

FaceTrack Live

Category: Decision Support · Prohibited · 78% confidence
Role: Deployer
Suggested Flags:
Personal Data · Customer Facing · External Sharing · Special Category Data · Confidential Data · Automated Decision Support
Compliance Actions
Do not deploy as described; escalate immediately to legal/compliance and senior management for an Art. 5 prohibited-practice assessment and decision (this is not legal advice).
Initiate an urgent review with the competent public authority/law enforcement counterpart to determine whether any narrow statutory exception could apply (e.g., strictly limited, authorized use cases) and whether the transit operator could lawfully act as deployer at all in this scenario.
If the objective remains public safety, redesign the use case away from real-time remote biometric identification in publicly accessible spaces (e.g., non-biometric measures, increased staffing, physical access controls, or post-event forensic workflows subject to lawful basis).
Freeze procurement/rollout steps (contracts, integration, data sharing) until the prohibited-practice risk is resolved; document the decision and rationale in internal governance records.
If any piloting has occurred, implement incident containment: stop processing, secure and segregate any captured biometric templates/images, and define deletion/retention actions consistent with instructions from legal/privacy teams and applicable law.
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This use case is continuous, real-time facial recognition across U-Bahn stations to identify individuals against a watchlist in publicly accessible spaces. Under the EU AI Act decision tree, this matches a prohibited practice under Art. 5: real-time remote biometric identification of people in public spaces (with only narrow, strictly conditioned exceptions typically tied to competent authorities and specific legal authorization). Because the intended use is blanket, continuous scanning of all passengers, it is treated as prohibited rather than merely high-risk biometrics under Annex III.

Profiling detected — triggers automatic high-risk (Art. 6(2))

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

AI Resume Screening for Recruitment

Human Resources — Deutsche Telekom AG

High Risk
Use Case Input
Description

Automated CV screening tool that parses incoming job applications, extracts qualifications and experience, and ranks candidates against job requirements. Produces a shortlist with match scores. Recruiters use the shortlist as primary input for deciding who to invite to interviews.

Business Purpose

Reduce time-to-hire from 45 to 20 days and ensure consistent, bias-reduced initial screening of 500+ applications per open position.

Vendor

Personio SE

Product

Personio Recruiting AI

Category: Decision Support · High Risk · 86% confidence
Annex III: Employment · Role: Deployer
Suggested Flags:
Personal Data · External Sharing · Special Category Data · Confidential Data · Automated Decision Support
Compliance Actions
Treat as a High-Risk AI use case (Annex III employment): do not rely on “HR tool” framing; document intended purpose as screening/ranking for recruitment decisions.
Confirm roles and allocate obligations contractually: Deutsche Telekom likely = deployer; Personio likely = provider. Ensure the vendor contract covers AI Act high-risk duties (instructions for use, performance limits, logging, post-market monitoring support, incident cooperation).
Implement deployer-side risk management and governance aligned to the provider’s instructions: local procedures for acceptance testing, change management, and periodic re-validation of model performance per role/location.
Human oversight (Art. 14): ensure recruiters are trained, can interpret match scores, and are required to meaningfully review/override; prohibit fully-automatic rejection/invitation based solely on score; implement second-look rules for edge cases (e.g., non-standard CVs, career breaks).
Data governance (Art. 10): assess training/operational data relevance and bias risks; define which features are allowed (avoid proxies for protected characteristics); implement data quality checks (parsing accuracy, language handling, missing data).
Logging/record-keeping (Art. 12): ensure the system logs inputs/versions, scores, ranking outputs, and user actions (override/accept) with retention aligned to HR/legal needs.
Transparency to affected persons and internal users (Art. 13): provide clear notices to candidates that AI is used in screening/ranking, what it does, and how to exercise rights; provide recruiters with clear documentation of score meaning/limitations.
Accuracy/robustness/cybersecurity (Art. 15): evaluate error rates (false negatives for qualified candidates), robustness across CV formats and languages, and access controls for applicant data; define performance thresholds and monitoring alerts.
Operational monitoring & vendor escalation: set KPIs (selection-rate disparities, parsing failure rates, override rates), periodic bias testing, and an incident process for suspected discrimination or material malfunction.
If any conversational interface is used with candidates/recruiters, add Art. 50 transparency disclosures (AI interaction notice).
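The Art. 12 logging action above implies one auditable record per scored application. A minimal sketch of what such a record might capture (the field names are assumptions, not a prescribed schema):

```python
import json
from datetime import datetime, timezone

def screening_log_record(candidate_id, model_version, score, rank,
                         recruiter_action):
    """One audit-log entry per scored application: a pseudonymous input
    reference, the model version, the outputs, and the human action
    taken on them (to support override-rate monitoring)."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,          # pseudonymous reference
        "model_version": model_version,
        "match_score": score,
        "shortlist_rank": rank,
        "recruiter_action": recruiter_action,  # "accept" / "override" / "reject"
    }

record = screening_log_record("cand-0042", "v2.3.1", 0.87, 5, "override")
print(json.dumps(record))
```

Retention for these records would follow the HR/legal requirements noted above, and the `recruiter_action` field is what makes the Art. 14 override KPIs computable later.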
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This system performs automated screening and ranking of job applicants and produces match scores/shortlists that are the primary input for interview decisions. Recruitment and access to employment are explicitly covered by Annex III (4) “employment, worker management and access to self-employment,” making it High-Risk under Art. 6 regardless of human final decision-making. Because it evaluates and ranks identifiable natural persons (applicants) using their personal data, it also constitutes profiling, increasing governance and oversight expectations.

Profiling detected — triggers automatic high-risk (Art. 6(2))

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

AI Credit Scoring for Consumer Loans

Kreditrisiko — Commerzbank AG

High Risk
Use Case Input
Description

Machine learning model that assesses creditworthiness of loan applicants by analyzing income, employment history, spending patterns, and third-party credit bureau data. Outputs a risk score that determines loan approval, interest rate, and credit limit. Used for all consumer loan decisions under €50,000.

Business Purpose

Automate credit decisions for high-volume consumer lending while maintaining regulatory compliance with BaFin requirements.

Vendor

SCHUFA Holding AG

Product

SCHUFA ScoreAI

Category: Decision Support · High Risk · 86% confidence
Annex III: Essential Services · Role: Deployer
Suggested Flags:
Personal Data · Customer Facing · External Sharing · Confidential Data · Automated Decision Support
Compliance Actions
Treat this as an Annex III high-risk system (creditworthiness/credit access) and route to formal EU AI Act high-risk governance; do not rely on “vendor says compliant” as sufficient.
Clarify roles contractually: Commerzbank as deployer; SCHUFA likely provider. Ensure provider supplies required high-risk documentation (technical documentation, instructions for use, performance/limitations, logs) and supports audits.
Implement/maintain a risk management system (Art. 9): identify foreseeable misuse (proxy discrimination via spending patterns/ZIP codes), define risk controls, test residual risk, and document decisions.
Data governance (Art. 10): document training/validation data relevance, representativeness, and bias testing; define feature governance (e.g., rules for use of bureau data, spending data), data quality checks, and drift monitoring.
Technical documentation & record keeping (Art. 11–12): ensure decision traceability (inputs/features versions, score, thresholds, model version, reason codes) and retention aligned with banking/GDPR requirements.
Transparency to affected persons and internal users (Art. 13 + banking conduct expectations): provide meaningful information on main parameters/logic, limitations, and appropriate interpretation; ensure adverse-action style explanations and complaint/appeal paths (also aligns with GDPR Arts. 13–15).
Human oversight (Art. 14): because your stated intent is “fully automated,” add a meaningful human review route for edge cases/appeals and define when overrides are required; train staff and monitor override rates.
Accuracy/robustness/cybersecurity (Art. 15): set performance KPIs (AUC, calibration), fairness metrics, stress tests (economic downturn), adversarial resistance (data poisoning), and incident response.
Conformity assessment / assurance steps: verify that the provider completed applicable conformity assessment and CE/required declarations for high-risk; perform deployer-side acceptance testing before production and after material changes.
Operational controls: change management for thresholds and policy rules; periodic model validation; monitoring for disparate impact; vendor management aligned with BaFin model risk management expectations.
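One concrete monitoring KPI named above is selection-rate disparity. A common screen is the disparate impact ratio; the 0.8 cutoff mentioned in the comment is the "four-fifths rule" convention from US employment practice, shown here purely as an illustrative threshold, not an EU legal standard:

```python
def disparate_impact_ratio(approved_a, total_a, approved_b, total_b):
    """Ratio of approval rates between a monitored group (a) and a
    reference group (b); values well below 1.0 flag potential disparity."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    return rate_a / rate_b

# 30% vs 50% approval rate → ratio 0.6; many teams review anything < 0.8.
ratio = disparate_impact_ratio(approved_a=120, total_a=400,
                               approved_b=300, total_b=600)
print(round(ratio, 2))  # → 0.6
```

In practice such a ratio would be computed per segment (product, region, channel) on each monitoring cycle and fed into the incident process described above.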
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This use case determines access to an essential private service—consumer credit—by generating a risk score that directly drives approval/rejection and loan terms, which falls under Annex III ‘essential services’ and therefore qualifies as high-risk under Art. 6. The system performs profiling of natural persons (creditworthiness scoring based on personal data) and is intended to be fully automated, increasing regulatory scrutiny and requiring strong human oversight measures even if the business process aims for automation. Because outcomes have significant effects on individuals (credit access, pricing, limits), both EU AI Act high-risk obligations and GDPR automated decision-making safeguards are triggered.

Profiling detected — triggers automatic high-risk (Art. 6(2))

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

Automated University Admissions Pre-Screening

Studierendenkanzlei — Technische Universität München

High Risk
Use Case Input
Description

AI system that evaluates university applications by scoring motivation letters, academic transcripts, and extracurricular profiles. Generates a ranked shortlist of candidates for each study programme. Admissions committee uses the ranking to prioritize interview invitations.

Business Purpose

Handle the growing volume of applications (15,000+ per year) while maintaining fair and consistent evaluation criteria across all faculties.

Category: Decision Support · High Risk · 86% confidence
Annex III: Education & Training · Role: Deployer
Suggested Flags:
Personal Data · Customer Facing · Confidential Data · Automated Decision Support
Compliance Actions
Human decision required: treat this as a high-risk Annex III education/training system and route to internal legal/compliance review (suggestion, not legal advice).
Establish and document a risk management system covering intended purpose, foreseeable misuse, harms (bias/discrimination, exclusion, due process), and mitigation measures (Art. 9).
Implement data governance controls: document data sources (motivation letters, transcripts, extracurriculars), representativeness, quality checks, label/ground-truth strategy for training/validation, and bias testing across protected characteristics proxies; define retention and access controls (Art. 10).
Prepare technical documentation: model type, features used, preprocessing, ranking/scoring logic, performance metrics, limitations, and change management (Art. 11).
Enable logging/record-keeping to support traceability of each scoring/ranking outcome and committee use (Art. 12).
Provide clear information to the admissions committee (deployer-side users) about system capabilities/limits, appropriate use, and interpretation of scores/ranks (Art. 13).
Design and enforce human oversight: define when committee must override/second-review, contestation paths, and safeguards against automation bias; ensure meaningful human involvement before interview invitations are decided (Art. 14).
Validate accuracy/robustness/cybersecurity: stress-test for distribution shifts (different programmes/faculties, international transcripts), adversarial inputs (prompted letters), and secure access to applicant data (Art. 15).
Conformity assessment planning: determine whether TU München is acting as provider (in-house model) or deployer (vendor tool) and execute the applicable high-risk obligations, including procurement clauses, supplier documentation, and (where applicable) registration/conformity steps before putting into service.
Transparency to applicants: inform applicants that AI is used in admissions pre-screening and provide appropriate explanations of the role of the AI in the process; if any AI-generated communications are sent externally, apply Art. 50-style labeling/disclosure as relevant.
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This use case falls under EU AI Act Art. 6 high-risk because it is used for access to and management of education/vocational training, specifically admissions pre-screening, which is listed in the Annex III education and vocational training category. The system scores and ranks individual applicants and the ranking directly determines who proceeds to interviews, meaning it materially influences access to education opportunities and raises discrimination and due-process risks. It is not prohibited under Art. 5 (no social scoring by public authorities outside the banned context, no emotion recognition, no remote biometric identification), but it triggers the full high-risk lifecycle obligations (Arts. 9–15).

Profiling detected — triggers automatic high-risk (Art. 6(2))

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

AI-Based Grid Load Forecasting and Switching

Netzführung — Austrian Power Grid AG

High Risk
Use Case Input
Description

Machine learning system that predicts electricity demand across the Austrian grid and automatically adjusts load distribution between power stations. The AI triggers switching commands to reroute power flows and prevent grid overload during peak periods.

Business Purpose

Ensure grid stability, prevent blackouts, and optimize energy distribution across the national electricity network.

Vendor

Siemens Energy

Product

GridAI Optimizer

Category: Decision Support · High Risk · 78% confidence
Annex III: Critical Infrastructure · Role: Deployer
Suggested Flags:
External Sharing · Confidential Data · Automated Decision Support
Compliance Actions
Do not treat this as legal advice; run an internal regulatory assessment to confirm it falls under Annex III critical infrastructure (grid operation/load dispatch) and whether any national implementing rules apply for Austria.
Clarify roles/obligations contractually: Siemens Energy likely acts as the AI system provider; Austrian Power Grid AG acts as deployer. Ensure the vendor supplies required high-risk documentation, instructions for use, and residual-risk information (Art. 13) and supports conformity assessment evidence (Art. 43).
Implement/verify a risk management system for the deployed use: hazard analysis for wrong forecasts or erroneous switching (cascading outages), pre-defined risk controls, and post-deployment monitoring/incident handling aligned to Art. 9.
Data governance controls (Art. 10): document all input data streams (SCADA/telemetry, weather, market signals), data quality/latency requirements, handling of missing data, concept drift monitoring, and validation datasets representing seasonal/rare peak events.
Technical documentation & system description (Art. 11): end-to-end architecture, model versioning, interfaces to grid control systems, safety constraints, fallback logic, and change management for model updates.
Logging/record-keeping (Art. 12): ensure auditable logs of forecasts, confidence/uncertainty outputs, switching recommendations/commands issued, human operator interventions/overrides, and system health metrics—sufficient for incident reconstruction.
Human oversight (Art. 14): define operational modes (advisory vs auto-execution), approval thresholds, override/kill-switch procedures, separation of duties, operator training, and clear HMI that communicates constraints and uncertainty.
Accuracy/robustness/cybersecurity (Art. 15): stress testing for extreme events, adversarial/cyber scenarios, redundancy, secure-by-design integration with OT networks, resilience to sensor spoofing, and safe degradation to rule-based dispatch.
Conformity assessment and procurement gating: require evidence of conformity assessment for a high-risk system (Art. 43) before full production roll-out; document acceptance tests in a staging environment and periodic re-validation after major changes.
Operational governance: establish incident escalation pathways and criteria for notifying the provider and relevant authorities per organizational policy; maintain a register of the high-risk AI system internally and confirm EU database registration responsibilities (often provider-led).
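The human-oversight gating described above (advisory vs auto-execution with approval thresholds) can be sketched as a simple guard. The threshold values and names here are illustrative assumptions, not figures from any grid operator's policy:

```python
def dispatch_mode(forecast_uncertainty: float, load_change_mw: float,
                  auto_uncertainty_max: float = 0.05,
                  auto_change_max_mw: float = 50.0) -> str:
    """Gate between advisory and auto-execution modes: only small,
    high-confidence adjustments are executed without an operator;
    everything else is routed to the control room for approval."""
    if (forecast_uncertainty <= auto_uncertainty_max
            and abs(load_change_mw) <= auto_change_max_mw):
        return "auto_execute"
    return "operator_approval_required"
```

Both the mode chosen and any operator override would land in the Art. 12 audit logs listed above, so incidents can be reconstructed command by command.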
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This system is used to manage and operate the national high-voltage electricity grid by forecasting demand and automatically triggering switching/load-balancing commands. AI used in the management and operation of critical infrastructure is classified as high-risk under Art. 6 in conjunction with Annex III (critical infrastructure), because malfunction can cause significant harm (e.g., outages/blackouts). The use case does not match Art. 5 prohibited practices (no social scoring, biometric identification, manipulation, or workplace/education emotion recognition). Because it directly controls operational grid switching, high-risk obligations (risk management, documentation, logging, oversight, robustness/cybersecurity, and conformity assessment) are typically expected.

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

Customer Service Chatbot for Insurance Inquiries

Kundenservice — Allianz Versicherungs-AG

Limited Risk
Use Case Input
Description

Conversational AI chatbot on the company website that answers customer questions about insurance policies, claims status, and coverage details. Escalates complex queries to human agents. Does not make coverage decisions or process claims — purely informational.

Business Purpose

Provide 24/7 first-level customer support, reducing call center volume by handling routine inquiries automatically.

Vendor

Cognigy GmbH

Product

Cognigy.AI

Category: Customer Service · Limited Risk · 78% confidence
Role: Deployer
Suggested Flags:
Personal Data · Customer Facing · Confidential Data
Compliance Actions
Implement clear user-facing disclosure that customers are interacting with an AI system (at the start of the chat and as needed throughout) (Art. 50).
Provide an easy, immediate route to a human agent (handover/escalation) and communicate it clearly in the UI (good practice supporting transparency/user autonomy).
Put in place content/quality controls to reduce hallucinations and misleading insurance information (e.g., approved knowledge base, retrieval-only mode where feasible, response guardrails, blocked topics).
Maintain logs/records of chatbot interactions appropriate to purpose (e.g., for troubleshooting, complaint handling, and safety monitoring), with defined retention periods (also supports privacy accountability).
Vendor/deployer governance: document intended use, configuration, testing/acceptance criteria, and changes; ensure contractual terms cover security, incident handling, and data processing roles (DPA) where personal data is processed.
If the chatbot output may be published or reused externally (e.g., testimonials, marketing, public FAQs), add labeling/controls for AI-generated content where it reasonably reaches the public (Art. 50).
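The blocked-topics and human-handover controls above can be approximated with a pre-response router. The keyword list is a toy stand-in (production systems use intent classifiers), and the return labels are illustrative:

```python
# Toy keyword screen; a real deployment would use an intent classifier.
BLOCKED_TOPICS = {"approve", "deny", "legal advice"}

def route_message(user_text: str) -> str:
    """Keep the bot purely informational: anything that looks like a
    coverage or claims decision request is handed to a human agent."""
    lowered = user_text.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "escalate_to_human_agent"
    return "answer_from_knowledge_base"

print(route_message("Will you approve my claim?"))  # escalate_to_human_agent
print(route_message("What does my policy cover?"))  # answer_from_knowledge_base
```

The Art. 50 disclosure itself would sit outside this router, shown once at conversation start and repeated at handover.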
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This is a customer-facing conversational AI on a website, which triggers EU AI Act transparency obligations for chatbots (Art. 50). The described use is informational and does not perform eligibility, pricing, claims handling, or other decisions about access to essential services/benefits, so it does not fall under Annex III high-risk categories (e.g., essential services). Based on the decision tree, it is therefore classified as limited risk due to the required user disclosure, with no Annex III category.

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

AI Marketing Copy Generator

Marketing Communications — Red Bull GmbH, Salzburg

Limited Risk
Use Case Input
Description

Generative AI tool used by the marketing team to draft social media posts, email newsletters, and blog articles. All content is reviewed and edited by human copywriters before publication. The tool generates text based on brand guidelines and campaign briefs.

Business Purpose

Accelerate content production pipeline and maintain consistent brand voice across channels.

Vendor

OpenAI

Product

ChatGPT Enterprise

Category: Content Generation · Limited Risk · 74% confidence
Role: Deployer
Suggested Flags:
Personal Data · Customer Facing · External Sharing · Confidential Data
Compliance Actions
Implement an external-facing transparency process for marketing outputs: decide when/how to disclose that content was AI-generated or AI-assisted (Art. 50), and ensure it is consistently applied across channels (social, email, blog).
Add internal policy + playbook for marketers/copywriters: permitted uses, prohibited claims, fact-checking requirements, citation/source requirements, and escalation path for sensitive topics (health, minors, regulated products, etc.).
Maintain basic records for governance: which tool/version is used (ChatGPT Enterprise), high-level prompts/brand guidelines provided, review/approval workflow, and responsibility assignment (editorial owner) to evidence transparency and human control.
Put in place safeguards to reduce IP/confidentiality risk: prompt hygiene guidance, no unnecessary confidential data in prompts, and contractual/IT controls aligned with the Enterprise setup (access control, logging, retention settings).
Ensure users (employees) are aware they are using an AI system and trained on limitations (hallucinations, bias, defamation) and on how to label/disclose outputs where required (Art. 50).
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This use case is a generative AI system producing marketing text intended for publication to external audiences (social media, newsletters, blog). Under the EU AI Act decision tree, it is not prohibited (Art. 5) and it does not fall under Annex III high-risk areas (Art. 6) because it is not used for decisions about individuals (e.g., employment, credit, essential services). However, because it generates content that is disseminated externally, transparency obligations for AI-generated/synthetic content are likely triggered (Art. 50), so the appropriate tier is limited risk even with human editorial review.

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

AI Code Completion for Software Development

Software Engineering — Bosch Digital, Stuttgart

Minimal Risk
Use Case Input
Description

AI-powered code completion tool integrated into developer IDEs. Suggests code completions, generates boilerplate, and explains code snippets. Used exclusively by internal software engineers for productivity. No customer-facing output; no personal data processed.

Business Purpose

Increase developer productivity by 20-30% through intelligent code suggestions and automated boilerplate generation.

Vendor

GitHub

Product

GitHub Copilot Enterprise

Category: Internal Automation · Minimal Risk · 74% confidence
Role: Deployer
Suggested Flags:
Confidential Data
Compliance Actions
Document the intended use and boundaries (internal developer productivity tool; no use for HR decisions, customer communications, or safety-critical code without controls) for internal AI governance and audits.
Confirm and periodically re-validate data flows: no production/customer data, no personal data, no code containing secrets; ensure tenant/Enterprise settings prevent training on Bosch content if required by policy.
Implement secure-use controls: block or scan for secrets (API keys), enforce OSS/license compliance checks, and require peer review for AI-suggested code before merge (SDLC control).
Maintain internal transparency to users: inform engineers that the tool is AI-assisted and provide usage guidelines (e.g., verify correctness, security, and licensing).
Vendor/third-party risk management: review Copilot Enterprise terms, security documentation, data processing addendum (if any), and hosting/transfer locations relevant for DACH/EU requirements.
Set up incident/feedback loop: track erroneous or risky suggestions, security findings, and implement guardrails (policy, prompts, IDE settings).
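The secret-scanning control above can be sketched as a pre-merge check over AI-suggested code. This is a minimal illustration, not a production ruleset: the patterns and the `find_secrets` helper are hypothetical examples, and a real deployment would rely on a dedicated scanner running in CI.

```python
import re

# Illustrative secret-like patterns; a real SDLC control would use a
# maintained ruleset, not this short hand-written list.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                              # AWS access key ID shape
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),  # inline API key assignment
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),       # PEM private key header
]

def find_secrets(code_text: str) -> list[str]:
    """Return secret-like substrings found in AI-suggested code."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(code_text))
    return hits

suggestion = 'client = Client(api_key="sk-0123456789abcdef0123")'
print(find_secrets(suggestion))  # the inline API key is flagged before merge
```

In practice this check would run alongside the peer-review gate, so flagged suggestions are blocked rather than merely logged.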
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

Based on the described intended use, this is an internal code-completion/productivity tool for software engineers: it does not fall under any Annex III high-risk area (Art. 6 + Annex III), does not profile natural persons, and does not match any Art. 5 prohibited practices. Art. 50 limited-risk transparency obligations are typically aimed at systems that interact with people (e.g., chatbots) or generate content for external audiences; since outputs are internal-only and not customer-facing, the use case is best classified as minimal risk, with a note to reassess if it is deployed as an interactive assistant requiring disclosure to users.

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

Sales Forecast Dashboard with ML Predictions

Data Science — Swarovski AG, Wattens

Minimal Risk
Use Case Input
Description

Internal dashboard that uses machine learning to forecast quarterly sales revenue based on pipeline data, historical trends, and market indicators. Developed in-house by the data science team. Outputs aggregate predictions at product-line level — no individual customer scoring or profiling.

Business Purpose

Enable data-driven quarterly planning and inventory management by providing accurate sales forecasts per product line.

Category: data analysis · Minimal Risk · 78% confidence
Role: Provider & Deployer
Suggested Flags:
Personal Data · Confidential Data · Automated Decision Support
Legal Basis (EU AI Act)
Compliance Actions
Document the intended purpose and boundaries (aggregate, product-line forecasting; no individual scoring) in internal AI inventory/governance records (good practice, supports defensibility if scope creeps).
Implement basic model governance controls: versioning, change management, and reproducibility of training/inference runs (voluntary best practice for minimal-risk).
Define and monitor model quality KPIs (forecast error, drift, bias at the product-line level) and set a retraining/rollback process if performance degrades.
Access control & least-privilege for the dashboard and underlying datasets; logging of forecast consumption and model changes (security best practice).
Add internal user guidance: forecasts are decision-support; require human judgment for planning decisions; specify known limitations and confidence intervals.
Re-check classification if scope changes (e.g., forecasting by customer/account, salesperson performance, credit/price eligibility, or other individual-level scoring), which could trigger Art. 6/Annex III high-risk or GDPR Art. 22 issues.
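The model-quality KPIs above (forecast error, drift, retraining trigger) can be sketched in a few lines. The metric choice (MAPE), the baseline, and the drift tolerance are illustrative assumptions for this example, not figures from the use case.

```python
# Hedged sketch of the monitoring step: per-product-line forecast error
# (MAPE) plus a simple drift check that triggers the retraining/rollback
# process when error exceeds a tolerance over the baseline.

def mape(actual: list[float], forecast: list[float]) -> float:
    """Mean absolute percentage error for one product line."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual) * 100

def needs_retraining(current_mape: float, baseline_mape: float,
                     drift_tolerance: float = 1.5) -> bool:
    """Flag retraining when current error drifts past tolerance x baseline."""
    return current_mape > baseline_mape * drift_tolerance

actual = [120.0, 100.0, 80.0]      # quarterly revenue per product line (illustrative)
forecast = [110.0, 105.0, 90.0]
err = mape(actual, forecast)
print(round(err, 1), needs_retraining(err, baseline_mape=5.0))  # 8.6 True
```

Logging these KPIs per release also supports the governance records named in the first action: the numbers document that the minimal-risk scope (aggregate forecasting) has not drifted.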
GDPR Intersections
DPIA Required
Special Category Data
Art. 22 Automated
DPO Required
Why this risk level?

This use case is an internal ML-based forecasting dashboard producing aggregate predictions at product-line level for planning and inventory management. It does not fall under any Art. 5 prohibited practices and does not match an Annex III high-risk area because it is not used to make or materially influence decisions about individuals in employment, essential services, education, law enforcement, etc. As described, it also does not perform profiling of natural persons under Art. 6(2), since there is no individual customer/employee scoring or categorization. Art. 50 transparency obligations for chatbots and deepfakes are likewise not triggered, because the system neither interacts with natural persons nor generates synthetic content.
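The screening order used in this reasoning (Art. 5, then Art. 6/Annex III, then Art. 50, otherwise minimal risk) can be written as a small decision tree. This is an illustrative sketch of the screening logic described in these examples, not AI-Casefile's actual engine; the boolean inputs are hypothetical answers a reviewer would supply per use case.

```python
# Sketch of the EU AI Act risk-tier decision tree followed in the
# "Why this risk level?" explanations above. Inputs are reviewer
# judgments, not automated detections.

def screen_risk_tier(prohibited_practice: bool,
                     annex_iii_area: bool,
                     profiles_individuals: bool,
                     interacts_or_generates_content: bool) -> str:
    if prohibited_practice:                       # Art. 5: social scoring, manipulation, ...
        return "Prohibited"
    if annex_iii_area or profiles_individuals:    # Art. 6 + Annex III
        return "High Risk"
    if interacts_or_generates_content:            # Art. 50: chatbots, synthetic content
        return "Limited Risk"
    return "Minimal Risk"                         # voluntary codes of practice only

# The forecasting dashboard: no Art. 5 match, no Annex III area,
# no individual profiling, no user-facing interaction or content output.
print(screen_risk_tier(False, False, False, False))  # Minimal Risk
```

Note how the same tree yields "Limited Risk" for the marketing-content example (externally published AI-generated content) and "Prohibited" for the citizen-scoring example.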

For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.

AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.

Important: AI Classifications Require Human Review

AI-Casefile provides AI-powered risk classification as a starting point for your compliance process. These classifications are advisory and do not constitute legal advice. The final risk assessment must always be reviewed and approved by a qualified legal or compliance professional.

AI-Assisted, Not AI-Decided

Our engine analyzes use case descriptions and suggests a risk tier based on EU AI Act criteria. This is a recommendation — not a binding legal determination.

Human Approval Required

Every classification must be reviewed by a compliance officer or legal counsel before it is used for regulatory reporting or decision-making.

No Liability for AI Output

AI-Casefile and its AI classifications do not replace professional legal advice. Your organization remains responsible for the final risk assessment and compliance decisions.

Frequently Asked Questions

Common questions about EU AI Act classification and the August 2026 deadline.

What are the four EU AI Act risk tiers?

The EU AI Act classifies AI systems into four risk tiers: Prohibited (banned outright, e.g., social scoring, real-time biometric surveillance), High Risk (strict requirements including documentation, conformity assessments, and human oversight — covers areas like employment, education, critical infrastructure), Limited Risk (transparency obligations, e.g., chatbots must disclose their AI nature), and Minimal Risk (voluntary codes of practice, no mandatory requirements).

When is the August 2026 deadline and what does it require?

August 2, 2026 marks the full application of the EU AI Act for high-risk AI systems. By this date, organizations deploying or providing high-risk AI must have complete technical documentation, risk management systems, data governance measures, conformity assessments, and registration in the EU database. The prohibited practices and AI literacy requirements have already been in force since February 2025.

How does AI-Casefile determine risk classification?

AI-Casefile uses AI-powered analysis to evaluate your AI system description against the EU AI Act's criteria. It considers the system's intended purpose, data processed, deployment context, and affected persons to suggest a risk tier, applicable articles, compliance actions, and GDPR intersections. Each classification is a starting point — human review and legal approval are always required.

Are these example classifications legally binding?

No. These examples are illustrative and educational. They demonstrate how AI-Casefile's classification engine works but do not constitute legal advice. Every classification must be reviewed and approved by a qualified legal or compliance professional. Your organization remains responsible for final compliance decisions.

What penalties apply for non-compliance with the EU AI Act?

Penalties are tiered: up to €35 million or 7% of global annual turnover for prohibited AI practices, up to €15 million or 3% for high-risk AI violations, and up to €7.5 million or 1.5% for other violations. These penalties apply from the respective enforcement dates — prohibited practices since February 2025, high-risk systems from August 2026.
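The "up to €X or Y% of turnover" tiers above resolve to whichever amount is higher (Art. 99). The arithmetic can be shown with a short sketch; the turnover figure is an illustrative assumption.

```python
# Tiered penalty ceilings from the EU AI Act: (fixed cap in EUR, turnover share).
# The applicable maximum is the higher of the two (Art. 99).
PENALTY_TIERS = {
    "prohibited": (35_000_000, 0.07),   # Art. 5 violations
    "high_risk": (15_000_000, 0.03),    # high-risk obligation violations
    "other": (7_500_000, 0.015),        # other violations
}

def max_penalty(tier: str, global_turnover_eur: float) -> float:
    fixed_cap, pct = PENALTY_TIERS[tier]
    return max(fixed_cap, pct * global_turnover_eur)

# A company with EUR 1bn global annual turnover facing a prohibited-practice fine:
print(max_penalty("prohibited", 1_000_000_000))  # 70000000.0
```

For large undertakings the percentage cap dominates: at €1bn turnover, the prohibited-practice ceiling is €70M, twice the fixed €35M cap.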

Which industries are covered by these examples?

The 10 examples cover a wide range of sectors affected by the EU AI Act: public administration (social scoring), transportation (biometric surveillance), human resources (resume screening), financial services (credit scoring), education (admissions), energy (critical infrastructure), insurance (customer chatbots), marketing (content generation), software development (code assistants), and sales (analytics dashboards).

August 2026 is closer than you think

Don't wait until the enforcement deadline. Start classifying and documenting your AI systems now — our free trial gives you instant access to AI-powered EU AI Act classifications.