Explore 10 real-world AI use cases classified by our engine across all four EU AI Act risk tiers. Each example shows the full classification output — risk tier, applicable articles, compliance actions, and GDPR intersections. The August 2026 enforcement deadline is approaching — start preparing now.
Aug 2026
Full Enforcement
€35M
Maximum Penalty
10 Examples
All Risk Tiers
Starting August 2, 2026, all high-risk AI systems must be fully documented, risk-assessed, and compliant with the EU AI Act (Regulation 2024/1689). Organizations face penalties up to €35 million or 7% of global turnover for non-compliance.
All AI systems in Annex III categories — employment, education, critical infrastructure, law enforcement — must have complete technical documentation and conformity assessments.
Social scoring, real-time biometric surveillance, and manipulative AI have been banned since February 2025. Continued violations carry the highest penalties.
AI systems interacting with people must disclose their AI nature. Content generation systems must label their output as AI-generated.
Security Department — Münchner Verkehrsgesellschaft
Live facial recognition system installed across all U-Bahn stations in Munich to identify individuals in real-time against a watchlist database. Cameras continuously scan all passengers entering and exiting stations, matching faces against a database of persons of interest provided by law enforcement.
Enhance public safety by identifying wanted individuals and preventing crime in the public transit network.
SecurVision AG
FaceTrack Live
This use case involves continuous, real-time facial recognition across U-Bahn stations to identify individuals against a watchlist in publicly accessible spaces. Under the EU AI Act decision tree, this matches a prohibited practice under Art. 5: real-time remote biometric identification of people in public spaces (with only narrow, strictly conditioned exceptions typically tied to competent authorities and specific legal authorization). Because the intended use is blanket, continuous scanning of all passengers, it is treated as prohibited rather than as merely high-risk biometric identification under Annex III.
For informational purposes only. This AI-generated screening is not legal advice and does not constitute a binding compliance determination under Regulation (EU) 2024/1689. Always have the final risk assessment reviewed and approved by a qualified legal or compliance professional.
AI-generated classification — requires review and approval by a qualified legal or compliance professional before use.
Human Resources — Deutsche Telekom AG
Automated CV screening tool that parses incoming job applications, extracts qualifications and experience, and ranks candidates against job requirements. Produces a shortlist with match scores. Recruiters use the shortlist as primary input for deciding who to invite to interviews.
Reduce time-to-hire from 45 to 20 days and ensure consistent, bias-reduced initial screening of 500+ applications per open position.
Personio SE
Personio Recruiting AI
This system performs automated screening and ranking of job applicants and produces match scores/shortlists that are the primary input for interview decisions. Recruitment and access to employment are explicitly covered by Annex III (4) “employment, worker management and access to self-employment,” making it High-Risk under Art. 6 regardless of human final decision-making. Because it evaluates and ranks identifiable natural persons (applicants) using their personal data, it also constitutes profiling, increasing governance and oversight expectations.
Credit Risk — Commerzbank AG
Machine learning model that assesses creditworthiness of loan applicants by analyzing income, employment history, spending patterns, and third-party credit bureau data. Outputs a risk score that determines loan approval, interest rate, and credit limit. Used for all consumer loan decisions under €50,000.
Automate credit decisions for high-volume consumer lending while maintaining regulatory compliance with BaFin requirements.
SCHUFA Holding AG
SCHUFA ScoreAI
This use case determines access to an essential private service — consumer credit — by generating a risk score that directly drives approval/rejection and loan terms, which falls under the Annex III "essential services" category and therefore qualifies as high-risk under Art. 6. The system performs profiling of natural persons (creditworthiness scoring based on personal data) and is intended to be fully automated, increasing regulatory scrutiny and requiring strong human oversight measures even if the business process aims for automation. Because outcomes have significant effects on individuals (credit access, pricing, limits), both EU AI Act high-risk obligations and GDPR automated decision-making safeguards are triggered.
Admissions Office — Technische Universität München
AI system that evaluates university applications by scoring motivation letters, academic transcripts, and extracurricular profiles. Generates a ranked shortlist of candidates for each study programme. Admissions committee uses the ranking to prioritize interview invitations.
Handle the growing volume of applications (15,000+ per year) while maintaining fair and consistent evaluation criteria across all faculties.
This use case is high-risk under EU AI Act Art. 6 because it is used for access to and management of education and vocational training, specifically admissions pre-screening, which is listed in the corresponding Annex III category. The system scores and ranks individual applicants, and the ranking directly determines who proceeds to interviews, meaning it materially influences access to educational opportunities and raises discrimination and due-process risks. It is not prohibited under Art. 5 (no social scoring, no emotion recognition, no remote biometric identification), but it triggers the full high-risk lifecycle obligations (Arts. 9–15).
Grid Operations — Austrian Power Grid AG
Machine learning system that predicts electricity demand across the Austrian grid and automatically adjusts load distribution between power stations. The AI triggers switching commands to reroute power flows and prevent grid overload during peak periods.
Ensure grid stability, prevent blackouts, and optimize energy distribution across the national electricity network.
Siemens Energy
GridAI Optimizer
This system is used to manage and operate the national high-voltage electricity grid by forecasting demand and automatically triggering switching/load-balancing commands. AI used in the management and operation of critical infrastructure is classified as high-risk under Art. 6 in conjunction with Annex III (critical infrastructure), because malfunction can cause significant harm (e.g., outages/blackouts). The use case does not match Art. 5 prohibited practices (no social scoring, biometric identification, manipulation, or workplace/education emotion recognition). Because it directly controls operational grid switching, high-risk obligations (risk management, documentation, logging, oversight, robustness/cybersecurity, and conformity assessment) are typically expected.
Customer Service — Allianz Versicherungs-AG
Conversational AI chatbot on the company website that answers customer questions about insurance policies, claims status, and coverage details. Escalates complex queries to human agents. Does not make coverage decisions or process claims — purely informational.
Provide 24/7 first-level customer support, reducing call center volume by handling routine inquiries automatically.
Cognigy GmbH
Cognigy.AI
This is a customer-facing conversational AI on a website, which triggers EU AI Act transparency obligations for chatbots (Art. 50). The described use is informational and does not perform eligibility, pricing, claims handling, or other decisions about access to essential services/benefits, so it does not fall under Annex III high-risk categories (e.g., essential services). Based on the decision tree, it is therefore classified as limited risk due to the required user disclosure, with no Annex III category.
Marketing Communications — Red Bull GmbH, Salzburg
Generative AI tool used by the marketing team to draft social media posts, email newsletters, and blog articles. All content is reviewed and edited by human copywriters before publication. The tool generates text based on brand guidelines and campaign briefs.
Accelerate content production pipeline and maintain consistent brand voice across channels.
OpenAI
ChatGPT Enterprise
This use case is a generative AI system producing marketing text intended for publication to external audiences (social media, newsletters, blog). Under the EU AI Act decision tree, it is not prohibited (Art. 5) and it does not fall under Annex III high-risk areas (Art. 6) because it is not used for decisions about individuals (e.g., employment, credit, essential services). However, because it generates content that is disseminated externally, transparency obligations for AI-generated/synthetic content are likely triggered (Art. 50), so the appropriate tier is limited risk even with human editorial review.
Software Engineering — Bosch Digital, Stuttgart
AI-powered code completion tool integrated into developer IDEs. Suggests code completions, generates boilerplate, and explains code snippets. Used exclusively by internal software engineers for productivity. No customer-facing output; no personal data processed.
Increase developer productivity by 20-30% through intelligent code suggestions and automated boilerplate generation.
GitHub
GitHub Copilot Enterprise
Based on the described intended use, this is an internal code-completion/productivity tool for software engineers and does not fall under any Annex III high-risk area (Art. 6 + Annex III) and does not profile natural persons. It also does not match any Art. 5 prohibited practices. Art. 50 limited-risk transparency obligations are typically aimed at systems interacting with people as chatbots or generating content for external audiences; since outputs are internal-only and not customer-facing, the use case is best classified as minimal risk, with a note to reassess if it is deployed as an interactive assistant requiring disclosure to users.
Data Science — Swarovski AG, Wattens
Internal dashboard that uses machine learning to forecast quarterly sales revenue based on pipeline data, historical trends, and market indicators. Developed in-house by the data science team. Outputs aggregate predictions at product-line level — no individual customer scoring or profiling.
Enable data-driven quarterly planning and inventory management by providing accurate sales forecasts per product line.
This use case is an internal ML-based forecasting dashboard producing aggregate predictions at product-line level for planning and inventory management. It does not fall under any Art. 5 prohibited practices and does not match an Annex III high-risk area because it is not used to make or materially influence decisions about individuals in employment, essential services, education, law enforcement, etc. As described, it also does not perform profiling of natural persons under Art. 6(2) because there is no individual customer/employee scoring or categorization; therefore Art. 50 transparency obligations for chatbots/deepfakes are not triggered either.
AI-Casefile provides AI-powered risk classification as a starting point for your compliance process. These classifications are advisory and do not constitute legal advice. The final risk assessment must always be reviewed and approved by a qualified legal or compliance professional.
AI-Assisted, Not AI-Decided
Our engine analyzes use case descriptions and suggests a risk tier based on EU AI Act criteria. This is a recommendation — not a binding legal determination.
Human Approval Required
Every classification must be reviewed by a compliance officer or legal counsel before it is used for regulatory reporting or decision-making.
No Liability for AI Output
AI-Casefile and its AI classifications do not replace professional legal advice. Your organization remains responsible for the final risk assessment and compliance decisions.
Common questions about EU AI Act classification and the August 2026 deadline.
The EU AI Act classifies AI systems into four risk tiers: Prohibited (banned outright, e.g., social scoring, real-time biometric surveillance), High Risk (strict requirements including documentation, conformity assessments, and human oversight — covers areas like employment, education, critical infrastructure), Limited Risk (transparency obligations, e.g., chatbots must disclose their AI nature), and Minimal Risk (voluntary codes of practice, no mandatory requirements).
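The tier logic described above can be sketched as a simple decision function. This is an illustrative simplification, not the Act's legal taxonomy: the category names and flags below are assumptions, and the key point is the ordering, since prohibited practices (Art. 5) are checked before Annex III high-risk areas (Art. 6), which are checked before Art. 50 transparency triggers.

```python
# Illustrative sketch of the four-tier decision order; category names are assumptions.
PROHIBITED_PRACTICES = {"social_scoring", "realtime_remote_biometric_id", "manipulative_ai"}
ANNEX_III_AREAS = {"employment", "education", "critical_infrastructure",
                   "law_enforcement", "essential_services", "biometrics"}

def classify(practices: set, annex_iii_area: str or None,
             interacts_with_people: bool, generates_synthetic_content: bool) -> str:
    """Suggest a risk tier for a use case. Order matters:
    Art. 5 first, then Annex III (Art. 6), then Art. 50 transparency."""
    if practices & PROHIBITED_PRACTICES:
        return "prohibited"
    if annex_iii_area in ANNEX_III_AREAS:
        return "high"
    if interacts_with_people or generates_synthetic_content:
        return "limited"
    return "minimal"
```

For example, the CV-screening use case above maps to `classify(set(), "employment", False, False)`, which returns `"high"` regardless of whether a human makes the final hiring decision.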
August 2, 2026 marks the full application of the EU AI Act for high-risk AI systems. By this date, organizations deploying or providing high-risk AI must have complete technical documentation, risk management systems, data governance measures, conformity assessments, and registration in the EU database. The prohibited practices and AI literacy requirements have been in force since February 2025.
AI-Casefile uses AI-powered analysis to evaluate your AI system description against the EU AI Act's criteria. It considers the system's intended purpose, data processed, deployment context, and affected persons to suggest a risk tier, applicable articles, compliance actions, and GDPR intersections. Each classification is a starting point — human review and legal approval are always required.
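The classification output described above (risk tier, applicable articles, compliance actions, GDPR intersections, plus the mandatory human sign-off) could be modeled roughly as follows; the field names are hypothetical, not AI-Casefile's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Classification:
    """Hypothetical shape of a classification result (field names are assumptions)."""
    risk_tier: str                                         # "prohibited" | "high" | "limited" | "minimal"
    articles: list = field(default_factory=list)           # e.g. ["Art. 6", "Annex III (4)"]
    compliance_actions: list = field(default_factory=list) # e.g. ["conformity assessment"]
    gdpr_intersections: list = field(default_factory=list) # e.g. ["Art. 22 automated decisions"]
    human_approved: bool = False                           # must be set True by a reviewer before use
```

The `human_approved` default of `False` mirrors the rule that every AI-generated classification starts as a recommendation until a compliance officer signs off.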
No. These examples are illustrative and educational. They demonstrate how AI-Casefile's classification engine works but do not constitute legal advice. Every classification must be reviewed and approved by a qualified legal or compliance professional. Your organization remains responsible for final compliance decisions.
Penalties are tiered: up to €35 million or 7% of global annual turnover for prohibited AI practices, up to €15 million or 3% for high-risk AI violations, and up to €7.5 million or 1.5% for other violations. These penalties apply from the respective enforcement dates — prohibited practices since February 2025, high-risk systems from August 2026.
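The tiered caps combine a fixed amount with a share of global annual turnover; for undertakings, Art. 99 takes whichever is higher. A minimal sketch of that arithmetic, with the tier keys as assumptions:

```python
def max_penalty(tier: str, global_turnover_eur: float) -> float:
    """Upper bound of a fine: the higher of a fixed cap and a
    percentage of global annual turnover (Art. 99, for undertakings)."""
    caps = {
        "prohibited": (35_000_000, 0.07),   # Art. 5 violations
        "high_risk":  (15_000_000, 0.03),   # high-risk obligation violations
        "other":      (7_500_000, 0.015),   # e.g. transparency breaches
    }
    fixed, pct = caps[tier]
    return max(fixed, pct * global_turnover_eur)
```

For a company with €1 billion global turnover, a prohibited-practice violation caps at `max(35_000_000, 0.07 * 1_000_000_000)`, i.e. €70 million, so the percentage, not the fixed amount, binds for large firms.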
The 10 examples cover a wide range of sectors affected by the EU AI Act: public administration (social scoring), transportation (biometric surveillance), human resources (resume screening), financial services (credit scoring), education (admissions), energy (critical infrastructure), insurance (customer chatbots), marketing (content generation), software development (code assistants), and sales (analytics dashboards).
Don't wait until the enforcement deadline. Start classifying and documenting your AI systems now — our free trial gives you instant access to AI-powered EU AI Act classifications.