If your data or systems interact with the EU, this affects you.
Why it matters
The Act applies to any AI system used in or affecting people in the EU, regardless of where it was built. SaaS products, APIs, apps, and open-source models are all in scope, and you are responsible for compliance.
The Act introduces new rules for risk, transparency, and accountability. Enforcement began in early 2025, so you need to understand the obligations now.
AI risk categories
Your AI fits into one of four risk levels, each with specific obligations:
1. Prohibited AI
Fully banned. Includes:
- Emotion detection in workplaces or schools
- Mass biometric surveillance (e.g. facial recognition from CCTV footage)
- Social scoring or profiling based on race, beliefs or behavior
These uses violate both the AI Act (2024) and GDPR (2018).
2. High risk AI
Strictly regulated. Applies to AI used in:
- Hiring (e.g. CV screening, interview scoring)
- Credit scoring
- Education, healthcare and policing
You must:
- Test, document and monitor these systems
- Register high-risk AI in the EU’s central database before deployment
- Report serious incidents to national regulators within 15 days
- Be prepared for random or targeted audits if contacted
3. General purpose AI (GPAI)
Covers foundation models like ChatGPT. Providers must:
- Publish summaries of training data
- Respect copyright
- Notify the EU AI Office if models pose systemic risk
- Report incidents and cooperate with regulators
4. Low risk AI
Includes chatbots and productivity tools. You must disclose when users interact with AI, unless it’s obvious. A simple label like “AI-assisted” is usually enough. This will affect many new startups.
Timeline
- February 2025: Prohibited AI ban took effect
- August 2025: GPAI transparency rules started
- August 2026: High-risk AI compliance begins
- August 2027: AI in regulated products must comply
What you should do now
- Inventory all AI systems, including third-party tools
- Classify each system by risk level
- Document all training and testing of systems
- Train teams on their legal obligations
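The inventory and classification steps above can be sketched in code. This is a minimal, illustrative example only: the system names, use-case labels, and risk mapping below are hypothetical, and a real classification must be based on legal review of each system against the Act's actual categories.

```python
# Illustrative sketch: a minimal AI-system inventory grouped by EU AI Act
# risk tier. The four tier names follow the categories described above;
# the use-case keywords and systems are hypothetical examples.

RISK_LEVELS = ("prohibited", "high", "gpai", "low")

# Hypothetical mapping from use case to risk tier.
USE_CASE_RISK = {
    "emotion_detection_workplace": "prohibited",
    "social_scoring": "prohibited",
    "cv_screening": "high",
    "credit_scoring": "high",
    "foundation_model": "gpai",
    "support_chatbot": "low",
}

def classify(inventory):
    """Group an inventory of {system_name: use_case} entries by risk tier."""
    grouped = {level: [] for level in RISK_LEVELS}
    for name, use_case in inventory.items():
        # Unknown use cases default to "low" here; in practice they
        # should be flagged for manual legal review instead.
        level = USE_CASE_RISK.get(use_case, "low")
        grouped[level].append(name)
    return grouped

inventory = {
    "HRScreenBot": "cv_screening",       # hiring -> high risk
    "HelpDeskChat": "support_chatbot",   # low risk, disclosure required
    "LoanScorer": "credit_scoring",      # credit scoring -> high risk
}

print(classify(inventory))
```

Even a simple table like this makes the follow-on obligations (registration, incident reporting, audits) easier to assign per system.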
The SUSAN platform supports several of these initiatives.
How ServQual SUSAN Can Help
- EU AI Act Checklist and EU AI Risk exercise: now available in the regional compliance module
- Supports audit preparation and reporting
- Manages third-party risk: risk scoring and vendor compliance tracking
- Inventory AI systems: use SUSAN’s Asset and Inventory Management module to track them
SUSAN keeps you compliant and organised
Next steps
- Explore the EU AI Act Module in SUSAN: https://srql.com/susan
- Use the checklist to assess your current position and identify next steps
- Update your risk dashboard to see where your organisation stands on compliance
Negligence Penalties
- Prohibited AI: up to €35M or 7% of global annual turnover, whichever is higher
- High-risk or GPAI violations: up to €15M or 3% of global annual turnover
- Incorrect or missing information: up to €7.5M or 1% of global annual turnover
Be aware that these penalties can apply to organisations both inside and outside the EU. Major providers such as Google, OpenAI, and Anthropic have already committed to the Act and are ahead of the curve; make sure your organisation keeps pace as well.
Stay Ahead of EU AI Compliance
Start with SUSAN today and keep your organization audit-ready.
"Where your AI goes, compliance and the law follow”
Dara Sturgeon
Security Success Manager | ServQual
FAQs
Who do you serve?
We serve B2B SaaS companies, financial institutions, healthcare providers, manufacturing firms, and legal consultancies.
Do you offer UK-based support?
Yes, we have a UK-based team providing 24/7 incident response and support.
Can you help with regulatory compliance?
Absolutely. We specialize in regulatory compliance and offer full support, from gap assessment to certification readiness.
How are you different from larger vendors?
Unlike large vendors, we provide agile, personalized cybersecurity services backed by global expertise and UK-specific support.