EU AI Act: what actually changes for Data, AI and Product leaders
A pragmatic guide to Regulation (EU) 2024/1689 - no buzzwords, with real examples by function.
The AI Act is a Regulation - specifically, Regulation (EU) 2024/1689. This means it is directly applicable in all Member States. It's not a directive that Member States transpose "later". It's not a recommendation. It's law.
What I see many companies getting wrong is treating the AI Act as a "legal checklist". In practice, it will separate those who make AI a working system - with control, adoption and trust - from those stuck in POCs and "shadow AI".
What changes in real life
In leadership language, no legalese:
- You need to know where you already use AI. Not just "the official project". This includes tools purchased by teams, automations, copilots, AI-generated content and plugins embedded in productivity tools.
- Risk classification becomes part of operations. The question shifts from "do we use AI?" to "what’s the risk and what evidence do I need to maintain?"
- Transparency is not a detail. Especially for synthetic content (deepfakes, generated images) and interactions where people need to know they’re talking to a system - Article 50 of the Regulation.
- GPAI / foundation models enter the governance radar. The EU is placing specific obligations on general-purpose models - including technical documentation, safety testing and transparency about training data.
Timeline: what’s already in effect
The AI Act entered into force on 1 August 2024, but obligations are phased in. Here's what matters:
- 2 February 2025: prohibited practices are banned and AI literacy obligations apply
- 2 August 2025: GPAI obligations, governance structures and penalty provisions apply
- 2 August 2026: most remaining obligations apply, including high-risk systems under Annex III
- 2 August 2027: obligations for high-risk AI embedded in regulated products (Annex I)
In practice: what changes by function
This is where most articles stop. They explain the regulation but don’t say what changes day-to-day for the people in charge. I’ll try to be more useful than that.
Credit scoring and credit risk assessment are classified as high risk under Annex III, point 5(b) of the AI Act. This means: documented risk management, dataset governance, bias auditing, human oversight and explainability.
In practice: if you have an ML model that approves or denies credit, you need to document how it was trained, with what data, what bias was measured and have a human with real review authority. The same applies to fraud detection when it triggers automatic actions (blocking accounts, denying transactions).
Action: review all scoring/risk models and build a documentation trail before August 2026.
AI-generated content (text, images, videos) must be clearly identified as synthetic. Article 50 is direct: if you generate content with AI that could be confused with human content, you must label it.
In practice: the deepfake-style video? AI-written text published to inform the public? Those carry explicit disclosure obligations, and providers must mark synthetic outputs in a machine-readable way. That LinkedIn post generated by ChatGPT or the campaign with a Midjourney image? Labelling may not always be strictly mandated for the deployer, but it's the safe reading of Article 50 - and your credibility depends on it.
Content personalisation (recommendations, targeting) is generally minimal or limited risk, but watch out: if your recommendations affect access to essential services or use deep behavioural profiling, the risk level goes up.
Action: create an internal labelling policy for synthetic content and review your content pipeline.
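An internal labelling policy can start as a helper that every publishing pipeline calls before content ships. The sketch below is illustrative only - the GeneratedAsset type and the wording of the disclosure label are my assumptions, not anything the Regulation prescribes:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GeneratedAsset:
    """A piece of content produced (fully or partly) by an AI system."""
    body: str
    model: str  # e.g. "gpt-4o", "midjourney-v6" - whatever produced it
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def label_synthetic(asset: GeneratedAsset) -> str:
    """Append a human-readable disclosure so readers know the content
    is AI-generated, in the spirit of Article 50 transparency."""
    return f"{asset.body}\n\n[AI-generated content - produced with {asset.model}]"

post = GeneratedAsset(
    body="Five lessons from our Q3 data platform migration.",
    model="gpt-4o",
)
print(label_synthetic(post))
```

The point isn't the ten lines of code - it's that labelling becomes a default step in the pipeline, not a per-post judgment call.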
Recruitment, CV screening, performance evaluation and promotion decisions with AI are high risk (Annex III, point 4). This is probably the area with the most direct day-to-day impact.
In practice: that CV screening tool the recruitment team uses? Candidate scoring? Sentiment analysis in video interviews? All high risk. You need:
- Documentation of how the system works
- Bias auditing (gender, age, ethnicity, disability)
- Real human oversight (not rubber-stamping)
- Informing the candidate that AI is being used
And here’s a point few people mention: productivity monitoring with AI (like activity tracking tools) can also fall under high risk, depending on how it’s used and what decisions it influences.
Action: map every "people analytics" and AI tool in HR. Talk to legal and vendors.
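To make "bias auditing" concrete, here is a minimal sketch of a single fairness check - the demographic parity gap between two groups' selection rates. The numbers are synthetic, and a real audit covers far more (age, disability, intersectional groups, statistical significance):

```python
# Minimal fairness check for a CV-screening model: demographic parity gap.
# Illustrative only - thresholds and data are assumptions, not legal standards.

def selection_rate(decisions: list[int]) -> float:
    """Share of positive decisions (1 = advanced to interview, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def demographic_parity_diff(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap in positive-decision rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Synthetic screening outcomes for two candidate groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [1, 0, 0, 0, 1, 0, 0, 0]

gap = demographic_parity_diff(group_a, group_b)
print(f"selection-rate gap: {gap:.2f}")
```

If a gap like this shows up, the documentation trail should record it, the investigation that followed, and who decided what to do about it - that is what "bias auditing with human oversight" looks like in evidence form.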
Chatbots and virtual assistants have a transparency obligation: the user needs to know they're interacting with an AI system (Article 50). Simple, but many companies ignore it.
E-commerce recommendation systems are generally minimal risk. But if your product uses AI for health, education, access to public services or justice, you’re likely in high risk territory.
For product managers, the main impact is on process: every AI feature needs a risk assessment before going to production. This isn’t bureaucracy - it’s due diligence that protects the product and users.
Action: include "risk classification" in your AI feature discovery/delivery framework.
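A first-pass triage can even live in code next to the backlog. The sketch below is a deliberately crude heuristic - the domain list and tiers are my own shorthand, not the Act's official taxonomy, and the real Annex III classification always needs legal review:

```python
# Hypothetical first-pass AI risk triage for a product backlog.
# Domains and tier names are illustrative assumptions, not the Act's taxonomy.

HIGH_RISK_DOMAINS = {
    "credit", "employment", "education", "health",
    "essential_services", "justice",
}

def classify_feature(domain: str,
                     interacts_with_users: bool,
                     generates_content: bool) -> str:
    """Rough triage only; an Annex III assessment needs legal review."""
    if domain in HIGH_RISK_DOMAINS:
        return "high"
    if interacts_with_users or generates_content:
        return "limited"  # Article 50 transparency duties likely apply
    return "minimal"

print(classify_feature("employment", False, False))  # CV-screening feature
```

Even a heuristic like this forces the right conversation at discovery time, instead of after the feature ships.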
If you use general-purpose models (GPT, Claude, Gemini, Llama) internally or as part of your product, the new GPAI rules (Chapter V, applicable from August 2025) directly impact you.
For providers: technical documentation, safety testing, copyright policy on training data. For those who deploy these models: you need to understand what goes in and out and be able to respond to audits.
In practice: the engineering team needs to maintain inference logs, have control over prompt templates and document how models are used in each context.
Action: create a registry of all AI models in use (purchased, open-source, APIs), with risk classification and technical owner.
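A model registry doesn't need heavy tooling to start. This sketch shows the minimum shape - every entry has a source, a risk class and a named technical owner (field names and labels are my own, not mandated by the Act):

```python
import csv
import io
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str        # e.g. "claude-3-5-sonnet", "in-house churn model"
    source: str      # "api" | "open-source" | "purchased" | "in-house"
    risk_class: str  # "minimal" | "limited" | "high" | "gpai"
    owner: str       # technical owner accountable for this entry

def export_registry(records: list[ModelRecord]) -> str:
    """Render the registry as CSV - good enough to start an audit trail."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["name", "source", "risk_class", "owner"])
    for r in records:
        writer.writerow([r.name, r.source, r.risk_class, r.owner])
    return buf.getvalue()

registry = [
    ModelRecord("gpt-4o (API)", "api", "gpai", "platform-team"),
    ModelRecord("cv-screening-v2", "purchased", "high", "hr-tech-lead"),
]
print(export_registry(registry))
```

Start with a spreadsheet-shaped export like this; migrate to proper tooling once the inventory is real and someone owns keeping it current.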
For Risk and Compliance, the AI Act isn’t "just another regulation". It’s a paradigm shift: for the first time, you need AI-specific governance, not just data governance.
This means:
- AI inventory: know what exists, who owns it, what the risk is
- Vendor governance: demand documentation from AI vendors (and don't accept "black boxes")
- Incident response: have a clear process for when an AI system goes wrong
- Audit trail: keep compliance evidence ready for supervision
The AI Act creates national supervisory authorities and the European AI Office, which will monitor GPAI. Fines can reach €35 million or 7% of global turnover for violations of prohibited practices.
Action: start the AI inventory now. Define ownership. Align with the DPO (GDPR) and start building the bridge between data protection and AI governance.
My take
The AI Act isn’t just about compliance. It forces a mature question: how do we ensure AI increases human capability without increasing risk, inequality or distrust?
If you lead Data/AI, Product, HR, Risk/Compliance or Operations, the initial playbook I recommend is:
- Inventory: map AI uses (including shadow AI - that GPT someone uses in Chrome)
- Risk classification: what’s minimal/limited/high-risk and why
- AI literacy by function: leadership, product, legal, HR, engineering - each needs to know the basics
- Vendor governance: what to demand from suppliers and how to document (especially for GPAI)
- Evidence + incident response: minimum audit trails and correction procedures
Need help with AI governance?
I help companies turn the AI Act from a regulatory obligation into a competitive advantage - with clarity, without unnecessary bureaucracy.
Schedule a conversation