WHY TAKE THIS COURSE
Hands-on EU AI Act training for teams building, buying, or deploying AI
This virtual, fast-paced session is built for product and PM teams, data/ML teams, legal/privacy/compliance, HR, CX/support operations, procurement/vendor managers, security/IT, and executives/risk owners working in iGaming and similarly data-intensive environments.
Participants learn what “counts” as an AI system, typical failure modes and harms (bias, automation bias, lack of transparency, drift, unintended secondary use), and how the EU AI Act’s risk-based architecture drives real obligations and controls.
The course is highly interactive (breakouts + practical classification exercises) so learners leave with a concrete “next steps” action list for AI literacy, governance, and compliance readiness.
This course includes:
- AI basics: what an AI system is (and is not) under the EU AI Act
- EU AI Act scope and roles (provider/deployer/distributor) and why accountability doesn’t “shift to the vendor”
- Risk tiers (unacceptable/high/limited/minimal) and how classification drives duties
- Hands-on: classify realistic iGaming-style use cases into risk categories
- Human oversight: models (in-the-loop / on-the-loop / in-command) and what oversight is not
- Transparency obligations: when and how to disclose AI interaction/impact
- Data governance + documentation essentials (lightweight “model/data cards”, logging, audit readiness)
- AI security and abuse/misuse scenarios (prompt injection, data leakage, manipulation)
- Incident handling expectations: detect → escalate → document; roles during incidents