Why Ignoring the EU AI Act Might Be Your Next Major Compliance Failure

4 min read

With the EU AI Act set to reshape the regulatory landscape for artificial intelligence, businesses operating in and around the European Union must prepare for profound change. Drawing on expert guidance from both EY and PwC, this overview highlights what organisations need to know and do to ensure compliance and foster responsible AI innovation.

The Scope and Timeline of the EU AI Act

Both EY and PwC emphasise the Act’s broad territorial reach. It applies not only to organisations based in the EU but also to those outside the bloc if their AI systems are used within the EU. PwC outlines a phased implementation schedule: the Act entered into force in August 2024, prohibitions on unacceptable-risk AI systems applied six months later (February 2025), and the full requirements for high-risk systems take effect over the following two to three years.

Risk-Based Approach and Classification

The Act classifies AI systems into four risk categories:

  • Prohibited: Systems that pose an “unacceptable risk”, such as those enabling subliminal manipulation or social scoring, are strictly forbidden. Both EY and PwC underscore that these bans are absolute, with no exceptions.

  • High Risk: AI used in critical infrastructure, law enforcement, healthcare, or financial services falls into this category. EY notes that such systems require a stringent conformity assessment, while PwC outlines additional obligations: risk management, data governance, technical documentation, transparency, and robust human oversight.

  • General Purpose AI (GPAI): PwC highlights new obligations for GPAI, especially those with systemic risk, including technical documentation, copyright compliance, and incident reporting.

  • Transparency Obligations: Systems that interact with humans must disclose their AI nature, as both EY and PwC advise.

Governance, Accountability, and Lifecycle Management

PwC provides a detailed roadmap for operationalising compliance. They recommend establishing an AI inventory and portfolio management system, ensuring every AI use case is registered, risk-assessed, and monitored throughout its lifecycle. This approach aligns with EY’s call for robust governance structures and a clear assignment of responsibilities.

Both firms stress the importance of:

  • Continuous Risk Assessment: Regularly identifying, assessing, and mitigating risks across the AI lifecycle.

  • Documentation and Traceability: Maintaining comprehensive records, technical documentation, and audit trails.

  • Training and Awareness: Ensuring all employees understand their responsibilities and the ethical use of AI.

Penalties and the Cost of Non-Compliance

The stakes are high. PwC reports fines of up to €35 million or 7% of global annual turnover, whichever is higher, for the most severe breaches. The message is clear: early and thorough preparation is essential.

Practical Steps for Organisations

Drawing from both EY and PwC, here’s a checklist for businesses:

  1. Map and Classify AI Systems: Inventory all AI applications and assess their risk category.

  2. Establish Governance Frameworks: Set up clear roles, responsibilities, and oversight committees.

  3. Implement Lifecycle Controls: From design to deployment and monitoring, ensure controls and documentation are in place.

  4. Train Staff: Foster a culture of responsible AI use and compliance.

  5. Monitor Regulatory Developments: Stay updated as the Act evolves and further guidance is issued.
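For organisations tracking progress against the five steps above, a simple readiness measure can help. This is a toy sketch with made-up step identifiers, not a formal maturity model.

```python
# Illustrative identifiers for the five checklist steps above.
CHECKLIST = [
    "map_and_classify",
    "governance_framework",
    "lifecycle_controls",
    "staff_training",
    "regulatory_monitoring",
]

def readiness(completed: set[str]) -> float:
    """Fraction of checklist steps an organisation has completed."""
    return len(completed & set(CHECKLIST)) / len(CHECKLIST)
```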

Final Thoughts

The EU AI Act marks a significant shift in the governance of AI, with far-reaching implications for businesses worldwide. By leveraging the insights and practical frameworks provided by EY and PwC, organisations can not only achieve compliance but also turn regulatory readiness into a driver of innovation and trust.

For further reading, consult the complete reports from EY and PwC.
