With the EU AI Act set to reshape the regulatory landscape for artificial intelligence, businesses operating in and around the European Union must prepare for profound change. Drawing on expert guidance from both EY and PwC, this overview highlights what organisations need to know and do to ensure compliance and foster responsible AI innovation.
Both EY and PwC emphasise the Act’s broad territorial reach. It applies not only to organisations based in the EU but also to those outside the bloc whose AI systems are used within the EU. PwC outlines a phased implementation schedule: the Act entered into force in August 2024, with prohibited AI systems banned within six months and the full requirements for high-risk systems taking effect over the following two to three years.
The Act takes a risk-based approach, grouping AI systems into four categories:
Prohibited: Systems that pose an “unacceptable risk”, such as those enabling subliminal manipulation or social scoring, are strictly forbidden. Both EY and PwC underscore that these bans are absolute, with no exceptions.
High Risk: AI used in critical infrastructure, law enforcement, healthcare, or financial services falls into this category. EY notes that such systems require a stringent conformity assessment, while PwC outlines additional obligations: risk management, data governance, technical documentation, transparency, and robust human oversight.
General Purpose AI (GPAI): PwC highlights new obligations for GPAI, especially those with systemic risk, including technical documentation, copyright compliance, and incident reporting.
Transparency Obligations: Limited-risk systems that interact with humans, such as chatbots, must disclose their AI nature, as both EY and PwC advise. (A minimal classification sketch follows this list.)
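To make the tiering concrete, here is a minimal Python sketch of how a compliance team might encode these categories. The keyword rules and use-case names are purely illustrative assumptions, not legal mappings, and the separate GPAI obligations are omitted; real classification requires legal review against Article 5 and Annex III of the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Tiers mirroring the Act's risk categories (GPAI is handled separately)."""
    PROHIBITED = "prohibited"   # unacceptable risk, banned outright
    HIGH = "high"               # conformity assessment and ongoing controls
    LIMITED = "limited"         # transparency obligations
    MINIMAL = "minimal"         # no specific obligations under the Act

# Hypothetical keyword rules, for illustration only.
_TIER_RULES = {
    "social scoring": RiskTier.PROHIBITED,
    "subliminal manipulation": RiskTier.PROHIBITED,
    "credit scoring": RiskTier.HIGH,
    "medical triage": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
}

def classify(use_case: str) -> RiskTier:
    """Map a use-case label to a tier, defaulting to minimal risk."""
    return _TIER_RULES.get(use_case.strip().lower(), RiskTier.MINIMAL)

assert classify("Credit scoring") is RiskTier.HIGH
```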
PwC provides a detailed roadmap for operationalising compliance. They recommend establishing an AI inventory and portfolio management system, ensuring every AI use case is registered, risk-assessed, and monitored throughout its lifecycle. This approach aligns with EY’s call for robust governance structures and clear assignment of responsibilities.
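Building on the RiskTier enum and classify function from the sketch above, a bare-bones inventory entry might look like the following. The field names and the rejection of prohibited systems at registration are assumptions about how such a register could work, not a design prescribed by PwC.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCase:
    """One entry in a hypothetical AI inventory (field names are assumptions)."""
    name: str
    owner: str                       # accountable business owner
    risk_tier: RiskTier              # reuses the enum from the previous sketch
    registered_on: date = field(default_factory=date.today)
    lifecycle_stage: str = "design"  # design -> deployment -> monitoring
    last_risk_review: date | None = None

inventory: list[AIUseCase] = []

def register(use_case: AIUseCase) -> None:
    """Add a use case to the register; prohibited systems are rejected outright."""
    if use_case.risk_tier is RiskTier.PROHIBITED:
        raise ValueError(f"{use_case.name} is prohibited under the Act")
    inventory.append(use_case)

register(AIUseCase(name="credit scoring model", owner="Risk & Finance",
                   risk_tier=classify("credit scoring")))
```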
Both firms stress the importance of:
Continuous Risk Assessment: Regularly identifying, assessing, and mitigating risks across the AI lifecycle.
Documentation and Traceability: Maintaining comprehensive records, technical documentation, and audit trails.
Training and Awareness: Ensuring all employees understand their responsibilities and the ethical use of AI.
The stakes are high. PwC reports fines of up to €35 million or 7% of global annual turnover, whichever is higher, for the most severe breaches. The message is clear: early and thorough preparation is essential.
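A quick worked example shows how that ceiling scales with company size under the "whichever is higher" rule:

```python
def max_fine_eur(global_turnover_eur: int) -> int:
    """Ceiling for the most severe breaches: EUR 35m or 7% of turnover, whichever is higher."""
    return max(35_000_000, global_turnover_eur * 7 // 100)

assert max_fine_eur(2_000_000_000) == 140_000_000  # large firm: the 7% cap dominates
assert max_fine_eur(100_000_000) == 35_000_000     # smaller firm: the fixed floor applies
```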
Drawing from both EY and PwC, here’s a checklist for businesses:
Map and Classify AI Systems: Inventory all AI applications and assess their risk category.
Establish Governance Frameworks: Set up clear roles, responsibilities, and oversight committees.
Implement Lifecycle Controls: From design through deployment and monitoring, ensure controls and documentation are in place (a monitoring sketch follows this checklist).
Train Staff: Foster a culture of responsible AI use and compliance.
Monitor Regulatory Developments: Stay updated as the Act evolves and further guidance is issued.
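As a usage example, the inventory sketched earlier makes the continuous-risk-assessment point easy to operationalise. The 90-day cadence below is an assumed internal policy, not a requirement of the Act:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # assumed internal cadence, not mandated by the Act

def overdue_reviews(today: date | None = None) -> list[AIUseCase]:
    """Return inventory entries whose risk review is missing or stale."""
    today = today or date.today()
    return [
        uc for uc in inventory
        if uc.last_risk_review is None
        or today - uc.last_risk_review > REVIEW_INTERVAL
    ]
```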
The EU AI Act marks a significant shift in the governance of AI, with far-reaching implications for businesses worldwide. By leveraging the insights and practical frameworks provided by EY and PwC, organisations can not only achieve compliance but also turn regulatory readiness into a driver of innovation and trust.
For further reading, consult the complete reports from EY and PwC.