Artificial intelligence (AI) now underpins everything from trade execution to client engagement. The EU AI Act, the world’s first full-spectrum law on the technology, applies in stages to any organisation that develops, buys or deploys AI within the European Union. In asset and wealth management (AWM), high-risk systems face evidence-heavy obligations and tight enforcement timelines: staff must have reached basic AI literacy by 2 February 2025, and high-risk controls go live on 2 August 2026. Firms that act early can turn compliance work into a differentiator.
The Act applies to any organisation that provides, deploys, imports or distributes an AI system in the EU. Asset managers often occupy several of those roles at once, for example when they develop a model in-house yet embed a vendor tool inside a trading platform. The law classifies systems in four risk tiers — minimal, limited, high and unacceptable — based on their potential impact on people or markets. High-risk systems must meet the toughest demands for data governance, human oversight, technical documentation and continuous monitoring.
Common high-risk use cases in AWM include algorithms that initiate or execute trades; models that drive portfolio allocation or rebalancing; recommendation engines that match products to clients; and tools that generate risk reports relied on for regulatory filings.
Start by confirming that each tool meets the EU AI Act’s broad definition of AI. Drawing on the Act’s Article 3(1) definition, ask four questions of every process:
Does the system operate with some degree of autonomy?
Does it infer from the inputs it receives how to generate outputs?
Are those outputs predictions, content, recommendations or decisions that can influence physical or virtual environments?
Does it exhibit adaptiveness after deployment?
With guidance still emerging, firms must exercise judgement and monitor updates from the EU AI Office.
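As an illustration, the screening questions can be captured in a simple checklist that feeds the AI inventory. This is a minimal sketch assuming a Python-based workflow; the field names and the likely_in_scope helper are illustrative choices, not terms from the Act.

```python
from dataclasses import dataclass

@dataclass
class DefinitionScreen:
    """Screening answers for one tool, mirroring the elements of the Act's
    Article 3(1) definition. Field names are illustrative, not from the Act."""
    operates_with_autonomy: bool         # runs with some independence from human input
    infers_outputs_from_inputs: bool     # derives outputs rather than only following fixed rules
    outputs_influence_environment: bool  # predictions/recommendations/decisions affect the real or virtual world
    adapts_after_deployment: bool        # optional in the definition; strengthens the case but does not decide it

    def likely_in_scope(self) -> bool:
        # The first three elements are core to the definition; adaptiveness may be absent.
        return (self.operates_with_autonomy
                and self.infers_outputs_from_inputs
                and self.outputs_influence_environment)

# Example: a vendor rebalancing model embedded in a trading platform (hypothetical)
screen = DefinitionScreen(True, True, True, adapts_after_deployment=False)
print(screen.likely_in_scope())  # True -> record it in the AI inventory for classification
```

A "no" from the helper is a prompt for human review, not a final answer: as noted above, guidance is still emerging and judgement remains essential.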
PwC research shows that 82% of AWM firms list regulatory compliance and bias management as top priorities this year. Four actions create a solid foundation:
Build and maintain a complete AI inventory;
Classify every system by risk tier and record the firm’s role;
Embed a Responsible AI framework covering transparency, data governance and human oversight; and
Produce documentation and audit trails so evidence is generated automatically, not retrofitted.
Many firms adopted AI on a piecemeal basis, leaving legacy models without the documentation the EU AI Act demands. With high-risk obligations taking effect on 2 August 2026, any delay now will compress delivery windows and increase disruption.
Handled well, compliance can reinforce client trust and differentiate an asset manager’s brand. Treating the legislation as a catalyst rather than a constraint lets firms embed robust controls, improve data quality and launch innovative, data-driven services that set new standards for performance and transparency.
An accurate central inventory is the cornerstone of any compliance programme. Catalogue every model and AI-enabled feature, even those in vendor software. For each system capture its purpose, location (cloud or on-premises), data sources, owner, lifecycle stage and any external providers. Annotate whether the firm is a provider, deployer, importer or distributor under the EU AI Act. Link each entry to your risk register and controls library so gaps surface automatically. Update the inventory through automated discovery scans and a change-management feed. A clear inventory informs budget decisions, guides regulator conversations and prevents surprises during audits. It also becomes the source of truth for training plans, vendor negotiations and model retirement. Publish a dashboard that visualises coverage and risk trends for senior management in real time.
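As a minimal sketch of what one inventory entry might look like, assuming a Python-based register: the field names, enums, find_gaps helper and the example system are illustrative assumptions, not structures prescribed by the Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

class ActRole(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class InventoryEntry:
    """One AI system in the central register; fields mirror the paragraph above."""
    name: str
    purpose: str
    location: str                    # "cloud" or "on-premises"
    data_sources: list[str]
    owner: str
    lifecycle_stage: str             # e.g. "development", "production", "retired"
    external_providers: list[str]
    roles: list[ActRole]             # a firm can hold several roles at once
    risk_tier: RiskTier
    linked_controls: list[str] = field(default_factory=list)  # IDs in the controls library

def find_gaps(register: list[InventoryEntry]) -> list[InventoryEntry]:
    # Surface high-risk systems with no linked controls so gaps appear automatically.
    return [e for e in register
            if e.risk_tier is RiskTier.HIGH and not e.linked_controls]

rebalancer = InventoryEntry(
    name="PortfolioRebalancer",      # hypothetical system
    purpose="drives portfolio allocation and rebalancing",
    location="cloud",
    data_sources=["market data feed", "client mandates"],
    owner="Head of Quant Research",
    lifecycle_stage="production",
    external_providers=["vendor trading platform"],
    roles=[ActRole.PROVIDER, ActRole.DEPLOYER],
    risk_tier=RiskTier.HIGH,
)
print([e.name for e in find_gaps([rebalancer])])  # ['PortfolioRebalancer']
```

Keeping the register as structured data rather than a spreadsheet is what lets the dashboard, discovery scans and controls-library links described above stay in sync automatically.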
Once the inventory is stable, compare current processes against each obligation in the legislation. Map policies, control tests, model-validation artefacts and documentation to the EU AI Act’s articles, noting where evidence is missing or outdated. Engage legal, compliance, risk, data and technology teams so the review captures both policy and practice. Rank the gaps by regulatory impact and fix complexity, then build a phased remediation roadmap that aligns with milestone dates — in particular, high-risk controls by August 2026. Quick wins might include adding version control to training data or tightening access management. Complex fixes, such as redesigning a high-risk portfolio-construction model, need budget, senior sponsorship and vendor collaboration. Document ownership for every remediation task.
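To illustrate the prioritisation step, the sketch below ranks gaps by regulatory impact and fix complexity. The 1-5 scoring scale and the example entries are assumptions for illustration; the article references correspond to the Act's data-governance (Art. 10), record-keeping (Art. 12) and human-oversight (Art. 14) obligations.

```python
from dataclasses import dataclass

@dataclass
class Gap:
    """One missing or outdated piece of evidence against an Act obligation.
    Scores are illustrative 1-5 scales, not a methodology from the Act."""
    article: str      # obligation the gap maps to, e.g. "Art. 10 data governance"
    description: str
    impact: int       # regulatory impact, 1 (low) to 5 (high)
    complexity: int   # fix complexity, 1 (quick win) to 5 (major programme)
    owner: str        # every remediation task needs a named owner

def roadmap(gaps: list[Gap]) -> list[Gap]:
    # Highest regulatory impact first; among equals, quick wins before complex fixes.
    return sorted(gaps, key=lambda g: (-g.impact, g.complexity))

gaps = [
    Gap("Art. 10", "no version control on training data", impact=4, complexity=1, owner="Data Eng"),
    Gap("Art. 14", "no human-oversight checkpoints in rebalancing model", impact=5, complexity=4, owner="Quant Risk"),
    Gap("Art. 12", "access logs retained for 30 days only", impact=3, complexity=2, owner="IT Ops"),
]
for g in roadmap(gaps):
    print(g.article, g.description, sep=": ")
```

The ordering is deliberately simple; in practice firms may weight milestone dates, budget cycles and vendor dependencies into the sort key.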
Compliance alone will not guarantee safe outcomes or client trust. Embed a firm-wide framework that blends regulatory requirements with broader responsible use principles. Start by setting board-approved AI values — transparency, fairness, accountability and security — and translate them into practical standards for data, model design, monitoring and incident response. Require explainability for critical decisions and mandate human-in-the-loop checkpoints that can pause or roll back models. Integrate privacy-by-design and strong cyber controls so AI benefits are not undermined by data leaks or manipulation. Provide tiered training that equips quants, business users and senior executives to spot and escalate issues quickly. Finally, review the framework annually against new guidance from global regulators and emerging industry good practice. Publish outcomes to investors to reinforce accountability.
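As one way to express the pause-and-roll-back checkpoint in code, a minimal Python sketch follows; the class, reviewer and action names are invented for illustration and make no claim about any specific platform.

```python
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    PAUSE = "pause"
    ROLL_BACK = "roll_back"

class HumanInTheLoopGate:
    """Minimal checkpoint: critical model actions wait for a human decision.
    An illustrative sketch, not a production control."""

    def __init__(self, reviewer: str):
        self.reviewer = reviewer
        self.log: list[tuple[str, Decision]] = []  # audit trail for each reviewed action

    def review(self, action: str, explanation: str, decision: Decision) -> bool:
        # Explainability first: the reviewer sees why the model proposed the action.
        print(f"{self.reviewer} reviewed '{action}' ({explanation}): {decision.value}")
        self.log.append((action, decision))
        if decision is Decision.ROLL_BACK:
            # Hook for reverting to the last approved model version.
            print("reverting to previous model version")
        return decision is Decision.APPROVE

gate = HumanInTheLoopGate(reviewer="portfolio oversight desk")
approved = gate.review(
    action="rebalance client portfolio X",  # hypothetical action
    explanation="drift above 5% tolerance on equity sleeve",
    decision=Decision.PAUSE,
)
if not approved:
    print("action held pending human sign-off")
```

The gate's log doubles as the kind of automatically generated audit evidence called for earlier, so oversight and documentation come from the same control.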
PwC has multidisciplinary teams that combine regulatory insight, data science and operational experience across the AWM sector. We can help you map your AI estate, design proportionate controls, train staff and engage confidently with supervisors. Whether you need a rapid readiness diagnostic or end-to-end programme delivery, our specialists work alongside your own teams to keep disruption low and value high. Contact us to discuss how we can support your journey to responsible, compliant AI.