PwC’s latest Global Digital Trust Insights Survey highlights three urgent priorities for Irish organisations: strengthening cybersecurity, adapting to the rapid rise of AI, and managing third-party risk. As AI becomes part of everyday operations, organisations need governance and security frameworks that protect data, uphold ethical standards, and anticipate new threats. In this insight, we explore how strong AI governance and security can help organisations build trust, stay resilient, and harness the full potential of intelligent technologies.
AI adoption is moving fast. Success depends on more than technical capability; it relies on sound governance, strong security, and effective change management. These three areas must work together if organisations are to realise the full value of AI while managing risk and integrating it responsibly into existing operations.
AI is reshaping cybersecurity, offering new ways to detect, respond to, and prevent threats. Yet PwC’s survey shows that more than half (52%) of Irish respondents see an unclear risk appetite as the biggest barrier to adopting AI — significantly higher than global and Western European averages. This points to a wider challenge around risk ownership and governance.
A defined risk appetite is essential for sound decision-making. Without clarity, innovation slows and vulnerability increases. Organisations need to set and communicate a clear position on AI risk — one that balances opportunity with accountability. This means setting limits for AI decision-making, embedding risk checks at each stage of the AI lifecycle, and ensuring collaboration across cybersecurity, compliance, and leadership teams. Once risk tolerance is understood, organisations can move from hesitation to confident, well-managed adoption.
Governance remains complex, with ongoing questions around accountability, oversight, and compliance. Strong AI governance requires clear responsibility for monitoring systems, assessing performance, and meeting ethical and regulatory standards.
The EU AI Act makes this even more important. It introduces a harmonised legal framework across the EU, classifying AI systems by risk level and setting detailed rules for high-risk applications. Establishing governance committees that include cybersecurity, IT, legal, and risk leaders helps ensure AI systems meet these requirements.
Transparent processes, such as explainable AI decisions and reliable audit trails, build trust and support compliance. Strengthening governance in line with EU expectations will help Irish organisations clarify risk ownership, reduce legal and reputational exposure, and manage AI initiatives in a structured, responsible way.
As AI becomes more common across operations, the cybersecurity landscape is changing faster than many organisations can adapt. AI brings major benefits but also introduces new and often unfamiliar risks. The most pressing are the expansion of the attack surface, exposure of sensitive data, and the shortage of AI-security skills.
Our Digital Trust Insights Survey shows that 67% of global security executives believe generative AI has increased their organisation’s cyber-attack surface. Integrating AI into business processes, from customer chatbots to automated coding and decision-support systems, increases the number of potential entry points for attackers.
AI models are vulnerable to new forms of attack such as prompt injection, model inversion, and adversarial manipulation, which target the model’s logic directly. Without tailored monitoring and threat detection, these vulnerabilities can go unnoticed until a breach occurs. Continuous testing and model monitoring are now essential parts of secure AI deployment.
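To make one of these risks concrete, the sketch below shows a minimal input guardrail that screens prompts for common injection phrasing before they reach a model. The pattern list and function name are illustrative assumptions, not a recommended control on their own; real deployments layer classifiers, output filtering, and least-privilege tool access on top of checks like this.

```python
import re

# Illustrative, simplified patterns associated with prompt-injection attempts.
# A keyword list alone is easy to evade; this only sketches the idea of
# screening inputs before they reach the model.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard .* system prompt",
    r"reveal .*(system prompt|hidden instructions)",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe to forward to the model."""
    lowered = prompt.lower()
    return not any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A guardrail like this would typically run alongside logging and alerting, so that blocked prompts feed back into the continuous testing and monitoring described above.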
AI tools often require access to large volumes of business and personal data. If that data is not properly labelled, classified, and protected, there is a high risk of accidental exposure. For example, a generative AI tool connected to internal knowledge bases may reveal confidential HR, financial, or client information in response to everyday prompts.
Another risk arises when employees use public or third-party AI tools without clear guidance. When staff input confidential material into systems that store or train on that data, they can inadvertently expose sensitive content. Without clear policies and secure environments, organisations face serious data-protection and compliance challenges, especially when AI providers are external or lack enterprise-grade safeguards.
AI security is still a developing discipline. Many organisations do not yet have professionals who combine cybersecurity expertise with an understanding of AI threat models. This limits their ability to perform thorough risk assessments, manage secure deployment, and respond effectively to incidents.
Encouragingly, 54% of Irish respondents to our survey say they are investing in upskilling and reskilling teams to meet this need. Building AI security capability within existing functions is an important step towards sustainable, long-term resilience.
Adopting AI securely and responsibly takes more than technical preparation. It requires a clear, deliberate approach to change management. As AI becomes part of everyday operations, organisations must address not only new security risks but also the leadership and people factors that determine whether adoption succeeds or stalls.
One of the biggest barriers to progress is hesitation at the top. Our survey shows that 42% of Irish organisations report uncertainty among senior leaders, higher than both global (34%) and regional (33%) averages. Many executives remain unsure how to evaluate AI risk, identify strong use cases, or align initiatives with business strategy.
This lack of clarity often leads to slow decision-making, fragmented projects, and underinvestment in the security foundations needed to scale AI safely.
Trust is another major challenge, both in the technology and in how it’s used. Employees are often cautious about AI if they don’t understand how decisions are made, or if they worry about surveillance or job impact. Concerns around transparency, fairness, and unintended outcomes continue to slow adoption.
Building understanding through open communication, clear policies, and visible leadership support is essential. Without it, even well-intentioned AI programmes risk losing momentum before they deliver value.
To strengthen AI adoption, organisations need to bring governance, security, and culture together under one strategy. The following steps offer a practical roadmap for building resilience, trust, and responsible use of AI at scale:
1. Create a cross-functional AI governance framework
Bring together leaders from technology, risk, privacy, legal, compliance, and the business. Define oversight committees and escalation routes to manage risk across the AI lifecycle.
2. Integrate AI risk management into enterprise processes
AI risks should sit within existing risk frameworks, not beside them. Include model bias, drift, and adversarial threats within broader information-security and data-protection programmes. This approach keeps risk identification, assessment, and mitigation consistent across the organisation and makes ownership and accountability clear.
3. Strengthen data security and privacy controls
Because AI relies on large volumes of sensitive data, strong data governance is critical. Apply data-minimisation, classification, encryption, and access controls across the lifecycle. Build privacy and security checks into AI development from the start to ensure compliance with GDPR, the EU AI Act, and other regulatory standards.
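The data-minimisation idea in step 3 can be sketched in a few lines: classify each field, then pass only those cleared for AI processing to the tool. The classification labels and the `minimise` helper are hypothetical illustrations of the principle, not a production data-governance control.

```python
from dataclasses import dataclass

# Hypothetical classification levels cleared for AI processing. Anything
# outside this set (e.g. "confidential") is dropped before the data leaves
# the governed environment.
ALLOWED_FOR_AI = {"public", "internal"}

@dataclass
class Field:
    name: str
    value: str
    classification: str  # e.g. "public", "internal", "confidential"

def minimise(record: list[Field]) -> dict[str, str]:
    """Keep only fields whose classification permits AI processing."""
    return {f.name: f.value for f in record if f.classification in ALLOWED_FOR_AI}
```

Applying minimisation at the boundary like this keeps the AI tool's view of the data consistent with GDPR's data-minimisation principle, whatever the tool itself does downstream.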
4. Apply security controls designed for AI systems
Security measures for AI should align with enterprise cybersecurity policies. Continuous validation, monitoring, and testing of AI models reduce exposure and support compliance. Taking this integrated approach helps protect information assets while maintaining confidence in AI-driven processes.
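Continuous validation of a deployed model can be as simple as comparing rolling performance against the baseline agreed at sign-off. The sketch below assumes a baseline accuracy and tolerance that an organisation would set from its own risk appetite; both figures here are placeholders.

```python
from statistics import mean

# Assumed figures: the baseline comes from initial validation, and the
# tolerance reflects the organisation's agreed risk appetite.
BASELINE_ACCURACY = 0.92
TOLERANCE = 0.05

def needs_review(recent_outcomes: list[bool]) -> bool:
    """True when rolling accuracy drifts below the agreed floor."""
    accuracy = mean(1.0 if ok else 0.0 for ok in recent_outcomes)
    return accuracy < BASELINE_ACCURACY - TOLERANCE
```

A check like this would normally run on a schedule, with breaches escalated through the governance committee routes described in step 1.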
5. Build awareness and accountability around responsible AI
Responsible use of AI depends on people understanding its impact. Provide training on fairness, accountability, and transparency, and make these expectations part of performance goals. Regular discussions and workshops help teams stay alert to new risks and regulation while promoting confident, informed use of AI.
At PwC Ireland, we bring together expertise in AI strategy, data risk, privacy, and cybersecurity to help organisations adopt AI safely and responsibly. Whether you’re refining your AI strategy, securing deployed systems, or building trust in how data is used, we’re here to help. Contact us today.