Responsible AI

Building trust in AI from the ground up: How you can secure the data behind it

  • Insight
  • 6 minute read
  • March 26, 2026
AI is transforming industries, but how prepared is your data to support it?
Keith Power

Partner, PwC Ireland (Republic of)

Jonathan Hayes

Director, PwC Ireland (Republic of)

Many industry leaders already understand that data is foundational for an enterprise to function effectively. Yet, as data volumes grow and regulatory requirements evolve, companies can struggle to maintain visibility and control. In this environment, data risk becomes an elevated priority.

Managing data risk effectively isn’t just about reducing vulnerabilities; it’s about building trust, improving decision-making, and keeping AI-driven processes aligned with regulatory requirements. With the right governance and security measures in place, chief information security officers (CISOs) can unlock AI’s full potential while helping decrease risks.

AI raises the stakes

As AI adoption accelerates, is your organisation keeping pace with the data risks? These risks are not just new challenges; they exacerbate existing weaknesses in data security and governance. AI adoption brings those underlying weaknesses to the surface, leading organisations to question the integrity of their AI models because the data cannot be trusted.

As AI-powered tools become more embedded in daily operations, security, data and information leaders now face two important priorities:

  1. Training AI models on high-quality data. Without sufficient oversight, AI systems can reinforce bias, misinterpret information, or generate outputs that are difficult to rely on.
  2. Applying governance and security controls to AI-powered workflows. As AI interacts with sensitive business data, organisations should enforce policies that can prevent compliance failures, security breaches, and unauthorised access.

Data estate gaps are amplifying third‑party risk

Recent research highlights the scale of data-related risk facing Irish organisations, particularly in relation to third-party exposure.

In our 2026 PwC Digital Trust Insights Survey, over one-third (38%) of Irish respondents reported a data breach costing their organisation over €500,000, with third‑party breaches now the number one cyber threat for Irish respondents (48%).

Software supply chain compromise is also recognised as a significant threat, with 28% citing it as a top concern.

As organisations rely more heavily on external platforms, suppliers, and software components, AI will inevitably inherit the risks present in those ecosystems. Without effective governance, AI can amplify third-party vulnerabilities rather than mitigate them.

Compounding the risk, many organisations still struggle to get control of their data estate. Unstructured content, legacy systems, and inconsistent access controls make it harder to enforce effective governance and security.

Data risks that could undermine your AI

How much can you trust the data powering your AI? If data is inaccurate, unprotected or exposed to security gaps, AI models can generate misleading insights, introduce compliance risks, and compromise sensitive information. Before organisations can unlock AI’s potential, they should first mitigate key data risks, including:

  • Data quality risks: AI models trained on unstructured, redundant, or outdated content can generate unreliable outputs, leading to poor decision-making.
  • Data protection risks: Uncontrolled access permissions, coupled with AI tools, can expose sensitive data to unauthorised users, increasing security vulnerabilities.
  • Data compliance risks: Without proper classification and oversight, AI-driven automation may process sensitive data in ways that violate privacy regulations.
  • Data exposure risks: AI tools with insufficient access controls can increase the risk of insider threats, unauthorised data sharing, and leakage of sensitive information.

Without addressing these foundational risks, AI-driven tools may introduce more uncertainty than innovation. Many organisations need a clearer strategy for governing the data that fuels AI, or they risk AI working against them instead of for them.

Turning risk into resilience

How can organisations embrace AI and manage data risk? Technology plays an integral role in mitigating risks and strengthening AI readiness. The proper tools can help organisations improve data visibility, enforce security policies, and maintain compliance without slowing down progress. A strong data governance and security framework enables AI models to operate on more precise, trusted data while helping reduce exposure to breaches and regulatory failures.

By addressing risks before scaling AI adoption, organisations can deploy AI models that drive growth while maintaining security and trust.

Strengthening AI governance

One of the biggest challenges organisations face when adopting AI is content sprawl — unstructured, redundant, and outdated information scattered across platforms. Without clear oversight, AI systems may surface or process content that is outdated, irrelevant, or sensitive.

“AI is only as trustworthy as the data and access behind it. Govern your data estate and third‑party software with classification, least‑privilege, and continuous monitoring to keep models compliant and outputs reliable.”

Keith Power, Partner and Responsible AI Leader

Strengthen AI data security and governance

As AI continues to scale, organisations will increasingly be differentiated not by how quickly they adopt new technologies, but by how effectively they manage the risks that come with them.

Taking the following steps can help build a strong foundation for AI-driven innovation while decreasing exposure to threats and regulatory concerns:

  • Improve compliance and governance controls for AI workflows: Classify, label, and protect sensitive data, so AI models interact with governed and compliant information.
  • Reduce data exposure and access sprawl: Enforce least-privilege access, monitor content usage, and secure AI-accessible data.
  • Establish a trusted data foundation for AI models: Map, inventory, and clean up unstructured data, so AI models are trained on precise and authoritative sources.
  • Enhance AI security and threat response: Centralise monitoring and accelerate detection and mitigation of AI-related security risks.
  • Adopt a phased approach to AI governance: Start with targeted governance policies, train teams on Responsible AI use, and refine security controls before scaling AI adoption.
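The classification and least-privilege principles above can be sketched in code. The following is a minimal, hypothetical illustration — the labels, roles, and function names are assumptions for the sketch, not any specific product's API:

```python
from dataclasses import dataclass

# Hypothetical sensitivity labels, ordered from least to most restricted.
SENSITIVITY_RANK = {"public": 0, "internal": 1, "confidential": 2}

@dataclass
class Document:
    name: str
    label: str  # classification label applied during data governance

@dataclass
class Requester:
    role: str
    clearance: str  # highest sensitivity level this role may access

def ai_may_access(doc: Document, requester: Requester) -> bool:
    """Least-privilege check: an AI workflow acting on behalf of a
    requester may only read documents at or below that requester's
    clearance, so sensitive content never reaches the model."""
    return SENSITIVITY_RANK[doc.label] <= SENSITIVITY_RANK[requester.clearance]

# Example: an AI assistant acting for an internal-clearance analyst.
analyst = Requester(role="analyst", clearance="internal")
print(ai_may_access(Document("handbook.pdf", "public"), analyst))        # True
print(ai_may_access(Document("payroll.xlsx", "confidential"), analyst))  # False
```

In practice, these labels would come from an enterprise classification tool and the check would sit in front of whatever retrieval layer feeds the AI model; the point of the sketch is that access decisions are made against governed labels, not against the model's own judgement.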

By proactively addressing these areas, organisations can confidently deploy AI at scale, while maintaining the trust that adoption and transformation ultimately depend on.

We’re here to help

At PwC, we bring cyber, data governance, and Responsible AI expertise — paired with hands-on knowledge of the most advanced AI-powered tools — to reduce exposure and improve decision-making. We start with a clear view of your data estate and third-party risk, then design pragmatic controls and the operating model to sustain them. You get auditable governance aligned to EU regulations, faster incident response, and AI outcomes your organisation can stand behind. Contact us today.

Responsible AI

Harness AI’s limitless potential while managing risks.

PwC’s Global Digital Trust Insights Survey 2026

How Irish firms are adapting to new cybersecurity threats.

Contact us

Keith Power

Partner, PwC Ireland (Republic of)

Tel: +353 86 824 6993

David Lee

Chief Technology Officer, PwC Ireland (Republic of)

Jonathan Hayes

Director, PwC Ireland (Republic of)

Tel: +353 86 853 5234

James Scott

Director, PwC Ireland (Republic of)

Tel: +353 87 144 1818

Follow PwC Ireland