Responsible AI: from commitment to capability

  • Survey
  • May 05, 2026

Irish organisations have embraced responsible AI in principle, but fewer have built the governance and operating discipline needed to make it work at scale.

Listen to the full audio version


Responsible AI audio, full length

15:32

Listen to the audio summary 


Responsible AI audio summary

1:20

Why execution matters now

As Irish organisations move from AI experimentation towards broader adoption, the pressure is shifting from proving isolated use cases to building the foundations needed to scale with confidence. Responsible AI is now firmly on the agenda. Progress has been real but uneven: many organisations have moved beyond early experimentation, yet individual initiatives are advancing faster than the enterprise-wide governance needed to sustain them. As AI becomes more pervasive, the challenge is no longer awareness, but execution.

77%

have progressed beyond awareness into active responsible AI use.

70%

are only partially prepared for EU AI Act compliance.

65%

say responsible AI resourcing is not enough.

56%

expect AI governance models to change within the next year.

Adoption is widespread, but effective use remains inconsistent

The survey shows that responsible AI has moved firmly onto the organisational agenda in Ireland, even if enterprise-level maturity remains uneven. In practical terms, most organisations (77%) have moved past the initial policy stage and have commenced practical implementation. That is a significant shift from discussion to action.

The more revealing question is where organisations sit within that curve. The largest share of Ireland’s organisations is in the embedded stage, where governance practices and guidance exist but are not yet fully adopted across the enterprise. Only 19% of organisations say responsible AI is strategic — described in the survey as a recognised business priority with executive sponsorship. In the US, that figure is 28%. Ireland shows momentum, but fewer organisations have reached the point where responsible AI is anchored at leadership level and carried consistently through the operating model.

Responsible AI has therefore moved beyond awareness, but the harder test is whether organisations can sustain it in practice. Many have introduced policies, training, or governance structures. Fewer have fully integrated responsible AI into how systems are designed, procured, deployed and monitored. That’s the difference between adoption and maturity, and it’s arguably where the next phase of progress will be won or lost.

Adoption is established, but maturity remains uneven

Question: How would you describe where your organization is in the process of adopting responsible AI and AI governance practices?

Source: PwC’s 2026 Ireland Responsible AI Survey

The real gap is execution

77%

of Irish organisations are using responsible AI, but execution remains inconsistent.

If adoption is becoming normalised, execution remains uneven, and the gap is even clearer in comparison with the US. The defining gap is not intent but execution. Irish organisations lag US peers on the basic mechanics of responsible AI: clear ownership, consistent standards, and visibility into how AI is used in practice. The survey illustrates this clearly, with Irish organisations much more likely to rate their governance practices as “somewhat effective” than “very effective”. Only 33% of Irish respondents say they are very effective at applying a risk-based approach to AI governance, their highest score across all governance areas, compared with 47% in the US. After that, confidence drops further: 30% say they are very effective at tracking and inventorying AI use cases (US: 45%), 28% at clear roles and accountability (US: 52%), and 28% at embedding responsible AI into risk, privacy and security processes (US: 48%). Lower scores are recorded for defining and communicating priorities (21% vs 52%), development and deployment standards (16% vs 52%), observability and monitoring (16% vs 45%), and employee training and awareness (14% vs 49%).

The barriers help explain why. The largest challenge by a wide margin is difficulty translating responsible AI principles into scaled, operational processes, selected by 77% of respondents. After that comes lack of clarity on ownership (37%), lack of tools or technical enablers (30%), limited budget or resources (28%), and cultural resistance to change (28%). The problem is not simply whether organisations understand the principles; it is whether those principles have been turned into routines that teams can apply consistently.

Ownership patterns reinforce the point. Primary responsibility most often sits with data/AI teams (28%) or shared cross-functional models (28%), followed by IT/engineering (21%) and legal/compliance (12%). No respondents identify business units as the main owner. That suggests responsible AI is still concentrated in specialist or enabling functions rather than fully embedded as a business-wide discipline. The next gains are therefore likely to come less from new principles and more from clearer ownership, stronger governance routines and better support for teams expected to apply them.  

Effectiveness across Responsible AI practices

Question: How effective is your company in putting responsible AI and AI governance into practice in the following areas?

Source: PwC’s 2026 Ireland Responsible AI Survey

EU AI Act readiness exposes capability gaps

Only 14%

of organisations say they are fully prepared for the EU AI Act.

The EU AI Act is sharpening what readiness really means. Only 14% of Irish organisations say they are fully prepared for compliance. By contrast, 70% say they are partially prepared, 9% are minimally prepared, and 7% are not prepared at all. That distribution suggests many organisations recognise the importance of responsible AI, but most have not yet built the governance capability required for a more formal regulatory environment.

The most important finding is the nature of the barriers. More than half of respondents (53%) cite limited internal expertise or capacity for AI compliance. A further 37% point to budget or resource constraints, and 30% say lack of clarity about EU AI Act requirements is holding them back.

Readiness is unlikely to come from legal interpretation alone. It depends on whether organisations can build an inventory of AI use cases, define documentation requirements, connect legal and risk expectations with technology delivery, and establish governance routines that can withstand scrutiny. In that sense, EU AI Act readiness is best understood not as a narrow compliance project, but as a test of governance capability.

How prepared are Irish organisations for the EU AI Act?

Question: How prepared is your organisation to comply with the EU AI Act?

Source: PwC’s 2026 Ireland Responsible AI Survey

Responsible AI moves from risk control to value creation

The survey also shows that Irish organisations are widening how they think about the value of responsible AI. The most frequently cited benefit is reduced regulatory or compliance risk, selected by 67% of respondents. That is followed by protected brand and reputation (58%), enhanced cybersecurity and data protection (58%), enhanced customer experience (54%), enhanced innovation (51%), and improved return on AI investment (44%). Lower down the list are improved transparency (37%), improved internal stakeholder trust (37%), and improved external stakeholder trust (35%).

Responsible AI is increasingly being understood in practical business terms: as a way to reduce risk, strengthen trust and support better performance outcomes. Irish organisations tend to frame responsible AI first as a trust and risk discipline, while US peers place comparatively more emphasis on responsible AI as an enabler of improved return on AI investment. Our AI Performance Study suggests the organisations seeing the strongest AI returns are those that go beyond productivity and risk management alone, pairing growth ambition with the data, governance and operating foundations needed to scale AI reliably. In that context, Irish organisations may be building the right foundations, but the next opportunity is to connect those foundations more directly to growth and performance outcomes.

Irish organisations, operating in a more regulated context, are articulating value through trust and control first. But the strategic implication is broader: when governance is proportionate to risk, it can support faster, more confident adoption of AI. Responsible AI is therefore becoming part of the infrastructure that enables organisations to scale AI sustainably, rather than simply a mechanism for constraining it.  

Top barriers to operationalising Responsible AI

Question: What are the biggest barriers your organization faces in operationalising responsible AI and AI governance practices? (Select up to 3.)

Source: PwC’s 2026 Ireland Responsible AI Survey

Agentic AI raises the governance bar again

56%

of leaders expect AI agents to reshape governance within the next year.

Autonomous AI agents are emerging as the next major governance test. In Ireland, 56% of respondents believe autonomous agents will reshape how their organisation approaches AI governance over the next year, including 12% who strongly agree and 44% who agree. At the same time, 28% disagree and 19% strongly disagree. The split suggests many organisations see material change ahead, while a substantial minority remain unconvinced about its pace or scale.

That broader context is reflected in our AI Agent Survey, which found that Irish organisations are increasing investment and seeing early gains from AI agents, but remain at an early stage of adoption at scale, with trust still a major constraint. In that light, the split in the Responsible AI Survey looks less like uncertainty about whether agentic AI matters and more like differing views on how quickly it will translate into enterprise-wide change.

The data shows where organisations are focusing. Irish respondents report strongest adoption in foundational controls: 74% cite data access controls and 72% human-in-the-loop oversight. Beyond that, 51% point to role-based permissions, 49% to risk-based approvals, and 47% to agent activity logs, observability and monitoring. Evaluation and testing capabilities are materially weaker at 37%. A further 7% say they have no safeguards in place or planned, and 9% are unsure.

Irish organisations are much less likely than US peers to expect near-term governance change because of autonomous agents (56% versus 87%). At the same time, they’re weaker on evaluation and testing (37% versus 52%), while appearing stronger on data access controls (74% versus 55%) and human-in-the-loop oversight (72% versus 52%). That suggests Ireland may be prioritising control and intervention, while the US is moving faster on assurance mechanisms for agent behaviour. As systems become more autonomous, organisations will need both. 

Who leads Responsible AI?

Question: Which function has primary responsibility for driving responsible AI and AI governance in your organization? (Select one.)

Source: PwC’s 2026 Ireland Responsible AI Survey

What leaders should do next

Many organisations now have responsible AI principles, but far fewer have translated them into day‑to‑day decision‑making. The survey shows that 77% struggle to scale principles into operational processes, with ownership rarely sitting in business units. Without clear decision rights, escalation paths and governance forums, responsible AI remains dependent on individual judgement rather than organisational discipline. Leaders should therefore clarify who owns AI risk decisions, how accountability is shared across business, technology, risk, and compliance, and how issues are resolved in practice.

A risk‑based approach is the strongest area of governance in Ireland, yet only one‑third of organisations say they are very effective at applying it. Proportionate governance allows organisations to focus controls where potential impact is highest, while avoiding unnecessary friction for lower‑risk use cases. This balance is critical: without it, organisations either slow innovation or expose themselves to avoidable risk, undermining trust with regulators, customers and internal stakeholders.

Execution gaps are driven less by missing strategies and more by limited expertise, capacity and operational readiness. Over half of respondents cite internal capability constraints, and 65% say responsible AI resourcing is insufficient. Policies and frameworks alone will not close this gap. Organisations need to invest in governance roles, training, supporting tools and practical guidance for teams building, buying or deploying AI. Without this capability, even well‑designed frameworks will struggle to deliver consistent, scalable outcomes.

Low effectiveness scores for observability, monitoring and evaluation point to a significant assurance gap. This becomes more acute as AI systems become more autonomous. Organisations should prioritise the ability to understand how AI systems behave over time, detect issues early, and intervene when risks emerge. Logging, testing, incident‑response capabilities and clear accountability are not optional controls — they are essential foundations for sustaining trust as AI moves into more critical business processes.

Only 14% of organisations say they are fully prepared for EU AI Act compliance, with most citing capability and resource gaps rather than technical complexity. Readiness should not be treated as a narrow legal exercise. Instead, it should be used as a catalyst to strengthen governance more broadly: establishing an inventory of AI use cases, clarifying documentation expectations, and building cross‑functional routines that connect legal, risk and technology teams. This pragmatic approach supports both regulatory compliance and more confident, sustainable AI adoption.

We’re here to help

The findings in this survey point to a common challenge: moving from responsible AI intent to operational excellence. PwC works with organisations facing exactly these issues — from embedding governance into operating models and preparing for the EU AI Act, to strengthening assurance for more autonomous AI systems. We help leaders design proportionate, practical governance that supports innovation, protects trust and enables AI to scale responsibly. Contact us today to discuss your responsible AI challenges and opportunities.

Responsible AI

Harness AI’s limitless potential while managing risks.

Contact us

Keith Power


Partner, PwC Ireland (Republic of)

Tel: +353 86 824 6993

David Lee


Partner and Chief Technology Officer, PwC Ireland (Republic of)

Tel: +353 86 280 9998

Fidelma Boyce


Assurance Partner, PwC Ireland (Republic of)

Tel: +353 86 812 8831

James Scott


Director, PwC Ireland (Republic of)

Tel: +353 87 144 1818
