As Irish organisations move from AI experimentation towards broader adoption, the pressure is shifting from proving isolated use cases to building the foundations needed to scale with confidence. Responsible AI is now firmly on the agenda, but progress has been real yet uneven: many organisations are advancing individual initiatives faster than the enterprise-wide governance needed to sustain them. As AI becomes more pervasive, the challenge is no longer awareness but execution.
The survey bears this out: responsible AI is firmly on the organisational agenda in Ireland, even if enterprise-level maturity remains uneven. In practical terms, most organisations (77%) have moved past the initial policy stage and commenced practical implementation. That is a significant shift from discussion to action.
The more revealing question is where organisations sit on that curve. The largest share of Irish organisations is in the embedded stage, where governance practices and guidance exist but are not yet fully adopted across the enterprise. Only 19% of organisations say responsible AI is strategic, described in the survey as a recognised business priority with executive sponsorship; in the US, that figure is 28%. Ireland shows momentum, but fewer organisations have reached the point where responsible AI is anchored at leadership level and carried consistently through the operating model.
Responsible AI has therefore moved beyond awareness, but the harder test is whether organisations can sustain it in practice. Many have introduced policies, training, or governance structures. Fewer have fully integrated responsible AI into how systems are designed, procured, deployed and monitored. That’s the difference between adoption and maturity, and it’s arguably where the next phase of progress will be won or lost.
Question: How would you describe where your organisation is in the process of adopting responsible AI and AI governance practices?
Source: PwC’s 2026 Ireland Responsible AI Survey
If adoption is becoming normalised, execution remains uneven, and the gap is clearest in comparison with the US. The defining gap is not intent but execution: Irish organisations lag US peers on the basic mechanics of responsible AI, including clear ownership, consistent standards, and visibility into how AI is used in practice. The survey illustrates this clearly, with Irish organisations far more likely to rate their governance practices as “somewhat effective” than “very effective”. Only 33% of Irish respondents say they are very effective at applying a risk-based approach to AI governance, compared with 47% in the US; even so, this is Ireland’s highest score across all governance areas. After that, confidence drops further: 30% say they are very effective at tracking and inventorying AI use cases (US: 45%), 28% at clear roles and accountability (US: 52%), and 28% at embedding responsible AI into risk, privacy and security processes (US: 48%). Lower scores again are recorded for defining and communicating priorities (21% vs 52%), development and deployment standards (16% vs 52%), observability and monitoring (16% vs 45%), and employee training and awareness (14% vs 49%).
The barriers help explain why. The largest challenge by a wide margin is difficulty translating responsible AI principles into scaled, operational processes, selected by 77% of respondents. After that come lack of clarity on ownership (37%), lack of tools or technical enablers (30%), limited budget or resources (28%), and cultural resistance to change (28%). The problem is not simply whether organisations understand the principles; it is whether those principles have been turned into routines that teams can apply consistently.
Ownership patterns reinforce the point. Primary responsibility most often sits with data/AI teams (28%) or shared cross-functional models (28%), followed by IT/engineering (21%) and legal/compliance (12%). No respondents identify business units as the main owner. That suggests responsible AI is still concentrated in specialist or enabling functions rather than fully embedded as a business-wide discipline. The next gains are therefore likely to come less from new principles and more from clearer ownership, stronger governance routines and better support for teams expected to apply them.
Question: How effective is your company in putting responsible AI and AI governance into practice in the following areas?
Source: PwC’s 2026 Ireland Responsible AI Survey
The EU AI Act is sharpening what readiness really means. Only 14% of Irish organisations say they are fully prepared for compliance. By contrast, 70% say they are partially prepared, 9% are minimally prepared, and 7% are not prepared at all. That distribution suggests many organisations recognise the importance of responsible AI, but most have not yet built the governance capability required for a more formal regulatory environment.
The most important finding is the nature of the barriers. More than half of respondents (53%) cite limited internal expertise or capacity for AI compliance. A further 37% point to budget or resource constraints, and 30% say lack of clarity about EU AI Act requirements is holding them back.
Readiness is unlikely to come from legal interpretation alone. It depends on whether organisations can build an inventory of AI use cases, define documentation requirements, connect legal and risk expectations with technology delivery, and establish governance routines that can withstand scrutiny. In that sense, EU AI Act readiness is best understood not as a narrow compliance project, but as a test of governance capability.
Question: How prepared is your organisation to comply with the EU AI Act?
Source: PwC’s 2026 Ireland Responsible AI Survey
The survey also shows that Irish organisations are widening how they think about the value of responsible AI. The most frequently cited benefit is reduced regulatory or compliance risk, selected by 67% of respondents. That is followed by protected brand and reputation (58%), enhanced cybersecurity and data protection (58%), enhanced customer experience (54%), enhanced innovation (51%), and improved return on AI investment (44%). Lower down the list are improved transparency (37%), improved internal stakeholder trust (37%), and improved external stakeholder trust (35%).
Responsible AI is increasingly being understood in practical business terms: as a way to reduce risk, strengthen trust and support better performance outcomes. Irish organisations tend to frame responsible AI first as a trust and risk discipline, while US peers place comparatively more emphasis on responsible AI as an enabler of improved return on AI investment. Our AI Performance Study suggests the organisations seeing the strongest AI returns are those that go beyond productivity and risk management alone, pairing growth ambition with the data, governance and operating foundations needed to scale AI reliably. In that context, Irish organisations may be building the right foundations, but the next opportunity is to connect those foundations more directly to growth and performance outcomes.
Irish organisations, operating in a more regulated context, are articulating value through trust and control first. But the strategic implication is broader: when governance is proportionate to risk, it can support faster, more confident adoption of AI. Responsible AI is therefore becoming part of the infrastructure that enables organisations to scale AI sustainably, rather than simply a mechanism for constraining it.
Question: What are the biggest barriers your organisation faces in operationalising responsible AI and AI governance practices? (Select up to 3.)
Source: PwC’s 2026 Ireland Responsible AI Survey
Autonomous AI agents are emerging as the next major governance test. In Ireland, 56% of respondents believe autonomous agents will reshape how their organisation approaches AI governance over the next year, including 12% who strongly agree and 44% who agree. At the same time, 28% disagree and 19% strongly disagree. The split suggests many organisations see material change ahead, while a substantial minority remain unconvinced about its pace or scale.
That broader context is reflected in our AI Agent Survey, which found that Irish organisations are increasing investment and seeing early gains from AI agents, but remain at an early stage of adoption at scale, with trust still a major constraint. In that light, the split in the Responsible AI Survey looks less like uncertainty about whether agentic AI matters and more like differing views on how quickly it will translate into enterprise-wide change.
The data shows where organisations are focusing. Irish respondents report strongest adoption in foundational controls: 74% cite data access controls and 72% human-in-the-loop oversight. Beyond that, 51% point to role-based permissions, 49% to risk-based approvals, and 47% to agent activity logs, observability and monitoring. Evaluation and testing capabilities are materially weaker at 37%. A further 7% say they have no safeguards in place or planned, and 9% are unsure.
Irish organisations are much less likely than US peers to expect near-term governance change because of autonomous agents (56% versus 87%). At the same time, they are weaker on evaluation and testing (37% versus 52%), while appearing stronger on data access controls (74% versus 55%) and human-in-the-loop oversight (72% versus 52%). That suggests Ireland may be prioritising control and intervention, while the US is moving faster on assurance mechanisms for agent behaviour. As systems become more autonomous, organisations will need both.
Question: Which function has primary responsibility for driving responsible AI and AI governance in your organisation? (Select one.)
Source: PwC’s 2026 Ireland Responsible AI Survey
The findings in this survey point to a common challenge: moving from responsible AI intent to operational excellence. PwC works with organisations facing exactly these issues — from embedding governance into operating models and preparing for the EU AI Act, to strengthening assurance for more autonomous AI systems. We help leaders design proportionate, practical governance that supports innovation, protects trust and enables AI to scale responsibly. Contact us today to discuss your responsible AI challenges and opportunities.