Next-Gen Security - GenAI and Zero Trust

Soumyadipta Das, Director, PwC Ireland (Republic of), 13 September 2023

Where can generative AI fit into zero trust (ZT) architectures? How can it support cyber incident response and what do you need to take into account when embedding AI in security?

Generative artificial intelligence (GenAI) is in the spotlight these days. Beyond the hype, GenAI can bring real value to cybersecurity teams, particularly when related specialists are in short supply, but it also carries risk. 

CIOs, CISOs and their teams should understand the value proposition of using GenAI to implement ZT in areas such as anomaly detection, user risk analysis, security control testing, incident response, and even policy writing, while also being mindful of its limitations.

In a conventional network, users and entities inside the perimeter are highly trusted. If threat actors breach that perimeter, however, that implicit trust becomes a huge risk. That’s especially true with today’s decentralised workforces and the soaring number of network-connected devices driven by IT/OT convergence.

PwC’s Trust Survey 2023 shows executives see cybersecurity and AI risks as priorities for their businesses. It’s time to take an integrated approach and understand how GenAI can support ZT and reduce cyber risk.

ZT architecture design helps overcome challenges faced by conventional networks by using identities to define the security perimeter. For this modern style of network architecture, you need granular access control, which can be resource-intensive. AI can help reduce overheads while also ensuring consistency in access decisions. 

With a lack of trained cybersecurity specialists in the market, AI-enabled automation gives businesses a real path towards implementing ZT. 

Detecting anomalies

Analysing network traffic is a core task for security teams working with ZT. A network can contain thousands of users and devices and hundreds of critical resources, so teams first need to understand what qualifies as standard activity. Recognising abnormal behaviour within such huge volumes of traffic can be challenging.

GenAI can build comprehensive, detailed network diagrams that map expected data flows. These maps can be updated as often as needed, giving you the flexibility to keep expected and allowed traffic flows current in a dynamic IT environment.

Using AI, detection technologies can then be trained to detect any deviation from these baselines. AI can act as an early warning system, flagging even low-confidence signals of anomalous activity and taking appropriate measures automatically.
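
As a minimal sketch of how baseline-driven detection might work, the snippet below trains scikit-learn’s IsolationForest on synthetic flow telemetry and scores new flows against it. The feature set, contamination rate and synthetic baseline are illustrative assumptions, not a prescribed design.

```python
# Minimal sketch: baseline anomaly detection on network flow features.
# Feature choice, contamination rate and the synthetic baseline are
# illustrative assumptions; real deployments train on curated telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic baseline: [bytes_sent, bytes_received, duration_s] for typical flows.
baseline = np.column_stack([
    rng.normal(1_200, 200, 2_000),   # bytes sent
    rng.normal(3_500, 500, 2_000),   # bytes received
    rng.normal(0.8, 0.2, 2_000),     # duration in seconds
])

model = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

new_flows = np.array([
    [1_150, 3_400, 0.75],      # consistent with the baseline
    [95_000,  150, 30.0],      # large outbound transfer, long-lived flow
])

for flow, verdict in zip(new_flows, model.predict(new_flows)):
    label = "anomalous" if verdict == -1 else "expected"
    print(f"{flow.tolist()} -> {label}")
```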

Creating risk profiles

GenAI can be used to build profiles for users and devices, establishing what constitutes ‘normal’ activity and alerting on risky behaviour. With AI’s ability to rapidly analyse large data sets, risk telemetry about user and device sign-ins and access requests can be weighed almost instantaneously and an access decision made in near real-time. This helps address any concerns about the user experience which may arise when transitioning to a ZT architecture.

Using GenAI, you can build risk profiles to help streamline decision-making and ensure high-risk users are managed consistently in all network resource access requests.
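
To illustrate how aggregated risk signals could feed a consistent access decision, the sketch below combines a few sign-in signals into a score and maps it to allow, step-up or deny outcomes. The signal names, weights and thresholds are assumptions for illustration only.

```python
# Minimal sketch: combining sign-in risk signals into an access decision.
# Signal names, weights and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SignInContext:
    new_device: bool          # device not previously seen for this user
    impossible_travel: bool   # geolocation inconsistent with last sign-in
    off_hours: bool           # outside the user's usual working pattern
    sensitive_resource: bool  # request targets a high-value resource

WEIGHTS = {
    "new_device": 0.3,
    "impossible_travel": 0.5,
    "off_hours": 0.1,
    "sensitive_resource": 0.2,
}

def risk_score(ctx: SignInContext) -> float:
    """Sum the weights of all risk signals present in this sign-in."""
    return sum(weight for name, weight in WEIGHTS.items() if getattr(ctx, name))

def access_decision(ctx: SignInContext) -> str:
    score = risk_score(ctx)
    if score >= 0.6:
        return "deny"
    if score >= 0.3:
        return "step_up_mfa"   # allow, but require additional authentication
    return "allow"

print(access_decision(SignInContext(True, False, True, False)))   # step_up_mfa
print(access_decision(SignInContext(True, True, False, True)))    # deny
```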

Responding to incidents

Quick responses to cyber incidents can drastically reduce their potential impact. One of ZT’s central tenets is ‘assume breach’. That means working from the basis that internal networks have been compromised, and so security teams need to make sure that networks are continuously monitored. What happens when a real incident is detected?

GenAI can create tailored incident reports to keep different stakeholder groups informed. These could include, for example:

  • technical reports for operational teams

  • summaries for executives

  • public statements.

Natural language processing (NLP) text-generating AI can help security operations teams by creating actionable, insightful reports of detected incidents. The data GenAI analyses could include indicators of compromise (IOCs), network logs and past user behaviour, giving you critical information about attacks when you need it most. GenAI can translate this raw data into coherent, logical commentaries of incidents, aided by clear and meaningful graphs, acting as a key enabler for your security teams and giving them insights they can act on quickly to mitigate the threat.
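
As a minimal sketch of how raw incident data might be assembled into audience-specific prompts for a GenAI service, the snippet below builds a structured request from IOCs and log excerpts. The `generate_report` call is a hypothetical placeholder for whichever approved GenAI service your organisation uses; the incident data is invented for illustration.

```python
# Minimal sketch: turning raw incident data into a GenAI reporting prompt.
# generate_report() is a hypothetical placeholder for an approved GenAI
# service; only the prompt assembly is shown. All data is illustrative.
import json

incident = {
    "id": "INC-0042",
    "detected_at": "2023-09-13T08:42:00Z",
    "iocs": ["185.0.2.10", "malicious-domain.example", "sha256:9f2c...d41"],
    "log_excerpts": [
        "08:41:57 outbound connection to 185.0.2.10:4444 from host FIN-LAP-031",
        "08:42:03 credential use outside normal hours for user j.doe",
    ],
}

def build_prompt(incident: dict, audience: str) -> str:
    """Build an audience-specific prompt from structured incident data."""
    return (
        f"Write a {audience} incident report based only on the data below. "
        "Do not speculate beyond the evidence provided.\n\n"
        f"{json.dumps(incident, indent=2)}"
    )

for audience in ("technical", "executive summary", "regulatory"):
    prompt = build_prompt(incident, audience)
    # report = generate_report(prompt)  # hypothetical call to your GenAI service
    print(prompt[:80], "...")
```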

In addition to detailed technical reports, GenAI is capable of producing executive summaries to keep leadership updated on incidents as they unfold. Here, GenAI helps senior executives understand the situation and supports quick decision making at the enterprise level.

With incident reporting a primary focus for regulators today, GenAI can help reduce the burden of regulatory reporting by drafting reports that are concise, correctly formatted and aligned with regulatory and official requirements.

Testing controls

Data is at the heart of ZT. Moreover, protecting sensitive data is a critical regulatory consideration in sectors such as pharmaceuticals, healthcare, and financial services. 

AI can help test the efficacy of security controls by generating synthetic, anonymised data that mimics real data. This artificial data can be used to fine-tune controls (a short sketch follows the list below), including:

  • data discovery – testing if discovery technologies are able to find artificial sensitive data hidden within the network

  • data classification – analysing synthetic data at rest to determine its sensitivity and applying appropriate security controls

  • data loss prevention (DLP) – identifying artificial sensitive data in transit to detect possible exfiltration attempts.
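
A minimal sketch of generating synthetic sensitive-looking records to exercise discovery, classification and DLP controls follows. The field formats are illustrative assumptions and every value is randomly generated, so no real personal data is involved.

```python
# Minimal sketch: generating synthetic "sensitive" records to test
# data discovery, classification and DLP controls. Field formats are
# illustrative assumptions; all values are randomly generated.
import csv
import random
import string

def fake_iban() -> str:
    """Return an IBAN-shaped string (not a valid account)."""
    return ("IE" + "".join(random.choices(string.digits, k=2))
            + "BOFI" + "".join(random.choices(string.digits, k=14)))

def fake_email() -> str:
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return f"{name}@example.com"

with open("synthetic_sensitive.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["full_name", "email", "iban"])
    for _ in range(100):
        writer.writerow([f"Test User {random.randint(1, 999)}",
                         fake_email(), fake_iban()])

# Plant synthetic_sensitive.csv in a monitored share to test discovery and
# classification, then attempt a controlled transfer to verify DLP policies fire.
```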

In all cases, security teams can leverage GenAI to avoid using real data, which could be highly sensitive. AI can also help teams assess cybersecurity controls in other areas (one example test artifact is sketched after this list), such as:

  • code scanning – testing tools which scan code for vulnerabilities by writing code with exploitable weaknesses

  • malware recognition – creating files containing potentially malicious code to test malware scanning tools

  • incident detection – assessing the sensitivity of anomaly detection technologies by creating specialised traffic that is subtly different from expected behaviour.
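
As one example of such a test artifact, the snippet below is a deliberately vulnerable function (SQL built via string formatting) of the kind GenAI could produce to check whether a static code scanner flags the weakness, alongside the safe pattern for comparison. It should only ever live in an isolated test repository and never be deployed.

```python
# Minimal sketch: a deliberately vulnerable test artifact used to verify that
# a static code scanner raises a finding for SQL injection patterns.
# Keep artifacts like this in an isolated test repository only.
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Intentionally unsafe: user input is interpolated straight into SQL,
    # which a code-scanning tool should report as a SQL injection risk.
    query = f"SELECT id, role FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The parameterised pattern the scanner should accept, for comparison.
    return conn.execute(
        "SELECT id, role FROM users WHERE name = ?", (username,)
    ).fetchall()
```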

Security teams can use GenAI to gain confidence in the operational effectiveness of cybersecurity and data protection controls, without needing to sanitise production data.

Creating policies

ZT network security policies can be complex. For organisations with complicated network topologies, implementing ZT may seem daunting. Knowing which traffic should be allowed to flow is crucial to restricting the access of high-risk users and devices to resources. 

AI can help design risk-based network security policies with granular detail. These policies enable systems to make contextual access control decisions based on all available data. AI’s ability to aggregate risk signals to make an informed judgement on whether to grant access can bring a step change into how your business secures its data.

Using risk profiles, GenAI is capable of quickly building network policies flexible enough for fast-paced IT environments. This means teams don’t need to continuously re-write policies based on new users and systems.
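
To make this concrete, the sketch below expresses a risk-based access policy as data that an enforcement point could evaluate per request. The attribute names, thresholds and rules are illustrative assumptions rather than any specific vendor’s policy language.

```python
# Minimal sketch: a risk-based access policy expressed as data and evaluated
# per request. Attribute names, thresholds and rules are illustrative
# assumptions, not a specific vendor's policy language.
POLICY = [
    # (resource prefix, maximum risk score allowed, require MFA)
    ("finance/",  0.2, True),
    ("hr/",       0.3, True),
    ("intranet/", 0.6, False),
]

def evaluate(resource: str, risk_score: float, mfa_passed: bool) -> str:
    for prefix, max_risk, needs_mfa in POLICY:
        if resource.startswith(prefix):
            if risk_score > max_risk:
                return "deny"
            if needs_mfa and not mfa_passed:
                return "step_up_mfa"
            return "allow"
    return "deny"  # default deny for resources not covered by the policy

print(evaluate("finance/ledger", 0.1, True))    # allow
print(evaluate("finance/ledger", 0.4, True))    # deny
print(evaluate("intranet/news",  0.5, False))   # allow
```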

Be aware of GenAI security and strategic risks

Organisations must stay mindful of the key issues facing the use of GenAI in security. These include:

  • bias – using biased AI can lead to discrimination in access decisions

  • regulations – authorities are moving to regulate the use of AI, with the EU’s upcoming AI Act expected to be a landmark

  • susceptibility to compromise – threat actors could exploit weaknesses in AI systems to alter the system’s behaviour, meaning they could launch an attack without detection

  • resource consumption – AI solutions can be computationally demanding, so networks must be able to withstand any related potential resource drain.

Take care, however, if you’re considering involving AI in strategic decision-making. While AI is useful for automating low-risk tasks, avoid using it to make decisions on the organisation’s behalf. Maintaining accountability is important when decisions are being made at the enterprise level.

Bring staff along on the journey to implement GenAI, centring your communications with colleagues on how GenAI can empower them to focus on value-add activities rather than mundane tasks. Consider developing positive communications campaigns that emphasise benefits such as upskilling and how AI can support rather than replace employees.

GenAI could be a useful support for your organisation on its zero trust journey, but before implementing GenAI at scale, consider its limitations and potential implications.

Is your business prepared to implement zero trust and GenAI? Here are three actions you can take now.

1. Perform a ZT readiness assessment

To take the first step on your path to achieving zero trust, you need to understand your organisation’s ability to align with ZT principles. 

A ZT-focused current state assessment is important in highlighting what your next steps should be. Should you strive for a full-scale ZT implementation, or would a more selective, subtle approach to ZT alignment better suit your needs? An assessment of your business’s current state can help answer such questions, including highlighting where your security tools can be adapted to implement ZT to rationalise your technology portfolio.

Develop an actionable roadmap following this assessment, complete with timelines, to help your business map out its path towards implementing a ZT architecture.

2. Analyse your current compliance with AI regulation

The AI Act will set out what providers and deployers of AI must do to help reduce the risk of misuse. 

This will include ensuring robust cybersecurity measures are in place and regularly monitored for effectiveness, particularly for AI systems deemed high risk. These could include systems used in providing critical infrastructure or essential services, such as credit scoring systems. Businesses need to understand their obligations under the AI Act and where AI is currently used across their operations.

For ZT, the AI Act sets out your obligations for designing and using training data sets. Technologies related to AI such as application programming interfaces (APIs) or text generators may be high risk, depending on their intended use. 

Assessing your organisation’s use cases against the AI Act’s requirements is an important first step in ensuring your use of AI is compliant, whether that’s in implementing ZT or otherwise.

3. Design a ZT strategy

Zero trust is not a technology, it’s a vision. Organisations must devise a strategy for aligning with ZT at the enterprise scale. Implementing ZT requires effort from stakeholders across the business, so defining a cohesive strategy will help ensure buy-in and accountability. 

Planning in advance will also support the use of GenAI in a controlled way which respects regulation. That’s especially relevant when it comes to managing sensitive information that can be used for authentication, such as biometric data, or your customer’s data.

Our team is ready to help you

ZT and GenAI have captured the attention of senior executives across all industries. New technologies and concepts bring risk, however. At PwC, our team of experienced cybersecurity professionals can help you navigate your cybersecurity maturity journey and address these key agenda items. 

Are you ready to implement ZT and GenAI to secure your business? Contact us today to find out how you can take a modern approach to securing your systems and data.

Contact us

Leonard McAuliffe

Partner, PwC Ireland (Republic of)

Pat Moran

Partner, PwC Ireland (Republic of)

Soumyadipta Das

Director, PwC Ireland (Republic of)
