NSP Insights for NZ Businesses


Written by Dayna-Jean Broeders | May 14, 2026 1:33:05 AM

 The Fastest Technology Adoption in History Has the Slowest Governance Response 

 

There's a pattern playing out in businesses across New Zealand right now, and most of the leaders inside it haven't noticed yet.

Someone in the finance team pastes a draft board report into ChatGPT to clean up the language; the report contains margin data. Someone in sales summarises a client proposal on a personal AI account before a meeting; the proposal contains pricing strategy, negotiation notes, and client financials. An IT manager uses a personal AI subscription to debug production code; the code represents eight years of competitive development.

None of these people are doing anything wrong, in their view. They're doing their jobs faster. They're being resourceful.

And in every one of those scenarios, your organisation's most valuable information has left the building - processed by a third-party model, retained on a server you have no contract with, potentially contributing to training data you'll never see. There's no breach notification, no audit trail, no moment of realisation. Just a slow, structural leak that most businesses won't notice until it's too late.

This is the AI governance problem. It's running inside your business right now - and the companies that address it deliberately will have a meaningful advantage over the ones that don't.

 

Why this is different from every technology shift that came before it

When the telephone arrived, businesses had decades to figure out what it meant for how information moved. When the internet changed everything, the compliance and governance complexity took years to catch up with the technology. Businesses had time to adapt, time to watch others make mistakes, time to build policy frameworks before the consequences became severe.

AI is not giving anyone that time.

According to AI Forum New Zealand's latest productivity report, 87% of NZ organisations now use some form of AI - nearly double the 48% recorded in 2023. That's not a gradual adoption curve. That's a structural shift compressed into two years. Microsoft ranks New Zealand seventh globally for AI tool adoption. The government's own modelling estimates AI could contribute up to NZ$102 billion to the economy annually by 2038.

The technology has arrived faster than any preceding wave. What hasn't arrived - not yet, not for most businesses - is the governance infrastructure to match it.

That gap is not a technology problem. It's a business risk problem, and unlike most business risks, it's one where the exposure grows every single day the gap remains open.

 

What "information governance" means - and why most businesses don't have it

The plain answer: Information governance is knowing what data your business holds, where it lives, who can access it, where it's permitted to go, and what happens when those rules are broken. It sounds straightforward. In practice, most NZ SMEs have never had to make it explicit - because the consequences of not having it were slow and manageable.

AI made the consequences fast and severe.

Here's the picture for most small and mid-sized NZ businesses: data governance has been treated like building code compliance. Important in principle, something you think about when it becomes a problem, and rarely the first priority when there are clients to serve and a business to run.

USB drives used with no policy. Personal cloud storage used casually for work files. No clear framework for what constitutes sensitive data and what can leave the organisation. No visibility into which third-party tools employees are using or what they're feeding into them.

This posture was practical. The risk was low, enforcement was rare, and the cost of sorting it out seemed disproportionate to the problem.

That calculation changed the moment AI became a mainstream workplace tool.

 

The shadow AI problem: what's already happening inside your organisation

What is shadow AI? Shadow AI refers to the use of artificial intelligence tools by employees without the knowledge, approval, or oversight of their organisation's IT or security function. It's the AI equivalent of shadow IT - but significantly more dangerous, because AI doesn't just store data externally. It processes it, may retain it, and in some cases, learns from it.

The scale of it is harder to dismiss than most leaders expect.

A January 2026 survey found 49% of enterprise workers are using AI tools not sanctioned by their employer. 33% admitted to sharing research or datasets. 27% shared employee data including salaries or performance information. 23% shared company financials. And 60% said using unsanctioned AI was worth the security risk if it helped them meet deadlines.

That last number is the important one. This isn't rogue behaviour. It's rational behaviour in an environment where people are under pressure to perform and the easiest tool available is a personal AI account with no restrictions and no oversight.

For New Zealand specifically, the Qualtrics 2026 Employee Experience Trends Report found only 12% of NZ employees use only company-provided AI tools - the lowest rate of any country in the study. The global average is 20%. New Zealand's gap between personal AI enthusiasm and organisational AI readiness is, by one measure, the widest in the world.

The reason isn't that NZ workers are particularly cavalier about risk. It's that 74% of NZ business leaders admit their organisation lacks a plan and vision for implementing AI. When leadership doesn't provide the structure, people build their own. They always do.

 

What the data says about the cost of getting this wrong

The financial impact of poor AI governance is measurable and significant.

IBM's 2025 Cost of a Data Breach Report found that organisations with high levels of shadow AI faced an average of $670,000 more per breach than those with low or no shadow AI usage. 97% of organisations that experienced an AI-related security incident lacked proper AI access controls. 63% had no AI governance policies in place at all.

The Stanford AI Index recorded 233 harmful AI-related incidents in 2024 - a 56% increase year-on-year. AI-crafted phishing emails now achieve click-through rates of around 54%, compared with approximately 12% for traditional attacks.

The WEF's 2026 Global Cybersecurity Outlook identified data exposure - not ransomware, not nation-state attacks - as the top AI-related concern among C-suite executives. That's a significant shift from previous years. Leaders aren't primarily worried about being attacked through AI. They're worried about what's already leaking out through it.

For NZ businesses operating under the Privacy Act 2020, the regulatory dimension adds another layer. The Act places real obligations on how personal information is handled, stored, and transferred. "We didn't realise the tool worked that way" is not a defence when a staff member fed a client database into a public AI model that retained the inputs. The obligation sits with the organisation, not the employee.

 

The pace of regulatory change is accelerating - and NZ businesses are exposed

Is AI governance regulated in New Zealand right now?

New Zealand has taken a deliberately light-touch approach to AI regulation. The government's first AI Strategy, released in July 2025, confirms a principles-based framework - no AI-specific legislation - and positions NZ as a place where AI investment is welcome and governance is largely voluntary.

What that means in practice is that existing law applies fully: the Privacy Act 2020, consumer protection frameworks, and employment obligations around data handling. None of these have AI carve-outs. If your staff are feeding personal client data into an unvetted AI tool, you likely have a Privacy Act exposure. If your employment contracts haven't been updated to address AI tool usage, you may have an ambiguous liability position if something goes wrong.

The international picture is moving faster. The EU AI Act is progressing through phased implementation from 2025 to 2027. Globally, governance expectations are tightening. NZ businesses with international clients, partners, or operations in other jurisdictions are already subject to external frameworks - whether they know it or not.

The businesses that treat this as "a future problem when regulation arrives" are misreading the timeline. The regulation isn't the risk. The data exposure is the risk. Regulation is just what makes it expensive when it surfaces.

 

The companies pulling ahead aren't doing anything exotic

It's worth being direct about what good AI governance looks like for an NZ SME, because the picture most business owners carry - a complex enterprise framework requiring a team of specialists and six months of consulting - is not the reality.

The businesses making real progress on this are doing five things.

They know what data they have - in practice. They've classified the information that matters: client records, financial data, pricing strategy, IP, personal information. Not everything gets the same treatment, but the things that matter are identified and handled differently. This is the foundation. Without it, everything else is guesswork.

They've made a decision about which AI tools are acceptable - not a ban. Banning tools doesn't work: research is consistent that prohibition drives shadow AI underground rather than eliminating it. When approved alternatives are provided, unauthorised tool usage drops by as much as 89%. The decision is about providing governed options that actually meet the need, not building walls that people route around.

They've told their people clearly what can and can't go into AI tools - this doesn't require a twenty-page policy. It requires a clear, plain-language position: here's what's sensitive, here's why it can't go into external AI tools, here's what you do instead. People comply with rules they understand. They route around rules that feel arbitrary.

They've closed the obvious gaps first - USB drives used without policy, personal cloud storage used for work files, and no clear position on what constitutes confidential information. These aren't AI-specific problems - they're information governance fundamentals that AI has made newly urgent. Fix these before anything else.

They treat visibility as the starting point, not surveillance as the end goal - you cannot govern what you cannot see. Understanding which AI tools are actually in use across the business is the first step toward building a framework that works. Most organisations are surprised by the answer. The surprise itself is valuable.

 

The strategic question every NZ business leader needs to answer

This isn't fundamentally a technology question. It's a competitive question.

AI is going to determine which businesses in every sector operate more efficiently, serve clients better, respond faster, and make smarter decisions. The advantage isn't in having AI. It's in having AI that's integrated, trusted, and governed - so it can be used confidently rather than anxiously.

The businesses that get there first will do so because they built the underlying infrastructure for it. They'll know where their data is. They'll know what can be fed into which tools. They'll have staff who understand the framework and trust it. They'll be able to adopt new AI capabilities quickly, because the governance foundation is already there.

The businesses that don't will face a different future. Either they absorb a data governance incident that forces reactive change at the worst possible time, or they find themselves unable to confidently adopt the tools their competitors are already running - because the underlying information hygiene isn't there to do it safely.

The telephone took decades to reshape how business operated. The internet took years. AI is doing it in months.

Which side of that gap your business lands on is still, right now, a choice you can make deliberately. The window for making it deliberately - rather than having it made for you - is shorter than most business leaders think.

 

Frequently Asked Questions: AI Governance for NZ Businesses

What is AI information governance and why does it matter for SMEs? AI information governance is the set of policies, controls, and practices that determine how artificial intelligence tools can use your organisation's data. It matters for SMEs because AI tools - including free and personal-account tools used by staff - process and may retain sensitive business data. Without governance, businesses lose visibility over where client, financial, and strategic information is going, and accumulate regulatory exposure under frameworks like the Privacy Act 2020.

What is shadow AI and how common is it in NZ businesses? Shadow AI is the use of AI tools by employees without IT approval or organisational oversight. It's widespread: 49% of enterprise workers globally use unsanctioned AI tools, and New Zealand has the lowest rate of company-provided AI tool adoption of any country surveyed, meaning the gap between what staff use and what organisations know about is particularly acute here.

Does the NZ Privacy Act apply to AI tool usage? Yes. The Privacy Act 2020 applies to how personal information is collected, stored, used, and transferred - regardless of the technology involved. If an employee inputs personal client data into an external AI tool, the organisation may have obligations it's not meeting. The "we didn't know how the tool worked" position does not relieve an organisation of its privacy obligations.

How do you build AI governance without a large IT team? Start with data classification - identify what information is sensitive and treat it differently. Create a simple, clear AI acceptable use policy. Audit which tools staff are actually using. Provide approved alternatives for common use cases rather than simply banning tools. Build visibility before trying to control everything. Most effective AI governance for SMEs is less about technology controls and more about clear decisions, communicated plainly.

What's the business risk of ignoring AI governance? The direct costs include breach exposure - shadow AI adds an average of $670,000 per incident to breach costs - plus regulatory risk under the Privacy Act and reputational risk if client data is exposed. Strategically, organisations without governance foundations will be unable to adopt AI tools confidently, creating a competitive disadvantage relative to peers who built those foundations early.

This is the conversation we're having with NZ businesses every week.

If you'd like to understand where your organisation sits - what's exposed, what's manageable, and where to start - we're here to help from start to finish.

Let's Talk →

 

Sources: AI Forum NZ Third AI Productivity Report 2025 · Qualtrics 2026 Employee Experience Trends Report · IBM Cost of Data Breach 2025 · Microsoft NZ / EY Economic and Social Impact Report 2025 · Aon AI Risk 2026 · WEF Global Cybersecurity Outlook 2026 · Stanford AI Index 2025 · MBIE NZ AI Strategy 2025 · Robert Half NZ 2025 · BlackFog Enterprise Worker Survey January 2026