Your finance team just pasted last quarter's management report into ChatGPT to "summarise the key points." Your marketing manager fed customer testimonials into an AI tool to draft case studies. Your sales rep uploaded a prospect list to generate personalised outreach emails.
None of them asked permission, none of them thought twice about it, and your IT team has no idea it happened.
Between March 2023 and March 2024, the amount of corporate data employees paste into AI tools increased by 485%. Half of all employees now use Shadow AI: AI tools their organisations haven't approved, can't monitor, and don't control.
This isn't a future problem. It's happening right now, in your organisation, whether you know it or not.
Shadow AI is what happens when staff use artificial intelligence tools without IT approval or oversight. It's the logical evolution of Shadow IT, but faster, harder to detect, and significantly riskier.
Here's why it's different.
Traditional Shadow IT was about unapproved software or cloud services. You could spot Dropbox traffic. You could block unauthorised apps at the firewall. You had some visibility.
Shadow AI doesn't work like that.
AI tools are embedded everywhere now. They're built into Microsoft 365, Google Workspace, Zoom, Slack, Canva, Grammarly, and hundreds of SaaS platforms your team already uses. There's no new login to monitor. No suspicious download. It's just turned on.
And even when staff use standalone tools like ChatGPT or Claude, it's often through a browser on a work device, using a personal account. From an IT perspective, it looks like normal web traffic. From a risk perspective, it's a data leak waiting to be discovered.
Recent MIT research found that while only 40% of companies have purchased official AI subscriptions, employees in over 90% of companies regularly use personal AI tools for work. AI adoption is instant because there's no friction. No approval process. No procurement. No training required. Someone hears about a tool, tries it, finds it useful, and keeps using it.
By the time leadership asks "are we using AI?", the answer is already yes.
Let's get specific, because this isn't theoretical.
Your marketing team is feeding brand messaging, customer insights, and campaign briefs into ChatGPT to draft social posts. Your finance team is uploading redacted management reports to summarise trends. Your HR manager is using AI to write job descriptions and performance reviews. Your sales reps are pasting lead lists into tools that auto-generate outreach emails.
Some of this is harmless and some of it isn't.
Here's what we're seeing across New Zealand businesses right now:
Staff pasting sensitive information into public AI tools. Customer data, financial forecasts, strategic plans, contract terms, employee records. It's not malicious; people genuinely don't realise that what they paste into a free AI service might be stored, logged, or used to train future models. In March 2024, 27.4% of corporate data employees put into AI tools was sensitive, up from 10.7% a year earlier.
AI features embedded in everyday SaaS platforms. Microsoft's Copilot is built into Word, Excel, and Teams. Google's Gemini is in Workspace. Notion has AI. So does Salesforce, HubSpot, Monday, Asana, and Xero. These tools don't ask permission before they start suggesting text or analysing your data. They're just on.
Browser plugins and personal accounts. Extensions that summarise articles, rewrite paragraphs, or translate documents are processing company information through third-party servers. Personal ChatGPT accounts used on work devices create a grey zone where corporate data flows outside corporate control.
ChatGPT remains the most frequently used AI application in workplaces, with 73.8% of accounts being non-corporate ones that lack security and privacy controls. For Google's Gemini, that figure jumps to 94.4%.
The problem isn't that your people are using AI. It's that leadership has no idea where it's being used, what data is touching it, or who's accountable when something goes wrong.
Let's be direct: this isn't hypothetical risk. It's current exposure.
Data leakage and loss of intellectual property. When sensitive information gets pasted into an AI tool, you lose control of it. Some AI providers store inputs for training. Others log conversations for compliance. Even if they promise privacy, you're trusting a third party with data that might be commercially sensitive, legally protected, or both.
One leaked contract clause. One mishandled customer list. One strategic plan summarised by an AI that shares outputs with other users. That's all it takes.
Regulatory and privacy exposure. If your business handles personal information, and most do, you're bound by the Privacy Act 2020. New Zealand's Privacy Commissioner has issued specific guidance on AI usage, making it clear that the Information Privacy Principles apply to AI tools just as they do to any other technology. Using AI tools to process customer data without proper safeguards could put you offside. If you're in a regulated sector (finance, health, legal), the compliance risk multiplies fast.
The question your auditor will ask isn't "did you know people were using AI?" It's "what controls did you have in place to prevent misuse?"
Cyber insurance implications. In organisations with high levels of Shadow AI, the average data breach cost an extra $670,000, a 16% increase compared to organisations with little or none. Insurance policies are starting to ask specific questions about AI usage and governance. If you can't demonstrate reasonable controls, don't be surprised if your claim gets declined or your premium jumps.
Loss of control and auditability. When something goes wrong, you need to trace what happened. Shadow AI makes that impossible. No logs. No visibility. No way to know what data left the building, where it went, or who saw it.
Reputational damage. Public trust is built slowly and lost instantly. If a customer's personal information ends up in an AI training dataset because an employee pasted it into a free tool, that's a front-page problem. It doesn't matter that it was an accident.
This is not an IT problem. It's a governance problem. And the people accountable for it are sitting in the executive team and the boardroom.
The instinct here is obvious: just ban it. Block ChatGPT at the firewall, add a clause to the acceptable use policy, problem solved.
Except it's not.
Prohibition fails for three reasons.
First, AI is already embedded in tools your organisation depends on. You can't ban Copilot without banning Microsoft 365. You can't block AI in Salesforce without breaking your CRM. Banning AI isn't technically feasible unless you're willing to gut your tech stack.
Second, staff will find workarounds. They'll use their phones, personal hotspots, and tools you've never heard of. 59% of employees hide their AI use from their bosses, and 85% of employees who have approved AI tools still admitted to using unapproved ones. Bans don't stop the behaviour; they just push it further underground and make it harder to manage.
Third, there's a real productivity trade-off. AI tools genuinely help people work faster and better. Drafting documents, summarising research, generating ideas, automating repetitive tasks: this stuff saves time. 83% of knowledge workers say they use AI to save time, 81% say it makes their job easier, and 71% say it improves productivity. If you ban AI entirely while your competitors don't, you're choosing to be slower and less efficient.
The tension is real. Leadership wants control, staff want to get work done, and a blanket ban satisfies neither.
What works is governance, not restriction.
Good AI governance doesn't mean locking everything down. It means creating clear rules, making safe options available, and helping people make better decisions.
Here's what that looks like in practice.
AI usage policies that people can actually follow. Spell out what's allowed and what isn't. Be specific. "Don't use unapproved AI tools" is useless. "Don't paste customer data, financial information, or confidential documents into public AI tools" is clear.
Make it obvious what the rules are, why they exist, and what happens if someone breaks them. Then communicate it properly, not buried in a 40-page handbook nobody reads.
Data classification rules that stick. Not all information is equal. Public data can go anywhere. Internal data needs basic controls. Confidential and sensitive data should never touch an unapproved AI tool.
| Data Classification | Examples | AI Usage Rules |
|---|---|---|
| Public | Marketing materials, published reports, public website content | Can be used in any AI tool |
| Internal | Internal memos, operational procedures, general business communications | Approved enterprise AI tools only |
| Confidential | Financial data, strategic plans, employee records, vendor contracts | Enterprise AI with data residency and encryption controls only |
| Sensitive | Customer personal data, health information, trade secrets, legal documents | Prohibited from public AI tools; must use approved, audited systems with full logging |
If your people can't quickly identify what's safe to share and what isn't, your classification system is too complicated. Simplify it.
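To make the classification rules checkable rather than purely aspirational, they can be expressed as data your tooling (or your people) can apply consistently. Here's a minimal sketch in Python, purely illustrative: the classification labels mirror the table above, but the tool names, tier assignments, and helper function are our own assumptions, not any product's API.

```python
# Illustrative sketch: encode the classification table as data so the question
# "can this data go into that AI tool?" has a checkable answer.
# The tool names and tier assignments below are hypothetical examples.

from enum import IntEnum


class Classification(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    SENSITIVE = 3


# Highest classification each tool is approved to handle (hypothetical).
TOOL_TIERS = {
    "free-public-chatbot": Classification.PUBLIC,        # consumer tools, no controls
    "enterprise-copilot": Classification.CONFIDENTIAL,   # enterprise controls, encryption
    "audited-internal-llm": Classification.SENSITIVE,    # approved system, full logging
}


def is_allowed(data_class: Classification, tool: str) -> bool:
    """Return True if data of this classification may be used with the given tool."""
    # Unknown tools default to the public tier, so only public data is allowed.
    return TOOL_TIERS.get(tool, Classification.PUBLIC) >= data_class


if __name__ == "__main__":
    print(is_allowed(Classification.PUBLIC, "free-public-chatbot"))        # True
    print(is_allowed(Classification.CONFIDENTIAL, "free-public-chatbot"))  # False
    print(is_allowed(Classification.SENSITIVE, "audited-internal-llm"))    # True
```

Even if you never automate the check, writing the rules down this precisely is a useful test: if they can't be expressed this simply, your staff won't be able to apply them in their heads either.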
Approved vs unapproved tools. Give people safe options. If you don't want staff using ChatGPT, provide an enterprise AI tool with proper data handling and security controls. ChatGPT Enterprise, Microsoft 365 Copilot, and Claude for Enterprise all offer data privacy controls, encryption, and the ability to retain control over your data.
Enterprise AI platforms typically encrypt all data at rest using AES-256 and in transit using TLS 1.2+, and critically, they don't use your business data to train their models by default. That's fundamentally different from free consumer tools.
If the approved tools are worse than the free alternatives, people will keep using the free ones. So make the approved tools good.
Identity, logging, and access controls. Ensure AI tools are accessed through corporate accounts, not personal ones. That means you can monitor usage, revoke access, and audit activity if needed. It also means employees understand they're using company resources under company policies.
Logging isn't about surveillance. It's about accountability and incident response. If something goes wrong, you need to know what happened.
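As a rough illustration of what auditability looks like in practice, here's a short sketch that summarises AI usage per user and tool from an exported audit log. The file name and column names are assumptions; adapt them to whatever your identity provider or enterprise AI platform actually exports.

```python
# Minimal sketch: summarise AI tool usage from an audit log export.
# Assumes a CSV with "user", "tool", and "timestamp" columns -- the file
# and column names are hypothetical; real exports will differ by platform.

import csv
from collections import Counter


def summarise_usage(path: str) -> Counter:
    """Count sessions per (user, tool) pair from an audit log export."""
    counts: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[(row["user"], row["tool"])] += 1
    return counts


if __name__ == "__main__":
    # Show the ten heaviest user/tool combinations.
    for (user, tool), sessions in summarise_usage("ai_audit_export.csv").most_common(10):
        print(f"{user:<30} {tool:<20} {sessions} sessions")
```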
Employee education and awareness. Most people don't realise they're creating risk. They're just trying to do their job faster. Education fills the gap between intent and impact.
Run short, practical sessions. Show real examples. Explain what can go wrong and how to avoid it. Make it conversational, not preachy. The goal is to build awareness, not scare people into silence.
This doesn't need to be perfect on day one. Start with the basics. Refine as you learn. The point is to move from "we have no idea what's happening" to "we have a handle on this."
Most organisations don't have the in-house expertise to build AI governance from scratch. And that's fine, it's new territory for everyone.
This is where a good IT and cybersecurity partner earns their keep.
The first step is an AI exposure assessment. Map out what tools are being used, where data is flowing, and what gaps exist in your current policies and controls. You can't manage what you can't see, so visibility comes first.
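A useful first pass, before any formal tooling, is simply flagging traffic to known public AI services in your web proxy or DNS logs. The sketch below is illustrative only: the domain list is deliberately short and the plain-text log format is an assumption; a proper assessment goes much further than this.

```python
# Illustrative first pass at an AI exposure assessment: flag proxy or DNS
# log lines that reference known public AI services. The domain list is
# incomplete and the plain-text log format is an assumption.

AI_DOMAINS = (
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
)


def flag_ai_traffic(log_path: str) -> list[str]:
    """Return log lines that mention a known public AI service domain."""
    hits = []
    with open(log_path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            if any(domain in line for domain in AI_DOMAINS):
                hits.append(line.rstrip())
    return hits


if __name__ == "__main__":
    for line in flag_ai_traffic("proxy.log")[:20]:
        print(line)
```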
From there, you build a governance framework tailored to your organisation. Not a generic template pulled from the internet. A practical set of policies, controls, and processes that fit your size, your industry, and your risk appetite.
That includes aligning AI use with your existing security and compliance obligations. If you're handling personal information, health data, financial records, or anything regulated, your AI governance needs to reflect that. A partner who understands both technology and compliance can bridge that gap.
The goal isn't to block innovation. It's to enable safe innovation. That means helping your teams adopt AI tools in ways that improve productivity without creating unmanaged risk. It means making the right thing the easy thing.
And it means ongoing support. AI is evolving fast. New tools appear constantly. Threats change. Regulations tighten. Governance isn't a one-time project, it's an ongoing discipline. A good partner helps you stay on top of it.
Shadow AI is already inside your organisation. The question isn't whether your people are using AI, it's whether you have any control over how they're using it.
This is a leadership issue, not a technical one. It's about accountability, governance, and making deliberate choices about risk.
Banning AI won't work. Ignoring it is worse. What works is treating AI like any other business tool: set clear policies, provide safe options, educate your people, and monitor what's happening.
If you're not sure where you stand right now, what tools are being used, what data is at risk, or what controls you need, that's the first problem to solve.
Because the longer you wait, the bigger the blind spot gets. And blind spots have a habit of turning into very public, very expensive problems.
Want to know where your organisation actually stands on Shadow AI? We can help you assess current exposure, build practical governance, and align AI use with your security and compliance obligations. Get in touch to discuss what good looks like for your business.