Service · AI Usage Review · Advisory

AI usage review. Find out first.

Most AI tools arrived inside organisations through the side door. The OAIC's Notifiable Data Breaches reports consistently flag misconfigured access and human error as leading causes of breaches. Before you write the policy, find out what is actually happening. We talk to your team, surface real usage, and give you a risk picture you can act on.

15-30 min per staff interview
0 system access required
Non-judgemental framing
<3 wks typical engagement
01 · The picture

AI arrived through the side door. No one wrote it down.

Staff started using ChatGPT to draft emails. Someone tried Copilot. Another team uses Gemini through their personal Google account. None of it shows up in any IT system you can see.

You cannot describe your current AI exposure. You do not know what data has been put into which tools. Any policy you write in a vacuum will be ignored or out of date by the time it lands. And you cannot answer "how are you managing AI risk?" with a straight face. We give you the honest picture, in plain language, without anyone needing to admit fault.

What we usually find

  • Personal accounts. ChatGPT and Gemini logged in with personal Google or Apple IDs.
  • Sensitive data pasted. Client names, support notes or financial data put into free tools.
  • Browser plug-ins. AI extensions installed by individuals, often touching email and CRMs.
  • No checking step. AI-generated content going to clients without a human review step.
02 · Scope

The five areas we cover.

01
Tool inventory

Which AI tools are in use across the team. Sanctioned, paid, free or personal accounts. How often, for what tasks. Browser extensions and plug-ins that bring AI into existing workflows.

02
Data exposure

What kinds of information staff are putting into AI tools. Whether client names, health information, support notes, incident reports, financial or commercial data has been entered. Whether files have been uploaded.

03
Awareness and practice

What guidance staff currently have, formal or informal. Whether they understand what their tools do with the data. Where confusion is leading to risky behaviour. What would actually help.

04
Policy position

What an AI usage policy needs to cover for your organisation. A practical first draft you can adapt. The shortest path to making the policy something staff will actually follow.

05
Visibility going forward

How to gain proper visibility over AI use without locking everything down. Tooling options for your environment (Microsoft Defender for Cloud Apps, Purview, alternatives). What your IT provider can help with.

06
Recommendations and owners

Risks rated low, medium or high. Recommendations with the right owner indicated. A short list of what to do this month, this quarter and this year.

03 · Method

How the engagement runs.

01 · kick-off

Confirm scope and framing

Short call with the project sponsor to confirm priorities, who we will speak to and how the review is introduced to staff.

02 · interviews

Talk to a sample of staff

Short, non-judgemental interviews (15 to 30 minutes each) with a representative slice of the team. The goal is honesty, not enforcement.

03 · policy

Draft the AI usage policy

We draft an AI usage policy outline based on what we have heard, ready for the organisation to adapt. It reflects how staff actually work.

04 · draft

Risk-rated draft report

Findings, risks rated low, medium or high, and practical recommendations with the right owner indicated. Issued for your review.

05 · walkthrough

Final report and plan

Walkthrough session with the leadership team. Final report incorporating any clarifications. Policy outline finalised.

Engagement runs over a few weeks of elapsed time, depending on staff availability. We rely on what staff tell us. With the right framing, people are remarkably honest, especially when they know the goal is to make their working lives easier rather than to catch them out.

04 · Deliverables

What you walk away with.

Written usage review

Findings from the interviews, risks rated low, medium or high, and practical recommendations with the right owner indicated.

  • Findings by area
  • Risk-rated issues
  • Recommendations
  • Owners indicated
Draft AI usage policy

A first draft policy outline built from what your team is actually doing, not from a template. Designed to be usable, not aspirational.

  • First-draft policy
  • Acceptable use
  • Approval and review
  • Ready to adapt
Leadership walkthrough

A session with the leadership team to step through the report, agree priorities and discuss any sensitive findings privately.

  • Leadership walkthrough
  • Stakeholder Q&A
  • Priority agreement
  • Confidential delivery

▸ fixed-price quote agreed before any work starts.

05 · Boundaries

What this engagement is not.

Not in scope
A technical audit

We do not scan Microsoft 365, devices or networks to verify usage.

Not in scope
A penetration test

If you want technical visibility, we will tell you what to ask your IT provider for.

Not in scope
A staff investigation

This is not a disciplinary process. The framing is supportive.

Not in scope
A formal compliance audit

This is advisory. We point at standards and where you sit against them.

Not in scope
A legal review

We are not lawyers. We will tell you where to get one if you need one.

06 · Triggers

When to call us in.

Scale unknown
Suspect AI is everywhere

Leadership has realised AI use is happening across the team but has no idea of the scale or what data is involved.

Recent incident
Something happened

A staff member did something with AI that prompted a closer look. You want a structured picture, not a witch hunt.

Policy in draft
Writing the policy

An AI usage policy is being drafted and the executive wants it grounded in reality, not a template downloaded from the internet.

Funder question
How are you managing AI risk?

A board, funder, auditor or insurer has asked the question and you cannot answer it confidently right now.

Copilot rollout
About to roll out Copilot

Microsoft 365 Copilot or another sanctioned tool is about to be rolled out and the executive wants to understand the existing landscape first.

Sensitive sector
Sensitive data, real consequences

Disability, aged care, allied health, education, legal, finance, NFP. Sectors where shadow AI use carries real consequences.

07 · FAQ

Frequently asked questions.

Why interviews instead of a technical audit?

A technical audit gives you a list of accounts and licences. It does not tell you what staff are doing with personal accounts, paid subscriptions on expense cards, or browser plug-ins. The interview approach is the only way to surface those. If you want technical visibility on top, we will tell you exactly what to ask your IT provider for.

Will staff actually be honest with you?

Yes, with the right framing. We open every interview by saying we are not here to catch anyone out, the leadership team has agreed nothing said becomes a disciplinary issue, and the goal is to make their working lives easier. People are remarkably honest under those conditions, especially compared to a formal audit.

Do you need access to our systems?

No. The engagement is interview-based and document-light. We do not log into Microsoft 365, devices, networks or any other system. That is a deliberate choice: it keeps the engagement low-friction and keeps the interviews honest.

Is the draft policy ready to sign off?

It is a starting draft, not a final legal document. We design it to be practical, plain-English and aligned to how your team actually works. You will want to run it past HR and legal before it is signed off, but it will be much closer to usable than what most policy templates produce.

Do you work with organisations in sensitive sectors?

Yes. We work with not-for-profits, allied health, disability and aged care providers regularly. The data sensitivity in those sectors is exactly why a usage review matters before a policy is written.

Can you implement the monitoring tooling for us?

That is a separate scope, and we often recommend your existing IT provider does it, because they already manage your environment. We will tell you exactly what to ask for, and we are happy to brief them. Implementation is independent of the review.

Want to know what your team is actually doing? Start with a 30-minute scoping call.

Book a 30 min Scoping Call →

▸ we will tell you whether this engagement is the right fit. No pitch deck.