Tool inventory
Which AI tools are in use across the team. Sanctioned, paid, free or personal accounts. How often, for what tasks. Browser extensions and plug-ins that bring AI into existing workflows.
Most AI tools arrived inside organisations through the side door. The OAIC's Notifiable Data Breaches reports consistently flag misconfigured access and human error as leading causes of breaches. Before you write the policy, find out what is actually happening. We talk to your team, surface real usage, and give you a risk picture you can act on.
Staff started using ChatGPT to draft emails. Someone tried Copilot. Another team uses Gemini through their personal Google account. None of it shows up in any IT system you can see.
You cannot describe your current AI exposure. You do not know what data has been put into which tools. Any policy you write in a vacuum will be ignored or out of date by the time it lands. And you cannot answer "how are you managing AI risk?" with a straight face. We give you the honest picture, in plain language, without anyone needing to admit fault.
What kinds of information staff are putting into AI tools. Whether client names, health information, support notes, incident reports, or financial and commercial data have been entered. Whether files have been uploaded.
What guidance staff currently have, formal or informal. Whether they understand what their tools do with the data. Where confusion is leading to risky behaviour. What would actually help.
What an AI usage policy needs to cover for your organisation. A practical first draft you can adapt. The shortest path to making the policy something staff will actually follow.
How to gain proper visibility over AI use without locking everything down. Tooling options for your environment (Microsoft Defender for Cloud Apps, Purview, alternatives). What your IT provider can help with.
Risks rated low, medium or high. Recommendations with the right owner indicated. A short list of what to do this month, this quarter and this year.
Short call with the project sponsor to confirm priorities, who we will speak to and how the review is introduced to staff.
Short, non-judgemental interviews (15 to 30 minutes each) with a representative slice of the team. The goal is honesty, not enforcement.
We draft an AI usage policy outline based on what we have heard, ready for the organisation to adapt. It reflects how staff actually work.
Findings, risks rated low, medium or high, and practical recommendations with the right owner indicated. Issued for your review.
Walkthrough session with the leadership team. Final report incorporating any clarifications. Policy outline finalised.
The engagement runs over a few weeks, depending on staff availability. We rely on what staff tell us. With the right framing, people are remarkably honest, especially when they know the goal is to make their working lives easier rather than to catch them out.
Findings from the interviews, risks rated low, medium or high, and practical recommendations with the right owner indicated.
A first draft policy outline built from what your team is actually doing, not from a template. Designed to be usable, not aspirational.
A session with the leadership team to step through the report, agree priorities and discuss any sensitive findings privately.
▸ fixed-price quote agreed before any work starts.
We do not scan Microsoft 365, devices or networks to verify usage.
If you want technical visibility, we will tell you what to ask your IT provider for.
This is not a disciplinary process. The framing is supportive.
This is advisory. We point at standards and where you sit against them.
We are not lawyers. We will tell you where to get one if you need one.
Leadership has realised AI use is happening across the team but has no idea of the scale or what data is involved.
A staff member did something with AI that prompted a closer look. You want a structured picture, not a witch hunt.
An AI usage policy is being drafted and the executive wants it grounded in reality, not a template downloaded from the internet.
A board, funder, auditor or insurer has asked the question and you cannot answer it confidently right now.
Microsoft 365 Copilot or another sanctioned tool is about to be rolled out and the executive wants to understand the existing landscape first.
Disability, aged care, allied health, education, legal, finance, NFP. Sectors where shadow AI use carries real consequences.
Where AI could add value and what needs to be in place first across data, governance and tooling.
→ Independent review of where your data lives, who can reach it and what happens if something goes wrong.
→ Once the policy is in place, lift the team with hands-on workshops covering Claude, ChatGPT and Copilot.
A technical audit gives you a list of accounts and licences. It does not tell you what staff are doing with personal accounts, paid subscriptions on expense cards, or browser plug-ins. The interview approach is the only way to surface those. If you want technical visibility on top, we will tell you exactly what to ask your IT provider for.
Yes, with the right framing. We open every interview by saying we are not here to catch anyone out, the leadership team has agreed nothing said becomes a disciplinary issue, and the goal is to make their working lives easier. People are remarkably honest under those conditions, especially compared to a formal audit.
No. The engagement is interview-based and document-light. We do not log into Microsoft 365, devices, networks or any other system. That is a deliberate choice: it keeps the engagement low-friction and keeps the interviews honest.
It is a starting draft, not a final legal document. We design it to be practical, plain-English and aligned to how your team actually works. You will want to run it past HR and legal before it is signed off, but it will be much closer to usable than what most policy templates produce.
Yes. We work with not-for-profits, allied health, disability and aged care providers regularly. The data sensitivity in those sectors is exactly why a usage review matters before a policy is written.
That is a separate scope, and often we recommend your existing IT provider does it because they already manage your environment. We will tell you exactly what to ask for, and we are happy to brief them. Implementation is independent of the review.
▸ we will tell you whether this engagement is the right fit. No pitch deck.