18 April 2026

Enabling provable ISO 42001 compliance

Luke Matison

Director and Founder, Airentect

Most organisations now have a policy that says: do not put confidential data into AI tools. The harder question is whether anyone can prove the policy is being followed: not once a year on a checklist, but continuously, as ChatGPT, Copilot, Gemini, and Claude change how work gets done.

ISO/IEC 42001 gives you a credible frame for AI governance. What it still needs from you is technical evidence: controls at the point where risk is created, and records that show what actually happened when employees used AI.

What “provable” means in practice

For regulators, boards, and certification bodies, provable compliance usually boils down to:

  1. You can see risky behaviour before or as it happens, not only after a leak.
  2. You can enforce company rules at the moment of submission, not only in training slides.
  3. You can report exposure patterns and policy outcomes in a form risk and audit teams can stand behind.

That is different from “we told people not to.” It is measured behaviour, not assumed compliance.

Where the risk actually lives: the prompt layer

Traditional DLP still matters for email, files, endpoints, and cloud storage, but it was not designed to read the text of a prompt inside a browser session to an external LLM. That gap is exactly why a dedicated AI usage enforcement layer exists: it monitors and controls what employees send to AI tools, applying your security and data policies before a prompt reaches the AI system, while allowing productive use to continue.

In one line: Airentect is the enterprise layer that evaluates prompt content in real time, enforces your rules, and produces the telemetry and reports governance programmes need.
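
To make that concrete, here is a minimal sketch of what interception at the browser layer can look like. It is illustrative only, not Airentect's implementation: real chat UIs often submit through keyboard handlers rather than forms, and the evaluatePrompt check is a hypothetical stand-in for a policy engine.

```typescript
// Illustrative sketch of browser-layer prompt interception.
// NOT Airentect's implementation: the form hook and evaluatePrompt
// are hypothetical stand-ins for a real extension and policy engine.

type Verdict = "allow" | "block";

// Hypothetical policy check; a real extension would message a
// background service worker rather than test inline.
async function evaluatePrompt(text: string): Promise<Verdict> {
  const looksLikeSecret =
    /-----BEGIN (?:RSA |EC )?PRIVATE KEY-----|\bAKIA[0-9A-Z]{16}\b/.test(text);
  return looksLikeSecret ? "block" : "allow";
}

// Hold the submission until a verdict arrives, then release or drop it.
function guardPromptForm(form: HTMLFormElement, input: HTMLTextAreaElement): void {
  form.addEventListener("submit", (event) => {
    event.preventDefault();
    void evaluatePrompt(input.value).then((verdict) => {
      if (verdict === "allow") {
        form.submit(); // programmatic submit does not re-fire this handler
      } else {
        input.value = ""; // a real deployment would coach or redact instead
      }
    });
  });
}
```

The point of the sketch is the ordering: the verdict is obtained before the text leaves the endpoint, which is what makes enforcement at submission time possible.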

How Airentect works (at a glance)

  • Deployment: a lightweight browser extension rolled out through your existing MDM (e.g. Intune, Jamf) or a managed Chrome install. It operates at the browser layer, inspecting prompt content before it is transmitted, without changing network architecture, proxies, or firewall rules.
  • Coverage: major LLMs used in the browser (ChatGPT, Microsoft Copilot, Google Gemini, Anthropic Claude, and others), cross-LLM and cross-cloud, not tied to a single vendor ecosystem.
  • Real time: prompts are classified and acted on before they leave the endpoint. Actions include allowing, blocking, redacting, and logging, depending on how you configure policy and which tier you run.

Organisations typically start with visibility and evidence (what is actually being sent, where the hotspots are), then move to coaching and intercept flows for risky submissions, and finally to hard enforcement where regulation or the risk profile demands it.
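
As a sketch of that graduated arc, the rules below show the same two categories moving from logging to coaching to enforcement. The rule shape and action names are hypothetical, not Airentect's policy schema.

```typescript
// Hypothetical policy rules illustrating the typical rollout arc.
// Field names and actions are illustrative, not a product schema.

type Action = "log" | "coach" | "redact" | "block";

interface PolicyRule {
  category: string; // e.g. "credentials", "customer-pii"
  action: Action;
}

// Phase 1: visibility only — everything is logged, nothing is interrupted.
const visibilityPhase: PolicyRule[] = [
  { category: "credentials", action: "log" },
  { category: "customer-pii", action: "log" },
];

// Phase 2: coaching — risky submissions prompt the user to reconsider.
const coachingPhase: PolicyRule[] = [
  { category: "credentials", action: "coach" },
  { category: "customer-pii", action: "coach" },
];

// Phase 3: enforcement — hard controls where regulation demands them.
const enforcementPhase: PolicyRule[] = [
  { category: "credentials", action: "block" },
  { category: "customer-pii", action: "redact" },
];
```
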

What we help you detect

In real deployments, high-risk prompt categories often include:

  • Source code, API keys, and credentials
  • Internal strategy and confidential documentation
  • Customer PII and account identifiers
  • Financial models and unreleased earnings-sensitive material
  • Legal contracts and HR-sensitive content
  • Board-level materials

The point is not a static list: it is your policy, applied consistently at the prompt boundary, with outcomes you can explain to security, privacy, and compliance stakeholders.
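
To illustrate the difference between a static list and an applied policy, here is a simplified sketch of pattern-level detection for a few of the categories above. Production classification typically layers contextual analysis on top of patterns like these; the regexes are illustrative only, not the product's detection logic.

```typescript
// Simplified pattern-based detectors for a few high-risk categories.
// Illustrative only; real classification combines patterns with context.

const detectors: Record<string, RegExp> = {
  "aws-access-key": /\bAKIA[0-9A-Z]{16}\b/,
  "private-key": /-----BEGIN (?:RSA |EC )?PRIVATE KEY-----/,
  "email-address": /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/,
};

// Return every category whose pattern matches the prompt text.
function classify(prompt: string): string[] {
  return Object.entries(detectors)
    .filter(([, pattern]) => pattern.test(prompt))
    .map(([category]) => category);
}

// classify("ship it with key AKIAIOSFODNN7EXAMPLE") -> ["aws-access-key"]
```
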

Evidence, retention, and ISO 42001

ISO 42001 expects documented AI data governance and controls you can show an assessor. Airentect contributes by generating governance-ready telemetry: policy triggers, risk categories, which tools carried the exposure, and an audit trail aligned to how your teams actually use AI.

Full prompt body retention is configurable. Many programmes start with classification metadata and policy outcomes rather than storing entire prompts; Enforce-style configurations can include auto-redaction so sensitive content does not leave the endpoint at all. Processing can be aligned to in-region and data-residency requirements, which matters for APRA-regulated financial services and for GDPR and Australian Privacy Act conversations running alongside the ISO work.
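
As an illustration of metadata-only retention, an audit record might carry fields like these. The shape is hypothetical, not Airentect's schema.

```typescript
// Sketch of a metadata-only audit record: the policy outcome and risk
// categories are retained, the prompt body is not. Fields are hypothetical.

interface PromptAuditEvent {
  timestamp: string;        // ISO 8601, e.g. "2026-04-18T09:30:00Z"
  tool: string;             // which LLM surface, e.g. "chatgpt"
  categories: string[];     // risk categories that triggered
  action: "allow" | "coach" | "redact" | "block";
  promptRetained: boolean;  // false when only classification metadata is kept
}

const example: PromptAuditEvent = {
  timestamp: new Date().toISOString(),
  tool: "chatgpt",
  categories: ["credentials"],
  action: "redact",
  promptRetained: false,
};
```
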

That is how you move from “we have an AI acceptable use policy” to “here is what we measured, what we blocked or redacted, and how we govern it over time.”

Why this complements your stack rather than replacing it

Airentect does not replace email DLP, CASB, SSE, or endpoint agents. It extends them to the AI prompt surface those tools were not built to inspect at text-input depth. In procurement conversations, the additive framing holds: you are closing a new gap, not ripping out and replacing a category that still does valuable work elsewhere.

Getting practical with your ISO 42001 roadmap

If you are mapping controls to engineering reality:

  • Start with the workflows that matter most: customer data, regulated lines of business, and high-risk model use cases.
  • Establish a baseline you can show internally and later to an assessor: volume analysed, high-risk detections, categories of exposure, and how policy evolved as you learned (see the sketch after this list).
  • Treat user experience as part of the control design: visibility first where needed, then graduated enforcement so productivity and governance move together.
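
As an illustration, a baseline report could be aggregated from metadata-only audit events along these lines. The event and report shapes are hypothetical, not a real reporting API.

```typescript
// Hypothetical baseline report built from metadata-only audit events.
// Shapes are illustrative, not a product API.

interface AuditEvent {
  categories: string[]; // risk categories triggered; empty when clean
}

interface BaselineReport {
  totalPromptsAnalysed: number;
  highRiskDetections: number;
  detectionsByCategory: Record<string, number>;
}

function buildBaseline(events: AuditEvent[]): BaselineReport {
  const detectionsByCategory: Record<string, number> = {};
  let highRiskDetections = 0;

  for (const event of events) {
    if (event.categories.length === 0) continue;
    highRiskDetections += 1;
    for (const category of event.categories) {
      detectionsByCategory[category] = (detectionsByCategory[category] ?? 0) + 1;
    }
  }

  return {
    totalPromptsAnalysed: events.length,
    highRiskDetections,
    detectionsByCategory,
  };
}
```
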

Momentum beats waiting for a “perfect” programme. ISO 42001 is easier to defend when it is backed by observable controls at the layer where generative AI risk is actually created: the prompt.