Compliance brief / international standard

ISO/IEC 42001. The procurement floor for AI vendors.

ISO/IEC 42001:2023 is the world's first AI management system standard. Published December 2023. Now appearing in Fortune 500 vendor security questionnaires. Recognized as a safe harbor by Colorado and others. Here's what it covers, the 38 controls in Annex A, and the path from zero to certified.

On this page

  01 What ISO/IEC 42001 is
  02 Why it matters in 2026
  03 The Plan-Do-Check-Act cycle
  04 The 38 Annex A controls
  05 Path to certification
  06 How Northbeams maps to this
  07 FAQ

01 / What ISO/IEC 42001 is

A management system standard for AI.

ISO/IEC 42001:2023 was published in December 2023 as the world's first international management system standard specifically for artificial intelligence. It defines the requirements for establishing, implementing, maintaining, and continuously improving an AI management system inside an organization.

The phrase "management system" is doing real work in that title. Like ISO 27001 for information security or ISO 9001 for quality, 42001 is not a list of things to do once. It is a structured, audited, continuously improving system that makes AI governance a recurring discipline rather than a one-off project.

It uses the Annex SL high-level structure, the same skeleton as ISO 27001, ISO 9001, and other ISO management system standards. That means a company already running an Annex-SL-shaped management system can absorb 42001 without rebuilding the chassis.

The standard is technology-neutral. It does not name specific AI technologies, vendors, or model architectures. It defines the policies, processes, and controls a responsible operator should run, leaving the implementation specifics to the organization.

02 / Why it matters in 2026

Procurement, not regulation, drove adoption.

ISO 42001 is voluntary. No law in 2026 forces a company to be certified. Adoption nonetheless jumped sharply from approximately 1% of businesses to 28% in a single year. Gartner forecasts that more than 70% of enterprises will adopt a formal AI governance standard by the end of 2026.

The reason is procurement. Fortune 500 buyers added "describe your AI governance" and "list your AI management certifications" to their standard vendor security questionnaires through 2025. Companies that cannot point to ISO 42001 (or a credible roadmap toward it) lose deals.

03 / The Plan-Do-Check-Act cycle

Four stages, one engine.

ISO 42001 runs on the Plan-Do-Check-Act (PDCA) cycle, the same engine used in ISO 27001 and ISO 9001. Each cycle through PDCA is one rotation of the management system.

Plan.

Define the AI policy and scope of the management system. Identify interested parties (employees, customers, regulators, partners). Run an AI risk assessment. Run an AI system impact assessment for each in-scope AI system. Set objectives. Allocate roles and resources.

Do.

Implement the Annex A controls that match your scope and risk profile. Train people. Operate the controls. Document evidence as the controls run. Track suppliers and third parties.

Check.

Monitor and measure. Run internal audits. Surface findings. Track incidents. Hold management review meetings on a defined cadence (annually at minimum, more often where appropriate).

Act.

Feed findings back into the management system. Update the policy. Adjust controls. Add new controls if needed. Schedule the next pass through PDCA.

Most companies new to ISO management systems underestimate the discipline of "Check" and "Act." They do the planning. They implement controls. Then they stop, and a year later the management system has decayed. The PDCA discipline is what keeps it alive.

04 / The 38 Annex A controls

Nine categories. Mapped to operational reality.

ISO 42001 Annex A defines 38 controls organized across nine categories. Below is the operational shape, not the verbatim text. Refer to the published standard for the exact control language.

A.2 Policies related to AI.

An AI policy that the leadership signs and reviews. The policy names the AI uses you allow, the ones you forbid, the ethical principles you apply, and how you publish all of this internally and (where appropriate) externally.

A.3 Internal organization.

Roles, responsibilities, and reporting lines for AI. Who decides whether to deploy a new AI system. Who reviews the impact assessments. Who is accountable to the board.

A.4 Resources for AI systems.

The data, tooling, compute, and human resources you allocate to AI. The supplier relationships behind them. The documentation that records what version of what was used in production.

A.5 Assessing impacts of AI systems.

The AI system impact assessment. Required for in-scope AI systems before deployment and at defined intervals afterward.

A.6 AI system life cycle.

How you design, develop, test, deploy, monitor, and retire AI systems. Includes versioning, change management, and decommissioning.

A.7 Data for AI systems.

Data quality, data governance, training data sources, data lineage. The control category most likely to interact with privacy law (CCPA, GDPR, the AB 2013 training-data disclosure).

A.8 Information for interested parties.

What you tell users, customers, employees, and regulators about your AI. Includes transparency disclosures, complaint channels, and the published portions of the AI policy.

A.9 Use of AI systems.

The end-to-end controls that govern operating AI in production. Monitoring, incident response, performance oversight, and human review where required.

A.10 Third-party and customer relationships.

Vendor due diligence, contract clauses, customer commitments. Where your AI is built on someone else's model, this category covers how you manage that supply chain.

Annex A controls are not all required. The standard expects you to apply controls that match your scope and risk. The Statement of Applicability documents which controls you've selected and why others were excluded. Auditors compare actual practice to the Statement of Applicability.
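In practice, a Statement of Applicability is a per-control record of what you selected and why. A minimal sketch in Python (control identifiers and justifications here are placeholders, not the standard's verbatim text):

```python
from dataclasses import dataclass

@dataclass
class SoAEntry:
    control: str        # Annex A control identifier (placeholder here)
    applicable: bool    # selected for this scope?
    justification: str  # why included, or why excluded

# Illustrative entries; a real SoA references the published control text.
soa = [
    SoAEntry("A.5", True, "Customer-facing AI in scope; impact assessments required."),
    SoAEntry("A.10", False, "No third-party model suppliers in scope this cycle."),
]

def excluded(entries: list[SoAEntry]) -> list[SoAEntry]:
    """The controls an auditor probes first: exclusions need a defensible reason."""
    return [e for e in entries if not e.applicable]
```

Auditors work from exactly this shape: each excluded control gets compared against what you actually do, so a thin justification is where Stage 2 findings tend to come from.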

05 / Path to certification

Six to twelve months for most companies.

The path from "we want to be 42001-certified" to "we have a certificate" runs through five gates.

  1. Gap analysis. Where you are today vs. where 42001 expects you to be. 4 to 6 weeks for most companies.
  2. Implementation. Build the policy, scope the system, run the impact assessments, implement the Annex A controls you've selected. 3 to 6 months.
  3. Internal audit. A trial run of the audit. Find your own gaps before the certifier does. 4 to 8 weeks.
  4. Stage 1 audit. The certification body reviews your documentation, scope, and readiness. 1 to 2 weeks of audit time.
  5. Stage 2 audit. The certification body audits operational evidence. If you pass, you're certified. 1 to 2 weeks of audit time.
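Taken at face value, the five stage durations above roughly bracket the headline estimate. A quick sanity check (assuming about 4.33 weeks per month and no scheduling gaps between gates):

```python
# (min_weeks, max_weeks) per gate, taken from the five steps above;
# the 3-to-6-month implementation stage is converted at ~4.33 weeks/month
stages = {
    "gap analysis":   (4, 6),
    "implementation": (13, 26),
    "internal audit": (4, 8),
    "stage 1 audit":  (1, 2),
    "stage 2 audit":  (1, 2),
}

lo = sum(a for a, _ in stages.values())  # 23 weeks
hi = sum(b for _, b in stages.values())  # 44 weeks
print(f"{lo / 4.33:.1f} to {hi / 4.33:.1f} months")  # ~5.3 to ~10.2 months
```

Scheduling lead time between gates (certification bodies book weeks or months out) is what pushes real-world totals toward the 6-to-12-month figure.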

After certification, you have annual surveillance audits and a full recertification audit every three years. The management system is alive, not a one-time event.

Companies already certified to ISO 27001 hit certification roughly 40% faster. The management system infrastructure (leadership commitment, internal audit, document control, management review, corrective-action workflow) is already running. ISO 42001 reuses the same chassis with new control categories. If you have ISO 27001, see the conversion guide →

06 / How Northbeams maps to this

Inventory, classification, signed evidence.

ISO 42001 audits land on three artifacts that most companies struggle to produce: a complete AI inventory, evidence that controls actually run, and signed retention. Northbeams answers all three across browser, desktop, and CLI.

A.4 / A.6 inventory

Continuous discovery across browser, desktop, and CLI.

Every AI tool your team uses appears in the dashboard, dated and categorized. Pre-mapped to ISO 27001 A.5.34 and A.8.10.

A.5 impact assessment input

Per-tool risk classification.

Categories include credentials, PII, source code, customer data, contracts, design IP. The data your AI impact assessor needs.

A.9 use-of-AI controls

Per-tool policy: sanctioned, sandboxed, or blocked.

State changes are timestamped and signed. The control demonstrably ran and someone owns it.
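Northbeams' internal format isn't public; the generic pattern behind "timestamped and signed" is an event record with a keyed signature, sketched here with Python's standard-library HMAC:

```python
import hashlib
import hmac
import json
import time

SECRET = b"rotate-me"  # illustrative key; a real system uses managed key storage

def record_state_change(tool: str, old: str, new: str, actor: str) -> dict:
    """One signed event: who moved a tool between policy states, and when."""
    event = {"tool": tool, "from": old, "to": new,
             "actor": actor, "ts": int(time.time())}
    payload = json.dumps(event, sort_keys=True).encode()
    event["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return event

def verify(event: dict) -> bool:
    """Recompute the signature over everything except 'sig' itself."""
    body = {k: v for k, v in event.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(event["sig"], expected)
```

Any edit to the event after the fact (a different state, a different actor) invalidates the signature, which is what makes the log useful as audit evidence.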

A.6 / 27001 A.12.4 retention

Immutable signed event log.

SHA-256 signed CSV exports. Tamper-evident retention. The auditor trusts the export the same way they trust 27001 A.12.4.
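The verification side of a SHA-256 export is simple enough that an auditor can run it themselves. A sketch of the check, assuming the digest ships alongside the CSV (the column layout here is invented for illustration):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Digest an exported CSV; compare against the digest shipped with it."""
    return hashlib.sha256(data).hexdigest()

export = b"timestamp,tool,event\n2026-01-09T10:00Z,SomeTool,sanctioned\n"
digest = sha256_hex(export)  # published with the export

# The auditor recomputes the digest; any edit to the bytes changes it.
assert sha256_hex(export) == digest
assert sha256_hex(export.replace(b"sanctioned", b"blocked")) != digest
```

This proves integrity (the file wasn't altered); proving who produced it needs a key, which is where the signed event log above the digest comes in.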

If you're scoping ISO 42001 and your audit team needs a defensible inventory and a signed audit log, Sentinel is the tier you'd buy. See the audit-ready evidence pack →

07 / FAQ

Common questions about ISO/IEC 42001.

What is ISO/IEC 42001?
ISO/IEC 42001:2023 is the world's first international management system standard for artificial intelligence. It defines the requirements for establishing, implementing, maintaining, and continuously improving an AI management system within an organization. Think of it as ISO 27001 for AI: a structured, certifiable framework for governing how a company develops or uses AI.
Is ISO 42001 mandatory?
ISO 42001 is voluntary. No law requires it. The pressure comes from procurement: Fortune 500 buyers increasingly require ISO 42001 certification (or a clear roadmap toward it) in their vendor security questionnaires. Several state AI laws, including Colorado's, recognize ISO 42001 as a safe-harbor framework that earns a rebuttable presumption of reasonable care.
How long does ISO 42001 certification take?
Most organizations take 6 to 12 months from kick-off to certificate. Companies already certified to ISO 27001 typically reach 42001 about 40% faster because the management-system infrastructure (leadership commitment, internal audit, document control, management review) is already in place.
What does the Plan-Do-Check-Act cycle mean in 42001?
Plan: define your AI policy, scope, and risk approach. Do: implement the controls and run them. Check: monitor performance, audit internally, surface issues. Act: feed findings back into management review, update the policy, repeat. PDCA is shared across most ISO management system standards.
How does ISO 42001 compare to NIST AI RMF?
ISO 42001 is certifiable, internationally recognized, and procurement-driven. NIST AI RMF is voluntary, US-government-published, and process-oriented. Many companies operate under both: NIST AI RMF as the operational playbook and ISO 42001 as the certifiable wrapper that satisfies enterprise procurement.
How many controls are in ISO 42001 Annex A?
ISO 42001 Annex A defines 38 controls organized across nine categories: AI policies, internal organization, AI resources, impact assessment, AI life cycle, data, information for interested parties, AI system use, and third-party relationships.
Do small companies need ISO 42001?
Most SMBs don't need it on day one. The trigger is usually a Fortune 500 prospect that asks for it in procurement. If you're under 500 employees and not selling to large enterprises that require it, NIST AI RMF is usually the better starting point.

Defensible audit log for your 42001 auditor. By Friday.

Free to discover. Pay to control. Sentinel ships the audit-ready evidence pack with one-click export. Pre-mapped to A.5, A.6, A.9, and the related ISO 27001 controls.