ISO/IEC 42001:2023 is the world's first AI management system standard. Published December 2023. Now appearing in Fortune 500 vendor security questionnaires. Recognized as a safe harbor by Colorado and others. Here's what it covers, the 38 controls in Annex A, and the path from zero to certified.
01 / What ISO/IEC 42001 is
ISO/IEC 42001:2023 was published in December 2023 as the world's first international management system standard specifically for artificial intelligence. It defines the requirements for establishing, implementing, maintaining, and continuously improving an AI management system inside an organization.
The phrase "management system" is doing real work in that title. Like ISO 27001 for information security or ISO 9001 for quality, 42001 is not a list of things to do once. It is a structured, audited, continuously improving system that makes AI governance a recurring discipline rather than a one-off project.
It uses the Annex SL high-level structure, the same skeleton as ISO 27001, ISO 9001, and other ISO management system standards. That means a company already running an Annex-SL-shaped management system can absorb 42001 without rebuilding the chassis.
The standard is technology-neutral. It does not name specific AI technologies, vendors, or model architectures. It defines the policies, processes, and controls a responsible operator should run, leaving the implementation specifics to the organization.
02 / Why it matters in 2026
ISO 42001 is voluntary. No law in 2026 forces a company to be certified. Adoption nonetheless jumped from roughly 1% of businesses to 28% in a single year. Gartner forecasts that more than 70% of enterprises will adopt a formal AI governance standard by the end of 2026.
The reason is procurement. Fortune 500 buyers added "describe your AI governance" and "list your AI management certifications" to their standard vendor security questionnaires through 2025. Companies that cannot point to ISO 42001 (or a credible roadmap toward it) lose deals.
Three more reinforcing trends in 2026:
03 / The Plan-Do-Check-Act cycle
ISO 42001 runs on the Plan-Do-Check-Act (PDCA) cycle, the same engine used in ISO 27001 and ISO 9001. Each cycle through PDCA is one rotation of the management system.
Plan. Define the AI policy and scope of the management system. Identify interested parties (employees, customers, regulators, partners). Run an AI risk assessment. Run an AI system impact assessment for each in-scope AI system. Set objectives. Allocate roles and resources.
Do. Implement the Annex A controls that match your scope and risk profile. Train people. Operate the controls. Document evidence as the controls run. Track suppliers and third parties.
Check. Monitor and measure. Run internal audits. Surface findings. Track incidents. Hold management review meetings on a defined cadence (annually at minimum, more often where appropriate).
Act. Feed findings back into the management system. Update the policy. Adjust controls. Add new controls if needed. Schedule the next pass through PDCA.
Most companies new to ISO management systems underestimate the discipline of "Check" and "Act." They do the planning. They implement controls. Then they stop, and a year later the management system has decayed. The PDCA discipline is what keeps it alive.
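The decay the "Check" and "Act" phases prevent is easy to see if you model one PDCA rotation as an evidence loop. A minimal sketch — the phase names come from the standard; the `Cycle` class, field names, and artifact strings are illustrative assumptions, not anything the standard prescribes:

```python
from dataclasses import dataclass, field

PHASES = ("Plan", "Do", "Check", "Act")

@dataclass
class Cycle:
    """One rotation of the management system (illustrative, not normative)."""
    year: int
    evidence: dict = field(default_factory=dict)

    def record(self, phase: str, artifact: str) -> None:
        """Attach a piece of evidence (a document, audit, review) to a phase."""
        if phase not in PHASES:
            raise ValueError(f"unknown phase: {phase}")
        self.evidence.setdefault(phase, []).append(artifact)

    def gaps(self) -> list:
        # Phases with no recorded evidence -- the Check/Act decay
        # described above shows up here first.
        return [p for p in PHASES if not self.evidence.get(p)]

cycle = Cycle(year=2026)
cycle.record("Plan", "AI risk assessment v1")
cycle.record("Do", "Annex A control rollout")
print(cycle.gaps())  # -> ['Check', 'Act']
```

A company that stops after "Do" sees exactly this: a cycle whose back half has no evidence, which is what an auditor will find a year later.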
04 / The 38 Annex A controls
ISO 42001 Annex A defines 38 controls organized across nine categories. Below is the operational shape, not the verbatim text. Refer to the published standard for the exact control language.
A.2 / Policies related to AI
An AI policy that the leadership signs and reviews. The policy names the AI uses you allow, the ones you forbid, the ethical principles you apply, and how you publish all of this internally and (where appropriate) externally.
A.3 / Internal organization
Roles, responsibilities, and reporting lines for AI. Who decides whether to deploy a new AI system. Who reviews the impact assessments. Who is accountable to the board.
A.4 / Resources for AI systems
The data, tooling, compute, and human resources you allocate to AI. The supplier relationships behind them. The documentation that says which version of which component was used in production.
A.5 / Assessing impacts of AI systems
The AI system impact assessment. Required for in-scope AI systems before deployment and at defined intervals afterward.
A.6 / AI system life cycle
How you design, develop, test, deploy, monitor, and retire AI systems. Includes versioning, change management, and decommissioning.
A.7 / Data for AI systems
Data quality, data governance, training data sources, data lineage. The control category most likely to interact with privacy law (CCPA, GDPR, the AB 2013 training-data disclosure).
A.8 / Information for interested parties
What you tell users, customers, employees, and regulators about your AI. Includes transparency disclosures, complaint channels, and the published portions of the AI policy.
A.9 / Use of AI systems
The end-to-end controls that govern operating AI in production. Monitoring, incident response, performance oversight, and human review where required.
A.10 / Third-party and customer relationships
Vendor due diligence, contract clauses, customer commitments. Where your AI is built on someone else's model, this category names how you're managing that supply chain.
Annex A controls are not all required. The standard expects you to apply controls that match your scope and risk. The Statement of Applicability documents which controls you've selected and why others were excluded. Auditors compare actual practice to the Statement of Applicability.
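Structurally, a Statement of Applicability is just a table mapping each Annex A control to a selected/excluded decision plus a justification. A minimal sketch — the control IDs and justification strings here are placeholders for illustration, not the standard's text:

```python
from dataclasses import dataclass

@dataclass
class SoAEntry:
    control_id: str      # an Annex A control reference (placeholder IDs below)
    selected: bool
    justification: str   # why it applies, or why it was excluded

soa = [
    SoAEntry("A.2.2", True,  "AI policy in scope; board-approved annually"),
    SoAEntry("A.10.3", False, "no third-party AI suppliers in scope this cycle"),
]

# Auditors compare actual practice to this document, so exclusions
# need a justification just as much as selections do.
excluded = [e.control_id for e in soa if not e.selected]
print(excluded)  # -> ['A.10.3']
```

The point of the structure: every control gets a row, and "not selected" is an explicit, justified decision rather than an omission.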
05 / Path to certification
The path from "we want to be 42001-certified" to "we have a certificate" runs through five gates.
After certification, you have annual surveillance audits and a full recertification audit every three years. The management system is alive, not a one-time event.
Companies already certified to ISO 27001 hit certification roughly 40% faster. The management system infrastructure (leadership commitment, internal audit, document control, management review, corrective-action workflow) is already running. ISO 42001 reuses the same chassis with new control categories. If you have ISO 27001, see the conversion guide →
06 / How Northbeams maps to this
ISO 42001 audits land on three artifacts that most companies struggle to produce: a complete AI inventory, evidence that controls actually run, and tamper-evident, signed retention of that evidence. Northbeams answers all three across browser, desktop, and CLI.
A.4 / A.6 inventory
Every AI tool your team uses appears in the dashboard, dated and categorized. Pre-mapped to ISO 27001 A.5.34 and A.8.10.
A.5 impact assessment input
Categories include credentials, PII, source code, customer data, contracts, design IP. The data your AI impact assessor needs.
A.9 use-of-AI controls
State changes are timestamped and signed. The control demonstrably ran and someone owns it.
A.6 / 27001 A.12.4 retention
SHA-256 signed CSV exports. Tamper-evident retention. The auditor trusts the export the same way they trust 27001 A.12.4.
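Retention of this kind can be checked with nothing but the standard library: recompute the digest of the exported CSV and compare it to the published value. A sketch under assumptions — the function names and the idea of a separately published digest are illustrative, not Northbeams' actual export format:

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Stream the file in chunks so large exports never load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_export(path: str, published_digest: str) -> bool:
    # A mismatch means the CSV changed after the digest was produced.
    # compare_digest avoids leaking where the strings diverge.
    return hmac.compare_digest(sha256_of(path), published_digest)
```

This is what "tamper-evident" buys the auditor: anyone holding the export and the signed digest can independently confirm the file is byte-for-byte what was retained.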
If you're scoping ISO 42001 and your audit team needs a defensible inventory and a signed audit log, Sentinel is the tier you'd buy. See the audit-ready evidence pack →
07 / FAQ
Free to discover. Pay to control. Sentinel ships the audit-ready evidence pack with one-click export. Pre-mapped to A.5, A.6, A.9, and the related ISO 27001 controls.