The EU AI Act (Regulation 2024/1689) is the most aggressive AI law globally. It uses risk tiers, applies extraterritorially the way the GDPR does, and carries penalties of up to €35 million or 7% of global annual turnover, whichever is higher. Annex III high-risk obligations apply from August 2, 2026, when the Act becomes generally applicable; high-risk AI embedded in regulated products (Article 6(1)) has until August 2, 2027. Here's what non-EU companies actually need to do.
01 / Who's covered
The EU AI Act assigns roles. Your obligations depend on which role you play with respect to a given AI system.
The Act is extraterritorial. A non-EU SaaS company whose product is used by an EU enterprise is a provider for that purpose. A US staffing firm whose AI screening tool ranks EU candidates is a deployer. A retailer in Berlin reselling a US-built AI tool is an importer. Headquarters location is not the deciding factor.
02 / The four risk tiers
The Act sorts AI systems into four tiers based on the risks they pose. Each tier has its own obligations.
Prohibited under Article 5. The list includes social scoring (not limited to public authorities in the final text), real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions), exploitation of vulnerabilities, manipulative subliminal techniques, predictive policing based solely on profiling, untargeted scraping of facial images for biometric databases, and emotion recognition in workplace and education contexts (with narrow medical and safety exceptions). These prohibitions took effect on February 2, 2025.
Two routes lead to high-risk classification. Annex III lists named high-risk uses: biometric identification, critical infrastructure, education, employment, essential services (including credit), law enforcement, migration and border control, administration of justice, and democratic processes. Article 6(1) treats AI systems that are safety components of regulated products (machinery, medical devices, toys, vehicles) as high-risk. High-risk obligations under Annex III apply from August 2, 2026; Article 6(1) obligations follow on August 2, 2027.
Limited-risk AI systems (chatbots, emotion recognition, biometric categorization, deepfakes, AI-generated content) carry transparency obligations under Article 50: users must know they are interacting with AI, AI-generated content must be labeled, deepfakes of real people must be disclosed.
Minimal risk is where most AI systems fall: spam filters, recommendation engines, AI-assisted productivity tools without consequential decisions. The Act encourages voluntary codes of conduct but imposes no specific obligation. Article 4 AI literacy still applies.
03 / Article 4 AI literacy
Article 4 is short, plainly written, and broadly applicable. It says providers and deployers of AI systems must take measures to ensure, to the best of their ability, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI on their behalf.
Three things make Article 4 the obligation companies most often miss: it took effect early, on February 2, 2025, alongside the prohibitions; it applies across every risk tier, including minimal-risk tools; and "to the best of their ability" is a standard you have to be able to evidence, not just assert.
Concretely, an Article 4 evidence file usually contains: a written AI literacy program, a record of which employees completed which training, the dated AI policy, the inventory of AI tools the program covers, and the named owner of the program. The Northbeams audit log shows which AI tools were actually used by which employees during the period in question, which is the inventory side of the evidence.
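To make the evidence file concrete, here is a minimal sketch of how the pieces could be tracked in code. The field names and structure are illustrative assumptions, not a prescribed Article 4 format and not a Northbeams schema.

```python
# Illustrative only: fields are assumptions, not a mandated Article 4 format.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Article4EvidenceFile:
    program_doc: str           # the written AI literacy program
    policy_dated: str          # the dated AI policy version
    owner: str                 # named owner of the program
    tool_inventory: list[str]  # AI tools the program covers
    completions: dict[str, date] = field(default_factory=dict)  # employee -> training date

evidence = Article4EvidenceFile(
    program_doc="policies/ai-literacy-program-2025.pdf",
    policy_dated="2025-03-01",
    owner="compliance@example.com",
    tool_inventory=["ChatGPT", "GitHub Copilot", "internal RAG assistant"],
    completions={"a.jensen": date(2025, 4, 12), "b.okafor": date(2025, 4, 19)},
)
```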
04 / High-risk system obligations
High-risk AI systems carry the heaviest obligations under the Act. The list of duties is long; for providers, the operational shape is roughly: a risk management system (Article 9), data governance for training and testing data (Article 10), technical documentation (Article 11), automatic record-keeping and logging (Article 12), transparency and instructions for use (Article 13), human oversight measures (Article 14), and accuracy, robustness, and cybersecurity (Article 15), plus conformity assessment and EU database registration before the system goes to market.
Deployers of high-risk AI also have specific duties (Article 26): assign human oversight, ensure input data is relevant, monitor operation, keep logs, and inform affected persons in some cases. Where the deployer is a public body or a private entity providing public services, a fundamental rights impact assessment is also required (Article 27).
05 / Key dates and timeline
Adopted: June 13, 2024 (Regulation 2024/1689)
Entered into force: August 1, 2024
Prohibited practices + Article 4 AI literacy: February 2, 2025
GPAI obligations + governance: August 2, 2025
Full applicability, including Annex III high-risk obligations: August 2, 2026
Article 6(1) regulated-product obligations: August 2, 2027
The phased timeline matters because each phase added a new bucket of obligations. The order also reveals the European Commission's priorities: first ban the worst practices and require AI literacy, then govern general-purpose AI models, then enforce the heavy high-risk system stack, then close the loop on regulated products.
06 / How Northbeams maps to this
Four EU AI Act articles drive most of the operational evidence load. Northbeams produces the data the auditor needs for each.
Article 4 AI literacy
The literacy program needs a defensible list of which tools the program covers. Northbeams produces it across browser, desktop, and CLI. By user, by date.
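A hypothetical shape for that output: a usage export with timestamp, user, tool, and surface columns, aggregated into a per-user, per-date inventory. The column names here are assumptions for illustration, not the actual export format.

```python
# Assumed columns: timestamp, user, tool, surface. Real export format may differ.
import csv
from collections import defaultdict

inventory = defaultdict(set)  # (user, ISO date) -> tools observed
with open("ai_usage_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        day = row["timestamp"][:10]  # ISO 8601 date prefix
        inventory[(row["user"], day)].add(row["tool"])

for (user, day), tools in sorted(inventory.items()):
    print(f"{user}  {day}  {', '.join(sorted(tools))}")
```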
Article 12 record-keeping
SHA-256 signed CSV exports. Tamper-evident retention. Pre-mapped to SOC 2 CC7.2 and ISO 27001 A.12.4.
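On the digest side, verification can be as simple as recomputing SHA-256 over the export and comparing it to the recorded value. This sketch shows the principle only; file names are hypothetical and the actual signing and retention scheme is the product's, not what's shown here.

```python
# Recompute the export's SHA-256 and compare to the recorded digest.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

recorded = open("export_2026-02.csv.sha256").read().strip()
assert sha256_of("export_2026-02.csv") == recorded, "export does not match recorded digest"
```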
Article 13 transparency
What each AI tool does, what data category it touches, and which policy applies. The information you'd cite in instructions for use.
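One plausible shape for that register, with made-up tool entries, data categories, and policy names:

```python
# Hypothetical register; categories and policy names are examples only.
TOOL_REGISTER = {
    "ChatGPT":        {"data_category": "internal text", "policy": "genai-usage-v2"},
    "GitHub Copilot": {"data_category": "source code",   "policy": "code-assist-v1"},
}

for tool, meta in TOOL_REGISTER.items():
    # The kind of line you'd cite in instructions for use.
    print(f"{tool}: touches {meta['data_category']}, governed by {meta['policy']}")
```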
Article 14 human oversight
Human-set state changes are timestamped and signed. The override path is in the dashboard, not buried in a vendor backend.
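As a sketch of the principle, a state-change event can be timestamped and HMAC-signed at write time so later tampering is detectable. Key handling and field names below are assumptions, not the product's actual scheme.

```python
# Sketch: timestamp a human override and sign it with an HMAC.
import hashlib, hmac, json, time

SIGNING_KEY = b"replace-with-a-managed-secret"  # assumption: key comes from a KMS

def record_override(user: str, tool: str, new_state: str) -> dict:
    event = {"ts": int(time.time()), "user": user, "tool": tool, "state": new_state}
    payload = json.dumps(event, sort_keys=True).encode()
    event["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return event

print(record_override("a.jensen", "ChatGPT", "blocked"))
```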
For the article-by-article checklist, the EU AI Act readiness PDF walks through what evidence applies to non-EU SMBs and where Northbeams fits. Free.