Compliance brief / US state law

California AI laws. Five at once.

California has the largest stack of state AI laws in the US. SB 53 governs frontier developers. AB 2013 governs generative AI providers. SB 942 governs AI-generated content. AB 489 governs healthcare AI. SB 243 governs companion chatbots. Here's what each one requires and which one applies to you.


On this page

01 Who's covered
02 SB 53: frontier AI
03 AB 2013: training data
04 SB 942: watermarks
05 AB 489: healthcare AI
06 SB 243: companion chatbots
07 How Northbeams maps to this
08 FAQ

01 / Who's covered

Pick the law that matches what you actually do.

California's stack is vertical, not horizontal. Each law targets a specific AI activity. Which ones apply to you depends entirely on what you do.

If none of those activities describes your business, California's AI stack is mostly background context. Your bigger California exposure is probably CCPA + the Colorado AI Act + the EU AI Act, depending on where your customers and employees are.

02 / SB 53: California Transparency in Frontier AI Act

Risk frameworks, incident reporting, whistleblower protection.

SB 53 is California's frontier AI law. It targets developers training the largest foundation models, defined by compute and spend thresholds. Most companies are not in scope.

If you are in scope, SB 53 requires you to:

  - publish a risk framework describing how you assess and mitigate catastrophic risks from your frontier models;
  - report critical safety incidents to the state;
  - maintain whistleblower protections for employees who raise safety concerns.

SB 53's enforcement leans on disclosure and documentation rather than approval. The California AG can pursue civil penalties for material misstatements or omissions in the published risk framework.

03 / AB 2013: Training-data transparency

Publish what your model learned from.

AB 2013 requires generative AI service providers to publish, on their public website, a summary of the data used to train their generative AI systems. The summary must describe data sources at a category level (for example, "publicly available web data," "licensed text from publishers," "user-generated content under identified terms"), the rough timeframes of collection, whether copyrighted material was included, and the data-cleaning practices applied.

The disclosure does not require revealing exact datasets or proprietary methods. It does require a summary detailed enough that a reader can understand at a category level what the model was trained on.
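To make the category-level granularity concrete, here is a minimal sketch of what such a summary could look like as structured data, with a completeness check. All field names, the provider, and the model name are hypothetical illustrations; AB 2013 does not prescribe a schema or wire format.

```python
# Hypothetical category-level training-data summary in the spirit of
# AB 2013. Every field name below is an assumption, not statutory text.
TRAINING_DATA_SUMMARY = {
    "provider": "ExampleAI, Inc.",        # hypothetical provider
    "system": "example-gen-v2",           # hypothetical model name
    "data_sources": [                     # category level only
        "publicly available web data",
        "licensed text from publishers",
        "user-generated content under identified terms",
    ],
    "collection_period": "2019-01 to 2024-06",
    "includes_copyrighted_material": True,
    "cleaning_practices": ["deduplication", "PII redaction"],
}

# Fields a reviewer might check for before publishing (illustrative).
REQUIRED_FIELDS = {
    "data_sources",
    "collection_period",
    "includes_copyrighted_material",
    "cleaning_practices",
}

def missing_fields(summary: dict) -> set:
    """Return required disclosure fields absent from a summary."""
    return REQUIRED_FIELDS - summary.keys()
```

A downstream buyer could run the same check against a vendor's published summary when building a risk file.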

Most SMBs are downstream consumers of generative AI, not providers. AB 2013 affects you indirectly when your vendor publishes its summary; that's relevant input for your own risk assessments.

04 / SB 942: California AI Transparency Act

Watermarks, labels, detection tooling.

SB 942 requires generative AI providers serving more than 1 million California users monthly to:

  - embed disclosures in AI-generated content, visible (a label) or invisible (a cryptographic watermark) depending on the medium;
  - offer a free, publicly available AI-detection tool.

Smaller services have a notification obligation, asking users to disclose when they share AI-generated content. The watermark machinery falls primarily on the largest providers.

If you republish AI-generated content (advertising agencies, content marketplaces, news organizations using AI assistance), the obligations cascade. Your contracts with AI vendors should require them to deliver content with the SB 942 disclosures intact.
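For republishers, the practical step is an intake gate: refuse to publish vendor-delivered assets whose disclosure was stripped in transit. A minimal sketch, assuming a hypothetical metadata shape (SB 942 does not mandate specific keys, so `ai_generated_label` and `provenance_watermark` are illustrative names):

```python
# Hypothetical intake check for republishers of AI-generated content.
# Metadata keys are assumptions, not an SB 942-defined format.

def has_intact_disclosure(asset: dict) -> bool:
    """Accept an asset only if it carries a visible label or an
    embedded (invisible) provenance disclosure."""
    meta = asset.get("metadata", {})
    visible = bool(meta.get("ai_generated_label"))
    embedded = bool(meta.get("provenance_watermark"))
    return visible or embedded

def triage(assets: list) -> tuple:
    """Split incoming assets into publishable and flagged-for-vendor."""
    publishable = [a for a in assets if has_intact_disclosure(a)]
    flagged = [a for a in assets if not has_intact_disclosure(a)]
    return publishable, flagged
```

Flagged assets go back to the vendor under the contract term described above, rather than into the publishing queue.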

05 / AB 489: Healthcare AI disclosures

Clinical AI must say what it is.

AB 489 covers AI used in healthcare. The core obligations:

  - disclose clearly that communications and advice are AI-generated, not from a licensed clinician;
  - avoid names, titles, or language that imply the AI is a licensed healthcare provider.

Healthcare is the domain with the highest overlap with other AI laws (HIPAA at the federal level, the Colorado AI Act for "consequential decisions" in healthcare, and the EU AI Act for healthcare AI in EU markets). Healthcare deployers running US-state-level audits often need an evidence pack that satisfies all four at once.

06 / SB 243: Companion chatbot safety

Safety duties for chatbots that act like friends.

SB 243 covers AI companion chatbots: products marketed or used as friends, partners, therapists, or persistent emotional companions. Operators of these products must:

  - maintain protocols for handling self-harm and suicide content;
  - periodically remind users that the chatbot is not a human;
  - apply age-appropriate content controls for minors;
  - report incidents and publish a public safety report.

SB 243 sits next to several other US state laws (notably New York's AI companion safeguards) that focus on AI-companion safety. If your product has any companion-style affordance and any California users, take this one seriously even if "companion chatbot" was not your design intent.

07 / How Northbeams maps to this

Inventory, classification, signed evidence.

California's AI stack assumes you know which AI tools are in use, what categories of data they touch, and how you're enforcing per-tool policy. Most companies don't. Northbeams answers those three questions across browser, desktop, and CLI, then produces the audit-ready evidence pack the AG and CPPA expect.

AB 2013 vendor risk

Categorize every generative AI tool in use.

Northbeams discovers and labels each generative AI tool your team touches. Cross-reference with the vendor's published training-data summary for your risk file.

SB 942 republish risk

Audit-ready record of AI use in content workflows.

If your team uses AI to draft, edit, or generate content, Northbeams logs which tools were involved and when, by user.
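The useful unit of evidence here is a per-event record: who used which tool, for what action, and when. A minimal sketch of such a record, with field names that are assumptions for illustration, not Northbeams' actual log schema:

```python
# Illustrative audit-log record for AI-assisted content work.
# Field names are hypothetical, not Northbeams' actual schema.
from datetime import datetime, timezone

def log_ai_use(user: str, tool: str, action: str) -> dict:
    """Build one audit record for an AI-assisted content event."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "action": action,  # e.g. "draft", "edit", "generate"
    }
```

Records like this, accumulated per user and per tool, are what an SB 942 republish-risk review would pull from.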

AB 489 healthcare evidence

Per-tool policy, classification, signed log.

Block AI tools that touch PHI from non-sanctioned destinations. Sandbox the ones that are clinically useful. Allow the ones explicitly cleared. The signed log is the evidence file.
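The block / sandbox / allow split above can be sketched as a default-deny policy lookup. This is a hypothetical model for illustration, with made-up tool names; it is not Northbeams' implementation.

```python
# Minimal sketch of a per-tool healthcare AI policy (block / sandbox /
# allow). Tool names and the policy model are hypothetical.
POLICY = {
    "cleared-emr-assistant": "allow",    # explicitly cleared
    "clinical-summarizer": "sandbox",    # clinically useful, contained
}

def decide(tool: str, touches_phi: bool) -> str:
    """Default-deny: unknown tools that touch PHI are blocked;
    unknown tools that don't are sandboxed pending review."""
    action = POLICY.get(tool)
    if action:
        return action
    return "block" if touches_phi else "sandbox"
```

Each decision, signed and timestamped, is one line in the evidence file described above.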

SB 53 frontier safe-harbor

Document AI inventory used in training pipelines.

Northbeams tracks AI agents on developer laptops (Claude Code, Aider) and desktop Mac and PC apps. The audit log shows which agents touched proprietary data during model training.

If your company touches California and needs a defensible answer for the AG or CPPA, Sentinel is the tier you'd buy. See the audit-ready evidence pack →

08 / FAQ

Common questions about California's AI laws.

Which California AI laws apply to my company?
It depends on what you do. SB 53 applies to frontier developers training the largest models. AB 2013 applies to generative AI providers. SB 942 applies to anyone publishing AI-generated content at scale. AB 489 applies to healthcare AI providers and the clinicians who use them. SB 243 applies to operators of AI companion chatbots, especially those used by minors. Most companies fall under one or two.
What is a "frontier developer" under SB 53?
SB 53 defines frontier developers using compute thresholds. Roughly: companies training models above a defined floating-point operations (FLOP) threshold or spending above a defined dollar amount on training compute. Most SMBs are not frontier developers. If you're not training your own models from scratch, you're almost certainly not in scope for SB 53.
Do these laws apply to companies outside California?
California's AI laws follow the same extraterritorial pattern as CCPA and the EU AI Act: if your AI tools or content reach California consumers, you can be in scope regardless of headquarters location. The five laws differ on specifics; most reach AI deployed to or used by California residents.
What does the SB 942 watermark requirement actually mean?
SB 942 requires generative AI providers serving more than 1 million California users monthly to embed disclosures in AI-generated content and to offer detection tooling. The disclosures can be visible (a label) or invisible (a cryptographic watermark) depending on the medium. Smaller services have a notification obligation but lighter watermark requirements.
What does SB 243 require for companion chatbots?
SB 243 imposes safety duties on operators of AI companion chatbots, particularly when minors are users. Required: protocols for self-harm and suicide content, periodic reminders that the chatbot is not a human, age-appropriate content controls, and incident reporting. Operators must also publish a public safety report.
How do California's AI laws interact with the Colorado AI Act?
They cover different layers. Colorado is consumer-protection-focused and applies to deployers of high-risk AI. California's stack is more vertical: each law targets a specific AI activity (training, watermarking, healthcare, chatbots, frontier development). Companies in scope often need to comply with both, plus Texas TRAIGA, plus the EU AI Act.
Who enforces California's AI laws?
Primary enforcement runs through the California Attorney General. The California Privacy Protection Agency (CPPA) has overlapping authority where AI use intersects with personal information. SB 942 gives some enforcement responsibility to the California Department of Technology. Civil penalties apply per violation; some laws also provide for private rights of action.

Defensible answer for the California AG. By Friday.

Free to discover. Pay to control. Sentinel ships the audit-ready evidence pack with one-click export.