Introduction

TL;DR

New York’s governor signed the RAISE Act, strengthening AI safety regulation for frontier AI model developers. It requires large AI developers to publish safety protocol information, report qualifying safety incidents within 72 hours, and submit to oversight via a new office within the Department of Financial Services (NYDFS). Multiple sources report an effective date of 2027-01-01, giving companies a 2026 runway to operationalize compliance.

What the RAISE Act Requires

Safety protocol disclosures (publish “safety protocols” information)

New York’s official announcement states that covered large AI developers must create and publish information about their safety protocols.

Why it matters: Publishing a protocol is not a one-time PDF. It becomes an auditable statement that should match real controls, tests, and governance.

72-hour incident reporting

The law is described as requiring developers to report safety incidents within 72 hours of determining an incident occurred.

Why it matters: 72 hours is operationally tight. You’ll need pre-defined incident taxonomies, evidence capture, and a “determination” workflow that is consistent and time-stamped.
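Because the clock is widely described as starting at the moment of determination, it helps to compute the filing deadline mechanically from a time-stamped determination record. A minimal sketch, assuming the 72-hour window runs from the determination timestamp (confirm against the final statutory text):

```python
from datetime import datetime, timedelta, timezone

# Reporting window as publicly described; verify against the enacted text.
REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(determined_at: datetime) -> datetime:
    """Return the latest filing time, measured from the time-stamped
    determination that a safety incident occurred."""
    if determined_at.tzinfo is None:
        # On-call tooling should refuse naive timestamps to avoid TZ ambiguity.
        raise ValueError("determination timestamps must be timezone-aware")
    return determined_at + REPORTING_WINDOW

determined = datetime(2025, 12, 21, 10, 30, tzinfo=timezone.utc)
print(reporting_deadline(determined).isoformat())  # 2025-12-24T10:30:00+00:00
```

Rejecting naive timestamps up front is a deliberate choice: a deadline computed in the wrong timezone can silently eat hours of your window.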

Oversight office within NYDFS

The announcement also states the Act creates an oversight office within NYDFS to assess large frontier developers and publish annual reports.

Why it matters: NYDFS-style oversight tends to emphasize documentation, accountable owners, and repeatable controls, closer to regulated-industry governance than to voluntary ethics commitments.

Enforcement and penalties (as publicly described)

New York’s announcement says the Attorney General can bring civil actions for failure to submit required reports or for making false statements, with penalties up to $1M for a first violation and $3M for subsequent violations.

Why it matters: Public-facing disclosures must be vetted like legal statements, because misleading omissions can become an enforcement vector.

Coverage and effective date: what’s clear, and what varies by summary

Many sources report the Act takes effect on 2027-01-01.

However, summaries differ on how “large developers” are defined (e.g., revenue-based vs compute/FLOPs-based descriptions). New York’s official release also references “agreed-upon chapter amendments,” which may explain why secondary summaries emphasize different thresholds.

Why it matters: Your compliance workstream starts with scoping. In 2026, prioritize confirming coverage using the final statutory text and any implementing guidance, then map your training/deployment footprint accordingly.
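A scoping exercise can be captured as a simple screening check that flags your organization for legal review if either summarized threshold style might apply. The thresholds below are illustrative placeholders, not the statute’s actual figures; the real numbers must come from the final text:

```python
from dataclasses import dataclass

# Placeholder values for illustration only; the statutory definition of a
# "large developer" must be taken from the enacted text and any guidance.
REVENUE_THRESHOLD_USD = 100_000_000   # hypothetical revenue-based test
TRAINING_COMPUTE_FLOPS = 1e26         # hypothetical compute/FLOPs-based test

@dataclass
class DeveloperProfile:
    annual_revenue_usd: float
    max_training_run_flops: float

def possibly_covered(p: DeveloperProfile) -> bool:
    """Conservative screen: flag for legal review if either summarized
    threshold style could apply to this developer."""
    return (p.annual_revenue_usd >= REVENUE_THRESHOLD_USD
            or p.max_training_run_flops >= TRAINING_COMPUTE_FLOPS)
```

Using an OR across both threshold styles is intentionally conservative: until coverage is confirmed, you want false positives routed to counsel, not false negatives.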

Practical implementation: making 72-hour reporting real

Minimal internal incident report schema (example)

{
  "incident_id": "INC-2025-12-XYZ",
  "determined_at": "2025-12-21T10:30:00Z",
  "reported_at": "2025-12-22T08:00:00Z",
  "model": {"name": "frontier-model-vX", "version": "2025.12.01"},
  "incident_type": ["unauthorized_access", "model_weight_exfiltration"],
  "summary": "Short and plain statement describing the incident",
  "evidence_uris": ["s3://.../logs", "s3://.../eval"],
  "containment_actions": ["disabled_endpoint", "rotated_keys"]
}

Why it matters: A stable schema reduces friction across legal, security, and ML teams and makes on-call execution repeatable.
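With a stable schema, the SLA check itself can be automated. A minimal sketch, assuming the `determined_at` and `reported_at` fields from the example schema above and a 72-hour window (confirm against the final text):

```python
import json
from datetime import datetime, timedelta

def within_sla(incident_json: str, window_hours: int = 72) -> bool:
    """Check whether reported_at falls within the reporting window
    measured from determined_at, using the example schema's fields."""
    rec = json.loads(incident_json)
    # The schema uses ISO 8601 with a trailing "Z"; normalize for fromisoformat.
    determined = datetime.fromisoformat(rec["determined_at"].replace("Z", "+00:00"))
    reported = datetime.fromisoformat(rec["reported_at"].replace("Z", "+00:00"))
    return reported - determined <= timedelta(hours=window_hours)

record = '{"determined_at": "2025-12-21T10:30:00Z", "reported_at": "2025-12-22T08:00:00Z"}'
print(within_sla(record))  # True: reported about 21.5 hours after determination
```

Running this check in CI against sample records, and in production against real ones, turns the SLA from a policy statement into a monitored control.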

Conclusion

  • New York’s RAISE Act centers on safety protocol disclosures, 72-hour incident reporting, and NYDFS-based oversight.
  • Multiple sources point to an effective date of 2027-01-01, making 2026 a critical implementation year.
  • Because coverage thresholds are summarized differently across sources, confirm applicability using final text/guidance and then build a determinable, time-stamped incident workflow.

Summary

  • New York signed the RAISE Act to regulate frontier AI model safety and transparency.
  • Core duties: publish safety protocol information; report safety incidents within 72 hours; comply with NYDFS oversight.
  • Prepare now: define an incident taxonomy, evidence capture, and a “determination” workflow that can meet a 72-hour SLA.

#RAISEAct #AISafety #NYDFS #AIGovernance #IncidentResponse #AICompliance #FrontierAI #ModelRisk #Policy #SB53
