OpenAI has publicly testified in favor of Illinois Senate Bill 3444, the Artificial Intelligence Safety Act, which would shield AI developers from liability for catastrophic harms including mass casualties and billion-dollar damages. The bill, currently under consideration in the Illinois legislature as of April 10, 2026, has drawn significant criticism from AI policy experts and the public.
Bill Defines Critical Harms as 100+ Deaths or $1 Billion in Damage
SB 3444 defines "critical harms" as death or serious injury of 100 or more people, at least $1 billion in property damage, or bad actors using AI to develop chemical, biological, radiological, or nuclear weapons. The legislation applies specifically to "frontier models"—AI systems trained using more than $100 million in computational costs, a threshold that would cover OpenAI, Google, xAI, Anthropic, and Meta.
Safety Reports Required But Quality Standards Absent
Under the bill's provisions, AI developers are shielded from liability for critical harms if they meet two conditions: they did not intentionally or recklessly cause the incident, and they have published safety, security, and transparency reports on their website. Critically, the bill establishes no quality standards for these safety reports—companies simply need to publish them.
AI policy experts told WIRED that SB 3444 represents "a more extreme measure than bills OpenAI has supported in the past." Critics argue the legislation gives AI companies "near total legal immunity" as long as they have "a safety plan, no matter how bad the plan is."
OpenAI Testimony Marks New Stance on Catastrophic Risk Liability
OpenAI representatives testified in favor of the bill at Illinois legislative hearings, marking the company's first public support for limiting its own liability in catastrophic scenarios. The testimony stakes out a significant policy position as AI systems grow more powerful and potentially more dangerous.
Public Opposition Reaches 90% in Illinois Polling
In polling, 90% of Illinois residents said they oppose exempting AI companies from liability for harms caused by their products. This overwhelming public opposition contrasts sharply with the bill's support from major AI laboratories.
The Hacker News discussion of the bill generated 317 points and 218 comments, with one commenter noting the legislation amounts to "regulatory capture in real time." Others highlighted the disconnect between frontier AI companies seeking liability protection and the potential scale of harms their systems could cause.
Key Takeaways
- Illinois SB 3444 would shield AI labs from liability for critical harms including 100+ deaths or $1 billion in damages
- OpenAI testified in favor of the bill, marking its first public support for limiting its own liability for catastrophic risks
- The bill requires safety reports but sets no quality standards for their content
- 90% of Illinois residents oppose AI company liability exemptions
- Policy experts characterize the bill as more extreme than previous AI regulation proposals