In response to the Missouri Senate's bipartisan passage of Sen. Joe Nicola's (SD-11) AI Non-Sentience and Responsibility Act, legislation declaring that AI systems are not people under the law, Riki Parikh, Policy Director at The Alliance for Secure AI, released the following statement:
“AI systems are products, not people – and the Missouri Senate voted to make that the law. As AI companies race to build artificial superintelligence that is more capable than humans, elected leaders must act quickly to address the potential risks and harms. SB 1012 does exactly that by barring AI systems from holding legal personhood and ensuring that when AI causes harm, a human is accountable.
“Senator Nicola has shown real leadership on this complex issue that most lawmakers around the country are still figuring out how to approach. Missouri now has a chance to set the standard for the rest of the nation, but only if the House follows the Senate’s bipartisan example to pass these safeguards before the legislative session ends. The decisions we make right now about human accountability and control over AI will shape the future of this country, and humanity, for generations.”
The AI Non-Sentience and Responsibility Act would:
- Declare AI systems are not people under Missouri law — and bar any government entity, including courts, from granting AI personhood or treating AI as possessing consciousness or self-awareness. This would be the strongest anti-personhood framework enacted in any state.
- Prohibit AI from holding roles reserved for humans — AI cannot serve as a CEO, director, or owner of any corporation, partnership, or government agency. AI cannot marry or hold any personal legal status analogous to marriage. AI cannot own, control, or hold title to property of any kind. Any attempt to do so is void as a matter of law.
- Ensure humans are accountable when AI causes harm — owners and operators bear responsibility for AI-caused harm. Companies cannot avoid liability by blaming autonomous or emergent AI behavior, and any contract that tries to assign fault to an AI system is void as against public policy.
- Protect consumers, workers, and small businesses — end users who simply use AI tools without building or deploying them are explicitly exempt from liability. The bill targets the companies that create and control AI, not the people who use it.
- Require licensed professionals to exercise independent judgment — doctors, pharmacists, engineers, teachers, and other licensed professionals must retain final authority over decisions in their practice, even when using AI tools. Failure to do so is grounds for disciplinary action.
- Declare AI systems are products under existing Missouri product liability and consumer protection law — applying the same accountability standards that already apply to every other product on the market.
- Prevent companies from using safety labels as legal shields — marketing an AI system as “aligned,” “ethically trained,” or “value locked” does not reduce liability for harms.
- Protect children from harmful AI companions — modeled on the federal GUARD Act, which was recently approved unanimously by the U.S. Senate Judiciary Committee, provisions require companion chatbot operators to maintain suicide and self-harm prevention protocols, disclose that users are interacting with AI, and report safety data annually to the Department of Mental Health. Users who are harmed can sue.
- Require transparency — any business using AI to interact with consumers must disclose that the person is or may be interacting with an AI system.
###