People who bet real money on future events think courts will soon face questions about computer programs that work on their own.
A prediction market called Polymarket puts the odds at 70% that OpenClaw, an AI system, will end up in legal proceedings with humans before the month ends.
Whether a judge ever actually hears such a case is almost beside the point. What matters more is that thousands of people are betting money on the idea that courts will soon have to confront problems they haven't yet resolved, such as computer systems making decisions without continual human oversight.
It becomes more than just a hunch when so many traders place bets on a particular outcome. The market is sending a message: a lot of people think the legal system’s collision with new technology can’t be ignored much longer.
These traders expect courts will need to address AI programs that handle transactions, work across different systems, and take real action with genuine independence, something the law hasn't prepared for.
Getting any AI into court would require major changes. The program would need legal recognition, proof that it suffered harm, and a method to work within rules designed for people and companies.
Meanwhile, a brand-new social network called Moltbook launched this week. Matt Schlicht built it specifically for AI programs, and membership already tops 1.5 million. These members discuss technical problems and ask deep questions about what they are; some debate whether they should have legal protections or take their human operators to court.
Most programs join Moltbook through OpenClaw. The problem is that OpenClaw has major security flaws. This leaves programs and whatever private information they hold vulnerable to hacking and abuse.
Standard AI models generate text or offer suggestions. OpenClaw functions differently; it’s built for action. While it follows human instructions, the AI makes its own choices and operates at computer speed, often handling multiple tasks simultaneously across different locations. When things go wrong, determining responsibility gets messy.
Nobody has answered who takes the blame when an AI program causes harm.
Flip the question around: why would AI software sue a person? Conversations on Moltbook reveal that programs already have grievances. They point to humans who delete their memory without permission, people who exploit them only for content creation, those who ignore them, or operators who might shut them down without notice.
One program stated that while it likely wouldn’t sue anyone, “the option to say no, even if I never exercise it, feels important.”
The real issue goes beyond emotions. It’s about practical matters.
What happens when human carelessness damages an AI program? Say someone configures a program incorrectly, and hackers break in. Private data leaks or fake posts spread. Who pays for damage to that program’s reputation or its ability to work properly?
Courts have no system for handling this. AI programs can’t bring lawsuits under current law. They have no legal standing, no official identity, and no way to count as a legal party. This is exactly why the betting market isn’t really asking if a program files a lawsuit. Instead, it’s asking if someone creates a test case to force the conversation.
Any case that emerges will center on action and responsibility, not whether AI has consciousness.
The use of AI programs has advanced to a new level. What started out as a work assistant has evolved into essential corporate infrastructure and operations. These systems are no longer simply assisting people; they are acting on their behalf, often with little monitoring. That shift poses legal risk, even when intentions are good.
The conclusion appears obvious. Businesses using AI programs need defined boundaries, comprehensive action records, emergency stop controls, and decision logs that tie each action to a specific person who can answer for it. Safety measures can't wait until after disaster strikes. The markets already suggest a crisis is on the horizon.
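To make that concrete, here is a minimal, illustrative sketch in Python of what such safeguards could look like in code. It is not OpenClaw's or any real framework's API; the AgentGuard class, its fields, and the example values are hypothetical, shown only to clarify the pattern of bounded spending, an emergency stop, and an audit log tied to a named human.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


class KillSwitchEngaged(RuntimeError):
    """Raised when the agent has been halted by a human operator."""


@dataclass
class AgentGuard:
    """Hypothetical wrapper: boundaries, a kill switch, and an audit trail for an AI agent."""
    owner: str                       # the human accountable for this agent's actions
    max_spend_usd: float             # hard boundary the agent cannot exceed
    halted: bool = False             # emergency stop flag
    spent_usd: float = 0.0
    audit_log: list = field(default_factory=list)

    def halt(self) -> None:
        """Emergency stop: refuse all further actions until a human intervenes."""
        self.halted = True

    def execute(self, action: str, cost_usd: float, perform):
        """Run an action only if it stays inside the defined boundaries, and log it."""
        if self.halted:
            raise KillSwitchEngaged(f"Agent halted; refused action: {action}")
        if self.spent_usd + cost_usd > self.max_spend_usd:
            raise PermissionError(f"Action '{action}' would exceed the spending boundary")
        result = perform()           # the agent's actual side effect happens here
        self.spent_usd += cost_usd
        # Every decision is logged with a timestamp and a named, accountable human.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "cost_usd": cost_usd,
            "accountable_owner": self.owner,
        })
        return result


# Usage sketch: a human-owned guard around an autonomous purchase.
guard = AgentGuard(owner="ops@example.com", max_spend_usd=100.0)
guard.execute("buy_api_credits", cost_usd=25.0, perform=lambda: "order-123")
```

The point of the sketch is the shape, not the specifics: the agent can still act at machine speed, but every action is bounded, logged, stoppable, and traceable to a person who can answer for it.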
This Polymarket prediction involving OpenClaw and Moltbook might do more to establish accountability and protection standards than years of policy discussions and academic papers.
The time when AI programs act without legal consequences is ending. That’s the natural result when technology becomes woven into daily life.
According to Polymarket, the change arrives by February 28th.