(The Epoch Times)—Whether it’s driving a car or summarizing a doctor’s appointment, autonomous artificial intelligence (AI) systems can make decisions that cause real harm, rapidly changing the landscape of liability.
Attorneys and AI developers say U.S. laws must keep up with the technology as debate persists over who’s responsible when things go wrong.
Lawmakers are looking to close the accountability gap by shifting burdens and expanding who can be held accountable when autonomous AI systems fail. Autonomous models are inherently less predictable than non-autonomous AI systems.
In the United States, a legal patchwork is slowly forming. In 2024, Colorado passed a law, Consumer Protections for Artificial Intelligence, requiring those deploying “high-risk” AI systems to protect consumers from “reasonably foreseeable risks” starting Feb. 1, 2026.
Since 2023, New York has enforced a law that prohibits employers and employment agencies from using automated employment decision tools unless they have undergone a bias audit within one year of the tool’s use. The results of the audit must be made public.
Presently, there is no federal legal framework that clearly establishes accountability when autonomous AI systems fail, prompting some legal experts to call for greater transparency.
“For centuries, legal frameworks for assigning liability have relied on well-established principles designed for a human-centric world,” Pavel Kolmogorov, founder and managing attorney at Kolmogorov Law, told The Epoch Times.
Kolmogorov said that cases of negligence require proof of a breach of “duty of care.” Liability related to products holds manufacturers responsible for defects or design flaws. However, both scenarios assume there’s clear human oversight and relatively static, predictable tools.
“Autonomous AI systems fundamentally disrupt this paradigm,” Kolmogorov said. “Their defining characteristics—complexity, operational autonomy, and the capacity for continuous learning—create profound challenges for applying these traditional legal concepts.”
He also said AI’s “black box” problem, where even developers can’t fully explain the specific reasoning behind an AI’s decision, makes it extraordinarily difficult to pinpoint a specific breach or defect in the traditional sense.
Kolmogorov gave an example of a legal quagmire: “When an autonomous vehicle makes a fatal error, was it due to a flaw in its original code, a limitation in its training data, an unpredictable emergent behavior learned over time, or some combination thereof?”
Autonomy in Action
The idea of AI-driven cars running people down in the streets is no longer a sci-fi concept. The landmark 2018 case involving a self-driving Uber vehicle that struck and killed a pedestrian in Tempe, Arizona, was the first recorded fatality involving a fully autonomous vehicle. The human passenger, or “backup” driver, was ultimately charged with negligent homicide.
This was far from an isolated incident. Between 2019 and 2024, there were 3,946 autonomous vehicle accidents, according to the Craft Law Firm. Of these cases, 10 percent caused injury and 2 percent resulted in fatalities.
“Right now, the law still treats autonomous driving systems under traditional negligence and product liability principles,” a representative for Valiente Mott Injury Attorneys told The Epoch Times.
“If a driver is expected to monitor the system and fails to intervene, they can be held responsible for negligence. But if the technology itself is defective or marketed in a misleading way, the manufacturer may face liability. In many cases, fault can be shared. We’re essentially applying old legal standards to new technology until the law catches up.”
The representative added that current laws were written with human drivers in mind.
“As autonomy increases, legislatures and courts will need to define how responsibility is allocated between human operators, manufacturers, and possibly even software developers.”
This ambiguity leads to what Kolmogorov called “responsibility fragmentation.”
“Unlike a simple tool with a single manufacturer and operator, an AI system is the product of a long and complex supply chain,” he said. “When a failure occurs, attributing liability becomes an exercise in untangling a dense web of dependencies, making it difficult for a victim to identify the appropriate defendant.”
This autonomous AI supply chain can include data suppliers, software developers, hardware manufacturers, system integrators, and end users, each contributing to the final product.
Kolmogorov noted that the driver bore the criminal responsibility in the 2018 Uber case, but on the civil end, legal experts said Uber had strong liability exposure that could qualify as negligence and product liability.
“The case exposed the split between criminal versus civil standards. The former requires intent or recklessness, while the latter hinges on design and testing failures,” Kolmogorov said.
Similarly, Tesla’s Autopilot has been tied to multiple crashes, including a 2019 fatality in Florida. This August, a jury found Tesla partially liable, saying its AI system contributed to the accident alongside driver negligence.
Autonomous AI systems with the ability to cause harm aren’t limited to self-driving cars. Mostly autonomous “agentic” AI models are being integrated into nearly every sector of the United States, from health care to manufacturing, logistics, software, and the military.
Avoiding Hazards
Researchers at IBM have called 2025 the year of the AI agent. At a glance, agentic AI includes mostly autonomous systems that can act independently to achieve goals with minimal human oversight.
Some AI experts, including David Talby, CEO of John Snow Labs and chief technology officer at Pacific AI, believe AI agents will increasingly affect people’s quality of life as models advance and become more independent of human involvement.
“Health care stands out as one of the most demanding domains. Unlike consumer applications or even some enterprise use cases, AI in health care directly impacts people’s lives and well-being,” he told The Epoch Times.
Talby said many autonomous AI systems already exist in health care, including digital health applications that interact directly with patients, clinical decision support systems, and visit summarization tools that work alongside doctors.
“These systems can independently process complex medical data, draft clinical notes, or guide patients in self-care, but it’s important this is always under a human-in-the-loop framework of accountability,” he said. “While parts of the workflow are fully automated, human oversight is still needed in health care and beyond.”
Talby added that errors can have profound consequences and accountability must extend past accuracy metrics. Issues such as bias in medical datasets, robustness under real-world variability, and adherence to ethical standards all demand what he called “rigorous governance.”
“In health care, our top concerns extend beyond model accuracy. We must ensure that AI systems are not only effective but also safe, transparent, and compliant with regulations,” Talby said.
Kolmogorov said “physicians face dual risks” as AI diagnostic tools become more accurate in health care. They face “negligence for not using validated AI and negligence for over-relying on flawed recommendations.”
He said patients should be informed when AI is used. “And data representativeness is crucial to avoid bias.”
This year, AI developers from Hugging Face, a machine learning company, published research that advocated against deploying fully autonomous AI agents. The article stated, “Risks to people increase with the autonomy of a system: The more control a user cedes to an AI agent, the more risks to people arise.”
In July, the White House unveiled its AI Action Plan, which outlined infrastructure building, investment, and defense initiatives. However, this plan does not address the legal gray areas that remain as autonomous AI continues expanding its reach.
European lawmakers face similar challenges and currently treat AI as a product under the EU’s Product Liability Directive, which extends liability to post-sale changes, such as model updates or new machine-learned behaviors.
“I regularly advise clients in emerging tech and mobility sectors on AI autonomy risk,” Kolmogorov said. “The focus is on mitigating exposure before a failure becomes a legal crisis.”


