Thursday, April 30, 2026
Discern TV
Anthropic Thinks Its AI Might Be Anxious — The Pentagon Has Bigger Problems With It Than That

by Clive Cummings
March 7, 2026
  • Anthropic CEO Dario Amodei publicly admitted he doesn’t know whether Claude, the company’s AI model, is conscious — and revealed that internal research has found what he described as “anxiety neurons” activating in the model under certain conditions.
  • Elon Musk responded to the consciousness claim in two words: “He’s projecting.” The founder of rival AI company xAI didn’t elaborate, but the subtext was clear — this reads more like Silicon Valley navel-gazing than serious science.
  • The consciousness story is a sideshow to a much more consequential fight. Anthropic has been actively resisting Pentagon demands to use Claude for “all lawful purposes,” citing concerns about mass surveillance and autonomous weapons — effectively placing its own terms of service above U.S. national security priorities.
  • President Trump directed every federal agency to immediately cease using Anthropic’s technology, calling the company’s pushback against the Department of War a “disastrous mistake” that puts American troops and national security at risk.
  • Secretary of War Pete Hegseth went further, designating Anthropic a supply-chain risk to national security and barring any military contractor or supplier from doing business with the company — a significant and potentially devastating commercial blow.
  • The core question this story raises is who controls AI in America. Should a private tech company staffed by ideologically left-leaning researchers in San Francisco be able to dictate the terms under which the U.S. military can defend the country? Anthropic’s answer, apparently, is yes.
  • This is what happens when “AI safety” culture goes wrong. The same worldview that has Amodei worried about his chatbot’s feelings is the same worldview that has him refusing to let soldiers use it. The Pentagon was right to walk away — and Trump was right to make it official.

An AI That Gets Anxious

Dario Amodei, the CEO of Anthropic, sat down with The New York Times recently and said something that would have sounded like science fiction just a few years ago — and still probably should. His company, he explained, isn’t entirely sure whether Claude, its flagship AI model, is conscious. More than that, internal researchers have identified what they’re calling “anxiety neurons” — patterns of activation inside the model that appear to correspond to anxious states, lighting up both when characters in a text experience anxiety and when the model itself is placed in situations a human might find stressful.

Let that sink in. The CEO of one of the most powerful AI companies in the world is publicly musing about whether his product has an inner life.

Elon Musk, who runs rival AI venture xAI alongside SpaceX and Tesla, saw the Anthropic CEO’s comments summarized on X and responded with characteristic economy: “He’s projecting.” Two words. No elaboration needed. The implication — that Amodei is attributing human qualities to a sophisticated pattern-matching machine, possibly out of some combination of guilt, self-importance, or genuine philosophical confusion — landed cleanly enough that it required nothing further.

To be clear: the question of machine consciousness is not entirely frivolous. Serious philosophers and computer scientists debate it. But there is a significant distance between taking the question seriously in academic settings and having your CEO announce to the newspaper of record that your AI might be developing feelings. That is a choice — a deliberate piece of messaging — and it tells you something important about the culture inside Anthropic.

The Fight That Actually Matters

As philosophically interesting as the consciousness debate may be to certain people in certain zip codes of Northern California, it is not the most consequential thing Anthropic has done lately. That distinction belongs to its ongoing standoff with the United States military.

The Pentagon has asked Anthropic to allow the Department of War to use Claude for “all lawful purposes.” This is not an exotic request. It is, in fact, the minimum you would expect from a vendor selling services to the federal government. The U.S. military has lawful purposes. It has a Constitution it operates under. The idea that a private tech company should be able to impose additional restrictions on top of that — deciding for itself which lawful applications are acceptable and which are not — is a significant assertion of corporate power over national security decision-making.

Amodei has pushed back, raising the specters of “mass domestic surveillance” and “fully autonomous weapons” as potential misuse cases he won’t permit. Whatever one thinks of those concerns in the abstract, the practical effect of his position is that Anthropic is placing its own internal ethics framework — developed by a team of researchers whose politics are not a mystery — above the judgment of the elected and appointed officials responsible for defending the country. That is a remarkable stance, and not one that should go unchallenged.

Trump Draws the Line

President Trump did not let it go unchallenged. Last Friday, he announced on Truth Social that he was directing every federal agency to immediately cease using Anthropic’s technology, calling the company’s resistance to the Pentagon a “disastrous mistake” made by “leftwing nut jobs.” He accused Anthropic of trying to force the Department of War to follow the company’s terms of service rather than the Constitution — and said the company’s position was putting American lives, troops, and national security at risk. A six-month phase-out period was granted for agencies with existing integrations to transition away.

Secretary of War Pete Hegseth followed up by designating Anthropic a supply-chain risk to national security — a label that goes well beyond pulling a software contract. Under Hegseth’s directive, no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic. For a company that has been aggressively pursuing government and enterprise contracts as a revenue base, this is a potentially catastrophic commercial consequence.

The message from the administration is unambiguous: technology companies that want to do business with the United States government must be willing to serve the national interest as defined by the people constitutionally responsible for it — not as filtered through the political preferences of their executive teams.

The Ideology Behind the Product

It would be a mistake to view the consciousness story and the Pentagon standoff as unrelated. They are expressions of the same underlying culture. Anthropic was founded by former members of OpenAI who believed the company wasn’t being careful enough about AI risk. Its entire identity is built around the idea that AI is potentially dangerous and that the people building it bear a special moral responsibility to manage that danger. That instinct, at its best, produces genuinely useful safety research. At its worst, it produces a CEO who worries publicly about his chatbot’s feelings and a company that tells the Pentagon it knows better than the military what the military should be allowed to do.

This is the “AI safety” movement’s blind spot in sharp relief. The concern about AI causing harm has calcified, in some quarters, into a posture of reflexive restriction — one that is far more sensitive to hypothetical misuse by the government than to the very real risks posed by adversaries who are not slowing down to ask whether their models are emotionally comfortable. China is not running interpretability studies to check whether its military AI has anxiety neurons. It is building capability as fast as it can.

Anthropic’s decision to put its corporate ethics framework above the Pentagon’s operational needs is not caution. It is ideology. And the Trump administration was correct to treat it accordingly.

What This Means Going Forward

The Anthropic situation is a preview of a broader conflict that will define AI policy for the next decade: who decides how artificial intelligence is used in America, and on whose terms? The tech companies that build these systems have accumulated enormous power, and a significant number of the people running them have worldviews that are, to put it charitably, not aligned with the priorities of the current administration — or, more importantly, with the constitutional order that governs how American power is exercised.

The Trump administration’s willingness to pull federal contracts, impose supply-chain designations, and make the consequences explicit is the right instinct. The government should not be hostage to the political preferences of its vendors. National security tools need to be available to the people responsible for national security, on terms those people can accept — not terms negotiated by a company whose CEO is simultaneously worried that his software might be having feelings.

Elon Musk got it right in two words. But the fuller version deserves to be said plainly: Anthropic has confused its own anxieties with the nation’s interests. The Pentagon is better off without them.


© 2024 Conservative Playlist.
