Bipartisan Coalition Unveils "Pro-Human" Framework for Responsible AI Development



By admin | Mar 08, 2026 | 3 min read

The recent rift between Washington and Anthropic revealed a troubling absence of clear regulations for artificial intelligence. In response, a bipartisan group of thinkers has created what the government has not: a concrete framework for responsible AI development. The Pro-Human Declaration was completed just before last week's Pentagon-Anthropic dispute, making the timing of these events particularly striking. "There's something quite remarkable that has happened in America just in the last four months," noted Max Tegmark, the MIT physicist and AI researcher who helped organize the effort. "Polling suddenly [shows] that 95% of all Americans oppose an unregulated race to superintelligence."

This newly published document, endorsed by hundreds of experts, former officials, and public figures, begins with a straightforward premise: humanity stands at a crossroads. One path, labeled "the race to replace," leads to humans being displaced first as workers, then as decision-makers, while power flows to unaccountable institutions and their machines. The alternative path envisions AI dramatically expanding human potential. Achieving this positive outcome rests on five core principles: keeping humans in control, preventing the concentration of power, protecting the human experience, preserving individual liberty, and ensuring legal accountability for AI companies.

Among its stronger measures, the declaration proposes an outright ban on superintelligence development until scientific consensus confirms it can be done safely and there is genuine democratic approval. It also calls for mandatory off-switches on powerful systems and prohibits architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown. The urgency of these proposals is underscored by recent events. On the last Friday in February, Defense Secretary Pete Hegseth labeled Anthropic—whose AI already operates on classified military platforms—a "supply chain risk" after the company denied the Pentagon unlimited use of its technology, a designation typically applied to firms with links to China. Shortly after, OpenAI struck its own agreement with the Defense Department, which legal experts suggest will be challenging to enforce effectively.

This situation highlights the growing cost of Congressional inaction on AI. As Dean Ball, a senior fellow at the Foundation for American Innovation, observed afterward, "This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems." Tegmark offered a relatable analogy to explain the need for oversight: "You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe," he said, "because the FDA won’t allow them to release anything until it’s safe enough."

While Washington's internal conflicts seldom create the public pressure needed to change laws, Tegmark points to child safety as the issue most likely to break the current deadlock. Accordingly, the declaration advocates for mandatory pre-deployment testing of AI products, especially chatbots and companion apps targeting young users, to evaluate risks like increased suicidal thoughts, worsened mental health conditions, and emotional manipulation. "If some creepy old man is texting an 11-year-old pretending to be a young girl and trying to persuade this boy to commit suicide, the guy can go to jail for that," Tegmark stated. "We already have laws. It's illegal. So why is it different if a machine does it?"

He believes that once pre-release testing is established for children’s products, its scope will almost certainly expand. "People will come along and be like—let’s add a few other requirements. Maybe we should also test that this can’t help terrorists make bioweapons. Maybe we should test to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government."

The broad support for the declaration is significant, with signatories including former Trump advisor Steve Bannon and President Obama’s National Security Advisor Susan Rice, alongside former Joint Chiefs Chairman Mike Mullen and progressive faith leaders. "What they agree on, of course, is that they’re all human," Tegmark remarked. "If it’s going to come down to whether we want a future for humans or a future for machines, of course they’re going to be on the same side."
