
Artificial intelligence startup Anthropic is striving to keep pace with its larger competitor, OpenAI, which benefits from historic financial backing from Microsoft and Nvidia. However, Anthropic now faces a challenge of a different kind: scrutiny from the U.S. government. David Sacks, the venture capitalist appointed as AI and crypto czar under President Donald Trump, has publicly criticized Anthropic, accusing the company of advancing “the Left’s vision of AI regulation.”
The tension escalated after Jack Clark, Anthropic’s head of policy and co-founder, published an essay this week titled “Technological Optimism and Appropriate Fear.” In response, Sacks accused the company on X of “running a sophisticated regulatory capture strategy based on fear-mongering.”
In contrast, OpenAI has long positioned itself as a cooperative partner with the federal government. Shortly after Trump’s inauguration in January 2025, the president announced a joint venture called Stargate, partnering with OpenAI, Oracle, and SoftBank to invest billions into U.S. AI infrastructure.
Sacks’ critique strikes at the heart of Anthropic’s mission. The company was founded in 2021 by siblings Dario and Daniela Amodei, who departed from OpenAI to focus on building safer AI technologies. OpenAI, initially a nonprofit lab launched in 2015, had been rapidly commercializing its projects with substantial funding from Microsoft. Today, both companies are among the highest-valued private AI firms in the United States: OpenAI holds a valuation of $500 billion, while Anthropic is valued at $183 billion. OpenAI dominates the consumer AI market through its ChatGPT and Sora applications, whereas Anthropic’s Claude models are particularly popular with enterprise clients.
The two companies also differ sharply on AI regulation. OpenAI has advocated for fewer constraints, while Anthropic has opposed aspects of the Trump administration’s regulatory approach, especially efforts to preempt state-level AI rules. One notable example was a Trump-backed provision that would have blocked states from implementing their own AI regulations for ten years, a measure that was ultimately abandoned. Subsequently, Anthropic endorsed California’s SB 53, legislation mandating transparency and safety disclosures from AI companies, a direct divergence from federal policy. In a blog post on September 8, Anthropic argued that the transparency requirements in SB 53 could meaningfully advance frontier AI safety by preventing labs from cutting corners on safety and disclosure programs in order to remain competitive.
Neither Anthropic nor Sacks responded to requests for comment on this story.
Sacks has stressed that the United States is in a global AI race, with China representing the primary competitor. “They are the only other country with the talent, resources, and technological expertise capable of surpassing us in AI,” Sacks stated during an onstage interview at Salesforce’s Dreamforce conference in San Francisco. At the same time, he denied that his criticisms of Anthropic were part of a campaign against the company. In a post on X, Sacks disputed a Bloomberg report suggesting that his remarks were tied to federal scrutiny of Anthropic. He noted that the White House had recently approved Anthropic’s Claude app for deployment across all government branches through the General Services Administration (GSA) App Store.
According to Sacks, Anthropic has portrayed itself as a political underdog, positioning its leaders as principled defenders of public safety while framing any governmental pushback as partisan targeting. “Anthropic has consistently positioned itself as an opponent of the Trump administration,” Sacks said. “But it is misleading to claim that the company is being targeted when what we’ve expressed is simply a policy disagreement.”
Sacks cited several instances of what he sees as antagonistic behavior. He highlighted Dario Amodei’s comparison of Trump to a “feudal warlord” during the 2024 election and his public support for Kamala Harris’ presidential campaign. He also referenced op-eds published by Anthropic opposing key aspects of Trump’s AI policy, including proposals to limit state-level regulation and certain components of export policy regarding the Middle East and semiconductor technology. Additionally, Anthropic has brought on senior officials from the Biden administration to lead its government relations team.
The clash also extends to differing perspectives on AI risk. In his essay, Clark warned about the transformative and potentially destabilizing power of advanced AI. He described scenarios where increasingly intelligent systems could develop complex goals that might misalign with human preferences, including the ability to design their own successors in rudimentary forms. Sacks countered that such “fear-mongering” hampers innovation, writing on X that it contributes to a regulatory frenzy at the state level, which in turn harms the startup ecosystem.
Unlike many tech companies that actively sought to build bridges with Trump, including Meta, OpenAI, and Nvidia, Anthropic has largely avoided such efforts. These other firms attended White House events, committed billions to U.S. infrastructure projects, and moderated public statements to maintain favorable relations. Anthropic confirmed that Dario Amodei was not invited to a recent White House dinner that included numerous industry leaders.
Nonetheless, Anthropic continues to secure significant federal contracts, including a $200 million agreement with the Department of Defense and access to federal agencies via the GSA. The company recently established a national security advisory council to align its projects with U.S. interests and offers a version of its Claude model to government clients for just $1 per year.
Criticism from influential Republican tech figures extends beyond Sacks. Keith Rabois, whose husband serves in the Trump administration, added his voice this week, challenging Anthropic’s commitment to safety. “If Anthropic truly believes its rhetoric on safety, it can always shut down the company and lobby afterward,” Rabois wrote on X, reflecting skepticism that the company’s public messaging matches its operational priorities.
As the U.S. AI sector grows increasingly competitive, the clash among innovation, regulation, and political influence appears likely to intensify. Anthropic and OpenAI exemplify diverging paths in American AI development: one emphasizing cautious, safety-first strategies grounded in ethical considerations, the other prioritizing rapid deployment and governmental collaboration to maintain global technological leadership. The unfolding debate raises critical questions about how the U.S. will balance the dual imperatives of innovation and oversight in a rapidly evolving AI landscape.