SAN FRANCISCO — Walking through Anthropic’s offices, President and co-founder Daniela Amodei repeatedly returns to a simple idea that has come to define the company’s strategy: achieving more while using fewer resources.

That mindset runs counter to the dominant narrative in Silicon Valley, where many of the largest AI labs — and their investors — believe that sheer scale will determine the ultimate winners. Across the industry, companies are raising unprecedented amounts of capital, reserving advanced chips years in advance, and constructing massive data centers on the assumption that whoever builds the biggest AI infrastructure will come out ahead.

OpenAI represents the most visible version of that philosophy. Through partnerships and long-term agreements, the company has committed an estimated $1.4 trillion toward compute and infrastructure, racing to build enormous data center campuses and secure next-generation chips at a speed the industry has never witnessed.

Anthropic, by contrast, is betting that there is an alternative path. The company argues that careful spending, better algorithms, and thoughtful deployment choices can keep a lab at the cutting edge without trying to outspend every competitor.

“What we’ve always tried to do at Anthropic is be very deliberate with the resources we have, even though this is a field that demands a huge amount of compute,” Amodei told CNBC. She noted that Anthropic has typically operated with far less capital and compute than its rivals, yet has still delivered models that rank among the most powerful and capable over the past several years.

Ironically, Daniela Amodei and her brother, Anthropic CEO Dario Amodei, helped shape the very scaling-driven worldview they are now pushing back against. Dario Amodei, a former researcher with experience at Google and Baidu, was among those who helped popularize the idea that increasing data, compute, and model size leads to predictable improvements in performance.

That principle has become the financial foundation of today's AI race. It justifies massive spending by cloud providers, supports soaring chip stocks, and encourages investors to assign enormous valuations to companies that are still far from profitability.

While Anthropic has benefited from this logic, the company is trying to show that future competition will not be decided solely by who can afford the largest pre-training runs. Its approach emphasizes higher-quality training data, post-training methods that strengthen reasoning, and product decisions that make models cheaper to operate and easier to deploy at scale — an important consideration given that inference costs recur for as long as a model is in use.

Anthropic is not operating on minimal resources. The company has around $100 billion in compute commitments and expects that figure to grow as it works to remain competitive. “The compute needs of the future are enormous,” Amodei said, adding that additional capacity will be necessary simply to stay at the frontier.

Still, she cautioned that many of the eye-catching figures circulating in the industry are not directly comparable. The structure of long-term deals, prepayments, and partnerships can make headline numbers misleading, especially in an environment where companies feel pressure to lock in hardware years ahead of actual demand.

More broadly, Amodei said that even the people who helped establish the scaling thesis have been surprised by how long the exponential curve has held. Year after year, expectations that progress would slow have been proven wrong. “We keep thinking it can’t possibly continue at this pace,” she said. “And then it does.”

That observation captures both the excitement and the unease surrounding the current AI buildout. If exponential progress continues, companies that secured power, chips, and land early may look visionary. If it falters — or if adoption fails to keep up with technological capability — those that overcommitted could be stuck with years of fixed costs tied to underused infrastructure.

Amodei drew an important distinction between technological progress and economic adoption, two factors that are often blurred together. From a technical standpoint, Anthropic has not seen clear signs of slowing innovation. The harder question is how quickly businesses and individuals can actually integrate these tools into daily workflows, where procurement processes, organizational inertia, and human factors often slow adoption.

“No matter how good the technology is, it takes time for people and organizations to really use it,” she said. “The key question is how quickly businesses — and individuals — can turn capability into real value.”

That focus on enterprise adoption helps explain why Anthropic is closely watched as a bellwether for the broader generative AI market. The company has positioned itself as enterprise-first, with much of its revenue coming from businesses embedding Claude into products, workflows, and internal systems. This type of usage can be more durable than that of consumer-facing apps, where interest can fade once the novelty wears off.

Anthropic says its revenue has increased tenfold year over year for three consecutive years. It has also taken an unusual approach to distribution, making Claude available across multiple major cloud platforms — including through partners that also develop competing models.

Amodei described this not as a truce among rivals, but as a response to customer demand. Large enterprises want flexibility across cloud providers, and cloud platforms want to offer the models their biggest customers are asking for.

This multicloud strategy also reduces reliance on any single infrastructure bet. While OpenAI is building around massive, dedicated campuses, Anthropic aims to stay flexible, shifting workloads based on cost, availability, and demand, while concentrating internally on improving efficiency and performance per unit of compute.

As 2026 begins, these contrasting strategies take on added significance. Both Anthropic and OpenAI are edging toward public-market readiness while still operating in a private-market environment where compute needs are growing faster than certainty. Neither company has announced an IPO timeline, but both are strengthening governance, forecasting, and financial discipline in ways that suggest preparation for greater scrutiny.

At the same time, both continue to raise capital and strike ever-larger compute deals to support the next generation of models. That dynamic sets the stage for a real test of strategy.

If capital continues to reward scale above all else, OpenAI’s approach may remain the dominant template. If investors begin to demand efficiency alongside performance, Anthropic’s philosophy of doing more with less could become a competitive advantage.

Anthropic’s wager is not that scaling is ineffective, but that it is not the only lever that matters. The company is betting that the next phase of the AI race will favor those who can keep improving while spending at a level the broader economy can sustain.

“The exponential continues until it doesn’t,” Amodei said. The open question for 2026 is what happens to the AI arms race — and to the companies fueling it — if the industry’s favorite curve finally starts to bend.