CoreWeave

Stock Symbol: CRWV | Exchange: Nasdaq


CoreWeave: From Crypto Mining to AI Infrastructure Kingmaker

I. Introduction and The AI Gold Rush

On the morning of March 28, 2025, the Nasdaq felt like it was hosting a coming-out party. CoreWeave, a company most Americans couldn’t have picked out of a lineup, was about to deliver the biggest U.S. technology IPO since 2021. The shares were priced at $40, the deal raised $1.5 billion, and the implied valuation landed around $23 billion.

But the surreal part wasn’t just the size. It was who had gotten the company there.

CoreWeave wasn’t born in a Stanford lab or an MIT dorm room. It started with three commodities traders from New Jersey who got hooked on Ethereum mining. First it was one GPU in a closet. Then it was a pool table covered in graphics cards. Then it was a grandfather’s garage turned into a DIY data center. From that very un-Silicon-Valley beginning, they built something that ended up sitting under some of the most important AI workloads on the planet.

So how did three guys trading natural gas futures become a critical supplier to Microsoft’s AI push? How did a crypto mining operation morph into infrastructure that powers OpenAI’s most advanced models? And the question that hangs over the whole thing: is CoreWeave a new kind of AWS for the AI era, or a capital-hungry machine held together by debt, circular relationships, and a dangerously concentrated customer base?

That tension is the heartbeat of the story. This is a company built on timing and nerve: a perfectly executed pivot, a willingness to finance growth at a scale most startups can’t even comprehend, and an opportunistic grab at scarce supply just as demand went vertical. Along the way, it drags you into the biggest business dynamics of the moment—AI’s arms race, the brutal economics of cloud infrastructure, and the shifting balance of power between chipmakers and the clouds that rent those chips back to the world.

To understand where CoreWeave is going, we have to go back to where it started: the chaos of crypto mining, and the unlikely friendships formed on trading floors and lacrosse fields.


II. The Unlikely Origin Story: Atlantic Crypto (2017–2019)

Brian Venturo was desperate. It was 2006, he was about to graduate, and Wall Street wasn’t exactly rolling out the welcome mat. So he did what a lot of people do in that moment: he asked someone he trusted. Venturo went to his lacrosse team captain, who gave him a name and a lead: Michael Intrator.

Venturo called. Intrator said he was too busy to meet.

Most people would’ve taken the hint. Venturo called again the next morning at six. And the morning after that. And the morning after that. Every day, at six a.m., for five months. Eventually, Intrator picked up. They met. Intrator hired him on the spot.

That rhythm—relentless persistence, and a willingness to keep showing up until the door opens—would end up defining the company they built together.

Intrator wasn’t a Silicon Valley founder archetype. He’d studied political science at Binghamton University, earned a master’s in public administration from Columbia, and spent more than fifteen years in finance. He’d been a principal portfolio manager at Natsource Asset Management, then co-founded Hudson Ridge Asset Management in 2013, a hedge fund focused on natural gas. Venturo joined him there, and the two started building machine learning models to make better bets in data-heavy energy markets.

The third piece of the puzzle was Brannin McBee. He ran the data firm Hudson Ridge relied on for trading analytics and had his own background as a proprietary trader in North American natural gas, power, and agriculture.

These guys weren’t technologists. They were traders.

But they understood two things that mattered more than résumés: commodity cycles, and infrastructure. They knew how booms attract capital, how busts wipe out the undisciplined, and how the people who survive are the ones who treat assets like assets—not like religion.

In 2017, crypto was the biggest boom in the world. Bitcoin and Ethereum were ripping upward, and the three partners saw an opportunity. They founded Atlantic Crypto Corp. and started mining Ethereum using Nvidia GPUs. The thesis was straightforward: Ethereum mining was still GPU-friendly, unlike Bitcoin, which had already moved to specialized ASIC hardware. If you could source enough GPUs and run them somewhere with tolerable electricity costs, you could print money. At least for a while.

Their “data center” began as a joke you could’ve walked past without noticing: a single GPU sitting on a pool table in a Wall Street office. Then it was two. Then the pool table vanished under a mess of graphics cards, fans, and cables—heat pouring off the rigs like space heaters. When the office couldn’t take it anymore, they moved the operation to Venturo’s grandfather’s garage in New Jersey and hacked together cooling to keep the whole thing from cooking itself.

None of that was unique. What was unique was the way they thought about it.

Atlantic Crypto wasn’t run by crypto true believers writing manifestos about decentralization. It was run by commodities guys who treated the rigs the way they treated pipelines and storage: as infrastructure that could be moved, repurposed, and monetized in whatever market paid best. While other miners obsessed over token prices, Intrator, Venturo, and McBee obsessed over hardware, uptime, and—crucially—the relationships required to keep getting more of it.

That mattered because on the other side of every expansion plan sat Nvidia.

In 2017 and 2018, Nvidia couldn’t make GPUs fast enough. Demand from gamers collided with demand from miners, and supply got tight. Atlantic Crypto became the kind of buyer Nvidia takes seriously: reliable, high-volume, and consistent. They weren’t clicking “add to cart” on consumer sites. They were placing institutional-sized orders and building real rapport with Nvidia’s sales organization.

At the time, it probably felt like a procurement edge. In reality, they were building what would become their most valuable asset.

Then the cycle turned.

By late 2018 and into 2019, crypto prices collapsed. Bitcoin plunged more than eighty percent from its peak. Ethereum fell even harder. Across the country, mining operations shut down, dumped GPUs onto the secondary market at fire-sale prices, and walked away. The gold rush ended the way gold rushes usually do: suddenly, and painfully.

For Atlantic Crypto, this was the fork in the road. They could follow everyone else into the liquidation pile. Or they could ask a different question.

They looked around and realized they weren’t actually holding “crypto.” They were holding warehouses of high-performance Nvidia GPUs. They had cooling know-how, power arrangements, and a supply relationship others didn’t. The coin might have cratered, but the compute hadn’t magically stopped working.

So they asked the question that would become the hinge point of the entire story: what else could these machines do?

The answer was: almost everything that mattered next.

GPUs had become the Swiss Army knife of modern computing—exceptional for tasks that thrive on massive parallelism: visual effects rendering, scientific simulation, and, increasingly, machine learning training. The founders saw it clearly. The real asset wasn’t the Ethereum they’d been mining. It was the GPU infrastructure itself.

In December 2019, they renamed Atlantic Crypto to CoreWeave. And they started the pivot—from mining coins to renting compute—that would eventually put them underneath the AI boom.

III. The Pivot: Building a Specialized Cloud (2019–2022)

In 2019, cloud computing was already a three-horse race. Amazon Web Services, Microsoft Azure, and Google Cloud dominated the market, controlling roughly two-thirds of global cloud infrastructure spending. So the idea that a tiny startup—run by three former commodities traders with a garage full of ex-crypto mining GPUs—could take on those giants was laughable.

And it would’ve been, if CoreWeave had tried to fight them on their terms.

Instead, the founders made a bet that was anything but obvious in the moment: don’t build a general-purpose cloud. Build a cloud that is unapologetically, obsessively optimized for GPU workloads.

The hyperscalers did offer GPU instances. But their clouds were designed first for CPU-centric computing—the kind that runs most business software. In practice, renting GPUs from a big cloud often meant renting GPUs bolted onto an architecture that wasn’t built around them. It worked. It just wasn’t the best tool for customers who needed dense, high-throughput GPU compute.

CoreWeave built the whole stack around GPUs from day one. The easiest analogy is this: AWS, Azure, and Google built shopping malls with a GPU store inside. CoreWeave built a GPU superstore. And for customers who cared about raw GPU performance—rendering, simulation, and early machine learning workloads—that focus could mean better performance, lower latency, and often a lower bill.

Their first real customers weren’t AI labs. They were visual effects studios and rendering farms. VFX houses rendering thousands of frames for big-budget films found CoreWeave’s GPU clusters could push through jobs faster, and often more cheaply, than the big clouds. It was a niche, but it was the right kind of niche: demanding customers, spiky workloads, and a clear reason to choose a specialist.

By 2022, CoreWeave was still small by any normal Silicon Valley standard. It operated three U.S. data centers and was generating about sixteen million dollars in annual revenue. But the numbers missed what was happening underneath: the founders were setting up a trade.

That year, CoreWeave spent around one hundred million dollars on Nvidia’s new H100 GPUs—then the most coveted hardware in the industry, purpose-built for AI workloads. The H100 wasn’t just “a faster chip.” It was exactly the kind of step-change upgrade that makes new categories possible, especially for training large machine learning models.

These GPUs were expensive, scarce, and already hard to get. Most companies were placing cautious orders and accepting long wait times. CoreWeave didn’t. They leaned in.

The reason this mattered comes down to supply. In 2022, Nvidia’s manufacturing capacity was constrained, and it could only ship so many top-end GPUs in a given quarter. Demand was building, but the world hadn’t fully internalized where it was headed. By ordering early and aggressively, CoreWeave wasn’t just buying hardware—it was reserving a scarce commodity before the scarcity became obvious to everyone.

Then, in October 2022, they launched an accelerator program for AI startups, offering discounted access to GPU infrastructure. It wasn’t philanthropy. It was distribution. Get the next generation of AI companies running on CoreWeave early, and if they hit product-market fit, they wouldn’t need to migrate—they’d just scale.

Taken together, these moves made CoreWeave look like a company with a crystal ball. But there’s a simpler explanation: the founders were trained to see markets as supply and demand problems. Commodities traders live on scarcity premiums and timing. They secure supply before the squeeze, because once the squeeze starts, it’s too late.

Meanwhile, the hyperscalers weren’t asleep at the wheel—but they were steering a much bigger ship. Their priorities were dictated by enormous, CPU-heavy customer bases. GPUs were important and growing, but they weren’t the center of gravity. That created a window: demand for specialized GPU compute was rising faster than the big clouds were built to respond.

CoreWeave planted itself in that gap.

What no one could have predicted was just how fast the gap was about to rip open.


IV. The ChatGPT Explosion and Microsoft Partnership (November 2022–2023)

On November 30, 2022, OpenAI released ChatGPT to the public. Within five days, a million people had used it. Within two months, that number had climbed to one hundred million—making it the fastest-growing consumer app in history. And almost overnight, the constraints of the AI era snapped into focus.

Because ChatGPT wasn’t just software. It was an industrial-scale computing workload.

Training and serving large language models devours GPUs. Every prompt kicks off a storm of computation across clusters of chips. Do that for millions of people, at the same time, and “a lot of compute” stops being a figure of speech. OpenAI needed more GPUs. Microsoft—already deeply tied to OpenAI and racing to weave AI into Office, Bing, and Azure—needed even more than that. Within months, nearly every major tech company was hunting for the same scarce resource.
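
A quick back-of-envelope calculation makes the point concrete. A common rule of thumb is that a decoder-style language model burns roughly two floating-point operations per parameter for every token it generates. The sketch below runs that math with purely hypothetical figures for model size, traffic, and realized GPU throughput; none of these are OpenAI or CoreWeave numbers, but the order of magnitude is the point.

```python
# Back-of-envelope: GPUs needed just to serve a large language model.
# Every figure here is an illustrative assumption, not OpenAI or CoreWeave data.

PARAMS = 175e9                  # assumed model size: 175 billion parameters
FLOPS_PER_TOKEN = 2 * PARAMS    # rough rule: ~2 FLOPs per parameter per generated token

REQUESTS_PER_DAY = 100e6        # assumed daily traffic
TOKENS_PER_REQUEST = 1_000      # assumed average tokens generated per request

GPU_PEAK_FLOPS = 1e15           # petaFLOP-class accelerator, order of magnitude
REALIZED_FRACTION = 0.05        # interactive decoding is memory-bound, so realized
                                # throughput is a small fraction of peak

daily_flops = FLOPS_PER_TOKEN * TOKENS_PER_REQUEST * REQUESTS_PER_DAY
flops_per_gpu_per_day = GPU_PEAK_FLOPS * REALIZED_FRACTION * 86_400
gpus_around_the_clock = daily_flops / flops_per_gpu_per_day

print(f"Daily inference compute: {daily_flops:.1e} FLOPs")
print(f"GPUs running 24/7 just to serve responses: {gpus_around_the_clock:,.0f}")
# Peak-hour traffic, prompt processing, redundancy, and above all training runs
# push the real requirement far higher.
```

Even with generous assumptions, you land in the thousands of chips before a single training run is counted.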

There simply weren’t enough GPUs, and there wouldn’t be for a while.

This was the imbalance CoreWeave had been quietly positioning for—even if no one could have predicted the exact trigger. They had GPUs in hand. They had built a cloud designed to run them efficiently. And they had the kind of Nvidia relationship that mattered when supply turned into allocation.

That’s where Microsoft enters the story.

In 2023, Microsoft began partnering with CoreWeave for a blunt reason: demand was arriving faster than Azure could stand up new GPU capacity. Data centers don’t appear on command. They take time—planning, permitting, construction, networking—and the limiting reagent wasn’t just buildings. It was the chips themselves. Securing meaningful volumes of Nvidia’s best GPUs required advance commitments and credibility in the supply chain. CoreWeave had spent years building both.

So Microsoft did something that would have sounded ridiculous a few years earlier: one of the world’s largest cloud providers started renting GPU capacity from a specialist that, not long before, had been mining Ethereum in a garage. CoreWeave became an overflow valve—capacity Microsoft needed now, not after the next build cycle.

In April 2023, Nvidia made the relationship official with a one hundred million dollar investment in CoreWeave. It wasn’t just money. It was a signal. Nvidia was effectively telling the market: this company matters. CoreWeave was becoming a strategic extension of Nvidia’s platform—buying its chips, deploying them at speed, and putting them to work for the most demanding customers on earth. That tight loop would later raise eyebrows for its circularity, but in early 2023 it looked like pure alignment: the chipmaker and the fastest-scaling GPU cloud reinforcing each other.

Of course, there was a catch. This kind of scaling is brutally expensive.

CoreWeave needed billions for GPUs, data center buildouts, networking gear, and the engineering talent to stitch it all together—long before revenue could catch up. In August 2023, the company landed a landmark $2.3 billion debt facility led by Magnetar Capital and Blackstone. The unusual part wasn’t the size. It was the collateral: CoreWeave pledged its Nvidia H100 GPUs to secure the loans.

In other words, lenders were underwriting the chips themselves—and, by extension, the durability of AI demand. It was a piece of financial engineering born from necessity, and it became a template CoreWeave would return to as it kept expanding.

Meanwhile, the physical buildout was happening at full speed. Throughout 2023, CoreWeave’s teams laid more than six thousand miles of fiber-optic cabling to connect new GPU clusters coming online across the country. The company’s revenue jumped from sixteen million dollars in 2022 to $229 million in 2023. But the bigger story wasn’t just growth—it was commitments. CoreWeave was signing longer-term contracts that stretched years ahead, turning chaotic demand into something closer to predictable backlog.

The customer roster started to validate the thesis from multiple angles. Stability AI, the company behind Stable Diffusion, used CoreWeave to train its image generation models. Stability’s own business would later face turbulence, but that didn’t change what its presence proved in the moment: serious AI workloads wanted serious GPU infrastructure, and they were willing to go to a specialist to get it.

By the end of 2023, CoreWeave’s story had fully flipped. This was no longer a clever pivot from crypto mining. It was a company at the center of the AI supply chain—one of the clearest “picks and shovels” plays in a gold rush defined by scarce compute.


V. The Hypergrowth Era: Scaling to Meet AI Demand (2023–2024)

By the time 2023 turned into 2024, CoreWeave wasn’t just growing. It was compounding at a rate that made even seasoned venture investors blink. Financing rounds got bigger, then bigger again—each one effectively a wager that the AI compute shortage wasn’t a moment, but a multi-year condition.

In December 2023, CoreWeave raised $642 million in a secondary investment. That structure mattered: it let some early investors and employees sell shares while new money came in. It also planted a question that would follow CoreWeave all the way to the IPO: when insiders cash out before going public, is that prudent diversification—or a sign the people closest to the business think the valuation is getting ahead of itself?

In the market CoreWeave was serving, that debate was background noise. GPU demand kept climbing, and there were only so many places to get real capacity at scale.

Then came May 2024. Coatue Management led a $1.1 billion Series C that valued CoreWeave at $19 billion. On paper, it looked wild: the company had done $229 million in revenue the year before. But investors weren’t buying the past; they were buying the next curve. And the curve showed up. CoreWeave’s 2024 revenue would land at $1.92 billion—roughly eight times 2023. A valuation that felt aggressive in May looked a lot more reasonable by December.

That same month, CoreWeave also closed a $7.5 billion debt facility—one of the largest private debt financings ever. And once again, the collateral was the GPUs themselves. Almost without anyone formally declaring it, Wall Street had helped create a new kind of asset-backed lending. Nvidia GPUs—hardware that, not long ago, sat in gaming PCs—were now treated more like income-producing industrial equipment. The logic was simple: if long-term cloud contracts turned those chips into predictable cash flows, they could be underwritten the way lenders underwrite planes or real estate.
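
To make the underwriting logic tangible, here is a minimal sketch of how a lender might size a GPU-backed loan. Every term in it is a hypothetical assumption rather than anything disclosed in CoreWeave's actual facilities: project the cash a committed contract produces per chip, then check that it covers debt service with room to spare.

```python
# Sketch of GPU-backed lending math. All terms are hypothetical assumptions,
# not figures from CoreWeave's actual debt facilities.

GPU_COST = 35_000              # assumed installed cost per GPU, dollars
RATE_PER_HOUR = 3.00           # assumed contracted price per GPU-hour
COMMITTED_UTILIZATION = 0.85   # assumed minimum utilization in the contract
CONTRACT_YEARS = 4             # assumed contract length
OPEX_SHARE = 0.30              # assumed share of revenue eaten by power, space, ops

INTEREST_RATE = 0.11           # assumed cost of debt on the facility
ADVANCE_RATE = 0.70            # lender funds this fraction of the hardware cost

annual_revenue = RATE_PER_HOUR * 24 * 365 * COMMITTED_UTILIZATION
annual_cash_flow = annual_revenue * (1 - OPEX_SHARE)

loan_per_gpu = GPU_COST * ADVANCE_RATE
# Simplified debt service: full-rate interest plus straight-line principal repayment.
annual_debt_service = loan_per_gpu * (INTEREST_RATE + 1 / CONTRACT_YEARS)

dscr = annual_cash_flow / annual_debt_service  # debt service coverage ratio
print(f"Cash flow per GPU per year:    ${annual_cash_flow:,.0f}")
print(f"Debt service per GPU per year: ${annual_debt_service:,.0f}")
print(f"Coverage ratio: {dscr:.1f}x (lenders want this comfortably above 1x)")
```

The math works for the lender only as long as the contract holds and the chips keep earning, which is why the real thing being underwritten was the durability of AI demand.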

The headline numbers told a clean story: $16 million in revenue in 2022, $229 million in 2023, $1.92 billion in 2024. But the details told the more fragile one. Microsoft accounted for sixty-two percent of 2024 revenue. Add the second-largest customer—widely believed to be Nvidia, which agreed in 2023 to spend $1.3 billion over four years renting back capacity—and the top two customers made up seventy-seven percent of total revenue.

That kind of concentration is a flashing red warning light in any technology business. If Microsoft accelerated its own buildout, renegotiated pricing, or shifted workloads elsewhere, CoreWeave would feel it immediately. CoreWeave acknowledged the risk in its filings and presentations. The counterargument was just as clear: Microsoft couldn’t build fast enough, and these were long-term contracts with committed minimums.

By the end of 2024, CoreWeave was operating thirty-two data centers with more than 250,000 Nvidia GPUs. In October and November, a secondary sale priced the company at a $23 billion valuation. The direction of travel was obvious: the next stop was the public markets.

So the real question for investors wasn’t whether CoreWeave’s growth was impressive. It was whether it was repeatable. Was this the beginning of a long buildout cycle, where AI demand kept expanding and CoreWeave kept filling the gap? Or was it the front-loaded rush of an industry standing up its first wave of infrastructure—after which demand would normalize, leaving CoreWeave with billions of debt, massive fixed costs, and a fleet of GPUs that might not earn enough to carry the load?


VI. The IPO and Public Market Reality Check (2025)

CoreWeave’s path to the public markets wasn’t a victory lap. It was a scramble.

The company originally set out to sell forty-nine million shares, aiming for a price between forty-seven and fifty-five dollars. The pitch was clean: revenue was exploding, demand for AI compute was still outrunning supply, and CoreWeave had somehow wedged itself into the supply chain of the most important buyers on earth.

Then the market got picky.

In the weeks before the IPO, tech stocks slid and investors started treating anything “AI” with suspicion instead of awe. The private-market story that had carried CoreWeave—growth at any cost, financed by ever-larger piles of debt—was suddenly being judged by public-market standards. The questions that had been easy to wave away in a boom became the whole conversation: Was the customer concentration too dangerous? Was the debt stack sustainable? And was the Nvidia relationship a flywheel—or a circular arrangement that looked strong right up until it didn’t?

CoreWeave responded the way companies do when they can feel demand thinning: it cut the deal down and cut the price. The shares priced at forty dollars, the bottom of a revised range, and the IPO raised $1.5 billion—well below the roughly $2.7 billion the original plan implied.

There was also a strategic wrinkle baked into the offering. OpenAI had signed an $11.9 billion, five-year deal with CoreWeave and took a $350 million equity stake contingent on the IPO closing. Nvidia, meanwhile, anchored the deal with a $250 million order at the IPO price. That wasn’t tourists showing up for a first-day pop. It was the two most important players in the AI stack—one buying the compute, the other selling the chips—putting their own money on the table.

The first days of trading captured the public-market mood perfectly: uncertainty, then whiplash. The stock opened at thirty-nine dollars, dipped below the offering price, and clawed back to close flat at forty. Day two it fell more than seven percent. By day three it had ripped higher to $52.57, up more than forty percent from the IPO price, as sidelined investors decided the initial wobble looked like a bargain.

The months that followed were even more dramatic. The shares eventually ran as high as $187 before falling back, creating a fifty-two-week range from $33.52 to $187. The volatility wasn’t random. It was the market debating, in real time, what CoreWeave actually was: a generational infrastructure company at the center of the AI era—or a leveraged bet on one customer and one supplier.

The bear case had real ammunition. CoreWeave generated roughly six billion dollars of negative cash flow in 2024, and management said 2025 capital expenditures would land between twelve and fourteen billion dollars. And the CFO warned that 2026 capex was expected to be “well in excess of double” the 2025 figure. Even by the standards of capital-intensive infrastructure, this was breathtaking.

Then there was leverage. By the third quarter of 2025, CoreWeave carried roughly $14.2 billion of debt against about three billion dollars of cash. The income statement was moving in the right direction—net loss narrowed from $360 million in the third quarter of 2024 to $110 million in the third quarter of 2025—but profitability was still out of reach, and the spending curve wasn’t flattening.

That’s the context in which CoreWeave made its most ambitious move yet. In July 2025, it announced a proposed all-stock acquisition of Core Scientific—a former Bitcoin miner that had pivoted into AI hosting—for about nine billion dollars. The goal was simple: get more control over the most painful input in the business, which wasn’t GPUs, but power and facilities. The deal would have brought roughly 1.3 gigawatts of gross power capacity across Core Scientific’s national data center footprint, plus an additional gigawatt of potential expansion. Management projected $500 million in annual cost savings and said the transaction would eliminate more than ten billion dollars in future lease payments over twelve years.

But shareholders didn’t buy it.

Two Seas Capital, an event-driven hedge fund, organized opposition. Proxy advisory firms were, at best, unenthusiastic. When Core Scientific shareholders voted, the proposal didn’t get the approval it needed. On October 30, 2025, Core Scientific terminated the merger agreement. CoreWeave CEO Michael Intrator struck a diplomatic tone, saying the company respected the views of Core Scientific’s stockholders and looked forward to continuing the commercial partnership.

The failed acquisition hurt—but it also clarified the playbook. CoreWeave understood that if it was going to survive public-market scrutiny while spending at hyperscaler scale, it needed more than clever financing. It needed structural advantage. Owning more of its infrastructure, rather than renting it, was the straightest path to better margins and stronger control over its destiny.

The message from 2025 was hard to miss: CoreWeave could move fast, but the public markets would make it pay for every risk it took.

VII. The Business Model: Infrastructure-as-a-Service for AI

To understand CoreWeave’s economics, picture it like a hotel chain for GPUs. The company buys Nvidia chips—some of the most expensive, hardest-to-get computing hardware on the planet—installs them in data centers, stitches them together with high-speed networking, and then rents that capacity out. Customers pay for what they use: GPU time, much like paying for electricity by the kilowatt-hour or a hotel room by the night.

That neat metaphor hides what makes the business both powerful and punishing: the upfront bill arrives long before the revenue. Each H100 or newer-generation chip costs tens of thousands of dollars. Put tens of thousands of them into a single facility, and you’re staring at hundreds of millions in investment before you’ve earned a penny. Then add everything around the chips—real estate, power, cooling, networking gear, and the people to operate it all—and “capital intensive” stops being a buzzword and starts being the entire story.

On paper, CoreWeave looked enviable. In 2024, gross margins were roughly seventy-six percent—meaning after the direct costs of running its infrastructure, it kept about seventy-six cents of every dollar of revenue. That’s strong by almost any standard, and it stacks up well against the big cloud providers.

But gross margin is only the top of the waterfall. The real drain sits below it: the massive capital expenditures required to keep building. Those data centers and GPU fleets don’t show up as “cost of revenue,” but they absolutely consume cash—often at a pace that can dwarf the gross profit coming in.
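
A deliberately simplified waterfall shows how both things can be true at once. The revenue and gross margin below are the 2024 figures cited in this piece; the operating expense, interest, and capex lines are placeholders assumed for illustration, and the sketch ignores non-cash add-backs and working capital, so it will not tie to reported results.

```python
# Why a ~76% gross margin can coexist with billions in cash burn.
# Revenue and gross margin are the 2024 figures cited in the text; every other
# line is an assumed placeholder. Non-cash add-backs, working capital, and
# financing flows are ignored, so this will not tie to reported results.

revenue = 1.92e9             # 2024 revenue (from the text)
gross_margin = 0.76          # 2024 gross margin (from the text)

operating_expenses = 0.5e9   # assumed: engineering, sales, G&A
interest_expense = 0.4e9     # assumed: cost of the debt stack
capex = 7.5e9                # assumed: GPUs, data centers, networking gear

gross_profit = revenue * gross_margin
cash_before_capex = gross_profit - operating_expenses - interest_expense
cash_after_capex = cash_before_capex - capex

print(f"Gross profit:      ${gross_profit / 1e9:6.2f}B")
print(f"Cash before capex: ${cash_before_capex / 1e9:6.2f}B")
print(f"Cash after capex:  ${cash_after_capex / 1e9:6.2f}B")  # deeply negative
```

The gross margin line is real. It just sits on top of a capex line that can be several times larger.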

No part of the model is more central—or more debated—than the Nvidia relationship. Nvidia is CoreWeave’s primary supplier, a strategic investor, and also a major customer. In 2023, Nvidia agreed to spend $1.3 billion over four years renting back GPU capacity from CoreWeave. Critics call it circular: Nvidia invests, CoreWeave buys Nvidia hardware, then Nvidia rents the capacity, and the whole loop makes the growth story look cleaner than it really is.

CoreWeave and Nvidia dispute that framing. CoreWeave says Nvidia’s investment dollars aren’t simply routed into chip purchases, but instead go toward data center expansion, research and development, and hiring. Nvidia’s case is that renting capacity is rational—it gets them the compute they need for testing and development without having to build and operate their own fleet of data centers. Both explanations can be true. But the entanglement is real, and it’s the kind of thing public-market investors tend to circle in red.

So why do customers pick CoreWeave over the hyperscalers? The company’s edge is specialization. AWS, Azure, and Google Cloud sell GPUs as one aisle in a massive general-purpose store. CoreWeave sells the GPU aisle, and nothing else—optimized from the network layer up for dense, high-throughput AI workloads. For training large models or running inference at scale, that focus can translate into practical advantages: faster deployment, tighter connectivity between GPUs in a cluster, and, CoreWeave says, pricing that can beat the hyperscalers for comparable work.

Which leads to the question that decides whether this is a generational company or a moment-in-time beneficiary: do specialists keep winning, or do the generalists eventually absorb the market? Sometimes a specialist is permanently advantaged because the workload is fundamentally different. Other times the platforms with the most scale eventually catch up, squeeze pricing, and make the niche player redundant.

AI infrastructure is now the test case. And where it lands on that spectrum will determine whether CoreWeave’s model is durable—or just perfectly timed.


VIII. Power Plays: Competition, Moats, and Market Dynamics

CoreWeave operates in a knife-fight of a market—one where the biggest players in tech are sprinting, not strolling. On one side are the hyperscalers: Amazon Web Services, Microsoft Azure, and Google Cloud. They’re pouring more than fifty billion dollars a year each into new infrastructure, racing to stand up more data centers and more GPU capacity. On the other side is a fast-growing bench of specialists—Lambda Labs, Crusoe Energy, and others—trying to do what CoreWeave did: build a cloud that’s purpose-built for GPUs and move faster than the giants.

The most fascinating—and most precarious—dynamic is that Microsoft is both CoreWeave’s largest customer and its most dangerous potential competitor. Azure sells GPU capacity that directly overlaps with CoreWeave’s core product. Yet Microsoft keeps renting huge amounts of compute from CoreWeave for a simple reason: it still can’t build quickly enough to satisfy its own demand, let alone the demand coming from Azure customers. In the short term, it’s symbiotic. In the long term, it’s loaded. Every dollar Microsoft spends expanding its own GPU fleet is a dollar it won’t need to spend with CoreWeave later.

Then there’s Nvidia. CoreWeave’s business is built on Nvidia hardware, and for the highest-end AI workloads there’s still no true substitute. AMD has been making real progress with its MI300X and follow-on chips, but Nvidia’s CUDA software ecosystem creates enormous switching costs. For many customers, moving off Nvidia isn’t just a procurement decision—it’s a rewrite-the-stack decision. That reality gives CoreWeave a powerful tailwind, but it also creates a dependency: CoreWeave’s trajectory is tethered to Nvidia’s roadmap and to the geopolitical risk embedded in the supply chain, especially the fact that Nvidia’s most advanced chips rely heavily on fabrication capacity in Taiwan through TSMC.

If you want to understand how CoreWeave manages those risks, look at the backlog. By the third quarter of 2025, the company reported $55.6 billion in total revenue backlog, nearly doubling from the prior quarter. That included an expanded OpenAI agreement worth up to $22.4 billion in total, and a Meta deal worth up to $14.2 billion through 2031. These weren’t vague “strategic partnerships.” They were binding contracts with committed minimums—exactly the kind of forward visibility that makes lenders less nervous and makes the debt load feel more survivable, even if still intense.

The Meta deal, announced in September 2025, carried extra weight because it attacked CoreWeave’s most obvious weak spot: customer concentration. Before Meta, Microsoft dominated the revenue picture, and the top two customers accounted for more than three-quarters of the total. Landing Meta as another anchor customer—potentially a billion-plus dollars a year—made the business feel less like a single-customer satellite. The market read it that way too: CoreWeave’s stock jumped twelve percent the day the deal was announced.

Could CoreWeave broaden beyond AI? In theory, yes. Its GPU-optimized infrastructure is also suited to video rendering, scientific computing, and other parallelized workloads. But in practice, AI is where the demand is, where the urgency is, and where the contracts are. CoreWeave’s recent growth has been overwhelmingly AI-driven, and the company’s identity is now inseparable from the AI buildout. Diversification may be possible over time, but it’s not the near-term story.

Which brings us to the question that decides whether CoreWeave is a durable institution or a perfectly timed wave rider: is the GPU shortage structural, or does it eventually clear? If AI demand continues to outrun supply, CoreWeave’s specialization stays valuable. If hyperscalers build enough capacity to meet demand—or if AI spending cools—CoreWeave’s core reason for existing starts to shrink. Citi analysts have estimated that AI capital expenditures from 2025 through 2029 could reach $2.8 trillion. If that’s even close to right, the runway is long enough for CoreWeave and the hyperscalers to coexist for years. But the market won’t price the company on runway alone. It’ll price it on what happens when the sprint turns into a marathon.


IX. Playbook: Lessons in Timing, Pivots, and Capital Allocation

CoreWeave’s journey—from crypto mining rigs in a garage to a company sitting under some of the world’s largest AI workloads—reads like a case study in how markets reward the right pivot at the right moment. Regardless of where you land on the stock, there’s a clear playbook here.

First: recognize when your assets are more valuable than your business.

In 2019, Atlantic Crypto had what looked like a dead-end operation and a lot of very expensive Nvidia GPUs. Most miners did the obvious thing when prices collapsed: they dumped the hardware and disappeared. The CoreWeave founders did something rarer. They zoomed out and asked a different question: if the mining business is broken, what’s the highest and best use of the machines?

That reframing—from “we mine crypto” to “we own GPU infrastructure”—is the pivot. It sounds clean in hindsight, but in the moment it required intellectual flexibility and a willingness to abandon the identity you started with.

Second: get positioned before the wave hits.

CoreWeave didn’t predict ChatGPT. But the founders understood something that traders build their careers on: scarcity is everything. GPU supply was tightening, demand signals were building, and when constraints show up in a commodity-like market, the winners are the ones who secured supply early.

So they acted early. The big H100 order. The data center buildout. The years spent cultivating Nvidia relationships. In business, being early often looks like being wrong—right up until it looks inevitable. CoreWeave’s timing wasn’t magic. It was conviction shaped by supply-and-demand instincts.

Third: treat capital strategy as a product, not a footnote.

CoreWeave’s growth wasn’t financed the way most startups scale. They helped pioneer the idea that GPUs could serve as collateral—convincing institutional lenders that Nvidia chips, when tied to long-term contracts, weren’t just hardware but income-producing assets that could secure large loans.

That unlocked speed. The 2023 Magnetar-Blackstone facility, the 2024 mega-facility, and the debt that followed all leaned on the same core insight: GPU infrastructure can be underwritten like an asset class. Without that, CoreWeave’s buildout would have been constrained by equity capital and time—two things the market wasn’t going to give them during an AI supply crunch.

Fourth: customer concentration can be both rocket fuel and a structural risk.

Microsoft drove enormous revenue growth, but it also created CoreWeave’s most obvious vulnerability. The company’s answer—locking in more massive commitments and expanding the roster with deals tied to OpenAI and Meta—was sensible. But there’s a deeper wrinkle that some analysts have pointed out: a meaningful portion of Microsoft demand was itself driven by OpenAI workloads. So even if the customer list broadens on paper, the business may still be heavily exposed to the health of the same underlying AI wave.

Finally, the pre-IPO founder liquidity question is complicated—and worth treating that way.

Those secondary sales gave founders and early employees a chance to take chips off the table before stepping into the volatility of public markets. Critics see that as a warning sign: if insiders really believed in the upside, why sell? The counterargument is just as rational: when your entire net worth is tied up in a single, highly levered, capital-intensive business, diversification isn’t betrayal—it’s risk management.

Both can be true. The signal isn’t clean. But the fact the market obsessed over it says something important: by the time CoreWeave went public, the story wasn’t just about growth anymore. It was about fragility, structure, and who was holding the risk when the cycle eventually turned.


X. Analysis: Bull vs. Bear Case

The Bull Case

The optimistic case for CoreWeave starts with a simple belief: the AI wave is still early. If large language models, image generators, video tools, and autonomous agents keep getting better—and keep finding real-world uses—then demand for GPU compute won’t just stay high. It’ll keep climbing.

From that angle, CoreWeave looks less like a risky upstart and more like a toll road. By the third quarter of 2025, it reported $55.6 billion of backlog—contracted work that, if delivered, turns into revenue. For a company that only recently broke out of niche workloads, that kind of visibility is rare. And the customer list reads like a map of the modern AI stack: Microsoft, OpenAI, Meta, and Nvidia.

The bull story also leans heavily on validation from the supplier that sits at the center of everything. Nvidia’s continued investment—capped by a newly announced two-billion-dollar stake in January 2026—signaled that the company with the clearest line of sight into GPU demand still liked what it saw. Around the same time, Deutsche Bank upgraded the stock to buy, pointing to a solid medium-term outlook. That matters, not because analysts are always right, but because it suggested CoreWeave was becoming legible to institutions that typically avoid fragile stories.

And then there’s optionality: the acquisition angle. If you believe AI compute remains constrained, CoreWeave’s infrastructure and customer relationships become strategically valuable. In that world, a hyperscaler buying CoreWeave—particularly Microsoft or Oracle, both battling GPU shortages—starts to look less like fantasy and more like a rational way to buy time.

The Bear Case

The pessimistic case starts with the same facts and reaches the opposite conclusion.

CoreWeave’s customer concentration is still extreme. Even with the Meta and OpenAI agreements, the business remains exposed to a small number of underlying demand engines. If OpenAI’s growth slows, if Microsoft renegotiates terms, or if Meta decides to shift more workloads onto its own internal infrastructure, CoreWeave could see revenue fall faster than it can replace it.

Then there’s leverage. CoreWeave had about fourteen billion dollars of debt while still not generating positive free cash flow. That’s not just “aggressive.” It’s a structural risk. Management’s capital expenditure trajectory—expected to push past twenty-five billion dollars in 2026—implies the company must keep accessing capital at reasonable prices. If credit tightens or equity markets sour, CoreWeave may be forced to slow buildouts. In a market where customers buy from whoever can deliver capacity fastest, slowing down can become a self-inflicted wound.

Finally, the hyperscalers are not standing still. AWS, Azure, and Google Cloud are spending tens of billions a year on infrastructure, and an increasing share of that is aimed directly at GPU capacity. If they close the supply gap, CoreWeave’s reason for existing narrows. And even if demand stays high, the big clouds have structural advantages CoreWeave can’t copy: global distribution, diversified customer bases, integrated software ecosystems, and effectively unlimited capital.

Framework Analysis

Viewed through Michael Porter’s competitive forces, this is a tough neighborhood. Supplier power is high because CoreWeave depends overwhelmingly on Nvidia. Buyer power is moderate—long-term contracts help—but it’s concentrated in a handful of massive customers. Substitution risk rises as hyperscalers expand. The technology itself isn’t a moat, but capital, execution speed, and Nvidia relationships create enormous practical barriers to entry. And rivalry is intense, because the competitors are some of the best-capitalized companies in history.

Under Hamilton Helmer’s Seven Powers, CoreWeave’s clearest strength is counter-positioning: hyperscalers have sprawling, general-purpose clouds and huge installed bases, which makes it harder for them to match a pure GPU-first platform quickly, at least in the near term. CoreWeave also benefits from switching costs—once customers deploy training pipelines and operational workflows, migration is painful—and from scale economies inside its narrow niche.

But the other powers are less convincing. Network effects are limited. Brand matters, but it’s not decisive. Process power is hard to claim in a market that’s evolving this fast. And the cornered resource advantage—the Nvidia relationship—may erode as Nvidia produces more chips and broadens its partner set.

Key Performance Indicators

If you’re tracking whether CoreWeave is winning the long game, two signals matter most.

First: backlog. Backlog growth is the cleanest read on whether AI infrastructure demand is still accelerating or starting to level off. The leap from roughly $30 billion to $55.6 billion in a single quarter in Q3 2025—driven by the OpenAI and Meta deals—was a shock. Repeating anything like that would strengthen the bull case; a stall would raise uncomfortable questions.

Second: capital efficiency. CoreWeave is spending staggering amounts to build capacity. The only thing that makes that rational is if each incremental wave of capex translates into durable revenue—and, eventually, free cash flow. If the capex required to produce each new dollar of revenue starts rising, it’s a sign of either overbuilding or pricing pressure. And either one cuts straight to the heart of the thesis.
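
One rough way to track that second signal, using only figures cited elsewhere in this piece and with the caveat that capacity is built a year or more ahead of the revenue it serves, is to divide each year's capex by that year's revenue growth:

```python
# Rough capital-efficiency check: capex per incremental dollar of revenue.
# Figures are the ones cited in this piece (2024 capex is proxied by the ~$6B
# negative cash flow; 2025 uses guidance midpoints). Pairing a year's capex with
# that same year's revenue growth is a simplification, because capacity is built
# ahead of the revenue it eventually serves.

data = {
    # year: (revenue $B, prior-year revenue $B, capex $B)
    2024: (1.92, 0.229, 6.0),
    2025: (5.10, 1.92, 13.0),
}

for year, (revenue, prior, capex) in data.items():
    new_revenue = revenue - prior
    ratio = capex / new_revenue
    print(f"{year}: ${capex:.1f}B of capex for ${new_revenue:.2f}B of new revenue"
          f" -> {ratio:.1f}x")
```

A ratio that keeps climbing is the overbuilding-or-pricing-pressure signal the thesis can't absorb; one that flattens or falls as contracted capacity comes online is the bull case working.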


XI. Epilogue: What CoreWeave Means for the AI Era

The question that will decide CoreWeave’s place in business history is deceptively simple: is AI infrastructure becoming a permanent layer of the technology stack, or is it a one-time buildout that eventually settles into commodity economics—owned and optimized by the hyperscalers?

If it’s the former, CoreWeave can become something durable: a specialist that lives in the gap between Nvidia’s hardware and the AI applications that need it. The closest analogy isn’t another cloud company. It’s independent power producers that emerged after electricity markets deregulated—companies that owned and ran generation assets, selling power to utilities that couldn’t build capacity fast enough to keep up with demand.

If it’s the latter, CoreWeave starts to look like a transitional player—created by a moment when Azure, AWS, and Google simply couldn’t stand up GPU capacity quickly enough, and vulnerable the moment they can. The darker analogy is the fiber builders of the late 1990s: massive infrastructure rushed into the ground during the dot-com boom, only for economics to collapse when supply got ahead of demand.

Reality is probably somewhere between those extremes. The AI infrastructure market is almost certainly large enough for specialists to coexist with hyperscalers. But the competition will be ruthless. The companies that survive will have to keep executing—staying ahead on deployment speed, controlling costs, and keeping customers locked in even as the giants catch their breath.

CoreWeave’s arc—from a single GPU on a pool table to a company valued at more than $46 billion, backed at various points by Nvidia, Microsoft, OpenAI, and Meta—is one of the strangest and most telling corporate transformations of the last decade. Whether it ends up as a cautionary tale about leverage and hype, or a foundational chapter in the buildout of the AI era, hinges on variables nobody can fully price today—not the founders, not the lenders, not public-market investors.

What is clear is this: three commodities traders from New Jersey saw the bottleneck before most of the world did. They secured supply, financed the buildout, and wedged themselves into the narrowest part of the AI supply chain.

The market will eventually render its verdict. For now, the infrastructure keeps rising, the contracts keep getting signed, and the GPUs keep humming—inside data centers spanning two continents—turning electricity into the computations that may reshape the world.


XII. Recent News

By the third quarter of 2025, CoreWeave was big enough that quarterly results started to feel like macro data points rather than startup milestones. Revenue came in at $1.36 billion, up 134 percent year over year and ahead of analyst expectations. Adjusted EBITDA was $838 million, a sixty-one percent margin, and the net loss narrowed to $110 million from $360 million a year earlier. But the quarter also showed what “hypergrowth with heavy construction” looks like in practice: CoreWeave guided full-year 2025 revenue to $5.05 billion to $5.15 billion, a touch below consensus, pointing to delays from a third-party data center developer.

The biggest story, though, was customer expansion—because customer concentration has always been the pressure point. CoreWeave expanded its agreement with OpenAI by up to $6.5 billion, bringing the total contract value to roughly $22.4 billion. In September 2025, it signed a six-year deal with Meta worth up to $14.2 billion, which included access to Nvidia’s latest GB300 systems. Those wins helped dilute Microsoft’s share of revenue, even as Microsoft remained the single largest customer.

Meanwhile, the company’s attempt to buy its way into more control over power and facilities ran into the realities of public shareholders. The proposed all-stock acquisition of Core Scientific, valued around nine billion dollars, was terminated in October 2025 after Core Scientific shareholders voted it down. CoreWeave said it would keep the commercial partnership in place while looking for other ways to verticalize its data center footprint.

Then, in January 2026, Nvidia doubled down. It invested an additional two billion dollars in CoreWeave at $87.20 per share, roughly doubling its equity stake. CoreWeave said the money would fund data center expansion and help support plans to build more than five gigawatts of AI computing capacity by the end of the decade. Separately, Nvidia also agreed to buy more than six billion dollars of services from CoreWeave through 2032.

As a public company, the stock told its own story: conviction, doubt, and whiplash. In its first year after the IPO, shares traded between $33.52 and $187. By late January 2026, the stock sat around $93, putting CoreWeave’s market cap at roughly $46 billion. An investor lawsuit tied to AI data center delays added another layer of uncertainty. And operationally, the buildout kept spreading outward: data centers were already running in the U.K., with new facilities under construction in Norway, Sweden, and Spain.


If you want to go deeper on CoreWeave, start with the primary sources.

CoreWeave’s S-1, available through the SEC’s EDGAR system, is the most complete snapshot of the business as it went public—financials, risk factors, customer concentration, and the mechanics of how the company actually runs. From there, the company’s investor relations site at investors.coreweave.com is where you’ll find quarterly earnings releases, investor presentations, and press releases.

For reporting context, CNBC’s coverage of CoreWeave’s path from crypto mining to IPO—especially the March 2025 reporting on Nvidia’s multi-year involvement—is useful for understanding just how intertwined the Nvidia relationship became.

If you want the skeptical view, the Kerrisdale Capital short report from September 2025 lays out the clearest bear case, with particular focus on circular financing dynamics and customer concentration risk.

To zoom out from CoreWeave to the broader landscape, Citi’s research projecting roughly $2.8 trillion in AI capital expenditures through 2029 provides a helpful macro frame for why the GPU arms race has been so intense.

For a more human lens, TechCrunch’s post-IPO interview with Brian Venturo offers rare firsthand detail on the founding team’s mentality and the scrappy early years.

And for the most current read on what management is saying in real time, quarterly earnings call transcripts—available through Seeking Alpha and CoreWeave’s investor relations page—are the best place to follow strategy, guidance, and how the company explains its buildout pace.

Last updated: 2026-02-01