CoreWeave
CoreWeave did not start in a boardroom chasing AI headlines. It started in the trenches of volatility, where Michael Intrator, Brian Venturo, and Brannin McBee were trading energy futures and learning to stomach risk when the numbers swing and the clock does not care. In 2017, alongside technologist Peter Salanki, they built Atlantic Crypto and stacked GPUs to mine Ethereum at scale. Then the market cracked. Most would have sold the metal and walked. They did the opposite. They kept the machines and changed the question: what else can this compute do when the world needs more than coins? That question now echoes across the startup ecosystem, where compute is no longer a backend detail but the front line of competition.
That pivot became CoreWeave, now headquartered in New Jersey and operating as a purpose-built AI cloud with one clear mandate: be the essential cloud for AI. Not a generalist trying to please everyone, but a specialist tuned for workloads that break traditional systems. GPU-dense compute, high-performance networking, storage that keeps pace with models that do not sleep, and a Kubernetes-native stack that moves teams from idea to production without wasting weeks on setup. In a market where speed defines survival, CoreWeave sells time back to builders.
Michael Intrator operates as CEO, President, and Chairman. Brian Venturo moves with precision as Chief Strategy Officer. Brannin McBee drives expansion as Chief Development Officer. Peter Salanki holds the architecture together as Chief Technology Officer. Four founders, still close to the machine, still making decisions where infrastructure meets consequence. That proximity shows up in how the company executes and how it earns trust from the most demanding customers in AI.
The numbers are not subtle. Revenue scaled from $16M in 2022 to $229M in 2023, then to $1.9B in 2024. Remaining performance obligations reached $15.1B by the end of 2024. More than 250,000 GPUs across 32 data centers, powered by over 360 MW of active capacity with more already contracted. Customers include OpenAI, Microsoft, Meta, IBM, Mistral, NVIDIA, and Cohere. These are not casual users. These are companies where milliseconds matter and downtime costs real money.
Under the hood, CoreWeave leans into control. Proprietary lifecycle systems manage node health, orchestration layers keep clusters tight, and observability operates like a live command center. In MLPerf benchmarks, their clusters have delivered training results that compress timelines in ways that change how fast teams can iterate. Faster training becomes faster insight, and faster insight compounds into market advantage.
The strategy is disciplined. Long-term, take-or-pay contracts create revenue visibility. Deep integration with customers turns infrastructure into partnership. This is not transactional compute. This is embedded capacity that grows with the customer and scales with their ambition.
For builders, this is where it gets real. CoreWeave is hiring across engineering, infrastructure, operations, and go-to-market. The work sits at the intersection of distributed systems, hardware, and real-world impact. The kind of environment where your output does not sit in staging. It trains models, powers products, and quietly underwrites the next wave of AI companies.
If you are tracking where AI is actually built, not just discussed, CoreWeave belongs in your field of view. And if you are the kind who prefers to build the engine instead of riding in the passenger seat, there is a seat open at the table.