CoreWeave CEO Michael Intrator discusses the competitive AI landscape
One of the companies NVIDIA has backed to meet the demand for specialized cloud infrastructure to power the AI boom is CoreWeave. Last week the startup raised $1.1 billion in funding that has reportedly grown its valuation to $19 billion. Joining me now exclusively to discuss is the CEO and co-founder of CoreWeave, Michael Intrator. Michael, it's great to have you back on the show. I want to start right there with your response to Huang's comments about this renaissance, especially given the fact that you are sitting on such huge access to those GPUs that have been so hard to come by from NVIDIA.

So first of all, thank you for having me back. It's great to be back following Jensen and Idris. I feel like those are some pretty big shoes for me to fill. But look, it's been a phenomenal run for us, and the $1.1 billion has just been an incredible validation. The investors that have come in for us have been an incredible validation of the way that we're approaching building the infrastructure that is really standing up the revolution that's going on within artificial intelligence. So it's been a great run for us, and we're really excited about where we are and excited about how we're going to progress over the next couple of years.

So how are you going to progress over the next couple of years, especially since you have been growing your footprint pretty dramatically? Last year you expanded your data center footprint from 3 to 14. What does $1.1 billion now enable you to do?

Yeah, so we're expanding in a lot of different ways. By the end of the year we will be in 28 data centers. We've expanded into Europe. We have an office in London. We have data centers in London and Spain and up in the Nordics.
We have data centers across the US, and it's really put us in an incredible position not only to invest in the infrastructure, but to invest in the people and to invest in the software stack that is so necessary to be able to manage this type of infrastructure at such amazing scale, to enable us to really build the intelligence that Jensen was speaking to.

When you talk about investing in the software stack, what does that look like, especially as we are 24 hours out from Huang talking about the role of NVIDIA in inferencing, for example? I mean, we know that CoreWeave is an alternative cloud provider; you're basically taking on the hyperscalers. So what does that enable through that process?

Yeah, it's an interesting position that we play in the market, right, where we really built our business from the ground up to be able to serve the particular use cases and the particular way that compute is consumed by these artificial intelligence and kind of massive-scale parallelized consumers. The amount of compute that they consume, the way that they compute, the type of compute, the speed and flexibility they require. We've really built our software stack and our hardware stack right up from the ground, making the decisions without any compromise, to be able to deliver the highest-quality experience to those types of consumers. We're not built like the legacy clouds, which were built to support everything for everybody. We're really trying to narrow down on a specific use case, on a specific way that compute was going to be used, and make sure that we built from the ground up to be as good as you can be at that one particular way that compute gets consumed to build artificial intelligence.
So then I guess, how does that speak to how this competitive landscape is evolving? Because on the one hand you're competing with the hyperscalers; on the other, I would imagine there's an increasing opportunity to supplement capacity. Case in point, perhaps, your deal with Microsoft last year. And you're seeing some of these deep-pocketed incumbents, Microsoft, Amazon, Alphabet, get more competitive, potentially more competitive on price. They're also starting to custom-design their own chips. So as that begins to roll out, what happens next?

Yeah, so look, once again, we really have built our business on the idea that our solution is the correct solution. It's really been built to be able to support these types of consumers, and these types of consumers can be AI labs. When you think about going out and using artificial intelligence, when you're sitting at your desk and you make a query, you make an inference request on the models, there's a reasonably good chance that at the end of the day you're going to be hitting our infrastructure with those queries. And so our business plan is really built around the idea that, by building such an excellent environment for delivering this compute, regardless of whether it is for training of these models or serving of these models, we're really built to be able to stand that up and deliver compute regardless of whether it's to a cloud player, to an AI lab, or to an enterprise consumer.