Good afternoon. My name is Regina and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's first quarter earnings call.

Good afternoon, everyone, and welcome to NVIDIA's conference call for the first quarter of fiscal 2025. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. Let me turn the call over to Colette.

Q1 was another record quarter. Revenue of $26 billion was up 18% sequentially and up 262% year on year, and well above our outlook of $24 billion.

Starting with data center. Data center revenue of $22.6 billion was a record, up 23% sequentially and up 427% year on year, driven by continued strong demand for the NVIDIA Hopper GPU computing platform. Compute revenue grew more than 5x and networking revenue more than 3x from last year. Strong sequential data center growth was driven by all customer types, led by enterprise and consumer internet companies. Large cloud providers continued to drive strong growth as they deploy and ramp NVIDIA AI infrastructure at scale, and represented the mid-40s as a percentage of our data center revenue.

Training and inferencing AI on NVIDIA CUDA is driving meaningful acceleration in cloud rental revenue growth, delivering an immediate and strong return on cloud providers' investment. For every $1 spent on NVIDIA AI infrastructure, cloud providers have an opportunity to earn $5 in GPU instance hosting revenue over four years. NVIDIA's rich software stack and ecosystem and tight integration with cloud providers make it easy for end customers to get up and running on NVIDIA GPU instances in the public cloud. For cloud rental customers, NVIDIA GPUs offer the best time to train models, the lowest cost to train models, and the lowest cost to inference large language models. For public cloud providers, NVIDIA brings customers to their cloud, driving revenue growth and returns on their infrastructure investments.

Leading LLM companies such as OpenAI, Adept, Anthropic, Character.AI, Cohere, Databricks, DeepMind, Meta, Mistral, xAI, and many others are building on NVIDIA AI in the cloud.

Enterprises drove strong sequential growth in data center this quarter. We supported Tesla's expansion of their training AI cluster to 35,000 H100 GPUs. Their use of NVIDIA AI infrastructure paved the way for the breakthrough performance of FSD version 12, their latest autonomous driving software based on vision. Video transformers, while consuming significantly more computing, are enabling dramatically better autonomous driving capabilities and propelling significant growth for NVIDIA AI infrastructure across the automotive industry. We expect automotive to be our largest enterprise vertical within data center this year, driving a multibillion-dollar revenue opportunity across on-prem and cloud consumption.

Consumer internet companies are also a strong growth vertical. A big highlight this quarter was Meta's announcement of Llama 3, their latest large language model, which was trained on a cluster of 24,000 H100 GPUs. Llama 3 powers Meta AI, a new AI assistant available on Facebook, Instagram, WhatsApp, and Messenger. Llama 3 is openly available and has kick-started a wave of AI development across industries.
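The "$1 spent, $5 earned" figure above is a simple four-year payback ratio. A minimal sketch of the arithmetic, assuming hypothetical inputs: the per-GPU cost, rental rate, and utilization below are illustrative assumptions, not NVIDIA or cloud-provider pricing.

```python
# Illustrative sketch of the "$1 in, $5 out over four years" cloud economics
# claim above. All dollar inputs are hypothetical assumptions, not quoted prices.

CAPEX_PER_GPU = 30_000        # assumed all-in infrastructure cost per GPU ($)
RENTAL_RATE_PER_HOUR = 5.00   # assumed GPU instance rental price ($/hr)
UTILIZATION = 0.85            # assumed fraction of hours actually rented
YEARS = 4                     # horizon used in the call

hours = YEARS * 365 * 24
revenue_per_gpu = RENTAL_RATE_PER_HOUR * UTILIZATION * hours
revenue_per_dollar = revenue_per_gpu / CAPEX_PER_GPU

print(f"Rental revenue per GPU over {YEARS} years: ${revenue_per_gpu:,.0f}")
print(f"Revenue per $1 of infrastructure spend:   ${revenue_per_dollar:.2f}")
```

With these assumed inputs the ratio lands near the $5 quoted on the call; actual pricing and utilization vary widely by provider.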
As generative AI makes its way into more consumer internet applications, we expect to see continued growth opportunities as inference scales, both with model complexity as well as with the number of users and the number of queries per user, driving much more demand for AI compute. In our trailing four quarters, we estimate that inference drove about 40% of our data center revenue. Both training and inference are growing significantly.

Large clusters, like the ones built by Meta and Tesla, are examples of the essential infrastructure for AI production, what we refer to as AI factories. These next-generation data centers host advanced, full-stack accelerated computing platforms where data comes in and intelligence comes out. We worked with over 100 customers building AI factories ranging in size from hundreds to tens of thousands of GPUs, with some reaching 100,000 GPUs.

From a geographic perspective, data center revenue continues to diversify as countries around the world invest in sovereign AI. Sovereign AI refers to a nation's capability to produce artificial intelligence using its own infrastructure, data, workforce, and business networks. Nations are building up domestic computing capacity through various models. Some are procuring and operating sovereign AI clouds in collaboration with state-owned telecommunication providers or utilities. Others are sponsoring local cloud partners to provide a shared AI computing platform for public- and private-sector use. For example, Japan plans to invest more than $740 million in key digital infrastructure providers, including KDDI, Sakura Internet, and SoftBank, to build out the nation's sovereign AI infrastructure. France-based Scaleway, a subsidiary of the Iliad Group, is building Europe's most powerful cloud-native AI supercomputer. In Italy, Swisscom Group will build the nation's first and most powerful NVIDIA DGX-powered supercomputer to develop the first LLM natively trained in the Italian language. And in Singapore, the National Supercomputing Centre is getting upgraded with NVIDIA Hopper GPUs, while Singtel is building NVIDIA accelerated AI factories across Southeast Asia. NVIDIA's ability to offer end-to-end compute-to-networking technologies, full-stack software, AI expertise, and a rich ecosystem of partners and customers allows sovereign AI and regional cloud providers to jumpstart their countries' AI ambitions. From nothing the previous year, we believe sovereign AI revenue can approach the high single-digit billions this year. The importance of AI has caught the attention of every nation.

We ramped new products designed specifically for China that don't require an export control license. Our data center revenue in China is down significantly from the level prior to the imposition of the new export control restrictions in October. We expect the market in China to remain very competitive going forward.

From a product perspective, the vast majority of compute revenue was driven by our Hopper GPU architecture. Demand for Hopper during the quarter continued to increase. Thanks to CUDA algorithm innovations, we've been able to accelerate LLM inference on the H100 by up to 3x, which can translate to a 3x cost reduction for serving popular models like Llama 3. We started sampling the H200 in Q1 and are currently in production, with shipments on track for Q2. The first H200 system was delivered by Jensen to Sam Altman and the team at OpenAI and powered their amazing GPT-4o demos last week.
The H200 nearly doubles the inference performance of the H100, delivering significant value for production deployments. For example, using Llama 3 with 700 billion parameters, a single NVIDIA HGX H200 server can deliver 24,000 tokens per second, supporting more than 2,400 users at the same time. That means for every $1 spent on NVIDIA HGX H200 servers at current prices per token, an API provider serving Llama 3 tokens can generate $7 in revenue over four years.

With ongoing software optimizations, we continue to improve the performance of NVIDIA AI infrastructure for serving AI models. While supply for the H100 improved, we are still constrained on the H200. At the same time, Blackwell is in full production. We are working to bring up our system and cloud partners for global availability later this year. Demand for H200 and Blackwell is well ahead of supply, and we expect demand may exceed supply well into next year.

The Grace Hopper Superchip is shipping in volume. Last week at the International Supercomputing Conference, we announced that nine new supercomputers worldwide are using Grace Hopper for a combined 200 exaflops of energy-efficient AI processing power delivered this year. These include the Alps supercomputer at the Swiss National Supercomputing Centre, the fastest AI supercomputer in Europe; Isambard-AI at the University of Bristol in the UK; and JUPITER at the Jülich Supercomputing Centre in Germany. We are seeing an 80% attach rate of Grace Hopper in supercomputing due to its high energy efficiency and performance. We are also proud to see supercomputers powered with Grace Hopper take the number one, number two, and number three spots of the most energy-efficient supercomputers in the world.

In networking, strong year-on-year growth was driven by InfiniBand. We experienced a modest sequential decline, which was largely due to the timing of supply, with demand well ahead of what we were able to ship. We expect networking to return to sequential growth in Q2.

In the first quarter, we started shipping our new Spectrum-X Ethernet networking solution, optimized for AI from the ground up. It includes our Spectrum-4 switch, BlueField-3 DPU, and new software technologies that overcome the challenges of AI on Ethernet to deliver 1.6x higher networking performance for AI processing compared with traditional Ethernet. Spectrum-X is ramping in volume with multiple customers, including a massive 100,000-GPU cluster. Spectrum-X opens a brand-new market to NVIDIA networking and enables Ethernet-only data centers to accommodate large-scale AI. We expect Spectrum-X to jump to a multibillion-dollar product line within a year.

At GTC in March, we launched our next-generation AI factory platform, Blackwell. The Blackwell GPU architecture delivers up to 4x faster training and 30x faster inference than the H100 and enables real-time generative AI on trillion-parameter large language models. Blackwell is a giant leap, with up to 25x lower TCO and energy consumption than Hopper. The Blackwell platform includes the fifth-generation NVLink with a multi-GPU spine and new InfiniBand and Ethernet switches, the X800 series, designed for trillion-parameter-scale AI. Blackwell is designed to support data centers universally, from hyperscale to enterprise, training to inference, x86 to Grace CPUs, Ethernet to InfiniBand networking, and air cooling to liquid cooling. Blackwell will be available in over 100 OEM and ODM systems at launch, more than double the number of Hopper's launch, representing every major computer maker in the world.
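The $1-to-$7 serving economics quoted at the top of this passage can be reproduced with simple arithmetic. A minimal sketch: the 24,000 tokens-per-second throughput and the four-year horizon come from the call, while the utilization, token price, and server cost below are hypothetical assumptions.

```python
# Illustrative sketch of the H200 token-serving economics quoted above
# ($1 of HGX H200 server spend -> ~$7 of token revenue over four years).
# Throughput is from the call; price and cost inputs are assumptions.

TOKENS_PER_SECOND = 24_000       # single HGX H200 server serving Llama 3 (from the call)
UTILIZATION = 0.80               # assumed fraction of time spent serving
PRICE_PER_MILLION_TOKENS = 0.80  # assumed API price ($ per 1M tokens)
SERVER_COST = 280_000            # assumed HGX H200 server price ($)
YEARS = 4                        # horizon used in the call

seconds = YEARS * 365 * 24 * 3600
tokens_served = TOKENS_PER_SECOND * UTILIZATION * seconds
revenue = tokens_served / 1e6 * PRICE_PER_MILLION_TOKENS

print(f"Tokens served over {YEARS} years: {tokens_served:,.0f}")
print(f"Token revenue: ${revenue:,.0f}")
print(f"Revenue per $1 of server spend: ${revenue / SERVER_COST:.2f}")
```

Under these assumed prices the ratio comes out near the $7 cited; real token prices and server costs were not disclosed on the call.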
This will support fast and broad adoption across customer types, workloads, and data center environments in the first year of shipments. Blackwell time-to-market customers include Amazon, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI.

We announced a new software product with the introduction of NVIDIA Inference Microservices, or NIM. NIM provides secure and performance-optimized containers powered by NVIDIA CUDA acceleration and network computing and inference software, including Triton Inference Server and TensorRT-LLM, with industry-standard APIs for a broad range of use cases, including large language models for text, speech, imaging, vision, robotics, genomics, and digital biology. NIMs enable developers to quickly build and deploy generative AI applications using leading models from NVIDIA, AI21, Adept, Cohere, Getty Images, and Shutterstock, and open models from Google, Hugging Face, Meta, Microsoft, Mistral AI, and Stability AI. NIMs will be offered as part of our NVIDIA AI Enterprise software platform for production deployment in the cloud or on-prem.

Moving to gaming and AI PCs. Gaming revenue of $2.65 billion was down 8% sequentially and up 18% year on year, consistent with our outlook for a seasonal decline. The GeForce RTX Super GPUs' market reception is strong, and end demand and channel inventory remain healthy across the product range. From the very start of our AI journey, we equipped GeForce RTX GPUs with CUDA Tensor Cores. Now, with an installed base of over 100 million, GeForce RTX GPUs are perfect for gamers, creators, and AI enthusiasts, and offer unmatched performance for running generative AI applications on PCs. NVIDIA has a full technology stack for deploying and running fast and efficient generative AI inference on GeForce RTX PCs. TensorRT-LLM now accelerates Microsoft's Phi-3 Mini model and Google's Gemma 2B and 7B models, as well as popular AI frameworks, including LangChain and LlamaIndex. Yesterday, NVIDIA and Microsoft announced AI performance optimizations for Windows to help run LLMs up to 3x faster on NVIDIA GeForce RTX AI PCs. And top game developers, including NetEase Games, Tencent, and Ubisoft, are embracing NVIDIA Avatar Cloud Engine (ACE) to create lifelike avatars that transform interactions between gamers and non-playable characters.

Moving to Pro Visualization. Revenue of $427 million was down 8% sequentially and up 45% year on year. We believe generative AI and Omniverse industrial digitalization will drive the next wave of professional visualization growth. At GTC, we announced new Omniverse Cloud APIs to enable developers to integrate Omniverse industrial digital twin and simulation technologies into their applications. Some of the world's largest industrial software makers are adopting these APIs, including Ansys, Cadence, 3DEXCITE (a Dassault Systèmes brand), and Siemens. And developers can use them to stream industrial digital twins to spatial computing devices such as Apple Vision Pro. Omniverse Cloud APIs will be available on Microsoft Azure later this year. Companies are using Omniverse to digitalize their workflows. Omniverse-powered digital twins enabled Wistron, one of our manufacturing partners, to reduce end-to-end production cycle times by 50% and defect rates by 40%. And BYD, the world's largest electric vehicle maker, is adopting Omniverse for virtual factory planning and retail configurations.

Moving to Automotive. Revenue was $329 million, up 17% sequentially and up 11% year on year.
Sequential growth was driven by the ramp of AI cockpit solutions with global OEM customers and strength in our self-driving platforms. Year-on-year growth was driven primarily by self-driving. We supported Xiaomi in the successful launch of its first electric vehicle, the SU7 sedan, built on NVIDIA DRIVE Orin, our AI car computer for software-defined AV fleets. We also announced a number of new design wins on NVIDIA DRIVE Thor, the successor to Orin, powered by the new NVIDIA Blackwell architecture, with several leading EV makers, including BYD, XPENG, GAC's AION Hyper, and Nuro, slated for production vehicles starting next year.

Moving to the rest of the P&L. GAAP gross margin expanded sequentially to 78.4% and non-GAAP gross margin to 78.9% on lower inventory charges. As noted last quarter, both Q4 and Q1 benefited from favorable component costs. Sequentially, GAAP operating expenses were up 10% and non-GAAP operating expenses were up 13%, primarily reflecting higher compensation-related costs and increased compute and infrastructure investments.

In Q1, we returned $7.8 billion to shareholders in the form of share repurchases and cash dividends. Today, we announced a 10-for-1 split of our shares, with June 10 as the first day of trading on a split-adjusted basis. We are also increasing our dividend by 150%.

Let me turn to the outlook for the second quarter. Total revenue is expected to be $28 billion, plus or minus 2%. We expect sequential growth in all market platforms. GAAP and non-GAAP gross margins are expected to be 74.8% and 75.5%, respectively, plus or minus 50 basis points, consistent with our discussion last quarter. For the full year, we expect gross margins to be in the mid-70s percent range. GAAP and non-GAAP operating expenses are expected to be approximately $4.0 billion and $2.8 billion, respectively. Full-year operating expenses are expected to grow in the low-40% range. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $300 million, excluding gains and losses from non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website.

I would like to now turn it over to Jensen, as he would like to make a few comments.

Thanks, Colette. The industry is going through a major change. Before we start Q&A, let me give you some perspective on the importance of the transformation. The next industrial revolution has begun. Companies and countries are partnering with NVIDIA to shift the trillion-dollar installed base of traditional data centers to accelerated computing and build a new type of data center, AI factories, to produce a new commodity: artificial intelligence. AI will bring significant productivity gains to nearly every industry and help companies be more cost- and energy-efficient while expanding revenue opportunities.

CSPs were the first generative AI movers. With NVIDIA, CSPs accelerated workloads to save money and power. The tokens generated by NVIDIA Hopper drive revenues for their AI services, and NVIDIA cloud instances attract rental customers from our rich ecosystem of developers. Strong and accelerating demand for generative AI training and inference on the Hopper platform propels our data center growth.
Training continues to scale as models learn to be multimodal, understanding text, speech, images, video, and 3D, and learn to reason and plan.

Our inference workloads are growing incredibly. With generative AI, inference, which is now about token generation at massive scale, has become incredibly complex. Generative AI is driving a from-foundation-up, full-stack computing platform shift that will transform every computer interaction. From today's information retrieval model of computing, we are shifting to an answers and skills generation model. AI will understand context and our intentions, be knowledgeable, reason, plan, and perform tasks. We are fundamentally changing how computing works and what computers can do: from general-purpose CPU to GPU accelerated computing, from instruction-driven software to intention-understanding models, from retrieving information to performing skills, and, at the industrial level, from producing software to generating tokens, manufacturing digital intelligence.

Token generation will drive a multiyear build-out of AI factories. Beyond cloud service providers, generative AI has expanded to consumer internet companies and enterprise, sovereign AI, automotive, and healthcare customers, creating multiple multibillion-dollar vertical markets.

The Blackwell platform is in full production and forms the foundation for trillion-parameter-scale generative AI. The combination of the Grace CPU, Blackwell GPUs, NVLink, Quantum and Spectrum NICs and switches, high-speed interconnects, and a rich ecosystem of software and partners lets us expand and offer a richer and more complete solution for AI factories than previous generations. Spectrum-X opens a brand-new market for us, bringing large-scale AI to Ethernet-only data centers. And NVIDIA NIMs is our new software offering that delivers enterprise-grade, optimized generative AI to run on CUDA everywhere, from the cloud to on-prem data centers to RTX AI PCs, through our expansive network of ecosystem partners. From Blackwell to Spectrum-X to NIMs, we are poised for the next wave of growth. Thank you.

Thank you, Jensen. We will now open the call for questions. Operator, could you please poll for questions?

At this time, I would like to remind everyone that in order to ask a question, press star, then the number one on your telephone keypad. We'll pause for just a moment to compile the Q&A roster. As a reminder, please limit yourself to one question. Our first question comes from the line of Stacy Rasgon with Bernstein. Please go ahead.

Hi, guys. Thanks for taking my questions. My first one: I wanted to drill a little bit into the Blackwell comment that it's in full production now. What does that suggest with regard to shipments and delivery timing, given that the product doesn't sound like it's sampling anymore? And when does it reach customers' hands, if it's in production now?

Well, we've been in production for a little bit of time. Our production shipments will start in Q2 and ramp in Q3, and customers should have data centers stood up in Q4. We will see a lot of Blackwell revenue this year.

Our next question will come from the line of Timothy Arcuri with UBS. Please go ahead.

Thanks a lot. I wanted to ask Jensen about the deployment of Blackwell versus Hopper, just given the systems nature of it and all the demand for GB200 that you have. How does the deployment of this differ from Hopper? I ask because liquid cooling at scale hasn't been done before, and there are some engineering challenges both at the node level and within the data center.
So do these complexities elongate the transition, and how do you think about how it's all going? Thanks.

Blackwell comes in many configurations. Blackwell is a platform, not a GPU, and the platform includes support for air-cooled and liquid-cooled, x86 and Grace, InfiniBand and now Spectrum-X, and the very large NVLink domain that I demonstrated at GTC. For some customers, they will ramp into their existing installed base of data centers that are already shipping Hoppers; they will easily transition from the H100 to the H200 to the B100. Blackwell systems have been designed to be backwards compatible, if you will, electrically and mechanically. And of course, the software stack that runs on Hopper will run fantastically on Blackwell. We also have been priming the pump, if you will, with the entire ecosystem, getting them ready for liquid cooling. We've been talking to the ecosystem about Blackwell for quite some time. The CSPs, the data centers, the ODMs, the system makers, our supply chain beyond them, the cooling supply chain base, the liquid cooling supply chain base, the data center supply chain base: no one is going to be surprised when Blackwell comes, or by the capabilities that we would like to deliver with Grace Blackwell. GB200 is going to be exceptional.

Our next question will come from the line of Vivek Arya with Bank of America Securities. Please go ahead.

Jensen, how are you ensuring that there is enough utilization of your products, and that there isn't pull-ahead or holding behavior because of tight supply, competition, or other factors? Basically, what checks have you built into the system to give us confidence that monetization is keeping pace with your very, very strong shipment growth?

Well, I guess there's the big-picture view that I'll come to, but I'll answer your question directly. The demand for GPUs in all the data centers is incredible. We're racing every single day, and the reason for that is because applications like ChatGPT and GPT-4o, which is now going to be multimodal, and Gemini and its ramp, and Anthropic, and all of the work that's being done at all the CSPs are consuming every GPU that's out there. There's also a long line of generative AI startups, some 15,000 to 20,000 startups, in all different fields: from multimedia to digital characters, all kinds of design tools, application productivity applications, digital biology, and the moving of the AV industry to video so that they can train end-to-end models to expand the operating domain of self-driving cars. The list is just quite extraordinary. We're racing, actually. Customers are putting a lot of pressure on us to deliver the systems and stand them up as quickly as possible. And of course, I haven't even mentioned all of the sovereign AIs who would like to train on the regional natural resource of their country, which is their data, to train their regional models, and there's a lot of pressure to stand those systems up. So anyhow, the demand, I think, is really, really high, and it outstrips our supply.

Longer term is the reason why I jumped in to make a few comments. Longer term, we're completely redesigning how computers work.
This is a platform shift. Of course, it has been compared to other platform shifts in the past, but time will clearly tell that this is much, much more profound than previous platform shifts. The reason is that the computer is no longer an instruction-driven-only computer. It's an intention-understanding computer. It understands, of course, the way we interact with it, but it also understands our meaning, what we intend when we ask it to do something, and it has the ability to reason and inference iteratively to process a plan and come back with a solution. And so every aspect of the computer is changing in such a way that, instead of retrieving prerecorded files, it is now generating contextually relevant, intelligent answers. That's going to change computing stacks all over the world. You saw at Build that, in fact, even the PC computing stack is going to get revolutionized. And this is just the beginning: what people see today are the beginning of the things that we're working on in our labs and the things that we're doing with all the startups and large companies and developers all over the world. It's going to be quite extraordinary.

Our next question will come from the line of Joe Moore with Morgan Stanley. Please go ahead.

Great, thank you. I understand what you just said about how strong demand is; you have a lot of demand for H200 and for Blackwell products. Do you anticipate any kind of pause with Hopper and H100 as you migrate to those products? Will people wait for those new products, which would be a good product to have, or do you think there's enough demand for H100 to sustain growth?

We see increasing demand for Hopper through this quarter, and we expect demand to outstrip supply for some time, as we now transition to the H200 and as we transition to Blackwell. Everybody is anxious to get their infrastructure online, and the reason for that is because they're saving money and making money, and they would like to do that as soon as possible.

Our next question will come from the line of Toshiya Hari with Goldman Sachs. Please go ahead.

Hi, thank you so much for taking the question. Jensen, I wanted to ask about competition. I think many of your cloud customers have announced new or updated internal programs in parallel to what they're working on with you. To what extent do you consider them competitors, medium to long term? And in your view, do you think they're limited to addressing mostly internal workloads, or could they be broader in what they address going forward? Thank you.

We're different in several ways. First, NVIDIA's accelerated computing architecture allows customers to process every aspect of their pipeline, from unstructured data processing to prepare for training, to structured data processing, data frame processing like SQL to prepare for training, to training, to inference. And as I was mentioning in my remarks, inference has really fundamentally changed. It's now generation. It's not trying to just detect the cat, which was plenty hard in itself; it has to generate every pixel of a cat. And so the generation process is a fundamentally different processing architecture.
It's one of the reasons why TensorRT-LLM was so well received: we improved the performance, using the same chips on our architecture, by a factor of three. That kind of tells you something about the richness of our architecture and the richness of our software. So, one, you can use NVIDIA for everything from computer vision to image processing to computer graphics to all modalities of computing. And as the world is now suffering from computing cost and computing energy inflation, because general-purpose computing has run its course, accelerated computing is really the sustainable way of going forward. Accelerated computing is how you're going to save money in computing, and it's how you're going to save energy in computing. And so the versatility of our platform results in the lowest TCO for their data centers.

Second, for developers that are looking for a platform to develop on, starting with NVIDIA is always a great choice. We're on-prem, we're in the cloud, we're in computers of any size and shape. We're practically everywhere, and so that's the second reason.

The third reason has to do with the fact that we build AI factories. It's becoming more apparent to people that AI is not only a chip problem. It starts, of course, with very good chips, and we build a whole bunch of chips for our AI factories, but it's a systems problem. In fact, even AI itself is now a systems problem: it's not just one large language model; it's a complex system of a whole bunch of large language models that are working together. And so the fact that NVIDIA builds the system causes us to optimize all of our chips to work together as a system, to have software that operates as a system, and to optimize across the system.

Just as a point of perspective, in simple numbers: if you had a $5 billion infrastructure and you improved the performance by a factor of two, which we routinely do, the value to you is $5 billion. All the chips in that data center don't cost that much, so the value is really quite extraordinary. And this is the reason why, today, performance matters in everything. This is at a time when the highest performance is also the lowest cost, because the infrastructure cost of carrying all of these chips costs a lot of money. It takes a lot of money to fund the data center, to operate the data center, the people that go along with it, the power that goes along with it, the real estate that goes along with it; all of it adds up. And so the highest performance is also the lowest TCO.
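Jensen's $5 billion example above is straightforward arithmetic. A minimal sketch: the $5 billion infrastructure figure and the 2x gain are from the call, while the chip-cost share below is an illustrative assumption.

```python
# Minimal sketch of the "$5 billion infrastructure, 2x performance" arithmetic above.
# The infrastructure cost and speedup come from the call; the chip-cost
# share is an illustrative assumption.

INFRA_COST = 5e9   # total data center infrastructure cost ($), from the call
SPEEDUP = 2.0      # software/architecture performance gain, from the call
CHIP_SHARE = 0.6   # assumed fraction of infrastructure cost that is chips

# Doubling throughput on the same footprint is worth roughly another $5B of
# infrastructure, without buying more power, real estate, or chips.
value_of_speedup = INFRA_COST * (SPEEDUP - 1)
chip_cost = INFRA_COST * CHIP_SHARE

print(f"Value of the {SPEEDUP:.0f}x speedup: ${value_of_speedup / 1e9:.1f}B")
print(f"Chip cost in that data center:   ${chip_cost / 1e9:.1f}B")
# The gain alone exceeds what all the chips in the data center cost.
```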
Our next question will come from the line of Matt Ramsey with TD Cowen. Please go ahead.

Thank you very much. Good afternoon, everyone. I've been in the data center industry my whole career, and I've never seen the velocity at which you are introducing new platforms, combined with the performance jumps that you're getting: 5x in training, and, as you talked about at GTC, up to 30x in inference. It's an amazing thing to watch, but it also creates an interesting juxtaposition, where the current generation of product that your customers are spending billions of dollars on is going to become less competitive with your new stuff very much more quickly than the depreciation cycles of that product. So I'd like you to, if you wouldn't mind, speak a little bit about how you're seeing that situation evolve with customers as you move to Blackwell. They're going to have very large installed bases, obviously software-compatible, but large installed bases of product that's not nearly as performant as your new-generation stuff. I'd be interested to hear what you see happening with customers along that path. Thank you.

Yeah, I really appreciate it. Three points that I'd like to make. If you're 5% into the build-out, versus if you're 95% into the build-out, you're going to feel very differently. And because you're only 5% into the build-out, you build as fast as you can. When Blackwell comes, it's going to be terrific, and then, after Blackwell, as you mentioned, we have other Blackwells coming. We're on a one-year rhythm, as we've explained to the world, and we want our customers to see our road map for as far as they like. But they're early in their build-out anyway, and so they have to just keep on building. There's going to be a whole bunch of chips coming at them, and they just have to keep on building and, if you will, performance-average their way into it. So that's the smart thing to do.

They need to make money today, they want to save money today, and time is really, really valuable to them. Let me give you an example of time being really valuable, of why this idea of standing up a data center instantaneously is so valuable, and why getting this thing called time-to-train is so valuable. The reason is that the next company who reaches the next major plateau gets to announce a groundbreaking AI, and the second one after that gets to announce something that's 0.3% better. And so the question is, do you want to be repeatedly the company delivering groundbreaking AI, or the company delivering 0.3% better? That's the reason why this race, as in all technology races, is so important. And you're seeing this race across multiple companies, because it is so vital to have technology leadership, for companies to trust that leadership and want to build on your platform, knowing that the platform they're building on is going to get better and better. So leadership matters a great deal, and time-to-train matters a great deal. In order to win time-to-train on a three-month project, getting started three months earlier is everything. And that's the reason why we're standing up Hopper systems like mad right now, because the next plateau is just around the corner. So that's the second reason.

The first comment that you made is really a great comment: how is it that we're moving so fast and advancing so quickly? Because we have all the stacks here. We literally build the entire data center, and we can monitor everything, measure everything, and optimize across everything. We know where all the bottlenecks are. We're not guessing about it. We're not putting up PowerPoint slides that look good.
We're actually, you know, we also like our PowerPoint slides to look good, but we're delivering systems that perform at scale, and the reason we know they perform at scale is because we built them all here. Now, one of the things that we do that's a bit of a miracle is that we build entire AI infrastructures here, but then we disaggregate them and integrate them into our customers' data centers, however they like. But we know how it's going to perform, and we know where the bottlenecks are. We know where we need to optimize with them, and we know where we have to help them improve their infrastructure to achieve the most performance. This deep, intimate knowledge at the entire data center scale is fundamentally what sets us apart today. We build every single chip from the ground up. We know exactly how processing is done across the entire system, and so we understand exactly how it's going to perform and how to get the most out of it with every single generation. So, I appreciate it; those are the three points.

Our next question will come from the line of Mark Lipacis with Evercore ISI. Please go ahead.

Hi, thanks for taking my question. Jensen, in the past, you've made the observation that general-purpose computing ecosystems typically dominated each computing era, and I believe the argument was that they could adapt to different workloads, get higher utilization, and drive the cost per compute cycle down. This was a motivation for why you were driving toward a general-purpose GPU CUDA ecosystem for accelerated computing. If I've mischaracterized that observation, please do let me know. So the question is, given that the workloads driving demand for your solutions are driven by neural network training and inferencing, which on the surface seem like a limited number of workloads, they might also seem to lend themselves to custom solutions. And so, does the general-purpose computing framework become more at risk, or is there enough variability or rapid enough evolution in these workloads to support that historical general-purpose framework? Thank you.

Yeah, NVIDIA's accelerated computing is versatile, but I wouldn't call it general-purpose. For example, we wouldn't be very good at running the spreadsheet; that was really designed for general-purpose computing. The control loop of operating system code probably isn't fantastic on accelerated computing either. So I would say that we're versatile, and that's usually the way I describe it. There is a rich domain of applications that we've been able to accelerate over the years, and they all have a lot of commonalities, maybe some deep differences, but commonalities: they're all things that can run in parallel, they're all heavily threaded, and 5% of the code represents 99% of the runtime, for example. Those are all properties of accelerated computing. The versatility of our platform, and the fact that we design entire systems, is the reason why, over the course of the last 10 years or so, the number of startups that you have asked me about in these conference calls is fairly large. And every single one of them, because of the brittleness of their architecture, struggled the moment generative AI came along, the moment the diffusion models came along, and the moment the next models come along. And now, all of a sudden, look at this: large language models with memory.
A large language model needs to have memory so it can carry on a conversation with you and understand the context. All of a sudden, the versatility of the Grace memory became super important. Each one of these advances in generative AI, and the advancement of AI overall, really begs not for a widget that's designed for one model, but for something that is really good for this entire domain, for the properties of this entire domain, while obeying the first principles of software: that software is going to continue to evolve, and that software is going to keep getting better and bigger. We believe in the scaling of these models. There are a lot of reasons why we're going to scale by easily a million times in the coming few years, for good reasons, and we're looking forward to it, and we're ready for it. And so the versatility of our platform is really quite key. If you're too brittle and too specific, you might as well just build an FPGA, or you build an ASIC or something like that, but that's hardly a computer.

Our next question will come from the line of Blayne Curtis with Jefferies. Please go ahead.

Thanks for taking my question. I'm actually kind of curious: being supply constrained, how do you think about this? You came out with a product for China, the H20, and I'm assuming there'd be a ton of demand for it, but obviously, you're trying to serve your customers with the other Hopper products. I'm kind of curious how you're thinking about that in the second half. Can you elaborate on any impact you expect on sales as well as gross margin if you build out the H20, and on how you're thinking about allocating supply between the different Hopper products?

Well, we have customers that we honor, and we do our best for every customer. It is the case that our business in China is substantially lower than the levels of the past, and it's a lot more competitive in China now because of the limitations on our technology. Those matters are true. However, we do our best to serve the customers and the markets there, and to the best of our ability, we'll do our best. But overall, the comments that we made about demand outstripping supply apply to the entire market, and particularly so for the H200 and Blackwell towards the end of the year.

Our next question will come from the line of Srini Pajjuri with Raymond James. Please go ahead.

Thank you. Jensen, actually, more of a clarification on what you said about GB200 systems. It looks like there's significant demand for systems. Historically, I think you've sold a lot of HGX boards and some GPUs, and the systems business was relatively small. So I'm just curious, why is it that now you are seeing such strong demand for systems going forward? Is it just the TCO, or is it something else, or is it the architecture? Thank you.

Yeah, I appreciate that. In fact, the way we sell GB200 is the same. We disaggregate all of the components that make sense, and we integrate them into computer makers. We have 100 different computer system configurations that are coming this year for Blackwell, and that is off the charts. Hopper, frankly, had only half of that at its peak; it started out with way less than that even.
You're going to see liquid-cooled versions, air-cooled versions, x86 versions, Grace versions, and so on and so forth. A whole bunch of systems are being designed, and they're offered from all of our ecosystem of great partners. Nothing has really changed. Now, of course, the Blackwell platform has expanded our offering tremendously. The integration of CPUs and the much more compressed density of computing, along with liquid cooling, is going to save data centers a lot of money in provisioning power, not to mention be more energy efficient. So it's a much better solution. It's more expansive, meaning that we offer a lot more of the components of a data center, and everybody wins. The data center gets much higher performance, networking from networking switches and networking NICs, and of course, we have Ethernet now, so that we can bring NVIDIA AI at large scale to customers who only operate, or only know how to operate, Ethernet, because of the ecosystems that they have. And so Blackwell is much more expansive; we have a lot more to offer our customers this generation around.

Our next question will come from the line of William Stein with Truist Securities. Please go ahead.

Great, thanks for taking my question. Jensen, at some point, NVIDIA decided that, while there were reasonably good CPUs available for data center operations, your Arm-based Grace CPU provided some real advantage that made that technology worth delivering to customers, perhaps related to cost or power consumption or technical synergies between Grace and Hopper or Grace and Blackwell. Can you address whether there could be a similar dynamic that might emerge on the client side, whereby, while there are very good solutions available and you've highlighted that your partners deliver great products, there might be some advantage, especially in emerging AI workloads, that NVIDIA can deliver that others have more of a challenge delivering?

Well, you mentioned some really good reasons. It is true that for many of the applications, our partnership with our x86 partners is really terrific, and we build excellent systems together. But Grace allows us to do something that isn't possible with the system configurations of today. The memory system between Grace and Hopper is coherent and connected. The interconnect between the two chips (calling them two chips is almost weird, because it's like a superchip) connects them with an interface that runs at terabytes per second. It's off the charts. And the memory that's used by Grace is LPDDR; it's the first data-center-grade low-power memory, so we save a lot of power on every single node. Then finally, because of the architecture, because we can create our own architecture with the entire system now, we could create something that has a really large NVLink domain, which is vitally important to the next generation of large language models for inferencing. You saw that GB200 has a 72-node NVLink domain; that's like 72 Blackwells connected together into one giant GPU. And so we needed Grace Blackwell to be able to do that. There are architectural reasons and there are software programming reasons.
And then there are just some reasons that are essential for us to build them that way. And so if we see opportunities like that, we'll explore them. And today, as you saw at Build yesterday, which I thought was really excellent, Satya announced the next-generation PC, the Copilot+ PC, which runs fantastically on NVIDIA's RTX GPUs that are shipping in laptops. But it also supports Arm beautifully. And so it opens up opportunities for system innovation, even for PCs.

Our last question comes from the line of CJ Muse with Cantor Fitzgerald. Please go ahead.

Good afternoon. Thank you for taking the question. I guess, Jensen, a bit of a longer-term question. I know Blackwell hasn't even launched yet, but obviously, investors are forward-looking, and amid rising potential competition from GPUs and custom ASICs, how are you thinking about NVIDIA's pace of innovation? Your million-fold scaling over the last decade has been truly impressive: CUDA, precision, Grace, connectivity. When you look forward, what frictions need to be solved in the coming decade? And, I guess, maybe more importantly, what are you willing to share with us today?

Well, I can announce that after Blackwell, there's another chip, and we are on a one-year rhythm. You can also count on us having new networking technology on a very fast rhythm. We're announcing Spectrum-X for Ethernet, and we're all-in on Ethernet; we have a really exciting road map coming for Ethernet. We have a rich ecosystem of partners; Dell announced that they're taking Spectrum-X to market. We have a rich ecosystem of customers and partners who are going to announce taking our entire AI factory architecture to market. For companies that want the ultimate performance, we have the InfiniBand computing fabric. InfiniBand is a computing fabric; Ethernet is a network. InfiniBand, over the years, started out as a computing fabric and became a better and better network. Ethernet is a network, and with Spectrum-X, we're going to make it a much better computing fabric. We're committed, fully committed, to all three links: the NVLink computing fabric for a single computing domain, the InfiniBand computing fabric, and the Ethernet networking computing fabric. We're going to take all three of them forward at a very fast clip. You're going to see new switches coming, new NICs coming, new capabilities, new software stacks that run on all three of them, new CPUs, new GPUs, new networking NICs, new switches: a mountain of chips that are coming. And the beautiful thing is, all of it runs CUDA, and all of it runs our entire software stack. So if you invest today in our software stack, without doing anything at all, it's just going to get faster and faster and faster. And if you invest in our architecture today, without doing anything, it will go to more and more clouds and more and more data centers, and everything just runs. So I think the pace of innovation that we're bringing will drive up the capability, on the one hand, and drive down the TCO, on the other hand.
And so we should be able to scale out with the NVIDIA architecture for this new era of computing and start this new industrial revolution, where we manufacture not just software anymore, but artificial intelligence tokens, and we're going to do that at scale.

That will conclude our question-and-answer session and our call for today. We thank you all for joining, and you may now disconnect.