How Jensen Huang co-founded a chip company at a diner table in 1993 with no idea how to run a business, invented the GPU that changed computing forever, and built the single most consequential company of the artificial intelligence era with a stock gain of over 1,000% in just two years.
In early 1993, three engineers sat down at a booth inside a Denny's diner in East San Jose, California. Jensen Huang was 30 years old, working at LSI Logic. Chris Malachowsky and Curtis Priem had both come from Sun Microsystems. Their shared conviction: the future of computing was going to be driven by graphics, and nobody was building the right chip to power it. On April 5, 1993, they incorporated Nvidia Corporation. Huang has since said publicly that he had no idea how to start a business at the time. That admission now carries the weight of understatement: Huang went on to build Nvidia into the world's first company to surpass a $5 trillion market capitalisation, a milestone achieved in October 2025.
The early years were genuinely precarious. Nvidia's first chip, the NV1, launched in 1995 to mediocre reception; its quadratic-surface rendering approach was made obsolete almost immediately by Microsoft's triangle-based DirectX standard. Its second project, the NV2, a contract design for Sega, never shipped. By 1996, Nvidia had spent through most of its cash. Huang made the decision to scrap the existing roadmap and bet the entire company on a new, DirectX-compatible architecture. It worked. The RIVA 128, internally designated NV3, launched in 1997, sold a million units in four months and saved the company from bankruptcy. The pattern of surviving near-death through a high-stakes pivot would recur throughout Nvidia's story.
On August 31, 1999, Nvidia launched the GeForce 256 and coined the term GPU (graphics processing unit) to distinguish it from the CPU. This was not mere marketing. The GeForce 256 was the first consumer chip capable of performing the transform and lighting calculations that had previously required the main processor, freeing the CPU and enabling a new era of 3D gaming. Nvidia went public on the NASDAQ the same year at $12 per share. By 2000, Nvidia had secured an exclusive contract to supply graphics chips for Microsoft's original Xbox console, cementing its position as the dominant force in PC and console graphics.
What Nvidia understood earlier than almost anyone was that a GPU is not simply a graphics chip. It is a massively parallel processor capable of performing thousands of calculations simultaneously. Where a CPU handles tasks sequentially with a small number of powerful cores, a GPU handles thousands of tasks in parallel with thousands of smaller cores. That architectural difference, optimal for rendering pixels in gaming, turned out to be equally optimal for training neural networks in artificial intelligence. That insight, which Nvidia began acting on from around 2006, would eventually transform the company from a gaming chip maker into the backbone of global AI infrastructure.
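The overlap between graphics and AI workloads is concrete: both reduce to dense linear algebra in which every output element can be computed independently of the others. A minimal sketch in plain Python (illustrative only, not how a GPU is actually programmed) of why matrix multiplication, the core operation of neural network training, parallelises so naturally:

```python
# Why the same hardware suits both pixels and neural networks: each output
# element of a matrix product C = A @ B depends only on one row of A and one
# column of B, so all elements are independent tasks.
def matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    # A CPU works through these (i, j) pairs sequentially; a GPU assigns
    # one thread per output element and computes them all at once.
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

In a neural network, A would be a batch of activations and B a weight matrix; the independence of the output elements is exactly what thousands of small GPU cores exploit.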
"Accelerated computing and generative AI have hit the tipping point. Demand is surging worldwide across companies, industries, and nations." Jensen Huang, Nvidia CEO, Q4 FY2024 Earnings
| Fiscal Year | Revenue (USD) | Key Driver |
|---|---|---|
| FY2016 | $5.0 billion | Gaming GPUs, early data centre |
| FY2018 | $9.7 billion | Gaming, crypto mining demand |
| FY2020 | $10.9 billion | Gaming, Mellanox acquisition |
| FY2022 | $26.9 billion | Gaming boom, early AI demand |
| FY2023 | $26.9 billion | Gaming slowdown, AI pivot |
| FY2024 | $60.9 billion | H100 AI GPU supercycle |
| FY2025 | $130.5 billion | Blackwell ramp, AI data centres |
| TTM (2026) | $187+ billion | Sovereign AI, Blackwell B200 |
In 2006, Nvidia launched CUDA, Compute Unified Device Architecture, a programming platform that allowed developers to write software that could run directly on Nvidia GPUs. At the time it seemed like a niche tool for researchers who wanted to use GPUs for non-graphics computing. In retrospect, CUDA was the most strategically significant software investment in the history of the semiconductor industry.
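CUDA's core abstraction is the kernel: a function describing what one thread does, launched across thousands of threads at once, each selecting its data by a thread index. A rough sketch of that programming model in plain Python (the names and the sequential `launch` loop are illustrative stand-ins, not the real CUDA API, where the equivalent index is computed from block and thread IDs):

```python
def saxpy_kernel(i, a, x, y, out):
    # Body of a CUDA-style kernel: the work done by ONE thread,
    # operating only on the element selected by its index i.
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # Stand-in for a kernel launch. A real GPU runs these n iterations
    # simultaneously across thousands of cores; here we loop sequentially
    # just to show the per-thread decomposition of the work.
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * len(x)
launch(saxpy_kernel, len(x), 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0, 48.0]
```

The point of CUDA was to expose exactly this model to general-purpose developers, so any computation expressible as independent per-element work could run on the same silicon built for graphics.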
Every AI researcher, every major machine learning framework, and nearly every major neural network trained over the past 15 years has been built on CUDA. PyTorch and TensorFlow both rely on CUDA as their primary GPU backend. The models powering ChatGPT, Claude, Llama, Stable Diffusion, and most other major AI systems were trained on Nvidia GPUs using CUDA (Google's Gemini, trained largely on its in-house TPUs, is the notable exception). Competitors like AMD have produced technically competitive chips, but the accumulated ecosystem of CUDA libraries, tools, and developer expertise built over nearly two decades represents a switching cost so high that most AI labs simply will not leave Nvidia, regardless of price. CUDA is Nvidia's deepest and most durable competitive advantage, more important than any single chip generation.
The launch of ChatGPT by OpenAI in November 2022 triggered what is now called the AI supercycle, and Nvidia was its single greatest direct beneficiary. Every company racing to build or deploy AI needed Nvidia's H100 GPU, based on the Hopper architecture. Demand so far exceeded supply that H100s reportedly traded on grey markets for $40,000 or more, well above their list price. Microsoft, Google, Meta, Amazon, Oracle, and dozens of sovereign governments placed orders worth billions. Nvidia's data centre revenue grew from $15 billion in FY2023 to $47.5 billion in FY2024 to over $115 billion in FY2025.
The Blackwell architecture, announced at GTC 2024 and ramping through 2025, represents Nvidia's next generation: the B100 and B200 chips deliver up to four times the training performance of the H100 and up to 30 times the inference performance for large language models. Every major hyperscaler has committed to Blackwell deployments worth tens of billions of dollars. According to Reuters, Nvidia's Blackwell chip orders have already exceeded $500 billion in committed customer spend, making it the most pre-ordered product in semiconductor history.
Jensen Huang was born in Taipei, Taiwan, in 1963. When he was five, his family moved to Thailand. In 1972, at age nine, he and his brother were sent to live with relatives in the United States. They were enrolled in the Oneida Baptist Institute in rural Kentucky, a school that also housed children from troubled backgrounds. Huang later recalled those years as formative, teaching resilience and discipline. He studied electrical engineering at Oregon State University, where he met his future wife, Lori. He earned his master's degree from Stanford in 1992, a year before founding Nvidia.
Huang's leadership style is unusual for Silicon Valley. He insists on extreme operational speed, telling engineering teams to first imagine the fastest possible way to achieve something with no constraints, then work backward to reality. He is famous for his leather-jacket uniform, worn to virtually every keynote and public appearance. He delivers technical presentations with the charisma of a performer, turning chip architecture launches into cultural events; his GTC keynotes now fill arenas with thousands of attendees. In December 2025, Time magazine featured Jensen Huang in its Person of the Year issue, and the Financial Times named him its Person of the Year for 2025.
Nvidia is often described as a chip company, but Jensen Huang insists it is a computing platform company. The distinction matters. Nvidia does not just sell GPUs; it sells integrated systems combining chips, networking, software, and cloud services. Its DGX SuperPOD systems are complete AI supercomputer clusters. Its NeMo framework is enterprise AI model training software. Its Omniverse platform is a 3D simulation environment for robotics, autonomous vehicles, and industrial design. Its CUDA ecosystem is the programming standard for AI globally.
According to BBC Technology, Nvidia's operating margin exceeded 62% in its most recent fiscal year, an extraordinary figure for a hardware company and one that reflects the pricing power that comes with near-monopoly control of the AI GPU market. No customer has a credible near-term alternative to Nvidia for frontier AI training workloads.
Nvidia's dominance has attracted intense competition. AMD's Instinct MI300X chip has won some cloud deployments at Microsoft Azure and Meta. Google's custom TPU chips handle a significant portion of its own AI workloads internally. Amazon, Microsoft, Apple, and even Nvidia customers Meta and Google are developing in-house AI chips to reduce their Nvidia dependence. Startups including Groq, Cerebras, SambaNova, and Graphcore have raised billions to challenge Nvidia's architecture.
None of these challengers has yet dented Nvidia's market share at scale. The CUDA ecosystem lock-in, the performance lead of Blackwell over any competitor in general training workloads, and the installed base of Hopper chips that Nvidia can upgrade with software keep its position extraordinarily secure in the near term. The more credible long-term risk is not a single competitor but a gradual diversification of the AI compute market as hyperscalers build more of their own silicon, reducing their dependence on any single vendor including Nvidia.
Nvidia's immediate roadmap centres on the full commercial ramp of Blackwell B200 and GB200 NVL72 rack systems, which deliver dramatically higher AI inference performance and are already committed by every major hyperscaler. The next architecture, codenamed Rubin, is expected in 2026, maintaining Nvidia's one-year cadence of generational chip releases.
Beyond chips, Nvidia is building toward a future where it sells not just GPUs but complete AI factories: data centre-scale systems that combine compute, networking, cooling, and software into a single product. Its Cosmos physical AI platform, NIM microservices, and Omniverse industrial simulation tools point toward a software revenue layer that could eventually rival the hardware business.
Watch: Rubin GPU architecture reveal, China export control developments, sovereign AI data centre contract announcements, and Nvidia's push into robotics and autonomous vehicle markets through 2025-2026.
