When Bill Dally joined Nvidia’s research lab in 2009, it employed only about a dozen people and was focused on ray tracing, a rendering technique used in computer graphics.

That once-small research lab now employs more than 400 people, who have helped transform Nvidia from a video game GPU startup in the nineties into a $4 trillion company fueling the artificial intelligence boom.
Now, the company’s research lab has its sights set on developing the tech needed to power robotics and AI. And some of that lab work is already showing up in products: on Monday, the company unveiled a new set of world AI models, libraries, and other infrastructure for robotics developers.
Dally, now Nvidia’s chief scientist, started consulting for Nvidia in 2003 while he was working at Stanford. When he was ready to step down as chair of Stanford’s computer science department a few years later, he planned to take a sabbatical. Nvidia had a different idea.
David Kirk, who was running the research lab at the time, and Nvidia CEO Jensen Huang thought a more permanent position at the research lab was a better idea. Dally told TechCrunch the pair put on a “full-court press” to convince him to join, and eventually succeeded.
“It wound up being kind of a perfect fit for my interests and my talents,” Dally said. “I think everybody’s always searching for the place in life where they can make the biggest, you know, contribution to the world. And I think for me, it’s definitely Nvidia.”
When Dally took over the lab in 2009, expansion was his first priority. Researchers quickly started working on areas outside of ray tracing, including circuit design and VLSI, or very large-scale integration, the process of combining millions of transistors on a single chip.
The research lab hasn’t stopped expanding since.
“We try to figure out what will make the most positive difference for the company because we’re constantly seeing exciting new areas, but some of them, you know, they do great work, but we have trouble saying if [we’ll be] wildly successful at this,” Dally said.
For a while, that was building better GPUs for artificial intelligence. Nvidia was early to the AI boom, tinkering with the idea of AI-focused GPUs starting in 2010, more than a decade before the current frenzy.
“We said this is amazing, this is gonna completely change the world,” Dally said. “We have to start doubling down on this and Jensen believed that when I told him that. We started specializing our GPUs for it and developing lots of software to support it, engaging with the researchers all around the world who were doing it, long before it was clearly relevant.”
Physical AI focus
Now, as Nvidia holds a commanding lead in the AI GPU market, the tech company has started to seek out new areas of demand beyond AI data centers. That search has led Nvidia to physical AI and robotics.
“I think eventually robots are going to be a huge player in the world and we want to basically be making the brains of all the robots,” Dally said. “To do that we need to start, you know, developing the key technologies.”
That’s where Sanja Fidler, the vice president of AI research at Nvidia, comes in. Fidler joined Nvidia’s research lab in 2018. At the time, she was already working on simulation models for robots with a team of students at the University of Toronto. When she told Huang what they were working on at a researchers’ reception, he was interested.
“I could not resist joining,” Fidler told TechCrunch in an interview. “It’s just such a, you know, it’s just such a great topic fit and at the same time was also such a great culture fit. You know, Jensen told me, come work with me, not with us, not for us, you know?”
She joined Nvidia and got to work building a research lab in Toronto focused on simulation for physical AI, work that feeds into Omniverse, Nvidia’s simulation platform.

The first challenge in building these simulated worlds was finding the necessary 3D data, Fidler said. That meant sourcing images at sufficient volume and building the technology to turn those images into 3D renditions the simulators could use.
“We invested in this technology called differentiable rendering, which essentially makes rendering amenable to AI, right?” Fidler said. “You go [from] rendering means from 3D to image or video, right? And we want it to go the other way.”
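In textbook terms, a renderer maps 3D scene parameters to an image; if that mapping is differentiable, gradients can flow backward from pixels to the scene, so the 3D can be recovered by optimization. Here is a minimal sketch of that principle in plain PyTorch, where the “scene” is just a 2D Gaussian blob whose position is recovered from a target image. Nvidia’s actual pipeline is, of course, far more sophisticated:

import torch

H = W = 64
ys, xs = torch.meshgrid(
    torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij"
)

def render(center):
    # Differentiable "renderer": scene parameters -> image (a soft blob).
    return torch.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2) / 0.05)

target = render(torch.tensor([0.7, 0.3]))               # the image we observed
params = torch.tensor([0.5, 0.5], requires_grad=True)   # initial guess at the scene
opt = torch.optim.Adam([params], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    loss = ((render(params) - target) ** 2).mean()  # pixel-space error
    loss.backward()   # gradients flow backward through the renderer
    opt.step()

print(params.detach())  # approaches the true center, [0.7, 0.3]

Scaled up from a toy blob to real photographs and video, that inverse direction is the idea Fidler describes.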
World models
Omniverse released the first version of its model that turns images into 3D models, GANverse3D, in 2021. Then the team got to work figuring out the same process for video. Fidler said they used footage from robots and self-driving cars to build 3D models and simulations through Nvidia’s Neural Reconstruction Engine, which the company first announced in 2022.
She added that these technologies became the backbone of Cosmos, the company’s family of world AI models announced at CES in January.
Now, the lab is focused on making these models faster. When you play a video game or run a simulation, you want the tech to respond in real time, Fidler said; for robots, the team is pushing reaction times faster still.
“The robot doesn’t need to watch the world in the same time, in the same way as the world works,” Fidler said. “It can watch it like 100x faster. So if we can make this model significantly faster than they are today, they’re going to be tremendously useful for robotic or physical AI applications.”
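The arithmetic behind that “100x” is straightforward: a learned world model spends compute time per step, not wall-clock time, so the experience it generates is limited only by how fast the model runs. A hypothetical sketch, where the step function is a stand-in for a real world-model forward pass:

import time

SIM_DT = 0.01  # each step advances simulated time by 10 ms

def world_model_step(state):
    # Stand-in for one forward pass of a learned world model.
    return state + 1

state, steps = 0, 100_000
start = time.perf_counter()
for _ in range(steps):
    state = world_model_step(state)
wall = time.perf_counter() - start

sim_seconds = steps * SIM_DT
print(f"simulated {sim_seconds:.0f}s of experience in {wall:.3f}s "
      f"({sim_seconds / wall:.0f}x real time)")

The faster the model, the more simulated hours of “watching the world” a robot can train on per hour of compute.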
The company continues to make progress toward that goal. At the SIGGRAPH computer graphics conference on Monday, Nvidia announced a fleet of new world AI models designed to create synthetic data for training robots, along with new libraries and infrastructure software aimed at robotics developers.
Despite the progress — and the current hype about robots, especially humanoids — the Nvidia research team remains realistic.
Both Dally and Fidler said the industry is still at least a few years away from having a humanoid in your home, with Fidler comparing the hype and timeline to those of autonomous vehicles.
“We’re making huge progress and I think you know AI has really been the enabler here,” Dally said. “Starting with visual AI for the robot perception, and then you know generative AI, that’s being hugely valuable for task and motion planning and manipulation. As we solve each of these individual little problems and as the amount of data we have to train our networks grows, these robots are going to grow.”