Metropolis Spatial AI Powers Mega Omniverse Blueprint Featured at CES 2025
I’m incredibly proud to announce that our work on Metropolis Spatial AI was featured in the CES 2025 Keynote by NVIDIA CEO Jensen Huang, as a core component of the newly unveiled Mega Omniverse Blueprint. As the Tech Lead of this initiative within the Metropolis team, it’s a huge honor to see our technology playing a foundational role in enabling intelligent infrastructure at industrial scale.
The Mega Omniverse Blueprint is NVIDIA’s comprehensive framework for simulating, deploying, and scaling robot fleets in highly realistic, physics-informed digital twins. These virtual environments, powered by the NVIDIA Omniverse platform, allow organizations to design, test, and optimize robotics and AI systems before deploying them in the real world.
Our contribution, delivered through Metropolis Spatial AI agents, includes a real-time location system (RTLS) powered by advanced multi-camera tracking, 3D perception using our BEV-SUSHI framework, and GNN-based tracking modules. Together, these components enable accurate, scalable monitoring of autonomous agents—such as robots, vehicles, and humans—across massive industrial environments like warehouses and factories.
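For readers curious what multi-camera RTLS involves at a conceptual level, here is a minimal, purely illustrative sketch: per-camera detections are projected onto a shared ground plane and then associated across views to produce fused world-space positions. The homographies, detections, and thresholds below are hypothetical placeholders, and this does not reproduce the actual Metropolis, BEV-SUSHI, or GNN-based tracking implementations.

```python
# Illustrative sketch only: fuse per-camera detections on a shared ground plane
# and associate them across views, in the spirit of multi-camera RTLS.
# All calibrations, detections, and thresholds are made-up toy values.
import numpy as np
from scipy.optimize import linear_sum_assignment

def image_to_ground(points_uv: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Project pixel coordinates (N, 2) to ground-plane coordinates (N, 2)
    using a camera-to-ground homography H (3x3)."""
    pts_h = np.hstack([points_uv, np.ones((len(points_uv), 1))])  # homogeneous
    ground = (H @ pts_h.T).T
    return ground[:, :2] / ground[:, 2:3]  # de-homogenize

def associate(a: np.ndarray, b: np.ndarray, max_dist: float = 1.0):
    """Match detections from two cameras on the ground plane with the
    Hungarian algorithm; reject pairs farther apart than max_dist."""
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

# Toy example: two cameras observing the same two agents.
H_cam0 = np.eye(3)                                  # hypothetical calibration
H_cam1 = np.array([[1.0, 0.0, 0.5],
                   [0.0, 1.0, -0.2],
                   [0.0, 0.0, 1.0]])
dets_cam0 = np.array([[2.0, 3.0], [10.0, 8.0]])     # (u, v) foot points
dets_cam1 = np.array([[9.6, 8.1], [1.4, 3.3]])

g0 = image_to_ground(dets_cam0, H_cam0)
g1 = image_to_ground(dets_cam1, H_cam1)
for i, j in associate(g0, g1):
    fused = (g0[i] + g1[j]) / 2                     # simple fused position
    print(f"cam0 det {i} <-> cam1 det {j} at ~{fused.round(2)}")
```

In the production system, this kind of cross-view association is far more sophisticated (for example, the GNN-based tracking modules reason over detections as graph nodes rather than using a simple assignment step), but the sketch captures the basic idea of turning many camera views into one consistent spatial picture.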
In collaboration with teams across NVIDIA, we integrated Metropolis Spatial AI into the Omniverse ecosystem alongside Isaac Sim™ and other robotics tools. The result is a tightly integrated digital twin solution for spatial intelligence—demonstrated in factory and warehouse scenarios during the CES keynote.
This marks a major step forward for real-time vision AI in digital twins, unlocking new levels of automation, safety, and operational efficiency. I’m beyond excited to see where this journey leads as we continue to push the boundaries of AI-powered infrastructure.