Robotics and autonomous systems programs are finding that simulation environments produce models that fail when confronted with real-world sensor noise and the chaos of ordinary deployment conditions.

Physical AI programs keep running into the same wall.

A robotics system trained solely in simulation starts making errors in a real facility, errors that were never present in its scenario library. Teams often blame the model architecture, but the training data consistently reveals the underlying cause.

The deployment of robotics programs in actual facilities has turned the synthetic-versus-real-world data debate into one with real consequences. As autonomous systems programs move from research settings into production environments, data gaps are showing up as unexpected behaviors and costly rework cycles.

Despite this gap, synthetic data has undeniable strengths in the following scenarios:

In simulation environments: In NVIDIA Isaac Sim, synthetic data accelerates early-stage training by giving embodied AI systems a structured space to train and test before any physical hardware is available.

For edge-case scenarios: This includes construction zones, unusual lighting, rare object configurations and unexpected weather. These situations occur too rarely and unpredictably in the real world to collect enough examples for training. Simulation can generate those scenarios on demand, filling gaps that real-world collection can’t close within a reasonable timeline.

In regulated industries: Using real patient data and sensitive operational data can raise legal and privacy concerns. Synthetic data that resembles the statistical properties of real data allows training to proceed without exposing sensitive information.

Steve Nemzer, Senior Director of Artificial Intelligence Research & Innovation at TELUS Digital, who has worked extensively on annotation strategy for physical AI and robotics programs, says, “The balance to strike is to use synthetic data to fill specific data gaps while anchoring training on real-world data that grounds the model in the long tail of real-world variability. Synthetic data can’t teach models about the sensor artifacts or adversarial conditions they’ll encounter in production.”

The Microscopic Gap That Simulation Misses

The sim-to-real gap is particularly consequential for world models because these AI systems are trained to build internal representations of how physical environments behave. A robot that navigates warehouse floors perfectly in simulation may struggle with a surface variation that creates unexpected friction. The simulation may be accurate in the general case, but the gap emerges in the small details, and it is precisely those details that reveal where real-world deployment breaks down.

Real-world sensor data looks different from simulation across every modality:

  • LiDAR returns in rain or heavy dust look different from clean simulation data
  • Camera feeds in shifting light conditions carry noise that synthetic pipelines can’t fully replicate
  • Radar signals in dense urban environments pick up reflections and interference that controlled environments exclude by design
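One common way to narrow this gap is to corrupt synthetic sensor data before training so the model sees noise it will meet in the field. The sketch below is a minimal illustration, not from any particular toolkit; the function name and noise parameters are assumptions. It mimics rain or dust on a simulated LiDAR cloud by randomly dropping returns and jittering each return along its ray:

```python
import numpy as np

def augment_lidar(points, drop_prob=0.05, range_sigma=0.03, rng=None):
    """Roughly mimic rain/dust effects on a synthetic LiDAR cloud.

    points: (N, 3) array of x, y, z returns from the simulator.
    drop_prob: fraction of returns lost to absorption/scattering.
    range_sigma: std-dev (in metres) of radial range jitter.
    """
    rng = rng or np.random.default_rng()
    # Randomly drop returns, as droplets and dust absorb or scatter beams.
    keep = rng.random(len(points)) > drop_prob
    pts = points[keep]
    # Jitter each surviving return along its ray to mimic range noise.
    ranges = np.linalg.norm(pts, axis=1, keepdims=True)
    noise = rng.normal(0.0, range_sigma, size=(len(pts), 1))
    scale = (ranges + noise) / np.maximum(ranges, 1e-6)
    return pts * scale
```

Real pipelines model far richer effects (intensity attenuation, spurious returns, rolling-shutter artifacts for cameras), but even simple corruption of this kind stops a model from treating clean simulation data as the only data that exists.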

Models that lack exposure to these conditions treat them as anomalies, resulting in unforeseen failures in physical AI systems. Unlike large language models trained on decades of accumulated, human-generated web text, physical AI lacks an equivalent corpus to draw from. Data services providers like TELUS Digital have spent years building a workforce infrastructure capable of operating at the collection and annotation scale this problem demands. Even at that scale, physical AI programs are still in the process of building the necessary datasets to close the real-world collection gap.

Annotation Complexity Compounds the Problem

Collecting real-world data is only half the problem. Once acquired, every object in every sensor feed has to be labeled consistently across all sensors at once. The same pedestrian detected by LiDAR must carry the same label in the camera feed and the radar return. That level of precision requires annotation tools and workflows built specifically for multi-sensor data; many general-purpose annotation platforms were never designed for it. When labels don’t align across sensors, the model learns from conflicting information, and those discrepancies eventually show up as failures in the field.
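A basic consistency check over annotation exports can catch many of these misalignments before training. The sketch below assumes a simplified, hypothetical record format (the `track_id`, `modality` and `label` fields are illustrative) and flags any object whose class label disagrees across sensor streams:

```python
from collections import defaultdict

def find_label_conflicts(annotations):
    """Flag objects whose class label disagrees across sensor streams.

    annotations: iterable of dicts like
        {"track_id": 17, "modality": "lidar", "label": "pedestrian"}
    Returns {track_id: set_of_conflicting_labels} for every object
    that was labeled differently in at least two modalities.
    """
    seen = defaultdict(set)
    for ann in annotations:
        seen[ann["track_id"]].add(ann["label"])
    return {tid: labels for tid, labels in seen.items() if len(labels) > 1}
```

In practice such checks run per frame and also verify geometric alignment (does the 3D box project into the 2D box?), but even this label-level pass surfaces the conflicting-signal problem the paragraph describes.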

Robotics programs need egocentric data, which is footage captured from the robot’s own perspective. Collecting it requires instrumented operators to perform tasks in real environments, with every action time-stamped and labeled in context. This is the only way to capture the lighting shifts and physical unpredictability of the real world.
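What one time-stamped, labeled record from such a session might look like can be sketched as a simple schema. Every field name here is an illustrative assumption rather than a format from any real pipeline:

```python
from dataclasses import dataclass, field

@dataclass
class EgoFrame:
    """One time-stamped record from an instrumented operator session.

    All field names are hypothetical, for illustration only.
    """
    timestamp_ns: int      # capture time, nanoseconds since session start
    camera_path: str       # path to the operator-perspective camera frame
    action_label: str      # e.g. "grasp_handle", "open_door"
    scene_tags: list = field(default_factory=list)  # e.g. ["low_light"]
```

The point of the schema is the pairing the paragraph describes: every frame carries both a timestamp and an in-context action label, so lighting shifts and physical unpredictability are captured alongside what the operator was actually doing.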

The Pipeline Question: What Is Synthetic Data Being Asked to Do?

Synthetic data is a useful tool, but it shouldn’t be the primary foundation of a physical AI training pipeline. It works well for specific purposes: training in regulated environments where real data can’t be used, and getting early-stage models off the ground before real-world data is available. But a model that relies primarily on synthetic data won’t be prepared for the variability it encounters in real deployment. Real-world data has to anchor the training, with synthetic data filling the gaps around it.
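One simple way to enforce that anchoring is to cap the synthetic share of each training batch. This is a minimal sketch; the function name and the 20% default are illustrative assumptions, not a recommendation:

```python
import random

def mixed_batch(real, synthetic, batch_size=32, synthetic_frac=0.2, rng=None):
    """Draw a training batch anchored on real data, topped up with synthetic.

    synthetic_frac caps the per-batch share of synthetic samples, so
    real-world data always carries the bulk of the training signal.
    """
    rng = rng or random.Random()
    n_syn = min(int(batch_size * synthetic_frac), len(synthetic))
    n_real = batch_size - n_syn
    batch = rng.sample(real, n_real) + rng.sample(synthetic, n_syn)
    rng.shuffle(batch)  # avoid ordering artifacts within the batch
    return batch
```

Production pipelines use weighted samplers and curriculum schedules rather than a fixed cap, but the principle is the same: synthetic data supplements the batch, it never dominates it.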

Physical AI is now at a stage where ambition and data infrastructure are visibly out of alignment. The models teams are trying to build require annotated sensor data that, in many cases, simply doesn’t exist yet. The industry is beginning to organize around that reality, adjusting data strategy to move programs past the pilot phase and into deployment.

FAQ

What is the sim-to-real gap in physical AI development? 

It is the gap that appears when models trained in simulation fail in deployment because the simulation wasn’t able to replicate real-world conditions such as sensor noise and surface friction. The gap shows up the moment the model encounters something the simulation excluded.

Why can’t synthetic data replace real-world data for robotics training? 

Simulation reflects the parameters it was built around. Situations like LiDAR in rain and radar in dense urban environments produce data that synthetic pipelines don’t accurately model. Physical AI systems trained without that exposure encounter real-world conditions as anomalies.

How do physical AI programs differ from large language model programs in data requirements? 

LLMs draw on decades of accumulated web content. Physical AI requires annotated sensor data from real environments, and nowhere near enough of it exists. The field is building those datasets from scratch, which makes data strategy a fundamentally different and harder problem.

Where does synthetic data provide genuine value in physical AI training? 

Three places: early-stage simulation training before hardware is available, edge-case scenarios too rare to collect at scale in the field, and regulated industries where real-world data can’t be used. Outside those scenarios, it shouldn’t be carrying the training load.

What is cross-modal consistency, and why does it matter for physical AI? 

It means the same object is labeled identically across every sensor stream. A pedestrian in a LiDAR point cloud has to match the same pedestrian in the camera frame and radar return. Without that alignment, the perception model receives conflicting signals about the same scene.