Creating augmented reality content that is accurate and realistic requires a lot of physics.

1 : First, let's talk about the sensors.

The most common sensor is the camera, which relies on the standard physics of traveling light to work.

Then there are ToF (time-of-flight) sensors, which emit infrared light (or another signal) and measure the time it takes to bounce back to a dedicated sensor. This is the most precise way to measure the distance to a point. You can see the difference by comparing camera-based tracking with lidar tracking.
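The time-of-flight principle boils down to one line of physics: distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name is illustrative, not a real sensor API):

```python
# Time-of-flight principle: the emitted pulse travels to the surface and
# back, so the one-way distance is half the round trip at light speed.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the reflecting point, given the measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A surface about 1.5 m away reflects the pulse in roughly 10 nanoseconds:
print(tof_distance(1e-8))  # ~1.499 m
```

This also shows why ToF electronics need picosecond-level timing: a few centimeters of precision correspond to a few hundred picoseconds of round-trip time.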

2 : Then, tracking and occlusion.

After mapping every frame individually, the device must determine its own motion to reverse-calculate the actual position of the world: this is the physics of motion. A lot of machine learning is involved in understanding how rolling shutter and motion blur may distort the result, and in estimating the correct one.
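The "reverse calculation" is a pose inversion: once the device knows where it is and which way it faces, it can map points it observes back into world coordinates. A hedged 2D sketch (names are illustrative, not a real AR API; real systems work in 3D with full rotation matrices or quaternions):

```python
import math

# Sketch of pose inversion in 2D: a point observed in the device's own
# frame is rotated by the device heading, then translated by its position,
# to recover where it sits in the world.

def device_to_world(point_device, device_position, device_heading_rad):
    """Map a point from device coordinates into world coordinates."""
    x, y = point_device
    c, s = math.cos(device_heading_rad), math.sin(device_heading_rad)
    world_x = c * x - s * y + device_position[0]
    world_y = s * x + c * y + device_position[1]
    return (world_x, world_y)

# A point 2 m straight ahead of a device at (1, 0) facing 90 degrees
# lands at roughly (1, 2) in world coordinates:
print(device_to_world((2.0, 0.0), (1.0, 0.0), math.pi / 2))
```

Rolling shutter and motion blur corrupt exactly the inputs of this transform: the observed point and the estimated heading, which is why so much effort goes into correcting them.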

We can add that the 3D mapping of the world helps us understand how a virtual 3D object is occluded from our view by real objects, such as walls, tables, and chairs, that sit between the 3D object and the viewer.
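Occlusion comes down to a per-pixel depth comparison: wherever the real world is closer to the viewer than the virtual object, the virtual pixel is hidden. A minimal sketch with plain nested lists standing in for depth maps (names are illustrative):

```python
# Per-pixel occlusion test: the virtual object is hidden wherever the
# real-world depth (e.g. from a lidar depth map) is smaller than the
# virtual object's depth.

def occlusion_mask(virtual_depth, real_depth):
    """True where a real surface hides the virtual object, per pixel."""
    return [[r < v for v, r in zip(v_row, r_row)]
            for v_row, r_row in zip(virtual_depth, real_depth)]

virtual = [[3.0, 3.0], [3.0, 3.0]]  # virtual object 3 m away everywhere
real    = [[2.0, 4.0], [2.0, 4.0]]  # real wall at 2 m on the left half
print(occlusion_mask(virtual, real))  # [[True, False], [True, False]]
```

The left column of pixels is occluded by the wall at 2 m; the right column is not, because the real surface there is behind the virtual object.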


3 : After motion, light.

One key feature that makes a 3D object look real is accurate lighting and shadows. Nowadays, 3D models use PBR (Physically Based Rendering) materials. This means the object will look like a real object if the light is exactly the same as in the real world.
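The physical grounding of PBR can be illustrated with its simplest building block, the Lambertian diffuse term: brightness falls off with the cosine of the angle between the surface normal and the light direction. A hedged sketch (full PBR adds specular, roughness, and energy conservation on top of this):

```python
import math

# Lambertian diffuse term: reflected light is proportional to
# max(0, N . L), the cosine of the angle between the surface normal N
# and the light direction L (both unit vectors).

def lambert(normal, light_dir, albedo=0.8, intensity=1.0):
    """Diffuse reflectance for a single directional light."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return albedo * intensity * max(0.0, n_dot_l)

# Light hitting the surface head-on, then at 60 degrees:
print(lambert((0, 0, 1), (0, 0, 1)))  # 0.8
print(lambert((0, 0, 1), (0, math.sin(math.pi / 3), math.cos(math.pi / 3))))  # ~0.4
```

Because the formula is physical, the same material definition looks correct under any light, which is exactly why PBR assets slot into real-world footage so well.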

In cinema, when filming a shot that will later include 3D models, they add to the shot a simple sphere with the exact material the models will have; the way it reacts to light serves as a reference for the 3D artists who will integrate the models. The sphere is then rotoscoped out of the shot.

In real time, this reference lighting has to be inferred from the light behavior of the real world. I won't go into details on how this is related to physics, but you get the picture.
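A crude version of that inference is to average the luminance of the camera frame and use it to drive a virtual ambient light. Real AR frameworks go much further (directional estimates, spherical-harmonic environment probes); this sketch and its names are purely illustrative:

```python
# Crude real-time light estimate: mean luminance of the camera frame,
# using the standard Rec. 709 luma weights for RGB channels in [0, 1].

def ambient_intensity(frame):
    """Average luminance of an RGB frame (rows of (r, g, b) tuples)."""
    pixels = [px for row in frame for px in row]
    luma = [0.2126 * r + 0.7152 * g + 0.0722 * b for r, g, b in pixels]
    return sum(luma) / len(luma)

# A frame that is half pure white, half pure black:
print(ambient_intensity([[(1.0, 1.0, 1.0), (0.0, 0.0, 0.0)]]))  # ~0.5
```

Even this naive estimate makes a virtual object dim when you walk into a dark room, which already goes a long way toward selling the illusion.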

Conclusion :

Augmented reality is mostly about the light part of physics (except for the orientation sensors).