The Apple Vision Pro has quickly made its mark: just days after launch, real-world AR videos show people skateboarding, navigating subways, driving Teslas, and more while wearing Apple’s eagerly awaited spatial computing device.
As early adopters explore the headset, they are pushing the boundaries of what is possible with face-worn computers. From strolling through city streets to operating vehicles, users are finding new ways to fold the technology into their daily lives.
Apple has implemented a sophisticated mechanism within the Vision Pro’s front display to render a stereoscopic 3D image of the wearer’s eyes. It places a lenticular lens atop the OLED panel, which can present different images at different viewing angles. According to iFixit, the Vision Pro uses this setup to generate a “3D face via the stereoscopic effect” by slicing, interpolating, and displaying facial images at minutely different angles for the left and right eyes.
However, displaying multiple images on the same panel necessitates a reduction in resolution, which is why the result looks somewhat blurry, as iFixit explains. A second lens sitting atop the lenticular layer widens the 3D view it produces, preventing the wearer’s eyes from appearing squeezed too close to the nose.
This layered approach serves to diminish the visibility of the wearer’s eyes via the EyeSight feature while also narrowing the viewing angles. iFixit has supplemented their findings with a video demonstrating the OLED display beneath the two layers, revealing artifacts at the edges of the screen, which are only visible when the additional layers are removed. Further details on the external display of the Vision Pro can be found in the linked teardown post.
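The slicing-and-interleaving idea iFixit describes can be illustrated with a simplified sketch. This is not Apple’s implementation; it is a hypothetical two-view version (the real display reportedly slices many views) showing how alternating pixel columns, each directed by the lenticular lens toward a different angle, let one panel serve two eyes while halving horizontal resolution per view:

```python
import numpy as np

def interleave_views(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Interleave two rendered views column-by-column.

    Hypothetical two-view simplification: the lenticular lens steers
    alternating pixel columns toward different viewing angles, so each
    view keeps only half the horizontal resolution -- consistent with
    iFixit's observation that the shared panel looks blurry.
    """
    assert left.shape == right.shape
    out = np.empty_like(left)
    out[:, 0::2] = left[:, 0::2]   # even columns -> left-eye angle
    out[:, 1::2] = right[:, 1::2]  # odd columns  -> right-eye angle
    return out

# Toy 4x4 single-channel "images" to make the interleave visible
left = np.zeros((4, 4), dtype=np.uint8)       # all 0s
right = np.full((4, 4), 255, dtype=np.uint8)  # all 255s
combined = interleave_views(left, right)
print(combined[0])  # alternating 0 and 255 across the row
```

Each eye view here occupies only every other column, which is the resolution trade-off the teardown points out.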
Spatial computing and how Apple is popularising it
The concept of spatial computing has been circulating for a couple of decades, but with the introduction of the Vision Pro headset, its significance in the tech world is becoming increasingly apparent. But what exactly does spatial computing entail, and why is it important? Let’s delve into it.
What is spatial computing? While it may seem like a novel idea, the term “spatial computing” was coined as far back as 2003 by researcher Simon Greenwold. He defined it as “human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces.” In simpler terms, this technology merges the digital and physical realms by overlaying computer interfaces onto the real world. Instead of gazing at a screen, users engage with digital elements and information using natural movements within 3D space.
Examples of spatial computing include receiving visual driving directions projected onto the road via your car’s windshield, collaborating with colleagues in a virtual office metaverse using avatars, or playing Pokémon Go, where digital characters are superimposed onto the real world for interaction.