Redeem Tomorrow

Leviathan Wakes: the case for Apple's Vision Pro

VR is dogshit.

The problem is the displays. The human eye processes vast amounts of data every second, and since I took the plunge into modern VR in 2016, my eyes have always been underwhelmed.

They call it the “screen door effect”: we readily perceive the gaps between a headset’s pixels, and the cumulative effect as you swing your vision around is the impression that you’re seeing the virtual space through a mesh. Obviously, this breaks the illusion of any “reality” once the initial charm has worn off.

Pixel density has improved slightly over the years, but the quality of the image itself remains grainy and unpersuasive.

This problem is one of bare-knuckle hardware. It’s expensive to use displays that are small and dense enough to overcome the headwinds of our skeptical eyes. But even if those displays were plentiful and cheap, the hardware needed to drive all those pixels efficiently is your next challenge. Today’s most powerful graphics processing cards are themselves as large as a VR headset, with massive cooling fans and heatsinks.
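To get a feel for the scale of the problem, here’s a rough back-of-envelope sketch in Python. The figures are assumptions for illustration: dual panels around 3660×3200 per eye (in the neighborhood of Vision Pro’s reported ~23 million total pixels) refreshing at 90 Hz, compared against a single 1440p desktop monitor at 60 Hz.

```python
# Back-of-envelope pixel throughput comparison.
# All resolutions and refresh rates below are illustrative assumptions.

headset_pixels = 2 * 3660 * 3200   # two panels, one per eye (~23M pixels)
desktop_pixels = 2560 * 1440       # a single 1440p monitor

headset_throughput = headset_pixels * 90   # pixels per second at 90 Hz
desktop_throughput = desktop_pixels * 60   # pixels per second at 60 Hz

print(f"headset: {headset_throughput / 1e9:.1f} Gpx/s")  # ~2.1 Gpx/s
print(f"desktop: {desktop_throughput / 1e9:.2f} Gpx/s")  # ~0.22 Gpx/s
print(f"ratio:   {headset_throughput / desktop_throughput:.0f}x")
```

Under these assumptions, the headset has to push roughly an order of magnitude more pixels per second than an ordinary desktop setup, on a mobile power and thermal budget. That gap is the heart of the hardware problem.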

Of course, mobile phone vendors can’t indulge that size. By contrast to the gluttonous power budgets of a desktop gaming PC, phones have to be lean and efficient. They have to fit comfortably in the pocket, and last all day and into the evening.

Against these constraints, Apple has been engineering custom silicon since their 2008 acquisition of PA Semi, an independent chip design firm. As Apple’s lead in mobile phone performance grew—including graphics processing capable of driving high-end visuals on premium, high-density screens—one thing became clear to me:

Eventually their chips would reach a point where they could comfortably drive an ultra-high density headset that defeated all the obstacles faced by cheaper entrants in the VR race.

After years of quiet work beneath the surface, Apple’s AR leviathan is upon us, barely at the border of the technologically possible.

And it’s a good thing it costs $3,500

No, hear me out.

VR/AR/XR is being held back by the rinky-dink implementations we’re seeing from Oculus, Vive and others. There’s not enough beef to make these first-class computing devices. It’s not just that they’re underpowered, it’s also that they’re under-supported by a large ecosystem.

By contrast, Apple said “damn the cost, just build the thing, and we’ll pack every nicety that comes with your Apple ID into the bargain.”

So it has it all:

  • LiDAR sensor
  • God knows how many cameras
  • Infrared sensors to detect eye movement
  • An iris scanner for security
  • A display on the front portraying your gaze so that you can interact with humanity
  • Ultra-dense 4K+ displays for each eye
  • Custom silicon dedicated to sensor processing, in addition to an M2 chip

How many of these are necessary?

Who knows. But developers will find out: What can you do with this thing? There’s a good chance that, whatever killer apps may emerge, they don’t need the entire complement of sensors and widgets to deliver a great experience.

As that’s discovered, Apple will be able to open a second tier in this category and sell you a simplified model at a lower cost. Meanwhile, the more they manufacture the essentials—high-density displays, for example—the higher their yields climb and the better their margins become.

It takes time to perfect manufacturing processes and build up capacity. Vision Pro isn’t just about 2024’s model. It’s setting up the conditions for Apple to build the next five years of augmented reality wearable technology.

Meanwhile, we’ll finally have proof: VR/AR doesn’t have to suck ass. It doesn’t have to give you motion sickness. It doesn’t have to use these awkward, stupid controllers you accidentally swap into the wrong hand. It doesn’t have to be fundamentally isolating.

If this paradigm shift could have been kicked off by cheap shit, we’d be there already. May as well pursue the other end of the market.

Whether we need this is another question

The iPhone was an incredible effort, but its timing was charmed.

It was a new paradigm of computing that was built on the foundations of a successful, maturing device: a pocket-sized communicator with always-on connectivity. Building up the cell phone as a consumer presence was a lengthy project, spanning more than two decades before the iPhone came on the scene.

Did we “need” the cell phone? Not at first. And certainly not at its early prices. It was a conspicuous consumption bauble that signaled its owner was an asshole.

But over time, the costs of cellular connectivity dropped, becoming reasonable indulgences even for teenagers. Their benefits as they became ubiquitous were compelling: you could always reach someone, so changing plans or getting time-sensitive updates was easier than it had ever been in human history. In emergencies, cellular phones added a margin of safety, allowing easy rescue calls.

By the time iPhone arrived, the cell phone was a necessity. As, for so many today, is the smartphone.

The personal computer is another of today’s necessities that was not always so. Today, most people who own computers own laptops. The indulgence of portability, once upon a time, was also reserved exclusively for the wealthy and well-connected. Now it is mundane, and a staple of every lifestyle from the college student to the retiree.

Will augmented reality follow this pattern?

I’ll argue that the outcome is inevitable, even if the timing remains open to debate.

Every major interactive device since the dawn of consumer computing—from the first PCs to the Game Boy to the iPhone—has emphasized a display of some sort. Heavy cathode-ray tubes, ugly grey LCD panels, full-color LEDs, today’s high-density OLED panels… we are consistently interfacing through the eyes. Eyes are high-throughput sense organs, and we can absorb so much through them so quickly.

By interposing a device between the eyes and the world around us, we arrive at the ultimate display technology. Instead of pointing the eyes in the direction of the display, the display follows your gaze, able to supplement it as needed.

An infinite monitor.

The problem is that this is really fucking hard. Remember, we started with CRT displays that weighed at least 20 pounds. Miniaturization capable of putting a display over the eye comfortably is even now barely possible. Vision Pro requires a battery pack connected by a cable, which is the device’s sole concession to fiddly bullshit. Even then, it can only muster two hours on a single charge.

Apple is barely scraping into the game here.

Nevertheless, what starts as clunky needn’t remain so. As the technology for augmented reality becomes more affordable, more lightweight, more energy efficient, more stylish, it will be more feasible for more people to use.

In the bargain, we’ll get a display technology entirely unshackled from the constraints of a monitor stand. We’ll have much broader canvases subject to the flexibility of digital creativity, collaboration and expression.

What this unlocks, we can’t say.

The future is in front of your eyes

So here’s my thing:

None of the existing VR hardware has been lavished with enough coherent investment to show us what is possible with this computing paradigm. We don’t know if AR itself sucks, or just the tech delivering it to us.

Apple said “let’s imagine a serious computer whose primary interface is an optical overlay, and let’s spend as much as it costs to fulfill that mandate.”

Unlike spendy headsets from, say, Microsoft, Apple has done all the ecosystem integration work to make this thing compelling out of the box. You can watch your movies, you can enjoy your photos, you can run a ton of existing apps.

Now we’ll get to answer the AR question with far fewer caveats and asterisks. The display is as good as technologically possible. The interface uses your fingers, instead of a goofy joystick. The performance is tuned to prevent motion sickness. An enormous developer community is ready and equipped to build apps for it, and all of their tools are mature, well-documented, and fully supported by that community.

With all those factors controlled for, do we have a new paradigm here?

What can we do with a monitor whose size is functionally limitless?

I’m eager to see how it plays out.

