Redeem Tomorrow
I used to be excited about the future. Let's bring that back.

Posts 11–20 of 46

  • The privilege of knowledge work in climate crisis

    I’d been hearing warnings for days.

    The storm would be serious. I should take precautions. My neighbors said it. People in town said it. The radio said it.

    But this wasn’t about the recent, torrential summer rains that scoured the northeast, flooding towns, destroying homes and carving away roads.

    This was six and a half months ago, as a strange winter storm bore down.

    That day started with intense rains and wind, and felt oddly warm. Things were closer to the t-shirt weather I’d expect in Vermont’s October, despite being a week from the New Year. Still, the strength of the storm knocked out power as early as 8 AM.

    By noon, grim state officials warned that devastation was substantial. More than 70,000 homes had lost power. Unlike other storms, we would have to be prepared for restoration to be a multi-day effort. Worse yet, the day’s events weren’t over. Things would get even worse as evening came and temperatures dropped. We should plan to be somewhere safe no later than 4 PM.

    I took the opportunity to stock up on sandwich fixings and other foods that wouldn’t need a working kitchen.

    Then the turn came. I’ve never seen anything like it.

    At 4 PM, you could be out comfortably in shorts. By 6 PM, the temperature had dropped to below freezing, snow piling up fast.

    At home, we scrambled to insulate the windows with towels, trapping what meager heat we could. Without power, the boiler couldn’t run. If the house’s internal temperature dropped to 20 degrees Fahrenheit, we’d run the risk of pipes bursting, damaging everything.

    A large pile of wood was fed continuously into the wood stove. In the mornings, I’d fight a 120v generator, yanking its pull cord desperately. After 20 minutes of these ministrations, it would keep the refrigerator running and charge our devices.

    In total, my house spent six days without power.

    At Foo Camp, Mika McKinnon had once advised that you survive disaster scenarios with the help of your neighbors. And it’s true: the kindness of neighbors whose generator could power their boiler and well pump provided a few much-needed showers, as essential to sanity as hygiene.

    That week felt like one of the longest in my life. The house needed care and protection, lest it succumb to the cold. The cats needed the same. I was stuck with the situation.

    And yet, I was lucky

    As a knowledge worker, whether I’m drawing a salary or working freelance, I have abundant flexibility. I can move my schedule around, take time off, and work where I want to.

    A week without power was a week where I could focus on the problems right in front of me with no economic penalty, and no work-related distractions. As a salaried employee of a public technology firm, in fact, I’d already been given that Christmas-to-New Year’s time off.

    I needed every minute of the day, either to work or to rest. Keeping a house working amidst infrastructure failure is a serious job. It required hauling wood from outside to inside, and managing buckets and improvised plumbing for the toilet. The generator needed tending on a regular schedule to keep the food in the refrigerator safe, along with errands to replace the propane it consumed for fuel.

    But I had those minutes.

    The structural feedback loop of luck

    Last week was no picnic, either. After 16 hours of rain, the drainage system in my basement gave up. Water came pouring from two floor drains, swiftly filling the basement to a depth of two inches at its worst. Holding the line against the water was a whole-day project of pumps and wet/dry vacuum cleaners, and eventually the water started winning.

    But even in this, I was among the lucky. Just minutes down the road, homes and businesses too close to the river would be properly flooded, and no amount of pumping could protect them. The waters would destroy or contaminate everything they touched, from bedding to appliances to furniture, inventory and clothing. The basics of everyday life and commerce, lost overnight.

    Further on, some places got it even worse, with houses torn loose from their foundations. There are now streets lined with ruined household possessions, caked with mud, piled eight feet high.

    None of the people experiencing these losses did anything to deserve them. They were just living where their economic circumstances allowed, in a historically tight housing and rental market.

    As a knowledge worker, I had a little exposure to the extreme prosperity of the innovation economy. I still have to work for a living, but I got just enough to give me a down payment on a house, and broad choices about where to settle down.

    Fearing exactly this category of weather, I chose a house with characteristics that seemed resilient. In this case, being on high ground, with slopes and other features that would draw water away from the most important structures.

    Water pooled in the basement because the ground was saturated by continuous rainfall, not because the immediate surroundings were flooding.

    Just as it seemed the battle against the basement drains couldn’t be won, experts arrived to install a sump pump. The proposition cost $4,000 and involved a jackhammer punching a hole in the floor. The tradesmen had to deploy a temporary solution at first, so they could get themselves back home—rains were intensifying more quickly than we’d all expected.

    But by 6 PM, drains were draining again, a sump pump was clearing the water, and there was no further danger to my house.

    The day was stressful and exhausting.

    But that night, I slept in my bed. I didn’t have to abandon my house to the mud or stay in a shelter.

    Once again, I had the flexibility in the day to do battle with the elements. When it came to expert help with my crisis, ready access to credit made it an easy decision: spend $4k now to protect the long-term integrity of the house, and keep it habitable in the immediate term.

    Not everyone has this flexibility.

    Renters don’t get to make this category of decision. They’re at the whims of their landlord, who may have a very different decision-making calculus around whether to preserve immediate habitability, versus taking an insurance payout.

    Worse still, not everyone can absorb a sudden $4,000 expense. For me that still hurts, but the math works out: money put into the house increases its value. I can tell any future buyer that the property is now resilient even against hundred-year rainstorms.

    The climate comes for us all, but first it comes for the vulnerable

    Things have been edgy since the flood. The threat of more rains kept anxieties simmering. At last we got some serious sunny days to raise our spirits.

    Only for the air quality index to soar, adding haze to the skies and to our lungs. Forest fires in Canada are ongoing.

    As a knowledge worker, I’m handling my client work from inside. My economic leverage allowed the addition of heat pumps to the house, so I can keep things cool without opening the windows. Air filtration in multiple rooms keeps fine particles out of my lungs.

    Once again, I am annoyed and inconvenienced, but overall I am safe.

    For those whose jobs keep them outside—work in the trades, agriculture, and countless others—the air poses more serious problems that can impair breathing and, eventually, long term health.

    Luxury in the climate crisis is maintaining routines and self-regulation

    Someday, I must assume the climate crisis will come for me. It probably won’t be flooding. I’m hopeful it won’t be fire. Extreme winds are probably my biggest threat in the future. Moreover, I’m not independently wealthy: I have to work for a living.

    Still, my economic position as a knowledge worker gives me substantial resilience against many other threats. Let’s recap:

    • Past access to prosperity granting me more options for where to live
    • Schedule flexibility
    • The ability to create serious economic impact regardless of where I’m located geographically
    • Economic leverage to make climate-related improvements to my house, even at a moment’s notice

    This stuff leaves me likelier to remain in the driver’s seat of my life, even with the variety of curveballs the climate throws at us every year.

    The more insulated from these consequences I am, the more likely I am to be well-rested, able to maintain my health and continue making good decisions. I’ve been absolutely wrecked by the events of the last week. Limiting the damage from water took days of effort. Mold is serious shit, and I’m deathly allergic to it.

    But I’ve been merely inconvenienced. In between ripping out basement carpet, I could maintain progress with my business, continue meeting with clients, continue doing work.

    I can’t imagine how much harder it would be to be displaced from my home—especially with the lingering effects of Covid adding risk to sharing shelter with others. I can’t imagine losing my everyday property. Seeing my clothes ruined, my kitchen unusable, my house unlivable. I can’t imagine trying to keep the economic wheels turning with those pressures on my back.

    Ten years ago, in reference to McIntosh, I took some time to unpack my own knapsack. While there is much there I would likely say differently today, I’m back to the same conclusion:

    I feel fortunate within my circumstances, and feel a responsibility to others who don’t share my advantages.

    These crises will continue. We’ve just had the hottest week on record. Whether it’s extremes of temperature or intense weather, whether it’s immediate natural disaster or distant fires polluting our air, whether it’s new invasive life or even disease—the effects of climate change are now constantly upon us.

    People are hurting. These events are callous and damaging. The immediate stress and exhaustion are serious problems, but there’s long term trauma to contend with as well.

    We must also understand elements of the growing labor movement in this context. A primary issue for UPS drivers is an aging fleet of trucks not equipped for modern heatwaves. Heat is harming farmworkers as well. Entire categories of work are becoming consistently unsafe as a consequence of climate change.

    You don’t have to take my word for it, because none of this is isolated. There’s been catastrophic flooding of New York City’s subways, plus heat waves and forest fires in the Bay Area. Tangible evidence is all over the place. And however it manifests itself, some workers are exposed, while knowledge workers have more options.

    Justice and decency demand we look this problem in the eye, in proactive solidarity. We owe each other better than letting the most vulnerable among us simply absorb the brunt of these consequences.

    This is just the beginning. I can and do take local steps to support my community, but I don’t know what to do about just how big the problem seems.

    If you have any ideas, I hope you’ll reach out and share them with me.


  • Cyberspace: The case for Augmented Reality

    In the ’90s, we overused the term “cyberspace.”

    We needed something to explain the increasing incursion of the digital world into our everyday reality. Computers and the internet were becoming more central to our lives, and in the process, becoming a destination.

    We needed a phrase to describe the abstract plane of existence that could stitch together far-flung computer systems, and their associated humans. If we were “going online,” where exactly would we arrive?

    It was cyberspace.

    As time went on, the term grew passé. The internet went from curiosity to compulsion to mundane component of everyday life. “Cyberspace” was an artifact of a more naive time, and we rolled our eyes.

    But today I want to argue that we need this word. We need it to understand the perils and opportunities of the future. We need it to understand the motivations of enormous corporations who are investing in the next stage of our explorations into the digital realm.

    Cyberspace was here all along. We just stopped giving it a name.

    Cyberspace, defined

    Let’s start with the basics.

    You and I were born in real space. It has fixed and stubborn rules. Gravitation, thermodynamics, chemistry, and perhaps most importantly, locality.

    In real space, you are where you are, and cannot be anywhere else except by crossing all the places in between, which can take quite some time. Real space is governed by scarcity: you have one favorite shirt, unless through some miracle of prescience you grabbed a second one off the rack, and then you’d have to pay for both of them.

    Cyberspace has different rules. Here’s the most important:

    Information is physical.

    The underlying substrate of cyberspace lives here in real space. Cyberspace emerges from a global mesh of computing devices. If one of those devices is unplugged or crashes its hard drive, then whatever components of cyberspace that computer is responsible for will blip out of existence. If the computer is slow, its rendition of cyberspace will be, too. Meanwhile, the fabric joining cyberspace must also be physical: electrical signals, pulses of light, or radio waves. It takes time for these signals to propagate. Nothing is instant, merely fast: nearly the speed of light.

    Still, the speed of light is so much faster than our minds are equipped to perceive. This creates a functional illusion of non-locality in cyberspace.

    In other words, you can be anywhere instantly.

    In practice, there’s an asterisk. Even the speed of light adds up over time. If there are too many hops between you and your destination, you may start to perceive the physical distance between computers as a delay.

    But overall, the effect of cyberspace is to collapse global distances. Through cyberspace, you and a team spread across continents can comfortably share the same information and resources. You can communicate in real time. The magic of the internet was, and remains, our ability to reach people and information anywhere in the world.

    Meanwhile, everything copies for basically free. Once again, the real space underpinnings show up here: you need a physical location that stores data. But with that requirement satisfied, everything can be copied at virtually no cost. Entire business revolutions were built on this exact premise.

    That’s just the start, though.

    Reefs of the imagination

    Because you can create in cyberspace.

    Through these information systems, we can create bubbles within cyberspace that are subject to rules we choose. Quake II, Fortnite, Twitter, World of Warcraft… all of these are places where the infinite plasma of cyberspace is sculpted by the imagination into reliable structures. We can enter these bubbles and share experiences with one another, creating memories, building relationships and honing skills.

    In 2023, we are surrounded by vast reefs of the imagination built in cyberspace. They are games, social spaces, markets, and clubhouses.

    Tens of millions of knowledge workers wake up every day to architect, design, build and maintain these structures. It’s a complex job.

    Yet, every year it becomes a little easier. While literacy for the required topics expands, the complexity of the tools becomes more and more manageable.

    Portals into cyberspace

    Since the dawn of consumer computing, the screen has been a primary portal into cyberspace.

    What began as a few thousand pixels has exploded into the millions on high-end displays. Display technology is of particular interest to gaming, which has often occupied the bleeding edge of cyberspace possibility. Gamers spend thousands of dollars for large, pixel-packed displays and the high-end video cards that can drive them. Rich detail is emphasized, but so is frame rate: the number of images per second a simulation can push onto the screen. 60 frames per second was once the ideal, but now high-end rigs shoot for 120.

    Why go to this trouble? Because gamers are working to thin the barrier between real space and cyberspace.

    This is the point where some find it easy to moralize about the virtues of real space over cyberspace. To declare these interests antisocial, or even corrosive. While I am the first to find fault with all manner of radicalizing internet subcultures, my position on pursuing visual fidelity in simulation is that it’s value neutral, even if gratuitous in some cases. This is a form of craft, magic, and entertainment, and all of these things can exist on a spectrum from the toxic to the beneficial.

    [Diagram: a person gazing through the narrow window of a monitor into cyberspace]

    Still, even the biggest monitor and most powerful video card have limitations. They are, at best, a small window onto a much larger universe.

    [Diagram: a person surrounded by a bubble of cyberspace]

    Augmented Reality (AR) technology—or as Apple prefers it, “Spatial Computing”—would replace the window with an all-encompassing membrane. Cyberspace is all around you, a soap bubble over real space.

    Which means the cyberspace powers of instant information, non-local communication, creativity, and unlimited duplication can leak further than ever into real space. Meanwhile, the scale of our relationship to cyberspace changes. We can be enveloped by it, and we can make it fit into contexts previously reserved for things like furniture.

    Anything digital can be as big as your couch.

    Or as big as your room.

    The problem is that pulling this off in a way that’s persuasive is exceedingly difficult. We don’t have technology like holograms that can project cyberspace onto real space. The best we can do right now is to ingest a bubble of real space into cyberspace, and then draw pictures over top of it.

    Mostly, attempts to do this have been shoddy at best. It’s just a fundamentally hard problem. The eye is not easy to fool, and even now it’s barely possible to get the components small enough to fit comfortably on the face.

    [Diagram: an ostrich shoves its head into a purple hole, inside a bubble of cyberspace]

    So instead, companies have been selling virtual reality, which like an ostrich, asks you to shove your head in a dark hole while your ass hangs out.

    It’s not great technology. It’s neat for an hour, but the novelty wears off quickly. For one thing, it’s wildly inconvenient. You can’t actually see any of the real space artifacts that will absolutely kick your ass if you trip over them.

    Worse still is that the images aren’t quite nice enough to be worth becoming your entire universe. The video is kind of grainy and, for many, causes motion sickness.

    What comes next

    Apple put us all on notice this week that we’re now on the threshold of proper AR. The era of ostrich malarkey is at an end. They have the technology all fitting together such that it can persuade our eyes and minds.

    It’s not perfect.

    For one thing, the first model costs $3500. But before you assume it’s doomed, remember the first Macintosh cost almost twice that after adjusting for inflation. So most people didn’t own the first Macintosh.

    But many more owned later models. And more still owned the Mac’s Microsoft-driven cousins. It took about 16 years to make the shift, but the GUI/screen/mouse portal to cyberspace became affordable and ubiquitous, helped along by advances in computing and improved efficiencies in mass production.

    AR is going to need its own advances, because consensus among reviewers is that while Vision Pro is doing something impressive, it’s also quite heavy.

    It is interesting that Apple chose the moment when the technology integration was good enough, but not perfectly comfortable, nor particularly affordable. It suggests they think there’s a solid early adopter crowd they can nonetheless make happy, that the state of the technology won’t poison the well.

    I suppose Tim Cook did learn that lesson the hard way.

    Skeptics with an eye toward history could use this opportunity to compare the Vision Pro with a number of failed CD-based consoles of the ’90s. Hell, Apple even had one: Pippin.

    But what’s different here is that Apple has spent so much effort building a unified ecosystem of content and services that merely implementing their own apps could make the Vision Pro compelling to the sort of person who can afford to buy it.

    Meanwhile, with so many frameworks shared with iOS, there’s an enormous community of developers who already know the basics of building for this platform. There’s already a library of apps ready to run. If this truly is the next paradigm of computing, plenty of devs will enjoy the shortest learning curve in history while transitioning to it. They already know the programming language, the APIs and design patterns, even the quirks of the platform vendor’s distribution agreements.

    In short, I think this chapter of computing really is Apple’s to lose, and that their track record is strong enough to assume they really have their arms around the technology challenges.

    At least, that’s what the reviews are suggesting. Gadget reviewer Marques Brownlee summarizes by saying it has the best “passthrough” (of visual surroundings) he’s seen in this category of device. And that it’s kind of heavy.

    So: Not perfectly comfortable, but it can produce the illusion of reality on a persuasive level. Which means soon enough, through ongoing miniaturization, it’ll be both comfortable and persuasive, and then it’s over for the computer monitor.

    The next few years will be about figuring out a whole new chapter in user interface and interaction design. We’re going to see novel forms of creative expression that both target this technology, and are enabled by it. I can imagine a whole new chapter of CAD software, for example, along with the next generation of 3D/spatial painting products in the vein of Google’s Tilt Brush.

    I also think this chapter of computing is really going to freak some people out. This will occupy a spectrum from the productive to the sanctimonious, but I think all transformational technology is worth a skeptical eye.

    There are some who will instinctively react with a deep, limbic revulsion against being enveloped in this way. Everyone is allowed a preference, though mine would not match yours.

    I’m not worried about privacy or platform integrity with Apple, but experience with cell phones shows us that the lower end of the market is a free-for-all in terms of privacy-eroding crapware. AR is loaded with cameras and sensors, and if Apple’s approach is to be believed, eye tracking is essential to making the illusions of AR persuasive. That’s a lot of potential for abuse of everything from biometrics to images inside the home. Consumers should stay vigilant against these threats.

    More abstractly, it’s worth asking about the long term consequences of immersion in cyberspace. A sufficiently advanced version of this technology could create persuasive images that left reality feeling lacking, depending on your reality.

    More dramatic still: what does it mean to be influenced on this level? What can I make you believe with AR that would be impossible with film or prose? Forget advertising… imagine political manipulation and hate movements. And what about harassment? That’s all sobering shit.

    So I think blind optimism may be unwise. This really is powerful stuff, and any serious power comes with sharp edges.

    On the other hand, I’m excited. Cyberspace has been a part of my life since I was seven years old. I value my adventures and accomplishments there. I have spent decades honing skills for expression and creativity in that strange plane of possibility.

    I can’t wait to see what is creatively possible for us on this new frontier. I want to see what can be made here that’s good, positive and optimistic. I want to know how we can be closer, share more experiences, even if we’re far apart. There are so many ethical lessons this generation of computing professionals can draw from. We’ve learned so much the hard way. I want to see how we can use this power for good.

    And, more practically: I want to know if I can avoid ever going to a fucking office again.

    However you lean, I would buckle up. Something big is coming.


  • Leviathan Wakes: the case for Apple's Vision Pro

    VR is dogshit.

    The problem is the displays. The human eye processes vast amounts of data every second, and since I took the plunge into modern VR in 2016, my eyes have always been underwhelmed.

    They call it “screen door effect:” we readily perceive the gaps in between a headset’s pixels, and the cumulative effect as you swing your vision around is the impression you’re seeing the virtual space through a mesh. Obviously, this breaks the illusion of any “reality,” once the initial charm has worn off.

    Density has slightly improved over the years, but the quality of the image itself remains grainy and unpersuasive.

    This problem is one of bare-knuckle hardware. It’s expensive to use displays that are small and dense enough to overcome the headwinds of our skeptical eyes. But even if those displays were plentiful and cheap, the hardware needed to drive all those pixels efficiently is your next challenge. Today’s most powerful graphics processing cards are themselves as large as a VR headset, with massive cooling fans and heatsinks.

    Of course, mobile phone vendors can’t indulge that size. By contrast to the gluttonous power budgets of a desktop gaming PC, phones have to be lean and efficient. They have to fit comfortably in the pocket, and last all day and into the evening.

    Against these constraints, Apple has been engineering custom silicon since their 2008 acquisition of PA Semi, an independent chip design firm. As Apple’s lead in mobile phone performance grew—including graphics processing sufficient to drive high-end graphics on premium, high-density screens—one thing became clear to me:

    Eventually their chips would reach a point where they could comfortably drive an ultra-high density headset that defeated all the obstacles faced by cheaper entrants in the VR race.

    After years of quiet work beneath the surface, Apple’s AR leviathan is upon us, barely at the border of the technologically possible.

    And it’s a good thing it costs $3,500

    No, hear me out.

    VR/AR/XR is being held back by the rinky-dink implementations we’re seeing from Oculus, Vive and others. There’s not enough beef to make these first-class computing devices. It’s not just that they’re underpowered, it’s also that they’re under-supported by a large ecosystem.

    By contrast, Apple said “damn the cost, just build the thing, and we’ll pack every nicety that comes with your Apple ID into the bargain.”

    So it has it all:

    • LiDAR sensor
    • God knows how many cameras
    • Infrared sensors to detect eye movement
    • An iris scanner for security
    • A display on the front portraying your gaze so that you can interact with humanity
    • Ultra-dense 4K+ displays for each eye
    • Custom silicon dedicated to sensor processing, in addition to an M2 chip

    How many of these are necessary?

    Who knows. But developers will find out: What can you do with this thing? There’s a good chance that, whatever killer apps may emerge, they don’t need the entire complement of sensors and widgets to deliver a great experience.

    As that’s discovered, Apple will be able to open a second tier in this category and sell you a simplified model at a lower cost. Meanwhile, the more they manufacture the essentials—high-density displays, for example—the higher their yields will become and the more their margins will increase.

    It takes time to perfect manufacturing processes and build up capacity. Vision Pro isn’t just about 2024’s model. It’s setting up the conditions for Apple to build the next five years of augmented reality wearable technology.

    Meanwhile, we’ll finally have proof: VR/AR doesn’t have to suck ass. It doesn’t have to give you motion sickness. It doesn’t have to use these awkward, stupid controllers you accidentally swap into the wrong hand. It doesn’t have to be fundamentally isolating.

    If this paradigm shift could have been kicked off by cheap shit, we’d be there already. May as well pursue the other end of the market.

    Whether we need this is another question

    The iPhone was an incredible effort, but its timing was charmed.

    It was a new paradigm of computing that was built on the foundations of a successful, maturing device: a pocket-sized communicator with always-on connectivity. Building up the cell phone as a consumer presence was a lengthy project, spanning more than two decades before the iPhone came on the scene.

    Did we “need” the cell phone? Not at first. And certainly not at its early prices. It was a conspicuous consumption bauble that signaled its owner was an asshole.

    But over time, the costs of cellular connectivity dropped, becoming reasonable indulgences even for teenagers. Their benefits as they became ubiquitous were compelling: you could always reach someone, so changing plans or getting time-sensitive updates was easier than it had ever been in human history. In emergencies, cellular phones added a margin of safety, allowing easy rescue calls.

    By the time iPhone arrived, the cell phone was a necessity. As, for so many today, is the smartphone.

    The personal computer is another of today’s necessities that was not always so. Today, most people who own computers own laptops. The indulgence of portability, once upon a time, was also reserved exclusively for the wealthy and well-connected. Now it is mundane, and a staple of every lifestyle from the college student to the retiree.

    Will augmented reality follow this pattern?

    I’ll argue that the outcome is inevitable, even if the timing remains open to debate.

    Every major interactive device since the dawn of consumer computing—from the first PCs to the GameBoy to the iPhone—has emphasized a display of some sort. Heavy cathode-ray tubes, ugly grey LCD panels, full-color LEDs, today’s high-density OLED panels… we are consistently interfacing through the eyes. Eyes are high-throughput sense organs, and we can absorb so much through them so quickly.

    By interposing a device between the eyes and the world around us, we arrive at the ultimate display technology. Instead of pointing the eyes in the direction of the display, the display follows your gaze, able to supplement it as needed.

    An infinite monitor.

    The problem is that this is really fucking hard. Remember, we started with CRT displays that were at least 20 pounds. Miniaturization capable of putting a display over the eye comfortably is even now barely possible. Vision Pro requires a battery pack connected by a cable, which is the device’s sole concession to fiddly bullshit. Even then, it can only muster two hours on a single charge.

    Apple is barely scraping into the game here.

    Nevertheless, what starts as clunky needn’t remain so. As the technology for augmented reality becomes more affordable, more lightweight, more energy efficient, more stylish, it will be more feasible for more people to use.

    In the bargain, we’ll get a display technology entirely unshackled from the constraints of a monitor stand. We’ll have much broader canvases subject to the flexibility of digital creativity, collaboration and expression.

    What this unlocks, we can’t say.

    The future is in front of your eyes

    So here’s my thing:

    None of the existing VR hardware has been lavished with enough coherent investment to show us what is possible with this computing paradigm. We don’t know if AR itself sucks, or just the tech delivering it to us.

    Apple said “let’s imagine a serious computer whose primary interface is an optical overlay, and let’s spend as much as it costs to fulfill that mandate.”

    Unlike spendy headsets from, say, Microsoft, Apple has done all the ecosystem integration work to make this thing compelling out of the box. You can watch your movies, you can enjoy your photos, you can run a ton of existing apps.

    Now we’ll get to answer the AR question with far fewer caveats and asterisks. The display is as good as technologically possible. The interface uses your fingers, instead of a goofy joystick. The performance is tuned to prevent motion sickness. An enormous developer community is ready and equipped to build apps for it, and all of their tools are mature, well-documented, and fully supported by that community.

    With all those factors controlled for, do we have a new paradigm here?

    What can we do with a monitor whose size is functionally limitless?

    I’m eager to see how it plays out.


  • Fandom: The grand unified business case for developer experience

    Once upon a time, UX became a big buzzword in software.

    Reasonable enough: UX ensured the person who landed on your digital product could happily navigate the critical path that ended with your making money. It was discovered that frustrated or confused customers abandoned shopping carts. Good money was being spent on everything from ads to headcount to server infrastructure, all with the hope that customers would engage in a transaction. But pesky things like subjective human experience were getting in the way.

    Bad user experience increased customer acquisition costs, because some percentage of customers would bounce off of your product even if it absolutely could solve their problem, simply because they couldn’t be bothered to finish the transaction.

    Investments in UX sealed gaps in the funnel, but successful UX had more holistic consequences. My first paid development gig was running mobile for Hipmunk, a travel search startup that differentiated on user experience. Hipmunk’s investment in UX translated into powerful word of mouth marketing. Users gushed to friends about our Gantt chart UI and a helpful sorting algorithm that penalized high “agony” flights.

    Hipmunk was the best way to find a flight. It was fast and easy to pick something that would work great for your budget, comfort and plans. Skimming the Gantt chart made it much clearer how long any given trip was, and what kinds of layover penalties you endured in exchange for a cheaper ticket. It was easy to be confident because the best flights popped out of the results like beacons.

    Sadly, the margins on selling plane tickets were shit and Hipmunk took an exit. But the lessons here are germane to developer experience as well.

    Developer experience practice perceives a funnel, smooths its rough edges, and closes its gaps

    A developer tools business has a funnel like anyone else. At the top of the funnel are people with problems. At the bottom of the funnel are people who depend on your tool to solve those problems, thanks to a successful integration.

    There’s some Pareto stuff going on here: a smaller chunk of these developers will actually make you money. But with luck, the pleasure of using your tool will make most of this population happy to recommend you, regardless of individual spend. We’ll get back to this.

    First, let’s talk about the funnel.

    An abstract developer tools funnel

    [Figure: a funnel illustration with steps progressing from awareness, experimentation, success and fandom]

    In a developer product, your job is to shepherd developers through these steps:

    1. Awareness: “I understand how this tool can help me”
    2. Experimentation: “I am playing around with the tool to evaluate its power”
    3. Success: “The tool has helped me reach a goal”

    The more people who reach the end state of that funnel, the broader the network of people willing to recommend your product and help each other use it.

    So the first question becomes: How do we optimize that funnel?

    “DevEx: What Actually Drives Productivity” argues that developer experience depends on flow state, feedback loops and managing cognitive load. Their perspective is much more about corporate productivity than is relevant for our purposes here, but those levers are absolutely the correct ones.

    Cognitive load is a primary concern: if it seems painful or complex to solve a problem with your developer product, plenty will bounce off right at the top.

    We manage cognitive load a number of ways, for example:

    • Abstractions around complex tools with affordances that make it clear how to access their power (think: pre-built containers with a terminal interface, or a GUI for a debugger)
    • Docs that sequence learning to provide just enough context to make exciting progress, without overwhelming in the details
    • Example projects demonstrating a working implementation of your tools in a context that’s analogous to a developer’s end goal
    • Organized code snippets that demonstrate viable approaches to common tasks

    Every time you see a project on GitHub with a snippet of “setup” or “basic usage” code near the top of the readme, you see the management of cognitive load.
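
    As an illustration, here’s a hedged sketch of what such a “basic usage” block might look like. The package and API names are hypothetical; the point is that a few lines show the shape of a working integration before the reader has absorbed any deeper documentation.

    ```typescript
    // Hypothetical "basic usage" snippet for an imaginary SDK, acme-search.
    // Three visible steps: import, configure, get a result.
    import { AcmeClient } from "acme-search"; // hypothetical package

    const client = new AcmeClient({ apiKey: process.env.ACME_API_KEY });

    // One call, one visible result: the smallest possible success.
    const results = await client.search("flights to denver");
    console.log(`Found ${results.length} flights`);
    ```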

    By making cognitive load more manageable by more developers, you unlock the next phase of your funnel: a developer engaging in feedback loops with your product.

    Feedback is essential: it tells developers if their efforts are working. Modern UI development integrates this power deeply, as everything from web frameworks to native development for mobile now offers hot reloading and live previews.

    Feedback also shows up in the form of compiler warnings and errors, and this is central to modern workflows with typed languages. As you code, your editor can give you immediate feedback about the correctness of your implementation, warning you about a mismatch between what one function returns and what another accepts as input. This feedback is powerful, as it prevents entire categories of error that, in previous eras, would become crashes at runtime.
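
    A minimal TypeScript sketch of that kind of feedback (the function names here are invented for illustration):

    ```typescript
    // The compiler catches the mismatch before the code ever runs.
    function getUserId(): string {
      return "user_42";
    }

    function loadProfile(id: number): void {
      // ...look up a profile by numeric id
    }

    // Compile-time error: Argument of type 'string' is not assignable to
    // parameter of type 'number'. The editor flags this immediately, instead
    // of letting it surface later as a runtime crash.
    loadProfile(getUserId());
    ```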

    Error logging is yet more feedback, and the more detailed it is, the more developers can get help from an even more powerful source: other human beings. A developer community is an essential source of feedback, as not everything can be handled by automation. A strong developer community acts as an exception handler for issues that fall through the existing documentation and feature set. More than that, a strong community can help developers become better at their work, by providing suggestions and perspective.
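
    As a small, hypothetical sketch of why detail matters: an error that names what went wrong and how to fix it gives both the developer and the surrounding community something concrete to search for and respond to.

    ```typescript
    // Illustrative only; the SDK and its names are invented.
    class AcmeConfigError extends Error {
      constructor(field: string, hint: string) {
        // A specific, searchable message beats a bare "something went wrong".
        super(`Acme config error: missing "${field}". ${hint}`);
        this.name = "AcmeConfigError";
      }
    }

    function createClient(config: { apiKey?: string }) {
      if (!config.apiKey) {
        throw new AcmeConfigError(
          "apiKey",
          "Set the ACME_API_KEY environment variable or pass { apiKey } explicitly."
        );
      }
      return { apiKey: config.apiKey }; // ...construct the real client here
    }
    ```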

    Successfully facilitated, agreeable cognitive load and consistent feedback lead to flow state, which is the ultimate goal of your funnel. When developers are capable and confident enough to maintain consistent flow toward their goals, you’re almost guaranteed to arrive at a successful integration.

    But this is more than just a business outcome. This is a personal accomplishment on the part of the developer your tools have been enabling. The results of this can range from short term satisfaction all the way to career advancement and even wealth.

    As a field, computing has a lot of power, and having power shared with you can be an emotional experience.

    ‘Developer tools’ is culture, and culture breeds fandoms

    All of these represent culture:

    • A dog-eared script for King Lear
    • East Coast vs. West Coast rap beef
    • A Star Trek convention
    • Building Iron Man armor with a 3D printer
    • Your favorite programming language

    Fans are people who strongly identify with an element of culture, willing to invest time in the replication of that culture within the larger story of humanity. Fans make costumes, props, creative homages, and outright remixes of their favorite culture.

    To be a fan is to become part of something greater, because you think it’s worthy.

    So, of course, developer tools have fandoms. Make someone a success and you’ve probably earned a fan. This is a relationship that transcends mere business transactions. This is certainly belief in your mission: conviction that you’re creating something special. But it goes deeper, into the primordial relationship of trust and gratitude we feel toward the tools that help us grasp beyond our reach.

    A fan can become someone who will work actively themselves to improve the total of your developer experience. They might create tutorials, supplemental tools or code, even give their time to help other people become more fluent in your product. Fans will generate pull requests on your projects, and open issues that help you improve.

    En masse, fans can even re-shape larger technology culture to better adopt your stuff. This is the goal of the vaunted “bottom-up” or “developer-led” growth strategy: developer consensus that yours is the best way to solve some category of problem.

    It’s also why we can have such emotional responses to acquisitions of software products and firms. The guiding ethos that shapes a technology subculture can be vetoed by the new owner, changing basic assumptions that have guided our investments—that is, our hopes for the future.

    Having a fandom is a serious responsibility because it means managing the vibes and health of a community.

    A better world is possible

    Fandoms aren’t set-it-and-forget-it. They require stewardship. So let me tell you about one of the healthiest fandoms in speculative fiction.

    It’s The Expanse.

    Here’s how we know this:

    Some years back, actor Cas Anvar (Alex Kamal) was accused by multiple members of SFF fandom of abusive behavior. An investigation ensued and Anvar was written out of the show.

    On /r/TheExpanse, mods organized immediately to set norms to respond to this crisis. Comments on the controversy were actively cordoned off into designated threads. Participation in these threads was governed by a set of agreements, including compulsory reading of specific accounts from the accusers. As a whole, the subreddit maintained order. In the higher-stakes discussions, clear terms of engagement allowed moderators to shape productive conversation, dismiss bad actors, and prevent toxic flame wars.

    It didn’t devolve into the kinds of vitriolic misogyny that are too common in these situations. Meanwhile, the community was able to reach consensus by talking through the substance of what actually happened, instead of fracturing along chaotic lines of misunderstanding.

    Enabling this was culture established from the very top. The Expanse was written in a way that always respected its women characters. The creatives made deliberate decisions to eschew gratuitous sexual violence as a storytelling mechanism, by contrast to projects like Game of Thrones. And, when trouble emerged within their own ranks, the show’s leadership took it seriously, communicated the investigation to the larger fandom, and took decisive action.

    Fandoms can be scary stuff, but consistent leadership opens the door to norm-setting, orderly discussion of high-energy topics, and a place where people don’t feel excluded or pushed out because of their identities.

    And so, the theory:

    1. You can make more people more likely to be successful with your developer tools
    2. People who you’ve made successful will want to share that success, encouraging others to join the party you started
    3. This creates a positive feedback loop that encourages fandom to form
    4. Productive fans energize your ecosystem and enroll yet more developers if you remain aligned with their success, and maintain community health

    It’s obviously much messier than that. Especially the bit where you need to come up with something that genuinely moves the needle on someone’s individual sense of power.

    But in the game of software automation, power is all around us. We’re saturated by unprecedented opportunity to create impact and leverage, with a ceiling in the billions of customers. It’s never been more possible to package up power and wisdom and share with those who are eager to apply it through code.

    Do with this knowledge what you will.


  • Building a business in 2023 (mine)

    It’s an exciting week for me. I’m launching my new business.

    Over a 20-year career, I’ve cycled continuously between self-employment and having a job. Thing is, I really prefer the self-employment. I like doing things my own way, and subordinating myself to the systems and policies of someone else’s company really stifles my everyday sense of creative joy.

    Still, my origins are economically humble. Building a business out into a sustainable, lucrative engine takes time and capital I haven’t always had. Out of necessity, I’ve had to leave the self-determination of my own work in order to keep the bills paid.

    I’m hoping to break that cycle today.

    I know so much about how software works. I understand not just how it’s built, but how it’s used. I want to apply decades of this understanding to an emerging field: developer experience.

    I’m calling it Antigravity. You can read the pitch if you want, but I’m not here to pitch you. I want to share the remarkable opportunities in 2023 for the solopreneur to punch above their weight.

    Over two decades, I’ve had some success building systems that made me money on the internet. Again, never quite enough, but also never nothing. I’ve never had a better time doing it than I have in the last few months.

    Vast creative markets

    Consider my logo.

    I’m not primarily a designer, but I’ve done plenty of design over my career. Part of how I get away with it is knowing my own limitations. For example, I cannot draw or illustrate for shit.

    In the past, this has limited my ability to get as deep as I want to go with logo and identity. Mostly I stuck with tasteful but bland logotypes. But not anymore.

    I didn’t have the budget for a design firm, but I do have a membership to The Noun Project, which hosts a vast array of vector art covering every conceivable subject, across a multitude of creative styles.

    I had a vague concept in mind for my logo, refined over many iterations. Then I found it: the flying saucer of my dreams. It was exactly what I was looking for. The right level of detail, the stylistic crispness that made it feel almost like a blueprint.

    I had my logo.

    Much love to Brad Avison for this lovely illustration. Check out more of his iconography here.

    Meanwhile, this model has permeated other creative work. For my launch video, I knew I’d need good music. But I was worried: historically, royalty-free music sounds like shit. It turns out, though, that we live in a new age. On Artlist, I found yet another vast library, offering music and sound effects that sounded great.

    My thanks to Ian Post, whose track New World proved just the vibe my intro video needed.

    Like Noun Project, Artlist is all you can eat. Subscribe for a year, get whatever you want, use it as needed. They even provide documentation of your perpetual sync license, for your legal records, right in your download page.

    I really hope creatives are getting a good overall deal from these platforms, because for me, they’ve proven transformative to how I represent myself online.

    Adobe on-tap

    I used to pirate the shit out of Adobe, as a kid. I learned Photoshop, After Effects, Premiere, and others, just for the fun of the creativity.

    You know what? Adobe got a great deal out of this loss-leader. Today I pay them monthly for access to Creative Cloud.

    I hate to say this—I really don’t love Adobe—but the CC subscription is a phenomenal deal. For a flat fee you get access to their entire catalog of creative tools, plus some bonuses (a lite version of Cinema4D lurks within After Effects, for example).

    But even better, you get access to Adobe’s font library, and it too is vast. You can use the typography locally, in all of your native apps, and you can set up web fonts to use on your site. This addition is savvy: it creates the case to maintain your subscription even if your usage is light. Now, past a certain point, it probably makes sense to just buy licenses for the type you want.

    Still, being able to audition so much typography out in a production environment is powerful. Moreover, Adobe handles all the tedium of making these things load fast around the world via their CDN infrastructure.

    The result is a distinctive web presence that can go toe-to-toe typographically with anyone else on the web. Having used Google’s free font service for years, I find there’s really no comparison. Adobe’s library is the premium stuff and it shows.

    Wild advances in creative hardware

    Ten years ago, I learned by accident just how powerful it is to have a video artifact representing me online. The opportunity showed up just at the right moment, as I was freelancing unexpectedly. It helped a lot.

    Fun fact: I have a film degree.

    It’s nothing fancy, sold out of a Florida diploma mill. But I spent more than a year getting my hands dirty across the full spectrum of film production tools. I learned the basics of film lighting, camera optics, audio strategies, editing and other post-production tasks.

    It left me a talented amateur, with a solid map of what I need to know about making things look right in front of a camera. I can create my own video artifacts.

    So when I tell you we live in an age of videography miracles, please believe me. A mirrorless camera and a decent lens can get you incredible 4K footage for about $1,000. Now, you don’t need all that data. But what you get with 4K is the ability to punch into a shot and still have it look crystal clear. It’s like having an extra camera position for every shot, for free.

    Capturing footage requires only a cheap SD card, with a nice specimen from SanDisk storing two hours for just $24.

    Even better, the lighting kits have gotten so small and affordable! For $250, you can get a couple of lights with a full spectrum of color temperatures, an even broader array of RGB colors, and incredible brightness without heat. No worrying about gels, no endless inventory of scrims to limit brightness. Just a couple of dials to get exactly the lighting you need.

    Here’s the thing: I remember lighting, I remember some terminology. I knew enough to know what to look for. The internet did the rest, with more video tutorials on how to solve these problems than I could possibly watch.

    And my M2 MacBook Air handled all of it beautifully. Editing was fast, rendering footage was fast. I can’t believe working with 4K footage can be so easy with everyday hardware.

    Videography is an amateur’s paradise. I can’t imagine how the rest of the industry has evolved since I first learned this stuff.

    And more magic besides

    I’ve already talked about how much I enjoy using Svelte, and of course Antigravity uses a similar strategy for its site. The incredible power of open source to amplify your ambitions is a big part of why I’m launching a DX consultancy! I want more of this.

    Using the GPT-4 LLM-based pattern synthesizer to explore marketing topics, debug problems with code, and otherwise work out a path forward, has been super helpful as well. A completely novel tool I didn’t expect.

    Speaking of AI, Adobe also has a weird “AI”-based audio processing tool. It sounds pretty cool, at first, but eventually becomes uncanny. I tried it out for my Antigravity intro video but didn’t end up using it.

    It’s an exciting time to try to build something new. While the internet and its various paradigms have largely matured, the resulting fertile platforms have left me more creatively effective than I’ve ever been.

    So, wish me luck. And if you know any seed-stage through Series A startups, send them my way!


  • Twitter, the real-time consensus engine

    As Twitter shambles steadily toward its grave, I’ve been thinking about its role on the internet and our culture broadly.

    Why did so many disparate groups—politicians, journalists, activists, the posturing wealthy—all agree it was worth so much time and energy? Why did companies spend so much money not just on marketing for Twitter, but also on staffing to monitor and respond to it?

    It’s not because Twitter was the largest social media platform—it wasn’t. It’s not because Twitter drives large amounts of traffic—it doesn’t.

    Instead, for the last decade, Twitter has occupied a unique role: it is an engine for synthesizing and proliferating consensus, in real-time, across every conceivable topic of political and cultural concern. In this, it was a unique form of popular power, with each voice having potential standing in a given beef. Litigants were limited only by their ability to argue a case and attract champions to it.

    This had both its virtues and its downsides. Still, we’d never seen anything quite like it. In the aftermath of its capture by a wealthy, preening dipshit, we may never again.

    The Twitter cycle

    Like other media, Twitter was subject to cyclical behavior. Unlike other media, these cycles could unspool at breakneck speed, at any time of the week.

    Broadly, the cycle comprised these phases:

    • Real-time event: A thing has happened. An event transpired, a statement given, a position taken. The grain of sand that makes the pearl has landed within Twitter’s maw.
    • Analysis and interpretation: What does this event mean? What are the consequences? Who has been implicated? Why does it matter? What larger context is necessary to understand it? In Twitter’s deeply factionalized landscape, interpretations could be broad.
    • Consensus participation: Eventually a particular interpretation reaches consensus. Traveling along the viral paths of an influential few with broad audiences, a given point of view becomes definitive within certain groups. The factions thus persuaded join the fray, contributing their agreement, or refinement. Others, either contrarian by nature, or outside the factions with consensus, may start their own interpretation, and yet another sub-cycle toward consensus may kick off.
    • Exhaustion: The horse, long-dead, is now well and truly beaten. The cycle concludes, added to the annals of Twitter, waiting to be cited as precedent in the analysis of a future event. Though the matter is settled, lasting schisms may result from the fracas, the fabric of friendships and alliances torn or rent entirely.

    For example: Bean Dad

    In the earliest days of 2021, musician John Roderick shared a window on his parenting approach. In the process, he became caught up in the consensus engine, becoming Twitter’s main character for several days, until an attack on the US Capitol Building by far-right extremists reset the cycle.

    Event:‌ In a lengthy thread, Roderick explained how he tried to instill a sense of independence in his young daughter by offering the most minimal of assistance in the workflow of decanting some beans.

    Interpretation: Maybe this is not ideal parenting. Maybe his daughter should receive more support.

    Consensus: It is irrefutable: Bean Dad is a fuck. He has no place in polite society. It may be that his daughter is being starved instead of receiving the timely nourishment she needs to survive.

    Exhaustion: Roderick deleted his account. His music was removed from a podcast that used it for an intro. Days later, mainstream media reported what you could learn in real-time just by keeping an eye on Twitter.

    What the fuck?

    Yeah, it really did work like that.

    Politics is the negotiation and application of power. Twitter was a political venue in every sense of the word. Not just electoral politics, I stress here, but the everyday politics of industry, culture, even gender and sexuality. Twitter was a forum for the litigation of beefs.

    You’ll see a lot of people roll their eyes at this. Dismissiveness of Twitter, though, often correlates to a sense of overall larger political power.

    In our world, not everyone has a good deal. Not everyone has the basic dignity they deserve. In short, their beefs are legitimate. What was transformational about Twitter was the ability to bring your legitimate beef to larger attention, and in doing so, potentially change the behavior of those with more power.

    The powerful loved Twitter. They had a shocking pattern of admissions against interest, feeding the above cycle. Twitter was a direct pipe to the minds of the wealthy, stripping past the careful handling of publicists and handlers. Rich guys could say some truly stupid shit, and everyone could see it!

    I caution you against heeding the narrative of Twitter as an “outrage machine.” It’s not that this is strictly untrue, rather that it strips nuance.

    We live in a time of ongoing humiliation and pain for so many people. Power is a lopsided and often cruel equation. There is much to be legitimately outraged about. To respond with indifference would violate the basic empathy of many observers, and so outrage was a frequent consensus outcome.

    The cybernetic rush of consensus

    Nevertheless, I will not argue that Twitter was unalloyed good. Twitter could be a drug, and some of the most strident interactions may be tinged with individual traumas and biases. This invalidates neither the beefs nor their litigants, but it does create opportunities for behavior more destructive than constructive. I speak here from a place of both experience and some sheepishness.

    I was, at one point, a small-time Twitter outrage warlord.

    There is a rush of strange power open to any Twitter user of even modest following. In the algorithmic timeline era, much of this power has been tempered by invisible weighting and preference.

    But in the early days, it was simple: if there was some fuckshit, and you could name it, and describe its error persuasively, you could provoke a dialogue.

    In my case, when I arrived in Silicon Valley and in my tech career, I found myself promptly alienated from my peers. Their values were not my values, and in a case of legitimate beef, there was serious exclusion baked into the industry’s structures.

    After a lifetime in love with technology, this was where I had to make my career?

    It was a frustration that quickly simmered into rage, and on Twitter, I found a pool of similar resentments. It was possible to raise a real stink against the powerful architects of these exclusionary systems.

    When these things took on a viral spread, it was a power that few humans had ever had before. I mean, you needed to own a newspaper to shape consensus on this level in any other age. Seeing the numbers tick up, seeing others agree and amplify… I enjoyed it at the time. I’m not proud to say it, but I’m telling you this because I think it’s instructive.

    Having a following doesn’t mean you’re financially solvent, but it does open certain doors. As only Nixon could go to China, I went to GitHub as it undertook a multi-year project to rehabilitate its image in the wake of an all-too-characteristic workplace scandal.

    Such is the power of consensus.

    Six months after I joined, someone close to me asked if I wanted to spend the rest of my life angry. If I wanted to spend my energies in the project of stoking rage in the hearts of others, indefinitely. They asked if this was the best use of my gifts. In an instant I knew I needed a different path.

    The years since have been a long journey away from the power of anger. There’s no doubt it works. There’s no doubt you can attract quite an audience when you show rage toward the wicked and unjust. But I think the spiritual cost isn’t worth the reward.

    This is not to discount the role of anger. We should be outraged when we see people doing wrong, when we see people hurt, when we see corrupt power enact abuse.

    But I must also look to what comes after anger. I think we need hope, care, and laughter as much as we need the activating flame of anger to break free of our cages.

    But Twitter was more than outrage amplification

    I don’t want to fall into the trap of tarring Twitter with the all-too-common rage brush. It certainly brewed plenty of anger, but it was also a space of connection, community, and even laughter.

    By contrast to its early contemporaries, Twitter used a subscription model akin to RSS. Symmetrical links between users were optional. As a result, we could be observers to vast communities we would otherwise have no exposure to.

    It was an opportunity for education and enlightenment on a variety of subjects. Consensus wasn’t always an angry or contentious process. Indeed, during the Covid crisis, finding emerging medical consensus through legitimate practitioners was powerful, filling many gaps left by a mainstream media that was slow to keep up.

    As global events unfolded, participating in the Twitter consensus left you better-informed than those stuck hours or days behind consuming legacy media. If you could find honest brokers to follow, nothing else came close in terms of speed and quality of information (though this, at points, could become exhausting and demoralizing).

    Participation in consensus, too, could build relationships. People have found love on Twitter, and I’ve had multiple life-changing friendships emerge through this medium. Hell, GitHub wasn’t even the first job I got thanks to Twitter.

    In short, Twitter was a unique way not just to understand the ground truth of any situation, but also to build connection with those who shared your needs, convictions and interests.

    I would make different choices than I had in the past, perhaps, but I still look upon Twitter as an incredible education. I’m a better, more knowledgeable, less shitty person thanks to all I learned in 14 years of participation.

    What’s with all the past-tense? Twitter.com still loads, my guy

    I mean, so do Tumblr, LiveJournal, Digg. Social products are simultaneously fragile and resilient. It doesn’t take much to knock them out of health, but they can keep scraping along, zombie-like, long after the magic pours out of them.

    This, I think, is Twitter’s fate. Interfering with legitimate journalism, cutting off public service API access, offering global censorship on behalf of local political beef—none of this is good news for Twitter, and this is just from the last couple weeks.

    Twitter is entering its terminal phase now. I’m sure it will live on for years more, but its power is waning as it is mismanaged.

    Will anything ever take its place?

    Mastodon is perhaps Twitter’s most obvious heir. It has picked up millions of accounts since Twitter’s disastrous acquisition. But certain design decisions limit Mastodon’s ability to meet Twitter’s category of power. For one thing, there is no global search of Mastodon content, and no global pool of trending topics. Moreover, while Twitter was an irresistible lure to the powerful, they have yet to join Mastodon.

    I’m personally hopeful about the Fediverse, and the open, decentralized potential it suggests. A real-time social medium that’s easy to integrate into anything on the web, and which no one person owns, is exciting. But it’s early days yet. This space is still anyone’s to take.

    There are other entrants: BlueSky, which spun off from Twitter and promises an open platform, and Post News, which makes me skeptical.

    It’s entirely possible that, as Twitter dies, so will its unique power. Entirely, and forever. The dynamics that gave people popular power may be deliberately designed out of future platforms.

    But as Twitter wanes, know what we’re losing. It’s incredible that this existed, incredible that it was possible. Is it possible to design a replacement that emphasizes its virtues without its downsides? I’m not sure.

    I am a little sad, though, for this loss, warts and all.


  • The science fiction of Apple Computer

    For the so-called “Apple Faithful,” 1996 was a gloomy year. Semiconductor executive Gil Amelio was driving the company to the grave. The next-generation operating system project that would rescue the Mac from its decrepit software infrastructure had stalled and been cancelled, with the company hoping to buy its way out of the problem.

    In the press, this “beleaguered” Apple was always described as on the brink of death, despite a passionate customer base and a unique cultural position. People loved the Mac, and identified with Apple even as its marketshare shrank.

    Of course, we know what came next. Apple acquired NeXT, and a “consultant” Steve Jobs orchestrated a board coup that ousted Amelio and most of Apple’s directors. There followed a turnaround unlike anything else in technology history, building the kernel of a business now worth $2.6 trillion.

    Much has been said about the basic business hygiene that made this turnaround possible. Led by Jobs, Apple leadership slashed the complex product line to four quadrants across two axes: desktop-portable and consumer-professional. They got piled-up inventory down from months to hours. They cleaned up the retail channel and went online with a web-based build-to-order strategy.

    Of course, they integrated NeXTSTEP, NeXT’s BSD-based, next-generation operating system, whose descendant underpinnings and developer tools live on to this day in every Mac, iPhone, and Apple Watch.

    But none of these tasks alone were sufficient to turn Apple’s fortunes around. The halo of innovation that Apple earned put winds in its cultural sails, creating a sense of potential that drove both adoption and its stock price.

    But what does “innovation” mean in practice?

    Apple rides the leading edge of miniaturization, abstraction and convenience

    If you look at the sweep from the first iMac to the latest M2 chips, Apple has been an aggressive early-adopter of technologies that address familiar problems with smaller and more convenient solutions.

    Apple’s turnaround was situated during a dramatic moment in hardware technology evolution, where things were simultaneously shrinking and increasing in power. What was casually termed innovation was Apple’s relentless application of emerging technologies to genuine consumer needs and frustrations.

    The USB revolution

    Before USB, I/O in personal computing was miserable.

    A babel of ports and specifications addressed different categories of product. Input devices needed one form of connector, printers another, external disks yet another.

    These connectors were mostly low-bandwidth and finicky, often requiring the computer be powered down entirely merely to connect or disconnect them. Perhaps the worst offender was SCSI, a high-bandwidth interface for disks and scanners, packed with rules the user had to learn and debug, like individual device addresses and “termination” for the daisy-chains forced by what was usually a single port per computer. You could have only a handful of SCSI devices on a chain, and some of that number was gobbled up by internal hardware.

    Originally conceived by a consortium of Wintel stakeholders, the USB standard emerged quietly to fix all of this just as Apple entered the worst of its death throes. Initial adoption was limited.

    With the release of the first iMac, in 1998, Apple broke with precedent, making USB the computer’s exclusive peripheral interface. By contrast to previous I/O on the Mac, USB was a revelation in convenience: devices were hot-pluggable, no shutdown required. The ports and connectors were compact, yet offered 12 Mbit/s connections, supporting everything from keyboards to scanners to external drives. Devices beyond keyboards could even draw enough energy from USB to skip an extra power cable. Best of all, USB hubs allowed endless expansion, up to a staggering 127 devices.

    The iMac was friendly and fresh in its design, but its everyday experience was also a stark departure in ease of use, thanks in part to the simplicity of its peripheral story. Mac OS was retooled to make drivers easy to load as needed, eventually attempting to grab them from the web if they were missing on disk.

    Meanwhile, in an unprecedented example of cross-platform compatibility, Apple’s embrace of USB created a compelling opportunity for peripheral manufacturers to target both Mac and PC users with a single SKU, reducing manufacturing and inventory complexity. In Jobs’s 1999 Macworld New York introduction of the iBook, he claimed that USB peripheral offerings had grown 10x, from just 25 devices at the iMac’s launch, to over 250 under a year later. Seeing the rush of consumer excitement for the iMac, manufacturers were happy to join the fray, offering hip, translucent complements to this strange computer that anyone could use.

    Today, USB is ubiquitous. Every office has a drawer full of USB flash drives and cables. Desperate to make a mark, with nothing to lose, Apple went all-in on the next generation of peripheral technology, and won the day.

    AirPort and WiFi

    In that iBook intro, Steve’s showmanship finds a dramatic flourish.

    Rather than telling the audience about his “one more thing,” he showed it to them, forcing them to draw their own conclusions as they made sense of the impossible.

    Loading a variety of web pages, Jobs did what is now commonplace: he lifted the iBook from its demonstration table, carried it elsewhere, and continued clicking around in his browser, session uninterrupted, no cables in sight.

    The audience roared.

    This magic, he explained, was the fruit of a partnership with Lucent, and the commercialization of an open networking standard, 802.11. Jobs assured us this was a technology fast heating up, but we didn’t have to wait. We could have it now, with iBook and AirPort.

    Accompanying the small network card that iBook used to join networks, Apple also provided the UFO-like AirPort base station, which interfaced with either an ethernet network or your dialup ISP.

    Inflation-adjusted, joining the WiFi revolution cost about $700, including the base station and a single interface card. Not to mention a new computer that could host this hardware.

    Nevertheless, this was a revolution. No longer tethered to a desk, you could explore the internet and communicate around the world anywhere in your house, without running cables you’d later trip over.

    More than anything else Apple would do until 2007, the early adoption of WiFi was science fiction shit.

    Miniaturized storage and the iPod

    By 1999, Apple was firmly out of the woods. Profitable quarters were regular, sales were healthy, and their industrial design and marketing prowess had firmly gripped culture at large.

    But it would be the iPod that cemented Apple’s transition out of its “beleaguered” era and into its seemingly endless rise.

    While iPod was a classic convergence of Apple taste and power, the technical underpinnings that made it possible were hidden entirely from the people who used it every day.

    The earliest iPods used a shockingly small but complete hard drive, allowing them to pack vast amounts of music into a pocket-sized device. Before USB 2.0, Apple used FireWire, with its 400 Mbit/s throughput, to transfer MP3s rapidly from computer to pocket.

    Whereas a portable music collection had once been a bulky accordion file packed with hour-long CDs, now even a vast collection was reduced to the size of a pack of playing cards. Navigation of that collection was breezy and fully automated, a convenient database allowing you to explore from several dimensions, from songs to artists, albums to genres. You were no longer a janitor of your discs.

    Instead of fumbling, you’d glide your thumb over a pleasing control surface, diving effortlessly through as much music as you ever cared to listen to.

    iPod was far from the first MP3 player. But most of them had far less capacity. iPod was not the first hard drive-based MP3 player, either. But its competitors were far bulkier, and constrained by the narrow pipes of USB 1.1, much slower besides.

    Once again, at the first opportunity to press novel technologies into service, Apple packaged them up, sold them at good margins, and made a ton of money.

    Multitouch and the iPhone

    For most of us, our first exposure to gestural operating systems came from Minority Report, as Tom Cruise’s cop-turned-fugitive, John Anderton, explored vast media libraries in his digital pre-crime fortress.

    The technology, though fanciful, was under development in reality as well. A startup called FingerWorks, launched by academics, was puttering around trying to make a gestural keyboard. It looked weird and did not find commercial traction.

    Nevertheless, they were exploring the edge of something pivotal, casting about though they were in the darkness. In 2005, the company was acquired by Apple in a secretive transaction.

    Two years later, this gestural, multi-touch interface was reborn as the iPhone interface.

    By contrast to existing portable computers, the iPhone’s gesture recognition was fluid and effortless. Devices by Palm, along with their Windows-based competition, all relied on resistive touch screens that could detect only a single point of input: the firm stab of a pointy stylus. Even at this, they often struggled to pinpoint the precise location of the tap, and the devices were a study in frustration and cursing.

    The iPhone, meanwhile, used capacitance to detect touches, and could sense quite a few of them at once. As skin changed the electrical properties of the screen surface, the iPhone could track multiple fingers and use their motion relative to one another to draw conclusions about user intent.

    Thus, like any pre-crime operative, we could pinch, stretch and glide our way through a new frontier of computing.

    Minority Report depicted a futuristic society, with a computing interface that seemed impossibly distant. Apple delivered it to our pockets, for a mere $500, just five years later.

    The future of the futuristic

    Apple has continued this pattern, even as the potential for dramatic changes has stabilized. The shape of computers has remained largely constant since 2010, as has the appearance of our phones.

    Nevertheless, the Apple Watch is a wildly miniaturized suite of sensors, computing power, and even cellular transceivers. It is a Star Trek comm badge and biometric sensor we can buy today.

    AirPods do so much auditory processing, even noise canceling, in an impossibly small package.

    And Apple’s custom silicon, now driving even desktop and portable Macs, provides surprising performance despite low power consumption and minimal heat.

    We are no longer lurching from one style of interface abstraction to the next, as we did from 1998 through 2007, trading CRTs for LCDs, hardware keyboards for multitouch screens. Still, Apple seems to be maintaining its stance of early adoption, applying the cutting edge of technology to products you can buy from their website today.

    As rumors swirl about the coming augmented reality headset Apple has been cooking up, it will be interesting to see which science fiction activities they casually drop into our laps.

    The challenge is formidable. Doing this well requires significant energy, significant graphics processing, and two dense displays, one for each eye. Existing players have given us clunky things that feel less futuristic than they do uncomfortable and tedious: firm anchors to the present moment in their poorly managed constraints.

    But for Apple, tomorrow’s technology is today’s margins. I wonder what they’re going to sell us next.


  • The modern web and Redeem Tomorrow's new site

    Redeem Tomorrow has a new site. Please let me know if you have any trouble with it.

    I had a few goals in rebuilding the site over the last few weeks:

    • High availability without admin burden
    • Power to shape the product
    • Tidiness and credibility
    • Approximate feature parity with Ghost

    I was thrilled to hit all of these marks. In the process, I’ve been struck by just how much the web has evolved over the last 20 years. So here’s a walkthrough of the new powers you have for building a modern site, and how much fun it is.

    In the beginning

    WordPress is the farm tractor of the internet.

    Nothing comes close to its versatility and overall power. But it was architected for a different time. In the beginning of the last cycle, we still thought comments sections were a universally good idea, for example.

    Pursuing the nimbleness of a lively, reactive machine, WordPress is built to constantly respond to user activity, updating the contents of the website when appropriate.

    To do this, WordPress needs to run continuously, demanding a computing environment that provides things like a database, web server, and runtime for executing PHP code. Reasonable enough.

    The wrinkle is this: for most websites, most of that infrastructure is sitting idle most of the time. It swings to life when an admin logs in to write and publish a post, then sits quietly again, serving pages when requested.

    Nevertheless, you pay for the whole month, idle or not, when you host a web application built in the WordPress style. It’s not that much money. DigitalOcean will sell you a perfectly reasonable host for $6 a month that can do the job. Still, at scale, that’s not the best use of resources.

    Technologists hate inefficiency.

    Enter the static site generator

    Meanwhile, storage is cheap. Serving static content to the web can happen in volume. There are efficiencies to be found in bulk hosting.

    Under the static site paradigm, you use time-boxed, live computing on the few occasions where it’s actually needed: when you’re authoring new content to be inserted into your website. On these occasions, the raw contents of your site are converted, according to various templating rules, into real, styled HTML that anyone can use.

    Even better, because your content has been simplified into the raw basics of static files, it can now be served in parallel without any particular effort on your part. Services like Netlify offer workflows with transient computing power to generate your site, then distribute it around the world through Content Delivery Networks (CDNs). With a CDN, a network of hosts around the world all maintain copies of your content.

    With your content on a CDN, your visitors experience less delay between making a request and your site fulfilling it. Even better, through the parallel design of a CDN, it’s much less likely that a spike in traffic will knock your site offline. A drawback with WordPress was that a surge in popularity could overwhelm your computing resources, with a $6 virtual server suddenly handling traffic better suited to a $100 machine.

    When a CDN backs your content, you don’t have this category of problem. Of course, it was always possible to bolt this kind of performance onto a WordPress installation, but this required special attention and specialist knowledge.

    In the realm of static site generation, you don’t need to know anything about that sort of thing. The hosting provider can abstract it away for you.

    Having fun with SvelteKit

    The issue I’ve had with static site frameworks is this:

    The ergonomics never sat right for me. Like a pair of pants stitched with some subtle but fundamental asymmetry, I could never become comfortable or catch a stride. Either the templating approach was annoying, or the abstractions were too opaque to have fun with.

    Then, in my last months at Glitch, I saw mention of SvelteKit, which had just hit 1.0. I was working with some of the smartest people in web development, so I asked: “What do you think?”

    The glowing praise I heard convinced me to plow headlong into learning, and I’ve been having a blast ever since. Here’s how Svelte works:

    <script>
    	//You can code what you want inside the `script` tag. This is also where you define input arguments that bring your components to life.
    
    	export let aValue;
    </script>
    
    <div>
    	<!-- Use whatever script values you want, just wrap them in curly braces and otherwise write HTML -->
    	Here is some content: {aValue}
    </div>

    Components and Layouts can encapsulate their own scriptable functionality. Just write JavaScript.

    If you need more sophisticated behavior—to construct pagination, for example—you can write dedicated JS files that support your content.

    This stuff can get as complex as you like. Make API requests, even expose your own API endpoints. Best of all, for me, is that SvelteKit’s routing is completely intuitive. Make a directory structure, put files into it, and then your site will have the same structure.
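    For instance, here’s a rough sketch of a custom, paginated JSON endpoint. This is illustrative rather than my production code: the /api/posts.json route and the getPosts helper are hypothetical names, but the +server.js convention and the json helper are standard SvelteKit.

    // src/routes/api/posts.json/+server.js: the file path is the URL
    import { json } from '@sveltejs/kit';
    import { getPosts } from '$lib/posts'; // hypothetical helper that gathers posts
    
    export async function GET({ url }) {
    	const page = Number(url.searchParams.get('page') ?? '1');
    	const pageSize = 10;
    	const posts = await getPosts();
    	const start = (page - 1) * pageSize;
    
    	// Return one page of posts as JSON
    	return json({
    		page,
    		total: posts.length,
    		posts: posts.slice(start, start + pageSize)
    	});
    }

    Drop that file into the routes directory and you have an API; delete it and it’s gone. That’s the whole mental model.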

    But what’s really badass is that all of it can be designed to generate static content. I don’t really understand or even care about the many layers of weird shit going on to support JS web development, and with SvelteKit, I don’t have to. The abstractions are tidy and self-contained enough that I can see results before becoming an expert.
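    To give a sense of how little ceremony that takes: in my understanding, pointing SvelteKit at static output is roughly two tiny files. A sketch, assuming the @sveltejs/adapter-static package (your adapter choice may differ; Netlify has its own adapter as well).

    // svelte.config.js: tell SvelteKit to build plain static files
    import adapter from '@sveltejs/adapter-static';
    
    export default {
    	kit: {
    		adapter: adapter()
    	}
    };

    Then one flag in the root layout module asks SvelteKit to prerender every page it can reach at build time:

    // src/routes/+layout.js
    export const prerender = true;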

    All I have to do is make a fresh commit to my site’s GitHub repo, and Netlify will update my site with the new content. Whatever automated tasks I want to do during this phase, I can. I feel like I have all the power of what we’d call “full-stack” applications, for my blogging purposes, but with none of the costs or maintenance downsides.

    Sounds neat, but what can you do with it in practice?

    OpenGraph preview cards

    For one thing, every post on this site now provides a unique OpenGraph card, based on its title and excerpt. When you share a post with a friend or your feed, a preview appears that reflects the contents of the link.

    To accomplish this, my SvelteKit app:

    • Loads font assets into memory
    • Transforms an HTML component into vector graphics
    • Renders that SVG into a PNG
    • Plugs the appropriate information into OpenGraph and Twitter meta tags.

    All of this on a fully automated basis, running as part of the generator process. I don’t need to do anything but write and push to GitHub. Special thanks to Geoff Rich, whose excellent walkthrough explained the technology and got me started on the right path.
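    If you’re curious what that looks like in code, here’s a compressed sketch of the shape such an endpoint can take, following my reading of that approach. The package names (satori, satori-html, @resvg/resvg-js) match the tools Geoff describes, but the route, font path, and getPost helper here are illustrative assumptions, not my exact setup.

    // src/routes/og/[slug].png/+server.js: hypothetical route, generated during the build
    import { readFile } from 'node:fs/promises';
    import satori from 'satori';
    import { html } from 'satori-html';
    import { Resvg } from '@resvg/resvg-js';
    import { getPost } from '$lib/posts'; // hypothetical helper returning { title, excerpt }
    
    export const prerender = true;
    
    export async function GET({ params }) {
    	const post = await getPost(params.slug);
    	const fontData = await readFile('static/fonts/Inter-Regular.ttf'); // hypothetical path
    
    	// 1. Describe the card as plain HTML, and let satori turn it into an SVG
    	const markup = html`
    		<div style="display: flex; flex-direction: column; width: 1200px; height: 630px; padding: 64px; background: #1b1b2f; color: white;">
    			<h1 style="font-size: 64px;">${post.title}</h1>
    			<p style="font-size: 32px;">${post.excerpt}</p>
    		</div>`;
    
    	const svg = await satori(markup, {
    		width: 1200,
    		height: 630,
    		fonts: [{ name: 'Inter', data: fontData, weight: 400, style: 'normal' }]
    	});
    
    	// 2. Rasterize the SVG into a PNG and serve it
    	const png = new Resvg(svg).render().asPng();
    
    	return new Response(png, {
    		headers: { 'Content-Type': 'image/png' }
    	});
    }

    Because the endpoint can be prerendered, the card PNGs are just static files by the time the CDN serves them; the matching og:image and twitter:image meta tags go in each page’s <svelte:head>.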

    Converting Markdown to HTML and RSS

    As standard with static site generators, I write my posts in Markdown, including metadata like title and date at the top.

    From this, I can convert everything into the fully-realized site you’re reading right now. In this I owe another debt, to Josh Collinsworth, whose excellent walkthrough of building static sites with SvelteKit provides the foundation for my new home on the web. It explained everything I needed to know and do, and I wouldn’t have escaped the straitjacket of Ghost without this help.
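    The heart of that setup, as I understand Collinsworth’s pattern, is surprisingly small: mdsvex teaches the build to import .md files as Svelte components with their frontmatter attached, and a helper gathers them all up at build time. A sketch, where the /src/posts path and the metadata field names are just examples:

    // src/lib/posts.js: collect every post, plus its frontmatter, at build time
    export async function getPosts() {
    	// Vite's import.meta.glob eagerly imports each markdown module
    	const modules = import.meta.glob('/src/posts/*.md', { eager: true });
    
    	const posts = Object.entries(modules).map(([path, module]) => ({
    		slug: path.split('/').pop().replace('.md', ''),
    		...module.metadata // title, date, excerpt from the frontmatter
    	}));
    
    	// Newest first
    	return posts.sort((a, b) => new Date(b.date) - new Date(a.date));
    }

    The RSS feed then becomes just another +server.js endpoint that maps this same array into XML.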

    Access a treasure trove of resources

    I’ve found icon libraries, a date formatter, and even a component for building the newsletter subscription modal that pops up when you click the “subscribe” link.

    Svelte’s community has put serious love into the ecosystem, so there’s always a helping hand to keep your project moving.

    The power to build a product

    I’ve been building digital products for 20 years, but never got to the fluency I wanted on the web.

    This feels different. I feel like I can imagine the thing I want, and then make it happen, and then hand it to you.

    For example, I always wanted to do link-blogging posts on my Ghost site, but Ghost was quite rigid about how it structured posts.

    Now, I can imagine exactly how I want a link post to behave, down to its presentation and URL structure, and then build that. I can even decide I want to offer you two versions of the RSS feed: with and without link posts. It’s all here and working.

    But despite this power, I don’t have to worry about a server falling over. Someone else holds the pager.

    I feel like I’ve been struck by lightning. I feel like I’m in love. Contentment and giddiness all at once.

    Anyway, call it a 1.0. Hit me up if you see any bugs.


  • Replicate's Cog project is a watershed moment for AI

    I’ve argued that Pattern Synthesis (“AI”, LLMs, ML, etc) will be a defining paradigm of the next technology cycle. Patterns are central to everything we as humans do, and a mechanism that accelerates their creation could have impact on work, culture and communications on the same order as the microprocessor itself.

    If my assertion here is true, one would expect to see improving developer tools for this technology: abstractions that make it easy to grab Pattern Synthesizers and apply them to whatever problem space you may have.

    Indeed, you can pay the dominant AI companies for access to their APIs, but this is the most superficial participation in the Pattern Synthesis revolution. You do not fully control the technology that your business relies upon, and you therefore find yourself at the whims of a platform vendor who may change pricing, policies, model behaviors, or other factors out from beneath you.

    A future where Pattern Synthesis is a dominant technical paradigm is one where the models themselves are malleable, first-class development targets, ready for experimentation and weekend tinkering.

    That’s where the Cog project comes in.

    Interlude: the basics of DX in Pattern Synthesis

    The Developer Experience of Pattern Synthesis, as most currently understand it, involves making requests to an API.

    This is a well-worn abstraction, used today for everything from accepting payments to orchestration of cloud infrastructure. You create a payload of instructions, transmit it to an endpoint, and then a response gives your application what it needs to proceed.
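    In concrete terms, the whole developer experience often reduces to one HTTP call. The endpoint, credential, and fields below are purely hypothetical stand-ins for whatever vendor you’ve chosen; the point is the shape of the interaction, not the specifics.

    // A sketch of the vendor-API workflow: build a payload, send it, use the response
    const response = await fetch('https://api.example-vendor.com/v1/generate', {
    	method: 'POST',
    	headers: {
    		'Content-Type': 'application/json',
    		'Authorization': `Bearer ${process.env.VENDOR_API_KEY}` // hypothetical credential
    	},
    	body: JSON.stringify({
    		model: 'some-hosted-model',
    		prompt: 'Summarize this support ticket…'
    	})
    });
    
    const result = await response.json();
    // Your application proceeds with whatever pattern the vendor's model synthesized
    console.log(result.output);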

    Through convenient API abstractions, it’s easy to begin building a product with Pattern Synthesis components. But you will be fully bound by the model configuration of someone else. If their model can do what you need, great.

    But in the long term, deep business value will come from owning the core technology that creates results for your customers. Reselling a widget anyone else can buy off the shelf doesn’t leave you with much of a moat. Instead, the successful companies of the coming cycle will develop and improve their own models.

    Components of the Pattern Synthesis value chain

    Diagram of the stack described below

    In order to provide the fruits of a Pattern Synthesis Engine to your customers, you’ll be interacting with these components:

    1. Compute substrate: Information is physical. If you want to transform data, you’re going to need physical hardware somewhere that does the job. This is a bunch of silicon that stores the data and does massive amounts of parallelized computation. This stuff can be expensive, with Nvidia’s A100 GPU costing $10k just to get started.
    2. Host environment: Next, you’re going to need an operating system that can facilitate the interaction between your Pattern Synthesis model and the underlying compute substrate. A host environment does all of the boring, behind-the-scenes stuff that makes a modern computer run, including management of files and networking, along with hosting runtimes that leverage the hardware to accomplish work.
    3. Model: Here we arrive at the Pattern Synthesizer itself. A model takes inputs and uses a stored pile of associations it has been “trained” with to transform that input into a given pattern. Models are diverse in their applications, able to transform sounds into text, text into images, classify image contents, and plenty more. This is where the magic happens, but as we can see, there are significant dependencies before you can even get started interacting with the model.
    4. Interface: Finally, an interface connects to these lower layers in order to provide inputs to the model and report its synthesized output. This starts as an API, but this is usually wrapped in some sort of GUI, like a webpage.

    This is the status quo, and it’s not unique to “AI” work, either. You can swap the “model” with “application” and find this architecture describes the bulk of how the existing web works.

    As a result, existing approaches to web architecture have lessons to offer the developer experience for those building Pattern Synthesis models.

    Containerization

    In computing, one constant bugbear is coupling. Practitioners loathe tightly coupled systems, because such coupling can slow down future progress. If one half of a tightly coupled system becomes obsolete, the other half is weighed down until it can be cut free.

    A common coupling pattern in web development exists between the application and its host environment. This can become expensive, as every time the application needs to be hosted anew, the environment must then be set up from scratch to support its various dependencies. Runtimes and package managers are common culprits here, but the dependency details can be endless and surprising.

    This limits portability, acting as a brake on scaling.

    The solution to this problem is containerization. With containers, an entire host environment can be snapshotted and captured into fully portable files. Docker, and its Docker Engine runtime, is among the most well-known tools for the job.

    Docker Engine provides a further abstraction between the host environment and its underlying compute resources, allowing containers that run on it to be flexible and independent of specific hardware and operating systems.

    There’s a lot of devil in these details. There’s no magic here, just hard work to support this flexibility.

    But when it works, it allows complex systems to be hoisted into operation across a multitude of operating systems on a fully-automated basis. Get Docker Engine running on your machine, issue a few terminal commands, and you’ve got a new application running wherever you want it.

    With Cog, Replicate said: “cool, let’s do that with ML models.”

    Replicate’s innovation

    Replicate provides hosting for myriad machine learning models. Pay them money, and you can have any model in their ecosystem, or one you trained yourself, available through the metered faucet of their API.

    Diagram of the same stack, with Docker absorbing the model layer

    To support this business model, Replicate interposes Docker into the existing value chain. Rather than figure out the specifics of how to make your ML model work in a particular hosting arrangement, you package it into a container using Cog:

    No more CUDA hell. Cog knows which CUDA/cuDNN/PyTorch/Tensorflow/Python combos are compatible and will set it all up correctly for you.

    Define the inputs and outputs for your model with standard Python. Then, Cog generates an OpenAPI schema and validates the inputs and outputs with Pydantic.

    Thus, through containerization, the arcane knowledge of matching models with the appropriate dependencies for a given hardware setup can be scaled on a fully automated basis.

    The model is then fully portable, making it easy to host with Replicate.

    But not just with Replicate. Over the weekend I got Ubuntu installed on my game PC, laden as it is with a high-end—if consumer-grade—GPU, the RTX 4090. Once I figured out how to get Docker Engine installed and running, I installed Cog and then it was trivial to load models from Replicate’s directory and start churning out results.

    There was nothing to debug. Compared to other forays I’ve made into locally-hosted models, where setup was often error-prone and complex, this was so easy.

    The only delay came in downloading multi-gigabyte model dependencies when they were missing from my machine. I could try out dozens of different models without any friction at all. As promised, if I wanted to host the model as a local service, I just started it up like any other Docker container, a JSON API instantly at the ready.

    All of this through one-liner terminal commands. Incredible.
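    To make that concrete: once a Cog container is up, any language that can speak HTTP can use the model. This sketch assumes the container is exposing Cog’s prediction API on its default port 5000; the input fields depend entirely on which model you’re running.

    // Call a locally running Cog container from Node
    const response = await fetch('http://localhost:5000/predictions', {
    	method: 'POST',
    	headers: { 'Content-Type': 'application/json' },
    	body: JSON.stringify({
    		input: {
    			prompt: 'a watercolor painting of a lighthouse at dusk' // model-specific, illustrative
    		}
    	})
    });
    
    const prediction = await response.json();
    console.log(prediction.status, prediction.output);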

    The confluence of Pattern Synthesis and open source

    I view this as a watershed moment in the new cycle.

    By making it so easy to package and exchange models, and giving away the underlying strategy as open source, Replicate has lowered the barriers to entry for tinkerers and explorers in this space. The history of open source and experimentation is clear.

    When the costs become low enough to fuck around with a new technology, this shifts the adoption economics of that technology. Replicate has opened the door for the next stage of developer adoption within the paradigm of Pattern Synthesis. What was once the exclusive domain of researchers and specialists can now shift into a more general—if still quite technical—audience. The incentives for participation in this ecosystem are huge—it’s a growth opportunity in the new cycle—and through Cog, it’s now so much easier to play around.

    More than that, the fundamental portability of models that Cog enables changes the approaches we can take as developers. I can see this leading to a future where it’s more common to host your models locally, or with greater isolation, enabling new categories of “AI” product with more stringent and transparent privacy guarantees for their customers.

    There remain obstacles. The costs of training certain categories of model can be astronomical. LLMs, for example, require millions of dollars to capitalize. Still, as the underlying encapsulation of the model becomes more amenable to sharing and collaboration, I will be curious to see how cooperative approaches from previous technology cycles evolve to meet the moment.

    Keep an eye on this space. It’s going to get interesting and complicated from here.


  • Succession is a civics lesson

    The typical American K-12 education is a civics lobotomy.

    Students are taught a simplistic version of American history that constantly centers the country as heroic, just and wise. America, we’re told, always overcomes its hardest challenges and worst impulses. In my 90’s education on the Civil Rights era, for example, we were told that Rosa Parks and MLK solved American racism once and for all. The latter’s assassination in this rendition was inconvenient and sad, but his sacrifice, we were meant to understand, paved the way for a more just and decent future.

    Similar simplicity was given to the workings of political power in the United States. Why, there’s a whole bicameral legislature, three branches of federal government, checks and balances, and a noble revolutionary history that gave birth to it all.

    Nowhere in this depiction was any time given to explaining lobbyists or campaign finance. Same for gerrymandering. These were yet more inconvenient details swept outside the scope of our lessons.

    I mean no slight on the teachers of America by this criticism. They do their best in a system designed for indoctrination and conformity. I think they’re some of the most important civic actors in our entire system. I’d give them more money, more independence, and more day-to-day support if I could wave a magic wand and make it so.

    Nevertheless, I emerged into adulthood feeling entirely unprepared to understand the civic complexity of this country. The entanglement of economic and political systems is a sort of dark matter that moves and shapes our everyday life, but lurks out of view without prolonged study and meaningful curiosity.

    This was frustrating: the world was obviously broken, and I didn’t have models or vocabulary to explain why. I came of age in the aftermath of the 2008 financial crisis, struggling economically myself, and watching so many of my peers flailing in their quests to find basic stability and prosperity.

    Indeed, Millennials have less wealth than previous generations, holding single-digit percentages of the pie, against 30% of US wealth for Gen X and 50% for the Boomers. Getting by with dignity and economic self-determination is an objectively hard slog.

    HBO’s Succession, now in its fourth and final season, brings a truck-mounted spotlight to the mechanics of inequality. It’s a fast-paced education in how late-capitalist power actually functions: the interactions between wealth, corporations, civic institutions and everyday people.

    The show, shot in a choppy, observational style, insists with every stylistic choice: “this is really how the world works.”

    I wish we’d had it much sooner.

    Elite crisis

    Part of what’s broken in America is the rank incompetence of our leadership. People in power are, in too many cases, just bad at their jobs.

    Before assuming his role as a Trump hatchet man, Jared Kushner bought The New York Observer as a sort of graduation present. His inept tenure there is the stuff of legend. The scion of a wealthy criminal, Kushner used the paper to prosecute personal beefs and, eventually, to endorse Donald Trump’s bid for the presidency.

    This is, indeed, the way the world works. Everything is pay-to-play. If you want something that people value, you can buy it.

    In following the travails of media titan Logan Roy, along with his children and the various toadies and yes-men who support their ailing empire, Succession makes the same point over and over again:

    It’s adjacency to wealth that determines your power, not your fitness to lead or your strategic competence. In an unequal world—63% of post-pandemic wealth went to 1% of humanity—money distorts relationships and decides opportunity.

    Over the show’s trajectory, we see Logan’s striver son-in-law Tom Wambsgans rise from minor functionary to chairman of a network that’s a clear stand-in for Fox News. All along, it’s clear that Tom doesn’t have any particular talents for the roles he’s given, but he is a loyal and trustworthy puppet for Logan.

    Meanwhile, the billionaire Roy children are constantly bumbling through various exercises at proving they’re the equal of their working class-turned-plutocrat father. Frequently out of their depth, they’re inserted relentlessly into positions of authority, with occasionally disastrous results. In rushing the launch of a communications satellite, for example, Kieran Culkin’s Roman Roy—somehow COO of his father’s media conglomerate—ends up destroying a launch vehicle and costing a technician his thumb.

    There’s never enough

    As much as the show is about wealth and power, it is also an exploration of personal and family dysfunction.

    Despite lives of extraordinary wealth and comfort—everyone lives in a state of Manhattan opulence mortals could never imagine—the Roy family carries decades of scars and trauma. They are human beings just like you or me, in this sense: they feel pain, they can be wounded, they carry grief.

    But unlike you or me, acting out their pain and grief lands with billion dollar shockwaves. The money doesn’t make them happy, no, but it does let them amplify their pain into the lives of others.

    So we see people of incredible power who can never have enough. What they need—peace, healing, clarity of self—is something they are unable to buy. What does it mean when flawed, broken human beings have the power to re-shape our world so completely? What does it mean when people have the money to buy their way out of consequences, even for the most dire of fuckups?

    It’s particularly resonant to follow the role and power of a media empire in this moment of our history. What does it mean to never have enough when your role is to inform and educate? What does “winning” mean, and cost, in an attention economy? What are we all losing, so a few rich guys can post a few more points on their personal scoreboards?

    Can we really sustain a few wealthy families using our civic fabric as their personal security blankets, instead of going to therapy?

    Succession wants us to ask these questions, and to imagine the consequences of their answers.

    Succession reassures you: it really IS nuts the world works like this

    Inequality isn’t an abstract statistic.

    Inequality is most people being a few bad months away from homelessness and destitution. It’s the majority of American workers living paycheck-to-paycheck, subject to the whims of their bosses. Inequality is medical bankruptcy for some, and elite recovery suites for others.

    Far from lionizing the wealthy, Succession constantly places the mindset that creates and preserves inequality under the microscope. The show is full of wry humor, quietly laughing at its cast in every episode. At the same time, it does a devastating job at explaining just how dark the situation is. The humor leavens the horror.

    With a tight four-season trajectory, the show’s principals have created something worth your time. There’s not a single wasted “filler” episode. The cast performs at the top of its game. The original score by Nicholas Britell manages to feel fresh yet permanent.

    It’s well-crafted entertainment, yes, but it’s also a window into the parts of our civic reality they didn’t teach you in school. With corporate power challenging and often defeating that of our civic institutions, it’s important to have some personal context for exactly how this all works in practice.

    It’s fiction, but there’s serious insight here. We just watched Elon Musk—son of a guy rich enough to own an emerald mine—set fire to tens of billions of dollars in part because he just desperately wants people to like him and think he’s cool. The real world absolutely works this way.

    The show is worth your time.