Redeem Tomorrow
I used to be excited about the future. Let's bring that back.
  • Building a business in 2023 (mine)

    It’s an exciting week for me. I’m launching my new business.

    Over a 20 year career, I’ve cycled continuously between self-employment and having a job. Thing is, I really prefer the self-employment. I like doing things my own way, and subordinating myself to the systems and policies of someone else’s company really stifles my everyday sense of creative joy.

    Still, my origins are economically humble. Building a business out into a sustainable, lucrative engine takes time and capital I haven’t always had. Out of necessity, I’ve had to leave the self-determination of my own work in order to keep the bills paid.

    I’m hoping to break that cycle today.

    I know so much about how software works. I understand not just how it’s built, but how it’s used. I want to apply decades of this understanding to an emerging field: developer experience.

    I’m calling it Antigravity. You can read the pitch if you want, but I’m not here to pitch you. I want to share the remarkable opportunities in 2023 for the solopreneur to punch above their weight.

    Over two decades, I’ve had some success building systems that made me money on the internet. Again, never quite enough, but also never nothing. I’ve never had a better time doing it than I have in the last few months.

    Vast creative markets

    Consider my logo.

    I’m not primarily a designer, but I’ve done plenty of design over my career. Part of how I get away with it is knowing my own limitations. For example, I cannot draw or illustrate for shit.

    In the past, this has limited my ability to get as deep as I want to go with logo and identity. Mostly I stuck with tasteful but bland logotypes. But not anymore.

    I didn’t have the budget for a design firm, but I do have a membership to The Noun Project, which hosts a vast array of vector art covering every conceivable subject, across a multitude of creative styles.

    I had a vague concept in mind for my logo, refined over many iterations. Then I found it: the flying saucer of my dreams. It was exactly what I was looking for. The right level of detail, the stylistic crispness that made it feel almost like a blueprint.

    I had my logo.

    Much love to Brad Avison for this lovely illustration. Check out more of his iconography here.

    Meanwhile, this model has permeated other creative work. For my launch video, I knew I’d need good music. But I was worried: historically, royalty-free music sounds like shit. It turns out, though, that we live in a new age. On Artlist, I found yet another vast library, offering music and sound effects that sounded great.

    My thanks to Ian Post, whose track New World proved just the vibe my intro video needed.

    Like Noun Project, Artlist is all you can eat. Subscribe for a year, get whatever you want, use it as needed. They even provide documentation of your perpetual sync license, for your legal records, right in your download page.

    I really hope creatives are getting a good overall deal from these platforms, because for me, they’ve proven transformative to how I represent myself online.

    Adobe on-tap

    I used to pirate the shit out of Adobe, as a kid. I learned Photoshop, After Effects, Premiere, and others, just for the fun of the creativity.

    You know what? Adobe got a great deal out of this loss-leader. Today I pay them monthly for access to Creative Cloud.

    I hate to say this—I really don’t love Adobe—but the CC subscription is a phenomenal deal. For a flat fee you get access to their entire catalog of creative tools, plus some bonuses (a lite version of Cinema4D lurks within After Effects, for example).

    But even better, you get access to Adobe’s font library, and it too is vast. You can use the typography locally, in all of your native apps, and you can set up web fonts to use on your site. This addition is savvy: it creates the case to maintain your subscription even if your usage is light. Now, past a certain point, it probably makes sense to just buy licenses for the type you want.

    Still, being able to audition so much typography out in a production environment is powerful. Moreover, Adobe handles all the tedium of making these things load fast around the world via their CDN infrastructure.

    The result is a distinctive web presence that can go toe-to-toe typographically with anyone else on the web. Having used Google’s free font service for years, I find there’s really no comparison. Adobe’s library is the premium stuff and it shows.

    Wild advances in creative hardware

    Ten years ago, I learned by accident just how powerful it is to have a video artifact representing me online. The opportunity showed up just at the right moment, as I was freelancing unexpectedly. It helped a lot.

    Fun fact: I have a film degree.

    It’s nothing fancy, sold out of a Florida diploma mill. But I spent more than a year getting my hands dirty across the full spectrum of film production tools. I learned the basics of film lighting, camera optics, audio strategies, editing and other post-production tasks.

    It left me a talented amateur, with a solid map of what I need to know about making things look right in front of a camera. I can create my own video artifacts.

So when I tell you we live in an age of videography miracles, please believe me. A mirrorless camera and a decent lens can get you incredible 4K footage for about $1,000. Now, you don’t need all that data. But what you get with 4K is the ability to punch into a shot and still have it look crystal clear. It’s like having an extra camera position for every shot, for free.
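That free camera position is just arithmetic. A quick sketch of the punch-in headroom, assuming a 1080p deliverable (the resolutions are standard; the framing is mine):

```python
# 4K capture (3840×2160) delivered at 1080p (1920×1080):
# you can crop to half the frame width and still fill
# every output pixel with a real captured pixel.
capture_w, capture_h = 3840, 2160
deliver_w, deliver_h = 1920, 1080

max_punch_in = capture_w / deliver_w  # lossless crop factor
print(max_punch_in)  # → 2.0
```

In other words, a single locked-off 4K shot yields a clean wide and a clean "close-up" of the same take.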

    Capturing footage requires only a cheap SD card, with a nice specimen from SanDisk storing two hours for just $24.

    Even better, the lighting kits have gotten so small and affordable! For $250, you can get a couple of lights with a full spectrum of color temperatures, an even broader array of RGB colors, and incredible brightness without heat. No worrying about gels, no endless inventory of scrims to limit brightness. Just a couple of dials to get exactly the lighting you need.

    Here’s the thing: I remember lighting, I remember some terminology. I knew enough to know what to look for. The internet did the rest, with more video tutorials on how to solve these problems than I could possibly watch.

    And my M2 MacBook Air handled all of it beautifully. Editing was fast, rendering footage was fast. I can’t believe working with 4K footage can be so easy with everyday hardware.

    Videography is an amateur’s paradise. I can’t imagine how the rest of the industry has evolved since I first learned this stuff.

    And more magic besides

    I’ve already talked about how much I enjoy using Svelte, and of course Antigravity uses a similar strategy for its site. The incredible power of open source to amplify your ambitions is a big part of why I’m launching a DX consultancy! I want more of this.

    Using the GPT-4 LLM-based pattern synthesizer to explore marketing topics, debug problems with code, and otherwise work out a path forward, has been super helpful as well. A completely novel tool I didn’t expect.

    Speaking of AI, Adobe also has a weird “AI”-based audio processing tool. It sounds pretty cool, at first, but eventually becomes uncanny. I tried it out for my Antigravity intro video but didn’t end up using it.

    It’s an exciting time to try to build something new. While the internet and its various paradigms have largely matured, the resulting fertile platforms have left me more creatively effective than I’ve ever been.

    So, wish me luck. And if you know any seed-stage through Series A startups, send them my way!

  • Twitter, the real-time consensus engine

    As Twitter shambles steadily toward its grave, I’ve been thinking about its role on the internet and our culture broadly.

    Why did so many disparate groups—from politicians, journalists, activists and the posturing wealthy—all agree it was worth so much time and energy? Why did companies spend so much money not just on marketing for Twitter, but also for staffing to monitor and respond to it?

    It’s not because Twitter was the largest social media platform—it wasn’t. It’s not because Twitter drives large amounts of traffic—it doesn’t.

    Instead, for the last decade, Twitter has occupied a unique role: it is an engine for synthesizing and proliferating consensus, in real-time, across every conceivable topic of political and cultural concern. In this, it was a unique form of popular power, with each voice having potential standing in a given beef. Litigants were limited only by their ability to argue a case and attract champions to it.

    This had both its virtues and its downsides. Still, we’d never seen anything quite like it. In the aftermath of its capture by a wealthy, preening dipshit, we may never again.

    The Twitter cycle

    Like other media, Twitter was subject to cyclical behavior. Unlike other media, these cycles could unspool at breakneck speed, at any time of the week.

    Broadly, the cycle comprised these phases:

    • Real-time event: A thing has happened. An event transpired, a statement given, a position taken. The grain of sand that makes the pearl has landed within Twitter’s maw.
    • Analysis and interpretation: What does this event mean? What are the consequences? Who has been implicated? Why does it matter? What larger context is necessary to understand it? In Twitter’s deeply factionalized landscape, interpretations could be broad.
    • Consensus participation: Eventually a particular interpretation reaches consensus. Traveling along the viral paths of an influential few with broad audiences, a given point of view becomes definitive within certain groups. The factions thus persuaded join the fray, contributing their agreement, or refinement. Others, either contrarian by nature, or outside the factions with consensus, may start their own interpretation, and yet another sub-cycle toward consensus may kick off.
    • Exhaustion: The horse, long-dead, is now well and truly beaten. The cycle concludes, added to the annals of Twitter, waiting to be cited as precedent in the analysis of a future event. Though settled, lasting schisms may result from the fracas, the fabric of friendships and alliances torn or rent entirely.

    For example: Bean Dad

    In the earliest days of 2021, musician John Roderick shared a window on his parenting approach. In the process, he became caught up in the consensus engine, becoming Twitter’s main character for several days, until an attack on the US Capitol Building by far-right extremists reset the cycle.

    Event:‌ In a lengthy thread, Roderick explained how he tried to instill a sense of independence in his young daughter by offering the most minimal of assistance in the workflow of decanting some beans.

    Interpretation: Maybe this is not ideal parenting. Maybe his daughter should receive more support.

    Consensus: It is irrefutable: Bean Dad is a fuck. He has no place in polite society. It may be that his daughter is being starved instead of timely receiving the nourishment she needs to survive.

    Exhaustion: Roderick deleted his account. His music was removed from a podcast that used it for an intro. Days later, mainstream media reported what you could learn in real-time just by keeping an eye on Twitter.

    What the fuck?

    Yeah, it really did work like that.

    Politics is the negotiation and application of power. Twitter was a political venue in every sense of the word. Not just electoral politics, I stress here, but the everyday politics of industry, culture, even gender and sexuality. Twitter was a forum for the litigation of beefs.

    You’ll see a lot of people roll their eyes at this. Dismissiveness of Twitter, though, often correlates to a sense of overall larger political power.

    In our world, not everyone has a good deal. Not everyone has the basic dignity they deserve. In short, their beefs are legitimate. What was transformational about Twitter was the ability to bring your legitimate beef to larger attention, and in doing so, potentially change the behavior of those with more power.

    The powerful loved Twitter. They had a shocking pattern of admissions against interest, feeding the above cycle. Twitter was a direct pipe to the minds of the wealthy, stripping past the careful handling of publicists and handlers. Rich guys could say some truly stupid shit, and everyone could see it!

    I caution you against heeding the narrative of Twitter as an “outrage machine.” It’s not that this is strictly untrue, rather that it strips nuance.

    We live in a time of ongoing humiliation and pain for so many people. Power is a lopsided and often cruel equation. There is much to be legitimately outraged about. To respond with indifference would violate the basic empathy of many observers, and so outrage was a frequent consensus outcome.

    The cybernetic rush of consensus

    Nevertheless, I will not argue that Twitter was unalloyed good. Twitter could be a drug, and some of the most strident interactions may be tinged with individual traumas and biases. This invalidates neither the beefs nor their litigants, but it does create opportunities for behavior more destructive than constructive. I speak here from a place of both experience and some sheepishness.

    I was, at one point, a small-time Twitter outrage warlord.

    There is a rush of strange power open to any Twitter user of even modest following. In the algorithmic timeline era, much of this power has been tempered by invisible weighting and preference.

    But in the early days, it was simple: if there was some fuckshit, and you could name it, and describe its error persuasively, you could provoke a dialogue.

    In my case, as I’d arrived in Silicon Valley, and my tech career, I found myself promptly alienated from my peers. Their values were not my values, and in a case of legitimate beef, there was serious exclusion baked into the industry’s structures.

    After a lifetime in love with technology, this was where I had to make my career?

    It was a frustration that simmered quickly into rage, and in Twitter, I found a pool of similar-minded resentments. It was possible to raise a real stink against the powerful architects of these exclusionary systems.

    When these things took on a viral spread, it was a power that few humans had ever had before. I mean, you needed to own a newspaper to shape consensus on this level in any other age. Seeing the numbers tick up, seeing others agree and amplify… I enjoyed it at the time. I’m not proud to say it, but I’m telling you this because I think it’s instructive.

    Having a following doesn’t mean you’re financially solvent, but it does open certain doors. As only Nixon could go to China, I went to GitHub as it undertook a multi-year project to rehabilitate its image in the wake of an all-too-characteristic workplace scandal.

    Such is the power of consensus.

    Six months after I joined, someone close to me asked if I wanted to spend the rest of my life angry. If I wanted to spend my energies in the project of stoking rage in the hearts of others, indefinitely. They asked if this was the best use of my gifts. In an instant I knew I needed a different path.

    The years since have been a long journey away from the power of anger. There’s no doubt it works. There’s no doubt you can attract quite an audience when you show rage toward the wicked and unjust. But I think the spiritual cost isn’t worth the reward.

    This is not to discount the role of anger. We should be outraged when we see people doing wrong, when we see people hurt, when we see corrupt power enact abuse.

    But I must also look to what comes after anger. I think we need hope, care, and laughter as much as we need the activating flame of anger to break free of our cages.

    But Twitter was more than outrage amplification

    I don’t want to fall into the trap of tarring Twitter with the all-too-common rage brush. It certainly brewed plenty of anger, but it was also a space of connection, community, and even laughter.

    By contrast to its early contemporaries, Twitter used a subscription model akin to RSS. Symmetrical links between users were optional. As a result, we could be observers to vast communities we would otherwise have no exposure to.

    It was an opportunity for education and enlightenment on a variety of subjects. Consensus wasn’t always an angry or contentious process. Indeed, during the Covid crisis, finding emerging medical consensus through legitimate practitioners was powerful, filling many gaps left by a mainstream media that was slow to keep up.

    As global events unfolded, participating in the Twitter consensus left you better-informed than those stuck hours or days behind consuming legacy media. If you could find honest brokers to follow, nothing else came close in terms of speed and quality of information (though this, at points, could become exhausting and demoralizing).

    Participation in consensus, too, could build relationships. People have found love on Twitter, and I’ve had multiple life-changing friendships emerge through this medium. Hell, GitHub wasn’t even the first job I got thanks to Twitter.

    In short, Twitter was a unique way not just to understand the ground truth of any situation, but also to build connection with those who shared your needs, convictions and interests.

    I would make different choices than I had in the past, perhaps, but I still look upon Twitter as an incredible education. I’m a better, more knowledgeable, less shitty person thanks to all I learned in 14 years of participation.

    What’s with all the past-tense? Twitter still loads, my guy

    I mean, so do Tumblr, LiveJournal, Digg. Social products are simultaneously fragile and resilient. It doesn’t take much to knock them out of health, but they can keep scraping along, zombie-like, long after the magic pours out of them.

    This, I think, is Twitter’s fate. Interfering with legitimate journalism, cutting off public service API access, offering global censorship on behalf of local political beef—none of this is good news for Twitter, and this is just from the last couple weeks.

    Twitter is entering its terminal phase now. I’m sure it will live on for years more, but its power is waning as it is mismanaged.

    Will anything ever take its place?

    Mastodon is perhaps Twitter’s most obvious heir. It has picked up millions of accounts since Twitter’s disastrous acquisition. But certain design decisions limit Mastodon’s ability to meet Twitter’s category of power. For one thing, there is no global search of Mastodon content, and no global pool of trending topics. Moreover, while Twitter was an irresistible lure to the powerful, they have yet to join Mastodon.

    I’m personally hopeful about the Fediverse, and the open, decentralized potential it suggests. A real-time social medium that’s easy to integrate into anything on the web, and which no one person owns, is exciting. But it’s early days yet. Anyone could take this.

    There are other entrants. BlueSky, which spun off from Twitter and promises an open platform. Post News, which makes me skeptical.

    It’s entirely possible that, as Twitter dies, so will its unique power. Entirely, and forever. The dynamics that gave people popular power may be deliberately designed against in future platforms.

    But as Twitter wanes, know what we’re losing. It’s incredible that this existed, incredible that it was possible. Is it possible to design a replacement that emphasizes its virtues without its downsides? I’m not sure.

    I am a little sad, though, for this loss, warts and all.

  • The science fiction of Apple Computer

    For the so-called “Apple Faithful,” 1996 was a gloomy year. Semiconductor executive Gil Amelio was driving the company to the grave. The next-generation operating system project that would rescue the Mac from its decrepit software infrastructure had stalled and been cancelled, with the company hoping to buy its way out of the problem.

    In the press, this “beleaguered” Apple was always described as on the brink of death, despite a passionate customer base and a unique cultural position. People loved the Mac, and identified with Apple even as its marketshare shrank.

    Of course, we know what came next. Apple acquired NeXT, and a “consultant” Steve Jobs orchestrated a board coup that ousted Amelio and most of Apple’s directors. There followed a turnaround unlike anything else in technology history, building the kernel of a business now worth $2.6 trillion.

    Much has been said about the basic business hygiene that made this turnaround possible. Led by Jobs, Apple leadership slashed the complex product line to four quadrants across two axes: desktop-portable and consumer-professional. They got piled-up inventory down from months to hours. They cleaned up the retail channel and went online with a web-based build-to-order strategy.

    Of course, they integrated NeXTSTEP, NeXT’s BSD-based, next-generation operating system, whose descendant underpinnings and developer tools live on to this day in every Mac, iPhone, and Apple Watch.

    But none of these tasks alone was sufficient to turn Apple’s fortunes around. The halo of innovation that Apple earned put wind in its cultural sails, creating a sense of potential that drove both adoption and its stock price.

    But what does “innovation” mean in practice?

    Apple rides the leading edge of miniaturization, abstraction and convenience

    If you look at the sweep from the first iMac to the latest M2 chips, Apple has been an aggressive early-adopter of technologies that address familiar problems with smaller and more convenient solutions.

    Apple’s turnaround was situated during a dramatic moment in hardware technology evolution, where things were simultaneously shrinking and increasing in power. What was casually termed innovation was Apple’s relentless application of emerging technologies to genuine consumer needs and frustrations.

    The USB revolution

    Before USB, I/O in personal computing was miserable.

    A babel of ports and specifications addressed different categories of product. Input devices needed one form of connector, printers another, external disks yet another.

    These connectors were mostly low-bandwidth and finicky, often requiring the computer be powered down entirely merely to connect or disconnect them. Perhaps the worst offender was SCSI, a high-bandwidth interface for disks and scanners, packed with rules the user had to learn and debug: individual device addresses, “termination,” daisy-chaining from what was usually a single port per computer. You could have only a handful of SCSI devices, and some of that number was gobbled up by internals.

    Originally conceived by a consortium of Wintel stakeholders, the USB standard emerged quietly to fix all of this just as Apple entered the worst of its death throes, with limited initial adoption.

    With the release of the first iMac, in 1998, Apple broke with precedent, making USB the computer’s exclusive peripheral interface. By contrast to previous I/O on the Mac, USB was a revelation in convenience: devices were hot-pluggable, no shutdown required. The ports and connectors were compact, yet offered 12 Mbit/s connections, supporting everything from keyboards to scanners to external drives. Devices beyond keyboards could even draw enough energy from USB to skip an extra power cable. Best of all, USB hubs allowed endless expansion, up to a staggering 127 devices.

    Though the iMac was friendly and fresh in its design, its everyday experience was a stark departure in ease-of-use, thanks in part to the simplicity of its peripheral story. Mac OS was retooled to make drivers easy to load as needed, eventually attempting to grab them from the web if they were missing on disk.

    Meanwhile, in an unprecedented example of cross-platform compatibility, Apple’s embrace of USB created a compelling opportunity for peripheral manufacturers to target both Mac and PC users with a single SKU, reducing manufacturing and inventory complexity. In Jobs’s 1999 Macworld New York introduction of the iBook, he claimed that USB peripheral offerings had grown 10x, from just 25 devices at the iMac’s launch, to over 250 under a year later. Seeing the rush of consumer excitement for the iMac, manufacturers were happy to join the fray, offering hip, translucent complements to this strange computer that anyone could use.

    Today, USB is ubiquitous. Every office has a drawer full of USB flash drives and cables. Desperate to make a mark, with nothing to lose, Apple went all-in on the next generation of peripheral technology, and won the day.

    AirPort and WiFi

    In that iBook intro, Steve’s showmanship finds a dramatic flourish.

    Rather than telling the audience about his “one more thing,” he showed it to them, forcing them to draw their own conclusions as they made sense of the impossible.

    Loading a variety of web pages, Jobs did what is now commonplace: he lifted the iBook from its demonstration table, carried it elsewhere, and continued clicking around in his browser, session uninterrupted, no cables in sight.

    The audience roared.

    This magic, he explained, was the fruit of a partnership with Lucent, and the commercialization of an open networking standard, 802.11. Jobs assured us this was a technology fast heating up, but we didn’t have to wait. We could have it now, with iBook and AirPort.

    Accompanying the small network card that iBook used to join networks, Apple also provided the UFO-like AirPort base station, which interfaced with either an ethernet network or your dialup ISP.

    Inflation-adjusted, joining the WiFi revolution cost about $700, including the base station and a single interface card. Not to mention a new computer that could host this hardware.

    Nevertheless, this was a revolution. No longer tethered to a desk, you could explore the internet and communicate around the world anywhere in your house, without running cables you’d later trip over.

    More than anything else Apple would do until 2007, the early adoption of WiFi was science fiction shit.

    Miniaturized storage and the iPod

    By 1999, Apple was firmly out of the woods. Profitable quarters were regular, sales were healthy, and their industrial design and marketing prowess had firmly gripped culture at large.

    But it would be the iPod that cemented Apple’s transition out of its “beleaguered” era and into its seemingly endless rise.

    While iPod was a classic convergence of Apple taste and power, the technical underpinnings that made it possible were hidden entirely from the people who used it everyday.

    The earliest iPods used a shockingly small but complete hard drive, allowing them to pack vast amounts of music into a pocket-sized device. Before USB 2.0, Apple used FireWire, with its 400 Mbit/s throughput, to transfer MP3s rapidly from computer to pocket.

    Whereas a portable music collection had once been a bulky accordion file packed with hour-long CDs, now even a vast collection was reduced to the size of a pack of playing cards. Navigation of that collection was breezy and fully automated, a convenient database allowing you to explore from several dimensions, from songs to artists, albums to genres. You were no longer a janitor of your discs.

    Instead of fumbling, you’d glide your thumb over a pleasing control surface, diving effortlessly through as much music as you ever cared to listen to.

    iPod was far from the first MP3 player. But most of them had far less capacity. iPod was not the first hard drive-based MP3 player, either. But its competitors were far bulkier, and constrained by the narrow pipes of USB 1.1, much slower besides.
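The speed gap was not subtle. Rough best-case arithmetic for syncing a full 5 GB iPod (real-world throughput was lower on both buses, but the ratio held):

```python
# Theoretical best-case time to move 5 GB of MP3s,
# comparing FireWire 400 against USB 1.1 full speed.
library_gbit = 5 * 8  # 5 GB expressed in gigabits

firewire_mbps = 400   # FireWire 400
usb11_mbps = 12       # USB 1.1

firewire_secs = library_gbit * 1000 / firewire_mbps
usb11_secs = library_gbit * 1000 / usb11_mbps

print(round(firewire_secs))    # → 100 seconds: under two minutes
print(round(usb11_secs / 60))  # → 56 minutes: nearly an hour
```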

    Once again, at the first opportunity to press novel technologies into service, Apple packaged them up, sold them at good margins, and made a ton of money.

    Multitouch and the iPhone

    For most of us, our first exposure to gestural operating systems came from Minority Report, as Tom Cruise’s cop-turned-fugitive, John Anderton, explored vast media libraries in his digital pre-crime fortress.

    The technology, though fanciful, was under development in reality as well. A startup called FingerWorks, launched by academics, was puttering around trying to make a gestural keyboard. It looked weird, and the company did not find commercial traction.

    Nevertheless, they were exploring the edge of something pivotal, casting about though they were in the darkness. In 2005, the company was acquired by Apple in a secretive transaction.

    Two years later, this gestural, multi-touch interface was reborn as the iPhone interface.

    By contrast to existing portable computers, the iPhone’s gesture recognition was fluid and effortless. Devices by Palm, along with their Windows-based competition, all relied on resistive touch screens that could detect only a single point of input: the firm stab of a pointy stylus. Even at this, they often struggled to pinpoint the precise location of the tap, and the devices were a study in frustration and cursing.

    The iPhone, meanwhile, used capacitance to detect touches, and could sense quite a few of them at once. As the skin changed the electrical properties of the screen surface, the iPhone could track multiple fingers and use their motion relative to one another to make certain conclusions about user intent.

    Thus, like any pre-crime operative, we could pinch, stretch and glide our way through a new frontier of computing.
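The heart of that intent detection is simple geometry. A toy sketch in Python — illustrative only, not Apple's implementation — of how two tracked touch points yield a pinch-zoom scale:

```python
import math

def pinch_scale(start_a, start_b, now_a, now_b):
    """Given two touch points at gesture start (start_a, start_b)
    and their current positions (now_a, now_b), return the ratio of
    finger spreads: >1 means a stretch (zoom in), <1 a pinch (zoom out)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return dist(now_a, now_b) / dist(start_a, start_b)

# Fingers move apart: spread doubles from 100 units to 200 units.
print(pinch_scale((0, 0), (100, 0), (-50, 0), (150, 0)))  # → 2.0
```

Resistive screens, reporting a single point, could never compute this ratio; capacitive multitouch made it trivial.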

    Minority Report depicted a futuristic society, with a computing interface that seemed impossibly distant. Apple delivered it to our pockets, for a mere $500, just five years later.

    The future of the futuristic

    Apple has continued this pattern, even as the potential for dramatic changes has stabilized. The shape of computers has remained largely constant since 2010, as has the appearance of our phones.

    Nevertheless, the Apple Watch is a wildly miniaturized suite of sensors, computing power, and even cellular transceivers. It is a Star Trek comm badge and biometric sensor we can buy today.

    AirPods do so much auditory processing, even noise canceling, in an impossibly small package.

    And Apple’s custom silicon, now driving even desktop and portable Macs, provides surprising performance despite low power consumption and minimal heat.

    We are no longer lurching from one style of interface abstraction to the next, as we did from 1998 through 2007, trading CRTs for LCDs, hardware keyboards for multitouch screens. Still, Apple seems to be maintaining its stance of early adoption, applying the cutting edge of technology to products you can buy from their website today.

    As rumors swirl about the coming augmented reality headset Apple has been cooking up, it will be interesting to see which science fiction activities they casually drop into our laps.

    The challenge is formidable. Doing this well requires significant energy, significant graphics processing, and two dense displays, one for each eye. Existing players have given us clunky things that feel less futuristic than they do uncomfortable and tedious: firm anchors to the present moment in their poorly managed constraints.

    But for Apple, tomorrow’s technology is today’s margins. I wonder what they’re going to sell us next.

  • The modern web and Redeem Tomorrow's new site

    Redeem Tomorrow has a new site. Please let me know if you have any trouble with it.

    I had a few goals in rebuilding the site over the last few weeks:

    • High availability without admin burden
    • Power to shape the product
    • Tidiness and credibility
    • Approximate feature parity with Ghost

    I was thrilled to hit all of these marks. In the process, I’ve been struck by just how much the web has evolved over the last 20 years. So here’s a walkthrough of the new powers you have for building a modern site, and how much fun it is.

    In the beginning

    WordPress is the farm tractor of the internet.

    Nothing comes close to its versatility and overall power. But it was architected for a different time. In the beginning of the last cycle, we still thought comments sections were a universally good idea, for example.

    Pursuing the nimbleness of a lively, reactive machine, WordPress is built to constantly respond to user activity, updating the contents of the website when appropriate.

    To do this, WordPress needs to run continuously, demanding a computing environment that provides things like a database, a web server, and a runtime for executing PHP code. Reasonable enough.

    The wrinkle is this: for most websites, most of that infrastructure is sitting idle most of the time. It swings to life when an admin logs in to write and publish a post, then sits quietly again, serving pages when requested.

    Nevertheless, you pay for the whole month, idle or not, when you host a web application built in the WordPress style. It’s not that much money. DigitalOcean will sell you a perfectly reasonable host for $6 a month that can do the job. Still, at scale, that’s not the best use of resources.

    Technologists hate inefficiency.

    Enter the static site generator

    Meanwhile, storage is cheap. Serving static content to the web can happen in volume. There are efficiencies to be found in bulk hosting.

    Under the static site paradigm, you use time-boxed, live computing on the few occasions where it’s actually needed: when you’re authoring new content to be inserted into your website. On these occasions, the raw contents of your site are converted, according to various templating rules, into real, styled HTML that anyone can use.
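    The core of the idea can be sketched in a few lines of JavaScript. This is a toy illustration, not any particular generator's API; the template syntax and page shapes here are invented:

    ```javascript
    // Minimal sketch of the static-site idea: run templating once, at
    // build time, so the host only ever serves finished HTML.
    function renderPage(template, page) {
      // Replace {{placeholders}} in the template with page values.
      return template.replace(/\{\{(\w+)\}\}/g, (_, key) => page[key] ?? "");
    }

    const template = "<article><h1>{{title}}</h1><p>{{body}}</p></article>";
    const pages = [
      { slug: "hello", title: "Hello", body: "First post." },
      { slug: "again", title: "Again", body: "Second post." },
    ];

    // The "build" step: every page becomes a plain HTML string, ready to
    // be written to disk and pushed out to a CDN.
    const site = pages.map((p) => ({
      path: `/${p.slug}/index.html`,
      html: renderPage(template, p),
    }));
    ```

    Everything dynamic happens once, at build time; afterward there is nothing left to compute.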

    Even better, because your content has been simplified into the raw basics of static files, it can now be served in parallel without any particular effort on your part. Services like Netlify offer workflows with transient computing power to generate your site, then distribute it around the world through Content Delivery Networks (CDNs). With a CDN, a network of hosts around the world all maintain copies of your content.

    With your content on a CDN, your visitors experience less delay between making a request and your site fulfilling it. Even better, through the parallel design of a CDN, it’s much less likely that a spike in traffic will knock your site offline. A drawback with WordPress was that a surge in popularity could overwhelm your computing resources, with a $6 virtual server suddenly handling traffic better suited to a $100 machine.

    When a CDN backs your content, you don’t have this category of problem. Of course, it was always possible to bolt this kind of performance onto a WordPress installation, but this required special attention and specialist knowledge.

    In the realm of static site generation, you don’t need to know anything about that sort of thing. The hosting provider can abstract it away for you.

    Having fun with SvelteKit

    The issue I’ve had with static site frameworks is this:

    The ergonomics never sat right for me. Like a pair of pants stitched with some subtle but fundamental asymmetry, I could never become comfortable or catch a stride. Either the templating approach was annoying, or the abstractions were too opaque to have fun with.

    Then, in my last months at Glitch, I saw mention of SvelteKit, which had just hit 1.0. I was working with some of the smartest people in web development, so I asked: “What do you think?”

    The glowing praise I heard convinced me to plow headlong into learning, and I’ve been having a blast ever since. Here’s how Svelte works:

    	<script>
    	  // You can code what you want inside the `script` tag. This is also
    	  // where you define input arguments that bring your components to life.
    	  export let aValue;
    	</script>

    	<!-- Use whatever script values you want, just wrap them in curly
    	     braces and otherwise write HTML -->
    	Here is some content: {aValue}

    Components and Layouts can encapsulate their own scriptable functionality. Just write JavaScript.

    If you need more sophisticated behavior—to construct pagination, for example—you can write dedicated JS files that support your content.
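    A pagination helper of that kind might look roughly like this. It's a sketch under invented names, not SvelteKit API:

    ```javascript
    // Hypothetical helper you might keep in a dedicated JS file: chunk a
    // list of posts into fixed-size pages.
    function paginate(posts, perPage) {
      const pages = [];
      for (let i = 0; i < posts.length; i += perPage) {
        pages.push(posts.slice(i, i + perPage));
      }
      return pages;
    }

    // Ten posts, three per page: four pages, the last one partial.
    const pages = paginate([...Array(10).keys()], 3);
    ```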

    This stuff can get as complex as you like. Make API requests, even expose your own API endpoints. Best of all, for me, is that SvelteKit’s routing is completely intuitive. Make a directory structure, put files into it, and then your site will have the same structure.
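    To give a sense of how direct that mapping is, a route tree under SvelteKit 1.0's conventions looks roughly like this (the specific pages are invented):

    ```
    src/routes/
    ├── +layout.svelte        → shared shell for every page
    ├── +page.svelte          → /
    ├── about/
    │   └── +page.svelte      → /about
    └── posts/
        └── [slug]/
            └── +page.svelte  → /posts/hello-world, etc.
    ```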

    But what’s really badass is that all of it can be designed to generate static content. I don’t really understand or even care about the many layers of weird shit going on to support JS web development, and with SvelteKit, I don’t have to. The abstractions are tidy and self-contained enough that I can see results before becoming an expert.

    All I have to do is make a fresh commit to my site’s GitHub repo, and Netlify will update my site with the new content. Whatever automated tasks I want to do during this phase, I can. I feel like I have all the power of what we’d call “full-stack” applications, for my blogging purposes, but with none of the costs or maintenance downsides.

    Sounds neat, but what can you do with it in practice?

    OpenGraph preview cards

    For one thing, every post on this site now provides a unique OpenGraph card, based on its title and excerpt. When you share a post with a friend or your feed, a preview appears that reflects the contents of the link.

    To accomplish this, my SvelteKit app:

    • Loads font assets into memory
    • Transforms an HTML component into vector graphics
    • Renders that SVG into a PNG
    • Plugs the appropriate information into OpenGraph and Twitter meta tags

    All of this on a fully automated basis, running as part of the generator process. I don’t need to do anything but write and push to GitHub. Special thanks to Geoff Rich, whose excellent walkthrough explained the technology and got me started on the right path.
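    The final step of that list is the simplest to show. The SVG-to-PNG rendering is handled by libraries (Geoff Rich's walkthrough covers that pipeline); the tag generation is just string assembly. A sketch, with invented post data:

    ```javascript
    // Turn a post's metadata into the OpenGraph/Twitter tags that go in
    // the page head. The property names are standard; the helper itself
    // is illustrative.
    function ogTags({ title, excerpt, imageUrl }) {
      return [
        `<meta property="og:title" content="${title}" />`,
        `<meta property="og:description" content="${excerpt}" />`,
        `<meta property="og:image" content="${imageUrl}" />`,
        `<meta name="twitter:card" content="summary_large_image" />`,
      ].join("\n");
    }

    const tags = ogTags({
      title: "Hello",
      excerpt: "A first post.",
      imageUrl: "https://example.com/og/hello.png",
    });
    ```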

    Converting Markdown to HTML and RSS

    As standard with static site generators, I write my posts in Markdown, including metadata like title and date at the top.

    From this, I can convert everything into the fully-realized site you’re reading right now. In this I owe another debt, to Josh Collinsworth, whose excellent walkthrough of building static sites with SvelteKit provides the foundation for my new home on the web. It explained everything I needed to know and do, and I wouldn’t have escaped the straitjacket of Ghost without this help.
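    The metadata-at-the-top convention is usually called frontmatter, and real setups lean on a library for it (mdsvex and gray-matter are common choices). But the underlying idea is simple enough to sketch by hand:

    ```javascript
    // Minimal sketch of pulling frontmatter metadata out of a Markdown
    // post: a key/value block between --- fences, then the body.
    function parseFrontmatter(source) {
      const match = /^---\n([\s\S]*?)\n---\n([\s\S]*)$/.exec(source);
      if (!match) return { meta: {}, body: source };
      const meta = {};
      for (const line of match[1].split("\n")) {
        const idx = line.indexOf(":");
        if (idx > -1) meta[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
      }
      return { meta, body: match[2] };
    }

    const post = parseFrontmatter(
      "---\ntitle: Hello\ndate: 2023-04-01\n---\n# Heading\n\nBody text."
    );
    ```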

    Access a treasure trove of resources

    I’ve found icon libraries, a date formatter, and even a component for building the newsletter subscription modal that pops up when you click the “subscribe” link.

    Svelte’s community has put serious love into the ecosystem, so there’s always a helping hand to keep your project moving.

    The power to build a product

    I’ve been building digital products for 20 years, but never got to the fluency I wanted on the web.

    This feels different. I feel like I can imagine the thing I want, and then make it happen, and then hand it to you.

    For example, I always wanted to do link-blogging posts on my Ghost site, but Ghost was quite rigid about how it structured posts.

    Now, I can imagine exactly how I want a link post to behave, down to its presentation and URL structure, and then build that. I can even decide I want to offer you two versions of the RSS feed: with and without link posts. It’s all here and working.
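    The two-feed trick is nothing more than one post list, two filters. A sketch, with invented post shapes:

    ```javascript
    // Hypothetical post records; "type" distinguishes link posts from
    // full articles.
    const posts = [
      { title: "Essay", type: "article" },
      { title: "Neat link", type: "link" },
      { title: "Another essay", type: "article" },
    ];

    // Feed variant A: everything. Variant B: articles only. Each array
    // would then be rendered into its own RSS document.
    const fullFeed = posts;
    const articlesOnly = posts.filter((p) => p.type !== "link");
    ```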

    But despite this power, I don’t have to worry about a server falling over. Someone else holds the pager.

    I feel like I’ve been struck by lightning. I feel like I’m in love. Contentment and giddiness all at once.

    Anyway, call it a 1.0. Hit me up if you see any bugs.

  • Replicate's Cog project is a watershed moment for AI

    I’ve argued that Pattern Synthesis (“AI”, LLMs, ML, etc) will be a defining paradigm of the next technology cycle. Patterns are central to everything we as humans do, and a mechanism that accelerates their creation could have impact on work, culture and communications on the same order as the microprocessor itself.

    If my assertion here is true, one would expect to see improving developer tools for this technology: abstractions that make it easy to grab Pattern Synthesizers and apply them to whatever problem space you may have.

    Indeed, you can pay the dominant AI companies for access to their APIs, but this is the most superficial participation in the Pattern Synthesis revolution. You do not fully control the technology that your business relies upon, and you therefore find yourself at the whims of a platform vendor who may change pricing, policies, model behaviors, or other factors out from beneath you.

    A future where Pattern Synthesis is a dominant technical paradigm is one where the models themselves are malleable, first-class development targets, ready for experimentation and weekend tinkering.

    That’s where the Cog project comes in.

    Interlude: the basics of DX in Pattern Synthesis

    The Developer Experience of Pattern Synthesis, as most currently understand it, involves making requests to an API.

    This is a well-worn abstraction, used today for everything from accepting payments to orchestration of cloud infrastructure. You create a payload of instructions, transmit it to an endpoint, and then a response gives your application what it needs to proceed.

    Through convenient API abstractions, it’s easy to begin building a product with Pattern Synthesis components. But you will be fully bound by the model configuration of someone else. If their model can do what you need, great.

    But in the long term, deep business value will come from owning the core technology that creates results for your customers. Reselling a widget anyone else can buy off the shelf doesn’t leave you with much of a moat. Instead, the successful companies of the coming cycle will develop and improve their own models.

    Components of the Pattern Synthesis value chain

    Diagram of the stack described below

    In order to provide the fruits of a Pattern Synthesis Engine to your customers, you’ll be interacting with these components:

    1. Compute substrate: Information is physical. If you want to transform data, you’re going to need physical hardware somewhere that does the job. This is a bunch of silicon that stores the data and does massive amounts of parallelized computation. This stuff can be expensive, with Nvidia’s A100 GPU costing $10k just to get started.
    2. Host environment: Next, you’re going to need an operating system that can facilitate the interaction between your Pattern Synthesis model and the underlying compute substrate. A host environment does all of the boring, behind-the-scenes stuff that makes a modern computer run, including management of files and networking, along with hosting runtimes that leverage the hardware to accomplish work.
    3. Model: Now we arrive at the Pattern Synthesizer itself. A model takes inputs and uses a stored pile of associations it has been “trained” with to transform that input into a given pattern. Models are diverse in their applications, able to transform sounds into text, text into images, classify image contents, and plenty more. This is where the magic happens, but as we can see, there are significant dependencies before you can even get started interacting with the model.
    4. Interface: Finally, an interface connects to these lower layers in order to provide inputs to the model and report its synthesized output. This starts as an API, but this is usually wrapped in some sort of GUI, like a webpage.

    This is the status quo, and it’s not unique to “AI” work, either. You can swap the “model” with “application” and find this architecture describes the bulk of how the existing web works.

    As a result, existing approaches to web architecture have lessons to offer the developer experience for those building Pattern Synthesis models.


    In computing, one constant bugbear is coupling. Practitioners loathe tightly coupled systems, because such coupling can slow down future progress. If one half of a tightly coupled system becomes obsolete, the other half is weighed down until it can be cut free.

    A common coupling pattern in web development exists between the application and its host environment. This can become expensive, as every time the application needs to be hosted anew, the environment must then be set up from scratch to support its various dependencies. Runtimes and package managers are common culprits here, but the dependency details can be endless and surprising.

    This limits portability, acting as a brake on scaling.

    The solution to this problem is containerization. With containers, an entire host environment can be snapshotted and captured into fully portable files. Docker, and its Docker Engine runtime, is among the most well-known tools for the job.

    Docker Engine provides a further abstraction between the host environment and its underlying compute resources, allowing containers that run on it to be flexible and independent of specific hardware and operating systems.

    There’s a lot of devil in these details. There’s no magic here, just hard work to support this flexibility.

    But when it works, it allows complex systems to be hoisted into operation across a multitude of operating systems on a fully-automated basis. Get Docker Engine running on your machine, issue a few terminal commands, and you’ve got a new application running wherever you want it.

    With Cog, Replicate said: “cool, let’s do that with ML models.”

    Replicate’s innovation

    Replicate provides hosting for myriad machine learning models. Pay them money and you can have any model in their ecosystem, or one you trained yourself, available through the metered faucet of their API.

    Diagram of the same stack, with Docker absorbing the model layer

    To support this business model, Replicate interposes Docker into the existing value chain. Rather than figure out the specifics of how to make your ML model work in a particular hosting arrangement, you package it into a container using Cog:

    No more CUDA hell. Cog knows which CUDA/cuDNN/PyTorch/Tensorflow/Python combos are compatible and will set it all up correctly for you.

    Define the inputs and outputs for your model with standard Python. Then, Cog generates an OpenAPI schema and validates the inputs and outputs with Pydantic.

    Thus, through containerization, the arcane knowledge of matching models with the appropriate dependencies for a given hardware setup can be scaled on an infinitely automated basis.

    The model is then fully portable, making it easy to host with Replicate.

    But not just with Replicate. Over the weekend I got Ubuntu installed on my game PC, laden as it is with a high-end—if consumer-grade—GPU, the RTX 4090. Once I figured out how to get Docker Engine installed and running, I installed Cog and then it was trivial to load models from Replicate’s directory and start churning out results.

    There was nothing to debug. Compared to other forays I’ve made into locally-hosted models, where setup was often error-prone and complex, this was so easy.

    The only delay came in downloading multi-gigabyte model dependencies when they were missing from my machine. I could try out dozens of different models without any friction at all. As promised, if I wanted to host the model as a local service, I just started it up like any other Docker container, a JSON API instantly at the ready.
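    A served Cog container exposes a small HTTP interface: a POST to a /predictions endpoint with a JSON body of named inputs. Calling it from JavaScript looks roughly like this; the port and the input names are assumptions for illustration:

    ```javascript
    // Sketch of calling a locally served Cog model. The endpoint shape
    // (POST /predictions with an "input" object) follows Cog's HTTP API;
    // the host, port, and "prompt" input are invented for this example.
    function buildPredictionRequest(input, host = "http://localhost:5000") {
      return {
        url: `${host}/predictions`,
        options: {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ input }),
        },
      };
    }

    const req = buildPredictionRequest({ prompt: "a watercolor fox" });
    // Then: const res = await fetch(req.url, req.options);
    ```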

    All of this through one-liner terminal commands. Incredible.

    The confluence of Pattern Synthesis and open source

    I view this as a watershed moment in the new cycle.

    By making it so easy to package and exchange models, and giving away the underlying strategy as open source, Replicate has lowered the barriers to entry for tinkerers and explorers in this space. The history of open source and experimentation is clear.

    When the costs become low enough to fuck around with a new technology, this shifts the adoption economics of that technology. Replicate has opened the door for the next stage of developer adoption within the paradigm of Pattern Synthesis. What was once the exclusive domain of researchers and specialists can now shift into a more general—if still quite technical—audience. The incentives for participation in this ecosystem are huge—it’s a growth opportunity in the new cycle—and through Cog, it’s now so much easier to play around.

    More than that, the fundamental portability of models that Cog enables changes the approaches we can take as developers. I can see this leading to a future where it’s more common to host your models locally, or with greater isolation, enabling new categories of “AI” product with more stringent and transparent privacy guarantees for their customers.

    There remain obstacles. The costs of training certain categories of model can be astronomical. LLMs, for example, require millions of dollars to capitalize. Still, as the underlying encapsulation of the model becomes more amenable to sharing and collaboration, I will be curious to see how cooperative approaches from previous technology cycles evolve to meet the moment.

    Keep an eye on this space. It’s going to get interesting and complicated from here.

  • Succession is a civics lesson

    The typical American K-12 education is a civics lobotomy.

    Students are taught a simplistic version of American history that constantly centers the country as heroic, just and wise. America, we’re told, always overcomes its hardest challenges and worst impulses. In my 90’s education on the Civil Rights era, for example, we were told that Rosa Parks and MLK solved American racism once and for all. The latter’s assassination in this rendition was inconvenient and sad, but his sacrifice, we were meant to understand, paved the way for a more just and decent future.

    Similar simplicity was given to the workings of political power in the United States. Why, there’s a whole bicameral legislature, three branches of federal government, checks and balances, and a noble revolutionary history that gave birth to it all.

    Nowhere in this depiction was any time given to explaining lobbyists or campaign finance. Same for gerrymandering. These were yet more inconvenient details swept outside the scope of our lessons.

    I mean no slight on the teachers of America by this criticism. They do their best in a system designed for indoctrination and conformity. I think they’re some of the most important civic actors in our entire system. I’d give them more money, more independence, and more day-to-day support if I could wave a magic wand and make it so.

    Nevertheless, I emerged into adulthood feeling entirely unprepared to understand the civic complexity of this country. The entanglement of economic and political systems is a sort of dark matter that moves and shapes our everyday life, but lurks out of view without prolonged study and meaningful curiosity.

    This was frustrating: the world was obviously broken, and I didn’t have models or vocabulary to explain why. I came of age in the aftermath of the 2008 financial crisis, struggling economically myself, and watching so many of my peers flailing in their quests to find basic stability and prosperity.

    Indeed, Millennials have less wealth compared to previous generations, holding single-digit percentages of the pie, compared to 30% of US wealth for Gen X and 50% for the Boomers. Getting by with dignity and economic self-determination is an objectively hard slog.

    HBO’s Succession, now in its fourth and final season, brings a truck-mounted spotlight to the mechanics of inequality. It’s a fast-paced education in how late-capitalist power actually functions: the interactions between wealth, corporations, civic institutions and everyday people.

    The show, shot in a choppy, observational style, insists with every stylistic choice: “this is really how the world works.”

    I wish we’d had it much sooner.

    Elite crisis

    Part of what’s broken in America is the rank incompetence of our leadership. People in power are, in too many cases, just bad at their jobs.

    Before assuming his role as a Trump hatchet man, Jared Kushner bought The New York Observer as a sort of graduation present. His inept tenure there is the stuff of legend. The scion of a wealthy criminal, Kushner used the paper to prosecute personal beefs and, eventually, to endorse Donald Trump’s bid for the presidency.

    This is, indeed, the way the world works. Everything is pay-to-play. If you want something that people value, you can buy it.

    In following the travails of media titan Logan Roy, along with his children and the various toadies and yes-men who support their ailing empire, Succession makes the same point over and over again:

    It’s adjacency to wealth that determines your power, not your fitness to lead or your strategic competence. In an unequal world—63% of post-pandemic wealth went to 1% of humanity—money distorts relationships and decides opportunity.

    Over the show’s trajectory, we see Logan’s striver son-in-law Tom Wambsgans rise from minor functionary to chairman of a network that’s a clear stand-in for Fox News. All along, it’s clear that Tom doesn’t have any particular talents for the roles he’s given, but he is a loyal and trustworthy puppet for Logan.

    Meanwhile, the billionaire Roy children are constantly bumbling through various exercises at proving they’re the equal of their working class-turned-plutocrat father. Frequently out of their depth, they’re inserted relentlessly into positions of authority, with occasionally disastrous results. In rushing the launch of a communications satellite, for example, Kieran Culkin’s Roman Roy—somehow COO of his father’s media conglomerate—ends up destroying a launch vehicle and costing a technician his thumb.

    There’s never enough

    As much as the show is about wealth and power, it is also an exploration of personal and family dysfunction.

    Despite lives of extraordinary wealth and comfort—everyone lives in a state of Manhattan opulence mortals could never imagine—the Roy family carries decades of scars and trauma. They are human beings just like you or me, in this sense: they feel pain, they can be wounded, they carry grief.

    But unlike you or me, acting out their pain and grief lands with billion dollar shockwaves. The money doesn’t make them happy, no, but it does let them amplify their pain into the lives of others.

    So we see people of incredible power who can never have enough. What they need—peace, healing, clarity of self—is something they are unable to buy. What does it mean when flawed, broken human beings have the power to re-shape our world so completely? What does it mean when people have the money to buy their way out of consequences, even for the most dire of fuckups?

    It’s particularly resonant to follow the role and power of a media empire in this moment of our history. What does it mean to never have enough when your role is to inform and educate? What does “winning” mean, and cost, in an attention economy? What are we all losing, so a few rich guys can post a few more points on their personal scoreboards?

    Can we really sustain a few wealthy families using our civic fabric as their personal security blankets, instead of going to therapy?

    Succession wants us to ask these questions, and to imagine the consequences of their answers.

    Succession reassures you: it really IS nuts the world works like this

    Inequality isn’t an abstract statistic.

    Inequality is most people being a few bad months away from homelessness and destitution. It’s the majority of American workers living paycheck-to-paycheck, subject to the whims of their bosses. Inequality is medical bankruptcy for some, and elite recovery suites for others.

    Far from lionizing the wealthy, Succession constantly places the mindset that creates and preserves inequality under the microscope. The show is full of wry humor, quietly laughing at its cast in every episode. At the same time, it does a devastating job at explaining just how dark the situation is. The humor leavens the horror.

    With a tight four-season trajectory, the show’s principals have created something worth your time. There’s not a single wasted “filler” episode. The cast performs at the top of its game. The original score by Nicholas Britell manages to feel fresh yet permanent.

    It’s well-crafted entertainment, yes, but it’s also a window into the parts of our civic reality they didn’t teach you in school. With corporate power challenging and often defeating that of our civic institutions, it’s important to have some personal context for exactly how this all works in practice.

    It’s fiction, but there’s serious insight here. We just watched Elon Musk—son of a guy rich enough to own an emerald mine—set fire to tens of billions of dollars in part because he just desperately wants people to like him and think he’s cool. The real world absolutely works this way.

    The show is worth your time.

  • A major difference between LLMs and cryptocurrencies

    It can be hard to discern between what’s real and what’s bullshit hype, especially as charlatans in one space pack up for greener grifting pastures.

    But as Evan Prodromou notes:

    A major difference between LLMs and cryptocurrencies is this:

    For cryptocurrencies to be valuable to me, I need them to be valuable to you, too.

    If you don’t believe in crypto, the value of my crypto goes down.

    This isn’t the case for LLMs. I need enough people to be interested in LLMs that ChatGPT stays available, but other than that, your disinterest in it is only a minor nuisance.

    In a market, I benefit from your refusal to use it. AI-enhanced me is competing with plain ol’ you.

    This is the complexity of the moment with pattern synthesis engines (“AI”, LLMs, etc). Regardless of how our personal feelings interact with the space, it is already creating outcomes for people.

    Those outcomes may not always be what we want. One reddit thread from a game artist finds that visual pattern synthesizers are ruining what made the job fun:

    My Job is different now since Midjourney v5 came out last week. I am not an artist anymore, nor a 3D artist. Rn all I do is prompting, photoshopping and implementing good looking pictures. The reason I went to be a 3D artist in the first place is gone. I wanted to create form In 3D space, sculpt, create. With my own creativity. With my own hands.

    This is classic “degraded labor,” as described by Braverman in Labor and Monopoly Capital:

    Automation and industrialization—through atomizing tasks and encoding them into the behaviors of machinery, assembly lines and industrial processes—eliminate the process of craft from everyday labor. Instead of discovery, discernment and imagination, workers are merely executing a pre-defined process. This makes such laborers more interchangeable—you don’t need to pay for expertise, merely trainability and obedience.

    So, we can feel how we want about pattern synthesizers, but they already create outcomes for those who deploy them.

    Clearly the labor implications are massive. My hope is that for all the degradation of everyday work this may usher in, it also expands the scope and impact for individuals and small teams operating on their own imagination and initiative.

  • What if Bill Gates is right about AI?

    Say what we will about Bill Gates, the man created one of the most enduring and formidable organisms in the history of computing. By market cap, Microsoft is worth two trillion dollars almost 50 years after its founding in 1975.

    Gates, who has seen his share of paradigm shifts, writes:

    The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.

    Decades on, we’re still watching the microprocessor super cycle play out. Even as we’ve hit a wall in the raw speed of desktop computers, every year we push more boundaries in power efficiency, as in silicon optimized for mobile, and in parallelization, as in GPUs. The stuff we can do for realtime graphics and portable computing would have been flatly impossible a decade ago.

    Every technological advancement in the subsequent paradigm shifts Gates describes depends on this one field.

    The personal computer, meanwhile, has stabilized as a paradigm—they’ve all looked more or less the same the last ten years—but remains the bedrock for how people in every field of endeavor solve problems, communicate and create.

    In short, Gates is arguing that AI is a permanent transformation of humanity’s relationship not just to technology, but to each other and our drives to mold the world around us.

    Look, I’m not the guy who mistakes wealth for trustworthiness. I’m here for any critique of Bill Gates and his initiatives you may want to argue. But on the particular subject of paradigm shifts, he has the credibility of having navigated them well enough to amass significant power.

    So as an exercise, let’s grant his premise for a moment. Let’s treat him as an expert witness on paradigm shifts. What would it mean if he was right that this is a fundamental new paradigm? What can we learn about the shape of AI’s path based on the analogies of previous epochs?

    Internet evolution

    The internet had its first cultural moment in the 90’s, but that took decades of preparation. It was a technology for extreme specialists in academia and the defense establishment.

    And it just wasn’t that good for most people.

    Most use of the internet required on-premises access to one of a small handful of high-bandwidth links, each of which required expensive infrastructure. Failing that, you could use a phone connection to reach it, but you were constrained to painfully narrow bandwidth.

    For most of the 80’s, 9600 bits per second was the absolute fastest you could connect to any remote service. And once you did, it’s not like there was even that much there. Email, perhaps, a few file servers, usenet.

    By 1991, speeds crept up to 14.4 kilobits per second. A moderate improvement, but a 14.4 modem took several minutes to download even the simplest of images, to say nothing of full multimedia. It just wasn’t feasible. You could do text comfortably, and everything else was a slog.

    America Online, Compuserve and other online services were early midwives to the internet revolution, selling metered access over the telephone. Filling the content gap, they provided pre-web, service-specific news, culture, finance, and sports outlets, along with basic social features like chat and email.

    Dealing with the narrow pipe of a 14.4 modem was a challenge, so they turned the meter off for certain tasks, like downloading graphics assets for a particular content area. This task could take as long as half an hour.

    In short, the early experience of the “internet” was shit.

    Despite this, these early internet services were magic. It was addictive. A complete revolution and compulsion. The possibility of new friends, new conversations, new ideas that would have been absolutely impossible to access in the pre-internet world. The potential far outstripped the limitations. People happily paid an inflation-adjusted $6 hourly for the fun and stimulation of this experience.

    Interlude: the web, a shift within the shift

    Upon this substrate of connections, communication and community, the web was born.

    What was revolutionary about the web itself was its fundamental malleability. A simple language of tags and plaintext could be transformed, by the browser, into a fully realized hypermedia document. It could connect to other documents, and anyone with an internet connection and a browser could look at it.

    Instead of brokering a deal with a service like America Online to show your content to a narrow slice of the internet, you could author an experience that anyone could access. The web changed the costs of reaching people through the internet.

    So, yes, businesses fully impossible without the internet and the web could be built. But everyday people could learn the basics of HTML and represent themselves as well. It was an explosion of culture and expression like we’d never seen before.

    And again: the technology, by today’s standards, was terrible. But we loved it.

    Bandwidth and the accelerating pace of the internet revolution

    Two factors shaped the bandwidth constraints of the early consumer internet. The telephone network itself has a theoretical maximum of audio information it can carry. It was designed for lo-fi voice conversations more than a century ago.

    But even within that narrow headroom, modems left a lot on the table. As the market for them grew, technology miniaturized, error-correction mechanisms improved, and modems eked out more and more gains, quadrupling in speed between 1991 and 1998.

    Meanwhile, high-volume infrastructure investments made it possible to offer unlimited access. Stay online all night if you wanted to.

    As a consequence, the media capabilities of the internet began to expand. Lots of porno, of course, but also streaming audio and video of every persuasion. A medium of text chat and news posts was evolving into the full-featured organ of culture we know today.

    Of course, we know what came next: AOL and its ilk all withered as their technology was itself disrupted by the growing influence of the web and the incredible bandwidth of new broadband technologies.

    Today we do things that would be absolutely impossible at the beginning of the consumer internet: 4K video streaming, realtime multiplayer gaming, downloads of multi-gigabyte software packages. While the underlying, protocol-level pipes of the internet remain a work in progress, the consumer experience has matured.

    But maturation is different from stagnation. Few of us can imagine a world without the internet. It remains indispensable.

    What if we put that whole internet in your pocket?

    I’ve said plenty about mobile already, but let’s explore the subjective experience of its evolution.

    Back in 2007, I tagged along with a videographer friend to a shoot at a rap producer’s studio. Between shots, I compared notes with this producer on how we were each enjoying our iPhones. He thought it was cool, but it was also just a toy to him.

    It hadn’t earned his respect alongside all his racks of formidable tech.

    It was a reasonable position. The iPhone’s user experience was revolutionary in its clarity compared to the crummy phone software of the day. Its 2007 introduction is a time capsule of just how unsatisfied the average person was with the phone they grudgingly carried everywhere.

    Yet, objectively, the 2007 iPhone was the worst version Apple ever sold. Like the early internet, it was shit: no App Store, no enterprise device-management features so you could use it at work, tortoise-slow cellular data, English-only UI.

    It didn’t even have GPS.

    But look what happened next:

    • 2008: App Store, 3G cellular data, GPS, support for Microsoft Exchange email, basic enterprise configuration
    • 2010: High-density display, front-facing camera for video calling, no more carrier exclusivity
    • 2011: 1080p video recording, no more paying per-message for texting, WiFi hotspot, broad international language support
    • 2012: LTE cellular data

    In just five years, the iPhone went from a neat curiosity with serious potential to an indispensable tool with formidable capabilities. Navigation, multimedia, gaming, high-bandwidth video calls on cellular—it could do it all. Entire categories of gadget, from the camcorder to the GPS device, were subsumed into a single purchase.

    None of this was magic. It was good ROI. While Apple enjoyed a brief lead, other mobile device manufacturers wanted a slice of the market as well. Consumers badly wanted the internet in their pocket. Demand incentivized investment, iteration and refinement of these technologies.

    Which brings us to Pattern Synthesis Engines

    I think AI is a misnomer, and causes distraction by anthropomorphizing this new technology. I prefer Pattern Synthesis Engine.

    Right now, the PSE is a toy. A strange curiosity.

    When creating images, it has struggled to render fingers and teeth. In chat, it frequently bluffs and bullshits. The interfaces we have for accessing it are crude and brittle—ChatGPT in particular has become an exercise in saintly patience as its popularity has grown, and it’s almost unusably slow during business hours.

    Still, I have already found ChatGPT to be transformational in the workflow of writing code. I’m building a replacement for this website’s CMS right now, and adapting an excellent but substantial open source codebase as a starting point.

    The thing is, I’m not particularly experienced with JavaScript. I’m often baffled by its syntax, especially because there are frequently multiple ways to express the same concept, all of them valid, with different sources prescribing different versions.

    Now, when I get tripped up by this, I can solve the problem by dumping multiple, entire functions in a ChatGPT session.

    I swear to god, last night the machine instantly solved a problem that had me stumped for almost half an hour. I dumped multiple entire functions into the chat and asked it what I was doing wrong.

    It correctly diagnosed the issue—I was wrapping an argument in some braces when I didn’t need to—and I was back on my way.
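    The post’s actual code isn’t shown, but a common version of this braces trip-up in JavaScript involves object destructuring in function parameters. This sketch uses hypothetical `greet` and `shout` functions of my own invention:

    ```javascript
    // Destructuring parameter: the braces mean "expect an object,
    // pull out its `name` property."
    function greet({ name }) {
      return `Hello, ${name}`;
    }

    // Plain parameter: no braces, expects the string itself.
    function shout(name) {
      return `HELLO, ${name.toUpperCase()}`;
    }

    greet({ name: "Ada" }); // → "Hello, Ada" (braces required here)
    shout("Ada");           // → "HELLO, ADA"
    // shout({ name: "Ada" }) would throw a TypeError: wrapping the
    // argument in braces passes an object, and objects have no
    // toUpperCase() method.
    ```

    Both call styles are valid syntax, which is exactly why different sources prescribing different conventions can leave a newcomer stuck.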

    Remember: shitty technology that’s still good enough to capture our cultural imagination needn’t stay shitty forever.

    So if we extrapolate from these previous shifts, we can imagine a trajectory for PSEs that addresses many of the current issues.

    If that’s the case, the consequences and opportunities could indeed rank alongside the microprocessor, personal computer, the internet, and mobile.

    Fewer errors, less waiting, less brittleness

    The error correction is going to get better. Indeed, the Midjourney folks are thrilled to report they can make non-terrifying hands now.

    As PSEs evolve, we can imagine their speeds improving while the ways we interact with them grow more robust. The first iPhone was sluggish and the early experience of the consumer internet could be wrecked by something as simple as picking up the phone. Now these issues are long in the past.

    Persistent and pervasive

    Once, we “went on the internet.” There was a ritual to this, especially in the earliest days. Negotiating clearance for the phone line with various parties in the household, then initiating the phone connection, with its various squeals and chirps.

    In the broadband age, the network connection became persistent. If the cable modem was plugged in, the connection was there, steady. Today, thanks to mobile, we are for better or worse stuck online, tethered at all times to a network connection. Now we must make an effort to go offline, unplug.

    Today PSEs require significant hardware and computation, and remain centralized in the hands of a few players. As investment makes running them more efficient, and as hardware optimized for their execution develops, we can imagine a future where specialized PSEs are embedded more closely in our day-to-day activities. Indeed, for many business applications, this capability will be table stakes for adoption. At a minimum, large companies will demand their own, isolated PSEs to ensure they aren’t training a competitor’s data set.

    Internet of frauds

    With rapidly improving speech synthesis, plus the ability to construct plausible English language content based on a prompt, we are already seeing fraudsters using fake voices to scam innocent people.

    Perhaps most daunting is the prospect that, in time, the entire operation could become automated: set the machine loose to draw associational links between people, filter for targets whose voices you can synthesize, and just keep dialing, pocketing the money where you can.

    PSEs bring the issue of automated spam into whole new domains where we have no existing defenses.

    Digital guardians

    It has always struck me that the mind is an information system, but that unlike all our other information systems, we place it into networks largely unprotected. The junky router you get with your broadband plan has more basic protections against hostile actors than the typical user of Facebook or Twitter.

    PSEs could change this, detecting hateful and fraudulent content with greater speed and efficiency than any human alone.

    Gates describes this notion as the “personal agent,” and its science fiction and cyberpunk roots are clear enough:

    You’ll be able to use natural language to have this agent help you with scheduling, communications, and e-commerce, and it will work across all your devices. Because of the cost of training the models and running the computations, creating a personal agent is not feasible yet, but thanks to the recent advances in AI, it is now a realistic goal. Some issues will need to be worked out: For example, can an insurance company ask your agent things about you without your permission? If so, how many people will choose not to use it?

    The rise of fraud will require tools that can counter it with the same economics of automation. Without that, we’ll be drowning in bullshit by 2025. These guardians will be essential in an information war that has just gone nuclear.

    Learning and knowledge will be transformed

    I’m a first-generation knowledge worker. I grew up working class. If my mom missed work for the day, the money didn’t come in.

    My economic circumstances were transformed by microprocessors, personal computers, and the internet. I’ve earned passive income by selling software, and I’ve enjoyed the economic leverage of someone who can create those outcomes for venture funded startups.

    What made that transformation possible was how much I could learn, just for the fun of learning. I spent my childhood exploring every corner of the internet I could find, fiddling with every piece of software that showed up in my path.

    It was a completely informal, self-directed education in computing and technology that would never have been possible for someone of my socioeconomic status in earlier generations.

    Gates takes a largely technocratic view of PSEs and their impact on education:

    There are many ways that AIs can assist teachers and administrators, including assessing a student’s understanding of a subject and giving advice on career planning.

    But to me, the larger opportunity here is in allowing people who don’t learn well in the traditional models of education to nonetheless pursue a self-directed course of study, grounded in whatever it is they need to know to solve problems and sate their curiosity.

    Forget how this is going to impact schools. Just imagine how it will disrupt them. It goes beyond “cheating” at essays. Virtuous PSEs will create all new paths for finding your way in a complex world.

    The challenge, of course, is that anything that can be done virtuously can also be turned toward darkness. The same tech that can educate can also indoctrinate.

    Cultural and creative explosion

    There is now a messy yet predictable history between the technologists building PSEs and the creatives whose work they slurped up, non-consensually, to train the datasets. I think the artists should be compensated, but we’ll see what happens.

    Nevertheless, I can imagine long term implications for how we create culture. Imagine writing Star Trek fanfic and having it come to life as a full-motion video episode. You’ve got hundreds of hours of training data, multiple seasons of multiple series. How sets and starships look, how characters talk.

    It’s just as complicated a notion as any other we’ve covered here. Anything fans can do, studios can match, and suddenly we’re exploiting the likenesses of actors, writers, scenic artists, composers and more in a completely new domain of intellectual property.

    This one is far beyond the current tech, and yet seems inevitable from the perspective of what the tools can already do.

    Still: what will it mean when you can write a TV episode and have a computer spit it out for you? What about games or websites?

    We’re going to find out sometime. It might not even be that far away.

    A complicated future

    On the downsides alone, it’s easy to grant Gates a premise that places automated pattern synthesis on the same level as the internet or personal computing. These technologies created permanent social shifts, and PSEs could do much the same.

    Nevertheless, there’s also serious potential to amplify human impact.

    I’m allergic to hype. There are going to be a lot of bullshitters flooding this space, and you have every reason to treat this emerging field with a critical eye. That goes double for those hyping its danger as a feint to attract defense money.

    Nevertheless, there is power here. They’re going to build this future one way or another. Stay informed, stay curious.

    There’s something underway.

  • Retrospective on a dying technology cycle, part 4: What comes next?

    I quit my job a few weeks ago.

    Working with the crew at Glitch was a highlight of my career. Making technology and digital expression more accessible is one of my core drives. Nobody does it better than they do.

    But as the cycle reached its conclusion, like so many startups, Glitch found itself acquired by a larger company. Bureaucracies make me itchy, so I left, eager to ply my trade with startups once again.

    Then the Silicon Valley Bank crisis hit.

    It was a scary moment to have thrown my lot in with startups. Of course, we know now that the Feds guaranteed the deposits. Once I mopped the sweat from my brow, I got to wondering: what comes next?

    The steam of the last cycle is largely exhausted. Mobile is a mature platform, full of incumbents. The impact of cloud computing and open source on engineering productivity is now well understood, priced into our assumptions. The interest rate environment is making venture capital less of a free-for-all.

    Meanwhile, innovators have either matured into more conservative companies, or been acquired by them.

    So a new cycle begins.

    Why disruption is inevitable

    Large companies have one job: maintaining the status quo. This isn’t necessarily great for anyone except the companies’ shareholders, and as Research in Motion found when the BlackBerry died, even that has limits. Consumers chafe against the slowing improvement and blurring focus of products. Reality keeps moving while big companies insist their workers make little slideshows and read them to each other, reporting on this or that OKR or KPI.

    And suddenly: the product sucks ass.

    Google is dealing with this today. Search results are a fetid stew of garbage. They’re choked with ads, many of which are predatory. They lead to pages that aren’t very useful.

    Google, once beloved as the best way to access the web, has grown complacent.

    Facebook, meanwhile, blew tens of billions—hundreds of times the R&D costs of the first iPhone—to create a VR paradigm shift that never materialized. Their user populations are aging, which poses a challenge for a company that has to sell ads and maintain cultural relevance. They can’t quite catch up to TikTok’s mojo, so they’re hoping that lawmakers will kill their competitor for them.

    Twitter has been taken private, stripped down, converted into the plaything of the wealthy.

    Microsoft now owns GitHub and LinkedIn.

    Netflix, once adored for its innovative approach to distribution and original content, has matured into a shovelware play that cancels most of its shows after a season or two.

    Apple now makes so many billions merely from extracting rents on its App Store software distribution infrastructure that it could be a business unto itself.

    A swirling, dynamic system full of energy has congealed into a handful of players maintaining their turf.

    The thing is, there’s energy locked in this turgid ball of mud. People eager for things that work better, faster, cheaper. Someone will figure out how to reach them.

    Then they will take away incumbents’ money, their cultural relevance, and ultimately: their power. Google and Facebook, in particular, are locked in a race to become the next Yahoo: decaying, irrelevant, coasting on the inertia of a dwindling user base, no longer even dreaming of bygone power.

    They might both win.

    “Artificial” “intelligence”

    Look, AI is going to be one of the next big things.

    You can feel how you want to feel about it. A lot of how this technology has been approached—non-consensual slurping of artists’ protected works, training on internet hate speech—is weird and gross! The label is inaccurate!

    This is not intelligence.

    Still: whatever it is, is going to create some impact and long-term change.

    I’m more comfortable calling “AI” a “pattern synthesis engine” (PSE). You tell it the pattern you’re looking for, and then it disgorges something plausible synthesized from its vast set of training patterns.

    The pattern may have a passing resemblance to what you’re looking for.

    But even a lossy, incomplete, or inaccurate pattern can have immediate value. It can be a starting point that is cheaper and faster to arrive at than something built manually.

    This is of particular interest to me as someone who struggles with motivation around tedious tasks. Having automation to kickstart the process and give me something to chip away at is compelling.

    The dawn of the PSE revolution is text and image generation. But patterns rule everything around us. Patterns define software, communications, design, architecture, civil engineering, and more. Automation that accelerates the creation of patterns has broad impact.

    Indeed, tools like ChatGPT are already disruptive to incumbent technologies like Google. I have found it faster and more effective to answer questions around software development tasks—from which tools and frameworks are viable for a task, to code-level suggestions on solving problems—through ChatGPT, instead of hitting a search engine like Google. Microsoft—who you should never, ever sleep on—is seizing the moment, capturing more headlines for search than they have in decades.

    Still, I don’t valorize disruption for its own sake. This is going to make a mess. Pattern synthesis makes it cheaper than ever to generate and distribute bullshit, and that’s dangerous in the long term. It won’t stop at text and images. Audio, voices and video are all patterns subject to synthesis. It’s only a matter of time and technological progression before PSEs can manage their complexity cheaply.

    On the other hand, many tedious, manual tasks are going to fall before this technology. Individuals will find their leverage multiplied, and small teams will be able to get more done.

    The question, as always for labor savings, will be who gets to keep the extra cream.

    Remote work

    Most CEOs with a return-to-office fetish are acting the dinosaur and they’re going to lose.

    Feeling power through asses-in-seats is a ritualistic anachronism from a time when telecommunications was expensive and limited to the bandwidth of an analog telephone.

    Today, bandwidth is orders of magnitude greater, offering rich communication options without precedent. Today, the office is a vulnerability.

    In a world more and more subject to climate change, a geographically distributed workforce is a hedge. Look at the wildass weather hitting California in just the last few months. You’ve got cyclones, flooding, extreme winds, power outages. And that’s without getting into the seasonal fires and air pollution of recent years.

    One of the great lessons of Covid was that a knowledge workforce could maintain and even exceed its typical productivity in a distributed context. Remote work claws back the time and energy that commuting consumes, giving us more to spend on the goals and relationships that matter most.

    Still, even if remote work is the future, much of it has yet to arrive. The tools for collaboration and communication in a distributed context remain high-friction and even alienating. Being in endless calls is exhausting. There’s limited room for spontaneity.

    The success stories of the new cycle will be companies nimble enough to recruit teams from outside their immediate metro area, clever enough to invent new processes to support them, and imaginative enough to productize this learning into scalable, broadly applicable solutions that change the shape of work.

    The future does not belong to Slack or Zoom.

    The next iteration of the web

    One of the most important things Anil Dash ever told me came on a walk after we discussed a role at Glitch.

    He explained that he was sympathetic to the passion of the web3 crowd, even if he thought it was misplaced. After all, who could blame them for wanting a version of the web they could shape? Who could begrudge yearning for a future that was malleable and open, instead of defined by a handful of incumbents?

    I’ve thought about it endlessly ever since.

    While I argue that web3 and its adjacent crypto craze was a money grab bubble inflated by venture capital, I also agree with Anil: there has to be something better than two or three major websites defining the very substrate of our communication and culture.

    In the wake of Twitter’s capture by plutocrats, we’ve seen a possible answer to this hunger. Mastodon, and the larger ecosystem of the Fediverse, has seen a massive injection of energy and participation.

    What’s exciting here is a marriage of the magic of Web 1.0—weird and independent—with the social power that came to define Web 2.0. A web built on the Fediverse allows every stakeholder so much more visibility and authorship over the mechanics of communication and community compared to anything that was possible under Facebook, Twitter or TikTok.

    This is much the same decentralization and self-determination the web3 crowd prayed for, but it’s built on mundane technology with reasonable operating costs.

    Excess computing capacity

    Meanwhile, the cost of computing capacity is changing. While Amazon dominated the last cycle with their cloud offerings, the next one is anybody’s game.

    Part of this is the emergence of “edge computing,” “serverless functions,” and the blurring definition of a static vs. dynamic site. These are all artifacts of a simple economic reality: every internet infrastructure company has excess capacity that it can’t guarantee on the same open-ended basis as a full virtual machine, but which it would love to sell you in small, time-boxed slices.

    Capital abhors an un-leveraged resource.

    As this computing paradigm for the web becomes more commonly adopted, the most popular architectures of web technologies will evolve accordingly, opening the door to new names and success stories. As Node.js transformed web development by making it accessible to anyone who could write JavaScript, look for these technologies to make resilient, scalable web infrastructure more broadly accessible, even to non-specialists.
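    As a sketch of what those small, time-boxed slices look like in practice, here is a minimal serverless-style handler in JavaScript. The request-in, response-out shape is the common denominator across platforms; the exact signature varies by vendor, so treat this as a generic illustration rather than any specific provider’s API:

    ```javascript
    // A generic serverless-style handler. The platform invokes it once
    // per request and bills for the milliseconds it runs, rather than
    // for an always-on virtual machine.
    async function handler(request) {
      const url = new URL(request.url);
      const name = url.searchParams.get("name") ?? "world";
      return new Response(`Hello, ${name}!`, {
        headers: { "content-type": "text/plain" },
      });
    }
    ```

    The economic shift is the point: you pay for execution time, and the provider soaks up idle capacity that would otherwise go unsold.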

    Reality, but with shit in front of your eyes

    Everything is about screens because the eyes are the highest bandwidth link we have for the brain without surgical intervention, and even that remains mostly science fiction.

    The iPhone was transformational in part because of its comparatively enormous display for the day, and in the years since phones have competed by developing ever-larger, more dense displays.

    Part of the logic of virtual reality, which obscures your vision, and augmented reality, which overlays it, is to take this evolution to its maximalist conclusion, completely saturating the eye with as much information as possible.

    Whether this has a long-term sustainable consumer application remains to be proven. There are serious headwinds: the energy cost of high-density, high-frequency displays, and the consequent weight of batteries needed to maintain the experience. There’s the overall bulk of the devices, and the dogged friction of miniaturizing so many components to reach fashionable sizes.

    But they’re sure going to try. As noted, Facebook has already blown tens of billions, plus a rebrand, trying to make VR happen. Valve did their best to create both the software delivery and hardware platform needed to spur a VR revolution, with limited success. Sony and Samsung have each dabbled.

    This year, rumors suggest Apple will enter the fray with their own take on the headset. Precedent allows that they might find traction. With the iPad, Apple entered the stagnant tablet market nearly a decade late with an offering people actually loved.

    While tablets didn’t transform computing altogether, Apple made a good business of them and for many, they’ve become indispensable tools.

    If indeed VR/AR becomes a viable paradigm for consumer computing, that will kick off a new wave of opportunity. It has implications for navigation, communication, entertainment, specialized labor, and countless other spaces. The social implications will be broad: people were publicly hostile to users of Google Glass—the last pseudo-consumer attempt in this space. Any successful entrant will need to navigate the challenge of making these devices acceptable, and even fashionable.

    There are also demonic labor implications for any successful platform in this space: yet more workplace surveillance for any field that broadly adopts the technology. Imagine the boss monitoring your every glance. Yugh.

    Lasting discipline, sober betting, and a green revolution

    In a zero interest rate environment, loads of people could try their hands at technology speculation. If interest rates hold at their existing altitude, this flavor of speculation is going to lose its appeal.

    The talented investor with a clear eye toward leverage on future trends will have successes. But I think the party where you bet millions of dollars on ugly monkey pictures is over for a while.

    But there are no guarantees. Many seem to be almost pining for a recession—for a reset on many things, from labor power to interest rates. We’ve had a good run since 2008. It might just happen.

    Yet the stakes of our decisions are bigger than casino games. The project of humanity needs more than gambling. We need investment to forestall the worst effects of climate change.

    Aggressive investment in energy infrastructure, from mundane household heat pumps to the vast chess game of green hydrogen, will take up a lot of funding and mental energy going forward. Managing energy, making its use efficient, maintaining the transition off of fossil fuels—all of this has information technology implications. Ten years from now, houses, offices and factories will be smarter than ever, as a fabric of monitoring devices, energy sources and HVAC machines all need to act in concert.

    The old cycle is about dead. But the next one is just getting started. I hope the brightest minds of my generation get more lasting and inspiring work to do than selling ads. Many of the biggest winners of the last cycle have fortunes to plow into what comes next, and fond memories of the challenges in re-shaping an industry. There’s another round to play.

    We have so much still to accomplish.

  • The Terraformers, by Annalee Newitz, wants to talk about social stratification

    A speculative fiction author has four jobs:

    1. Imagination. Find us a fresh lens. Show us a place we’ve never been, or a perspective on the mundane that we’ve never seen. Inject tangible energy into our imagination systems.
    2. Timely insight. Help us understand our world in this moment. Help us see the systems that animate social reality, and help us use that vision for our own purposes.
    3. Clarity. Clear the mud from our collective windshields. Use all this fresh perspective to let us look with renewed clarity on all that’s wrong, all that’s beautiful, and all we need to do to enact change.
    4. Challenge. Push the reader, and in the pushing, let fresh perspective permeate more than just the most convenient, already-exposed surfaces.

    In combination, these gifts offer the reader extra fuel to continue in a challenging world. Speculative fiction is an activist project that both aligns the reader and empowers them with new conceptual tools. A good yarn in this genre gives you better metaphors for understanding your values, and even making the case for them.

    In The Terraformers, Annalee Newitz, using they/them pronouns, spins a tale tens of thousands of years into the deep future. They explore the long-term project of terraforming a planet, and the teams who spend lifetimes making it comfortable, habitable and marketable for future residents.

    It’s an ambitious undertaking. The range of characters and timespans brings to mind Foundation, Asimov’s epic of collapsing empire. Unlike Foundation, I was able to finish this one: its representation of gender and sexuality was refreshingly complete.

    In a previous Newitz work, Autonomous, the author impressed me with the casual inclusion of computing mechanics that moved the plot forward in ways that were coherent, plausible and folded neatly into the story. Similarly, while Terraformers isn’t cyberpunk per se—not fetishizing endlessly upon the details of brain-computer interfaces—it is such a technically competent work. I completely believe in the elaborate, far-future computing systems described here. While that’s not central to my enjoyment of a book, I like when the details enhance the magic instead of break it.

    But what makes The Terraformers stand out, why you have to read it, is much more human than technical. This is a book that wants to talk about social stratification.

    Frozen mobility

    The geometry of social mobility is bound up in the shape of inequality, and today we have both at extremes.

    Social mobility is nearly frozen. If you’re born poor, you’re likely to stay poor.

    Meanwhile, wealth inequality ratchets tighter each year, especially since the pandemic. The middle class erodes steadily, the wealthy control more and more wealth, and the poor grow both in number and in precarity. To say nothing of desperation in the developing world.

    These are not abstract truths. The statistics are more than just numbers in a spreadsheet. They reflect human experiences and traumas. They reflect a disparity of power, and they describe an everyday existence where wealth and corporate bureaucracies strip workers of agency and ignore their insights.

    Newitz brings these dry facts to vivid life, depicting the frustration and casual humiliation of living each day under the control of someone who sees you as a tool to implement their edicts. I don’t want to spoil anything, but I had a full-body limbic response to some of the email subject lines that issue from Ronnie, a central corporate figure in the story.

    Newitz brings an empathetic clarity to the experience of being employed. Anyone who has slotted into a corporate system whose executive team was far out of view will find themselves witnessed by this book.

    My favorite bits involve the places where the powerful are just pathologically incurious, and how this creates long-term consequences for their influence. It reads so true to life for me.

    There’s also great stuff in here about the raw, utilitarian mechanics of colonialism broadly: far-off people craving yet more wealth, making social and infrastructure decisions for people they’ll never meet, who have no recourse.

    …but am I also the oppressor?

    Good speculative fiction challenges as much as it validates.

    Newitz also asks through this work whether we are complicit ourselves in the everyday oppression of innocent lives yearning to be free. Not since James Herriot have I seen so many animals depicted with love, compassion and centrality to the story.

    Unlike Herriot, Newitz is unbound by Depression-era Britain’s technological limitations. Cybernetically enhanced, animals in The Terraformers text their companions, sharing their points of view and preferences, allowing them to be narrative prime movers in their own right.

    There’s delight in this: the thoughtful body language of cats figures prominently at points, and a wide cast of animal personalities appears throughout.

    But the book is also pointed in its examination of how we, the readers, may relate to animals. Are we using our assessment of their intelligence as a convenient justification for their subjugation? Are we losing out on valuable insight into hard problems because we are, ourselves, incurious about the perspectives of our animal companions?

    Perhaps most uncomfortably: how would such a justification eventually leak over into the control and oppression of other humans?

    Implicit in The Terraformers—especially in its exploration of bioengineering and corporate slavery—is the argument that corporations would treat us all like livestock if they thought they could get away with it. Maybe we should start asking whether livestock is even a valid category before it envelops us entirely.

    A thesis for a better world

    But Newitz is not here to depress us.

    Shot through the entire book is a bone-deep commitment to something hopeful and better than what we know today. Throughout the tale, we witness resistance on a spectrum of strategic forms: through journalism, through non-participation, through activist demonstration, through violence and non-violence, and of course, resistance through mutual aid. Despite this, there’s also sobriety about the frustrating pace of change, and just how messy consensus and governance can be.

    And a good case for why we have to work at it all anyway.

    Moreover, as a real-life technology cycle rife with exploitation and neocolonial intent comes to a close, it’s timely that The Terraformers explores how corporate-born technology can be bent toward self-determination and shared civic good. All around us are the tools to create completely novel solutions to big problems. We just need to take the time to dream them into existence, and to build the resources needed to sustain them.

    It’s not enough to shine a light on the problems. A good novel in this space lights up the levers. The Terraformers strikes a great balance between sober assessment of what hurts, and an optimistic vision for what we can do to change it.

    Go grab this book.
