
  • Replicate's Cog project is a watershed moment for AI

    I’ve argued that Pattern Synthesis (“AI”, LLMs, ML, etc) will be a defining paradigm of the next technology cycle. Patterns are central to everything we as humans do, and a mechanism that accelerates their creation could have impact on work, culture and communications on the same order as the microprocessor itself.

    If my assertion here is true, one would expect to see improving developer tools for this technology: abstractions that make it easy to grab Pattern Synthesizers and apply them to whatever problem space you may have.

    Indeed, you can pay the dominant AI companies for access to their APIs, but this is the most superficial participation in the Pattern Synthesis revolution. You do not fully control the technology that your business relies upon, and you therefore find yourself at the whims of a platform vendor who may change pricing, policies, model behaviors, or other factors out from beneath you.

    A future where Pattern Synthesis is a dominant technical paradigm is one where the models themselves are malleable, first-class development targets, ready for experimentation and weekend tinkering.

    That’s where the Cog project comes in.

    Interlude: the basics of DX in Pattern Synthesis

    The Developer Experience of Pattern Synthesis, as most currently understand it, involves making requests to an API.

    This is a well-worn abstraction, used today for everything from accepting payments to orchestration of cloud infrastructure. You create a payload of instructions, transmit it to an endpoint, and then a response gives your application what it needs to proceed.
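
    To make the abstraction concrete, here’s a minimal sketch of that request/response loop. The endpoint, API key, and payload fields are hypothetical stand-ins; real providers differ on all three.

    ```python
    # Minimal sketch of calling a hosted Pattern Synthesis API.
    # The URL, auth header, and payload fields are hypothetical; consult
    # your provider's documentation for the real ones.
    import requests

    payload = {
        "model": "some-hosted-model",  # hypothetical model identifier
        "prompt": "Summarize this support ticket in one sentence.",
    }

    resp = requests.post(
        "https://api.example.com/v1/generate",  # hypothetical endpoint
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()

    # The response body carries the synthesized output your application
    # proceeds with.
    print(resp.json()["output"])
    ```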

    Through convenient API abstractions, it’s easy to begin building a product with Pattern Synthesis components. But you will be fully bound by the model configuration of someone else. If their model can do what you need, great.

    But in the long term, deep business value will come from owning the core technology that creates results for your customers. Reselling a widget anyone else can buy off the shelf doesn’t leave you with much of a moat. Instead, the successful companies of the coming cycle will develop and improve their own models.

    Components of the Pattern Synthesis value chain

    Diagram of the stack described below

    In order to provide the fruits of a Pattern Synthesis Engine to your customers, you’ll be interacting with these components:

    1. Compute substrate: Information is physical. If you want to transform data, you’re going to need physical hardware somewhere that does the job. This is a bunch of silicon that stores the data and does massive amounts of parallelized computation. This stuff can be expensive, with Nvidia’s A100 GPU costing $10k just to get started.
    2. Host environment: Next, you’re going to need an operating system that can facilitate the interaction between your Pattern Synthesis model and the underlying compute substrate. A host environment does all of the boring, behind-the-scenes stuff that makes a modern computer run, including management of files and networking, along with hosting runtimes that leverage the hardware to accomplish work.
    3. Model: Finally we arrive at the Pattern Synthesizer itself. A model takes inputs and uses a stored pile of associations it has been “trained” with to transform that input into a given pattern. Models are diverse in their applications, able to transform sounds into text, text into images, classify image contents, and plenty more. This is where the magic happens, but as we can see, there are significant dependencies before you can even get started interacting with the model.
    4. Interface: At the top, an interface connects to these lower layers in order to provide inputs to the model and report its synthesized output. This starts as an API, but it’s usually wrapped in some sort of GUI, like a webpage.

    This is the status quo, and it’s not unique to “AI” work, either. You can swap the “model” with “application” and find this architecture describes the bulk of how the existing web works.

    As a result, existing approaches to web architecture have lessons to offer the developer experience for those building Pattern Synthesis models.

    Containerization

    In computing, one constant bugbear is coupling. Practitioners loathe tightly coupled systems, because such coupling can slow down future progress. If one half of a tightly coupled system becomes obsolete, the other half is weighed down until it can be cut free.

    A common coupling pattern in web development exists between the application and its host environment. This can become expensive, as every time the application needs to be hosted anew, the environment must then be set up from scratch to support its various dependencies. Runtimes and package managers are common culprits here, but the dependency details can be endless and surprising.

    This limits portability, acting as a brake on scaling.

    The solution to this problem is containerization. With containers, an entire host environment can be snapshotted and captured into fully portable files. Docker, and its Docker Engine runtime, is among the most well-known tools for the job.

    Docker Engine provides a further abstraction between the host environment and its underlying compute resources, allowing containers that run on it to be flexible and independent of specific hardware and operating systems.

    There’s a lot of devil in these details. There’s no magic here, just hard work to support this flexibility.

    But when it works, it allows complex systems to be hoisted into operation across a multitude of operating systems on a fully-automated basis. Get Docker Engine running on your machine, issue a few terminal commands, and you’ve got a new application running wherever you want it.

    With Cog, Replicate said: “cool, let’s do that with ML models.”

    Replicate’s innovation

    Replicate provides hosting for myriad machine learning models. Pay them money and you can have any model in their ecosystem, or one you trained yourself, available through the metered faucet of their API.

    Diagram of the same stack, with Docker absorbing the model layer

    To support this business model, Replicate interposes Docker into the existing value chain. Rather than figure out the specifics of how to make your ML model work in a particular hosting arrangement, you package it into a container using Cog:

    No more CUDA hell. Cog knows which CUDA/cuDNN/PyTorch/Tensorflow/Python combos are compatible and will set it all up correctly for you.

    Define the inputs and outputs for your model with standard Python. Then, Cog generates an OpenAPI schema and validates the inputs and outputs with Pydantic.
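
    In practice, that means writing an ordinary Python class. Here’s a minimal sketch of a Cog-style predictor following the pattern Cog documents; the model-loading and generation helpers are hypothetical stand-ins, and details may vary between Cog versions.

    ```python
    # predict.py -- minimal sketch of a Cog predictor.
    # load_my_model() and model.generate() are hypothetical stand-ins for
    # your actual model's loading and inference code.
    from cog import BasePredictor, Input, Path


    class Predictor(BasePredictor):
        def setup(self):
            # Runs once when the container starts: load weights into memory.
            self.model = load_my_model("weights.ckpt")

        def predict(
            self,
            prompt: str = Input(description="Text describing the desired image"),
            steps: int = Input(description="Number of inference steps", default=50),
        ) -> Path:
            # Cog reads these annotations to generate the OpenAPI schema
            # and validate inputs and outputs with Pydantic.
            image = self.model.generate(prompt, steps=steps)
            out = Path("output.png")
            image.save(out)
            return out
    ```

    Pair this with a cog.yaml declaring your Python version and packages, and Cog handles the container plumbing from there.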

    Thus, through containerization, the arcane knowledge of matching models with the appropriate dependencies for a given hardware setup can be scaled on an infinitely automated basis.

    The model is then fully portable, making it easy to host with Replicate.

    But not just with Replicate. Over the weekend I got Ubuntu installed on my game PC, laden as it is with a high-end—if consumer-grade—GPU, the RTX 4090. Once I figured out how to get Docker Engine installed and running, I installed Cog and then it was trivial to load models from Replicate’s directory and start churning out results.

    There was nothing to debug. Compared to other forays I’ve made into locally-hosted models, where setup was often error-prone and complex, this was so easy.

    The only delay came in downloading multi-gigabyte model dependencies when they were missing from my machine. I could try out dozens of different models without any friction at all. As promised, if I wanted to host the model as a local service, I just started it up like any other Docker container, a JSON API instantly at the ready.

    All of this through one-liner terminal commands. Incredible.
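
    As a rough illustration, hitting that local JSON API can look like the sketch below. It assumes Cog’s documented POST /predictions route on the default port; the input fields depend entirely on which model you loaded.

    ```python
    # Minimal sketch of querying a Cog container running locally.
    # The port and input fields are assumptions that depend on your setup.
    import requests

    resp = requests.post(
        "http://localhost:5000/predictions",
        json={"input": {"prompt": "a lighthouse at dusk, oil painting"}},
        timeout=120,
    )
    resp.raise_for_status()

    # The "output" field holds the synthesized result, or a reference to it.
    print(resp.json()["output"])
    ```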

    The confluence of Pattern Synthesis and open source

    I view this as a watershed moment in the new cycle.

    By making it so easy to package and exchange models, and giving away the underlying strategy as open source, Replicate has lowered the barriers to entry for tinkerers and explorers in this space. The history of open source and experimentation is clear.

    When the costs become low enough to fuck around with a new technology, this shifts the adoption economics of that technology. Replicate has opened the door for the next stage of developer adoption within the paradigm of Pattern Synthesis. What was once the exclusive domain of researchers and specialists can now shift into a more general—if still quite technical—audience. The incentives for participation in this ecosystem are huge—it’s a growth opportunity in the new cycle—and through Cog, it’s now so much easier to play around.

    More than that, the fundamental portability of models that Cog enables changes the approaches we can take as developers. I can see this leading to a future where it’s more common to host your models locally, or with greater isolation, enabling new categories of “AI” product with more stringent and transparent privacy guarantees for their customers.

    There remain obstacles. The costs of training certain categories of model can be astronomical. LLMs, for example, require millions of dollars to capitalize. Still, as the underlying encapsulation of the model becomes more amenable to sharing and collaboration, I will be curious to see how cooperative approaches from previous technology cycles evolve to meet the moment.

    Keep an eye on this space. It’s going to get interesting and complicated from here.


  • Succession is a civics lesson

    The typical American K-12 education is a civics lobotomy.

    Students are taught a simplistic version of American history that constantly centers the country as heroic, just and wise. America, we’re told, always overcomes its hardest challenges and worst impulses. In my 90’s education on the Civil Rights era, for example, we were told that Rosa Parks and MLK solved American racism once and for all. The latter’s assassination in this rendition was inconvenient and sad, but his sacrifice, we were meant to understand, paved the way for a more just and decent future.

    Similar simplicity was given to the workings of political power in the United States. Why, there’s a whole bicameral legislature, three branches of federal government, checks and balances, and a noble revolutionary history that gave birth to it all.

    Nowhere in this depiction was any time given to explaining lobbyists or campaign finance. Same for gerrymandering. These were yet more inconvenient details swept outside the scope of our lessons.

    I mean no slight on the teachers of America by this criticism. They do their best in a system designed for indoctrination and conformity. I think they’re some of the most important civic actors in our entire system. I’d give them more money, more independence, and more day-to-day support if I could wave a magic wand and make it so.

    Nevertheless, I emerged into adulthood feeling entirely unprepared to understand the civic complexity of this country. The entanglement of economic and political systems is a sort of dark matter that moves and shapes our everyday life, but lurks out of view without prolonged study and meaningful curiosity.

    This was frustrating: the world was obviously broken, and I didn’t have models or vocabulary to explain why. I came of age in the aftermath of the 2008 financial crisis, struggling economically myself, and watching so many of my peers flailing in their quests to find basic stability and prosperity.

    Indeed, Millennials have less wealth than previous generations did, holding single-digit percentages of the pie, versus 30% of US wealth for Gen X and 50% for the Boomers. Getting by with dignity and economic self-determination is an objectively hard slog.

    HBO’s Succession, now in its fourth and final season, brings a truck-mounted spotlight to the mechanics of inequality. It’s a fast-paced education in how late-capitalist power actually functions: the interactions between wealth, corporations, civic institutions and everyday people.

    The show, shot in a choppy, observational style, insists with every stylistic choice: “this is really how the world works.”

    I wish we’d had it much sooner.

    Elite crisis

    Part of what’s broken in America is the rank incompetence of our leadership. People in power are, in too many cases, just bad at their jobs.

    Before assuming his role as a Trump hatchet man, Jared Kushner bought The New York Observer as a sort of graduation present. His inept tenure there is the stuff of legend. The scion of a wealthy criminal, Kushner used the paper to prosecute personal beefs and, eventually, to endorse Donald Trump’s bid for the presidency.

    This is, indeed, the way the world works. Everything is pay-to-play. If you want something that people value, you can buy it.

    In following the travails of media titan Logan Roy, along with his children and the various toadies and yes-men who support their ailing empire, Succession makes the same point over and over again:

    It’s adjacency to wealth that determines your power, not your fitness to lead or your strategic competence. In an unequal world—63% of post-pandemic wealth went to 1% of humanity—money distorts relationships and decides opportunity.

    Over the show’s trajectory, we see Logan’s striver son-in-law Tom Wambsgans rise from minor functionary to chairman of a network that’s a clear stand-in for Fox News. All along, it’s clear that Tom doesn’t have any particular talents for the roles he’s given, but he is a loyal and trustworthy puppet for Logan.

    Meanwhile, the billionaire Roy children are constantly bumbling through various exercises at proving they’re the equal of their working class-turned-plutocrat father. Frequently out of their depth, they’re inserted relentlessly into positions of authority, with occasionally disastrous results. In rushing the launch of a communications satellite, for example, Kieran Culkin’s Roman Roy—somehow COO of his father’s media conglomerate—ends up destroying a launch vehicle and costing a technician his thumb.

    There’s never enough

    As much as the show is about wealth and power, it is also an exploration of personal and family dysfunction.

    Despite lives of extraordinary wealth and comfort—everyone lives in a state of Manhattan opulence mortals could never imagine—the Roy family carries decades of scars and trauma. They are human beings just like you or me, in this sense: they feel pain, they can be wounded, they carry grief.

    But unlike you or me, acting out their pain and grief lands with billion dollar shockwaves. The money doesn’t make them happy, no, but it does let them amplify their pain into the lives of others.

    So we see people of incredible power who can never have enough. What they need—peace, healing, clarity of self—is something they are unable to buy. What does it mean when flawed, broken human beings have the power to re-shape our world so completely? What does it mean when people have the money to buy their way out of consequences, even for the most dire of fuckups?

    It’s particularly resonant to follow the role and power of a media empire in this moment of our history. What does it mean to never have enough when your role is to inform and educate? What does “winning” mean, and cost, in an attention economy? What are we all losing, so a few rich guys can post a few more points on their personal scoreboards?

    Can we really sustain a few wealthy families using our civic fabric as their personal security blankets, instead of going to therapy?

    Succession wants us to ask these questions, and to imagine the consequences of their answers.

    Succession reassures you: it really IS nuts the world works like this

    Inequality isn’t an abstract statistic.

    Inequality is most people being a few bad months away from homelessness and destitution. It’s the majority of American workers living paycheck-to-paycheck, subject to the whims of their bosses. Inequality is medical bankruptcy for some, and elite recovery suites for others.

    Far from lionizing the wealthy, Succession constantly places the mindset that creates and preserves inequality under the microscope. The show is full of wry humor, quietly laughing at its cast in every episode. At the same time, it does a devastating job at explaining just how dark the situation is. The humor leavens the horror.

    With a tight four-season trajectory, the show’s principals have created something worth your time. There’s not a single wasted “filler” episode. The cast performs at the top of its game. The original score by Nicholas Britell manages to feel fresh yet permanent.

    It’s well-crafted entertainment, yes, but it’s also a window into the parts of our civic reality they didn’t teach you in school. With corporate power challenging and often defeating that of our civic institutions, it’s important to have some personal context for exactly how this all works in practice.

    It’s fiction, but there’s serious insight here. We just watched Elon Musk—son of a guy rich enough to own an emerald mine—set fire to tens of billions of dollars in part because he just desperately wants people to like him and think he’s cool. The real world absolutely works this way.

    The show is worth your time.


  • A major difference between LLMs and cryptocurrencies

    It can be hard to discern between what’s real and what’s bullshit hype, especially as charlatans in one space pack up for greener grifting pastures.

    But as Evan Prodromou notes:

    A major difference between LLMs and cryptocurrencies is this:

    For cryptocurrencies to be valuable to me, I need them to be valuable to you, too.

    If you don’t believe in crypto, the value of my crypto goes down.

    This isn’t the case for LLMs. I need enough people to be interested in LLMs that ChatGPT stays available, but other than that, your disinterest in it is only a minor nuisance.

    In a market, I benefit from your refusal to use it. AI-enhanced me is competing with plain ol’ you.

    This is the complexity of the moment with pattern synthesis engines (“AI”, LLMs, etc). Regardless of how our personal feelings interact with the space, it is already creating outcomes for people.

    Those outcomes may not always be what we want. One reddit thread from a game artist finds that visual pattern synthesizers are ruining what made the job fun:

    My Job is different now since Midjourney v5 came out last week. I am not an artist anymore, nor a 3D artist. Rn all I do is prompting, photoshopping and implementing good looking pictures. The reason I went to be a 3D artist in the first place is gone. I wanted to create form In 3D space, sculpt, create. With my own creativity. With my own hands.

    This is classic “degraded labor,” as described by Braverman in Labor and Monopoly Capital:

    Automation and industrialization—through atomizing tasks and encoding them into the behaviors of machinery, assembly lines and industrial processes—eliminate the process of craft from everyday labor. Instead of discovery, discernment and imagination, workers merely execute a pre-defined process. This makes such laborers more interchangeable—you don’t need to pay for expertise, merely trainability and obedience.

    So, we can feel how we want about pattern synthesizers, but they already create outcomes for those who deploy them.

    Clearly the labor implications are massive. My hope is that for all the degradation of everyday work this may usher in, it also expands the scope and impact for individuals and small teams operating on their own imagination and initiative.

  • What if Bill Gates is right about AI?

    Say what we will about Bill Gates, the man created one of the most enduring and formidable organisms in the history of computing. By market cap, Microsoft is worth two trillion dollars almost 50 years after its founding in 1975.

    Gates, who has seen his share of paradigm shifts, writes:

    The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.

    Decades on, we’re still watching the microprocessor super cycle play out. Even as we’ve hit a wall in the raw speed of desktop computers, every year we push more boundaries in power efficiency, as in silicon optimized for mobile, and in parallelization, as in GPUs. The stuff we can do for realtime graphics and portable computing would have been flatly impossible a decade ago.

    Every technological advancement in the subsequent paradigm shifts Gates describes depends on this one field.

    The personal computer, meanwhile, has stabilized as a paradigm—they’ve all looked more or less the same the last ten years—but remains the bedrock for how people in every field of endeavor solve problems, communicate and create.

    In short, Gates is arguing that AI is a permanent transformation of humanity’s relationship not just to technology, but to each other and our drives to mold the world around us.

    Look, I’m not the guy who mistakes wealth for trustworthiness. I’m here for any critique of Bill Gates and his initiatives you may want to make. But on the particular subject of paradigm shifts, he has the credibility of having navigated them well enough to amass significant power.

    So as an exercise, let’s grant his premise for a moment. Let’s treat him as an expert witness on paradigm shifts. What would it mean if he was right that this is a fundamental new paradigm? What can we learn about the shape of AI’s path based on the analogies of previous epochs?

    Internet evolution

    The internet had its first cultural moment in the 90’s, but that took decades of preparation. It was a technology for extreme specialists in academia and the defense establishment.

    And it just wasn’t that good for most people.

    Most use of the internet required on-premises access to one of a small handful of high-bandwidth links, each of which required expensive infrastructure. Failing that, you could use a phone connection to reach it, but you were constrained to painfully narrow bandwidth.

    For most of the 80’s, 9,600 bits per second was the absolute fastest you could connect to any remote service. And once you did, it’s not like there was even that much there. Email, perhaps, a few file servers, Usenet.

    By 1991, speeds crept up to 14.4 kilobits per second. A moderate improvement, but a 14.4 modem took several minutes to download even the simplest of images, to say nothing of full multimedia. It just wasn’t feasible. You could do text comfortably, and everything else was a slog.

    America Online, Compuserve and other online services were early midwives to the internet revolution, selling metered access over the telephone. Filling the content gap, they provided pre-web, service-specific news, culture, finance, and sports outlets, along with basic social features like chat and email.

    Dealing with the narrow pipe of a 14.4 modem was a challenge, so they turned the meter off for certain tasks, like downloading graphics assets for a particular content area. This task could take as long as half an hour.

    In short, the early experience of the “internet” was shit.

    Despite this, these early internet services were magic. It was addictive. A complete revolution and compulsion. The possibility of new friends, new conversations, new ideas that would have been absolutely impossible to access in the pre-internet world. The potential far outstripped the limitations. People happily paid an inflation-adjusted $6 hourly for the fun and stimulation of this experience.

    Interlude: the web, a shift within the shift

    Upon this substrate of connections, communication and community, the web was born.

    What was revolutionary about the web itself was its fundamental malleability. A simple language of tags and plaintext could be transformed, by the browser, into a fully realized hypermedia document. It could connect to other documents, and anyone with an internet connection and a browser could look at it.

    Instead of brokering a deal with a service like America Online to show your content to a narrow slice of the internet, you could author an experience that anyone could access. The web changed the costs of reaching people through the internet.

    So, yes, a business like Amazon.com, fully impossible without the internet and the web, could be built. But everyday people could learn the basics of HTML and represent themselves as well. It was an explosion of culture and expression like we’d never seen before.

    And again: the technology, by today’s standards, was terrible. But we loved it.

    Bandwidth and the accelerating pace of the internet revolution

    Two factors shaped the bandwidth constraints of the early consumer internet. The telephone network itself has a theoretical maximum of audio information it can carry. It was designed for lo-fi voice conversations more than a century ago.

    But even within that narrow headroom, modems left a lot on the table. As the market for them grew, technology miniaturized, error-correction mechanisms improved, and modems eked out more and more gains, quadrupling in speed between 1991 and 1998.

    Meanwhile, high-volume infrastructure investments made it possible to offer unlimited access. Stay online all night if you wanted to.

    As a consequence, the media capabilities of the internet began to expand. Lots of porno, of course, but also streaming audio and video of every persuasion. A medium of text chat and news posts was evolving into the full-featured organ of culture we know today.

    Of course, we know what came next: AOL and its ilk all withered as their technology was itself disrupted by the growing influence of the web and the incredible bandwidth of new broadband technologies.

    Today we do things that would be absolutely impossible at the beginning of the consumer internet: 4K video streaming, realtime multiplayer gaming, downloads of multi-gigabyte software packages. While the underlying, protocol-level pipes of the internet remain a work in progress, the consumer experience has matured.

    But maturation is different from stagnation. Few of us can imagine a world without the internet. It remains indispensable.

    What if we put that whole internet in your pocket?

    I’ve said plenty about mobile already, but let’s explore the subjective experience of its evolution.

    Back in 2007, I tagged along with a videographer friend to a shoot at a rap producer’s studio. Between shots, I compared notes with this producer on how we were each enjoying our iPhones. He thought it was cool, but it was also just a toy to him.

    It hadn’t earned his respect alongside all his racks of formidable tech.

    It was a reasonable position. The iPhone’s user experience was revolutionary in its clarity compared to the crummy phone software of the day. Its 2007 introduction is a time capsule of just how unsatisfied the average person was with the phone they grudgingly carried everywhere.

    Yet, objectively, the 2007 iPhone was the worst version they ever sold. Like the early internet, it was shit: no App Store, no enterprise device management features so you could use it at work, tortoise-slow cellular data, English-only UI.

    It didn’t even have GPS.

    But look what happened next:

    • 2008: App Store, 3G cellular data, GPS, support for Microsoft Exchange email, basic enterprise configuration
    • 2010: High-density display, front-facing camera for video calling, no more carrier exclusivity
    • 2011: 1080p video recording, no more paying per-message for texting, WiFi hotspot, broad international language support
    • 2012: LTE cellular data

    In just five years, the iPhone went from a neat curiosity with serious potential to an indispensable tool with formidable capabilities. Navigation, multimedia, gaming, high-bandwidth video calls on cellular—it could do it all. Entire categories of gadget, from the camcorder to the GPS device, were subsumed into a single purchase.

    None of this was magic. It was good ROI. While Apple enjoyed a brief lead, other mobile device manufacturers wanted a slice of the market as well. Consumers badly wanted the internet in their pocket. Demand incentivized investment, iteration and refinement of these technologies.

    Which brings us to Pattern Synthesis Engines

    I think AI is a misnomer, and causes distraction by anthropomorphizing this new technology. I prefer Pattern Synthesis Engine.

    Right now, the PSE is a toy. A strange curiosity.

    When creating images, it has struggled to render fingers and teeth. In chat, it frequently bluffs and bullshits. The interfaces we have for accessing it are crude and brittle—ChatGPT in particular is an exercise in saintly patience as its popularity has grown, and it’s almost unusably slow during business hours.

    Still, I have already found ChatGPT to be transformational in the workflow of writing code. I’m building a replacement for this website’s CMS right now, and adapting an excellent but substantial open source codebase as a starting point.

    The thing is, I’m not particularly experienced with JavaScript. In particular, I’m often baffled by its syntax, especially because there are often multiple ways to express the same concept, they’re all valid, and different sources prescribe different versions.

    Now, when I get tripped up by this, I can solve the problem by dumping multiple, entire functions in a ChatGPT session.

    I swear to god, last night the machine instantly solved a problem that had me stumped for almost half an hour. I dumped multiple entire functions into the chat and asked it what I was doing wrong.

    It correctly diagnosed the issue—I was wrapping an argument in some braces when I didn’t need to—and I was back on my way.

    Remember: shitty technology that’s still good enough to capture our cultural imagination needn’t stay shitty forever.

    So if we extrapolate from these previous shifts, we can imagine a trajectory for PSEs that addresses many of the current issues.

    If that’s the case, the consequences and opportunities could indeed rank alongside the microprocessor, personal computer, the internet, and mobile.

    Fewer errors, less waiting, less brittleness

    The error correction is going to get better. Indeed, the Midjourney folks are thrilled to report they can make non-terrifying hands now.

    As PSEs evolve, we can imagine their speeds improving while the ways we interact with them grow more robust. The first iPhone was sluggish and the early experience of the consumer internet could be wrecked by something as simple as picking up the phone. Now these issues are long in the past.

    Persistent and pervasive

    Once, we “went on the internet.” There was a ritual to this, especially in the earliest days. Negotiating clearance for the phone line with various parties in the household, then initiating the phone connection, with its various squeals and chirps.

    In the broadband age, the network connection became persistent. If the cable modem was plugged in, the connection was there, steady. Today, thanks to mobile, we are for better or worse stuck online, tethered at all times to a network connection. Now we must make an effort to go offline, unplug.

    Today PSEs require significant hardware and computation, and exist centralized in the hands of a few players. As investment makes running them more efficient, and hardware develops optimized for their execution, we can imagine a day in the future where specialized PSEs are embedded more closely in our day-to-day activities. Indeed, for many business applications, this capability will be table stakes for adoption. At a minimum, large companies will demand their own, isolated PSEs to ensure they aren’t training a competitor’s data set.

    Internet of frauds

    With rapidly improving speech synthesis, plus the ability to construct plausible English language content based on a prompt, we are already seeing fraudsters using fake voices to scam innocent people.

    Perhaps most daunting about this is the prospect that, in time, the entire operation could become automated. Set the machine loose to draw associational links between people, filter out the ones where you can generate a voice, and just keep dialing, pocketing the money where you can.

    PSEs bring the issue of automated spam into whole new domains where we have no existing defenses.

    Digital guardians

    It has always struck me that the mind is an information system, but that unlike all our other information systems, we place it into networks largely unprotected. The junky router you get with your broadband plan has more basic protections against hostile actors than the typical user of Facebook or Twitter.

    PSEs could change this, detecting hateful and fraudulent content with greater speed and efficiency than any human alone.

    Gates describes this notion as the “personal agent,” and its science fiction and cyberpunk roots are clear enough:

    You’ll be able to use natural language to have this agent help you with scheduling, communications, and e-commerce, and it will work across all your devices. Because of the cost of training the models and running the computations, creating a personal agent is not feasible yet, but thanks to the recent advances in AI, it is now a realistic goal. Some issues will need to be worked out: For example, can an insurance company ask your agent things about you without your permission? If so, how many people will choose not to use it?

    The rise of fraud will require tools that can counter it with the same economics of automation. Without that, we’ll be drowning in bullshit by 2025. These guardians will be essential in an information war that has just gone nuclear.

    Learning and knowledge will be transformed

    I’m a first-generation knowledge worker. I grew up working class. If my mom missed work for the day, the money didn’t come in.

    My economic circumstances were transformed by microprocessors, personal computers, and the internet. I’ve earned passive income by selling software, and I’ve enjoyed the economic leverage of someone who can create those outcomes for venture funded startups.

    What made that transformation possible was how much I could learn, just for the fun of learning. I spent my childhood exploring every corner of the internet I could find, fiddling with every piece of software that showed up in my path.

    It was a completely informal, self-directed education in computing and technology that would never have been possible for someone of my socioeconomic status in earlier generations.

    Gates takes a largely technocratic view of PSEs and their impact on education:

    There are many ways that AIs can assist teachers and administrators, including assessing a student’s understanding of a subject and giving advice on career planning.

    But to me, the larger opportunity here is in allowing people who don’t learn well in the traditional models of education to nonetheless pursue a self-directed course of study, grounded in whatever it is they need to know to solve problems and sate their curiosity.

    Forget how this is going to impact schools. Just imagine how it will disrupt them. It goes beyond “cheating” at essays. Virtuous PSEs will create all new paths for finding your way in a complex world.

    The challenge, of course, is that anything that can be done virtuously can also be turned toward darkness. The same tech that can educate can also indoctrinate.

    Cultural and creative explosion

    There is now a messy yet predictable history between the technologists building PSEs and the creatives whose work they slurped up, non-consensually, to train the datasets. I think the artists should be compensated, but we’ll see what happens.

    Nevertheless, I can imagine long term implications for how we create culture. Imagine writing Star Trek fanfic and having it come to life as a full-motion video episode. You’ve got hundreds of hours of training data, multiple seasons of multiple series. How sets and starships look, how characters talk.

    It’s just as complicated a notion as any other we’ve covered here. Anything fans can do, studios can match, and suddenly we’re exploiting the likenesses of actors, writers, scenic artists, composers and more in a completely new domain of intellectual property.

    This one is far beyond the current tech, and yet seems inevitable from the perspective of what the tools can already do.

    Still: what will it mean when you can write a TV episode and make a computer spit it out for you? What about games or websites?

    We’re going to find out sometime. It might not even be that far away.

    A complicated future

    On the downsides alone, it’s easy to grant Gates a premise placing automated pattern synthesis on the same level as the internet or personal computing. These technologies created permanent social shifts, and PSEs could do much the same.

    Nevertheless, there’s also serious potential to amplify human impact.

    I’m allergic to hype. There are going to be a lot of bullshitters flooding this space, and you have every reason to treat this emerging field with a critical eye. Especially those hyping its danger, as a feint to attract defense money.

    Nevertheless, there is power here. They’re going to build this future one way or another. Stay informed, stay curious.

    There’s something underway.


  • Retrospective on a dying technology cycle, part 4: What comes next?

    I quit my job a few weeks ago.

    Working with the crew at Glitch.com was a highlight of my career. Making technology and digital expression more accessible is one of my core drives. Nobody does it better than they do.

    But as the cycle reached its conclusion, like so many startups, Glitch found itself acquired by a larger company. Bureaucracies make me itchy, so I left, eager to ply my trade with startups once again.

    Then the Silicon Valley Bank crisis hit.

    It was a scary moment to have thrown my lot in with startups. Of course, we know now that the Feds guaranteed the deposits. Once I mopped the sweat from my brow, I got to wondering: what comes next?

    The steam of the last cycle is largely exhausted. Mobile is a mature platform, full of incumbents. The impact of cloud computing and open source on engineering productivity is now well understood, priced into our assumptions. The interest rate environment is making venture capital less of a free-for-all.

    Meanwhile, innovators have either matured into more conservative companies, or been acquired by them.

    So a new cycle begins.

    Why disruption is inevitable

    Large companies have one job: maintaining a status quo. This isn’t necessarily great for anyone except the companies’ shareholders, and as Research in Motion found when Blackberry died, even this has limits. Consumers chafe against the slowing improvement and blurring focus of products. Reality keeps on moving while big companies insist their workers make little slideshows and read them to each other, reporting on this and that OKR or KPI.

    And suddenly: the product sucks ass.

    Google is dealing with this today. Search results are a fetid stew of garbage. They’re choked with ads, many of which are predatory. They lead to pages that aren’t very useful.

    Google, once beloved as the best way to access the web, has grown complacent.

    Facebook, meanwhile, blew tens of billions—hundreds of times the R&D costs of the first iPhone—to create a VR paradigm shift that never materialized. Their user populations are aging, which poses a challenge for a company that has to sell ads and maintain cultural relevance. They can’t quite catch up to TikTok’s mojo, so they’re hoping that lawmakers will kill their competitor for them.

    Twitter has been taken private, stripped down, converted into the plaything of the wealthy.

    Microsoft now owns GitHub and LinkedIn.

    Netflix, once adored for its innovative approach to distribution and original content, has matured into a shovelware play that cancels most of its shows after a season or two.

    Apple now makes so many billions merely from extracting rents on its App Store software distribution infrastructure that the practice could be a business unto itself.

    A swirling, dynamic system full of energy has congealed into a handful of players maintaining their turf.

    The thing is, there’s energy locked in this turgid ball of mud. People eager for things that work better, faster, cheaper. Someone will figure out how to reach them.

    Then they will take away incumbents’ money, their cultural relevance, and ultimately: their power. Google and Facebook, in particular, are locked in a race to become the next Yahoo: decaying, irrelevant, coasting on the inertia of a dwindling user base, no longer even dreaming of bygone power.

    They might both win.

    “Artificial” “intelligence”

    Look, AI is going to be one of the next big things.

    You can feel how you want to feel about it. A lot of how this technology has been approached—non-consensual slurping of artists’ protected works, training on internet hate speech—is weird and gross! The label is inaccurate!

    This is not intelligence.

    Still: whatever it is, it’s going to create some impact and long-term change.

    I’m more comfortable calling “AI” a “pattern synthesis engine” (PSE). You tell it the pattern you’re looking for, and then it disgorges something plausible synthesized from its vast set of training patterns.

    The pattern may have a passing resemblance to what you’re looking for.

    But even a lossy, incomplete, or inaccurate pattern can have immediate value. It can be a starting point that is cheaper and faster to arrive at than something built manually.

    This is of particular interest to me as someone who struggles with motivation around tedious tasks. Having automation to kickstart the process and give me something to chip away at is compelling.

    The dawn of the PSE revolution is text and image generation. But patterns rule everything around us. Patterns define software, communications, design, architecture, civil engineering, and more. Automation that accelerates the creation of patterns has broad impact.

    Indeed, tools like ChatGPT are already disruptive to incumbent technologies like Google. I have found it faster and more effective to answer questions around software development tasks—from which tools and frameworks are viable for a task, to code-level suggestions on solving problems—through ChatGPT, instead of hitting a search engine like Google. Microsoft—who you should never, ever sleep on—is seizing the moment, capturing more headlines for search than they have in decades.

    Still, I don’t valorize disruption for its own sake. This is going to make a mess. Pattern synthesis makes it cheaper than ever to generate and distribute bullshit, and that’s dangerous in the long term. It won’t stop at text and images. Audio, voices and video are all patterns subject to synthesis. It’s only a matter of time and technological progression before PSEs can manage their complexity cheaply.

    On the other hand, many tedious, manual tasks are going to fall before this technology. Individuals will find their leverage multiplied, and small teams will be able to get more done.

    The question, as always for labor savings, will be who gets to keep the extra cream.

    Remote work

    Most CEOs with a return-to-office fetish are acting the dinosaur and they’re going to lose.

    Eventually.

    Feeling power through asses-in-seats is a ritualistic anachronism from a time when telecommunications was expensive and limited to the bandwidth of an analog telephone.

    Today, bandwidth is orders of magnitude greater, offering rich communication options without precedent. Today, the office is vulnerability.

    In a world more and more subject to climate change, a geographically distributed workforce is a hedge. Look at the wildass weather hitting California in just the last few months. You’ve got cyclones, flooding, extreme winds, power outages. And that’s without getting into the seasonal fires and air pollution of recent years.

    One of the great lessons of Covid was that a knowledge workforce could maintain and even exceed their typical productivity even in a distributed context. Remote work claws back time and energy that goes to commuting, giving us more energy to use on the goals and relationships that matter most.

    Still, even if remote work is the future, much of it has yet to arrive. The tools for collaboration and communication in a distributed context remain high-friction and even alienating. Being in endless calls is exhausting. There’s limited room for spontaneity.

    The success stories of the new cycle will be companies nimble enough to recruit teams from outside their immediate metro area, clever enough to invent new processes to support them, and imaginative enough to productize this learning into scalable, broadly applicable solutions that change the shape of work.

    The future does not belong to Slack or Zoom.

    The next iteration of the web

    One of the most important things Anil Dash ever told me came on a walk after we discussed a role at Glitch.

    He explained that he was sympathetic to the passion of the web3 crowd, even if he thought it was misplaced. After all, who could blame them for wanting a version of the web they could shape? Who could begrudge yearning for a future that was malleable and open, instead of defined by a handful of incumbents?

    I’ve thought about it endlessly ever since.

    While I argue that web3 and its adjacent crypto craze was a money grab bubble inflated by venture capital, I also agree with Anil: there has to be something better than two or three major websites defining the very substrate of our communication and culture.

    In the wake of Twitter’s capture by plutocrats, we’ve seen a possible answer to this hunger. Mastodon, and the larger ecosystem of the Fediverse, has seen a massive injection of energy and participation.

    What’s exciting here is a marriage of the magic of Web 1.0—weird and independent—with the social power that came to define Web 2.0. A web built on the Fediverse allows every stakeholder so much more visibility and authorship over the mechanics of communication and community compared to anything that was possible under Facebook, Twitter or TikTok.

    This is much the same decentralization and self-determination the web3 crowd prayed for, but it’s built on mundane technology with reasonable operating costs.

    Excess computing capacity

    Meanwhile, the cost of computing capacity is changing. While Amazon dominated the last cycle with their cloud offerings, the next one is anybody’s game.

    Part of this is the emergence of “edge computing,” “serverless functions,” and the blurring definition of a static vs. dynamic site. These are all artifacts of a simple economic reality: every internet infrastructure company has excess capacity that they can’t guarantee on the same open-ended basis as a full virtual machine, but which they’d love to sell you in small, time-boxed slices.

    Capital abhors an un-leveraged resource.

    As this computing paradigm for the web becomes more commonly adopted, the most popular architectures of web technologies will evolve accordingly, opening the door to new names and success stories. As Node.js transformed web development by making it accessible to anyone who could write JavaScript, look for these technologies to make resilient, scalable web infrastructure more broadly accessible, even to non-specialists.
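
    To ground the idea, here’s a minimal sketch of the kind of small, time-boxed function these platforms sell, written against a generic Python handler signature; every provider has its own entry point and event shape, so treat the details as assumptions.

    ```python
    # Minimal sketch of a serverless/edge function. The handler(event, context)
    # signature and event shape are generic assumptions; real platforms differ.
    import json

    def handler(event, context):
        # The platform spins this up on demand, bills for the milliseconds
        # used, and tears it down. No long-lived virtual machine to rent.
        params = event.get("queryStringParameters") or {}
        name = params.get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"greeting": f"Hello, {name}!"}),
        }
    ```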

    Reality, but with shit in front of your eyes

    Everything is about screens because the eyes are the highest bandwidth link we have for the brain without surgical intervention, and even that remains mostly science fiction.

    The iPhone was transformational in part because of its comparatively enormous display for the day, and in the years since, phones have competed by developing ever-larger, more dense displays.

    Part of the logic of virtual reality, which obscures your vision, and augmented reality, which overlays it, is to take this evolution to its maximalist conclusion, completely saturating the eye with as much information as possible.

    Whether this has a long-term sustainable consumer application remains to be proven. There are serious headwinds: the energy cost of high-density, high-frequency displays, and the consequent weight of batteries needed to maintain the experience. There’s the overall bulk of the devices, and the dogged friction of miniaturizing so many components to reach fashionable sizes.

    But they’re sure going to try. As noted, Facebook has already blown tens of billions, plus a rebrand, trying to make VR happen. Valve did their best to create both the software delivery and hardware platform needed to spur a VR revolution, with limited success. Sony and Samsung have each dabbled.

    This year, rumors suggest Apple will enter the fray with their own take on the headset. Precedent allows that they might find traction. With the iPad, Apple entered the stagnant tablet market nearly a decade late with an offering people actually loved.

    While tablets didn’t transform computing altogether, Apple made a good business of them and for many, they’ve become indispensable tools.

    If indeed VR/AR becomes a viable paradigm for consumer computing, that will kick off a new wave of opportunity. It has implications for navigation, communication, entertainment, specialized labor, and countless other spaces. The social implications will be broad: people were publicly hostile to users of Google Glass—the last pseudo-consumer attempt in this space. Any successful entrant will need to navigate the challenge of making these devices acceptable, and even fashionable.

    There are also demonic labor implications for any successful platform in this space: yet more workplace surveillance for any field that broadly adopts the technology. Imagine the boss monitoring your every glance. Yugh.

    Lasting discipline, sober betting, and a green revolution

    In a zero interest rate environment, loads of people could try their hands at technology speculation. If interest rates hold at their existing altitude, this flavor of speculation is going to lose its appeal.

    The talented investor with a clear eye toward leverage on future trends will have successes. But I think the party where you bet millions of dollars on ugly monkey pictures is over for a while.

    But there are no guarantees. Many seem to be almost pining for a recession—for a reset on many things, from labor power to interest rates. We’ve had a good run since 2008. It might just happen.

    Yet the stakes of our decisions are bigger than casino games. The project of humanity needs more than gambling. We need investment to forestall the worst effects of climate change.

    Aggressive investment in energy infrastructure, from the mundane work of household heat pumps to the vast chess game of green hydrogen, will take up a lot of funding and mental energy going forward. Managing energy, making its use efficient, maintaining the transition off of fossil fuels—all of this has information technology implications. Ten years from now, houses, offices and factories will be smarter than ever, as a fabric of monitoring devices, energy sources and HVAC machines must all act in concert.

    The old cycle is about dead. But the next one is just getting started. I hope the brightest minds of my generation get more lasting and inspiring work to do than selling ads. Many of the biggest winners of the last cycle have fortunes to plow into what comes next, and fond memories of the challenges in re-shaping an industry. There’s another round to play.

    We have so much still to accomplish.


  • The Terraformers, by Annalee Newitz, wants to talk about social stratification

    A speculative fiction author has four jobs:

    1. Imagination. Find us a fresh lens. Show us a place we’ve never been, or a perspective on the mundane that we’ve never seen. Inject tangible energy into our imagination systems.
    2. Timely insight. Help us understand our world in this moment. Help us see the systems that animate social reality, and help us use that vision for our own purposes.
    3. Clarity. Clear the mud from our collective windshields. Use all this fresh perspective to let us look with renewed clarity on all that’s wrong, all that’s beautiful, and all we need to do to enact change.
    4. Challenge. Push the reader, and in the pushing, let fresh perspective permeate more than just the most convenient, already-exposed surfaces.

    In combination, these gifts offer the reader extra fuel to continue in a challenging world. Speculative fiction is an activist project that both aligns the reader and empowers them with new conceptual tools. A good yarn in this genre gives you better metaphors for understanding your values, and even making the case for them.

    In The Terraformers, Annalee Newitz (who uses they/them pronouns) spins a tale tens of thousands of years into the deep future. They explore the long-term project of terraforming a planet, and the teams who spend lifetimes making it comfortable, habitable and marketable for future residents.

    It’s an ambitious undertaking. The range of characters and timespans brings to mind Foundation, Asimov’s epic of collapsing empire. Unlike Foundation, I was able to finish this one: its representation of gender and sexuality was refreshingly complete.

    In a previous Newitz work, Autonomous, the author impressed me with the casual inclusion of computing mechanics that moved the plot forward in ways that were coherent, plausible and folded neatly into the story. Similarly, while Terraformers isn’t cyberpunk per se—not fetishizing endlessly upon the details of brain-computer interfaces—it is such a technically competent work. I completely believe in the elaborate, far-future computing systems described here. While that’s not central to my enjoyment of a book, I like when the details enhance the magic instead of break it.

    But what makes The Terraformers stand out, why you have to read it, is much more human than technical. This is a book that wants to talk about social stratification.

    Frozen mobility

    The geometry of social mobility is bound up in the shape of inequality, and today we have both at extremes.

    Social mobility is nearly frozen. If you’re born poor, you’re likely to stay poor.

    Meanwhile, wealth inequality ratchets tighter and tighter each year, especially since the pandemic. The middle class erodes steadily, the wealthy control more and more wealth, and the poor grow both in number and in their precarity. To say nothing of desperation in the developing world.

    These are not abstract truths. The statistics are more than just numbers in a spreadsheet. They reflect human experiences and traumas. They reflect a disparity of power, and they describe an everyday existence where wealth and corporate bureaucracies strip workers of agency and ignore their insights.

    Newitz brings these dry facts to vivid life, depicting the frustration and casual humiliation of living each day under the control of someone who sees you as a tool to implement their edicts. I don’t want to spoil anything, but I had a full-body limbic response to some of the email subject lines that issue from Ronnie, a central corporate figure in the story.

    Newitz brings an empathetic clarity to the experience of being employed. Anyone who has slotted into a corporate system whose executive team was far out of view will find themselves witnessed by this book.

    My favorite bits involve the places where the powerful are just pathologically incurious, and how this creates long term consequences for their influence. Reads so true to life for me.

    There’s also great stuff in here about the raw, utilitarian mechanics of colonialism broadly: far-off people craving yet more wealth, making social and infrastructure decisions for people they’ll never meet, who have no recourse.

    …but am I also the oppressor?

    Good speculative fiction challenges as much as it validates.

    Newitz also asks through this work whether we are complicit ourselves in the everyday oppression of innocent lives yearning to be free. Not since James Herriot have I seen so many animals depicted with love, compassion and centrality to the story.

    Unlike Herriot, Newitz is unbound by Depression-era Britain’s technological limitations. Cybernetically enhanced, animals in The Terraformers text their companions, sharing their points of view and preferences, allowing them to be narrative prime-movers in their own right.

    There’s delight in this: the thoughtful body language of cats figures prominently at points, and a wide cast of animal personalities appears throughout.

    But the book is also pointed in its examination of how we, the readers, may relate to animals. Are we using our assessment of their intelligence as a convenient justification for their subjugation? Are we losing out on valuable insight into hard problems because we are, ourselves, incurious about the perspectives of our animal companions?

    Perhaps most uncomfortably: how would such a justification eventually leak over into the control and oppression of other humans?

    Implicit in The Terraformers—especially in its exploration of bioengineering and corporate slavery—is the argument that corporations would treat us all like livestock if they thought they could get away with it. Maybe we should start asking whether livestock is even a valid category before it envelops us entirely.

    A thesis for a better world

    But Newitz is not here to depress us.

    Shot through the entire book is a bone-deep commitment to something hopeful and better than what we know today. Throughout the tale, we witness resistance on a spectrum of strategic forms: through journalism, through non-participation, through activist demonstration, through violence and non-violence, and of course, resistance through mutual aid. Despite this, there’s also sobriety about the frustrating pace of change, and just how messy consensus and governance can be.

    And a good case for why we have to work at it all anyway.

    Moreover, as a real-life technology cycle rife with exploitation and neocolonial intent comes to a close, it’s timely that The Terraformers explores how corporate-born technology can be bent toward self-determination and shared civic good. All around us are the tools to create completely novel solutions to big problems. We just need to take the time to dream them into existence, and to work at securing the resources needed to sustain them.

    It’s not enough to shine a light on the problems. A good novel in this space lights up the levers. The Terraformers strikes a great balance between sober assessment of what hurts, and an optimistic vision for what we can do to change it.

    Go grab this book.


  • Retrospective on a dying technology cycle, part 3: Venture Capital and an economy on life support

    After I left my job to build apps in 2009, life took on some hard math. I only had so much money saved up. While I was making around $1,000 monthly from the App Store, the gap between my income and expenses was leaving me in the red.

    So my maximum budget for a meal was $3. If I could get under that number, I tried.

    This meant clever use of bulk buying at Costco and lots of meal prep. Going out for meals was an occasional indulgence, but the three dollar figure remained a constraint. Spending more than that meant having to bring home enough leftovers to break even.

    I’d figured I could make some extra money working part time while I got the details of my app business figured out. To this point in my life, part-time work had been easy to come by. I took it for granted, like water from the tap. It would be there if I needed it.

    But the financial crisis meant stiff competition for any work. I’d moved to Bend, Oregon for its low cost of living and beautiful scenery. Unfortunately, no one was hiring, and as the new kid in town, I had no local network to lean on. When a Kohl’s opened up, it made the evening news: newly laid-off job applicants wrapped around the block hoping for a chance at even 15 hours of work per week.

    I wasn’t just competing with part-time job seekers. I was competing with people who needed any work they could get to keep a roof over their families’ heads.

    It was a scary, precarious time. Unemployment peaked around 10% nationwide, even greater than in the dot-com bust, with millions of workers unable to find jobs. I came within weeks of losing my housing.

    This desperate climate catalyzed the fuel for the last technology cycle. Central bankers around the world slashed interest rates nearly to zero. Without traditional means of generating returns on their money, pensions, institutional investors and the wealthy sought new vehicles for growth.

    The Venture Capitalist had just what they needed. On behalf of these limited partners, a VC would secure a packet of investments in technology firms, one or more of which might just burst into a multi-billion dollar valuation.

    They called these “unicorns.”

    Cheap scale

    Software solves a problem through automation.

    When you build a spreadsheet to solve your reporting needs, you’ve just made yourself some software. It’s got decent leverage: instead of manually filling out a chart, some formulas summarize your data and nicely format it, saving you a few hours a week.

    You make a trade for this leverage. Making the report manually maybe costs you an hour a week. Building automation into the spreadsheet might take you much longer: a couple of afternoons. But within a few months, the leverage has paid for itself in time savings. Within a year, you’re ahead.
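
    As a rough sketch of that trade, here is the break-even arithmetic in Python, using the figures above (an hour a week saved, and a couple of afternoons, call it eight hours, to build the automation):

        # Illustrative break-even arithmetic for the spreadsheet example above.
        # Assumed figures: the manual report costs ~1 hour/week; building the
        # automation costs ~2 afternoons, roughly 8 hours of work.
        manual_hours_per_week = 1
        build_cost_hours = 8

        weeks_to_break_even = build_cost_hours / manual_hours_per_week
        hours_ahead_after_a_year = 52 * manual_hours_per_week - build_cost_hours

        print(weeks_to_break_even)       # 8.0: paid for itself within a few months
        print(hours_ahead_after_a_year)  # 44: comfortably ahead within a year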

    This simple example captures the economic value of software, and its potential scales well beyond an individual’s problem at the office. Whatever costs you sink in creating software, you get many multiples back in whatever work it accomplishes.

    You can make office productivity tasks more efficient, you can build social interaction that promotes a particular style of engagement, you can deliver entertainment through games or other media, and you can even make cars show up in particular places where people want them.

    Any task that software can do once, it can do again as many times as you want. On paper, this happens at near-zero marginal cost.

    In practice, what this means is complicated. As Twitter’s Fail Whale demonstrated, just because scale is cheap doesn’t make it free. Architecting and re-architecting software systems to be resilient and responsive requires specialist knowledge and significant investment as your customer base transitions from one order of magnitude to the next.

    Copying code costs almost nothing. Running code on the web, and keeping it running, is more expensive. It requires infrastructure to execute the code, and it requires human labor to troubleshoot and maintain it in flight.

    The bargain of Venture Capital

    This is where the Venture Capitalist comes in.

    Writing code is expensive: it needs a full-time team of specialist engineers who know their way around the required technologies. The engineers also need to know what to build, so other specialists, like designers and strategists, are needed to document and refine the goals and end-product.

    The code continues to be expensive once it’s written. Again, information is physical: it lives somewhere. The data that software handles has a physical location in a database and has to be moved to and from the user, who may be quite far away from the server they’re interacting with.

    Thus, more specialist labor: engineers who can design technical infrastructure that gracefully avoids traffic jams as a web-based service becomes popular. Of course, there are more specialists after that. People to tell the story of the software, people to sell it, people to interact with the customers and users who are struggling to use it.

    Code needs humans. It can’t be born without them, it needs them to grow, and it can’t stay running forever in their absence. Automation is powerful, but it’s never perfect, and it exists in a human context.

    Humans are expensive. We need shelter, food, rest, recreation, partnership, indulgences, adventure… The list of human needs is vast, and like it or not, we meet those needs with time and money.

    So the Venture Capitalist buys a chunk of a growing software firm, injecting it with money in trade. The money can be used more or less freely, but much of it goes into paying for humans.

    Despite plunging tens or hundreds of millions of dollars into a firm, if the VC does their job right, they come out ahead. Software that can solve problems effectively, that captures a customer base that will pay for those problems to be solved, can produce billions of dollars in value. So long as the lifetime value of a customer is meaningfully greater than what it costs to earn them, you’ve got a powerful business.
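
    As a minimal sketch of that condition, with purely hypothetical numbers (none of these figures describe any real company):

        # Hypothetical unit economics: the business works when a customer's
        # lifetime value meaningfully exceeds the cost of earning them.
        customer_acquisition_cost = 200        # assumed dollars spent to win one customer
        monthly_revenue_per_customer = 30      # assumed subscription price
        average_lifetime_months = 24           # assumed retention

        lifetime_value = monthly_revenue_per_customer * average_lifetime_months
        print(lifetime_value)                               # 720
        print(lifetime_value / customer_acquisition_cost)   # 3.6x: a powerful business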

    So spending a few million to support an infant firm can turn into orders of magnitude more money down the road.

    Not every startup achieves this success. These are outlier results. The majority will fail to achieve the scale and value needed for a win, in terms of venture expectations. But when the outliers are that lucrative, the overall model can work.

    So long as you pick an outlier.

    Going whackadoodle

    GitHub was unique. Founded in 2008, GitHub was profitable almost immediately, without needing any venture funding. They are perhaps the ultimate modern success story in software tools:

    GitHub simultaneously defined and seized the central turf of developer collaboration.

    Their strategy was devastatingly effective. Git was an emerging technology for version control, which every software project large and small needs. Version control maintains a history of project progress—who changed what, on which date, and why—while allowing developers to travel back in time to a previous state in the project.

    Historically, developers needed an internet connection to interact with the history of a project, so they could connect with a server that stored all of the project data. Git’s great innovation was to decentralize this process, letting developers work in isolation and integrate their changes with a server at their leisure.

    Originally developed to serve the needs of the Linux kernel project, one of the largest collaborations in open source history, git had both enormous power and daunting complexity. Still, git would be the future not just of open source, but of software development altogether.

    Seeing this, GitHub’s founders built a service that provided:

    • A central server to host git-based projects, called repositories
    • Convenient user interfaces to browse, change and duplicate repository content
    • Collaboration tools to discuss, improve and ship code with both teammates and strangers

    On the strength of this developer experience, GitHub fast became the most popular way to build software, no matter the size of your team or your goals.

    Its power and value earned it leverage. Four years into its life, it raised money on wildly favorable terms. It was a safe bet: enormous traction, plenty of users, massive goodwill, and a whole enterprise market waiting to be conquered.

    Then its cultural debts caught up with it.

    There’s a situation here

    GitHub built a replica of the Oval Office. They also built a replica of the White House Situation Room. Then there was the open bar, offering self-serve alcohol at any hour of the day. The rest of the space had the typically fashionable vibes you’d expect of a startup, but these indulgences caused onlookers to raise their eyebrows.

    Many clowned GitHub for the White House stuff—myself included—but here is a place where I will now defend their intent. America is lots of things, and primarily an empire. The symbols of an imperialist state have a lot of complicated (that is, bloody) history and it’s a peculiar sort of hubris to adopt them whole-cloth.

    But America, in its most aspirational of modes, sees itself as a fabric for creating prosperity and shared growth. Having spent a couple of years there, I really think that’s where GitHub was coming from in its office cosplay decisions.

    They wanted to be a fertile landscape for shared prosperity, and picked up the most garish possible symbols of that aspiration. It’s not how I would have come at it, but I get how they landed there. It’s impossible to imagine the last chapter of software without GitHub as a productive substrate for all the necessary collaboration.

    This is a tidy microcosm for the cycle. Incredible ambition and optimism, tempered by social challenges few founding teams were equipped to address. An industry whose reach often exceeded its grasp.

    GitHub’s blindspots, coupled with its incredible early power, would conspire to make it both a distressed asset and a notorious place to work, claiming one of its founders in the process.

    The mess in its workplace culture made it challenging for the company to recruit both the experienced leadership and specialist labor needed to get it to the next phase of its evolution.

    GitHub did the work to patch itself up, recruiting the team it needed to get back to shipping and evolve into a mature company. With the bleeding stopped and stability restored, Microsoft purchased the company for—you guessed it—billions of dollars.

    This stuff was not unique to GitHub

    Zenefits had people fucking in stairwells, Uber spent $25 million on a Vegas bacchanal, and WeWork… they did a whole docudrama about WeWork.

    With nowhere else for money to go, successful founders had a lot of leverage to spend as they pleased. VCs needed to compete for the best deals—not everything would be a unicorn, after all—and so the “founder-friendliness” of a firm was a central consideration of investor reputation. No founder wanted the money guys fiddling around and vetoing their vision, and VCs feared losing out on the best deals.

    So investors kept a light touch on the day-to-day management of firms, so long as they were pursuing a strategy that kept hitting their growth metrics.

    Meanwhile, you’ll notice I keep emphasizing specialist labor. Some problems in technology are particularly challenging, with a limited number of practitioners able to make meaningful traction on them. Thus, startups had to work hard to recruit their workforce. Founders would argue here, with a certain degree of legitimacy, that differentiating their experience of work through generous perks was an essential lever in winning the competition for limited talent.

    Thus began the frenzy for hyper-growth that continued for more than a decade. Money pumped into the system, searching for its unicorns. Founders with more technical acumen than management experience tried to recruit the teams they needed. Investors kept the show running, hoping to grow money for their limited partners.

    But this was mere prelude to the wackiest money of all.

    Crypto craze

    The way venture historically worked:

    • Invest in a young company
    • Support it through relationships and more cash, if its progress was promising
    • Wait around for quite a while for it to reach maturity
    • Get paid upon a “liquidity event”—when the company goes public (lots of work) or is acquired (a bit less work)

    Cryptocurrency, non-fungible tokens, and other so-called web3 projects were an irresistible optimization on the normal cycle. Going public required stability, clean accounting, a proven record of growth, and a favorable long-term outlook for a company’s overall niche. It required stringent regulatory compliance, as a company transformed from a privately-held asset into a tradable component of the public markets.

    Keeping a financial system healthy takes work, after all, and bodies like the SEC exist to ensure obviously-toxic actors don’t shit in the pool.

    Less arduous was an acquisition. Sometimes these were small, merely breaking even on an investor’s cash. Still, multi-billion dollar acquisitions like Figma and GitHub do happen. To integrate into a public company like Microsoft or Adobe, a startup needs to get its house in order in lots of ways. Accounting, information security, sustainable growth—it takes time to get a company into a level of fitness that earns the best price.

    Cryptocurrency investment promised to short-circuit all of this.

    Investors could buy “tokens” as a vehicle for investing in a company. Later, when those tokens became openly tradable—through a mostly unregulated, highly-speculative market—investors could cash in when they decided the time was right.

    Rather than waiting seven or ten years for a company to reach maturity, investors in crypto firms could see liquidity within half that time.

    With cloud services full of incumbents and mobile maturing, it was hoped that cryptocurrencies and web3 would represent an all-new paradigm, starting a new cycle of investment and innovation.

    More than a decade after the 2008 crisis, central bankers had to pare interest rates back down to zero once again, this time in response to the Covid-19 pandemic. This created a fresh rush of money in search of growth, and cryptocurrency-adjacent companies were happy to lap it up.

    What comes next

    Of course, we know how this ends. Covid-era inflation soared, and central bankers cranked rates back up to heights not seen since before 2008.

    Crypto came crashing back to earth, NFTs became a tax loss harvesting strategy, and now we muse about what else was a “zero interest rate phenomenon.” I’d be shocked if crypto was entirely dead, but for the moment, its speculative rush has been arrested.

    In the next and final installment, I want to take stock of what we can learn from all this, and what comes next.


  • Retrospective on a dying technology cycle, part 2: Open Source and the Cloud

    [Previously: Mobile]

    Though it rescued me from insolvency, I grew to hate my job at Aurora Feint. Two months into the gig, Apple Sherlocked us, sending the company into an endless series of pivots, each lasting four to six weeks.

    The engineers were crunching, working evenings and weekends, only to throw away their work and start again, seemingly based on how well one press release or another gained attention. Meanwhile, I was far from the action in an ill-defined product management role. While I knew the emerging mobile product space as well as anyone, I had limited vision into the various rituals of Silicon Valley.

    What I really wanted was to build mobile software.

    I interviewed for months, but with no pedigree and a strange path into technology through self-teaching, I didn’t get very far.

    After a particularly dispiriting day at the office, I sniped the attention of Adam Goldstein and Steve Huffman, then founders of a YC-funded travel search startup called Hipmunk. I wrote a flattering blog post about their product, posted it to Hacker News, and within weeks had a new job: leading mobile product design and engineering for a seed-stage startup.

    They never asked for a resume. They checked out my existing work on the App Store and made me an offer.

    I was the third employee. The entire company of five could fit into a San Francisco apartment’s living room. And did, regularly, in its first few months.

    Despite the small crew and modest quarters, thousands of people were buying airfare through Hipmunk already. Its great innovation was visualizing your flight options on a colorful Gantt chart, ranked by “Agony”—factors like layovers and overall flight time.

    Two things gave this team enough leverage to do it: open source software and cloud computing. A credit card was all they needed to provision server capacity. An internet connection gave them access to all the software they needed to make the servers productive, laying the foundation for building Hipmunk’s unique approach to finding flights.

    What is Open Source, anyway?

    Open source has permanently changed the nature of human productivity.

    Open source describes code you can use, according to some basic licensing terms, for free. Vast communities may contribute to open source projects, or they may be labors of love by one or a handful of coders working in their spare time.

    In practice, open source code is encapsulated labor.

    One of the fundamental economic levers of software in a globally-connected community is that it scales at near-zero marginal cost. Once the software exists, it’s nearly free to duplicate.

    Thus, if you have software that solves a common problem—provides an operating system, handles web requests, manages data, runs code—you can generalize it, and then anyone can use it to solve that category of problem in their own project. Such combinations of open source projects, called “stacks,” are sometimes packaged together.

    Such open source stacks made the last cycle possible. The Facebook empire could be founded in a dorm room because Zuckerberg had access to Linux (OS), Apache (web server), MySQL (database), and php (language/runtime) as a unified package that solved the problem of getting you productive on the web.

    This represented tens of thousands of hours of labor that startups didn’t have to recruit, pay for, or even wait for, dramatically accelerating their time to market.

    The details of serving HTTP requests or handling database queries aren’t central to a startup’s business value—unless you’re making a platform play where your performance on these categories of tasks can be sold back to other firms. But for most, there’s no advantage to reinventing those wheels; they just have to work well. Instead, borrowing known-good implementations can quickly provide the foundations needed to build differentiated experiences.

    In Facebook’s case, this was a minimalist social platform; for Hipmunk, visual travel search; for Uber, car dispatch. There was no innovation to be found in merely existing on the web. Instead, to survive you had to be on the web while also providing unique, differentiated value.

    Open source is an informal commons of these known-good implementations, regularly refined and improved. Any party finding a project lacking can lobby for refinement, including proposing direct changes to the code itself.

    I would argue there is societal impact from this that rivals other epochal moments in information technology, like the advent of writing or the discovery of algorithms. Being able to store, replicate and share generalized technical labor is a permanent reorganization of how millions of people work. While capitalists were among the first to seize on its power, over decades and centuries it may permeate much more than business.

    The power of open source was constantly animating the last cycle. Node.js, a server-side runtime for JavaScript first released in 2009, took one of the most well-known languages in the world and made it viable for building both the user interface components of a web application and its backend server. Thus open source could not just encapsulate labor, but amplify it in motion as well. Suddenly, many more people knew a language that could be used to build complex web services. As a result, Node became one of the most popular strategies for hobbyists and startups alike.

    Meanwhile, it wasn’t enough for code to be easy to build on. The code actually has to live somewhere. While open source was a crucial engine of the last cycle, cloud computing running this open source code was just as essential.

    It’s in the cloud

    Information is physical.

    It needs somewhere to live. It needs to be transformed, using electricity and silicon. It needs to be transported, using pulses of light on glass fibers.

    Software requires infrastructure.

    Historically, this had meant shocking amounts of capital expenditures—thousands of dollars for server hardware—along with ongoing operating expenditures, as you leased space in larger facilities with great connectivity to host these servers.

    Scaling was painful. More capacity meant buying more hardware, and installing it, which took time. But the internet is fast, and demand can spike much more quickly than you can provision these things.

    Amazon, craving massive scale, had to solve all these problems at unique cost. They needed enough slack in their systems to maintain performance at peak moments, like holiday sales, back-to-school, and other seasonal surges. With the size of Amazon’s engineering division, meanwhile, this capacity needed to be factored into tidy, composable services with clear interfaces and rigorous documentation, allowing any team to easily integrate Amazon’s computing fleet into their plans and projects.

    Capital abhors an under-leveraged asset, so Amazon decided to sell metered access to these resources. Though Amazon was the first to use this approach, today they compete with Microsoft, Google, IBM and others for corporate cloud budgets all over the world.

    With access to “the cloud” just a credit card away, it’s easy for small teams to get started on a web-based project. While scaling is commercially much easier than ever—you just pay more to your cloud vendor, then get more capacity—it can still be a complex technical lift.

    The arcane art of distributing computing tasks between individual machines, while still producing correct, coherent output across large numbers of users, requires specialist labor and insight. The cloud isn’t magic. Teams have to be thoughtful to make the most of its power, and without careful planning, costs can grow devastating.

    Still, this power is significant. The cloud can itself be programmed, with code that makes scaling decisions on the fly. Code can decide to increase or trim the pool of computing capacity without human intervention, adjusting the resources available to create maximum customer impact and system resiliency.

    Netflix is a notable specimen on this point, as one of the earliest large-scale adopters of Amazon’s cloud services. They created an infrastructure fabric that is designed to tolerate failure and self-heal. They even built a technology called “Chaos Monkey” to continually damage these systems. This forces their engineers to build with resilience in mind, knowing that at any moment, some infrastructure dependency might go offline.

    (Chaos Monkey is, naturally, available as open source.)
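
    To make the idea of programmable scaling concrete, here’s a minimal sketch of the kind of decision loop such code runs. The metric and provisioning functions are hypothetical stand-ins, not any particular cloud vendor’s API:

        # Hypothetical autoscaling loop: grow or trim the server pool based on load.
        # get_average_cpu(), get_server_count() and set_server_count() stand in for
        # whatever metrics and provisioning interfaces a real cloud platform exposes.
        import time

        MIN_SERVERS, MAX_SERVERS = 2, 50

        def autoscale(get_average_cpu, get_server_count, set_server_count):
            while True:
                cpu = get_average_cpu()          # fleet-wide utilization, 0.0 to 1.0
                count = get_server_count()
                if cpu > 0.75 and count < MAX_SERVERS:
                    set_server_count(count + 1)  # demand is spiking: add capacity
                elif cpu < 0.25 and count > MIN_SERVERS:
                    set_server_count(count - 1)  # fleet is idle: trim costs
                time.sleep(60)                   # re-evaluate every minute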

    For companies in infancy, like Hipmunk, the cloud opened the path to a fast-iterating business. For companies entering maturity, like Netflix, the cloud provided a platform for stability, resilience, global presence, and scale.

    The world had never seen anything like it. In fact, there was early skepticism about this approach, before it came to dominate.

    Today, the cloud is a fact of life. Few would even bother starting a new project without it. What was once transformational is now priced into our expectations of reality.

    Next

    Hipmunk would not become a galloping success. It sold to Concur in 2016, eventually scrapped for parts. Steve Huffman would return to lead Reddit, his first startup. Reddit sold to Condé Nast for peanuts, only to spin back out as an independent venture as, stubbornly, it just wouldn’t die.

    Meanwhile, six months after my departure, the renamed OpenFeint was acquired for $100m by a Japanese mobile company in need of next-generation mobile technology. I didn’t get a dime of the action—I left before my cliff kicked in, and I couldn’t have afforded to exercise the options even if they were open to me.

    That’s the circle of life in venture capital. Not every bet is a winner, death is common, and sometimes acquisition is bitter.

    But when the winners win, they can win big. After a couple of years, I left Hipmunk for a couple more gigs leading mobile teams. Under all the VC pressure and SF’s punishing cost of living, I burned out hard.

    Somehow, barely able to get out of bed most mornings, I landed on a unicorn. In the next piece, we’ll explore how venture capital was the fuel of the last cycle, creating growth in a zero interest rate global economy by harnessing startups that, impossibly, created billions of dollars in value.


  • Retrospective on a dying technology cycle, part 1: Mobile

    I missed the introduction of the original iPad.

    Instead of watching Steve Jobs unveil his bulky slab of aluminum and glass, I was picking my way through the streets of downtown Burlingame. I’d spent what was left of my available credit card balance on a ticket to SFO.

    I was due for my first tech interview.

    Before he created the chat juggernaut Discord, Jason Citron founded a mobile game startup. After middling success selling games themselves, Aurora Feint had pivoted to middleware: client- and server-side social technology to serve the growing ranks of independent, mobile game developers.

    A few blocks’ walk from the Caltrain station, I’d found my destination: a plain office building hosting a bank and an insurance broker on the lower floors. Hidden away upstairs, thirty employees of Aurora Feint were toiling away, trying to capture turf in the unspooling mobile revolution.

    The interview was a grueling, all-day affair running 10:30 to 4, including one of those friendly lunches where the engineers try to figure out if you’re full of shit.

    It was 2010, and the inertia of the financial crisis was still on me. Weeks away from homelessness, I somehow closed the day strong enough to earn my first startup job.

    As a result, I got to watch the last technology cycle from its earliest phases.

    It transformed my life. I wasn’t the only one.

    Animating forces

    The last cycle was activated by convergent forces:

    1. The mobile revolution: An all-new set of platforms for communication, play and productivity with no incumbents. A green field for development and empire-building.
    2. The rise of cloud computing: In the past, computing resources were expensive and provisioning was high-friction. By 2010, cloud computing offered low-risk, highly-scalable capacity for teams of any size.
    3. The power of open source software: Open source transformed the economics of building a software business. A vast, maturing bazaar of software resources, free to use, gave founding teams a head-start on the most common tasks. This amplified their power and allowed small seed investments to produce outsized results.
    4. An economy on life support: In the wake of the 2008 financial crisis, economic malaise dominated. Central bankers responded with historically low interest rates. Growing money needed a new strategy, and the growth potential of software empires was irresistible.

    Each of these, alone, promised incredible power. In combination, they were responsible for a period of growing wealth and narrow prosperity without precedent. As the cycle draws to a close, we find each of these factors petering out, losing sway, or maturing into the mundane.

    As a consequence, the cycle is losing steam.

    This wasn’t a bubble—though it did contain a bubble, in the form of cryptomania. The last cycle was a period of dramatic, computing-driven evolution, whose winners captured value and power.

    New platforms and a digital land rush

    The 2007 introduction of the iPhone is a pivotal moment in the history of consumer computing. An Apple phone had been rumored for years, and on the heels of the popular iPod, it was an exciting prospect.

    Mobile phones of the day were joyless, low-powered devices that frustrated users even for the most common tasks. While Palm, Blackberry and Danger were nibbling at the edges of what was possible, none of them had command of the overall market or culture of cellphones.

    So when Jobs, barely suppressing a shit-eating grin, walked through the demo of Apple’s new iPhone—easy conference calling, interactive maps, software keyboard, convenient texting—the audience gasped audibly, cheering with unrestrained delight. The user experience of the cell phone was changed forever.

    To accomplish this transformation, Apple had pushed miniaturization as far as possible for the day. Using a high-efficiency processor built on the ARM architecture was a typical move for mobile computing—even Apple’s failed Newton from 15 years earlier used this approach. The revolution came from porting the foundations of Apple’s existing operating system, Mac OS X, into a form that could run within the narrow headroom of an ultra-efficient, battery-powered portable computer.

    On this foundation, Apple shipped a cutting-edge UI framework that supported multitouch gestures, realtime compositing of UI elements, lush animation, and a level of instant responsiveness that had never existed in a handheld context.

    While the iPhone wasn’t an overnight success—it would take years to capture its current marketshare—it was an instant sensation, eventually demolishing the fortunes of Palm, Blackberry, and Microsoft’s mobile strategy.

    And without Apple’s permission, people began writing code for it.

    The promise of a vast green field

    Hobbyist hackers broke the iPhone’s meager security within months of its launch, and went straight to work building unauthorized apps for it. Regardless of Apple’s original plans, the potential of the platform was unmistakable, and within a year of its 2007 introduction, both the iPhone OS SDK and App Store launched.

    This was yet another sensation. Indie developers were crowing about shocking revenue, with the earliest apps taking home six figures in the first month. Apple had created a vast, fertile market with turn-key monetization for every developer.

    The rush was on.

    Google soon joined the fray with Android, offering their own SDK and marketplace.

    So by 2010, mobile was a brand new platform strategy. It used new libraries, targeted new form factors, and had all new technical constraints.

    There were no incumbents. Anyone could be a winner in mobile, so loads of people tried. Moreover, the use-cases mobile presented—always connected, with realtime geolocation and communication—were entirely novel.

    The story of Uber is impossible without the substrate of mobile. In March of 2009, roughly a year after the launch of the iPhone OS SDK, Uber was hard at work building unprecedented, automated infrastructure for car dispatch. The iPhone 3G, the most recent model of the day, had integrated cell tower-assisted GPS, allowing any device to know its exact position in realtime, even in urban areas, and report that information back to a central server. Add this to an advanced mapping API included in every iPhone, and Uber had leverage to build a category of business that had never existed.

    We know the rest of the story. Uber transformed urban transportation worldwide, adopting a take-no-prisoners approach and burning through vast sums of cash.

    The gig economy and the automated supervisor

    Of course, Uber wasn’t the only player leveraging this power to the hilt.

    Lyft and other “rideshare” startups built their own spins on the model. But the power of automation delivered through pocket-sized, always-connected computers created a new category of labor and business, which the press dubbed “the gig economy.”

    Using software automation, companies could animate the behavior of legions of dubiously-categorized independent contractors. Software told these workers what to do, where to go, and when they were needed. Software onboarded these workers, software paid them, and using ratings and algorithms, software could fire them.

    In an unexpected but inevitable twist of Taylorism, modern automation replaced not the workers themselves but middle managers.

    Gig work created new categories of service, and invaded existing ones as well. There were apps for delivery of all foods, apps for cleaning and home handiwork, even an app that handled packing and shipping of goods, sparing you a trip to the post office. Seemingly anywhere Silicon Valley could interpose itself in a commercial relationship through the lure of convenience, it tried.

    Reporting not to other humans but faceless, algorithm-driven apps, gig workers were precarious. Their pay was volatile, their access to opportunity was opaque. The capricious whims of customers and their ratings—a central input to the middle-management algorithms—dictated their ongoing access to income. They had limited redress when things were bad for them because the apps that defined their daily work weren’t built to express curiosity about their experiences.

    And they had no direct relationships to their customers.

    Gig work companies responded to these complaints by touting the flexibility their platforms offered. The rigid scheduling of legacy labor was replaced by the freedom to work when you wanted to, even at a moment’s notice.

    That was just one of many opportunities created by an always-online, always-with-you computer.

    The attention economy

    In 1990, the only way to reach someone waiting in line at a bank was to put a poster on the wall. Most banks weren’t selling wall space, so this was their exclusive domain to sell financial products like small business loans, mortgages and savings accounts.

    20 years later, a customer waiting in line at a bank had a wealth of smartphone apps competing for their attention.

    Social media platforms like Facebook and Twitter were in their mobile infancy, but by 2010 anyone with a smartphone could tune out their immediate surroundings and connect with their friends. It wasn’t just social platforms, either. Mobile games like Words With Friends, Angry Birds, and Candy Crush competed vigorously to hold the attention of their enthralled users.

    Mobile computing created an entirely new opportunity for connecting to humans during the in-between times. Historically, digital businesses could only access humans when they were at their desks in their homes and offices. With mobile, a 24/7 surface area for commerce, advertising, culture and community arrived overnight.

    Mobile ossification

    So we see that the new paradigm of mobile computing opened the door to previously impossible feats. It was a space where the primary obstacles were time and funding to create something people wanted.

    Today, mobile is full of incumbents. Instead of a green field, it’s a developed ecosystem full of powerful players worth trillions in total. It is a paradigm that has reached maturity.

    More than a decade after smartphones became a mass-market phenomenon, they’ve grown commonplace. Yearly iterations add marginal improvements to processing speed and incremental gains on camera systems.

    When Facebook craves the metaverse, it’s because their totalizing strategy for growth—two billion users to date—is approaching a ceiling. They badly want a new frontier to grow in again, and they’d love to control the hardware as much as the software.

    But VR doesn’t solve everyday problems the way smartphones did. No one craves a bulky headset attached to their face, blocking reality, with battery life measured at a couple hours. Augmented reality might have greater impact, but miniaturizing the technology needed to provide comfortable overlays with the same constant presence as the smartphone in your pocket is still years away. Could it be less than five? Maybe, but I wouldn’t bet on it. More than ten? Unlikely.

    That range doesn’t present immediate opportunities for new growth on the scale they need. Indeed, Facebook is retreating from its most ambitious positions on the metaverse, as its strategy committed billions to the effort for limited return.

    There will be new paradigms for mobile computing. But we’re not there yet. Even so, Apple—16 years gone from their revolutionary iPhone announcement—is poised to release their own take on VR/AR. And like Facebook, they’re stuck with technological reality as it is. Perhaps their software advantage will give them the cultural traction Facebook couldn’t find.

    Next

    In my next post, I’ll get into the twin engines of cloud computing and open source, which were crucial foundations to the mobile revolution.

    Native code wasn’t enough. True power was orchestration of opportunity between devices, which needed server-side technology. Through the power of open source code running in the cloud, any team could put together a complete solution that had never been seen before.


  • Developer success is about paying the rent

    I look at developer experience as a nested set of microeconomic problems. A well-optimized DX makes the most of the individual and team labor needed to:

    • Learn developer tools, from APIs to design patterns to integration strategies
    • Apply these tools to the developer’s specific domain and goals
    • Debug outcomes to arrive at a reasonably robust, predictable, correct implementation
    • Successfully ship artifacts on whatever terms the developer cares about (creative, commercial, collaborative)
    • Iterate on these artifacts to refine their effectiveness and power
    • Collaborate across all these facets, either as a distinct team, or with fellow developers on the internet working on distinct projects

    In short, DX is about creating the conditions for developers to be successful.

    There are many ways, meanwhile, to define developer success. But for the specific context of a developer tools business, I would argue the definition is simple: developer success is a rent check that clears.

    While times are turbulent in the technology industry, the fact remains that software skills are a high-leverage form of labor. When you create software, you’re creating impact that can scale at near-zero marginal cost, serving anywhere from dozens of customers to billions. This is powerful, and the need for it exists across every industry sector.

    Every business needs software, and most businesses will use it in some form to deliver value to their customers. To accomplish this, they hire the best developers their labor budgets, networks and cultures can attract.

    Thus, if you’re making devtools for money, you have to understand how your tools are competing against alternatives that make developers more effective at creating results that pay their rent and advance their careers.

    Onboarding is an essential lever in this equation. If the activation energy needed to become productive in your tool is too great, it’s going to lose against alternatives that provide more immediate results. You want the time between a developer reading your marketing page, and that same developer making their first API call, successful build, or other hello-world-like activity, to be as short as possible.

    A few minutes is ideal.

    A few seconds is even better.
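
    In concrete terms, the first-touch moment you’re optimizing for looks something like this. This is only a sketch; the endpoint and key below are hypothetical placeholders, not any real product’s API:

        # A hypothetical "hello world" moment: one request, a visible result, no setup.
        # The endpoint and key are placeholders standing in for your actual product.
        import urllib.request

        request = urllib.request.Request(
            "https://api.example.com/v1/hello",            # placeholder endpoint
            headers={"Authorization": "Bearer YOUR_KEY"},  # key from the signup flow
        )
        with urllib.request.urlopen(request) as response:
            print(response.read().decode())                # the developer sees a result in seconds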

    But even if you optimize your basic mechanics to make that moment of traction easier, there’s more to do. Your project needs to demonstrate its design patterns in a way that lets developers import the mental model of your expectations for a successful integration.

    Mere docs of the existing functionality are not enough. Tutorials are better. But best is a collection of reference implementations—Glitch.com calls these “starters”—that let developers begin tinkering with a system that’s already complete and working, even if its scope is limited or simplistic.

    Tinkering your way into a proof-of-concept is fairly cheap from there, and when the artifacts are good, pretty fun as well. This is how you can convince a developer that your solution has the tangible qualities they need to invest the considerable labor of learning and integrating your stuff into both their project and their longer term professional destiny.

    Remember, too, that your competition isn’t just other developer tools. It’s developers avoiding your tool by solving the problem the hard way, because short-term, at least they are feeling traction on their progress.

    So as we build developer tools, we have to think about:

    • Who will be made more successful when this ships?
    • How will we make the case to them they’ll win with us?
    • How will we demonstrate the superiority of our approach?
    • How can we limit the costs of initial exploration and proof of concept?
    • How can we maximize the impact of the investments developers make to integrate us into their workflows, products and future outcomes?

    Hey, I never promised it would be easy. But it is worth it. There’s nothing more powerful than a legion of grateful developers whose rent checks clear because once upon a time, you made them successful.