Redeem Tomorrow

I used to be excited for the future.

Let's bring that back.

The invisible skill at the heart of every technologist

There’s a skill that’s core to success at all levels of software development, but it hides like dark matter: inferred rather than observed, lurking beyond description and discussion. On a gut level, we know it’s necessary to the work, but we don’t often know how to teach it. In the perception of the average workplace, you’ve either got it or you don’t.

Let’s call this skill investigative reasoning. Or, more simply: “working the problem,” as Apollo legend Gene Kranz might frame it.

(Let me know if MIT Press has a $70 textbook explaining this, situating it in computing or engineering, and giving it a name.)

This pile of skills manifests as a reflex to make sense of an issue or failure, locate its causes, and design a fix. Without these abilities, you’ll be helpless in the stew of complex abstractions that define modern computing.

Consider a common software development workflow:

  • You write some code
  • It’s compiled, bundled, or otherwise transformed
  • The transformed code is transferred to a host device
  • The host runs the code

Failures can emerge at or after any of these steps. The code may have errors which prevent compilation. The code may have errors despite compiling fine, and they manifest only at runtime, during certain circumstances. There may be a failure in the systems that package and transfer the code, preventing it from running.

The experienced practitioner understands their job is to analyze the problem, develop a hypothesis for its source within the many layers of technology they’re using, and validate the hypothesis by checking logs, adding print statements, starting an interactive debugging session, or using other means of peering inside of the machine. Working the problem may require research: finding online discussions of similar issues, or consulting documentation.
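That loop of hypothesize, instrument, confirm can be sketched in a few lines. The function and its bug below are invented for illustration; the technique, not the code, is the point:

```python
import logging

# A toy sketch of hypothesis-driven debugging. The "checkout" logger
# and the discount bug are hypothetical examples.
logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("checkout")

def apply_discount(total, percent):
    # Hypothesis: totals go negative whenever percent exceeds 100.
    # Instrument the inputs and output before touching any logic.
    log.debug("apply_discount(total=%s, percent=%s)", total, percent)
    result = total - (total * percent / 100)
    log.debug("result=%s", result)
    return result

# Validate the hypothesis with a concrete input instead of guessing.
print(apply_discount(50, 120))  # prints -10.0, confirming the hypothesis
```

The print statements and log lines here are the simplest form of "peering inside the machine"; an interactive debugger serves the same purpose with more firepower.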

But core to this activity for both the novice and the expert is a simple conviction: problems can be understood and investigation can yield solutions.

At any level, hobbyist to advanced professional, confidence in this premise is the indispensable requirement. Without the confidence to work a problem, it’s unlikely anyone can make progress in building and integrating technical systems.

Domain experience helps to work a problem. Every class of system, from microcontrollers to native applications to web apps, has its own set of quirks, its own unique abstractions and design decisions where trouble may sneak in. But working the problem is a skill that transcends domains. Fixing a broken software development toolchain can prepare you for troubleshooting the boiler in your basement.

The paranoia in hiring for software roles, I’m certain, is rooted in the fear that the organization will hire someone who is unable to work a problem, outsourcing their investigative reasoning tasks to others, depleting productivity per unit of headcount budget.

Working the problem is also a feature of seniority. Translating business requirements into software development tasks is an expression of this skill, as is understanding how technical tradeoffs (debt) in existing systems will impact the needs of new features. The more senior a practitioner is, the more likely they are to feel a sense of confidence working a problem no matter how many layers stack up—even when those layers are “non-technical,” derived from business or customer needs.

We laugh at the fixation on languages or frameworks in job postings, but even the most thoughtful organizations struggle to identify this set of activities as a job requirement, much less map their outcomes within the team’s problem space.

Not every person is the ideal practitioner to work a given problem. The startup costs in research, learning new tools, or picking up domain-specific expertise, may outstrip any return on investment. Still, I have to think that applying a more conscious growth mindset around this would give our industry better results in recruiting, professional development and retention than we have today.

How can we elevate our understanding of these skills? How can we instill the confidence needed to work a given problem? How can we better train people to be able investigators?

I think there’s a lot of missed opportunity here.

The Economist, Apr 23, 2033: America to further cannibalise itself with new Labour Reserve Board targets

I have a guy who occasionally hooks me up with exotic data: content from alternative universes, evidence of paths not taken, and even the occasional dispatch from the future. This article will appear in a 2033 issue of The Economist—unless we change our course.


America’s newest fiscal bureaucracy isn’t mincing words: they’re falling behind. Setting a 2034 target for 500,000 additional bodies in the labour force is the latest admission that the world’s third-largest economy is failing to keep pace. As Washington grapples with a fertility crisis, an aging population, and ongoing civil unrest, restoring consistent GDP growth remains an elusive goal.

Many of these wounds are self-inflicted. Since before the 2023–2024 global financial crises, America’s immigration policy has remained inflexible, even as it struggles to recruit everyone from microchip designers to bricklayers. Meanwhile the once-leading economy stubbornly refuses to fund common sense policies that would make it easier for the remaining population of non-working parents to join the labour pool, like subsidies for child care or after-school education.

The only remaining source of policy innovation within the American system is criminalisation, which Labour Reserve Chair Liz Cheney suggests will provide the lion’s share of new workers. Under American law, prisoners can be compelled to work according to dictates of the state, and this has long been exploited by American businesses eager for cheap workers. Since the financial crises, this approach has taken on new scope, through a programme of so-called “community incarceration.” Under this scheme, convicted criminals serve out their terms in housing of their choice but provide up to sixty hours a week of compulsory labour for everything from fast food and retail to computer network deployment. State governments dole these workers out at a fraction of America's minimum wage, set to increase next year to $11.23.

This approach has obvious negative consequences for America’s long term outlook. Community incarceration may close immediate gaps in labour needs, but it doesn’t create new labourers from thin air. Democrats, who oppose the system, argue that it diverts disaffected youth from more productive directions, like higher education or work in the trades. Amnesty International, a global human rights group, suggests there is merit to this argument: in a new study they find that young people aged 15–25 are disproportionately represented in community incarceration, and they are charged overwhelmingly for crimes related to political demonstrations, which have become both commonplace and increasingly violent in the last decade. This in particular promises a negative feedback loop, as convicted criminals in the country permanently lose their right to vote, further alienating them from the legitimate political process.

By creating an underclass of young, forced labourers, America is ravenously consuming its seed stock. Far from stabilising their economic fortunes, the once-mighty power is setting itself up for yet another ride over a cliff. With its youngest workers cut off from both political engagement and entrepreneurship, America’s cultural and innovation economies will remain stalled, eclipsed by energetic new entrants as far flung as India, the Czech Republic, and Nigeria. Whether it meets its new 2034 labour targets or not, the former leader of the world economy seems committed to a path of economic hospice care and little else.

Developer experience: the basics

Periodically, tech Twitter swells with a discourse I find to be weirdly unproductive: people arguing about developer experience. Does it even exist? Does it matter?

I’m not going to recap the various head-scratchers in detail. But I do want to talk about what developer experience is, why companies invest in it, and what we all get for their trouble.

Spoilers: it's a problem of labor and software economics. A positive developer experience amplifies the return of effort invested building new software products.

The experience of getting things done

Put simply, developer experience is the sum of events that exist between identifying a requirement for a piece of software, and delivering code that satisfies it. Broadly, these events may be practical, emotional or social in nature.

Examples:

  • Referencing documentation to plan an integration of a third party service
  • Trying to install tools or libraries necessary to develop against a particular software framework
  • Getting frustrated because of an unfamiliar or undocumented design pattern
  • Successfully receiving help in a support forum when you get stuck

The practice of developer experience, of being deliberate in its design, is to identify the places of greatest leverage for clearing paths and relieving burdens. The objective is to improve adoption of a technology by making it easier to accomplish personal and business goals with it.

Developer tools: pickaxes in a gold rush

Selling developer tools is a straightforward business strategy. Rather than the risky bet of serving a broad consumer or cultural need, you can build technology that helps anyone who is building software be more successful.

Instead of trying to sell to hundreds of millions of users, you sell to a comparatively smaller handful of companies, typically by hooking their individual developers. If one of these customers succeeds, you succeed along with them, scaling your billings according to the growth of their business.

It sounds great, but there’s always a catch. In this case, your customers become a handful of technologists with a broad spectrum of experience levels and highly specialized needs. Your business success then rests upon a premise that’s easy to explain but harder to execute: making people more prosperous and effective because of your tools.

Leverage within the developer experience domain

How can you make people more successful and effective in accomplishing their goals? If we think of developer experience as the sum of all events between defining requirements and delivering them, we can identify some broadly recurring points of leverage.

Ergonomics and abstractions

Where the fingertips meet the keys, how does it feel to work with your tools? Does integration require painful, recurring boilerplate code, or can developers easily drop in your tools to solve a problem and keep moving to their unique implementation?

Is it easy to debug and inspect the state of your tools at runtime? When errors are thrown by your tools, is log output clear and descriptive, allowing further investigation and social troubleshooting on forums or Stack Overflow?

What is the everyday texture of life with your tool?

Tools that feel good to use obviously earn more loyalty, enthusiasm and word of mouth than tools that grate and frustrate.
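As a contrived sketch of what "drop in and keep moving" means in practice, compare a hypothetical client API that demands five decisions at every call site with one that defaults the common case. Both classes are invented for illustration:

```python
# Hypothetical 'metrics' client, invented to illustrate ergonomics.

# High-friction design: every caller repeats the same setup decisions.
class RawClient:
    def __init__(self, host, port, timeout, retries, encoder):
        self.host, self.port = host, port
        self.timeout, self.retries, self.encoder = timeout, retries, encoder

    def send(self, name, value):
        # Encode one metric as a wire string (toy format).
        return f"{self.encoder}:{name}={value}"

# Ergonomic design: defaults cover the common case, overrides stay possible.
class Client(RawClient):
    def __init__(self, host, port=8125, timeout=5, retries=3, encoder="plain"):
        super().__init__(host, port, timeout, retries, encoder)

# One line to get moving, instead of five decisions per call site.
client = Client("localhost")
print(client.send("signups", 42))  # plain:signups=42
```

The behavior is identical; the difference is how much ceremony stands between a developer and their first working call.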

Documentation, reference and education

How do people learn to use your tool? Do you provide clear documentation? Recipes? Tutorials?

What references exist for troubleshooting, debugging and discovery of features within your tool?

Thorough reference material makes it easier for developers to get the most out of everything you’ve built.

Community and ecosystem

Is there an active community experimenting and sharing their experiences with your tool? Is there a reliable, active, healthy venue where someone who is running into trouble can get help?

Is an eager community filling in gaps with their own tutorials, plugins, libraries and ergonomic improvements?

It’s easier to roll the dice on a new tool when you know that, should you need help, you’ll find a community that has your back.

Developer experience: the business cases

From these levers, we can identify the business cases for developer experience. In the context of the adopters of developer tools—whether individuals or teams—the question is whether a tool enhances their ability to be successful.

In a software production context, developer labor budget is among the costliest resources a business has to manage. Any tool that allows a business to get more for that budget is creating serious impact.

For purveyors of developer tools, the business case becomes clear as well. Success depends on adoption. You can improve adoption by addressing points of friction in existing developer workflows, and by making your tool’s experience more positive and productive than frustrating.

This doesn’t have to be hard

When we’re talking about developer experience, we’re talking about real things:

  • How people feel when using tools and making software
  • How effective they are in meeting their goals
  • How these factors converge into amplified productivity versus wasted effort
  • What leverage exists in your strategy to shift that balance more and more toward success for individual practitioners and businesses that might pay for your service

Developer experience is a process of shifting the economics of building software to be more favorable for every dollar or hour invested. There's a lot going on there, but conceptually, this doesn't have to be that hard.

reMarkable 2: a magical disappointment

I’m obsessed with artifacts of the future.

Always have been. I didn’t save money as a kid. I blew every dollar I earned on electronics and computing devices. It’s a compulsion I've only slightly tamed in adulthood.

When I discovered reMarkable, it struck the futuristic note that captures my imagination. This is a digital notebook, storing as many pages of as many collections of notes as you can come up with.

It’s a frustratingly mediocre product.

But I love it anyway.

ePaper and the valley of mediocrity

Consumer devices have to climb out of mediocrity.

That journey can be painful, but what’s interesting is how much of it outsiders get to witness directly. When the iPad arrived, I had two competing thoughts:

  • This thing is terrible. It’s heavy! It’s cold! I cannot stand how it feels to hold.
  • This is incredible. It’s a Star Trek-level touch screen.

The device’s tactile failures lost out to the power of its screen. I could live with the discomfort for the luxurious scale of all that multitouch. But I wouldn’t have to grit my teeth for long.

When the 2011 rev of the iPad dropped, everything terrible about it had been erased. It was thinner, it was lighter. It felt great to hold. You could snuggle up with it gladly on the couch or take a far-flung sweetheart to your pillow for a goodnight FaceTime call.

It was a comfortable companion, escaping mediocrity in the short span of just 11 months.

ePaper, meanwhile, remains stuck in the mud with no end in sight. There’s something greasy and disposable about any ePaper experience. It feels like newsprint. As mediums go, they’re spiritually similar: their goal is to produce a high volume of content as efficiently as possible.

Using ePaper is accepting:

  • Ghosting of content
  • Slow, flickery redrawing
  • Periodic, Zamboni machine-like full-screen wipes to clean up

In exchange, though, you get a pleasant experience for consuming large amounts of information. ePaper feels like reading off of paper, hence the name. It’s about pigment that reflects existing light, not diodes blasting fresh light at you. Many find this soothing and restful by comparison to the typical high-speed screen. I certainly do.

reMarkable uses ePaper to offer a satisfying drawing and writing experience. When you’re in the zone, stylus on the screen, there’s nothing mediocre to its ePaper implementation. You see instant results, like pencil on paper.

It’s a joy to capture as much writing as I want without worrying about which notebooks I brought or how many pages they have left.

The rest of the time, though, the device feels constrained by the display technology. Even flipping pages has the barest delay, and these build up like sandpaper scuffs. Menus feel slow and flickery. I do my best to get in and out of the user interface contrivances as quickly as possible. They’re not pleasant enough to linger in, and that’s too bad.

Despite its imperfections, and its endless winter of mediocrity, ePaper is indispensable stuff. It’s more than good enough as part of a trade for bottomless, infinite notebooks.

Where things get heartbreaking is the software.

It's just too bad

I’m going to focus on two pieces that really grate. They’re a missed opportunity for users and the platform alike.

‘Note: Custom templates aren’t supported on reMarkable 1 or 2.’

You have to be fucking kidding me.

We have the ability to create infinite notebooks to any structure or specification and your story for user customization is “well, maybe you could fuck off.”

That’s really sad.

The neat thing with reMarkable notebooks is that you can combine as many templates as you want, in any order you want. It’s powerful. The platform provides quite a few templates, spanning everything from graph paper to music notation to chemistry.

The template library is a great starting point, but not enough.

Still, a few dozen templates can’t come close to the full breadth of things that humans need. The creative magic of this age, where we can find it, is the ability to both tailor digital artifacts perfectly to our needs and share them freely for anyone else who might benefit. In the age of open source, reMarkable’s design is stuck in read-only mode.

Of course I bought it anyway, and that’s because there’s a workaround. Due to dependence on GPL code, reMarkable is obliged to provide root access to its embedded Linux operating system. From here, the savvy tinkerer can drop in new template content if they’re willing to navigate:

  • SFTP file transfers
  • Multiple file formats (PNG and SVG)
  • Tinkering with a JSON file
  • Stopping and restarting the process that draws the UI, via ssh session

It’s crummy, high-friction stuff. This should be a first-class feature. It multiplies the use cases and constituencies that could find a serious groove on the platform.
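To give a flavor of the JSON tinkering step, here’s a sketch that generates one entry for the device’s template index. The field names follow community write-ups of reMarkable’s template format; treat them as assumptions that may shift between firmware versions:

```python
import json

# Sketch: build one stanza for the reMarkable's template index JSON.
# Field names ("name", "filename", "iconCode", "categories") are based
# on community documentation, not a stable, vendor-supported API.
def template_entry(name, filename, categories):
    return {
        "name": name,
        "filename": filename,    # matches the PNG/SVG pair copied over SFTP
        "iconCode": "\ue9fe",    # glyph shown in the template picker
        "categories": categories,
    }

entry = template_entry("Weekly planner", "P Weekly", ["Life/organize"])
print(json.dumps(entry, indent=2))
```

Even with a helper like this, you still have to transfer the files, splice the entry into the index by hand, and restart the UI process over ssh, which is exactly the friction a first-class feature would erase.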

reMarkable's PDF editing tools.

reMarkable is wide open in its file handling. You can drop in, say, a PDF you put together with a bunch of custom-designed planner pages. But the authoring experience there doesn’t compare. To reMarkable’s credit, it’s possible to cut apart, reorder, and duplicate pages in a PDF. Still, the workflows to accomplish this aren’t nearly as low-friction as what you get in reMarkable’s native notebook metaphor.

It’s up to the vendor to solve this one, and they’re asleep on the job.

Ugh, the sync is bad

Look, sync is hard.

When it’s time to implement sync, I’m out getting a sandwich. Call someone else. I’m not saying that I think this is easy to get right. I am saying that if you’re going to bill $8 every month for this single device to sync content, it needs to be as near to perfect as computer science allows.

reMarkable is not there.

In a perfect world, my notes would be safely in the cloud the moment I pick up my stylus from the screen. We don’t live in that world. reMarkable only seems to update its cloud state when I explicitly close a notebook. So if I push away from the desk to do something else, leaving the reMarkable abandoned, nothing will be available for review off-device until I manually wake the tablet and close the notebook out.

Just sad stuff.

Yet I love it anyway

reMarkable finds itself in an awkward position.

It is a mediocre implementation of a wonderful category: the digital, paper-like, bottomless notebook. I love this category. Writing by hand can really help my focus, but I depend on so many digital workflows. Being able to combine the two, without distractions, has a lot of power. reMarkable seizes on this magic, emphasizing that it is

The only tablet that helps you focus

This focus positioning—acknowledging the harried, distracted life that must attend anyone comfortable spending $300 on a digital notebook—appears on the website and again on the collateral in the package. It’s a savvy tack to follow: the subjective experience of this combined technology is what creates enthusiasm. It’s more than the sum of its parts.

Even those parts that are mediocre.

But reMarkable must be careful. In ages past, the early captors of magic thought themselves invincible. BlackBerry—with its expressive, high-bandwidth QWERTY text input and realtime messaging—is dust today. A much less mediocre expression of that category arrived in the form of the touchscreen smartphone. Apple, Google, Samsung and countless others dissolved a giant that defined a category.

In some ways, reMarkable feels more niche than smartphones. Then again, what could be niche about writing and drawing things by hand? It’s a problem everyone has. I can think of so many contexts for learning, studying, creativity and collaboration where a writing tablet like this would be less distracting and disruptive than a laptop or touchscreen device. Someone is going to try to sell this to more people.

For me and my uses, reMarkable 2 is worth every penny. But one of these days someone is going to build a much better take on this category, and I really can’t wait to trade up.

I hope I can give my money to reMarkable again when that time comes—they’ve built something charming here, despite its flaws.

Life in the time of dragons

Protocols rule everything around us. In the digital age, that word has a specific meaning: you might imagine the protocols that govern WiFi, or your web browser.

Computer protocols enable interoperation, while facilitating common tasks. They specify how data should be encoded, how to exchange it, how to signal errors. Protocols are specific, running as deep as necessary, even down to the physics of radios and radiation.

At the boundary between every system, protocols negotiate coherence and consistency. A network is a series of successful protocol interactions, cascading in waves.
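To make that concrete, here’s a toy protocol in Python, invented purely for illustration: a four-byte length prefix lets two parties agree on exactly where each message ends, so a stream of bytes becomes a sequence of coherent messages:

```python
import struct

# Toy wire protocol: each message is a 4-byte big-endian length,
# followed by that many bytes of payload. Invented for illustration.
def encode(payload: bytes) -> bytes:
    return struct.pack(">I", len(payload)) + payload

def decode(stream: bytes):
    """Yield each payload from a stream of concatenated messages."""
    offset = 0
    while offset < len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        yield stream[offset:offset + length]
        offset += length

wire = encode(b"hello") + encode(b"world")
print(list(decode(wire)))  # [b'hello', b'world']
```

Both sides follow the same small agreement, and coherence falls out of it; that is the whole trick, repeated at every layer of the stack.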

From these protocols interacting, opportunity emerges. We can unite people who are far apart. We can transport vast quantities of media, on demand, to anyone who wants them. We can bank, we can shop.

This scope of opportunity is valuable. One of today’s dominant social castes is a group who can leverage these protocols to generate wealth and influence.

They know the most dangerous class of protocol there is:

An apparatus and process for turning money into more money.

A business is a protocol

If you give me two dollars and I can use that to make a hundred dollars, I’m rich. I can easily find someone willing to provide two dollars in exchange for five or ten the next day, which I can afford to share.

These are only the beginning of the economics that are possible in the age of globally networked computers on every desk, in every pocket.

So today it is possible to build a protocol:

  • That acquires resources, like investor or customer money
  • Applies those resources to labor, communications, and infrastructure
  • Thereby generating wealth in excess of the cost necessary to run the process again

Airbnb is a protocol for maintaining and arbitraging a short term real estate rental brokerage network. Uber is a protocol for arbitrage of a labor and vehicle network. Facebook arbitrages their graph of human connection, selling interactions to advertisers.

These protocols are everywhere, and one thing that’s scary is that they don’t have great tools for caring about the human impact of their execution.

But we haven’t gotten to the runtime yet

The runtime that executes a business protocol is flaky. Its exception handling is dogshit in many cases.

A business protocol runs entirely on humans, even as it exists almost entirely in the space between them. Businesses want to maximize shareholder value at all costs, so any given human platform executing the protocol is expendable.

Even the originator of the protocol.

That is to say: once the protocol is alive, anyone can be thrown under the bus to keep the money flowing—including, once there are shareholders, the founders of the business. There are endless games the savvy founder can play with the underlying legal structure to protect themselves at that point, but in our current system they still owe fealty to the money pump the moment other people own a piece of the pie.

Runtimes all the way down

Any given business runtime is hosted within a legal runtime. In the United States, there are enforcement mechanisms for shareholder maximization. Everything must be done in the service of profit, or the protocol is viewed to be in a failure mode.

Everyone in serious power at a major business has a moral implant on their person at all times. “Fiduciary duty” obliges them to act in support of profit or face the violence of legal and financial repercussions. Fiduciaries are trusted with the highest privileges in executing the protocol, but they pay with responsibility to it.

In effect, the dark art of creating commercial protocols has a protection racket. Immediate enforcers at the top of the org chart, with bosses across the civic landscape enforcing everything from accounting conventions to securities law.

Where this gets scary is that these protocols don’t care what they metabolize, and their runtime is disincentivized from caring on their behalf. So, if it pays well, Facebook converts advertiser dollars into hate speech, radicalization, pogroms, disinformation.

“It’s nothing personal. We just have to feed this dragon. It’s, uh, our fiduciary duty.”

Here be dragons

Dragons and their various warlords roam our economic countryside. They… take what they want. We’re not entirely powerless against them, but it’s still a lot of work to live with their predations, and we need more recourse than we have.

This is not the first age to live with the transformational impact of a successful business protocol. A century ago, Henry Ford and his contemporaries discovered powerful strategies, like assembly lines and dealer networks, to change the face of whole continents. An entire chapter of modern industrialization is dominated by the automotive industry, and its processes, proliferating to saturate the planet.

Serious human impact, whether we liked it or not.

Computer networks are a transportation of a different kind, but no less impactful than automobiles and road networks. They make it cheaper than ever to bridge the gulf between a business and the customers whose money that business wants to collect.

Within a substrate of entangled legal, social and economic systems, it’s possible to build some truly gnarly war creatures. As long as profit is the only success condition vigorously protected at scale in this social order, we're creating long-term selection pressure for massive economic organisms that don't care if they eat our social fabric or ecology.

That's an exhausting and dangerous future.

Wordle: A factional skirmish for the soul of technology

So check out today's Twitter beef.

You know all those emoji squares that have been popping up everywhere? That's Wordle. It's a bracingly earnest word puzzle web app deliberately built to a non-commercial ethos.

Wordle is a darling of the press because of its artisanal, small-batch sensibility. No compulsion loop—you get one puzzle per day. The score-sharing tweets are informational, instead of promotional. The tweet isn't about getting your friends into a conversion funnel.

It's about showing off your adventure and prowess.

Cue the stormclouds: a guy came in, saw the cultural fervor for this, and decided to build a paid, native application, using the same name and design. Taking people for $30 a year if they keep the subscription, this Pirate Wordle started printing money.

And its developer decided to tell Twitter about it, making him the day's Main Character:

Zach Shakked gushes about the game he cloned.

A schism in tech

The Wordle beef happens at a particular cultural fault line. Information technology has politics of all kinds, but one of the most strident is described on a spectrum:

Technology used for:

joy and wonder...sacks of money

Software automation is incredible. It offers leverage unlike anything we have ever seen. You can be worth billions of dollars because you build something that solves broad-scale problems for just $0.003 a user.

Software is all margin.

Which attracts the money fetishist.

Money fetishist?

Money's only something you need in case you don't die tomorrow.

Martin Sheen said this in Wall Street, and it fucked me up for life. You can't unsee it.

This is absolutely an age where having money has become synonymous with safety and security. I can't fault anyone for trying to be okay, nor being strategic about it.

But some of us are building an entire identity around being the sort of person who has money, can get money, is the ultimate money chessmaster.

They'll go through any trial proudly for access to a capitalism lottery ticket.

So, for the money fetishist, software is an irresistible lure. Software is a lottery ticket dispenser. There are scratcher tickets, like building indie software. Those are occasionally gushers of a win.

But the scale goes all the way up to lottery tickets in the shape of a term sheet for massive investment to build a company. Assemble the right team, seize the right market, and you're a billionaire.

For this group, joy in computing is not always central to their goals. Mostly their participation in information technology is a cold calculation: "how many disposable robots can we build to seize a market?"

Computing's true believers

To the other end of the spectrum, this posture is off-putting, even revolting. For better or worse, the opposing faction truly believes in the power and wonder of computing, for its own sake. Money is a second order concern to pursuing the magic of making sand have dreams.

@computerfact: computers think using etchings in poisoned sand and measure time using vibrating crystals so if you were looking for magic you found it

It wasn't always obvious—either individually or societally—that the computer was a money printer. For some folks, the computer was simple fascination. An all new frontier, defined by different rules than our everyday existence.

Everything that makes computers good at money also makes them interesting.

For many, huge chunks of a lifetime have been dedicated to exploring the power of a digital realm. Building up skills, knowledge and imagination for a playing field that's not always intuitive, but so often rewarding as you develop mastery.

This is also a path to lottery tickets. Sometimes exploring the frontier leads you to a gold mine. But on this side of the spectrum, you've got people eager to explore computing as a creative endeavor first, grabbing what money they can in case they don't die tomorrow.

The beef

Wordle exists at the maximal edge of this non-commercial ethos. It's earnest in its humane approach. Instead of an engagement treadmill, Wordle is a limited, daily treat. Rather than promoting Wordle, the tweets for announcing a score simply describe the player's adventure for the day as a score with some emoji. No URL.

Wordle score panel: 207 5/6

In the context of Wordle's cultural froth—all these articles, all these score tweets—developer Zach Shakked saw an opportunity: take Wordle's name, concept, and design, then strap a yearly subscription price onto it. Where the original Wordle was written for the web, this clone was built as a native iOS app, the better to capitalize on Apple's built-in payment system.

My bias here: I think that move is tacky as hell, and quite possibly legally actionable. You can draw your own conclusions.

Quite a few more took exception to this approach. Kottke sums up the prosecution's case succinctly:

This person stole Wordle (a game @powerlanguage invented), put it on the App Store, and is now crowing about how rich it's gonna make him. 🤬

What this beef can show us is a fault line in the culture of those who participate in technology. Some for fun, others for profit. This is a long-brewing conflict, and you can find the seeds of it going back generations.

This Wordle beef gives you a model for this conflict that's small and fast enough to dissect as it happens.

But it's not the only beef you'll find in Wordle town. Did you know what those scores are like for screen readers?

Building a native macOS configurator for Adafruit's Macropad

Adafruit’s Macropad is an absolute hardware delight: a cluster of 12 keys, with individual lighting, plus an OLED screen, rotary encoder and even a speaker! While external, programmable keypads are common enough, there’s something to this combination of features that makes the Macropad more than the sum of its parts.

Top shot of powered on, assembled Macropad keypad glowing rainbow colors.
Photo via Adafruit

It’s just a treat to hack on, and one of the rare technologies that makes me feel like a kid again, eager to experiment and try new things.

I want to walk you through a project I spun up recently, building a bit of macOS software to make my own Macropad easy to configure and tinker with:

I'll explain not just what I built, but how I approached the problem of integrating two very different systems speaking different languages. This stuff is essential to success in any sort of technology pursuit, but I always wish I saw more discussion of how we practitioners do it.

Here’s my contribution.

Problem: configuration and iteration

I’ve been using programmable keyboards for years. Essential to success, for me, is iteration. Finding the most useful, comfortable layout of keys is a matter of trial and error, so I want to be able to experiment quickly with minimal friction.

So step one was finding whatever code existed to let me customize the Macropad’s key layouts.

Here I was in luck. Adafruit provides a terrific starting point:

MACROPAD Hotkeys

What’s brilliant about this example code is that it fully utilizes the magic of the Macropad: turn its rotary encoder to move back and forth between pages of hotkeys. The page title and keymap labels are displayed on the OLED screen.

This seems simple and even obvious, but to me it’s a game changer because it makes an enormous breadth of keymaps viable. I don’t have to memorize key positions, so I can make pages upon pages of keys to solve whatever common problems I have.

There was just one wrinkle:

The configuration ergonomics.

Speaking to the Macropad

Macropad is built on an RP2040. This is a microcontroller: a tiny, specialized computer that you can code to solve a specific problem. Microcontrollers are amazing for hardware projects like the Macropad because they’re able to wrangle all kinds of electronic components—switches, lights, screens—and make them as easy to program as any software application on your computer.

Microcontrollers are hidden everywhere in your digital existence, but hobbyist platforms like RP2040, Arduino and others come with an ecosystem for conveniently programming and troubleshooting them without any specialized hardware. Plug them in via USB and you’re off to the races.

The RP2040 supports code written in two languages: C and Python. Adafruit’s example code used Python, so I followed their lead.

Most of this code was exactly what I wanted. Crucially, they'd solved all the problems of acting like a USB keyboard for me and picked out all the libraries I'd need. That's real research and experimentation effort saved.

But in terms of iterating on my keymaps, it didn’t quite hit the spot. Here’s how you define keymaps in their example project:

app = {                    # REQUIRED dict, must be named 'app'
    'name' : 'Mac Safari', # Application name
    'macros' : [           # List of button macros...
        # COLOR    LABEL    KEY SEQUENCE
        # 1st row ----------
        (0x004000, '< Back', [Keycode.COMMAND, '[']),
        (0x004000, 'Fwd >', [Keycode.COMMAND, ']']),
        (0x400000, 'Up', [Keycode.SHIFT, ' ']),      # Scroll up
        [...]

To tell the Macropad what to do for a given key, you provide an array of tuples, one per key, each specifying:

  • hexadecimal value for backlight color
  • label string
  • an array of key presses

Thus,

(0x004000, '< Back', [Keycode.COMMAND, '['])

would create a key labeled < Back, with a green backlight, sending the key combination Command-[.

Look, this works, but there’s a lot of cognitive overhead involved. You have to build up these tuples, do RGB hexadecimal math for the colors, and make a tidy Python dictionary.

Better than a stick in the eye, but I’ve been spoiled by decades of GUI configurators for my input devices.

Still, this project had 90% of what I needed. If I could find a way to ingest configuration content that could replace this fiddly Python dictionary approach, I could build an interface for programming the keymaps however I wanted.

So that’s where I went next.

How do you say ‘red’?

After digging through this example code, I wrote down all of the things I needed to communicate to the Macropad to build custom layouts.

Some stuff was easy: if you want to specify a label that shows up on a screen, you use a string.

Other stuff needed a little checking. The colors were specified using hexadecimal. In other languages, I often see RGB hex handled as a string, but in this Python code, it took the form of 0xXXXXXX.

I’ve seen this notation around since I began using computers, but I never needed to actually know what it meant. In this context, the hex was being treated as an integer, expressed in hexadecimal notation. A little fiddling quickly confirmed the Macropad treated these values just like the hex colors I knew from the web, so that meant I didn’t need to handle colors in a unique way, which was a good start.

My next question was: can I convert a string into a hex integer in Python? StackOverflow, naturally, had me covered:

int("0xa", 16)

Initialize an int with a hex string, specify that its base is 16, and Python will do the rest.

So that’s the colors sorted. I could handle them like hex strings, externally. Again, strings are easy.
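For illustration, here’s a minimal sketch of that conversion, using the green value from the earlier example keymap (the helper name is my own):

```python
# Convert an RGB hex string into the integer form the Macropad
# example code expects. int() with base 16 accepts strings with
# or without a "0x" prefix.
def color_from_hex_string(color: str) -> int:
    return int(color, 16)

print(color_from_hex_string("004000"))    # the green from the example keymap
print(color_from_hex_string("0x004000"))  # prefixed form, same value
```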

A thornier question remained: what about special keys, like option, command, etc? In the sample code they’re represented with some sort of Python variable, so I went on a hunt to find where these were defined.

Keycodes were represented like Keycode.COMMAND, and a comment in the configuration files offered this hint:

from adafruit_hid.keycode import Keycode # REQUIRED if using Keycode.* values

From that, I can infer:

  • Keycodes rely on an imported dependency
  • That dependency lives in a library called adafruit_hid

Sure enough, googling that led me to Adafruit’s GitHub project for a HID library. What’s HID? It stands for Human Interface Device, and it’s the USB standard that allows keyboards, mice and other peripherals to communicate on behalf of you and me.

From there it was a matter of digging around in the project structure until I found keycode.py, which contained a mapping of keycode names to the lower-level values the HID library relied on.

Spoiler: it was more hexadecimals:

    LEFT_CONTROL = 0xE0
    """Control modifier left of the spacebar"""
    CONTROL = LEFT_CONTROL
    """Alias for LEFT_CONTROL"""

Well, I’d just learned that hex values could travel easily as strings, so that’s not a big deal.

With these questions answered, I had the basics for translating values from outside the Macropad into input that it could properly interpret.

Next I’d need a medium to structure and carry these values.

Enter JSON

JSON is great: it’s become a universal standard, so every platform has a means of parsing it, including Python. It’s easy enough to inspect simple JSON with your own eyes, so that helps with troubleshooting and iterating.

I started by thinking through a structure for a JSON configuration payload:

Array: To contain pages of keymaps
	Dictionary: To define a page
		String: Page name
		Array: To contain keys
			Dictionary: To define a key
				String: Color (as hex)
				String: Label
				String: Macro

It would take a few iterations to reach a final form, but this early structure was enough to get things moving.
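To make that concrete, here’s a sketch of what a payload following this structure might look like, expressed as Python data and serialized with the standard json module (the field names are my illustration, not necessarily the final format):

```python
import json

# A hypothetical payload matching the structure above: an array of
# pages, each a dictionary with a name and an array of key definitions.
pages = [
    {
        "name": "Mac Safari",
        "keys": [
            {"color": "004000", "label": "< Back", "macro": "COMMAND+["},
            {"color": "004000", "label": "Fwd >",  "macro": "COMMAND+]"},
        ],
    }
]

print(json.dumps(pages, indent=2))
```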

Next I needed to learn enough Python to parse some JSON.

One of the most important habits I’ve picked up in a career of programming: isolate your experiments. Instead of trying to write and debug a JSON payload parser inside of the existing Hotkeys project, with the Macropad reporting errors, I used an isolated environment where I could get easy feedback about my bad code. Replit seems to provide the most robust, one-click, “let me just run some Python” experience online right now, so that’s where I ended up.

I badly wanted to figure out how to convert my JSON into proper Python objects, whose properties I could access with dot notation. This is how I’ve done JSON parsing in Swift for years, and it felt tidy and safe.

But after chasing my tail reading tutorials and Stack Overflow for a while, I gave up and accessed everything using strings as dictionary keys. It was good enough. Ergonomics in the parsing code weren’t essential. Once it was up and working, I wouldn’t need to fiddle with it very much.

import json

with open('macro.json') as f:
  pages = json.load(f)

for page in pages:

  keys = page['keys']

  imported_keys = []

  for key in keys:
    color_hex_string = "0x" + key['color']
    color_hex = int(color_hex_string, 16)
    macro = key['macro']
    [...]

It’s straightforward enough. The code loads a JSON file, iterates through the pages, pulls values out of the dictionaries it finds, and builds key-specific tuples, converting the hex code strings along the way. It then plugs everything into the App class provided by the sample code, which represents a page, and stores the results to an array.

Again, I don’t really know Python, so this took a bit of trial and error. Getting immediate feedback on syntax and errors from Replit’s environment helped me work through the bugs easily.

Once it all seemed to parse JSON properly, I dropped my code into the Hotkeys project, replacing the part that went looking for Python configuration values.

Macropad was happy, displaying a simple configuration I’d written in JSON by hand.

We were on our way.

A tool to create tools

With the basics of a working JSON structure, and a means for the Macropad to translate that JSON into keymaps, the table was set for real fun:

Building a user interface.

I’ve spent the last couple years writing a lot of SwiftUI code, and I was eager to try it out on macOS.

It’s surprising how straightforward it is to build multi-pane, hierarchical navigation in SwiftUI:

The old way, using AppKit and nib files, would have required loads more effort. But I had the basics of this up and working in a couple hours.

It’s not the most intuitive thing ever, but a quick google for swiftui macOS three column layout got me a terrific walkthrough that explained the process.

Next, I needed:

  • A data model to represent configurations, pages, keymaps, etc
  • A means of persisting those configurations so I could revise them later

For this, I turned to Core Data. It’s a polarizing technology. Many hate its GUI model editor, and it’s got plenty of complexity to manage. But as persistence strategies go, you can’t beat its out-of-the-box integration with SwiftUI. I’ve done enough time in the Core Data salt mines that I could quickly bang out the object graph I wanted and use it to generate JSON files. Best of all, the app could quietly store everything between sessions for easy iteration.

Modifiers were a challenge. In a perfect world, I could import keycode.py into Swift somehow and directly reference the values. My reading suggests this is possible, but I couldn’t sort out how. In the end, I used a spreadsheet to transform the Python code into Swift enum cases, making the hex values into strings:

enum AdafruitPythonHIDKeycode: String, CaseIterable, Identifiable {

    var id: String {
        return rawValue
    }
    
    case
    COMMAND = "0xE3",
    OPTION  = "0xE2",
    CONTROL  = "0xE0",
    SHIFT  = "0xE1",
    […]
}

As an enum, it was easy to iterate through all of the modifier keys and represent them in the UI. Storing them as strings made them easy to persist in Core Data. In theory I could convert them into plain integers, and send them around that way, but this seemed like less work to debug.

With navigation working and a data model specified, I went to work on the editor UI. My requirements were simple:

  • It had to visually represent the layout of the real thing
  • Editing had to be fast and easy

What I ended up with was a grid representing each key. Clicking a key let you edit its color (with a picker!), label text, and output. Thanks to the magic of SwiftUI, and a generous soul on StackOverflow, it was even easy to provide drag-and-drop reordering of the keys.

Clicking a button exported a JSON file, which could be saved directly to the Macropad—it shows up as a USB storage device on your computer.

Now I could bang out a new configuration in seconds, and what I saw in the editor was what I got on the Macropad.

“How’d you do that?”

In all, it was a weekend of effort to get this all rolling. At one point, showing it off, I was asked “How did you know how to do all of that?”

It’s a system integration problem! I’ve been doing that sort of thing for 20 years, so for a moment, I didn’t know where to start explaining. But to summarize the above, here’s how I approach this kind of challenge:

Find an existing, successful artifact

I need to start with something that already works. In this case, I found example code for the Macropad that was directionally aligned with my own goals. I’ve gotten so far in this game with example code. Code gives you pointers about how a system works, demonstrates its assumptions. It also gives you a starting point for your own approach.

No matter the system, in this age, you can usually find code that successfully interacts with it if you google hard enough.

Write down your unknowns

Once I’ve examined a working artifact, I usually end up with more questions than answers.

Hacking a project together is as much investigation as it is creation. I grab a pad of paper and keep track of my biggest unknowns to give that investigation its shape. After examining the example code, my biggest questions were:

  • How does it represent color?
  • How does it represent keycodes?
  • How do I parse JSON in Python?
  • How do I get feedback from the Macropad when things break?

If I found answers to these, I could build an external system that talked to the Macropad.

Learn how to communicate

The next step is asking “how does this system expect me to communicate my needs?”

Understanding the structure and format of data is essential. This step is equal parts research and experimentation. I need to find whatever documentation exists, even if it’s just source code, to understand the requirements and expectations of the system. Then, I need to try to create a working data structure of my own that the system will accept.

Build a bridge

Once you understand how to speak to a system, you need a means to reliably, repeatedly do so.

Here, I wrote a simple JSON parser to ingest and interpret external output into something Python could understand for one side of the bridge. On the other, I wrote Swift code that could generate JSON according to the expectations of that parser.
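As a sketch of the Python half of that bridge (the field names and the single-keycode macro format here are simplifications of my own, not the project’s exact scheme):

```python
import json

# Parse a JSON payload into the (color, label, key sequence) tuple
# shape the Hotkeys example expects. For simplicity, each macro here
# is a single keycode expressed as a hex string.
payload = """
[{"name": "Demo",
  "keys": [{"color": "004000", "label": "< Back", "macro": "0xE3"}]}]
"""

pages = []
for page in json.loads(payload):
    macros = []
    for key in page["keys"]:
        color = int(key["color"], 16)       # "004000" -> 0x004000
        sequence = [int(key["macro"], 16)]  # "0xE3" -> COMMAND keycode
        macros.append((color, key["label"], sequence))
    pages.append({"name": page["name"], "macros": macros})

print(pages[0]["macros"][0])  # (16384, '< Back', [227])
```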

Fuck around, find out

With your unknowns revealed, a communication strategy understood, and a bridge constructed, you’re ready to start playing around. Experiment with different approaches until you get the results you’re looking for.

Sometimes you’ll break things. Use version control to keep track of your experiments, so you can always roll back to something you know is working.

Adafruit recommends the simple but effective Mu Python editor, which provides a serial connection to the Macropad. When I broke things, I could get hints about what went wrong that way through console log messages sent to the serial monitor.

Don't let perfect be the enemy of done

Throughout this process, there were "better" ways to accomplish what I wanted. I wish I could have parsed JSON into native Python classes; I wish I could have imported the keycode.py content more transparently into Swift. In a well-behaved Mac app, you can double-click a list item to edit its name; I'm still not sure how to do that in SwiftUI.

It's easy to get bogged down in the perfect solution, but I try to prioritize progress over perfection. Do the ugly, hacky thing first, then keep moving. If it comes back to bite you later, you can rewrite it. Chances are you'll have a lot more context by then, so the deeper solution will be better informed than if you'd tried it from the jump.

What about the configurator?

Sure, here’s the code:

macOS Project

Macropad code, as adapted from the MACROPAD Hotkeys project

Stuff I might add in the future:

  • Properly signed binary builds of the configurator (right now you'll need Xcode)
  • JSON import
  • Multi-part macros with delays
  • Undo support
  • Direct user input capture of keystrokes

Maybe this will inspire you to write a web application that does the same things.


Hope this look inside the hack was interesting. Drop me a line with any questions you’ve got about how tech gets built. Much love to Adafruit, which makes the coolest electronics hobby stuff around. Projects like this Macropad remind me why I fell in love with technology:

  • Exploration
  • Experimentation
  • Magic

I have the life and career I have because once upon a time, I learned how much fun it was to make things that talked to each other.

Kevin Kelly: Spatial audio and future computing

I’m gonna bet that a large part of AR, XR, Mirrorworld and spatial computing, will be spatial audio. Not just stereo sound, not just 3d surround sound, but spatial sound. That means virtual sound that is very specific in its locations, like real sound is. If a cricket is cricketing, it is perceived as being in a very exact spot, which we hear, and our brain translates, as RIGHT THERE. When we turn our head, the sound remains where it is. Like in movies, spatial audio will be at least 50% of the immersive experience.

There's something to this.

Occasionally, playing a video game, I'll observe a bug in the sound effects subsystem. A missing engine noise during a cutscene in a car, for example, entirely breaks the illusion.

But when visuals and sound fit together, the credibility of a scene is enhanced.

Watching Foundation with AirPods and spatial audio was surprising in its impact. I felt like I was sitting in a high-end home theater, and the show was all the more immersive. If indeed Apple is quietly churning away at an AR experience, their publicly available spatial audio tech is yet another tip of their hand.