Redeem Tomorrow
I used to be excited about the future. Let's bring that back.


  • Wordle: A factional skirmish for the soul of technology

    So check out today’s Twitter beef.

    You know all those emoji squares that have been popping up everywhere? That’s Wordle. It’s a bracingly earnest word puzzle web app deliberately built to a non-commercial ethos.

    Wordle is a darling of the press because of its artisanal, small-batch sensibility. No compulsion loop—you get one puzzle per day. The score-sharing tweets are informational, instead of promotional. The tweet isn’t about getting your friends into a conversion funnel.

    It’s about showing off your adventure and prowess.

    Cue the stormclouds: a guy came in, saw the cultural fervor for this, and decided to build a paid, native application, using the same name and design. Taking people for $30 a year if they keep the subscription, this Pirate Wordle started printing money.

    And its developer decided to tell Twitter about it, making him the day’s Main Character:

    Zach Shakked gushes about the game he cloned.

    A schism in tech

    The Wordle beef happens at a particular cultural fault line. Information technology has politics of all kinds, but one of the most strident is described on a spectrum:

    Technology used for:
    
    joy and wonder...sacks of money

    Software automation is incredible. It offers leverage unlike anything we have ever seen. You can be worth billions of dollars because you build something that solves broad-scale problems for just $0.003 a user.

    Software is all margin.

    Which attracts the money fetishist.

    Money fetishist?

    Money’s only something you need in case you don’t die tomorrow.

    Martin Sheen said this in Wall Street, and it fucked me up for life. You can’t unsee it.

    This is absolutely an age where having money has become synonymous with safety and security. I can’t fault anyone for trying to be okay, nor for being strategic about it.

    But some of us are building an entire identity around being the sort of person who has money, can get money, is the ultimate money chessmaster.

    They’ll go through any trial proudly for access to a capitalism lottery ticket.

    So, for the money fetishist, software is an irresistible lure. Software is a lottery ticket dispenser. There are scratcher tickets, like building indie software. Those are occasionally gushers of a win.

    But the scale goes all the way up to lottery tickets in the shape of a term sheet for massive investment to build a company. Assemble the right team, seize the right market, and you’re a billionaire.

    For this group, joy in computing is not always central to their goals. Mostly their participation in information technology is a cold calculation: “how many disposable robots can we build to seize a market?”

    Computing’s true believers

    At the other end of the spectrum, this posture is off-putting, even revolting. For better or worse, the opposing faction truly believes in the power and wonder of computing, for its own sake. Money is a second order concern to pursuing the magic of making sand have dreams.

    @computerfact: computers think using etchings in poisoned sand and measure time using vibrating crystals so if you were looking for magic you found it

    It wasn’t always obvious—either individually or societally—that the computer was a money printer. For some folks, the computer was simple fascination. An all new frontier, defined by different rules than our everyday existence.

    Everything that makes computers good at money also makes them interesting.

    For many, huge chunks of a lifetime have been dedicated to exploring the power of a digital realm. Building up skills, knowledge and imagination for a playing field that’s not always intuitive, but so often rewarding as you develop mastery.

    This is also a path to lottery tickets. Sometimes exploring the frontier leads you to a gold mine. But on this side of the spectrum, you’ve got people eager to explore computing as a creative endeavor first, grabbing what money they can in case they don’t die tomorrow.

    The beef

    Wordle exists at the maximal edge of this non-commercial ethos. It’s earnest in its humane approach. Instead of an engagement treadmill, Wordle is a limited, daily treat. Rather than promoting Wordle, the tweets announcing a score simply describe the player’s adventure for the day as a score with some emoji. No URL.

    Wordle score panel: 207 5/6

    In the context of Wordle’s cultural froth—all these articles, all these score tweets—developer Zach Shakked saw an opportunity: take Wordle’s name, concept, and design, then strap a yearly subscription price onto it. Where the original Wordle was written for the web, this clone was built as a native iOS app, the better to capitalize on Apple’s built-in payment system.

    My bias here: I think that move is tacky as hell, and quite possibly legally actionable. You can draw your own conclusions.

    Quite a few more took exception to this approach. Kottke sums up the prosecution’s case succinctly:

    This person stole Wordle (a game @powerlanguage invented), put it on the App Store, and is now crowing about how rich it’s gonna make him. 🤬

    What this beef can show us is a fault line in the culture of those who participate in technology. Some for fun, others for profit. This is a long-brewing conflict, and you can find the seeds of it going back generations.

    This Wordle beef gives you a model for this conflict that’s small and fast enough to dissect as it happens.

    But it’s not the only beef you’ll find in Wordle town. Do you know what those scores are like for screen readers?


  • Building a native macOS configurator for Adafruit's Macropad

    Adafruit’s Macropad is an absolute hardware delight: a cluster of 12 keys, with individual lighting, plus an OLED screen, rotary encoder and even a speaker! While external, programmable keypads are common enough, there’s something to this combination of features that makes the Macropad more than the sum of its parts.

    Top shot of a powered-on, assembled Macropad glowing rainbow colors. (Photo via Adafruit)

    It’s just a treat to hack on, and one of the rare technologies that makes me feel like a kid again, eager to experiment and try new things.

    I want to walk you through a project I spun up recently, building a bit of macOS software to make my own Macropad easy to configure and tinker with: I’ll explain not just what I built, but how I approached solving the problems of integrating two very different systems speaking different languages. This stuff is essential to success in any sort of technology pursuit, but I always wish I saw more discussion of how we practitioners do it.

    Here’s my contribution.

    Problem: configuration and iteration

    I’ve been using programmable keyboards for years. Essential to success, for me, is iteration. Finding the most useful, comfortable layout of keys is a matter of trial and error, so I want to be able to experiment quickly with minimal friction.

    So step one was finding whatever code existed to let me customize the Macropad’s key layouts.

    Here I was in luck. Adafruit provides a terrific starting point:

    MACROPAD Hotkeys

    What’s brilliant about this example code is that it fully utilizes the magic of the Macropad: turn its rotary encoder to move back and forth between pages of hotkeys. The page title and keymap labels are displayed on the OLED screen. This seems simple and even obvious, but to me it’s a game changer because it makes an enormous breadth of keymaps viable. I don’t have to memorize key positions, so I can make pages upon pages of keys to solve whatever common problems I have.

    There was just one wrinkle:

    The configuration ergonomics.

    Speaking to the Macropad

    Macropad is built on an RP2040. This is a microcontroller: a tiny, specialized computer that you can code to solve a specific problem. Microcontrollers are amazing for hardware projects like the Macropad because they’re able to wrangle all kinds of electronic components—switches, lights, screens—and make them as easy to program as any software application on your computer.

    Microcontrollers are hidden everywhere in your digital existence, but hobbyist platforms like RP2040, Arduino and others come with an ecosystem for conveniently programming and troubleshooting them without any specialized hardware. Plug them in via USB and you’re off to the races.

    RP2040 supports code written two ways: C and Python (CircuitPython, in Adafruit’s ecosystem). Adafruit’s example code used Python, so I followed their lead.

    Most of this code was exactly what I wanted. Crucially, they’d solved all the problems of acting like a USB keyboard for me, and picked out all the libraries I’d need. This is real research and experimentation effort saved.

    But in terms of iterating on my keymaps, it didn’t quite hit the spot. Here’s how you define keymaps in their example project:

        app = {                    # REQUIRED dict, must be named 'app'
            'name' : 'Mac Safari', # Application name
            'macros' : [           # List of button macros...
                # COLOR    LABEL    KEY SEQUENCE
                # 1st row ----------
                (0x004000, '< Back', [Keycode.COMMAND, '[']),
                (0x004000, 'Fwd >', [Keycode.COMMAND, ']']),
                (0x400000, 'Up', [Keycode.SHIFT, ' ']),      # Scroll up
                [...]

    To tell the Macropad what to do for a given key, you provide an array of tuples, corresponding to each key, specifying:

    • hexadecimal value for backlight color
    • label string
    • an array of key presses

    Thus,

    (0x004000, '< Back', [Keycode.COMMAND, '['])

    would create a key labeled < Back, with a green backlight, sending the key combination Command-[.

    Look, this works, but there’s a lot of cognitive overhead involved. You have to build up these tuples, do RGB hexadecimal math for the colors, and make a tidy Python dictionary.

    Better than a stick in the eye, but I’ve been spoiled by decades of GUI configurators for my input devices.

    Still, this project had 90% of what I needed. If I could find a way to ingest configuration content that could replace this fiddly Python dictionary approach, I could build an interface for programming the keymaps however I wanted.

    So that’s where I went next.

    How do you say ‘red’?

    After digging through this example code, I wrote down all of the things I needed to communicate to the Macropad to build custom layouts.

    Some stuff was easy: if you want to specify a label that shows up on a screen, you use a string.

    Other stuff needed a little checking. The colors were specified using hexadecimal. In other languages, I often see RGB hex handled as a string, but in this Python code, it took the form of 0xXXXXXX.

    I’ve seen this around since I began using computers, but I never needed to actually know what it meant. In this context, the hex was being treated as an integer, expressed in hexadecimal notation. A little fiddling quickly confirmed the Macropad would treat it with the same behavior as the hex colors I knew from the web, so that meant I didn’t need to handle colors in a unique way, which was a good start.

    My next question was: can I convert a string into a hex integer in Python? StackOverflow, naturally, had me covered:

    int("0xa", 16)

    Initialize an int with a hex string, specify that its base is 16, and Python will do the rest.

    So that’s the colors sorted. I could handle them like hex strings, externally. Again, strings are easy.

    A thornier question remained: what about special keys, like option, command, etc? In the sample code they’re represented with some sort of Python variable, so I went on a hunt to find where these were defined.

    Keycodes were represented like Keycode.COMMAND, and a comment in the configuration files offered this hint:

    from adafruit_hid.keycode import Keycode # REQUIRED if using Keycode.* values

    From that, I can infer:

    • Keycodes rely on an imported dependency
    • That dependency lives in a library called adafruit_hid

    Sure enough, googling that led me to Adafruit’s GitHub project for a HID library. What’s HID? It stands for Human Interface Device, and it’s the USB standard that allows keyboards, mice and other peripherals to communicate on behalf of you and me.

    From there it was a matter of digging around in the project structure until I found keycode.py, which contained a mapping of keycode names to the lower-level values the HID library relied on.

    Spoiler: it was more hexadecimals:

            LEFT_CONTROL = 0xE0
            """Control modifier left of the spacebar"""
            CONTROL = LEFT_CONTROL
            """Alias for LEFT_CONTROL"""

    Well, I’d just learned that hex values could travel easily as strings, so that’s not a big deal.

    With these questions answered, I had the basics for translating values from outside the Macropad into input that it could properly interpret.

    Next I’d need a medium to structure and carry these values.

    Enter JSON

    JSON is great: it’s become a universal standard, so every platform has a means of parsing it, including Python. It’s easy enough to inspect simple JSON with your own eyes, so that helps with troubleshooting and iterating.

    I started by thinking through a structure for a JSON configuration payload:

        Array: To contain pages of keymaps
        	Dictionary: To define a page
        		String: Page name
        		Array: To contain keys
        			Dictionary: To define a key
        				String: Color (as hex)
        				String: Label
        				String: Macro

    It would require a few iterations to get this to a final form, but this early structure was enough to get things moving.
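    To make that concrete, here’s roughly what a payload in this early shape might look like. The field names follow the structure above and the parsing code below; the macro strings are purely illustrative, since their encoding isn’t shown here:

        [
            {
                "name": "Mac Safari",
                "keys": [
                    { "color": "004000", "label": "< Back", "macro": "0xE3,[" },
                    { "color": "004000", "label": "Fwd >",  "macro": "0xE3,]" }
                ]
            }
        ]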

    Next I needed to learn enough Python to parse some JSON.

    One of the most important habits I’ve picked up in a career of programming: isolate your experiments. Instead of trying to write and debug a JSON payload parser inside of the existing Hotkeys project, with the Macropad reporting errors, I used an isolated environment where I could get easy feedback about my bad code. Replit seems to provide the most robust, one-click, “let me just run some Python” experience online right now, so that’s where I ended up.

    I badly wanted to figure out how to convert my JSON into proper Python objects, whose properties I could access with dot notation. This is how I’ve done JSON parsing in Swift for years, and it felt tidy and safe.

    But after chasing my tail reading tutorials and Stack Overflow for a while, I gave up and accessed everything using strings as dictionary keys. It was good enough. Ergonomics in the parsing code weren’t essential. Once it was up and working, I wouldn’t need to fiddle with it very much.

        import json

        # Load the hand-written JSON configuration
        with open('macro.json') as f:
          pages = json.load(f)

        for page in pages:

          keys = page['keys']

          imported_keys = []

          for key in keys:
            # Colors travel as bare hex strings; restore the 0x prefix
            # and parse base-16 into the integer the Macropad expects
            color_hex_string = "0x" + key['color']
            color_hex = int(color_hex_string, 16)
            macro = key['macro']
            [...]

    It’s straightforward enough. The code loads a JSON file, iterates through the pages, pulls values out of the dictionaries it finds, converts the hex code strings, builds key-specific tuples, and plugs it all into the App class provided by the sample code, which represents a page. The resulting pages are stored in an array.

    Again, I don’t know any Python, so this took a bit of trial and error. Getting immediate feedback on syntax and errors from Replit’s environment helped me work through the bugs easily.

    Once it all seemed to parse JSON properly, I dropped my code into the Hotkeys project, replacing the part that went looking for Python configuration values.

    Macropad was happy, displaying a simple configuration I’d written in JSON by hand.

    We were on our way.

    A tool to create tools

    With the basics of a working JSON structure, and a means for the Macropad to translate that JSON into keymaps, the table was set for real fun:

    Building a user interface.

    I’ve spent the last couple years writing a lot of SwiftUI code, and I was eager to try it out on macOS.

    It’s surprising how straightforward it is to build multi-pane, hierarchical navigation in SwiftUI. The old way, using AppKit and nib files, would have required loads more effort. But I had the basics of this up and working in a couple of hours.

    It’s not the most intuitive thing ever, but a quick google for swiftui macOS three column layout got me a terrific walkthrough that explained the process.
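    For flavor, here’s a minimal sketch of the three-column idea, assuming the NavigationView API SwiftUI offered on macOS at the time (this predates NavigationSplitView). The view contents are hypothetical stand-ins, not my actual editor code:

        import SwiftUI

        struct ContentView: View {
            var body: some View {
                // Three children inside a NavigationView become
                // three columns on macOS
                NavigationView {
                    List { Text("My Macropad Config") } // sidebar: saved configurations
                    List { Text("Mac Safari") }         // middle pane: pages in a configuration
                    Text("Select a page to edit")       // detail pane: the key grid editor
                }
            }
        }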

    Next, I needed:

    • A data model to represent configurations, pages, keymaps, etc
    • A means of persisting those configurations so I could revise them later

    For this, I turned to Core Data. It’s a polarizing technology. Many hate its GUI model editor, and it’s got plenty of complexity to manage. But as persistence strategies go, you can’t beat its out-of-the-box integration with SwiftUI. I’ve done enough time in the Core Data salt mines that I could quickly bang out the object graph I wanted and use it to generate JSON files. Best of all, the app could quietly store everything between sessions for easy iteration.

    Modifiers were a challenge. In a perfect world, I could import keycode.py into Swift somehow and directly reference the values. My reading suggests this is possible, but I couldn’t sort out how. In the end, I used a spreadsheet to transform the Python code into Swift enum cases, making the hex values into strings:

        enum AdafruitPythonHIDKeycode: String, CaseIterable, Identifiable {

            var id: String {
                return rawValue
            }

            case
            COMMAND = "0xE3",
            OPTION  = "0xE2",
            CONTROL = "0xE0",
            SHIFT   = "0xE1",
            [...]
        }

    As an enum, it was easy to iterate through all of the modifier keys and represent them in the UI. Storing them as strings made them easy to persist in Core Data. In theory I could convert them into plain integers, and send them around that way, but this seemed like less work to debug.
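    As a sketch of that payoff (again, not my actual editor code, and the binding here is hypothetical), CaseIterable means a single ForEach can surface every modifier in a SwiftUI Picker:

        import SwiftUI

        struct ModifierPicker: View {
            @State private var selected: AdafruitPythonHIDKeycode = .COMMAND

            var body: some View {
                Picker("Modifier", selection: $selected) {
                    // allCases comes free with CaseIterable; the id
                    // comes from the Identifiable conformance above
                    ForEach(AdafruitPythonHIDKeycode.allCases) { keycode in
                        Text(String(describing: keycode)).tag(keycode)
                    }
                }
            }
        }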

    With navigation working and a data model specified, I went to work on the editor UI. My requirements were simple:

    • It had to visually represent the layout of the real thing
    • Editing had to be fast and easy

    What I ended up with was a grid representing each key. Clicking a key let you edit its color (with a picker!), label text, and output. Thanks to the magic of SwiftUI, and a generous soul on StackOverflow, it was even easy to provide drag-and-drop reordering of the keys. Clicking a button exported a JSON file, which could be saved directly to the Macropad—it shows up as a USB storage device on your computer. Now I could bang out a new configuration in seconds, and what I saw in the editor was what I got on the Macropad.
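    The export path is simple in spirit. Here’s a hedged sketch of the idea, not my actual code: Codable structs mirror the JSON keys the Python parser looks up, then the encoded bytes get written to the Macropad’s mounted volume. The type names are hypothetical; CircuitPython devices typically mount as CIRCUITPY:

        import Foundation

        // Mirrors the JSON shape the Python parser expects
        struct KeyExport: Codable {
            let color: String   // RGB hex string, no 0x prefix; Python restores it
            let label: String
            let macro: String   // key sequence, encoded for the parser
        }

        struct PageExport: Codable {
            let name: String
            let keys: [KeyExport]
        }

        func export(_ pages: [PageExport]) throws {
            let data = try JSONEncoder().encode(pages)
            // The Macropad shows up as USB mass storage on the Mac
            try data.write(to: URL(fileURLWithPath: "/Volumes/CIRCUITPY/macro.json"))
        }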

    “How’d you do that?”

    In all, it was a weekend of effort to get this all rolling. At one point, showing it off, I was asked “How did you know how to do all of that?”

    It’s a system integration problem! I’ve been doing that sort of thing for 20 years, so for a moment, I didn’t know where to start explaining. But to summarize the above, here’s how I approach this kind of challenge:

    Find an existing, successful artifact

    I need to start with something that already works. In this case, I found example code for the Macropad that was directionally aligned with my own goals. I’ve gotten this far in the game thanks to example code. Code gives you pointers about how a system works and demonstrates its assumptions. It also gives you a starting point for your own approach.

    No matter the system, in this age, you can usually find code that successfully interacts with it if you google hard enough.

    Write down your unknowns

    Once I’ve examined a working artifact, I usually end up with more questions than answers.

    Hacking a project together is as much investigation as it is creation. I grab a pad of paper and keep track of my biggest unknowns to give that investigation its shape. After examining the example code, my biggest questions were:

    • How does it represent color?
    • How does it represent keycodes?
    • How do I parse JSON in Python?
    • How do I get feedback from the Macropad when things break?

    If I found answers to these, I could build an external system that talked to the Macropad.

    Learn how to communicate

    The next step is asking “how does this system expect me to communicate my needs?”

    Understanding the structure and format of data is essential. This step is equal parts research and experimentation. I need to find whatever documentation exists, even if it’s just source code, to understand the requirements and expectations of the system. Then, I need to try to create a working data structure of my own that the system will accept.

    Build a bridge

    Once you understand how to speak to a system, you need a means to reliably, repeatedly do so.

    Here, I wrote a simple JSON parser to ingest and interpret external output into something Python could understand for one side of the bridge. On the other, I wrote Swift code that could generate JSON according to the expectations of that parser.

    Fuck around, find out

    With your unknowns revealed, a communication strategy understood, and a bridge constructed, you’re ready to start playing around. Experiment with different approaches until you get the results you’re looking for.

    Sometimes you’ll break things. Use version control to keep track of your experiments, so you can always roll back to something you know is working.

    Adafruit recommends the simple but effective Mu Python editor, which provides a serial connection to the Macropad. When I broke things, I could get hints about what went wrong that way through console log messages sent to the serial monitor.

    Don’t let perfect be the enemy of done

    Throughout this process, there were “better” ways to accomplish what I wanted. I wish I could have parsed JSON into native Python classes, I wish I could have imported the keycode.py content more transparently into Swift. In a well-behaved Mac app, you can double-click a list item to edit its name. Still not sure how to do it in SwiftUI.

    It’s easy to get bogged down in the perfect solution, but I try to prioritize progress over perfection. Do the ugly, hacky thing first, then keep moving. If it comes to bite you later, you can rewrite it. Probably you’ll have a lot more context later, so the deeper solution will be better informed than if you’d tried it from jump.

    What about the configurator?

    Sure, here’s the code:

    macOS Project

    Macropad code, as adapted from the MACROPAD Hotkeys project

    Stuff I might add in the future:

    • Properly signed binary builds of the configurator (right now you’ll need Xcode)
    • JSON import
    • Multi-part macros with delays
    • Undo support
    • Direct user input capture of keystrokes

    Maybe this will inspire you to write a web application that does the same things.


    Hope this look inside the hack was interesting. Drop me a line with any questions you’ve got about how tech gets built. Much love to Adafruit, which makes the coolest electronics hobby stuff around. Projects like this Macropad remind me why I fell in love with technology:

    • Exploration
    • Experimentation
    • Magic

    I have the life and career I have because once upon a time, I learned how much fun it was to make things that talked to each other.


  • Epic's ambitious plan for Unreal Engine and content-as-a-service

    Fascinating Twitter thread by game professional Mike Bithell about the strategy Epic is telegraphing with their recent Matrix Awakens tech demo for Unreal Engine:

    Epic is attempting to flip the economics of game production towards film production.

    In games, we build everything, so (and this is over simplification) game assets are a fixed cost. If I need 100 things, I need to pay for the production of one thing a hundred times. Efficiencies come in, but we’re still building stuff from scratch.

    An open world game? Only possible if you’re a mega AAA company, or you wanna stylize to the point of affordability. Movies don’t build a city, they find one to shoot in. They buy props and costumes. They hire actors rather than making westworld style puppets.

    Asset stores brought some of this prop shop and central casting mentality to games, but the problem is aesthetic consistency. If you buy 10 characters off the unity store and throw em in a game together, unless you did so very tastefully, it’s gonna look shit.

    With Epic offering a fully coherent city backdrop, including efficient rendering technology and consistent design across characters and architectures, Mike argues, it becomes cheaper than ever for game devs to tell stories in realistic settings. This introduces new tradeoffs:

    But it also increases the quality bar and expense of doing anything that’s not on the shelf. Alien characters? Metahuman with some forehead bumps added… 6 limbs is expensive and out of scope. Guns? Take a prop gun and stick something on it. 90s film solutions.

    I think it’s interesting to see Epic touting this just a year after Cyberpunk 2077, an ambitious platform play that nearly tanked its developer, CDPR.

    Sure, Cyberpunk is a game. It’s also an elaborate canvas of characters, settings, and content that they can use to sell a series of stories. Night City is enormous and immersive, and the initial story they shipped barely scratched the surface of all they built there. I suspect we’ll see plenty of paid DLC over the next few years leveraging that investment.

    CDPR was hoisting an enormous open world, full of systems that could be reused. The complexity of pulling something like this off is formidable, and in Cyberpunk’s case, led to a disappointing initial release full of technical issues, especially for players on older consoles.

    Epic, decades deep into the game engine business, understands most teams don’t have the capital or risk appetite to undertake that level of investment, but sees a similar opportunity to build that kind of canvas and rent it out.

    As Bithell says:

    Movies don’t build a city, they find one to shoot in. They buy props and costumes. They hire actors rather than making westworld style puppets.

    The software economics of games continue to evolve.

  • VR revolution: any day now

    Oculus Quest 2 is a decent device that costs a bit less than a nice iPad, and far less than a television and console.

    From a performance perspective, it’s surprisingly decent.

    I’m no fan of Facebook, but curiosity got the better of me. I grabbed a Quest 2 to find out if we were any closer to a big shift in culture, communication and commerce.

    I think it could happen any day now. But it’s not here yet.

    Once upon a time

    We’ve been holding our breath on VR since before I entered puberty. At malls and fairgrounds, you could get a taste of the future by strapping into a monstrous set of gear.

    It never really took off, to say nothing of coming into the home.

    Placing ourselves into a simulated environment that our eyes and ears will believe is a challenge. It requires tracking the user’s motion in space at multiple levels. Not just where the body is moving, but where the head is pointed. More advanced systems might even track your limbs. All of this, historically, has required significant computing power, specialized hardware, and complex software.

    But over the last five years, VR has gurgled back into life. I’d call Vive the best early expression of modern VR, but that’s not saying much. To make Vive work, you needed:

    • A high end gaming PC with a powerful graphics processor
    • Permanently mounted “lighthouse” devices for tracking your movements in 3D space
    • A heavy, bulky headset tethered to your PC by several cables
    • Big, awkward controllers to track your hand movements

    Once you assembled all of this, you were rewarded with an experience that resembled reality as viewed through a screen door. To trick the mind into a 3D experience, a VR rig needs to produce two images: one for each eye.

    So you need to be able to drive a lot of pixels in order to make 3D work. This is challenging in two places: computationally, as more detail requires more horsepower, and physically, as pixel density makes display panels like those used in a headset more expensive. In Vive’s case, managing these costs meant a distracting black grid seemingly overlaid on an otherwise plausible visual field.

    Really, you were perceiving the gaps between pixels.

    So, just to get into VR you were talking about $500 for Vive hardware and another $1500 for a high end gaming PC. In exchange, you got a mediocre picture and occasional headaches.

    Literally, headaches. I had to take the thing off sometimes because of nausea and other discomfort.

    It wasn’t a great value proposition.

    That doesn’t even get into how technically complicated the first-run experience was. It required so many cables plugged in, so many pieces of software installed, so many interlocking systems to debug. This was an experience only arch nerds could make sense of. It had no chance at mass-market appeal.

    But that was five years ago. Today, things work a little differently.

    Oculus Quest 2: surprisingly okay

    Oculus, meanwhile, has been aggressively pushing away from the tethered, PC-dependent model.

    In the case of Quest 2 (hereafter Q2), cameras built into the headset itself determine your motion through space. Cameras track the motion of controllers you hold in your hands, allowing you to interact with virtual menus and environments with surprising accuracy.

    This is not Facebook’s first attempt at such a self-contained system. Vive and Oculus have both been trying to crack the nut of self-contained VR for several iterations now.

    From a hardware perspective, it’s important to note that Q2 is merely okay, not excellent. The included straps are terrible, though upgrading to a better mounting system is possible. Adjust the straps wrong and you’ll have a splitting headache within minutes. But get them too loose, and you’ll find your view blurry as a different sort of headache sets in.

    VR is still fiddly.

    But far less fiddly than it ever was.

    Onboarding for Q2 was far easier than anything possible five years ago. You install an app on your phone, sign into Facebook (ick), and pair with the Q2 hardware. From there, you settle into the headset and follow some prompts.

    This is a much more consumer-friendly experience. Anyone with tech savvy, or even just a teenager nearby, will do it with no problem.

    All of this sets the table for a simple proposition: a computer tricking your brain into thinking you exist somewhere that isn’t real. Q2 can do that reliably and somewhat comfortably. Even the screen door effect is much improved. Unlike early adventures with Vive hardware, Q2 has the resolution to convincingly hide the technical details that drive its experience.

    The question is:

    So what?

    The software is meh

    Once you’re in virtual reality, what’s supposed to keep you there?

    Right now, not a lot.

    There are some compelling novelties. Beat Saber is a rhythm game that’s fun and physically demanding, and probably more interesting than any cardio equipment you’ll ever find at the gym.

    Google’s Tilt Brush, an early entry in modern VR, might be one of the more magical creative experiences you’ll ever have. With Tilt Brush, you can paint in 3D space using both mundane pigments and light. Five years on, it remains my favorite way to spend time in VR.

    There’s other stuff. 3D games, from shooters to puzzle things to a remake of Myst. Facebook would like you to conduct meetings in VR. Altspace, now owned by Microsoft, provides multi-user environments, and Facebook has a competitor in closed beta.

    This software runs the gamut from meh to okay. But I don’t think there’s a killer app that’s going to inspire fanatical, compulsive engagement with an experience you can only get in VR.

    Yet.

    Still, the economics of Q2 create a strange situation. At $299, it’s compelling enough to be an interesting gift. By comparison to a console, it’s a bargain. A PS5 costs $499. Add the television you need to play it, and a console experience edges toward $1000.

    Will enough people buy them to trigger the network effects needed to start the next phase? If so:

    • All new venues for social media emerge. The basic experience is already cognitively persuasive enough, and everyone is tired of looking at their friends and loved ones through a screen. The illusion of sharing space despite distance will be compelling, to say nothing of sharing physical experiences. But it only works if there are people there you want to share the space with.
    • A customer base large enough to gamble real money on emerges as well. If you can make money building VR experiences, you might make a metric fuckton of money making the one experience people can’t live without.

    Facebook, of course, badly wants these outcomes. The challenge is in the IP. Sure, Q2 is cheaper than a PS5, but so is a plastic bucket, and neither a bucket nor a Q2 is going to be able to play Spider-Man or Final Fantasy.

    Unless something changes, Facebook is gambling on novelty and a middling content catalog to move these units.

    What about Apple?

    Of course, Apple has been sitting this one out.

    Remember how VR needs lots and lots of pixels, the better to trick your eyes? The thing about Apple is that they have an unparalleled advantage in energy efficient processors that push loads of pixels. Their portable graphics prowess is the best in the business.

    Better graphics efficiency means nicer screens and smaller batteries, making for lighter and more comfortable headgear. Apple has been shipping denser and denser screens on lighter and lighter devices for over a decade.

    The thing about Apple, though, is that they don’t ship their experiments. They ship when they’re convinced they’re poised to gobble up the entire high end of a hardware market.

    Meanwhile, they’ve been laying the foundations for augmented reality and social VR:

    • ARKit, Apple’s framework for realtime compositing of mixed reality content, has been through years of iterations
    • Apple is two years into ramping up production of miniaturized LiDAR sensors, as used in their Pro-level iPhones and iPads
    • The new SharePlay API allows users in different locations to synchronously enjoy the same content

    Most of all, Apple already has thousands of developers using these APIs and trained up in their ecosystem. If they release a new category of hardware, they have both the software infrastructure and the developer base to make a play for that vaunted killer app.

    Rumors have said for years that Apple is targeting 2022-2023 for such a device. But who knows when they’ll move?

    Any day now

    Facebook is making lots of noise. They have a mindshare advantage, thanks to close consumer relationships built over 16 years, and their pricing is damned compelling. We’ll see how many Oculus units go home this holiday season. Microsoft wants their own crack at the pie, Vive is still out there, and Apple is locked in their wizard tower, up to god knows what.

    It’s still anyone’s game, and so far there’s nothing to inspire deep FOMO.

    Still, with the hardware firming up like this, a whole new paradigm could kick off any day now. For better or worse.

    Keep your eyes peeled, and don’t forget your Dramamine.


  • Amazon: bleeding institutional knowledge through attrition

    Here’s a post from someone in the r/antiwork subreddit detailing their difficulty in getting started in a gig at Amazon:

    One hour was spent trying to make the surround sound work for endless power points. Another hour was spent by HR basically gloating about how proud she is about catching people faking reasons for time off. This woman literally was so proud about how she will literally call funeral homes to make sure the person you said died, is in fact actually dead. She was so arrogant about it. It really disgusted me.

    Next the safety guy came in and said a few things, only to leave without an explanation. Then they have us log into these iPads. They’re wondering why we can’t. We never received login information to do so. That took another hour. I tried logging in with the info i had for the recruitment website but that didn’t work. They gave up.

    The post is really sad: OP needed accommodation for their ADHD, but didn’t have a supportive environment to get it, so walked from the job.

    All of this is of a piece with other stories we’ve heard from Amazon: they don’t care about attrition, especially at the warehouse level. Using people up is the business model.

    But the obvious downside of this is that their institutional knowledge is constantly running out the door. After a while, no one knows how to do the basics of essential tasks like onboarding, so the jobs are miserable for new folks from the very first second.

    Hard to imagine that doesn’t catch up with them eventually.

  • 9/11 and a Hari Seldon future

    It’s important to understand the War in Iraq as an imperial snuff film.

    You can get away with a lot in America—destroying the financial system, illegal arms dealing, laundering drug money—without earning the lasting ire of our ruling class. America is an empire long ago built on a two-tiered view of human worth.

    It’s a country that has always treated some of its population as disposable. Which neatly limits your accountability for malfeasance. Consequences are for those other people. Up to a point.

    Because of this two-tiered state of affairs, American life is not itself treated as sacred by our culture, nor is civic duty.

    But the American Status Quo is another story. Rupturing that has consequences.

    While we lost thousands of American lives on September 11th, 2001, we also lost our sense of the status quo. Bankers, politicians and generals spent their day feeling out of control. Literally running for their lives, American elites felt visceral fear as they evacuated their strongholds in the wake of a terrorist attack. The people who were supposed to be safest in our country, the people with the most power, were scrambling.

    This is not how the United States is supposed to work.

    Bush and his courtiers spent the day in a nightmare of uncertainty, vulnerability, and disconnection. The resulting trauma of that crisis was pumped and massaged into the psyche of the country. In the US, calls for blood were in the media, in the halls of government, in the corner bar.

    Somehow, Afghanistan wasn’t enough.

    Meanwhile, Iraq never had anything to do with 9/11. Didn’t, as claimed, have terrifying weapons that could end Our American Way of Life as We Knew It. What it did have was people who bore a passing cultural resemblance to the attackers who shattered our status quo. It had a feeble dictator.

    So 18 months later we all watched on cable news while the American Empire made a demonstration of its power. The world’s most expensive military made quick work of Iraq’s defenses, concluding its invasion in less than one month.

    Saddam, humiliated, was yanked out of a hole in the ground nine months after that. Then they killed him.

    Truly some street thug shit, sending a message: the US can still deal pain and destruction wherever we want it. For the rest of the world, this was a threat. For US citizens, this was meant to be a reassuring promise.

    For the traumatized elite, it was confirmation that they were still in control.

    Foundation on TV+ is an intro to imperial power

    Vague spoilers within.

    One of the gifts of science fiction is its ability to reframe our perception. Spaceships are fun, but they’re not really the point. What matters is that, from a new frame of reference, we can imagine old problems fresh. We can discover new points of view, freed from the biases of history and prejudice.

    At its best, science fiction is a social simulation laboratory.

    In the case of Foundation, the simulator projects an enormous, tactile model of empire.

    How does empire work? What are its traits?

    Foundation takes all these details and zooms in on them, letting us roam around and examine systems of empire. Imagine a sociological museum filled with exhibits, diagrams and models. By imagining a powerful state that spans an entire galaxy, Foundation lets us really dig into the mechanics of power.

    And what is an empire?

    Empires are… civic eruptions. They’re tangled webs of logistics capacity, resources of all kinds, military power, and self-justifying ideology. Empires can snowball, since having all of these things in combination lets them seize more capacity, more resources, more power. Ideology provides the moral and cultural lubrication needed to enable these actions.

    The challenge is stability. All that growth comes at a cost. Administrative overhead, yes, but also the resentments of those on the losing end of encounters with the empire.

    Foundation explores the practical, visceral experience of an empire: the brutality of power, the terror of instability and decline. We join Cleon as his Peace is interrupted by a terrorist attack. He responds, much like America, with attacks that do little more than destabilize the game board and slake popular bloodlust.

    “You can’t play chess with someone who’s willing to set the world on fire.”

    Like America, Cleon lets his subjects watch on TV as the empire enacts revenge upon far-away people who may or may not have had anything to do with their crisis.

    An imagination toolbox

    America is an empire in decline. I walk with the trauma of watching a leviathan crumbling around me.

    Logistics capacity sputters. Resources are squandered to fight enemies we don’t have. Meanwhile, hundreds of thousands of lives are snuffed out by a plague—many multiples of those lost on 9/11.

    Large swathes of our population are too ignorant to shield themselves from that plague now, even as science provides badly needed, effective protection. Meanwhile, grifters take money for cures that don’t work.

    We’ve always had the ability to shut down the covid pandemic within our borders. We’ve just never bothered to do the work.

    There is, everywhere, an air of discontent. The minor burghers who would once bully the poor into low-wage work now lament shortages of people willing to be so oppressed. Labor actions are more and more frequent, as conditions grow so dire that workers must overcome decades of crumbling union power and organize.

    Of course, injustice abounds. We see corruption out in the open, we see the innocent oppressed for the color of their skin, we see righteous anger in the streets.

    And we see white supremacists trying to overthrow our government.

    Meanwhile, basic needs go unmet. US citizens, supposedly the freest people in all the world, can’t access medical care without risking financial calamity. Wages have stagnated. We’ve abandoned updating the federal minimum wage. Homelessness is a soaring problem, and though we have the resources to solve it, like covid, we just can’t be bothered.

    I don’t know what’s next for this place. Not knowing is scary and tiring.

    Foundation enters with metaphors, postulates and scenarios to help me imagine the long term consequences of what I see around me. All of it is rendered in exquisite, emotional, human detail. So far, we’ve touched:

    • The civic conflict between science and theology
    • Terrorism
    • Colonialism and the tribute paid to tyrants
    • Use of mathematics, engineering and science to enhance the imperial machine
    • The use of media narratives to support the whims of the political elite
    • Gerontocracy and the consequences of long-term elite crisis
    • Civil unrest as an outcome of the dimming light of the state
    • Imperial meddling in the political process of client states

    Every detail of the writing is thoughtful, but every other detail is just as attentive. The production design brings us exquisitely rendered costumes, architecture, vehicles and props. A poignant, utterly distinctive score from Bear McCreary (Battlestar Galactica) sews everything together into perfect emotional resonance.

    The cast is, of course, unbeatable.

    Lou Llobell and Jared Harris are giving these roles everything they’ve got.

    I’m haunted and captivated by what I’ve seen so far. I chew on the stories between viewings, and find so much integrity in the web of cause and effect they describe.

    The scale of the story works. It’s a successful epic that feels both adult and nourishing. Where Game of Thrones hit us with brutality to be provocative, Foundation finds more subtle ways of taking our maturity seriously—without pulling punches on the brutality of empire.

    It’s not necessarily feel-good. But it’s a story of how to recognize, survive and mitigate a difficult moment in history. It’s a story about putting a floor on darkness and chaos. It’s a tribute to our power to stand up to tyrants and chart a new course.

    I don’t know about you, but I need a little of that right now.