ChatGPT has forever changed my career
I had certain expectations of the possible, informed by decades of building things. Some things lay within my zone of talent, while others lay far out of reach, in a place of broken ROI.
Today I’m not so sure. Today far more seems possible than I ever expected.
I’m still not sure what to do about this.
Prologue: how applications are structured
The structure of software has pivotal consequences for the future success of the project.
When structure works against a project, that project is more costly to build and maintain. These costs show up in a multitude of ways:
- Complexity cost for changes and new functionality, making it harder to implement features, harming velocity
- Opacity cost in debugging and reasoning about the workings of the application, making it harder to recover from failures, harming velocity
- Onboarding cost for new team members joining the project, harming velocity
Velocity is the fuel of a software project. Velocity makes challenges feel winnable. Velocity provides a sense of progression, and it’s addicting. It feels good to build things. It feels good to see the things we imagine take shape.
The morale benefits of velocity are intuitively understood and deliberately captured by the best software leaders. Going from zero to one is hard, and it helps when you believe it’s possible.
Velocity ignites the fire of belief, stoking it when it falters.
While reliability, scalability, and correctness matter, without velocity you don’t ship. If you don’t ship, your code is stillborn, and eventually, your business dies. You need a vehicle that moves.
Like any vehicle, then, velocity is affected by structure. Square wheels don’t roll.
To succeed in building a complex tool requires thoughtfulness about the structure you’re laying out. You need good foundations to build on, especially modularity, with seams that make composition and rework of components low-cost.
The shape of software
The germ of software is a set of requirements. Requirements animate software, giving it a purpose for shipping.
Someone, somewhere, needs help.
The software is built to oblige.
To address requirements, we build components. Components implement some functionality, and have specific contracts and boundaries, even if only implicitly.
Components address roles: a well-scoped need fulfilled.
This can spool into fractal complexity. Consider an application that sends an email.
Requirement: connect to a server and send a message.
Simple enough. We know we need a networked app. But not enough to build with yet. Let’s try something more specific:
Requirement: connect to a server at this URL, using a specified protocol documented at a link I’ll give you, and send a plaintext message to a user-provided identifier.
Now we can start to sketch a component that addresses the requirement. We'll need a NetworkClient. To do its work, the client will need to fulfill some roles:
- Server connection checking: does this thing exist? Can we reach it?
- Apply the protocol: Once connected, we'll need some way to convert the message into a format and transport the server can accept.
- State management and error handling: Can we communicate the state of things to the developer, and perhaps the user?
- Content validation: Is this message correctly addressed? Is there anything about its content that would prevent successful transmission?
With these roles explored, you can see that a whole new round of requirements has burst forth. To address them, we’ll need to add more components to the network client. The appropriate shape for this is going to differ according to the culture of a language. In simplest form, these components might be expressed as functions. In other languages, structs, objects, enums and closures come onto the stage.
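To make that concrete, here's a minimal sketch of how those roles might fall out as components inside the NetworkClient. Everything beyond the NetworkClient name is hypothetical; the types, error cases and stubbed internals are illustrative, not a real client.

```cpp
#include <iostream>
#include <optional>
#include <string>
#include <utility>

// Hypothetical sketch only: a real client would wrap an actual socket or
// HTTP library instead of these stubs.

enum class SendError { Unreachable, InvalidAddress, ProtocolFailure };

struct Message {
    std::string recipient;
    std::string body;
};

class NetworkClient {
public:
    explicit NetworkClient(std::string serverUrl) : serverUrl_(std::move(serverUrl)) {}

    // State management and error handling: callers get one clear outcome.
    std::optional<SendError> send(const Message& msg) const {
        if (!validate(msg))         return SendError::InvalidAddress;  // content validation
        if (!canReach())            return SendError::Unreachable;     // connection checking
        if (!transmit(encode(msg))) return SendError::ProtocolFailure; // protocol + transport
        return std::nullopt;  // success
    }

private:
    // Server connection checking: stubbed; a real client would open a socket.
    bool canReach() const { return !serverUrl_.empty(); }

    // Apply the protocol: convert the message into whatever the server speaks.
    std::string encode(const Message& msg) const {
        return "TO:" + msg.recipient + "\n\n" + msg.body;
    }

    // Transport: stubbed; a real client would write to the connection.
    bool transmit(const std::string& wire) const { return !wire.empty(); }

    // Content validation: is this message addressed at all?
    bool validate(const Message& msg) const {
        return msg.recipient.find('@') != std::string::npos && !msg.body.empty();
    }

    std::string serverUrl_;
};

int main() {
    NetworkClient client("smtp://example.com");
    if (!client.send({"someone@example.com", "hello"})) {
        std::cout << "sent\n";
    }
}
```

The specifics don't matter. What matters is that each role gets its own seam, so any one of them can be reworked without disturbing the others.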
All of these boundaries and structures are for our organizational, logical benefit. They get boiled away by the magic of compilers and interpreters into the sheer, infinitely iterating might of a computing device.
You can make frightfully twisted structures that nonetheless run. Every developer's career is defined by the lessons those sharp edges teach.
The software doesn’t care. The computer doesn’t care. The iron law of building Idea Machines is that you can do it as poorly as you want. No one will stop you except the machine itself, and only then when your intentions have veered into the impossible.
The software professional’s job is structure that maintains velocity
To be successful, you have to observe these mechanics as best you can, building a plan that navigates through them enough to someday ship something. This is the work of software, more than just typing things or proving you understand a bubble sort by describing one with a whiteboard. Can you meet requirements without tripping over your own shoelaces?
This is challenging work, made all the more so by how poorly it’s explained when you’re starting out. It occurs to me that in 13 years in formal technology roles, I’ve only once had an engineering manager to guide me.
But eventually you skin your knee enough times and learn some lessons. You develop intuitions, then routines, for how you structure your projects, break down problems, and begin solutions.
The snag is that, at the code level, your approach to these things may be dramatically impacted by the language and framework features you have access to. Jumping into a new platform carries friction. You have to learn to apply your structural priorities according to all new instruments.
Some are straightforward transfers, like going from a piano to a church organ. Something new but not something alien. Others are like going from a piano to a violin. The same rules in the abstract, but a very different tactile experience. Bridging the gap can be a lot of work, and it can absolutely kill your velocity. Backing up to figure out the specific syntax for achieving a design pattern you favor can require forum searches, reading through books and docs, or even a wait for asynchronous support.
‘It’s really taking you 1k words to reach the AI shit?’
You have to understand what I understand about the proper construction of software to grasp why ChatGPT and its ilk are so transformational.
I know how to build an object-oriented application that runs on a resource-constrained, networked device. I’ve done it so many times I can do it in my sleep now. It’s just, all that experience happened in Objective-C and Swift.
I avoided C++ like the plague because it's fucking weird and clunky. The code would sometimes peek out of third-party components, technically viable in iOS projects, but mostly avoided except by game masochists.
But let me tell you: I have felt a deep and seething rage about the remote controls that came with my heat pumps. Absolute garbage, marring the otherwise pristine miracle of high-efficiency heating and air for all seasons. So I wanted to fix this.
The problem was complexity. Here are all the requirements I had to address:
- Network connectivity with Home Assistant using MQTT
- Infrared pulses perfectly calibrated to the timings of my specific Mitsubishi IR control protocol
- Visual readouts using LEDs and a bitmapped screen
- Local input handling for temperature, mode and fan speed settings
This required writing a pile of top-level components serving specific roles, like:
- IRInterface, to manage the infrared emitter
- Input, to capture button presses and turns of a rotary dial
- Display, to manage screen writes and setup
- EnvSensor, to read and publish sensor values
- HAInterface, to coordinate remote and local state using MQTT
- BaseView, to establish a common structure for converting local content into screen writes
The velocity benefit of these discrete components serving specific roles is real. Days before hitting release, I discovered a terrific library that made my aging, recycled implementation of the Mitsubishi IR protocol entirely obsolete.
Replacing it took less than 20 minutes, as I deleted huge blocks of code and replaced them with calls into the new library. Because the underlying IR implementation was entirely encapsulated as a subclass of IRInterface, nothing else in the code needed to change.
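Here's a rough sketch of what that seam looks like. IRInterface is the real component name; HVACState and the two emitter classes are hypothetical stand-ins, since I'm not reproducing the actual code here.

```cpp
#include <cstdint>

// IRInterface is the project's name; HVACState, LegacyIREmitter and
// LibraryIREmitter are hypothetical stand-ins for illustration.

struct HVACState {
    float   targetTempC;
    uint8_t mode;      // heat, cool, dry, fan...
    uint8_t fanSpeed;
};

// The rest of the firmware only ever talks to this interface.
class IRInterface {
public:
    virtual ~IRInterface() = default;
    virtual void transmit(const HVACState& state) = 0;
};

// The aging, hand-rolled protocol implementation.
class LegacyIREmitter : public IRInterface {
public:
    void transmit(const HVACState&) override {
        // ...hundreds of lines of hand-timed pulse trains lived here...
    }
};

// The replacement: a thin wrapper around the new library.
class LibraryIREmitter : public IRInterface {
public:
    void transmit(const HVACState&) override {
        // ...a handful of calls into the library instead...
    }
};

// Swapping implementations is a change at one construction site;
// nothing else in the codebase has to know.
IRInterface* makeEmitter() {
    return new LibraryIREmitter();
}
```

Because everything else only ever sees IRInterface, the hand-rolled protocol and the library-backed replacement are interchangeable at a single construction point.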
None of it was possible without ChatGPT
I’d attempted building the exact same project a year ago.
It sputtered out. I couldn’t maintain velocity. Endless tangents, stumped by a compiler or runtime error, researching some syntax or another, just endless tail chasing.
Thing is, loads of people have done this tail chasing before me. C++ is one of the great languages of computing history. It’s been used in so many ways, from hobby microcontrollers to serious embedded systems to simulations and games. There are endless forum threads, books, Q&A pages and mailing list flame wars dedicated to C++.
Which means ChatGPT knows C++ extremely well.
Show it code that baffles you, it can explain. Show it errors you can’t make sense of, it’ll give you context. Explain the code you want to write, it will give you a starting point.
From there, give it feedback from the compiler, it will course-correct. Give it shit code you want to refactor, it will have ideas.
None of it is perfect. Occasionally I could sense that the suggestions it was offering me were far more complex than the situation required. I’m sure there’s stuff in ThermTerm that would make a practiced hand at C++ cringe.
But this is the stuff of velocity. Instead of getting stumped, you’re trying new things. Instead of giving up in confusion, you’re building context and understanding. Instead of frustration, you’re feeling progress.
What matters is, I shipped
All around my house is a network of environmental sensors and infrared emitters. Their current uptime is measured in months, and they still faithfully relay commands and provide remote feedback.
They are working great.
My curiosity could be immediately and usefully satisfied. I asked things like:
Would it be possible to replace these lookup arrays with dictionaries? [redacted pile of code]
I was concerned that keeping indices straight would be more brittle than naming values by keys.
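For illustration, this is the shape of the trade-off I was asking about, with made-up values rather than the project's real tables. Parallel arrays tie meaning to position; a keyed container names the relationship.

```cpp
#include <array>
#include <cstdint>
#include <map>

// Illustrative values only; the project's real tables aren't shown here.

// Parallel lookup arrays: correctness depends on keeping indices aligned.
// Index 0 = heat, 1 = cool, 2 = dry... but nothing enforces that.
constexpr std::array<uint8_t, 3> kModeCodes = {0x01, 0x03, 0x07};
constexpr std::array<uint8_t, 3> kFanCodes  = {0x00, 0x02, 0x05};

// Keyed lookup: the relationship is named, so a reordering can't break it.
enum class Mode { Heat, Cool, Dry };

const std::map<Mode, uint8_t> kModeCodeByMode = {
    {Mode::Heat, 0x01},
    {Mode::Cool, 0x03},
    {Mode::Dry,  0x07},
};

uint8_t codeFor(Mode m) { return kModeCodeByMode.at(m); }
```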
Why is the compiler unhappy? [redacted pile of error spew]
I was genuinely stumped.
What does this error mean? “‘HVACCommand’ does not name a type”
I’d stumbled into a circular dependency, and the tools expected me to manage it.
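In C++, that error often just means the compiler hasn't seen the type's definition yet, which is exactly what happens when two headers include each other. A forward declaration is the usual way out. The sketch below uses the real HAInterface and HVACCommand names, but the include graph and the publish method are my hypotheticals.

```cpp
// HAInterface.h (hypothetical layout). If this header and the one defining
// HVACCommand include each other, whichever the compiler reads second sees a
// type it doesn't know yet and reports "'HVACCommand' does not name a type".
#pragma once

struct HVACCommand;  // forward declaration breaks the include cycle

class HAInterface {
public:
    // Hypothetical method: a reference parameter only needs the declaration
    // above. HAInterface.cpp would include the full HVACCommand definition
    // before touching its members.
    void publish(const HVACCommand& cmd);
};
```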
In each case, missing pieces of context might have derailed my sense of progress and confidence, prematurely ending my coding session. Instead, ChatGPT gave me enough useful guidance that I could overcome my roadblocks and deliver on my next requirement.
Stitch enough of these moments together, you’re going to ship.
It doesn't work this well with everything. Like I said, there's loads of content about C++ for an LLM to absorb. With newer stuff and less popular stuff, you're probably more prone to silliness and hallucination.
Still, you can accomplish so much with languages that exert the sort of cultural gravity that makes ChatGPT especially useful. JS/TypeScript, Ruby and Python will all let you build the web. Swift will let you build native apps for Apple platforms, while Java and Kotlin will get you there for Android.
There are more languages still, serving more domains and platforms, all popular enough to get useful advice from ChatGPT.
Developer switching costs are now much lower
It’s never been this easy to be productive in an environment I know well, because it’s never been possible to get this kind of consistent stimulus to velocity.
It’s never been this easy to become productive in an environment I don’t know at all, for the same reason.
I feel like I can build anything now, and ThermTerm is just one of several projects this year that have cemented that conviction. This is a transformation. I'm different than I was. I have more power, and my ambitions have greater reach.
But what worries me is that, at least so far, there's really one game in town. I was excited to try the new GitHub Copilot beta, but I found it not nearly as consistently helpful as ChatGPT, even though using ChatGPT meant switching to a web browser, which involved more friction. Other solutions may exist, but no one is coming close right now to the quality and reliability of ChatGPT for this category of work.
Forget AGI.
There's a serious social risk in a single company monopolizing this much influence over our productivity.
I am more certain than ever that this technology will be as essential to our experience of building software as compilers or networks or key caps. $20 for a month of ChatGPT can produce many multiples of that value in the right hands.
So to me the future of AI, whatever it holds, carries risk of entrenching OpenAI with the same level of power as Standard Oil or Old Testament Microsoft: able to shape the very playing fields of entire sectors. For the time being, OpenAI has the best stuff. They can influence the productivity of millions, their scale of impact limited only by GPU scarcity.
There's more than hype here. The technology is real. There are many who adopt an understandable posture of defensiveness toward the technology industry's fads and whims. In the wake of cryptocurrency bullshit, this has largely taken the form of reflexive skepticism.
I don't think that's the move on LLMs. I think the more productive stance is to proactively monitor the growing power of AI's purveyors. This stuff has serious cultural transformation built into it, and it's going to happen whether you personally believe it or not.
People want to do silly things with LLMs, like replace writers and artists. I don't think that's going to work that well. Instead, I see these tools as amplifying the individual reach of any given person who has access to them.
Instead of dismissing or decrying them, we need to get to work democratizing their access, or this will become a serious vector of inequality.
As for myself… for a mere $20 a month, I am transformed. Two thousand words later, I still don’t know what to do about it. It takes time to make sense of options that are multiplying. Loads of people are going to land in that spot in the coming months.
But eventually people are going to figure it out. Hold on to your ass for the social consequences of that.