
The average AI criticism has gotten lazy, and that's dangerous

Let’s get one thing out of the way: the expert AI ethicists who are doing what they can to educate lawmakers and the public are heroes. While I may not co-sign all of it, I support their efforts to act as a check on the powerful in the strongest possible terms.

Unfortunately, a funny thing is happening on the way into popular discourse: most of the AI criticism you’ll hear on any given digital street corner is lazy as hell. We have to up our game if we want a future worth living in.

Sleeping on this will cede the field to people who will set fire to our best interests just to gain 2% more market share. I want people of conscience to be better at this discussion.

The danger of bad critique

There’s a fork in the road.

After fifty years of evolution, digital computing now has the power to reliably interpret certain patterns of information, and to generate new patterns based on that input. Some of these patterns are not really what we want. But as time passes and investment continues, the output becomes more and more compelling. I think it’s most productive to call this pattern synthesis. But its purveyors would prefer “artificial intelligence.” I’ll use the terms interchangeably, but I think “AI” is more brand than accurate descriptor.

Whatever we want to call it, the cat is out of the bag. We are not going to stop its use because all it is, in the end, is one of many possible applications of commodity computing hardware. It is broadly documented and broadly available. To curtail its use would require ruthless restriction of how individuals use their privately owned computing devices.

I don’t see it happening, and if you follow the idea to its logical conclusion, the civil liberties implications suggest we should not want it to happen. Think Fahrenheit 451, but the jackbooted thugs destroy your kid’s Raspberry Pi.

That said, not all pattern synthesis applications are created equal, nor are all computing devices. OpenAI enjoys a significant lead in the space, and they have enough advanced computing hardware to create uniquely powerful outcomes.

There are competing initiatives. Other vendors are attempting to catch up with their own proprietary products, and open source ML is a thriving ecosystem of experimentation and novel developments.

But as it stands, OpenAI is in no danger of losing its lead. ChatGPT’s quality steadily improves, as do its abilities. The difference between this year’s product and last year’s is staggering.

Meanwhile, OpenAI cannot keep up with demand.

But I was told this stuff was useless

At some point in time it wasn’t worth much. A mere toy curiosity. But the evolution of these tools is happening at a vertiginous pace. Look away for a few quarters, and your picture of how it all works is fully out of date.

Sadly, that doesn’t stop its lazier critics.

The fork in the road is this: we can dismiss “AI.” We can call it useless, we can dismiss its output as nonsense, we can continue murmuring all the catechisms of the least informed critique of the technology. While we do that, we risk allowing OpenAI to make Microsoft, AT&T and Standard Oil look like lemonade stands.

We then cede any ability to shape the outcomes of pattern synthesis technology, except through the coarse and sluggish cudgel of regulation. And I don’t know about you, but the legislators in my jurisdiction don’t have the technical sophistication needed to do anything but blindly follow the whims of the last lobbyist they spoke to.

Real activists ship

Whatever your beef with AI, you can’t put the genie back in the bottle without outlawing statistics or silicon. The shittiest version of any computer in your house can probably achieve some machine learning task right now, if you download the right stuff.
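
To make that concrete, here’s a minimal sketch, assuming only Python and scikit-learn (both free downloads): a classic machine learning task that even ancient commodity hardware can train in seconds.

```python
# A classic ML task that even old commodity hardware handles in seconds.
# Assumes Python with scikit-learn installed (pip install scikit-learn).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1,797 labeled images of handwritten digits, bundled with the library
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"Digit recognition accuracy: {model.score(X_test, y_test):.2%}")
```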

More than that, enormous amounts of human labor concern the management and creation of patterns. OpenAI is sold out of its computing capacity because the stuff they do multiplies productivity.

Pattern synthesizers alter the economics of labor. Like global networking, general purpose personal computing, telephones, electrification, and combustion engines before it, pattern synthesis changes the shape of what individuals, teams, and societies can accomplish.

So the question is: what kind of future do you want? Do you want a future where effective, reliable pattern synthesizers are available to everyone at a reasonable cost? Or do you want a single company to set its prices, making great margins by serving only the most profitable customers?

Do you want a future where pattern synthesizers are built cooperatively, on consensually contributed data? Or do you want a regulatory framework authored by the richest guys in the room?

Do you want a future where pattern synthesizers are energy-efficient, balancing performance against externalities in a sustainable way? Or do you want their costs and externalities concealed and laundered?

Do you want pattern synthesizers to create a caste system of technical haves and have-nots?

That’s pretty over the top

Twice a month, I head over to my library. For an evening, I sit at a table and help seniors with their technology questions. They bring their laptops, phones, tablets, even printers, and often lists of problems that have cropped up. We work through their issues, try to fix problems, and I do my best to reassure them their difficulties are not their fault.

But too many blame themselves nonetheless. They don’t have a mental model for why things are doing what they’re doing.

  • Why this website is asking them to log in with their Google account via some obnoxious popover (answer: someone has an OKR for new account signups).
  • Why their computer is so slow for no reason (answer: the vendor installed a backdoor to add crapware that uses vast amounts of CPU).
  • Why someone was able to remotely log into their computer and destroy all their data (answer: they got scared into calling a scammer call center, and social engineering did the rest).
  • Why they can’t make sense of the gestures and controls necessary to operate any modern smartphone (answer: the UX design isn’t tested on people like them).

This is an issue of economic justice and political self-determination, as all essential civic activities become digitally mediated. Lack of technology literacy and access hits many people of all ages, but especially low income families and senior citizens. We have completely failed to bring them along.

And at this rate, it’s going to happen all over again.

But worse.

The dumb critique

I want to talk about all the essential criticism that needs more airtime, but first we need to walk through all the counterproductive bullshit that serves to erode the credibility of AI criticism as a whole.

It’s “useless” and produces “nonsense”

The AI elite have been pushing a narrative that would cement their lead with a regulator-imposed fence. Namely: that AI is going to kill us all. It’s important to note here that they’re full of shit on this point: if they truly believed it, surely they’d use their incredible leverage over their own companies to, at the least, delay the inevitable.

This has not come to pass. Instead, they keep going to work each day.

Researchers and ethicists have countered this by explaining that these tools are not, in fact, “intelligences” but more akin to “stochastic parrots,” repeating patterns they’ve seen before without much in the way of higher reasoning.

This has, unfortunately, unleashed a wave of stochastic parroting itself—the meme is irresistible in its visual flourish—which misinterprets the criticism to mean that the output of the tools is ipso facto without value.

But in fact, an African Grey Parrot retails for thousands of dollars. Despite the birds’ limitations, parrot fanciers find tremendous value in them for companionship and amusement.

Similarly, the output of an LLM is not guaranteed to be useless or nonsense or otherwise without meaning.

For example, LLMs can be used to provide surveys of a topic area, and even book recommendations, tailored to a specific learner’s need. They have, famously, a tendency to “hallucinate,” a generous term of art for “fabricating bullshit.” But in just a few months, this tendency has found a curious guardrail: the LLM can browse the web, and in doing so, provide citations and links that you can check yourself. So where before you might have been led toward publications that didn’t exist, you can now prompt the LLM to ensure it’s giving you proof.

So it’s not nonsense. Nor is it useless.

Part of what’s interesting about how LLMs work is how they can interpret existing information and clarify it for you. An LLM can usefully summarize text of all kinds, from a dense essay to a file of source code.
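
Here’s a minimal sketch of what that looks like in practice, assuming the `openai` Python client (v1 or later) and an API key in the environment; the model name and file path are illustrative placeholders, not recommendations.

```python
# Minimal summarization sketch. Assumes `pip install openai` (v1+) and
# OPENAI_API_KEY set in the environment. Model name and file path are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

with open("firmware/main.cpp", encoding="utf-8") as f:
    source = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model will do for this sketch
    messages=[
        {"role": "system",
         "content": "Summarize this code: its purpose, structure, and any obvious risks."},
        {"role": "user", "content": source},
    ],
)
print(response.choices[0].message.content)
```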

And so the problem with saying “AI is useless,” “AI produces nonsense,” or any of the related lazy critique is that it destroys all credibility with everyone whose lived experience of using the tools disproves the claim, harming the standing of AI criticism overall.

Worse still, those who have yet to be exposed to the potential of these tools may take this lazy category of critique at face value, losing the opportunity to develop their own experience with an emerging, economically consequential technology.

This category is the laziest shit out there and I badly wish people would stop.

Energy consumption as a primary objection

When we get to the part where people insist that AI technologies use too much energy, it starts to feel like some quarters are just recycling the valid criticisms of cryptocurrency without bothering to check if they fit.

And I get it: having failed to manifest a paradigm shift in their digital tulip bulbs, the worst of the crypto bullshitters have seamlessly switched lanes to hype AI. Nevertheless, these are distinct cases.

Energy consumption in cryptocurrency was problematic specifically because cryptocurrency built into its model ever-increasing energy costs. Essentially, proof-of-work cryptocurrency was designed to waste energy, performing cryptographic operations that served no meaningful purpose in the vast majority of cases.
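
A toy sketch of the mechanism (ordinary Python, standard library only) shows why: the miner grinds through nonces until a hash clears an arbitrary threshold, and the grinding itself produces nothing of independent use.

```python
# Toy Bitcoin-style proof-of-work: hash nonces until the digest falls
# below an arbitrary target. The discarded hashes are the "work":
# energy spent purely to prove energy was spent.
import hashlib

def mine(block_data: str, difficulty_bits: int) -> int:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # the one "useful" result
        nonce += 1  # everything before this point is deliberate waste

# ~2^20 (about a million) hashes on average at 20 difficulty bits
print(mine("example block", 20))
```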

That’s an exceedingly stupid use of energy, especially amid a climate crisis, and everyone who supports it deserves to be roasted.

But a pattern synthesizer has a directed goal: accomplish something useful as decided by its user. Not every product or model is doing a great job at this, but the overall trajectory of usefulness is dramatic.

The various things we call AI can interpret code for you, detect cancer cells, and helpfully proofread documents. I wrote 2000 lines of C++ that drive IoT devices deployed across my house with ChatGPT’s help. Currently, their uptime is measured in months.

As I’ve been writing this, I’ve asked ChatGPT 4 to assess how fair I was to cryptocurrencies above, and it provided some nuanced analysis that helped me get to something more specific (qualifying proof-of-work as the major energy wasting culprit).

So, at least for some cases, and for some users, and in ways that grow as the technology improves its effectiveness, AI is accomplishing helpful work. Moreover, by further contrast to cryptocurrencies, AI vendors are incentivized to reduce the cost of AI. Scarce resources like energy and advanced GPUs reduce their ability to serve customers and harm their margins. The more efficient pattern synthesis can be made, the more profitable its purveyors.

Meanwhile, the history of computing shows a steady trend: the amount of energy needed to accomplish work decreases, while the amount of work possible increases. This even holds true for GPUs, now with 25 years of data.

On Mastodon.social, @glyph argues that energy costs are in fact orders of magnitude smaller for LLMs, which seems to hold some water. Compare the gigawatt hours needed to train and operate an LLM to the hundreds of terawatt hours needed to operate the Bitcoin network. Or, for that matter, to total data center energy consumption, also measured in hundreds of terawatt hours.
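
Back-of-envelope arithmetic on those units makes the gap vivid. The specific numbers below are illustrative placeholders standing in for the rough magnitudes cited above, not measured figures.

```python
# Unit comparison using the rough magnitudes cited above. The specific
# numbers are illustrative placeholders, not measurements.
llm_energy_gwh = 10       # "gigawatt hours" to train and run an LLM
network_energy_twh = 150  # "hundreds of terawatt hours" for Bitcoin

network_energy_gwh = network_energy_twh * 1_000  # 1 TWh = 1,000 GWh
print(f"Roughly {network_energy_gwh / llm_energy_gwh:,.0f}x more energy")
# => Roughly 15,000x more energy: four-plus orders of magnitude
```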

Any argument against pattern synthesis on the grounds of energy consumption is an even more urgent argument to shut down the existing constellation of software and computing products.

Thing is, we decided a long time ago to build our society on the premise that productive work was worth spending energy on. Today we spend vast amounts of energy on things like:

  • Mundane cloud computing infrastructure, as mentioned
  • Global telecommunications
  • Commuting to offices so people can sit on Zoom calls
  • HVAC for those same offices
  • Industrial fabrication
  • Air travel
  • Sea shipping
  • Sports
  • Making beer and keeping it cold

These are a handful of examples off the top of my head. What about applied statistics demands unique scrutiny?

I’ve had the shittiest year when it comes to climate change, and I’ve invested in significant green energy infrastructure, from solar to battery to heat pumps. I take the issues of renewable energy and climate incredibly seriously.

And asking an emerging technology to hold the bag for a climate crisis that spans industries just seems incoherent to me, unless you also call for an end to mundane computing generally.

We must address climate from multiple dimensions:

  • Regulatory pain for polluters and fossil fuel companies
  • Ending fossil fuel subsidies
  • More research and investment in energy storage
  • Replacing incumbent energy infrastructure
  • Driving down the costs of alternative energy infrastructure manufacturing
  • Cutting wasteful and inefficient uses of energy across sectors and activities

Simply making AI a boogeyman for Shell and Exxon and BP’s fuckups doesn’t deliver the goods. It’s fundamentally unserious as a primary objection to AI, even if, like all energy consumers, AI companies should be subject to scrutiny, regulation and reform on their resource consumption.

Students will use it to cheat!

Fuck the rote memorization, performative bullshit of school. I’m not even giving this more than a paragraph. Let the schools figure out how to actually create learning outcomes instead of regurgitation sessions. This isn’t AI’s problem to solve. Time to catch up with the 21st century, you putrefying mechanism of oppressive conformity and class stratification.

Examples of actual, important issues we must confront

Pattern synthesis is going to melt the status quo the way the web did 30 years ago. No industry, no human activity will go untouched.

Some people will do absolutely stupid stuff trying to save money with this power.

But the worst is that people will use the power to do harm and this technology will be only too happy to oblige.

This is not an exhaustive or definitive list. Any omission you may catch is a failing of my own scholarship and rigor, not an indictment of the omitted critique. (Unless it’s genuinely dumb, but that’s up to you to judge.)

Instead, here is a survey of the sort of things we need to be aware of so we can demand and even build better alternatives. Failing that, understanding these issues in a tool people actually use helps us demand accountability for that tool’s effects.

Training sets include CSAM

Updated 12/20/23 to add this from 404 Media:

“While the amount of CSAM present does not necessarily indicate that the presence of CSAM drastically influences the output of the model above and beyond the model’s ability to combine the concepts of sexual activity and children, it likely does still exert influence. The presence of repeated identical instances of CSAM is also problematic, particularly due to its reinforcement of images of specific victims.”

The dataset is a massive part of the AI ecosystem, used by Google and Stable Diffusion. Its removal follows discoveries made by Stanford researchers, who found thousands of instances of suspected child sexual abuse material in the dataset.

AI perpetuates, amplifies and launders bias, with consequent unequal impact

Because the pattern synthesizers are built by ingesting, well, patterns, they’re trained on the things humans have written and said. Those patterns are full of bias.

You will struggle, for example, to ask an image generator to give you a Black doctor surrounded by white children because the legacy of colonialism means most images we have of such a scene are inverted.

This presents serious problems for using these tools to imagine different futures. The more of today’s inertia they carry, the more they replicate a negative past.

Meanwhile, bias exists all over human belief. Biases inform tremendous violence and oppression. Computing systems that blithely amplify that as many times as someone can afford are fundamentally dangerous. Even worse if they can create new patterns altogether.

Finally, if you can blame the AI for your biased decision, it becomes harder for those wronged to actually address unjust outcomes.

AI is constructed non-consensually on the back of copyrighted material

This is one of the greatest stains on this technology, period.

People’s work was taken, without consent, and it is now being used to generate profit for parties visible and not.

Thing is, this is unsustainable for all parties. The technology requires ongoing training to remain relevant. Everyone who is part of its value chain, in the long term, must be represented in both its profits and the general framework for participation.

Incidentally, this is one of the places where critique becomes most incoherent: if the output of these systems is “nonsense” or otherwise has no value, where is the threat to creators?

It is precisely because the outputs are increasing in value and coherence that it’s essential that the people who make that value possible get a fair deal.

AI hallucinates with unearned confidence

If the AI is in fact “hallucinating,” how will you know? It has a certain bluff and bluster that suggests no possibility of doubt. This makes the technology better to demo, but it also wastes time and even misleads.

This has had some comical effects.

Funny or not, it’s clear this tool is intruding further and further into how people do real work that impacts real lives. Deception is an irresponsible product feature.

Being unable to debug the thing is untenable the more we rely upon it.

AI can be used to create misinformation and pollute the information sphere

Fake images, fake voice, fake videos, fake text, fake posters.

The cost of waging an information war has dropped by orders of magnitude. We’re not ready for what this means, and the year over year trajectory of output quality is dramatic.

Meanwhile, mundane applications let anyone quickly gum up the works with low-quality content.

Any and every beef on the part of entertainment labor

No, you should not be coerced into giving up your likeness for a studio to use forever just because now the computer will allow it.

Further degradation of labor broadly

I do not believe that this technology, now or in the future, is effective at replacing human insight or ingenuity.

The more complicated question is how pattern synthesis fits into and disrupts our existing experience of work.

There’s precedent for this. Before electrification and the assembly line, building things was a matter of craft and expertise. As technology progressed, this work was broken down (“degraded,” in Braverman’s parlance) into smaller and smaller fragments by industrialists like Henry Ford, until workers no longer needed to know a whole craft, just their tiny piece of the assembly line.

Pattern synthesis tools could inject a similar degradation into traditional knowledge work, or erode the authority of decision makers by making them resort to AI at crucial moments.

Already, they create problems for transcriptionists and translators, who find their work far less valued than they did a decade ago. It’s meager comfort to those immediately affected, but history suggests the wholesale displacement of one category of labor doesn’t require every category be displaced: blacksmiths had a bad time after cars replaced horses, but mostly those cars now take us to different jobs.

On the other hand, it’s important to note that transcription of far more things happens now that it’s automated. Three years ago, transcription was a tediously manual process for TikTok creators. Today the push of a button gets them to a 90% good enough set of subtitles, making this content enjoyable for everyone, even those who can’t hear (which may be the deaf, but may be anyone else in transient circumstances, like waiting in line without headphones—the curb cut effect is real).
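
For a sense of how push-button this has become, here’s a minimal sketch with OpenAI’s open source Whisper model, assuming `pip install openai-whisper` plus ffmpeg; the file name is a placeholder.

```python
# Minimal automatic-subtitles sketch using the open source Whisper model.
# Assumes `pip install openai-whisper` and ffmpeg; "clip.mp4" is a placeholder.
import whisper

model = whisper.load_model("base")     # small model, runs fine on CPU
result = model.transcribe("clip.mp4")  # pulls the audio track and transcribes

# Timestamped segments, ready to format as subtitles
for seg in result["segments"]:
    print(f"[{seg['start']:6.1f}s] {seg['text'].strip()}")
```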

Exploiting labor in the Global South at tremendous psychological cost

In order to manicure the final product presented to users, OpenAI turned to the cheapest workers they could find on earth.

The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance.

One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said.

Western technological development has a centuries-long tradition of foreign exploitation always just hidden from view, and it seems pattern synthesizers are not exempt. While the typical deal with technology startups is that workers participate in the success of their employer, these data labelers got nothing but a one-time, paltry wage. As OpenAI thrives, they’re left scarred.

Terrifying surveillance potential

We now have the technology to transcribe any conversation without a human in the mix. The transcriptions are full of errors, but from a surveillance context, that doesn’t matter so much.

It has never been more possible to thoroughly surveil the activities and communications of anyone.

Worse still, faces can be scanned and tracked, license plates read, even gaits tracked.

This may be the end of privacy.

It gets worse from there, as other critiques compound into this one. The AI can be wrong, it can be biased, and it can have a disparate impact on different populations and identities.

This is a horrifying outcome that actively denies people their liberties.

You have to trust the platform vendors too much

How do you know your information is safe when it’s used to accomplish work in a centralized system? Privacy policies don’t mean anything when a technical failure can breach them. It happens all the time in mundane computing. Why should pattern synthesis be unique?

“My favorite product added AI in a stupid way and I hate it”

Yeah, that sucks and I’m sorry. There’s a lot of sloppiness and hype going on.

Eyes on the prize

We are not going to turn back time, unless you have a plan that can successfully plunge global commerce back hundreds of years.

This shit is here.

Yes, there’s hype. Yes, there’s scumbags. But there’s also some baby in this bath water for meaningful numbers of people.

We have to act in accordance with that reality.

The reality is that there’s a lot to work on in order to create just, decent, scalable, personal, private implementations of this technology. I would argue we should.

Because these tools can make us more effective. They can amplify our reach and insight, they can help us accomplish things we couldn’t on our own.

But there’s a lot wrong with them. If we plug our ears and say this technology should not exist, the growing ranks of people who come to depend on it will brush past us, hearing only the case presented by those best positioned to profit.

If we simply dismiss this technology, people may believe us, and find that a whole new technological paradigm has passed them by, curtailing their power and agency.

If we let this technology become the plaything of the affluent exclusively, we’ll deepen our digital divide in a way we may not be able to recover from.

So we need to up our game. I hope this survey of the landscape helps.

