Improved Means for Achieving Deteriorated Ends


iLL oMens

I keep getting dragged into conversations about LLMs. Whether professionally or personally, it seems everyone in the tech periphery is suffused with discussion about them. I thought writing a short note on LLMs would be enough for me, but here we are. (Somehow I've been working on this post for two months.)

I have personal, technical, social, and ethical objections to LLMs, but I think the more important motivation for writing is to lay out what I believe computers are for: what harms they can cause and what good they can create. Or at least what I value when thinking about what I want from software. I'm going to do my best to start from the frame of LLMs and maybe write a follow-up about the more general questions. Let's get into it.

Starting Assumptions

I need to touch on some foundational beliefs that inspired the title of this website. The masthead reads "Improved Means for Achieving Deteriorated Ends" in reference to this Aldous Huxley quote:

We are living now, not in the delicious intoxication induced by the early successes of science, but in a rather grisly morning-after, when it has become apparent that what triumphant science has done hitherto is to improve the means for achieving unimproved or actually deteriorated ends.

There is another quote, by Alfred North Whitehead, that I find inseparable from the Huxley quote. I've paired these quotes for so long (over 20 years now) that I'm unable to say where I discovered either.

Civilization advances by extending the number of important operations which we can perform without thinking about them.

There is a beautiful tension between these quotes that still animates me. On the one hand, I find it very difficult not to agree with Whitehead. Being able to focus on high-level tasks instead of inconsequential details is a huge part of why most technological advances are valuable. On the other hand, when I read the Huxley quote I feel it in my chest. It isn't a question of whether the science or technology works but a question of what we are trying to achieve. And who we want to empower.

A Few Objections

Disclaimer

  1. I have not used LLMs, whether for summarizing content, writing code, writing prose, or generating media of any kind.
  2. A consistent theme in my own critiques and in others' is that AI cannot possess intent, which leads to a host of risks. It pretends to see and we yearn for that sight to be real.

TL;DR: Read this critically since I have spent little time with LLMs.

Personally

On a personal level, this just isn't the way I wanted to write software. I don't write much code professionally now that I'm an EM, and part of the reason is that I want to be really passionate about the code I write, agonizing over every detail. It makes sense that with that kind of control-freak attitude, LLMs wouldn't have a ton of appeal. But even separate from that, I really enjoy the puzzle of understanding the various abstraction layers in systems and doing things for myself. I think there is value in knowing why every dependency and method is needed. It is difficult to imagine the satisfaction would be the same if I were issuing minor corrections or alterations to "someone else's code".

Professionally

On a professional level, my biggest objection to LLMs is that the ways they fail work against, rather than with, human talents. LLMs are designed to show me code that looks sensible enough to trust, regardless of whether it actually works. And "mostly works" is the worst kind of code, more expense than asset.

Put simply, the risk of hallucination makes it more difficult to judge whether the code works or not. Tests and types can provide some safety here, but I should probably write the tests myself, carefully checking the generated code.
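To make that concrete, here's a minimal sketch of what I mean; the slugify function below is invented for illustration, standing in for a plausible-looking generated body:

    # A hand-written test exercising hypothetical LLM-generated code.
    # The function is invented for illustration; pretend its body was
    # produced by an assistant and looks reasonable at a glance.

    def slugify(title: str) -> str:
        # "Generated" body: plausible, and fine for the happy path.
        return "-".join(title.lower().split())

    def test_slugify_basic():
        # The easy case passes, which is what makes the code look done.
        assert slugify("Hello World") == "hello-world"

    def test_slugify_strips_punctuation():
        # A hand-written edge case: this one fails, because the
        # generated body never strips punctuation. Plausible != correct.
        assert slugify("Hello, World!") == "hello-world"

    if __name__ == "__main__":
        test_slugify_basic()
        test_slugify_strips_punctuation()  # raises AssertionError

The point isn't that the example is hard; it's that the second test only exists because a human thought about the inputs, which is exactly the work I don't want to delegate.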

Everything I've read leads me to believe that LLMs can serviceably write code that would have been boring for you to write in the first place: you know the underlying tech stack, you understand the problem domain, the business logic can be clearly stated, and the code patterns the project uses are in the training data.

That is a lot of constraints! I get the appeal for MVPs and prototyping, or for a contractor jumping between projects for a dozen clients, but for most other scenarios these trade-offs seem troubling. And because the technology is always on, we're expecting an extraordinary amount of vigilance on the part of developers to not get bitten by laziness or by trusting plausible output.

There are some smaller objections I have as well.

Socially

When companies look for ways to build faster with AI, it just raises the question: why is speed the determining factor for your business? In my experience, companies aren't failing to profit because they can't deliver fast enough but because their ideas aren't good enough to move the needle, among other issues. Being able to experiment quickly is valuable, but without careful measurement, thought, and preparation, it turns into flailing.

For example, a friend told me he was asked to "use AI" to come up with UI prototypes, but the product requirements and goals were so unclear that he didn't know how to prompt the agent. I genuinely believe that in situations where companies believe AI can magically speed things up, what they are actually suffering from are process and communication flaws that disempower their R&D teams. AI simply cannot fix that.

It's understandable how we fall into this trap, though. Speed feels good because it can be used to power any other goal, and velocity is a quantifiable metric. But the worst dysfunctions I saw in my time at Calendly had nothing to do with speed and everything to do with internal communication, alignment, and ownership. I'm willing to bet that at most SaaS companies, the place where the most value is lost is in understanding the customer problem and communicating about it, not in the build step. But that's much harder to observe.

Besides, if AI had the level of impact claimed by its proponents, it should be absolutely trivial to determine which companies or engineers are using it and which are not. We aren't seeing that clear differentiator, where those who have not adopted AI are simply unable to compete in the market. I genuinely have not seen any evidence of it.

All of this is separate from the question of "eating our seed corn" and whether or not the improvement in LLM technology will outweigh the undoubtedly negative social impact of disrupted labor markets.

Tools like Anubis and iocaine are designed to give website administrators a way to fend off the crawlers that feed LLMs' hunger for data. Plenty of sites have found that the majority of the traffic they receive comes from crawlers, some recrawling the same page dozens of times a day. Even as a consumer of the web I notice this. I'm seeing more Cloudflare checks to verify I'm human, more things like Anubis and iocaine before a page loads. Not to mention the declining quality of search results, among other things.
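For reference, the polite opt-out that well-behaved crawlers are supposed to honor is a robots.txt file; the user-agent names below are the published ones for OpenAI's and Common Crawl's bots. Tools like Anubis and iocaine exist precisely because plenty of crawlers ignore rules like these:

    # robots.txt: purely advisory; only compliant crawlers will stay out
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /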

From my perspective, making the web an actively more hostile and misinformation-filled space while unemploying large numbers of competent creative workers is not a great tradeoff, even if LLMs improve beyond their current abilities.

Ethically

I won't get into questions of climate impact here, as I'm simply not well-read enough in that area, and there are so many ways we are complicit in legitimate harms that singling out LLMs feels unnecessary.

I will say that, as someone who grew up in the era of the RIAA suing children for using Napster, I find the argument that AI companies should be able to trawl the collected creative works of mankind in the name of scientific advancement or product development to be horseshit. (Meta literally torrented pirated books, in case you were unaware.)

Now, legally, I can see an Intellectual Property argument that no sale to these companies would ever have occurred, and so there is no lost revenue. Or perhaps that the output of LLMs is sufficiently novel to qualify as new work. But isn't that the point?

The intent of Intellectual Property law is to promote creative human works and protect creators. The intent of creating LLMs from all existing text, audio, and visual media is to enable the low-effort creation of derivative works by guiding the LLM to produce something similar to its training data, which you, the user, can deliver as original labor!

I really fail to see this any other way than as disempowering artists, actors, designers, photographers, programmers, and many other professions whose human ingenuity and expertise are now being resold by someone else, while the vendor simultaneously insists that they can be replaced for $20/month. That certainly sounds like robbing someone of future earnings to me. We'll see what the courts do.

In short, completely upending the landscape of creative labor while insisting it is in society's best long-term interest to let it transpire is a hell of a position.

Conclusion

Ultimately, I see LLM proponents assuming a set of values the broader population doesn't hold. I have seen technologists I respect express optimism about the forward progress of AI that ignores the fact that most people do not want to manage prompts or agents. A lot of smart people are captivated by this thinking but unable to imagine the perspective of someone who disagrees or feels differently. Most creative people did not enter their field to guide an incompetent intern.

Is this what my parents felt like? I'm left with uncomfortable questions about my youth. Was I excited by technology simply because it afforded me a chance to have more agency? At the end of the day, I'll respect my engineers if they decide they want to use AI; they're the ones responsible for doing the work. But I hope that I, and the leadership figures above me, don't start telling people which tools are best for doing their jobs.

Further Reading

Skateboarding - A Memoir