Improved Means for Achieving Deteriorated Ends


iLL oMens

I keep getting dragged into conversations around LLMs. Whether professionally or personally, it seems everyone in the tech periphery is suffused with discussion about them. I thought writing a short note on LLMs would be enough for me, but here we are. (Somehow I've been working on this post for 2 months.)

I have personal, technical, and social objections to LLMs, but I think the more important motivator to write is to lay out what I believe computers are for: what harms they can cause and what good they can create. Or at least what I value when thinking about what I want from software. I'm going to do my best to start from the frame of LLMs and maybe write a follow-up about the more general questions. Let's get into it.

Starting Assumptions

I need to touch on some foundational beliefs that inspired the title of this website. The masthead reads "Improved Means for Achieving Deteriorated Ends" in reference to this Aldous Huxley quote:

We are living now, not in the delicious intoxication induced by the early successes of science, but in a rather grisly morning-after, when it has become apparent that what triumphant science has done hitherto is to improve the means for achieving unimproved or actually deteriorated ends.

There is another quote by Alfred North Whitehead that I find inseparable from the Huxley quote. I've paired these quotes together for so long (over 20 years now) that I'm unable to say where I discovered either.

Civilization advances by extending the number of important operations which we can perform without thinking about them.

There is a beautiful tension between these quotes that still animates me. On the one hand, I find it very difficult not to agree with Whitehead. Being able to focus on high-level tasks instead of inconsequential details is a huge part of why most technological advances are valuable. On the other hand, when I read the Huxley quote I feel it in my chest. It isn't a question of whether the science or technology works but a question of what we are trying to achieve. And who we want to empower.

A Few Objections

Disclaimer

  1. I have not used LLMs, whether for summarizing content, writing code, writing prose, or generating media of any kind.
  2. A consistent theme in my critiques, and in others', is that AI cannot possess intent, which leads to a host of risks. It pretends to see, and we yearn for that sight to be real.

TL;DR: Read this critically since I have personally spent little time with LLMs.

Personally

On a personal level, this just isn't the way I wanted to write software. I don't write much code professionally now that I'm an EM, and part of the reason is that I want to be really passionate about the code I write, agonizing over every detail. It makes sense that with that kind of control-freak attitude, LLMs wouldn't have a ton of appeal. But even separate from that, I really enjoy the puzzle of understanding the various abstraction layers in systems and doing things for myself. I think there is value in knowing why every dependency and method is needed. It is difficult to imagine the satisfaction would be the same if I were issuing minor corrections or alterations to "someone else's code".

Professionally

On a professional level, my biggest objection to LLMs is that the ways they fail work against rather than with human talents. LLMs are designed to show me code that looks sensible enough to trust, regardless of whether it actually works. And "mostly works" is the worst kind of code, more expense than asset.

Put simply, the risk of hallucination makes it more difficult to judge whether the code works or not. Tests and types can provide some safety here, but I should probably write the tests myself, carefully checking the generated code.
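To make that concrete, here is a minimal sketch of what "write the tests myself" means in practice. The function and its spec are hypothetical, standing in for something an LLM might generate: it looks sensible, and the hand-written assertions below, derived from my own understanding of the spec rather than from the generated code's behavior, are what actually establish trust.

```python
# Hypothetical generated helper: parse strings like "2h30m" into total
# minutes. Plausible-looking code like this is exactly what needs
# independent verification.
def parse_duration(text: str) -> int:
    total = 0
    num = ""
    for ch in text:
        if ch.isdigit():
            num += ch
        elif ch == "h":
            total += int(num) * 60  # hours contribute 60 minutes each
            num = ""
        elif ch == "m":
            total += int(num)  # minutes contribute directly
            num = ""
    return total

# Hand-written tests, worked out from the spec by hand, not copied from
# whatever the generated code happens to return.
assert parse_duration("2h30m") == 150
assert parse_duration("45m") == 45
assert parse_duration("3h") == 180
```

The point is not that these three assertions are sufficient, but that the expected values came from me, so a hallucinated edge case in the implementation has a chance of being caught rather than rubber-stamped.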

Everything I've read leads me to believe that LLMs can serviceably write code that would have been boring for you to write in the first place: you know the underlying tech stack, you understand the problem domain, the business logic can be stated clearly, and the code patterns the project uses are in the training data.

That is a lot of constraints! I get the appeal for MVPs and prototyping, or for a contractor jumping between projects for a dozen clients, but for most other scenarios these trade-offs seem troubling. And the technology is always on, so we're expecting an extraordinary amount of vigilance on the part of developers to not get bitten by laziness or by trusting plausible output.

There are some smaller objections I have as well.

Socially

To me, companies looking for ways to build faster with AI just raises the question of "why is speed the determining factor for your business?" In my experience, companies aren't failing to profit because they can't deliver fast enough but because their ideas aren't good enough to move the needle, among other issues. Being able to experiment quickly is valuable, but without careful measurement, thought, and preparation, it turns into flailing.

I think measuring speed feels good because it can be used to power any other goal. But the worst dysfunctions I saw in my time at Calendly had nothing to do with speed and everything to do with internal communication, alignment, and ownership. I'm willing to bet that at most SaaS companies, the place where the most value is being lost is in understanding the customer problem and communicating about it, not the build step. But that's definitely harder to measure.

On a social level, I think the push toward AI assumes a set of values the population doesn't hold. I have seen technologists I respect express optimism about the forward progress of AI that ignores the fact that most people do not want to manage prompts or agents. I find that a lot of smart people are captivated by this thinking but unable to imagine the perspective of someone who disagrees or feels differently.

Is this what my parents felt like? I'm left with uncomfortable questions about my youth. Was I excited by technology simply because it afforded me a chance to have more agency? At the end of the day, I'll respect my engineers if they decide they want to use AI. They're the ones responsible for doing the work. But I hope that I, and leadership figures above me, don't start telling people the best tools to use to do their job.

Further Reading

Skateboarding - A Memoir