Disclaimer: I believe that what I’m saying in this post is true to a certain degree, but this sort of logic is often a slippery slope and can miss important details. Take it with a grain of salt; it’s more of a thought-provoking read than a universal claim. Also, it’d be cool if I wasn’t harassed for saying this.
In contemporary “AI” discourse, people often make the point that LLM output cannot be trusted, since it contains hallucinations, often doesn’t handle edge cases properly, causes vulnerabilities, and so on. This is seen as an argument to never use LLM-generated code in production. Others argue that the benefits AI grants them are worth the risk.
These groups are talking past each other. The problem was never AI itself; AI is only the catalyst. To discuss what problems AI causes in software development is to completely miss the point, since those arguments had been milked to death even before LLMs were a thing.
The question is not whether AI can generate garbage, but whether we consider this low quality acceptable. If you follow the “fuck it, we ball” attitude, then you’ll found a startup, hire juniors, and use LLMs. If you’re not a technocrat, you won’t be the first to market, but you’ll build a product that actually works, and you’ll do that by hiring experienced devs and avoiding vibecoding.
This topic is fundamentally philosophical (will you focus on the far future or only the present?), ethical (will you do the right thing if you’re punished for it?), economic (how much are you willing to decrease quality to increase profits?), and political (should we correct for these externalities?). It’s absolutely not a coincidence that those pushing LLMs are typically centrist or right-leaning, while leftists tend to avoid them. The war on AI is little more than a proxy war between moral compasses.
To me, overusing AI, destroying ecosystems, covering up fuck-ups, and hating minorities are all “bad” for the same reason, which I can mostly sum up as a belief that traumatizing others is “bad”. You cannot prove that AI overuse is “bad” to a person who doesn’t think in this framework, like a nihilist who treats other people’s lives as a nuisance. It’s moral relativism all over again.
I explored the same effect yesterday, in relation to the “worse is better” ideology and with a different audience, and every single disagreement I got was someone “proving” that their stance on AI is right because it follows from the goal of optimizing for revenue and a large audience. None of them used the word “framework”, or realized that people may operate in different frameworks and that no framework can be objectively correct.
That’s not to say that AI or ecosystem destruction aren’t “really” bad, but rather that 99% of AI discourse is founded on proof by assertion and won’t change anyone’s opinion. Discussing the dangers of AI while ignoring the culture that allowed inhumane frameworks to manifest is like tip-toeing around living costs when discussing demographic shifts. It’s intellectually stimulating, but utterly meaningless.
I absolutely believe that educating people on the problems caused by widespread AI use is necessary, but I don’t think you can do it without ultimately appealing to feelings like fairness and consideration. Instead of hating on AI, we should teach people empathy in a way they will understand, though you can’t say that outright, as otherwise “facts don’t care about your feelings” becomes a trivial thought-terminating cliché.
All of that is to say: shaping public opinion like this is enormously complicated, probably requires a polsci degree, and gets dangerously close to manipulation. But I think we’d be better off admitting that discussing AI as a technical topic on developer blogs won’t help, and that we all need to learn some rhetoric.