AIs, LLMs, Agents — They’re Useful. Humans Still Matter.

AIs, LLMs, agents — they’re all very useful. Genuinely. They are powerful tools, and anyone pretending otherwise is either lying or hasn’t shipped anything recently.

But humans still have a place in the office.

If I ask you a question and your response is a copy-paste of an LLM output, you have stripped our relationship of any meaning. You've cheapened the work you do, the trust between us, and the way you expect other people to interact with the products you build. Especially if you don't understand the text you just pasted back at me.

At that point, why did I ask you the question? Why wouldn’t I have just asked the LLM myself?

Human interaction and reputation are cornerstones of society. Expertise isn't just about producing text; it's about judgment, accountability, and context. If every question I ask you could just as well be asked of an AI, then the uncomfortable follow-up is obvious: why do you need to exist in that loop at all?

This isn’t an anti-AI argument. It’s an anti-abdication argument.

I understand that in this new world of vibe coding, the only thing that seems to matter is whether the resulting product “works.” I understand the business appeal of that. Speed matters. Output matters. Demos matter. Revenue matters.

But there is still a group of people who have to hot-glue the dogshit you just vomited onto an SSD into an actual ecosystem that runs in production.

Those people live downstream.

When your tower of earwax-soaked Q-tips throws a 500, or DDoSes my Redis instance because you “forgot” to configure a concurrency limit on your Lambda, I don’t get the luxury of vibes. I get paged. I get blamed. I get to explain to leadership why the system is on fire.
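To make that failure mode concrete, here's a minimal sketch of the missing guardrail, assuming Python with boto3 and a hypothetical function name. Reserved concurrency caps how many copies of a Lambda can run at once, so a traffic spike gets throttled by AWS instead of fanning out into unbounded connections against whatever sits downstream:

```python
# A minimal sketch, not production code: cap a Lambda's reserved
# concurrency so a burst of invocations gets throttled instead of
# fanning out into unbounded connections against a downstream Redis.
# Assumes boto3 is installed, AWS credentials are configured, and a
# function named "ingest-events" exists (the name is hypothetical).
import boto3

lambda_client = boto3.client("lambda")

# Allow at most 20 concurrent executions; invocation 21 is throttled
# by the Lambda service rather than opening connection 21 to Redis.
lambda_client.put_function_concurrency(
    FunctionName="ingest-events",
    ReservedConcurrentExecutions=20,
)
```

One line of configuration. But it only exists if a human thought about the blast radius first.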

So I am forced to treat all AI-generated code as potentially hostile. Not just external threats — internal ones. Not just attackers — coworkers.

And when your accidental malware starts acting up, I can’t ask you what’s happening, because you don’t know. You didn’t reason about the system. You didn’t design it. You prompted it. So now you’re asking the LLM what the LLM meant — which brings us right back to the original question: why are you in the loop at all?

This is the real cost of blind AI adoption, and it’s not theoretical. It’s operational. It’s social. It’s cultural.

AI is incredible at generating answers. Humans are still responsible for understanding them.

If you can’t explain the code you ship, you can’t debug it. If you can’t reason about the system, you can’t operate it. If you can’t be accountable for decisions, you’re not an engineer — you’re a courier.

Use LLMs. Abuse them, even. But don’t replace thinking with copying. Don’t replace understanding with output. And don’t confuse “it worked once” with “it deserves to exist in production.”

The future isn’t humans versus AI.

It’s humans who think, augmented by AI — and everyone else quietly removing themselves from relevance.