I use AI generously in my life. My recent projects are vibe-coded with Claude Code and OpenAI Codex, and I frequently ask Claude to play editor for my blog.

There are many excellent discussions about what Large Language Models mean for the future of my chosen profession, but I think most miss the point. Eventually, LLMs — or some future architecture — will become good at maths and science. Once someone figures out how to hook an LLM to an Atlas, the physical domain won't be free from AI automation either.

Nobody can predict the future, so we don't know what job or trade is safe from automation. Given a long enough horizon — 10 years? 20 years? — the answer may very well be: most things.

Where does that leave us? Surely, WALL-E isn't an appealing vision for most of us.

End-stage humanity in WALL-E

I think the pertinent question to ask is:

What aspects of humanity are so fundamentally human that we don't want to automate them?

The question is not what can be automated, because probably everything can be. The better question is: what don't we want automated?

That, to me, feels like a much better question to ask ourselves as we buckle up for what's sure to be a wild ride.

For now, I'm going to stop worrying about my job security and instead follow this sage advice:

"When in doubt, be yourself." (Simon Sinek, https://simonsinek.com/quotes/)

Some working principles I’m trying to live by (subject to change):

  • I value the struggle of learning something new, whether a skill, a language, or facts, even if AI can do it better or faster. The struggle and the mastery it brings are fulfilling in themselves.

  • AI has no role in my personal relationships.

  • I'm cautious about letting AI influence my judgement, and about delegating my judgement to it.