On Vibe Coding

All skills are obsolesced in 6 months and your only salvation is praying to Go–I mean, prompting GPT.

[Prophets of AGI, It’s Always Taken Reps, Managerial Hell, The Contrarian’s Trick, Choose Greatness]

2025.04.20

XCVI

Thesis: The spread of the belief that you should give up on getting good at things because AI will be able to do it better creates enormous economic opportunity for those who disagree.

[Prophets of AGI]

If you listen to a certain type of person online, you’ll hear the gloomy message that the world is ending in 6 months and the only way to save yourself is to pray to Go–I mean, to prompt ChatGPT.

Dramatics aside, there really is a growing sentiment that AI will replace most knowledge worker jobs in the next year. With the never-ending IV drip of hype around AI & AI Agents, the source of this belief is understandable: if you take what most generative AI companies spin out at face value, it might seem obviously true.

Unfortunately, there is a heinous conclusion drawn from this belief that has cancerous effects and infantilizes many of the believers, as is the case with most pseudo-religious manias:

The Fallacy of Automation: You should stop investing in any skill that AI is getting better at.

The follow-up to the fallacy is that you should focus on getting better at using AI tools & prompting, and nothing else. While it won’t be critical to my counterargument, it is funny that the biggest proponents of such a belief tend to be the ones selling or investing in AI tools (OAI, Anthropic, YC, etc.).

I’ve heard the argument made for software engineering, the bulk of tasks done by entry-level bankers and consultants, many sales motions, various finance tasks, writing, and pretty much any role that is digitized and information-based. Of course, I’ll focus on software, as that is what I am most exposed to.

I could go on about a dozen reasons why this belief is dangerous, but I’ll focus on one:

Even if AI is as good as the average worker at a given skill, having a deep understanding of that skill will still lead you to outperform your competition. 

Note that while I don’t necessarily believe this, I’m giving the AI Prophets the benefit of the doubt and assuming that AI will become as good as the average worker at certain skills & occupations at some point ‘soon.’

My whole argument is predicated on the belief that you get better at something through focused practice, so we’ll start there.

[It’s Always Taken Reps]

First off, you (a human) get really good at things through focused repetitions, not by having the thing done for you. See Deep Work, Atomic Habits, The Art of Learning, literally any biography of someone who was good at something, or even some of my blog posts on the matter.

When you keep doing a thing and you’re paying attention to it, you tend to get good at it. Anyone who is good at anything will tell you that.

Unfortunately, you don’t get good at cooking by having a Michelin-grade chef cook for you. Maybe you can get better faster by standing in the kitchen, watching him cook, and then applying some of what you saw the next time you cook yourself. But you still have to go and cook, or it won’t matter.

The point is, you don’t get better at coding by “vibe coding.”* AI can certainly help you get better at coding (I’ve used it a ton to teach me, and I love it for that), but if all you’re doing is prompting the AI, running the code, and seeing what happens with no effort to understand it, you should not expect to get much better at coding.

The same goes for any skill that you abstract yourself away from with AI. It’s likely a sliding scale: the more you depend on the AI to do the work for you, the more slowly you will improve.

So, we have a whole group of people (check the comments on this post for a few) who either don’t understand this or explicitly believe that it doesn’t matter if they get better at a thing.

While the belief is corrosive, it’s also nice for us, because it means less competition.

[Managerial Hell]

Okay, but even if I don’t get better at some skill, I don’t actually need to, right? I’m effectively just a project manager overseeing the AI, just like I would oversee a human!

As someone who has paid coders to do things for me both before and after I knew how to code, I will tell you the experience is much more pleasant when you know how to code. You can scope the project, you have an idea of how long it should take, and you know when the programmer is being unreasonable.

Of course, AI has different incentive structures than paid programmers do, but it would be naive to think that its incentive structure is totally aligned with what you actually want.

That’s not an argument that the AI is malicious; it’s an argument about your own naivety. If you don’t know how to code, you don’t actually know what variables or context might matter to begin with, and even if you do know the relevant context, how would you know that the AI actually made decisions aligned with it?*

Sure, maybe if AI becomes some omnipotent god in the next 12 months that can understand your exact intentions through telepathy and picks the optimal decision based on what you want 100% of the time, then it doesn’t matter.

But even in the quite bullish case that AI becomes reliably as good as the average engineer in 12 months, there are still things you need to tell it to do, and you still need to make sure it actually gets them right.

Kubrick was an exceptional film director. He didn’t have to know how to use a camera or write a script, but I would bet everything I have that he would be a worse director if he were not exceptional at those things.

Even if you don’t actually use the skill you learn now, you will be better equipped to guide someone working in that skill if you are genuinely good at it and understand it.

[The Contrarian’s Trick]

Even if the AI can be both the engineer and the project manager, you still need to understand something uncommon about the underlying objective, or at the very least approach it in an unconventional way, to get outsized returns.

We can call it the Contrarian’s Trick; I’ve written something about it here.

Ignoring even the possibility of LLM Model Collapse, there is still a very real coalescence around some sort of average when you depend on AI to do the driving for you. And if you use autocomplete, it can be very easy to slip into a state where it is the one driving.

It might be something of a simplification to say that it generates the “average” of the code it has seen before, but even as the models get more complex, the intuition seems to hold. Regardless, you certainly won’t get it to generate anything that someone else with the same knowledge as you couldn’t get it to generate.

In Beating the Averages, Paul Graham writes about the unfair advantage he had at Viaweb by leveraging the awesome power of metaprogramming in Lisp. The thing is, it gave him such a strong advantage in a competitive market precisely because no one else was using it.

Right now, any technical advantage that BirdDog has does not come from the fact that I am a “better programmer.” It comes from the fact that we’ve made very different decisions and trade-offs than most or all of our peers.

Some of these decisions involve general & objective trade-offs; others simply play well into my disposition.* If I never actually programmed and just vibe coded, I wouldn’t even know what my engineering disposition was!

If we did what we were ‘supposed’ to do, we would have the outcome we were supposed to have, not the one that we do have. Which, so far, is a pretty good outcome.

But if we kept abstracting ourselves further and further away from the skill and the reality of what was going on, and just trusted the LLM, we’d have the same product everyone else does.

*Disposition: Our 3NF DB would be annoying for some to work with because it can be strange to conceptualize. For whatever reason, I can conceptualize it pretty easily, so the cost is low compared to the benefit it yields by removing a whole class of errors that we would otherwise have to worry about.
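For anyone unfamiliar with what that class of errors looks like, here is a minimal sketch of the update anomaly that normalization designs away. It’s illustrative only: Python rather than SQL, with made-up names, and nothing to do with BirdDog’s actual schema.

```python
# Illustrative sketch: the update anomaly a normalized (3NF-style) schema removes.

# Denormalized: every order row carries its own copy of the customer's email.
denormalized_orders = [
    {"order_id": 1, "customer": "Ada", "email": "ada@example.com", "total": 40},
    {"order_id": 2, "customer": "Ada", "email": "ada@example.com", "total": 15},
]

# Changing the email means touching every row; miss one and the data quietly
# disagrees with itself.
denormalized_orders[0]["email"] = "ada@newmail.com"
assert denormalized_orders[1]["email"] == "ada@example.com"  # stale copy left behind

# Normalized: the email lives in exactly one place; orders hold only a reference.
customers = {1: {"name": "Ada", "email": "ada@example.com"}}
orders = [
    {"order_id": 1, "customer_id": 1, "total": 40},
    {"order_id": 2, "customer_id": 1, "total": 15},
]

# One update, and every order that points at this customer agrees by construction.
customers[1]["email"] = "ada@newmail.com"
```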

[Choose Greatness]

All of this is to say, please don’t give up on learning things because some people are shouting about all skills being obsolesced by some code that gives Deus Ex Machina a more literal meaning than it originally had.

It’s a dangerous belief, and it feels almost cultish to me. And don’t forget, my arguments here assume the AI is much better than it currently is: as good as the people who don’t understand the skills it automates seem to think it is.

Maybe if your intention was just to be average at something, you can give up now and let the AI handle it for you at some undetermined point in the future. 

Really, though, the point was never to be average at something. The point was always to be as great as you can at it.

Live Deeply,