On Gullibility

How I let LLMs con me 3 times this week

2026.04.12

XCLVII

[Torrent of Bullshit; Fooled by a Sycophant; Fight Club but it's AI Slop; The Worst Way to Handle an OOM Error; It's Still Not Good at Engineering; Touch Grass; A Path Forward]

Thesis: With an increasing volume of noisy information, we all should be increasingly skeptical; however, we’re becoming increasingly gullible, instead.

[Torrent of Bullshit]

There is an overwhelming and unrelenting torrent of information flowing across every place we consume information. This is a problem, because most of the ‘information’ is total bullshit.

While the volume of total available 'information' goes up, the quality of that information is not going up. If anything, the quality of that information is going down.*

The abundance of increasingly noisy information should make us more skeptical, not less. But for some reason, it seems that our collective gullibility is rapidly rising. I’ve been exploring the trend away from critical thinking and rationality quite intensely. [here & here & here]

My argument today is that LLMs are only making this issue worse.

I let myself get conned by an LLM, not once, not twice, but three times this week. Keep in mind, I am a person who views himself as a cautious user of LLMs, especially when compared with 95% of engineers and founders I speak to.

Still, the ease with which I was deceived by a text prediction engine is startling.

To be clear, I take full responsibility for the errors made in each of the 3 stories I’m about to tell (though in the second story I didn’t really make an error beyond briefly enjoying AI slop generated to be enjoyed).

The point of writing this, then, is to remind you that if you trust these things (which is really, really easy to do), they cause more harm than good, even if you’re generally intentional and guarded about your use. There’s probably a way to use them appropriately, but as someone who thought he was doing exactly that, I can tell you I’m still missing the mark, and as a result, I will be reducing my usage further.

*If the dead internet theory is directionally correct, and you believe bot content is, on average, lower quality than human produced content, the decrease in average quality of content follows.

[Fooled by a Sycophant]

My startup, BirdDog, had an inbound meeting from the CMO of a company with >$300M in rev. On top of that, the firm is in a similar space and is acquisitive. So, for a lot of reasons, this was a very important call.

Prior to the meeting, I was reading some of the CMO's LinkedIn posts, and using my trusty sidekick ChatGPT (read: gippity) to do some research on the company.

Together, Gippity and I discovered that the firm received investment from a PE fund we’ll call PrivateCo three years ago. Then, Gippity and I strung together quite the logical-sounding narrative in a conversation that went something like this:

Me: Is the company perhaps a platform of some sort for PrivateCo, I wonder? [this means that it is a company the PE firm uses to combine other companies into]

Gippity: But of course, it acquired another middling portco of PrivateCo.

Me: Hm, interesting info Gippity, how is PrivateCo doing? Did they buy a bunch of dogs in 2021 like every other pe shop? This company I'm talking to is mentioning AI a lot, is this a survival play? Ride or die on AI? And is PrivateCo doing any continuation funds?

Gippity: Keen eye, Noah! PrivateCo was actually complaining about valuations of secondaries last year. They're not in as dire straits as some of the other PE shops, but they did make some hefty acquisitions during peak valuations, so they're definitely not in a great spot either. You can position BirdDog as a way to overcome the pressure they're obviously seeing from PrivateCo to leverage AI to hit aggressive growth targets!

I get on the phone with the CMO of the company. He's a very hot lead & has already heard about us.

In the call, however, I hinted at the thesis that Gippity helped me come up with, asking the CMO about the longer term view behind PrivateCo's investment.

He pauses, and looks at me, and says, "Who's PrivateCo?"

My heart drops. "The PE firm that invested in you a few years ago."

"Oh, them! Yeah, we hardly hear about them."

He went on to explain that they really only ever talk about another PE firm that invested in them earlier. In other words, my narrative was completely off. PrivateCo hardly had any strategic impact on the CMO’s company.

Thankfully, the conversation ended well regardless; I was lucky he was in good spirits, already quite qualified, and used the question to further explain his specific pain.

If I had been unlucky, or clunkier and more presumptive in the way I asked the question, I very easily could've come across as a total dumbass and botched the call.

To be clear, this is a pretty normal class of error for a human: stringing together a narrative that isn't actually real but being very convinced of it based on scraps of evidence. Taleb writes about it in Fooled by Randomness a lot.

The bit that's frustrating, the bit that I'm writing about, is how LLMs make it so much easier to believe these nonsense narratives. These machines are not critical thinkers; they're not thinkers at all!

In this case, the LLM basically functioned as an echo chamber where I could grow and compound an incorrect idea with increasing confidence.

[Fight Club but it's AI Slop]

A friend sent me an X article about 25 life hacks a guy learned last year.

I read the article, and by the standards of a mid-20s male, it's, for lack of better words, totally awesome. It plays into a gamut of male fantasies concurrently: it describes how you should run around like you have a bunch of money, travel all the time, look hot, go on dangerous adventures, and always be the coolest guy in the room.

But, the more I thought about it, the more I realized even though it seemed totally awesome, it was totally fucking bullshit. And that the guy who wrote it is very likely not a real person at all.

The article itself basically has no substance or anecdotes or any evidence of lived experience at all. The 'story' starts and stops with this guy drinking a criminal amount of coffee on a plane, a pretty girl next to him, with some vague notion of the two of them having done some crime together over the last year, the craziest year of his life. That's an awesome exposition. What crime? What made the year crazy? I want to know, and I want to know now!

But literally nothing happens with the exposition. Instead, he gives 25 two- or three-sentence rules about acting like you're rich at airports and seeming interesting and cool while also actually concurrently being broke at the same time (apparently recklessly spending all of your money on $50 tips to airport bartenders is a life hack). (you can read the article here)

The problem with this is that even though there's literally 0 substance, you can’t help but feel AWESOME reading it. Yes, of course you want to be untraceable and run from the law in airports and on beaches in linen with a beautiful woman in a foreign country.

Then, if you click through and check out this guy's page, you can see ai generated photos of a generic but ‘ideal’ blonde haired bronzed ripped guy surrounded by women.

And if you read one of his longer articles (please, for the love of god, just skim it, as I did), it follows a classic grift pattern:

  • Paints a desirable image of who you want to be with little substance but tons of emotional imagery; he's vaguely on a beach, surfing, feeling alive, with the girl of his dreams, writing about how awesome and alive he feels and how ripped he is

  • He then goes on to contrast this with middle-aged, overweight, single men with a job they don't like (his mark). He describes them derogatorily as half-men and contrasts them sharply with who 'he' is and who they want to be

  • Finally, he lets them know that it's not too late! These poor bastards can transform and leave everything behind and be a WOLF like him. All they need to do is pay $50/mo to join his online community

Now, I obviously can't 'know' that this shit is a pile of AI Slop, but there's far more reason to believe that it’s AI generated than not:

  • Many of the photos are obviously AI generated

  • He shares no lived experience other than vague gestures at being in a bar in Copenhagen or surfing at a beach

  • There are plenty of telltale AI text patterns, like "This isn't x, it's y."*

One of Kendrick's Drake disses is becoming an increasingly valid heuristic for navigating the modern world:

Why believe you? You never gave us nothin’ to believe in.

- Kendrick

Yes, grifts and scams existed before AI, and they will exist after. My concern is we're becoming more prone to them for two reasons:

  1. We have a society increasingly trained and conditioned to evaluate information emotionally rather than rationally

  2. Gen AI makes it even easier to produce pretty sounding but shallow bullshit at scale

The trouble is that it’s very easy to fall into this trap.

I consider the guy who originally sent me this post to be one of the smartest and most competent people I know, and I certainly think he's quite rational and generally skeptical. Now, I’m certain he didn't fall for the actual con of paying $50 to the grifter and probably didn't even click through to the guy's website, but he, like 99.99% of us, was spending time in a space (the internet) where this kind of content abounds.

And in all honesty, without digital hermitude, it may be impossible to avoid.

*I'm so sick of AI written slop that I avoid that pattern like the plague.

[The Worst Way to Handle an OOM Error]

The third LLM con that I experienced this week was when I had a critical issue on a server. There was a pretty basic Out of Memory (OOM) error, but I was in between meetings and had high cortisol, so I panicked and asked an LLM how I should solve it.

Now, if you run out of memory on a computer, there are two basic solutions:

  1. You can free up some of the existing memory by deleting things

  2. You can increase memory by, well, adding more memory

On a cloud machine, which this was, the second option is usually trivial, and again, in this case, it was. Going with the trivial path would've been a great option, because I would've been able to rapidly resolve the issue. Then, I could calmly evaluate WHY I ran out of memory in the first place and decide if I needed to change something engineering-wise or if all was well.
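For anyone hitting the same wall, the safer first move is to diagnose before deleting anything. A minimal sketch on a Linux box (the log path here is illustrative, not from my actual server):

```shell
# Step 1: figure out what's actually exhausted -- failures that look
# like "out of memory" are sometimes really a full disk.
free -h 2>/dev/null || true       # RAM/swap usage (Linux)
ps aux --sort=-%mem | head -n 5   # top memory consumers
df -h                             # per-filesystem disk usage

# Step 2: only after you know the culprit, look for safe things to
# clear (logs, caches) -- adjust the path for your machine:
du -sh /var/log/* 2>/dev/null | sort -rh | head -n 5
```

On most cloud providers, resizing the instance (more RAM) or growing the volume is a reversible knob; `rm` is not, so deletion should come last, not first.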

But, if you can guess where this is going, I didn't add memory. Without thinking, I took the LLM's advice and ripped a command to delete some stuff to clear up memory.

Well, I deleted the environment, which, without getting into the weeds, was very much the wrong fucking thing to delete. This decision cost me about 20x more time to fix the issue than it would have if I had just increased memory.

So, I made the class of mistake that seems to be increasingly common in our society: working myself up into an emotional state, shutting down my own reasoning, and allowing something else to make a decision for me.

Having easy access to an LLM obviously made it so much easier to make this kind of mistake: I can just stream-of-consciousness into a text box and go with whatever it says!

That’s not to say I would’ve made the right decision without the LLM; rather, I didn’t even give myself a chance to slow down and make the right decision.

[It's Still Not Good at Engineering]

So we’ve got three instances of me buying some bullshit that came from an LLM this week. If I think I’m using these things rationally, I can’t imagine what it’s like to fully depend on these things for everything.

And since I haven’t written about it in a minute, and everyone is shouting about it from the rooftops even louder than before, I absolutely have to say it: I still don’t believe LLMs are good at engineering.

I tried, for the life of me, to be impressed by Claude when coding this week, but it didn’t do it.

Vaguely related photo of me in a Waymo in Austin; autonomous cars are a use of AI I am excited about

I'm undertaking a large project for BirdDog right now. We're rewriting our front end from scratch for the PLG motion I wrote about last week. It'll be server side rendered, and I have the intention of making it blazingly fast.

If you know me, I have an allergy to JS and to client-side logic and frameworks in general, and I'm an amateur with CSS and HTML. So, I thought I would have Claude 'one-shot' all the actual page templates for me. To review the thorough steps I took:

  • I spent hours thinking about the architecture and maybe 10 hours writing out a spec

  • I shared the data structure that would be fed into the front end

  • I described every button and what I wanted it to do

  • I fed in screenshots of our existing front end for design reference, and wrote out element by element where I wanted it to be the same / different.

In other words, this LLM had the benefit of a front end refined through two years of Jack's and my trial and error, plus a spec that explained pretty much exactly what I wanted. It had to write some templates for some HTML / CSS / JS pages, which is very close to the thing that it’s supposedly really good at.

Do you think it was able to one shot the whole project?

No, it wasn't even able to one shot the LOGIN PAGE. It looked like it did at first, but it did not.

Sure, in some ways the work it did was impressive, but it is such a far cry from what I’m hearing people say it can do that I feel like I’m going insane.

A couple counterarguments I'm predicting:

  • "But bro, you didn't use Claude Code. It's so much better than using Claude in chat."

    • they literally use the same models

    • Even if Claude Code is “so much better” than just using Claude, the same people saying this have been saying that every release is a game changer for the last 3 years*

  • "You’re the problem, Noah! You did too much work. Just trust in the vibes and let Claude cook"

So no, I'm not going to trust in the vibes.

*I’m not bothering to cite someone, but an interesting project would be to track the number of articles that claimed coding is dead over the last 3 years and see how many times whatever thing that had just come out was claimed as the cause.

[Touch Grass]

Listen, if I thought LLMs were completely and utterly useless, I wouldn't use them enough to have the two usage based horror stories I shared with you above.

My concern is that there is this shared delusion that these things are a silver bullet for literally everything, especially software engineering.

Quite to the contrary, they're just accelerating the problems of living in a world flooded with more and more information in which we’re concurrently getting less and less skeptical and more and more gullible.

As 'information' becomes easier to produce at scale with no evidence that it's any more meaningful than it was when it was harder to produce at scale, we should be becoming INCREASINGLY skeptical, not less.

But that's really hard when LLMs, such a friendly and easy source of information, are incentivized to do whatever it takes to make you use them more, including ingratiating themselves with you and making you FEEL productive rather than actually making you more productive.

And do NOT for a SECOND think that me saying an LLM has an incentive structure is me ascribing any level of intelligence or consciousness or intent to it. This could be a post on its own, but as Dawkins articulates succinctly in The Selfish Gene in relation to genes and memes, or as Yuval Noah Harari posits in relation to wheat domesticating man in Sapiens, non-sentient entities can have incentive structures and reward functions that get optimized for regardless of their lack of sentience.

Not that this point actually matters, because maximizing LLM usage is quite obviously the incentive of Anthropic and OpenAI and just about everyone else in the LLM value chain. A value chain full of companies that are hemorrhaging money and have a documented history of puffery, deceit, and outright lies.

These companies get paid when you use the LLM more, and as I am trying to articulate with the anecdotes above, your usage of an LLM is not positively correlated with your success or the value of what you are producing.

If you also think we’re in the middle of a bubble and are concerned about the decay of rationality in the west and the world at large and like building businesses, there is literally no reason why you shouldn’t subscribe, we’re made for each other.

[A Path Forward]

I'd say that if you're going to use LLMs at all, you have to be incredibly careful. But I thought I WAS careful and I still made some pretty heinous mistakes.

As many of you know, I already won't even let AI into my IDE with something like Cursor. Instead, if I use it, I have a separate window I snap to when I have a question. Even that is clearly too easy to access.

I'm going to go back to something I didn't even realize I stopped doing: for at least the first hour of engineering each day, I'm not even going to let myself open up an LLM chat window. You'd be surprised at how much you can get done this way; problems you forgot were easy to solve on your own become easy to solve again. I'm actually entirely confident I will move faster this way, and likely won’t open the chat window until well after that first hour.

In regards to the issue with the AI slop post? I'm still going to continue filtering information aggressively. And in regards to the delusional research I did with the LLM about the CMO's company? I'm going to strictly use the LLM for information retrieval, and not feed it any narratives along the way, or listen to any narratives it attempts to string together.

As information becomes increasingly easy to produce, we need to be more skeptical and more rational, not less.

Live Deeply,