On The Automated Turkey Problem
What turkeys can teach us about automating things that shouldn’t exist.
2025.01.26
LXXXIV
If you only look at things that are happening around you to determine how the world works, you’ve fallen victim to the turkey problem.
Watching what is going on around you is important, but unless you are also asking why things happen the way they do, you can quickly jump to a lot of bad conclusions.
Because it is easy to jump to such bad conclusions and because AI makes it easy to automate things, we’re seeing a lot of things get automated simply because they exist, not because they should exist.
Asking “why” and thinking logically can help you avoid this trap.
The Turkey Problem
There exists a turkey who lives on a farm. A human has been feeding it for the last 80 days.
The turkey thinks that the human is quite fond of him, a sort of benevolent patron or friend. After all, ALL of the evidence supports this! Every day so far, the farmer has fed him!
Gobble, gobble.
As a matter of fact, for every day that passes, the turkey feels MORE sure that the farmer will feed him on the next day—there’s MORE evidence now!
And then, on day 100, when the turkey is at the peak of his confidence in the farmer's benevolence, he is taken to a shed and slaughtered.
The problem could be comically reformulated as the “oats” problem with pigs. Please only click that link if you would normally spend 3 minutes watching a picture of two pigs as they argue with each other about oats.
Induction
Unfortunately for the turkey, it was only looking at a pattern, not at any measure of cause and effect.
“Past performance is no guarantee of future results.”
Meaning, it saw the man feed it on day one and day two and day three, so it decided that it was likely that the man would feed it on day four. Every time this pattern held, the turkey grew more confident that it would continue to hold.
This is the problem of induction: if you only reason from past examples, you never actually know anything with certainty.
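To see how seductive the pattern is, you can put numbers on the turkey's confidence. One classic way to model naive induction is Laplace's rule of succession: after n consecutive days of being fed, the estimated probability of being fed tomorrow is (n + 1) / (n + 2). This is my choice of model for illustration, not something the turkey literally computes. A minimal sketch:

```python
# Laplace's rule of succession: after n consecutive successes
# (days fed), the naive inductive estimate of one more success
# is (n + 1) / (n + 2).
def confidence_after(n_days_fed: int) -> float:
    return (n_days_fed + 1) / (n_days_fed + 2)

for day in [1, 10, 80, 99]:
    print(f"day {day}: {confidence_after(day):.1%} sure of breakfast tomorrow")

# day 1: 66.7% sure of breakfast tomorrow
# day 10: 91.7% sure of breakfast tomorrow
# day 80: 98.8% sure of breakfast tomorrow
# day 99: 99.0% sure of breakfast tomorrow
```

The punchline: the model is at its most confident on day 99, the eve of the slaughter. Nothing in the feeding data encodes the farmer's intentions.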
The Human Problem
Before you say the turkey problem doesn't apply to humans, I’d check out this collection of famous last words:
“The last two guys I got into bar fights with didn’t have knives!”
“Sears has been around for over a hundred years, of course it’s a great stock to buy!”
“The last time I drank a bottle of wine and went on a joy ride, I didn’t hit anyone!”
“I’ve always been able to raise more capital!”
“The last frog I touched wasn’t poisonous!”
“OpenAI has always lowered the cost of its API!”*
“I’ve never seen a cop on this road, so I can go 2x the speed limit!”**
“Well, I mean I’ve never died before! So I guess I can’t die in the future!”
As you can see, there are a lot of ways we can easily draw absurd conclusions from past evidence alone.
*I am not actively betting on the cost of the API going up, but I’m certainly not building a business that relies on it continuing to go down or even staying as low as it is, either. In September, OpenAI said they would lose $5B in 2024.
**This is not an endorsement for speed limits.
Automate Everything
One trap inductive thinking will lead you into is believing that you should automate anything that currently exists.
With AI making it easier to automate things, it does not seem like a bad bet to pick an industry and slap AI on it to make the thing go faster. People already do the task; why would they not want to do it faster?
This is natural and not always wrong, but it’s not always right, either. The issue arises if you don’t ask why the thing exists in the first place. Maybe it doesn’t need to exist.
Never automate something that can be eliminated, and never delegate something that can be automated or streamlined. Otherwise, you waste someone else's time instead of your own, which now wastes your hard-earned cash. How's that for incentive to be effective and efficient?
In my head, I’m picturing a Rube Goldberg machine of increasingly absurd complexity being constructed to do something like dispense a particular amount of water for a cat every 4 hours when there is a constantly flowing fountain two feet away.
The AI SDR
As an example of automating potentially the wrong thing in BirdDog’s space, a bunch of people who don’t know anything about sales* are building AI SDRs.
An SDR (sales development rep) is a sales position tasked with booking and/or qualifying** meetings for a more senior salesperson, typically an Account Executive (AE), who runs the bulk of the sales process and closes the contract.
The SDR position sucks because it involves cold calling and emailing and doing anything you can to get meetings.
What’s not obvious, though, is that the position doesn’t necessarily need to exist in the first place. A sales team exists to generate revenue for a company. A sales team does not exist to have an SDR team.
The first sales team I worked with in early 2024 had either just fired their SDRs or promoted them all to full-cycle AEs, meaning the AEs did their own prospecting and closing without any SDRs. Here is a Reddit post from over a year ago debating the merits of not even having the SDR role. Here is a post from well before the AI SDR phenomenon was on the radar (2021) discussing the drawbacks of the SDR role. And here is an article from 2018 explaining the issues with the position.
SDRs typically optimize for meetings booked, and they have a notoriously bad reputation for booking meetings that don’t convert into revenue. Might it make more sense to take a step back, realize that the sales team cares about revenue, not meetings booked, and help them optimize for that?
In short, while the position is common and therefore inductively justifiable, it is not obvious that it needs to exist or even should exist in all cases, let alone be optimized and automated.
One of the biggest traps for smart engineers is optimizing something that shouldn't exist.
*Jack and I also fall into the “don’t know anything about sales” category, but I’d like to think we are good listeners.
**Making sure that the person might actually buy the product or service
An Aside on LLMs
A recursive aside on LLMs: they suffer from the very issue that trips up people automating things that shouldn’t exist. They are pattern recognizers, not knowledge gatherers. Perhaps more on this later.
Tumors & Bubbles
A lot of things shouldn’t exist.
However, if instead of asking “why,” you think like a turkey and go exclusively off of the past, you will not realize that these things should not exist. You will find yourself optimizing the status quo, not the ideal state.
Since AI makes it easier to automate things, we will continue to see more and more automations that should not exist.
Luckily, unlike a cancer, whose growth terminates with the death of the host, such things will likely be more inclined to pop like a bubble.
Live Deeply,