On Autocomplete

Or how AI can make you dumber.

2025.02.02

LXXXV

Surprise, surprise: AI can not only make you smarter, it can also make you dumber.

No More Autocomplete

I turned off the autocomplete function in my code editor this weekend. It was making me a worse programmer. 

An expert is good at looking at a problem in a domain, quickly coming up with a number of solutions, and selecting the most acceptable one based on the situation. 

When you use an AI tool at the start of a problem solving process, you are effectively starting with one or two solutions that the AI came up with for you. 

Now, your role is that of critic rather than creator. With an option already in front of you, it is easier to analyze and edit that option than it is to create and evaluate others. If that option is “good enough,” on average, you will likely go with it.

...the first rule–he calls it the rule of all rules–is the art of challenging what is appealing.

Christopher Hitchens

In this way, you will keep creating things that are “okay.” This will make you worse over time.

I am still using AI to help me, but I am being much more intentional about it. It is phenomenal leverage, but it is dangerous if you do not use it carefully.

Experts

I’ve written about experts a number of times before. A working definition: 

An expert can efficiently and accurately identify and manage the risks of decisions in a given domain.

The relevant consequences of this are twofold:

  1. Given a blank canvas, the expert can come up with a number of decisions and select the most appropriate.

  2. Given an existing solution, the expert can analyze the pros and the cons.

The second case is incredibly valuable if you are getting advice from an expert–you can see how they think in relation to the solution you came up with and benefit from their understanding of cause and effect. 

However, if you are functioning as the expert and creating something, you will get better results by operating in the first case more often. That being said, it is cognitively more expensive to do so, and if you are handed an “okay” solution, even as an expert, there is a very real temptation to just go with it.

By the way, I am not an expert at coding. I am intermediate at best, novice at worst. That being said, the definition of an expert above is both aspirational and practical. Meaning, an expert ought to be efficient at thinking as outlined, but if you are not yet efficient at thinking like that, you become efficient at it through repetition and trial and error.

You become an expert by thinking like one.

AI Autocomplete

With tools like Cursor and GitHub Copilot, it is very common for programmers to have a visible, auto-generated solution at all times (“the easy way out”).

The colored function is one we use in prod; the grayed-out text is Cursor’s autocomplete suggestion. Right now, we have no use at all for a “safe_score_batch.” The next image shows what happens when I accept the suggestion.

The suggested completion for this function we don’t need is “okay” to “bad”: it offers the benefit of sharing a resource (one that could potentially be problematic to share) but does nothing to make the function operate on an actual “batch.”
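
Since I can’t reproduce the screenshots here, here is a minimal sketch in Python of the shape of that suggestion; everything except the name “safe_score_batch” is made up for illustration:

    class ScoringClient:
        """Stand-in for a shared resource, e.g. an API or model client."""
        def score(self, text: str) -> float:
            return len(text) / 100.0  # placeholder scoring logic

    client = ScoringClient()  # module-level shared client

    def score(text: str) -> float:
        """Stand-in for the function we actually use in prod."""
        return client.score(text)

    # The kind of completion autocomplete offers: it reuses the shared client
    # (which could be problematic to share) but just loops one item at a time.
    # Nothing about it actually works in a "batch."
    def safe_score_batch(texts: list[str]) -> list[float]:
        return [client.score(t) for t in texts]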

This is like the text autocomplete on an iPhone keyboard or when you’re writing an email in Gmail, except more extreme, because oftentimes, the coding autocomplete will provide whole functions for you.

We can break the quality of the autocomplete suggestions into three categories: 

  • Case I: All of the autocomplete suggestion is as good or better than what you would write

  • Case II: Some of the autocomplete suggestion is as good or better than what you would write

  • Case III: None of the autocomplete suggestion is as good or better than what you would write

Cases I & III are super easy to deal with. In Case I, you accept it, and in Case III, you ignore it. Of course, things aren’t quite so simple, because most of the time, the suggestion is Case II.

So, in theory, there is some threshold for what percentage of the code needs to be as good or better than what you would write for you to accept the suggestion. Maybe if 51% of the code clears that bar, you take the suggestion and edit out the other 49%, whereas if only 49% clears it, you ignore the suggestion completely.
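
As a toy model of that decision (the 50% cutoff, and the idea that you could measure a “fraction as good” at all, are my illustrative assumptions, not anything these tools actually compute):

    ACCEPT_THRESHOLD = 0.5  # assumed cutoff, for illustration only

    def should_accept(fraction_as_good: float) -> bool:
        """Accept a suggestion if enough of it is as good as what you'd write."""
        return fraction_as_good > ACCEPT_THRESHOLD

    assert should_accept(0.51)      # take it, then edit the other 49%
    assert not should_accept(0.49)  # ignore it and write your own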

In reality, such an autocomplete system causes two things to happen: 

  1. You are spending more time analyzing the code than you are creating it.

  2. Over time, you are tempted to think a higher percentage of the code is as good or better than yours.

I know I’m talking about code here, but really, these same risks apply to anything you default to letting ChatGPT or another AI or AI agent do for you.

Analysis Mode

If you are constantly given solutions to analyze, you will naturally spend time analyzing them. This takes effort and time away from your ability to create solutions. 

This may seem like a natural step towards a leadership or executive position–looking at GPT’s output becomes like analyzing an employee’s work. 

The problem with an autocomplete system like Cursor Tab is that you often see a possible solution before you can even start coming up with options yourself. And, if even a quarter of that solution is good enough, you’re now fixing the other three quarters in your head rather than asking whether it’s a solution that should be used at all.

In other words, the AI is in the driver’s seat and you are editing for it.

While this may seem like an issue confined to autocomplete tools, it is not. The same issue exists when you are using a chat tool with lazy prompting. If you just throw in some context and a problem, you are not doing the creative part of the problem solving and will be analyzing whatever solution it gives you.

It is very easy to fall into this trap, which is why it is so dangerous.

Lowering Quality

Humans tend to choose the path of least resistance. So, going back to our three autocomplete Cases, there is pressure to want the suggestions to be closer to Case I, because that is easier.

So, if you are coding for six hours a day and always see these auto-suggestions, you might be inclined to start biasing towards the assumption that a greater percentage of the code is as good or better than what you’d write. After all, if that is the case, then there’s no point in you editing it! You can just go ahead and use it as is.

In this way, it becomes convenient if the AI is better and you are worse. So, maybe we start to believe that this is the case.

And, because we believe it, maybe it starts to become true.

Working Solution

I am not eschewing AI tools altogether–I’m just being considerably more careful with them.

As mentioned, I’ve disabled the autocomplete functionality in my code editor. Now, when I need to write or edit a function, I am thinking about what I want it to do and how I want it to do it. Then I’ll potentially use AI tools to flesh out my solution with actual code.
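
Concretely, the workflow looks something like this (a hypothetical function): I write the signature, the docstring, and the constraints first, and only then might I let an AI tool flesh out a body like the one below.

    def dedupe_events(events: list[dict]) -> list[dict]:
        """Return events with duplicates removed.

        My decisions, made up front:
        - two events are duplicates if they share the same "id"
        - keep the first occurrence and preserve the original order
        - do not mutate the input list
        """
        seen = set()
        result = []
        for event in events:
            if event["id"] not in seen:
                seen.add(event["id"])
                result.append(event)
        return result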

The same sort of logic applies to anything outside of coding, too. If I’m going to use AI to help me complete some task, I’m making sure I take a stab at the creative part first (if there is one).

Live Deeply,