On Mytho(s)logical Threats

Mythos isn't a threat and more context doesn't fix broken AI

2026.04.19

XCLVIII

[Slow Optimism; Mytho(s)logical Threat; If I hear 'context' One More Time…; A Gorge Doesn't Know Anything about Water; It's Still just a Prompt; Be A Critic]

Thesis: Repeated, nonsensical claims about silver bullets are a major threat to continued progress.

[Slow Optimism]

Nonsensical and repeated claims about silver bullets are the biggest threat to continued progress.

Uncritically repeating claims like 'this new LLM is the end of cybersecurity' or 'context is the only moat' chips away at our society's rational discourse and information-filtering systems, and that jeopardizes progress.

I’m tempted to write an entire essay waxing poetic about how amazing and impressive both the technological and psychological achievements of humanity have been in such a short time, but it’s quite tough when there’s no point of comparison:

  • How impressive is it that we’ve only had flight for a century, but already can go to and from space?

  • How impressive is it that electrical batteries have only been around for a couple of centuries and we’ve already electrified a large part of the globe?

  • How impressive is it that we’ve only known how to make plastic for the last 150 years, and we can already buy a little box called a 3D printer that can make almost anything we want out of it?

  • How impressive is it that we’ve been able to obliterate ourselves with nuclear warheads for decades and have yet to do it?

That last point sounds a bit wonky (congrats, we haven't wiped out our own species), but I really do think it's impressive. Having the ability to do so is historically unprecedented; I wonder how many human cultures in the same position would have shown the same restraint?

Again, I don’t and can’t know how impressive any of these things are; regardless, I’m in awe of them and very optimistic about what else we can achieve. Can we create abundance on our own planet? Can we colonize the stars?

I also think the biggest risk to an amazing outcome is the decay of a reason-based, rational society. I'm deeply concerned about substituting emotion for logic, blindly following demagogues, and believing in silver bullets and philosopher's stones that do not exist. This is a huge claim I've been exploring, so I'm not asking you to believe it based on one sentence.

What I'm asking from you today is to be skeptical of people who claim that there is one simple solution to all of our problems, and that they happen to have it. I'm asking you to think rationally when presented with emotionally drenched claims that, when briefly examined, have little to no substance.

The more we react to emotional, unsubstantiated rhetoric and parrot claims that make us feel something without checking for any underlying logical soundness or evidence, the more that others will continue to propagate emotional, unsubstantiated, and hollow rhetoric.

And the more we jeopardize progress.

[Mytho(s)logical Threat]

By now, you've probably heard the sensationalized claims about Anthropic's new model, Mythos. You probably haven't heard, though, a reasoned breakdown of why it's not anywhere near as scary as they claim. (Spoiler: it might just be a marginally improved, better-marketed re-release of their last model, Opus 4.6.)

Here's a brief summary of some points from Cal Newport's video that very calmly analyzes the overly sensationalized claims:

  • Claim: Mythos found new, critical vulnerabilities in very important open source packages. Reality: Cybersecurity experts were able to find the same critical vulnerabilities in open source projects with open source LLMs that were a fraction of the cost and scale of Mythos and had been out longer.

  • Claim: Mythos's hacking abilities are such an improvement over existing models that it can autonomously hack many systems. Reality: Even when the likely biased AISI tested the model directly, they saw a gradual improvement on cyber benchmarks, in line with and even lower than previous jumps.

  • Claim: Now, LLMs are a geopolitical risk due to hacking abilities. Reality: LLMs have always posed a real and gradually increasing cybersecurity risk. But this has been known and discussed basically since the damn things were commercially available.

Newport's conclusion is a tight one: Anthropic is controlling the media narrative with sensationalist drivel. This lets them keep getting off the hook for increasingly outlandish claims; forget about the data center claims or revenue numbers, because look, now LLMs can do the things cybersecurity experts have been calling a problem since the start!

You don't NEED to care about whatever a company and the media THINK you'll have a strong, emotional response to. It's just a toxic, co-dependent feedback loop: they think we'll respond more strongly to puffery and outlandish claims, we do respond more strongly to puffery and outlandish claims, so they do more puffery and make increasingly outlandish claims.

You don't have to respond that way. You're allowed to care about other things, you know, like holding Anthropic accountable for an increasingly unreliable product, or how, despite their super-powerful Mythos cybersecurity AI, they still leaked the super hackable source code to Claude Code.

All of this is without mentioning two other facts:

While I wouldn't die on this hill, I also wouldn't be surprised at all if "Mythos" was just a more creatively marketed Opus 4.6.

Fool me once shame on you, fool me - You can't get fooled again…

- George Bush

Some of these claims are as credible as me showing you this photo and saying I won the match (circa 2022, I'm better now lol).

[If I hear 'context' One More Time…]

Outside of the latest super-honest marketing campaign, an oft-repeated claim about LLMs that I believe really needs to be addressed is this idea that 'context is the only moat.'

For the mercifully uninitiated, this is a common idea in venture and startup circles asserting that domain-specific context is going to be the only moat when building useful, domain-specific products now.

The whole assertion goes something like this:

LLMs will drive the cost of building software so low that the only way to differentiate your business is by giving LLMs specific context about your specific industry or market.

This idea is particularly insidious, because one of the key underlying assumptions is very strong: the more you understand the people or the market you are building for, and the more of that understanding you build into your product, the better the product will be!* This is damn near self-evidently true, and something that I wholeheartedly endorse; I don't think I've met a single competent founder who would entirely disagree with this claim.

However, below this rather benign belief, there are two assumptions that I take issue with:

  1. Commercially viable software is or will soon be trivial and incredibly low cost to build with LLMs

  2. Any difference between good & great software that can't be erased by the raw power of LLMs can be erased by giving the LLM the 'right' 'context'

I don't agree with number 1 for a lot of reasons.

All that aside, let's just assume that number one is true: we have unsubsidized LLMs that reduce the cost to build maintainable software by an order of magnitude or more. Funnily enough, this is the world that people think we're living in right now, anyway.

Still, context does not solve the remaining issues.

*See The Mom Test, Domain Driven Design, Disciplined Entrepreneurship, probably a Paul Graham essay, too

[A Gorge Doesn't Know Anything about Water]

I don't see how a useful amount of in-depth domain expertise gets imparted to an LLM, an artifact that inherently doesn't 'know' things and has no way of storing additional information. The closest we can get to truly injecting it with context in even a remotely deterministic way is giving it more prompts. And calling that remotely deterministic is generous.

You need to remember, LLMs fundamentally do not know 'context' in the same way we do, if they 'know' anything at all. In this sense, they are 'stateless'.
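This statelessness is visible in how every chat API works: the model retains nothing between requests, so the client must re-send the entire conversation every turn. A minimal sketch of that pattern, where `call_llm` is a hypothetical stand-in for any completion endpoint (not a real API):

```python
# Hypothetical stand-in for a chat-completion endpoint. Real APIs have
# the same shape: text goes in, text comes out, and nothing is retained
# on the model's side between calls.
def call_llm(messages: list[dict]) -> str:
    # The model only "sees" what is inside `messages` right now.
    return f"(reply based on {len(messages)} messages of context)"

history = []

def chat(user_text: str) -> str:
    # The "memory" is entirely client-side: we append to a list and
    # re-send the whole transcript on every single turn.
    history.append({"role": "user", "content": user_text})
    reply = call_llm(history)  # full history goes over the wire each time
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Ada.")
chat("What is my name?")  # only "remembered" because we resent turn 1
```

The model never remembers your name; the client re-sends the line where you said it.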

They can 'encode' information in something like the way the answer to a math problem 'stores' information about the problem, but this doesn't mean they have any sort of inherent representation of information in what we would consider a meaningful way.*

I did an interesting experiment last year on whether or not an LLM is 'storing' probability estimates in a meaningful or even consistent way. The results kind of surprised me: the LLM seemed to have something roughly resembling the shape of a loss function in its weights.

The more I've thought about it, the less impressed I've been: I think the LLM is storing information about the real world in the way that a mountain ravine stores information about the water that rushed down it.

Perhaps a smart person can use the rocks to determine things about the flow and force of the river over time, but, outside of poetry, we wouldn't say the mountain 'knows' about the water in the same way that we 'know' about the water.

An LLM is an admittedly fascinating artifact that we've spent gobsmacking amounts of energy and capital carving information and echoes of sentience into.

But to build an echo chamber, and to shout "I'm alive," and be surprised when it shouts back, "I'm alive," would be the peak of absurdity. An LLM is more sophisticated than an echo chamber, but the point still stands.

*If you want to understand how these things work, I would recommend Deep Learning for an understanding of neural nets, or 3Blue1Brown's video series on it. Please shoot me a note if you would be interested in a post technically describing LLMs.

[It's Still just a Prompt]

I can think of four options to improve the output of an LLM, none of which involve giving it 'context' in a way that causes it to 'know' that context.

  1. Improve the prompt

  2. Give it tools (eg, write or provide software for it to use)

  3. Fine tune the model

  4. Improve the architecture of the LLM

No one I'm talking about is seriously considering number 3 or 4; they trust the oh-so-benevolent AI labs too much to consider number 4. On the off chance they bring up number 3, beyond the objection that it's computationally intensive, expensive, and easy to mess up (from experience), I'd bring up the same thing as I would for 4: it still doesn't store information in the LLM in a meaningful way. You're tweaking the probability of a stateless thing giving you the outcome you want.

Most of the time, people who are talking about using context to improve the output mean 1 or 2.

2 in itself is a bit of a funny one, because tools just give the LLM access to software that you or an LLM wrote for the LLM. So if what they mean when they say 'context is the only moat' is that context is storing information in the right, problem-specific software tools, then we can restate their full claim as something more akin to:

LLMs will drive the cost of building software so low that the only way to differentiate your business is by giving LLMs software that is built for your specific industry or market.

But, of course, software is so cheap and easy to write now that these people won't actually write the software; the LLMs will. So we can ignore the irony of still having to write software to not have to write software (something that has been possible since or before Lisp macros in the 60s). All we're left with is that the prompt will save the day:

LLMs will drive the cost of building software so low that the only way to differentiate your business is by giving LLMs context in the form of a prompt that is built for your specific industry or market.

Now, I'm immediately going to get two rebuttals that miss the point:

  1. "What about Agents?"

  2. "What about memories and artifacts etc?"

In regards to number 1, what is an agent other than an LLM that has a tool to call another LLM and a tool that lets it edit and grab some sort of information, like a text file or RAG database? So no, I'm not ignoring 'agents,' I just refuse to romanticize them.

In regards to number 2, memories and artifacts and whatever else are all literally just prompts that are stored and fed into the LLM in a sometimes clever, sometimes convoluted way.

Due to the fundamental limitations of an LLM, no matter how clever your prompting system is, and no matter how many different calls you make to it, there is still no memory in the LLM. Maybe you can say your complex software system with a bunch of text files, or better yet, a thoughtfully structured database, has context stored, and maybe you can call this an 'agent.' But that doesn't change the fact that if the LLM itself is making any meaningful decisions, it's doing so based on 'context' stored in a prompt that it may or may not follow or consider.
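To make this concrete, here is roughly what an 'agent' with 'memory' and a 'tool' reduces to. Every name here (`call_llm`, `lookup_price`, `MEMORY`) is a hypothetical stand-in, with a canned stub in place of a real model API; the point is the plumbing, not the stub:

```python
# Sketch of an "agent" loop. The model stores nothing; the loop just
# keeps gluing stored text back into the next prompt.

MEMORY: list[str] = []  # the much-hyped "memory": a list of strings

def call_llm(prompt: str) -> str:
    # Canned stub standing in for a real model API. First it asks for
    # a tool; once the tool result appears in the prompt, it answers.
    if "TOOL_RESULT" not in prompt:
        return "CALL_TOOL lookup_price widget"
    return "The widget costs $5."

def lookup_price(item: str) -> str:
    # A "tool" is ordinary software the loop runs on the model's behalf.
    return "$5"

def agent(task: str) -> str:
    # Every turn, the entire "context" is flattened into one prompt.
    prompt = "\n".join(MEMORY) + "\nTASK: " + task
    reply = call_llm(prompt)
    if reply.startswith("CALL_TOOL"):
        _, tool, arg = reply.split()
        result = lookup_price(arg) if tool == "lookup_price" else "?"
        # The tool result is "remembered" only by writing it down and
        # re-sending it as part of the next prompt.
        MEMORY.append(f"TOOL_RESULT {tool}({arg}) = {result}")
        reply = call_llm("\n".join(MEMORY) + "\nTASK: " + task)
    MEMORY.append(f"ANSWER: {reply}")
    return reply

print(agent("How much is a widget?"))  # → The widget costs $5.
```

Strip away the branding and the 'agent' is a while loop, the 'tool' is a function call, and the 'memory' is string concatenation into the next prompt.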

Cool, now you've just built yourself a likely complex, potentially over engineered system that is more expensive to run and less reliable than a basic, well written piece of software.

Don’t get me wrong, you can sometimes build something that can be useful using LLMs!

But to act like “more context” will solve the inherent limitations of these systems is to misunderstand how the system works.

If you are pro rationality, pro progress, etc, etc, etc, or even if you disagree with me and just want to see what I write next, please subscribe below.

[Be A Critic]

What you do and say and believe might seem small, but it matters. Other people are listening to you, even if you don’t think you have a platform.

The more we believe in a simple solution someone wants us to believe in, the less we invest in finding the real, incremental improvements.

Please don’t take things people say at face value. And please don’t repeat things you've heard that sound good but might not be true.

There is a really, really good chance you've been misled. And the more we tolerate bullshit as a culture, the more bullshit there will be!

And the more bullshit there is, the easier it is to take advantage of our society collectively.

I'll leave you with a note from the phenomenal essay about software engineering by Frederick P. Brooks, 'No Silver Bullet':

The first step toward the management of disease was replacement of demon theories and humours theories by the germ theory. That very step, the beginning of hope, in itself dashed all hopes of magical solutions. It told workers that progress would be made stepwise, at great effort, and that a persistent, unremitting care would have to be paid to a discipline of cleanliness.

Live Deeply,