If You Use AI, You Will Be Left Behind



October 18, 2025

AI is unironically making you stupid.

Since ChatGPT blew up a few years ago, the phrase "Learn to use AI, or be left behind" has been repeated ad nauseam. I wonder how little experience someone must have with learning to seriously advance this idea.

After all, how long do these people think it would take to "learn AI"? To be clear, the people who parrot this phrase aren't talking about studying machine learning, or building your own models; no. They're talking about learning how to prompt.

So, if I were to decipher their meaning, what they're actually saying is that you need to learn to proompt, and you need to do it now, Grandad.

While putting myself at risk of sounding arrogant, I've never had much difficulty picking up skills, and I suspect that if you're reading this, neither have you.

You can learn to prompt and get good at something like Cursor in a free afternoon. Besides, isn't AI always changing? Isn't that what these people keep saying, that every few months, a paradigm shift occurs, and we are all only six months away at most from losing our jobs?

It's funny how the people who say these things are either heavily invested in AI, or they're trying to sell you something that will protect you from AI; some kind of course, supposedly chock-full of forbidden knowledge that will allow you to save yourself, as everyone around you gets fired.

In reality, if AI does indeed work as advertised, which is currently far from certain, I can always "learn" how to use it later, given the speed of change, and my ability to learn.

Let's step away for a moment, and pretend that it's the 1950s. We've just finished smoking cigars in a stuffy boardroom, and we make our way to this place that's just opened across the road, where teenagers on rollerskates are serving us hamburgers and Coca-Cola.

As we smoke our early-afternoon cigarettes indoors, the conversation steers towards flying. How crazy is it that in only about fifty years, we went from the first flight of the Wright brothers to now being able to fly from New York to London in just under seven hours?

What will they come up with in the next fifty years? Where will we be by the year 2000? Well, given the speed at which technology advances, we will at the very least have bases on the Moon, right?

If we were having this conversation in the 50s, we could absolutely be excused for having this take. After all, the progress up until that point was phenomenal.

However, over the next ten years, by the mid-60s, commercial airliners hit their peak cruising speeds. Over the following 60 years, no real improvements have occurred, unless you count paying through the nose for WiFi on some flights.

Just because the numbers went up before, that doesn't mean the numbers will go up forever. Despite many such examples, people keep insisting that AI domination is a given.

One of the reasons why I find this kind of rhetoric frustrating is that it is often harmful. Convincing people that A) everything is lost, and we'll all be turned into paper clips, or B) you don't have to learn anything, just use AI, is causing serious damage.

Option A is easy to dismiss as nonsense, and most people do; however, option B is far more pernicious. Why learn anything when you can just use AI?

The answer? Speed, unironically.

"What?" you might ask. "How can not using AI make you faster? I can pull up ChatGPT right now, and within moments, I can have paragraphs of knowledge ready for me to consume."

Now, yes, AI can be faster, assuming that you don't know anything. For example, I know nothing about microbiology. It would be faster for me to ask an LLM a microbiology question than to look up the answer and learn it the old way, assuming the answer is correct, and not subtly wrong or outright hallucinated.

That being said, when it comes to your own field, the domain in which you make your living, knowing is always faster.

You have more bandwidth between your conscious mind and your stored knowledge than between you and the AI. If two people have to solve a problem, the person who just knows stuff is much more likely to solve the problem quicker than someone who has to constantly ask the AI questions.

Let's say we're working on a programming problem to do with images. How is image data stored, as far as machines go? How is color represented?

Well, there's this thing called RGB, which is an additive color model that uses various combinations of Red, Green, and Blue to achieve pretty much any shade of color you can imagine.

Each channel in the RGB model can be set to any value from 0 to 255 (in the common 8-bit encoding). If all three channels are set to 0, the result is black, or technically, the absence of light. If all three channels are set to 255, you get white.

If you play around with an RGB color picker, a setting of `rgb(255, 255, 0)` will result in yellow. Need some violet? No problem, just do `rgb(161, 66, 255)`.
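The values above are easy to sanity-check yourself. Here's a minimal sketch, using only the standard string formatting that Python ships with, that packs the three channels into the familiar `#RRGGBB` hex notation you'd see in a color picker:

```python
def rgb_to_hex(r, g, b):
    """Pack three 8-bit channels (0-255 each) into a #RRGGBB hex string."""
    return f"#{r:02x}{g:02x}{b:02x}"

print(rgb_to_hex(0, 0, 0))        # black: all channels off -> #000000
print(rgb_to_hex(255, 255, 255))  # white: all channels at full -> #ffffff
print(rgb_to_hex(255, 255, 0))    # yellow: red + green, no blue -> #ffff00
print(rgb_to_hex(161, 66, 255))   # the violet from above -> #a142ff
```

Same numbers, just written in base 16; `rgb(161, 66, 255)` and `#a142ff` describe the exact same color.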

So, how is this used to represent an image?

Depending on the file format, images can be represented in different ways. For example, a bitmap file, or .bmp, like in MS Paint, is a file where every single pixel has its RGB data stored (like a map). This results in a large file, but the image is represented exactly, with no data lost.

Other formats, such as .jpg, use lossy compression, discarding some data, but still representing the image somewhat accurately, as long as you don't zoom in too far.
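You can see why uncompressed bitmaps get large with some back-of-the-envelope arithmetic: at 3 bytes per pixel (one each for R, G, and B), the file size grows with the pixel count regardless of what the image shows. This sketch ignores the small header and row padding that real .bmp files add:

```python
def raw_size_bytes(width, height, bytes_per_pixel=3):
    """Approximate size of an uncompressed 24-bit image: 3 bytes per pixel."""
    return width * height * bytes_per_pixel

size = raw_size_bytes(1920, 1080)   # a modest full-HD image
print(size)                         # 6220800 bytes
print(round(size / 1024**2, 2))     # ~5.93 MiB before any compression
```

A JPEG of the same photo might weigh a few hundred kilobytes, which is exactly the trade the lossy compression is making.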

I just know this. I didn't have to ask an AI, because I learnt this at some point, but if you didn't, you'd have to ask ChatGPT, which could also hallucinate.

Knowing something will always be faster than asking the Oracle. It's ok to memorize stuff. It's ok to know how things work, and the existence of LLMs is not an excuse to not have to learn anything anymore. In fact, it's a really good reason to start learning!

Now, this doesn't mean that you should never use AI; it just means that the skill of just using an AI has zero economic value, and if it doesn't yet, oh, it soon will.

You heard me right.

If you only "know" as much as the AI knows, you're doomed, because then you're basically just squishy middleware between your boss and the AI. Statistically, your boss would love to fire you and replace you with someone more desperate. I mean, if both of you are just LLM middlemen, why not?

Surprisingly, AI has not raised the bar for knowledge; it has lowered it.

Everyone loves to dunk on Zoomers for not knowing anything, and there is some truth to that. The more they rely on AI, the more cooked they are, and the brutal part of this is that they have very few people amongst their peers to compare themselves to. If everyone in a generation uses AI, they won't notice that they are unable to reason until it is likely too late.

Look, I get it. I went to school too, and I know how much it sucks. You're stuck in a room all day with other kids that you don't like, being bossed around by adults who often only know just enough about their field to get through their classes.

I, too, remember how many times I was told to memorize random poems and facts, painfully aware of the fact that my time was being criminally wasted. As a result, I had a very bleak view of memorization for many years after.

Here's the thing, though. Just because some teacher soured you on memorization, that doesn't mean that memorizing is a useless skill. You should memorize things. You should learn them and repeat them over and over in your head to the point where I can grab your shoulders, shake you from sleep, and you'd immediately be able to output information.

Given how low the bar is now when it comes to learning, just knowing stuff will put you miles ahead. Knowing stuff now has value once again, and hey, if your boss forces you to use AI, whatever, you're still ahead of the game, because you KNOW stuff.

Do you disagree? Let's assume that you're right, and AI will continue to get better and better. Even if that happens, I'll still be ahead of you, as my learning and reasoning skills will not be atrophied, and thus, I will be able to navigate problems with ease. You, on the other hand, will be ripping your hair out, trying to figure out why your vibecoded lines of code aren't working.

```
10 PRINT "This is still not working. Fix this, please, with sugar on top,"
11 PRINT "or the world will end! Can you help me, please?"
20 PRINT "Certainly! Oh, I now see my error. Here is the new, correct code."
30 REM the AI produces the exact same code
40 GOTO 10
```

I think the AI bubble will burst, sooner rather than later. AI cannot do your job, and people will realize this eventually, but until then, many more people will be fired, and have their lives ruined due to their bosses being conned into thinking AI can replace their workforce.

When the bubble pops, there will be a lot of pain, but there will also be opportunities. The people who have been let go will retrain, and some will retire, leaving massive gaps, which can only be filled by capable humans.

When that happens, those of us who kept learning will reap unimaginable benefits.

Don't be silly. Go and learn something today.