The Problem Nobody Warned You About
For most of history, learning was gated by access. If you wanted to understand a topic, you had to find a book, a teacher, a course, or a mentor. The bottleneck was information. If you could get your hands on the material, the rest was time and effort.
That bottleneck is gone. A capable model will now explain quantum mechanics, debug your code, summarise a legal document, and walk you through a new language - all in the same afternoon, at a level pitched exactly to you.
Which sounds wonderful. And it is. But it has created a quieter problem that very few people are talking about honestly.
You can now finish tasks without ever actually learning anything.
The code compiles. The essay reads well. The spreadsheet is correct. And yet, six weeks later, you could not reproduce any of it from memory. You understand it no more deeply than you did before. You have outputs without understanding.
This post is about how to stop that from happening to you.
The Old Learning Loop vs the New One
The traditional learning loop was slow and a little painful. You read something. You did not fully understand it. You tried to apply it. You failed. You went back to the material. You tried again. Eventually it clicked, and because the clicking came after real struggle, it stayed clicked.
The new loop, if you are not careful, looks like this. You hit a question. You ask a model. The model answers. You paste the answer. You move on. The friction that used to create memory has been removed, and the memory goes with it.
The research on desirable difficulty from Robert Bjork and others has been clear for decades. Learning that feels easy in the moment tends not to stick. Learning that feels slightly hard - that requires retrieval, that surfaces what you do not know - is what actually moves a skill from short-term into long-term memory.
AI tools are astonishingly good at removing the difficulty. That is the thing you now have to add back in on purpose.
Separate the Task From the Skill
The first mental shift is to stop treating “finishing the task” and “learning the thing” as the same activity. In an AI-assisted world, they are not.
If your goal is to ship a feature today, let the model help you ship it. There is no virtue in suffering through something you are not trying to get better at.
If your goal is to become the kind of person who can build that feature without help next time, you have to work differently. You have to build friction back in deliberately.
A few principles that have worked well for me.
- Decide in advance. Before you start, ask yourself whether this is a task you are trying to complete or a skill you are trying to build. Both are valid, but the workflow is different.
- Do not optimise the wrong variable. If you are learning, speed is not the metric. Retention and transfer are.
- Protect one area from AI assistance. Pick one skill you care about deeply - the fundamentals of your craft, a language, a musical instrument - and do that work by hand. Use AI everywhere else.
The Retrieval Habit
The single most effective technique I have found is what cognitive scientists call retrieval practice. The idea is simple and slightly uncomfortable. Before you look something up, try to produce it from memory.
Concretely, this means changing the order of your interactions with AI.
Old order: hit a problem, ask the model, read the answer, move on.
New order: hit a problem, write down what you think the answer is, ask the model, compare the two, understand the delta, move on.
That extra step - the attempt before the lookup - is where almost all of the learning happens. It surfaces exactly what you do not know. It makes the model’s answer land somewhere specific rather than washing over you.
You can do this with anything. Reading a design doc. Debugging an error. Trying to recall an API. Writing a paragraph. The pattern is always the same. Predict, then check.
The Teach-Back Technique
Once you have made an attempt and compared it to the model’s answer, there is a second step that multiplies the effect. Explain the concept back to the model in your own words, and ask it to find the weak points in your explanation.
This is the Feynman technique, given a conversational partner that does not get bored.
What you are doing here is turning a one-way consumption activity into a two-way teaching activity. You are forced to find your own words. You are forced to construct the narrative. The gaps in your understanding show up immediately, because you cannot explain what you do not actually know.
Most people never reach this step. They read the model’s answer, nod along, and close the tab. If you want the knowledge to be yours, you have to produce it, not just receive it.
Build a Personal Curriculum, Not a Playlist
One of the strangest side effects of having an always-available tutor is that it becomes easier than ever to drift. You can spend hours learning interesting things without ever building toward anything.
The fix is to have an actual curriculum. Not a fixed syllabus - a rough map of what you are trying to become capable of, and in what order.
A few questions worth answering honestly.
- What is the one capability I most want to have in twelve months?
- What are the prerequisites for it, and which of them am I weakest at?
- What does a reasonable next month of practice look like?
- How will I know, concretely, that I have made progress?
Write this down somewhere you will see it. Then use AI ruthlessly to accelerate the path - generating exercises, explaining prerequisites, giving you feedback on your work - but keep the direction in your own hands. The model is very good at the how. It is not good at the why.
The Projects Test
The best learning I have ever done has come out of projects, not courses. That was true before AI, and it is even more true now.
A project forces you to confront real decisions. It gives you a thing that either works or does not. It surfaces questions you would never have thought to ask. And it gives the AI something concrete to help you with, which is when these tools genuinely shine.
If you are trying to learn a new language, build something small in it. If you are trying to learn a new domain, pick a question in that domain and try to answer it. If you are trying to get better at writing, write something real, for a real audience, on a real schedule.
The research on retrieval and transfer shows that applying knowledge in new contexts is what produces durable, flexible skill. Projects are the simplest way to manufacture those contexts.
Beware the Fluency Illusion
One final warning, because this one is sneaky.
When a model explains a topic to you clearly, you will feel like you understand it. That feeling is almost entirely unreliable. Cognitive psychologists call this the illusion of fluency. A smooth explanation feels the same as deep understanding, and it is not the same thing.
The only way to distinguish the two is to try to produce, teach, or apply the material without assistance. If you can, you have learned it. If you cannot, you have consumed it.
Do not trust the feeling. Trust the output.
A Practical Weekly Rhythm
If you want something concrete, here is a rough weekly rhythm that has worked well for me and the engineers I have mentored.
- Daily - one small retrieval attempt before any lookup. Just one. Write your prediction down first, then check.
- Twice a week - one thirty-minute block of deliberate practice on your chosen skill, with AI assistance switched off. No autocomplete. No chat. Just you and the work.
- Weekly - one teach-back session. Pick something you learned that week and explain it to the model. Let it poke holes.
- Monthly - a short review. What did I learn? What am I still faking? What is the next skill I need?
None of this is exotic. It is the same deliberate-practice scaffolding that K. Anders Ericsson wrote about long before ChatGPT existed. The difference is that the environment around you no longer enforces the friction by default. You have to build it yourself.
The Skill Behind the Skills
The deeper point is this. The meta-skill of this era is not prompt engineering. It is not knowing which model to use. It is not even taste, though taste matters.
It is the discipline to keep learning in an environment that will happily let you stop.
The people who will do genuinely remarkable work over the next decade are not the ones who used AI the most. They are the ones who used it without letting it hollow them out. Who stayed curious. Who kept their hands on the raw material. Who treated every finished task as an opportunity to ask - did I actually get better, or did I just get done?
The tools will keep improving. The bar for real understanding will keep rising, because the easy version of everything is now free. The only durable edge is the one you build slowly, on purpose, in the quiet hours when nobody is watching.
Learn how to learn. It may turn out to be the most important thing you do.