AI in the coming years
AI is everywhere. Students use it while studying, and teachers use it to prepare tests or scorecards. Some people write outreach emails with it; others reply with it. LinkedIn posts, Reddit comments, internal company notes… the list goes on. You can keep finding new, surprising and sometimes strange ways in which AI is shaping our daily environments.
With this article we want to offer a relatively simple way to look at the current evolution of AI. This article mostly focuses on LLMs (Large Language Models). That does not mean other types of AI tasks (categorization, prediction, etc.) are less valuable. It is simply that the current buzz is around LLMs (and the hype is somewhat justified), so that is what we focus on here.
Previously we wrote a piece comparing the current AI boom with earlier technological shifts, in which we also touched on AGI. If you haven’t read that article yet, it might be a fun follow-up.
LLMs
Let’s start with a very abstract way of thinking about LLMs and why the current boom is happening. We will avoid technical depth as much as possible and keep the jargon limited.
At the core, a large language model is a neural network that predicts the next word in a sequence. That’s it.
But it turns out they are extremely good at this, and they even exhibit “emergent” properties that surprised everybody. They are much better than earlier architectures such as RNNs (Recurrent Neural Networks, a different type of neural network). LLMs also turned out to be far more scalable in terms of input, the amount of text you can feed into the system, thanks to clever design choices and mathematical optimizations.
Computers obviously do not “read” words. They operate on numbers: text is first split into small chunks called tokens, each mapped to a number. The translation from words to these numerical representations is handled in a very thoughtful way in modern LLMs, which allows them to capture meaning and context surprisingly well.
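To make the “next word predictor” idea concrete, here is a toy sketch in Python. This is emphatically not how a real LLM works internally: it replaces the neural network with simple word-pair counts over a tiny made-up corpus, but the generation loop (predict the next token, append it, repeat) has the same shape.

```python
from collections import Counter, defaultdict

# Toy stand-in for an LLM: "training" is just counting which word
# tends to follow which in a tiny corpus. A real model replaces the
# counts with a neural network over tokens.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(word: str, steps: int = 6) -> list[str]:
    out = [word]
    for _ in range(steps):
        candidates = next_words[out[-1]]
        if not candidates:
            break
        # Greedily pick the most likely next word, then repeat.
        out.append(candidates.most_common(1)[0][0])
    return out

print(" ".join(generate("the")))  # -> "the cat sat on the cat sat"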
Then additional elements and steps were added.
First came web search capabilities, giving these systems access to the same information you would retrieve through Google. Then access to documents, apps, and other contextual data, making the answers much more personal and relevant.
And finally: chaining the models together.
An LLM might first rewrite your question, then generate a plan, then produce the answer, then summarize it.
Stack enough of these steps together and suddenly a “next word predictor” begins to feel like some sort of super-intelligent know-everything system. Extremely useful, and probably one of the biggest shifts in technology and user behavior of the 21st century.
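As an illustration, here is a minimal sketch of such a chain in Python; the `ask` function is a hypothetical stand-in for a call to whatever LLM API you prefer, not any specific library:

```python
# A minimal sketch of "chaining": each step feeds the previous output
# back into the model with a different instruction.

def ask(instruction: str, text: str) -> str:
    # Placeholder: in a real system this would call a model endpoint.
    return f"[{instruction}] {text}"

def answer(question: str) -> str:
    rewritten = ask("Rewrite this question clearly", question)
    plan      = ask("Write a step-by-step plan to answer it", rewritten)
    draft     = ask("Answer the question following this plan", plan)
    return ask("Summarize this answer for the user", draft)

print(answer("whats an llm??"))
```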
Yet there are clear limitations.
As discussed in earlier articles, the way current AI works cannot really be called general intelligence.
This is why researchers like Yann LeCun (Meta’s former Chief AI Scientist) remain skeptical. In the 1980s, while at AT&T Bell Labs, LeCun developed CNNs (Convolutional Neural Networks), another type of neural network architecture, which revolutionized image recognition. These models first helped the US Postal Service recognize handwritten digits for mail routing. Later the same ideas allowed computers to recognize almost anything: dogs, objects, vegetables, you name it.
Despite being one of the architects behind modern AI progress, LeCun recently raised more than $1B for a new venture, AMI Labs, precisely because he believes the current generation of LLMs will not get us close to true general intelligence. Instead, he is betting on world models: systems that learn how the real world works and use that structure as a basis for reasoning and planning. That is a topic for a future article. (More info)
Humans in the Loop
The following statement is difficult to make with complete certainty:
Humans will remain in the AI loop.
It’s not a crazy claim, but one could argue the opposite. And there is probably truth somewhere in between.
Many jobs and processes will change dramatically. Some might even disappear entirely. But fully automated AI systems that run complex business processes end-to-end without human involvement seem unlikely.
Think about processes like recruitment, procurement or legal review.
If you break these large workflows into smaller sequential steps, which is exactly how (AI) systems tend to operate, every step introduces a small chance of error.
Take a hypothetical example: suppose each step has a 1% error rate.
If the AI needs to perform 100 steps, the probability of the final outcome being correct becomes:
0.99¹⁰⁰ ≈ 37%
That is a terrible outcome.
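The same arithmetic, spelled out for a few workflow lengths (a small illustration assuming independent errors per step):

```python
# If each step succeeds with probability p, a workflow of n sequential
# steps (with independent errors) succeeds with probability p ** n.
p = 0.99  # assumed per-step accuracy

for steps in (1, 10, 100, 1000):
    print(f"{steps:>4} steps -> {p ** steps:.0%} chance of a correct end result")
    # ->  99%, 90%, 37%, 0%
```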
This is of course very difficult to measure: if an agent performs 1% better than a human, does that make it 101% accurate?
And the more complex the process becomes, the more steps get introduced. Even if models improve significantly, they will always remain below 100% ‘accuracy’.
Of course humans also make mistakes, and businesses are full of inefficiencies. But our intuition is that humans will remain in the loop, just not in the way most people currently interact with systems like ChatGPT, Claude or Gemini.
Today interaction is very action-reaction based: you type something, the model responds.
In the future it might look different.
Processes will run continuously in the background and occasionally “come back” to the human when decisions are required. (For inspiration, read about neural timing.)
A good metaphor is a golf ball slowly rolling down a gentle slope toward the hole. Every few seconds you give the ball a small push to keep it on track. Because you constantly nudge it in the right direction, the chance of reaching the target becomes much higher.
Humans are, so to speak, ‘on the loop’.
This leads to one of the questions we often ask founders building AI agents:
“Where exactly are the human decision points in the system?”
That question often reveals three things:
- the actual product experience;
- the depth of the founder’s understanding of the problem;
- and the long-term vision for their space and market.
Long-Term vs Short-Term Thinking
A well-known observation about technological change, often called Amara’s law, is the following:
People tend to overestimate the short term and underestimate the long term.
This pattern has repeated itself through most societal shifts, and AI is no exception.
In the short term there is a lot of noise around LLMs and AGI. Some people genuinely believe that a kind of technological singularity will suddenly arrive and transform everything overnight.
But tomorrow the sun will still rise.
You will still be human.
And Belgian beer will still outperform most other nations’ beer.
Some things simply remain true.
The more interesting question lies in the long term.
Is everyone underestimating it? Not everyone, but probably most people are.
It is almost impossible to imagine the number of tokens that will be used in the future.
Read that again.
Now try to picture a world with “a lot” of AI usage. Multiply that by 100, and you’re probably still not close.
The amount of LLM usage coming in the next decade will be enormous.
And that scale will trigger a series of changes that are easier to discuss.
Agents and AI for Everyone
Every department will eventually have its own small army of AI agents.
Every business process will have AI somewhere inside it.
Every person will have personalized agents and sub-agents.
Anywhere words are used, an AI will likely be involved.
A future company might even be optimized around things like:
- compute availability
- token usage efficiency
- token cost ratios
It sounds strange today, but these metrics could become normal operational variables.
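To make that tangible, here is a purely hypothetical sketch of what such metrics could look like; every name, price and number in it is invented for illustration:

```python
# Hypothetical "token economics" as operational metrics.
# All names, prices and numbers below are made up for illustration.
PRICE_PER_1K_TOKENS = 0.002  # assumed blended model price in dollars

def tokens_per_task(total_tokens: int, tasks_completed: int) -> float:
    """Token usage efficiency: fewer tokens per finished task is better."""
    return total_tokens / tasks_completed

def cost_per_task(total_tokens: int, tasks_completed: int) -> float:
    """Token cost ratio: model spend per unit of useful output."""
    return tokens_per_task(total_tokens, tasks_completed) / 1000 * PRICE_PER_1K_TOKENS

# A month with 5M tokens consumed across 1,000 completed tasks:
print(f"${cost_per_task(5_000_000, 1_000):.3f} per task")  # -> $0.010 per task
```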
The AI Department
Just as companies developed IT departments in the 1950s and saw them explode over the following decades, we will probably see the same with AI, although this evolution will be faster.
The “IT guy” fixing your laptop might become the “AI guy” fixing your agent pipeline.
Organizations will have:
- AI audits
- AI managers
- AI infrastructure teams
- AI governance roles
It sounds strange today, but these roles could become as normal as, for instance, energy audits and efficiency metrics are today.
Changes in Services
Last year we wrote a paragraph in our article Interesting Ideas for the Future about service companies. The argument still stands; below is a reworked version:
Many service industries are essentially built around moving text around.
Recruiters, consultants, law firms, translators, marketing agencies: a large part of their work involves handling information and communication.
Some industries have strong marketplace dynamics (for example recruitment: you cannot hire a hallucinated CV…). But still, much of the work revolves around managing text and information flows.
Service businesses have traditionally been difficult to scale. Automation might change that. (more insight)
The cost of building AI software is dropping quickly, which means these industries could be disrupted in three ways:
- AI-native tools for service providers – software that helps existing firms automate repetitive tasks.
- Software for insourcing – platforms that allow companies to bring these functions in-house.
- Service-as-a-Software – entirely new service firms built on automation rather than human labour.
The last category is the most ambitious.
Instead of building software for lawyers, why not build an AI-powered law firm?
Instead of selling recruiting tools, why not create an automated recruiting service?
By capturing a part of the service value chain, often at higher margins, these companies could become very large without needing global dominance.
AI in Education and Cognitive Decline
This topic is slightly strange, because the early signals are still subtle.
If humans increasingly stop thinking for themselves and delegate most cognitive work to AI, it is reasonable to assume that human thinking patterns, and even language itself, will change.
Whether that change is positive or negative is hard to predict.
At the same time, education technology presents a massive opportunity. As discussed in our earlier article, the combination of AI capabilities and existing inefficiencies in education makes it a very interesting space.
That said, many founders avoid it due to heavy regulation and complex market dynamics.
Changes in Work
Knowledge work will change, that part seems obvious.
But the big question remains: what stays?
The answer is probably not “nothing.”
The more human parts of work (creativity, conversation, collaboration) will likely remain important.
But again the short-term versus long-term paradox applies here. It is difficult to see exactly where things will end up.
Automation of repetitive tasks is already happening, but that probably represents only the tip of the iceberg.
A Strange Reference to Primates, Adam and Eve
To end the article, let’s make a slightly unusual reference.
Most human progress ultimately comes from curiosity.
Across more than 50 years of research on communicating with apes through sign language, no ape has ever asked humans a spontaneous question. They can use signs to request objects or describe things, but they rarely ask things like:
“What is that?”
“How does this work?”
Those kinds of questions seem deeply human.
Animals do show curiosity in the physical world. A cat pushing a glass off a table or a primate playing with a mechanical puzzle are examples of this. But they rarely demonstrate information-seeking curiosity.
And yet this type of curiosity has driven some of the biggest changes in human history.
Someone at some point wondered what would happen if seeds were placed in the ground. Agriculture emerged.
Louis Pasteur became curious about microscopic organisms in water and blood. That curiosity eventually led to vaccines, pasteurization, and modern microbiology.
Now the Adam and Eve reference.
In the story of the Garden of Eden, Adam and Eve live comfortably with everything provided for them. But Eve becomes curious about the fruit of knowledge, despite being told not to eat it.
She does anyway.
That curiosity ultimately leads to their expulsion from the garden and a completely different human existence, one defined by work, survival and continuous searching.
Curiosity changed everything.
And in many ways, the current AI moment might be the same: we are very curious about everything, and that curiosity is changing the way we live in ways we could hardly have imagined.
That change may feel disruptive or unsettling at first, but over the longer term it often converges toward a better future for everyone.
Feel free to reach out:
Ruben Pauwels
Investment Manager
Angelwise
e-mail: ruben@angelwise.be