Artificial (general) intelligence
3 November 2025

Artificial (general) intelligence - A(G)I

Intro: A comparable for the AI boom?

In 2017, one of the most influential papers in statistics and machine learning was published, since passing 200,000 citations. Eight years later, we’re way past that point, and the market feels like it’s boiling over. In this short piece, I want to map out a few ideas about the dynamics and forces at play. I appreciate you giving this your attention; it is all I need.


To start, I want to go back to the years 2005 to 2010. Think about the products that were launched in that short period: YouTube, Twitter, WhatsApp, Google Maps, Google Docs, Gmail, SoundCloud, Reddit, Facebook. Quite a list, right? You probably still use (half of) them today. Many people see these years as the unfolding of the web2 application layer. So it’s natural that current AI changes are often framed in that context, through comparisons of metrics and strategies.


If there’s one high-level product insight to take from those years, it’s that these products had massive network effects. They spread like wildfire. Using them pulled in more users, which pulled in even more. These companies grew insanely fast (growing and maturing the internet as a result), all chasing one thing: attention (or, using a word with a more positive connotation: usage).


What’s funny about that era is how little the products themselves changed. Gmail, for example, completely outclassed every other mail provider when it launched, offering a gigabyte of storage and a search-focused interface that redefined webmail. After that, it didn’t need to evolve much. Its core utility was clear from the start, and it stayed that way. The same was true for most tools of that time: they solved a core problem so effectively that people were happy, the products worked, and their essential job was done.


Even in B2B software, the pattern was similar. HubSpot, Zendesk or Workday: sure, they evolved, but there’s only so much innovation you can get out of a CRM or ERP in the pre-LLM era. Besides a better UX, what do you really add?


Now fast-forward to the ChatGPT era. Is it the same this time? Not really. For two big reasons.


First, LLMs spread faster than anything before. By mid-2023, everyone in tech knew what they were. Almost three years in, we’re at over a billion weekly users. Distribution isn’t a problem anymore.


The second difference is deeper. Users aren’t satisfied. They want more: more tokens, faster answers, better outputs.

Everyone seems hungry for more power. That’s completely different from 2005–2010, when people were content once something worked. Now, “good enough” doesn’t exist. The result is high churn, exploratory revenue, and paper-thin margins for many AI application startups. If we multiplied today’s compute capacity by ten, we’d probably max it out immediately. More, faster, better. That’s the loop. This never-ending need for such an essential resource isn’t new, it’s similar to how earlier core technologies were built up.


A comparison with the distribution and buildout of the energy grid or the internet is much more apt. Both innovations made users dream about the applications they would yield: motors, light, toasters; remote resource sharing, file transfers, and later, during the dotcom bubble: universal stores, instant information, seamless transactions and personal pages.


So the AI market hype makes sense, right? More is good? More energy was good, and more internet yielded some great things.
Or maybe that logic starts breaking down soon.
Let’s talk about latency, AGI, and a small dose of doom.




AGI: Et alors, boom or doom?

This brings us to the big, and often uncertain, question of AGI. But before we can speculate on boom or doom, we need to face the major technical challenges standing in the way of human-like intelligence. The first of these is a problem of experience: latency.


Take voice-based LLMs as an example: automated recruitment calls or outbound sales bots still don’t feel real. The uncanny valley is alive and well. Nobody knows what the real solution will be. The answer probably isn’t bigger models run in a central data center, but maybe smaller ones running locally. Imagine saying, “Hi Zora, make my coffee, check the weather, and give me some input on what to make for lunch.” And then actually having a short, natural conversation without those awkward pauses.


That might only happen when inference happens at the edge. Small, task-specific models running on cheap chips. But I don’t see this arriving soon, partly because I don’t drink coffee. Edge AI will shift ROI calculations and make big centralized compute investments trickier. The ROI math gets a lot harder when the smart answer needs to run on a $20 chip in your phone...
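To see why the edge matters for conversation, it helps to write the latency budget down. The sketch below uses purely illustrative numbers (the round-trip, queueing, and time-to-first-token figures are assumptions, not measurements): a central data center can be fast per token yet still lose a conversational turn to the network and the shared queue.

```python
# Illustrative latency budget for one conversational turn.
# All numbers are assumptions for the sake of the sketch, not measurements.

def turn_latency_ms(network_rtt_ms: float, queueing_ms: float,
                    time_to_first_token_ms: float) -> float:
    """Total silence before the user hears the first word of a reply."""
    return network_rtt_ms + queueing_ms + time_to_first_token_ms

# Large model in a central data center: fast inference, but the
# round trip and load-dependent queueing dominate the budget.
cloud = turn_latency_ms(network_rtt_ms=120, queueing_ms=200,
                        time_to_first_token_ms=180)

# Small task-specific model on a local chip: slower per token,
# but no network hop and no shared queue.
edge = turn_latency_ms(network_rtt_ms=0, queueing_ms=0,
                       time_to_first_token_ms=250)

print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms")
```

Under these assumed numbers the edge turn lands under the roughly 300 ms of silence a human conversation tolerates, while the cloud turn does not, even though the data-center model generates tokens faster once it starts.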


Now to the fun part: AGI.


The term is old (20th-century old...). OpenAI has used it from the start in its mission statements and communication. Microsoft made it famous when it popped up in the clauses of its investments in OpenAI. ChatGPT will tell you AGI means “exceeding human-level cognitive abilities across virtually any task,” which sounds nice until you start wondering what “human-level” even means.


If you break it down, humans do three interesting things very well: we anticipate others, we’re aware of our surroundings, and we have instincts.


Current LLMs don’t anticipate humans or other AIs. If they did, they’d already outperform us. But that would require an insane amount of compute. Not 10x more, probably 1000x. That’s not happening in the next few years.


Awareness is harder. You know where you’re sitting right now, what’s behind the door, and where you’ll sleep tonight. You can picture it. For a robot to have that, it would need eyes and ears everywhere. Technically possible; let’s be honest, who actually reads the terms and conditions before giving a device microphone and camera access? The compute bill is, again, a complete nightmare, but since most sensors in the world carry some form of structured metadata, I would guess that could enable smarter optimization, making the complete picture feasible.


And then instincts. Humans have three: safety, procreation, and social connection. Robots don’t. And unless a machine develops its own version of these drives, we probably can’t call it AGI. We could define them artificially, sure, but that’s not the same thing.


Still, imagine it. A robot-human program: a super smart, instant, self-aware software ‘thing’. The singularity that effective accelerationists talk about. Besides full-blown crazy scenarios where all of mankind gets warped into a black hole, I would suggest a simpler example: the implications for financial markets. If such an entity started trading stocks, it would probably cause a market implosion (or explosion). Exchanges have already installed coils of extra fiber to add microseconds and slow down quant traders; what would regulators do to ‘protect’ the markets from our super quant?


And since I promised a small doom scenario: do you know which public LLM actually has ‘access’ to money? Grok, the one from Elon Musk’s xAI, has around $620,000. Not in a bank account, but in a crypto wallet. (0xb1058c959987e3513600eb5b4fd82aeee2a0e4f9)


I’ll let you imagine what happens when AGI meets coded internet money.

The future is already here. It just hasn’t finished loading yet.



Final word:


To conclude, I am not sure we will ever reach AGI with the current tech stack. This is mostly linked to the points discussed above, but also to the more philosophical question of what a human really is. For the more religious readers in my audience, I want to reference Genesis 32, as it gives a glimpse of what might be the future for AGI believers and disbelievers alike: Jacob wrestling with the unknown, deceiving, believing and fighting; eventually blessed. That might be where AGI ends up: a struggle for meaning and positioning before the unfolding of a new future (for humans).


I take a very simple approach: I mostly don’t know, but I am sure of the following:

·      AGI will not be achieved in the coming three to five years.

·      Yet I think that the current evolution is an exciting time to be alive and that a future (industrial) revolution is upon us with a horizon so distant it is difficult to fathom.


To bring this back to the founders reading this: if you are building in some verticalized space and don’t know how to handle all the big AI evolutions and hype, I firstly hope this unstructured essay gave you some insight into the market, and secondly I would advise you to push back against the “more, more, more” mindset.


Be deterministic in how you use LLMs. Avoid the rectangular chat-box UX, as it often invites either no usage or abuse. Instead, start embedding LLMs where you have structured data about your clients (where context already exists or is created).
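A minimal sketch of what "deterministic" use can mean in practice: instead of exposing an open chat box, the application builds the prompt itself from structured client data it already holds. Everything here (the field names, the template wording, the `build_prompt` helper) is hypothetical, for illustration only.

```python
# Deterministic LLM embedding: the app, not the user, assembles the prompt
# from a structured record. Field and function names are hypothetical.
from string import Template

PROMPT = Template(
    "You are a renewal assistant. Client: $name. "
    "Plan: $plan. Seats used: $seats of $licensed. "
    "Task: draft a one-paragraph renewal recommendation."
)

def build_prompt(record: dict) -> str:
    """Fill the fixed template from a structured CRM record.
    The user never types free-form input; context comes from the app."""
    return PROMPT.substitute(
        name=record["name"],
        plan=record["plan"],
        seats=record["seats_used"],
        licensed=record["seats_licensed"],
    )

client = {"name": "Acme BV", "plan": "Pro",
          "seats_used": 42, "seats_licensed": 50}
prompt = build_prompt(client)
print(prompt)
```

Because the prompt shape is fixed and the context is drawn from data the product already owns, every call is scoped to one job, which is exactly the opposite of an open-ended chat rectangle.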


That’s where the real leverage lies.





Feel free to reach out:



Ruben Pauwels

Investment Manager

Angelwise

e-mail: ruben@angelwise.be



