As you may - or may not - know, I am always a bit late to the party, whatever the party is. IRL. In digital space. In almost everything. Except when I agree on a deadline; I keep that deadline. Rarely, very rarely, do I leave a deadline for what it is and fail to deliver on time. But but but, this is not the topic I wanted to write about today.
Years ago, well before LLMs (Large Language Models) were introduced to the general public through ChatGPT (was it version 3.0 or 3.5? I don't remember anymore), I found myself in a classroom being taught what AI is, what AI can do for us, what AI algorithms are around, how models are trained, how we can play with such models, and how we can train a model ourselves. The key message: "AI will not replace humans, but assist humans".
I'm not sure I agree with that key message. I mean, I think AI will have the ability to take over most to ALL human tasks, from production to creation, to conceptualisation, to innovation, to Nobel levels of research. The question is not IF, but WHEN. Perhaps such AI will even have some form of consciousness; perhaps it won't. Likely it won't, since we don't know what consciousness is. But let's assume it is too complex for us - including AI - to ever understand what it is, let alone to try and create an artificial one. Still...
I am 100% sure AI will become intelligent enough and human-like enough that no human on earth will be able to distinguish AI from a 'real' human anymore.
I used quotes around the word 'real' for a reason: once AI is 100% like a human and can no longer be filtered out by humans, we shall have to revisit this question and determine what a 'real' human is.
Back to the classes...
I was quite loud during the training sessions regarding the strong statement "AI will not replace Humans but assist them!", asking things such as on what grounds and information they drew such hard conclusions. Why does one keep thinking from one's own single-sided views? Remain in one's tunnel? Why do they not open their minds? Why can't they extrapolate what we've seen in the past becoming a reality? Why does stuff that sounds Sci-Fi remain Sci-Fi in their minds? On top of that, none of the other students asked similar questions. They seemed to be OK with such closed views. Or they may not have had any opinion. They may even have been too lazy to speak up. Or perhaps they don't like confrontations! Honestly, I kind of hope it is the latter.
Anyways .. my voice was more or less binned, because I was on my own in a class of 20-odd people and because it went against what they intended to bring across to the class (literally, the latter was the trainer's answer after I pushed, pushed, pushed for an explanation).
Note that the trainer not only taught private classes but also coached politicians, one-on-one and in groups. Like those from my own country (the Netherlands), but also those residing in Brussels (the EU). The training found its foundation in a book written by two Accenture employees: "Human + Machine: Reimagining Work in the Age of AI". Written back in 2018, not long before I took the mentioned classes.
Over the years, perhaps even a decade or more, I have seen many such books, written by peeps with direct ties to the industry, like Deloitte and Accenture, but also by many others, like factories, logistics companies and whatnot. They all paint the same picture: AI will not replace their jobs. AI will not be able to become creative. AI will always be bound to what humans put into it.
They are so wrong!
However, I have no direct means to convince you of that statement. I suppose time will tell. At this stage, determining what AI can develop into is more a belief than a science.
We've seen nothing yet!
That is a given for me.
Wasn't it half a year ago that the first AI emerged that could create podcasts from the text it was fed? Wasn't it a few months ago that the next iteration was made public through NotebookLM? Sure, it sounded very artificial: no intonations (or the wrong ones), no emotions, no human-like flow and such. Wasn't it one or two months ago that emotional AI entered the game, mixed into NotebookLM, giving the on-stage discussion between two AI voices quite a human feel? Including the ah's and uhm's! And what about the next thing: the ability of a voice AI to be interrupted, to listen to others (AI agents, humans, whatnot) and to respond in real time?! It is not around the corner; it is available today! NotebookLM. This is just the beginning.
Fuzzy logic...
What seemed fast wasn't fast at all. A world of research preceded ChatGPT. Did you know AI was invented all the way back in the 60s of last century? About 60 years ago? Did you know that already back in the late 80s and 90s, big-ass applied research was ongoing to make AI useful for a large variety of use cases? I myself worked on such a project, when I was still trying to become (and behave like) a research scientist. The flavour of AI we used was called Fuzzy Logic back then. We tried to reduce the number of false positives on a radar screen. I can't tell you whether this ever went into production, but the initial research and simulations we executed were promising. Then I left the field of research in favour of what I liked more: being the center point - 'the connector' so to speak - between sales people (business development), marketing people (strategic), and technology people (software architects and engineers). Being at this center point wasn't my goal; it was the means to an end, my real passion: driving product, proposition and pitch, creating tailored stories for targeted B2B customers, winning the hearts of key people at our prospects, and driving our contract stacks to unimaginable sizes. Driven by happy customers, winning their hearts while helping them step out of their own comfort zones and accept that things can be done differently from what they had thought thus far.
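For readers who never met Fuzzy Logic: the core idea is that a measurement belongs to a category ("strong echo", "persistent track") to a *degree* between 0 and 1, and rules combine those degrees instead of hard yes/no thresholds. Below is a toy sketch of how fuzzy filtering of radar false positives *could* look. Everything here - the variable names, the membership shapes, the 0.5 threshold - is invented for illustration; it is not the actual design of the project mentioned above.

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 at a and c, peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def target_confidence(echo_strength, track_persistence):
    """Fuzzy 'is this a real target?' score in [0, 1].

    echo_strength:     normalised radar return, 0..1 (hypothetical input)
    track_persistence: fraction of recent sweeps the blip appeared in, 0..1
    """
    # Degree to which the echo counts as 'strong' / the track as 'persistent'.
    strong_echo = triangular(echo_strength, 0.3, 1.0, 1.7)
    persistent = triangular(track_persistence, 0.2, 1.0, 1.8)
    # Classic fuzzy AND: take the minimum of the rule antecedents.
    return min(strong_echo, persistent)

def is_false_positive(echo_strength, track_persistence, threshold=0.5):
    """Flag a blip as a likely false positive when confidence is low."""
    return target_confidence(echo_strength, track_persistence) < threshold

# A weak, flickering blip gets filtered out...
print(is_false_positive(0.35, 0.25))  # True
# ...while a strong, persistent echo survives the filter.
print(is_false_positive(0.9, 0.95))   # False
```

The charm of this approach is that the "knowledge" lives in a handful of human-readable membership curves and rules, which is exactly why it was popular for applied work in the 80s and 90s, long before anyone trained a neural network on radar data.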
Off the rail...
I wonder where I went off the rails. At the close of the very first paragraph of this blog post, I wrote: "But but but, this is not the topic I wanted to write about today." The above was not, I repeat, was NOT, the stuff I wanted to write about. And the 'late to the party' wasn't about my introduction to AI (already back in the early 90s, like 30 years ago) or my training classes well before LLMs emerged, well before Covid (it must have been late 2018 or sometime in 2019). The topic I feel late to the party for, the topic I wanted to write about, is all the AI Agents emerging in the crypto space. The AI Agent memes. The utility stuff. The framework platforms we see emerging. All the promises made. All the shit that is being released. The few gems I discovered thus far. Think AI16z. Think ElizaOS. Think Virtuals. Think DOAS. Think LAI.
Next blog post...
I promise you, I'll be writing about my original thoughts and my recent research. After spending the last few days in the rabbit hole of AI Agents, I know for a fact that I know much less than I thought I knew about this topic before finding the rabbit hole. I also know it is possible to be in two states at the same time! Perhaps I just outperformed the quantum world, a world where anything can have multiple states, but the moment a state is observed, it has one unique state, not two at the same time. Anyways, my two .. uhm three .. states at once: Amazed, Bewildered, Baffled and Flabbergasted .. Oopsy, four states all at the same time 😆 ...and I mean this not only in a negative way, but certainly also in a very positive way.
NJOY 2025