Thilo Stadelmann – How not to fear AI

13. April 2025 – Oliver Stoldt

A guest article by Prof. Dr. Thilo Stadelmann

About two years ago, I started investigating, and putting into words and talks, how a technically grounded understanding of AI can help mitigate unwarranted fears. KCF 2023 in Berlin was the kick-off, after which I received many requests to extend this material and make it available to larger audiences. So when Tilman Slembeck asked in January whether I wanted to join TEDxZHAW “Merging Worlds” as a speaker, I felt ready.

Little did I know how much preparation still awaited me. I easily invested an order of magnitude more effort here than in any other talk I have ever prepared. As the TED format asks for a different style than I am used to (shorter, story-focussed, fewer slides, speaker-centered), I had to change my whole approach to creating and then practicing a talk. I felt like a boxer who had prepared intensely for months for this one fight: I created 12 iterations of slides; wrote a script for the talk for the first time in my life; rehearsed and updated it every free minute during the last two weeks; and gave early versions of the talk to my colleagues at least once a day. Thanks to this preparation, we now have a full script of the talk available. I want to share it with you below.

Thilo Stadelmann – TEDx talk: How not to fear Artificial Intelligence

It is 1997. Chess world champion Garry Kasparov takes on Deep Blue – the most advanced chess AI system of its time. High noon for human versus machine. Then it happens, as early as the first game: Kasparov observes an AI move that he cannot understand.

This move is pure randomness, the result of a bug in Deep Blue’s software. But instead of realising the machine’s limitations, Kasparov is overcome by fear: He takes the move to be a sign of higher intelligence, and concludes that the machine might be unbeatable by his mere human strategies. That throws him off his match plan – and ultimately, he loses. Kasparov’s defeat became known as the first big loss of humankind against AI. But the human lost not because of the machine’s abilities, but because of the man’s fears.

Today, I observe something similar: In my capacity as a professor of AI, I get to discuss AI with a lot of different people. And one topic keeps surfacing: people’s fear of AI. They share their concerns of being displaced from their jobs by a future chatbot; of becoming so dependent on some system that they lose the critical skills for mastering their lives independently; of losing their very lives to an AI overlord turning against them.

How warranted are such fears? Or: How soberly can we afford to think about one of the most defining trends of our times? I would like to help you answer that question based on a clear understanding of what AI is and isn’t, and of where some of the most intimidating ideas on this topic come from.

What is Artificial Intelligence?

So what is AI? It has been defined as the simulation of intelligent behaviour with a computer. AI is thus not about the creation of intelligent beings. It is about mimicking the result of intelligence – by any means available, a whole toolbox full actually, depending on the intended behaviour. Arguably, the most prominent means of AI in the last two decades has been “machine learning”. We use machine learning when the behaviour we want to simulate cannot be described by a set of rules.

Consider classifying a set of images into the categories “cats” and “dogs”. We cannot write down a set of rules – but we can give the system a set of examples of inputs (visual features of images) and the corresponding outputs (the category). The two classes can then be separated by a function relating input to output. Learning means systematically manipulating the parameters of that function until it fits the given examples. This is done with a mathematical device we all learned about in school: the chain rule of differentiation.

Now, machine learning works not only with simple visual features and straight lines. We can put in all the pixels of the images. Then use a more wiggly function template called a “neural net”. And then scale to more data to learn a more nuanced relationship.
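To make this a bit more tangible for the technically curious reader, here is a minimal sketch of mine (not part of the talk): a tiny “wiggly function” – a neural net with a handful of invented parameters – whose parameters are nudged via the chain rule until it fits a few made-up cat/dog examples.

    # A deliberately tiny sketch: "learning" as adjusting the parameters of a
    # wiggly function until it fits example input/output pairs. The gradients
    # below are exactly the chain rule of differentiation at work.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy examples: two invented visual features per image; label 0 = cat, 1 = dog.
    X = np.array([[0.2, 0.8], [0.3, 0.9], [0.8, 0.2], [0.9, 0.3]])
    y = np.array([[0.0], [0.0], [1.0], [1.0]])

    # A small "wiggly function": one hidden layer of 4 units (a minimal neural net).
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(2000):
        # Forward pass: the function's current guess for every example.
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)

        # Backward pass: the chain rule gives the direction to nudge each parameter.
        d_out = (p - y) / len(X)                # error signal at the output
        dW2, db2 = h.T @ d_out, d_out.sum(0)
        d_hid = (d_out @ W2.T) * (1 - h ** 2)   # error propagated through tanh
        dW1, db1 = X.T @ d_hid, d_hid.sum(0)

        # "Learning": systematically manipulate the parameters to fit the examples.
        for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            param -= 0.5 * grad

    print(p.round(2).ravel())  # close to [0, 0, 1, 1]: cats and dogs separated

Real systems follow the same recipe, just scaled up enormously: more inputs, more parameters, more examples.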

Thilo Stadelmann: What is a large language model?

It also works with other input-output pairs, e.g. text as input and its likely continuation as output. This gives us “large language models” – the engine behind products like ChatGPT, and arguably the pinnacle of modern AI.

Quantitatively, the input context can become as large as millions of words. The respective function will have billions of parameters to fit such data well. And it needs a full internet of text to be well trained.
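To give a feel for this “text in, likely continuation out” relationship, here is a toy sketch of mine (not part of the talk, and using simple word counts where a real large language model uses a neural net with billions of parameters):

    # Toy "language model": for every 2-word context seen in the training text,
    # count which word follows it; then continue a prompt with whatever word is
    # statistically most plausible next.
    from collections import Counter, defaultdict

    training_text = "the cat sat on the mat . the dog sat on the rug .".split()

    follows = defaultdict(Counter)
    for i in range(len(training_text) - 2):
        context = tuple(training_text[i:i + 2])
        follows[context][training_text[i + 2]] += 1

    def continue_text(prompt, steps=4):
        """Repeatedly append the most plausible next word given the last two words."""
        words = prompt.split()
        for _ in range(steps):
            candidates = follows[tuple(words[-2:])]
            if not candidates:
                break
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(continue_text("the dog"))
    # -> "the dog sat on the mat": a plausible continuation of the training text,
    #    even though that text never says any dog sat on any mat.

Even this miniature hints at the character of the approach: the continuation is plausible given the data it was trained on – it is not checked against facts.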

The nature of statistical models

What can we say qualitatively about such a model?

Well, it is a statistical model. That differentiates it quite drastically from how humans work. For example, a statistical model has no concept of truth and facts – it has only learned about statistical plausibility. On average, this will be very useful and may even outperform humans. But in any individual case, it may be wrong, because of this fundamental limitation: no facts and no truth, just coincidence and correlation instead of causation.

Let’s make this concrete: I asked one of the leading so-called “reasoning” models the following question: “The surgeon, who is the boy’s father, says ‘I can’t operate on this boy, he is my son!’ Who is the surgeon to the boy?”

The answer should be straightforward and is even contained in the text: the surgeon is the boy’s father. Now the model computes for 10 seconds and replies: “The surgeon is the boy’s mother.” Which is profoundly stupid. But to the model, this actually makes sense – and it tells us why: “The riddle plays on the assumption that a surgeon is male,” it explains. And indeed, variations of my question exist in abundance on the web as tests for our own human biases: namely the gender bias that usually associates males with the role of a surgeon. So the model has seen all of these during its training and learned the utter statistical implausibility of answering anything “male” to a question that looks even remotely like mine.
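To see this mechanism in miniature, here is a deliberately crude caricature of such a pattern-matcher – my own sketch, not the actual model, which interpolates over a whole internet of riddle variants rather than three of them:

    # Caricature of a purely statistical answerer: it matches a question against
    # patterns it has seen before and returns the most plausible stored answer.
    # It never "reads" the question for facts.
    training_pairs = [
        # The classic riddle, found on the web in countless variations,
        # always resolved the same way:
        ("a father and his son have an accident the surgeon says i can't "
         "operate on this boy he is my son who is the surgeon", "the boy's mother"),
        ("the surgeon says i can't operate on this boy he is my son "
         "how is this possible", "the surgeon is the boy's mother"),
        ("riddle about a surgeon who cannot operate on his own son", "the mother"),
    ]

    def plausibility(question, seen_question):
        """Crude word-overlap similarity, standing in for learned statistics."""
        q, s = set(question.split()), set(seen_question.split())
        return len(q & s) / len(q | s)

    def answer(question):
        # Return the answer attached to the most similar question seen in training.
        best = max(training_pairs, key=lambda pair: plausibility(question, pair[0]))
        return best[1]

    question = ("the surgeon who is the boy's father says i can't operate on "
                "this boy he is my son who is the surgeon to the boy")
    print(answer(question))  # -> "the surgeon is the boy's mother", although the
                             #    question literally states the surgeon is the father

A real model is vastly more sophisticated, of course – but the failure mode is of this kind: plausibility by pattern, not truth by reading.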

So the model’s answer is plainly wrong – and totally plausible for any AI system built according to the principles of machine learning at scale. Unfortunately, we don’t have other principles. Not now, and none on the horizon of research. So while examples come and go, fundamental limitations like this will stay with us, also with GPT 5, 6, 7 et cetera.

A difference in kind to human nature

How does this compare to humans? We said AI simulates intelligent behaviour. And we saw it does so by completely different means than human intelligence. These means have fundamental limitations – we just saw one example, the lack of veracity; there are many more.

These different means constitute a difference in kind to human nature. Think of it using the analogy of a musician and a DJ: While a DJ simulates certain aspects of creating music very well, their method of music creation is, by design, not general. There are so many aspects of music beyond the method of turntables and remixing. Certain genres. Certain techniques. Certain settings. Similarly, AI does not simulate humans, but certain carefully designed aspects of human behaviour, with a very specific method of cleverly interpolating between pre-recorded behaviour samples.

What then is the future of such AI? It is not Artificial General Intelligence, whatever that means precisely. It is not an “AI overlord” on a par with humanity.

The source of AI fears in an unexpected world view

Where, then, does this fear of AI come from?

It does not have a strong basis in the technology we have, or in the underlying science we just saw. Here is the thing: Such fear is rather based on widespread narratives. And these dystopian narratives are purely based on a world view – on science fiction, not on tech.

Let me explain. AI ethicists Timnit Gebru and Émile Torres recently analysed the respective world views and coined the acronym TESCREAL to refer to Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, et cetera. They show how these philosophies are widespread in the tech industry, and how they profoundly shape the global narratives on AI. You find traits of TESCREAL in movies like “The Matrix”, or in the books of Prof. Harari. In fact, they have become the mainstream school of thought in Silicon Valley. See for example the 2023 “Open Letter” calling for an AI research moratorium to circumvent existential risk. It was, as co-originator Prof. Max Tegmark of MIT is very open to explain at length in a podcast, purely based on world view, not on a single technical argument.

Instead, the TESCREAL narrative for AI goes like this: Humans are nothing but information processors – just on biological, decaying hardware. This makes them akin to machines, only inferior, because of the fragile hardware (that’s the Rationalism above). If humans could gain intelligence, then nothing prevents machines from reaching the same. Soon. And this intelligence will just increase ever further, to AGI and beyond (that’s the Singularitarianism). So humans should upgrade themselves to become more like machines (that’s the Transhumanism). It goes without saying that this view is highly contested across different scientific disciplines.

Summing up, the TESCREAL philosophies are characterised by having little regard for human worth and dignity. It comes as no surprise that they exaggerate machine competence and make people feel small, intimidated and worthless. But this is philosophy, not inevitable science. If TESCREAL is not your world view, there is considerably less to fear about AI.

Source of hope: Appreciating your human worth and dignity

So how not to fear AI? It begins with the realisation that the AI you fear does not exist. AI is a tool. If that helps, rename AI to “EI” – Extended Intelligence, as we have recently argued in an article. Because extending our own human capabilities is all AI can do: Help us to improve our lives. Not replace us. So an insurmountable categorical difference remains: Agency ultimately stays with us, for better or worse.

This means that AI is not coming after your freedom, nor after anything else in your life. That is good news. But you might lose your freedom to AI in a different way – by giving it up voluntarily! We need to look at how this can happen:

First, we might surrender our freedom prematurely to non-existent machine competence. Like this: Oh AGI, you calculate probabilities precisely and store so many patterns: What shall I eat today? What vocation shall I train for? Whom shall I marry? Shall I marry at all?

Don’t laugh – this has happened before in the history of the human-AI relationship. Remember Kasparov vs. Deep Blue from the beginning? Kasparov didn’t primarily lose against an excellent chess computer. First and foremost, he lost the battle in his mind, against his assumption of an unconquerable AI.

Second, we might become defenceless against the endless conveniences that AI tools offer, and hence fail to become the person we ought to be. It is part of our human nature that we need to learn, mature and grow. And all growth includes an element of pain – think of sports. If we forgo too many opportunities for growth – by, for example, letting AI write the essay, solve the task, look up the information – we might forfeit future freedom by not forming the character and skills necessary to wield such powerful tools.

Fortunately, there is a mitigation strategy against these – the only two ways in which you can lose to AI: Know your worth and dignity as a human being! If you can say, with the authors of the Universal Declaration of Human Rights: I am wonderfully made. I have purpose. I love and am loved. I thrive on human relationships. I am endowed with exceptional skills, but I am more than my skillset.

Then you will not feel intimidated by a powerful tool. You don’t surrender to it. And you allow yourself even the pain that is necessary for growth.

Real and hypothetical AI Risk

We are almost there. But… aren’t there real risks of AI, philosophy or not? Yes, absolutely.

How not to fear AI

So that’s how not to fear AI: Consider it as Extended Intelligence. It is a tool, built on probability functions, and has fundamental limitations. Don’t overestimate its scope.

Reject fears purely based on a world view that you probably don’t share and that is rooted in a view of humanity and of technology that comes from science fiction, and not from reality. Instead, contribute to creating the future you want to live in. Use AI where appropriate. Tools are for the worker.

Enjoy.

Prof. Dr. Thilo Stadelmann

Expert on Artificial Intelligence, Scientist & Pioneer