Thought — 3 Min Read
AI Hallucinates. So do we!
by Case Greenfield, March 7th, 2024
Maybe the AI algorithms being developed today, such as ChatGPT, have much more in common with the workings of our own brains than many people would have thought possible. Including the same flaws, such as hallucinations.
Since the release of ChatGPT, AI has been the hype of the day all around the world. And rightly so. AI will have a profound impact on our lives, on art, and quite likely on the course of humanity.
Here, I want to focus on one specific element of the so-called Foundational Models, the Large Language Models (LLMs) that are used as the foundation – hence the name – of the Generative Pre-trained Transformers (GPT), such as ChatGPT and GPT-4. (Jargon, hey haha.) And there are many more advanced AI applications around and emerging, e.g. image and video generators such as DALL-E and Sora. (These examples are all from OpenAI. Obviously, there are other companies that have developed very potent AI algorithms.)
Anyway, I want to focus on the general ‘accusation’ that these AI algorithms sometimes ‘hallucinate’. Here’s an extremely interesting talk by Canadian professor Geoffrey Hinton on AI in general, and especially the issue of hallucination, or ‘confabulation’, as he prefers to call it. It takes about half an hour, and sometimes he uses jargon, but still the talk is very much worth listening to. Guaranteed!
And below is another very interesting talk, by British philosophy professor Andy Clark. The key point he makes, to me, is this. The brain is a prediction machine. (Ultimately to increase our chances of evolutionary survival. I have said it many times.) And to make predictions the brain uses a model, a model of the world. (Aha, mind model!) Also, the brain gets signals from our environment (through our eyes, ears, etc.; you can compare it with the black box idea of the brain by professor Lisa Feldman Barrett). Now, the crucial point is this.
The brain continuously balances the signals from outside against the model in our head. And it does so by giving more or less weight to the one or the other.
Now, if the signals correspond with the model, all is fine. “Proceed: signal equals model.” But … the problem arises when signals and model conflict. Cognitive dissonance! What to do? The brain then attaches weights to each. It determines what is more important, what should be the guideline for action: the signal or the model. (How it does that strongly relates to the System 1 and System 2 theory of professor Daniel Kahneman.)
You can compare it with the GPS in your car. The GPS uses a map. You may have an outdated version of the map or missing traffic information. So, you arrive at a point where the map says ‘continue straight’, while in reality the road is blocked. Conflicting signals! Wise people look outside (signals from our environment!) and take another road. Some people stubbornly, or slavishly, follow the instructions of the GPS (the mind model!), because without the guidance of the map they feel lost and unsure, and continue straight … where there is no road!
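For the technically inclined, here is a minimal toy sketch in Python of that weighting idea. It is my own illustration, not how Hinton or Clark formalise it; the function and the name trust_in_signal are invented purely to make the point.

```python
# A toy illustration of weighting: blend the mind model's prediction with the
# outside signal, depending on how much each source is trusted.
# All names and numbers here are invented for illustration.

def combine(prediction: float, signal: float, trust_in_signal: float) -> float:
    """Blend the internal prediction with the outside signal.

    trust_in_signal runs from 0 to 1:
      0.0 -> rely entirely on the mind model (the stubborn GPS follower)
      1.0 -> rely entirely on the outside signal (look out the window)
    """
    return trust_in_signal * signal + (1.0 - trust_in_signal) * prediction

# The blocked-road situation, seen with different weightings
# (0.0 = "road is clear", 1.0 = "road is blocked"):
map_says = 0.0   # the outdated map: "continue straight, the road is clear"
eyes_say = 1.0   # looking outside: "the road is blocked"

print(combine(map_says, eyes_say, trust_in_signal=0.9))  # 0.9 -> trust your eyes, take another road
print(combine(map_says, eyes_say, trust_in_signal=0.1))  # 0.1 -> trust the map, drive on regardless
```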
And this, of course, is very interesting. How ‘correct’ is our model of the world, our mind model? That is what psychologists sometimes call ‘reality testing’. What shapes our mind models? Is it a coincidence that our mind models usually overlap with our personal and group interests? And so on, and so on.
An awful lot can be said about mind models. It is the main subject of my art, as you know. One quite provocative idea is this. The Bible, Genesis 1:27, says the following:
God created mankind in his own image,
in the image of God he created them (…)
One may seriously consider, based on the above theory of the mind, whether it should not be the other way around: “Man created God (in his own image)” rather than “God created man”? Because, what a coincidence, how very human-centric, that God, the almighty being that supposedly created and rules the entire universe with its billions of diverse planets, looks exactly like Homo Sapiens, a species incidentally totally suited for life on planet Earth.
Here you see just one example of the quite dramatic consequences these philosophies can have.
Hallucinations
But let’s go back to the central point of this story: hallucinations. So, AI algorithms such as ChatGPT hallucinate. Well, so do we. So does our brain. Just think of the John Dean example, related to the Watergate hearings, that professor Geoffrey Hinton gave in his speech (see video above, around 17:00 min). Or think of the example of the construction worker with the nail (not) in his foot, with which professor Andy Clark started his speech.
We live mainly in the world of our mind model, even if we get serious signals from the real world that it is wrong. And, depending on the weights in your brain, some people have better reality testing than others. The worst cases, obviously, are people who suffer from psychiatric diseases, such as psychosis. They tend to live totally in their own world, blocking all signals from outside, even “seeing” and “hearing” things in their mind that do not exist in reality. And autism is an interesting phenomenon in this respect. Some autistic people seem to put extreme weight on observing details of outside signals, while giving less weight to their – e.g. social – mind model (see the Andy Clark video, around 29:30 min). Another interesting example is how alcohol and drugs may influence our reality testing.
So what about AI algorithms? Well, I started this story with the so-called “foundational models”. Basically, foundational models are to AI algorithms what mind models are to our brain. It is the ‘model of the world’ that the AI algorithm uses to predict, well, the next word, the next pixel, the next bit of sound, etc.
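To make that concrete, here is a minimal toy sketch in Python of what ‘predicting the next word’ boils down to. The tiny vocabulary, the scores and the sampling step are all invented for this illustration; a real foundational model produces such scores with billions of learned parameters.

```python
# A toy illustration of "predicting the next word": turn raw scores into
# probabilities and pick a next word. The vocabulary and scores are made up;
# a real model computes such scores from its learned model of the world.
import math
import random

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Prompt: "The cat sat on the ..." -- candidate next words and made-up scores.
vocabulary = ["mat", "roof", "moon", "piano"]
scores = [2.1, 1.2, 0.3, -1.0]

probabilities = softmax(scores)
for word, p in zip(vocabulary, probabilities):
    print(f"{word:>6}: {p:.2f}")

# Sampling (rather than always taking the most likely word) is one reason the
# output varies -- and one way fluent but untrue continuations can appear.
next_word = random.choices(vocabulary, weights=probabilities, k=1)[0]
print("next word:", next_word)
```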
Clearly, AI algorithms easily beat the human brain when it comes to knowledge. ChatGPT knows almost infinitely more than any individual human. But there’s more. We always thought of AI algorithms as a sort of ‘savant’: they know a lot, but they understand nothing. Well, think again. These days, the foundational models are getting so extremely sophisticated that professor Geoffrey Hinton suspects they may have developed a form of reasoning.
And this is only the beginning. Where will it lead in ten years’ time?
Interesting times ahead. For sure!
Questions
It does leave me with lots of questions, though.
How do mind models in our brain develop?
How much is nature, how much is nurture?
How can we change mind models in our brain?
Can we modify our natural weight balance?
And as far as AI is concerned:
What data were used to create a foundational model?
How representative are these data for the real world?
How does reasoning emerge from correlated data?
How smart will AI become? How bad is that for us?
And so on, and so on.