Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’::Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.
This is the bit you are missing: the attention network actively changes the token vectors depending on context, which transfers new information into the meaning of each word.
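Here's a rough numpy sketch of that idea (toy numbers throughout; random matrices stand in for the learned projection weights of a real model). The point is just that the same token vector comes out different depending on which other tokens surround it:

```python
import numpy as np

# Toy token embeddings: 3 tokens, 4 dimensions each (made-up values).
x = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 0.0]])

rng = np.random.default_rng(0)
d = x.shape[1]
# In a trained transformer these are learned; random here for illustration.
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = x @ W_q, x @ W_k, x @ W_v

# Scaled dot-product attention: each token's output vector is a
# similarity-weighted mix of information from every token in context.
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
contextualized = weights @ V

# Each row is the original token's vector, updated with context.
print(contextualized)
```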
The network doesn’t detect matches, but the model definitely works on similarities. Words are mapped into a high-dimensional space, the idea being that the space can mathematically capture conceptual similarity as spatial proximity.
Each word is transformed into a mathematical representation that is able (or at least tries) to retain its semantic information.
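As a toy illustration (made-up 4-d vectors; real embeddings are learned from data and have hundreds of dimensions, but the geometry works the same way), cosine similarity is the usual measure of "closeness" in that space:

```python
import numpy as np

def cosine(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings chosen so related words point the same way.
emb = {
    "cat": np.array([0.9, 0.8, 0.1, 0.0]),
    "dog": np.array([0.8, 0.9, 0.2, 0.1]),
    "car": np.array([0.1, 0.0, 0.9, 0.8]),
}

# Conceptually similar words end up close together in the space.
print(cosine(emb["cat"], emb["dog"]))  # high (~0.99)
print(cosine(emb["cat"], emb["car"]))  # low  (~0.12)
```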
But the different meanings of a word belong to the word itself and are defined by the language; the model cannot modify them.