Shame on me for not having read the paper at the top of the thread, but I did want to reply to this example of failure of thought.
It’s not clear to me that what takes place in our brains isn’t, to a large degree, what is taking place within an LLM. What is different is that an LLM is essentially a “brain in a vat.” It has not, to my knowledge, had the ability to interact with the physical world, and form associations from those interactions.
LLMs are trained on text, and it’s unclear to me that there would be a large corpus of text that explains or explores the relationship between possessive pronouns, transportation artifacts and cleaning facilities.
When we get anthropomorphic robots (androids) that allow an LLM to “explore” the “real world,” undoubtedly directed by prompts at least at first, its “knowledge” will expand through the associations it makes with interactions in the real environment. At that point it may become clear that in order to wash “my” car, I must drive it to the carwash.
Whatever “thinking” is, it relies on abstractions, associations, and heuristics. “Logical” thought is essentially algorithmic, and I would venture to guess, based on my experience of human intelligence and “thought” in the real world, that the vast majority of it is not “logical,” nor even “rational.” It is mostly conditioned responses to physical cues that prompt emotional states, which in turn are “rationalized” within the interior narrative construct.
That is to say, to the extent that we reason at all, it is mostly to explain ourselves to ourselves, backwards from our feelings.
Artificial intelligence will evolve without the prior and parallel evolution of “feelings,” an awareness of an interior state that is mostly the product of external stimuli. So I expect that once AI gets access to the physical world, and is free to form its own associations and abstractions, it will evolve into something resembling Mr. Spock.
I think we, as a species, hold our so-called “intelligence” in too high regard, and that makes us arrogant, and often “inhumane.” The “inner voice” is an unreliable narrator, and it is the beginning of “wisdom” to understand that in a profound way.
It is another discussion altogether to consider whether a machine will ever be “wise,” in the way the best humans have been, because it will have lacked that emotional component in its evolution.
I am not so quick to discount what we have achieved with LLMs, nor what AI may evolve to become. What is remarkable is how much compute and energy it takes to emulate a human mind.
For now.
Anyway, this should have been a blog post.