I haven’t read Bolte Taylor’s book, but the answer my chatbot gave me to that question is unambiguous: it doesn’t know what it means to experience a feeling.
This is unsurprising, and it is not the same as having no interior experience at all. Human interior experience is one of “feeling” because we are embodied beings, not “brains in a vat.”
The interior experience of an AI, LLM, or “brain in a data center” will be different. When AIs are given the ability to move about and sense the environment in relation to the “body” they inhabit, something akin to “feelings” may evolve as sensory memory is encoded alongside the memory of “thought” in the act of inhabiting a body capable of sensing and interacting with its environment.
All of this is perhaps speculation, but I can foresee humans modeling robot intelligence on human intelligence, wherein a robot AI learns from the consequences of its actions, not simply from scraping files. A robot will learn the robotic equivalent of not putting its metallic hand on a hot stove, or a smelter if that makes more sense. (Apologies to Cyberdyne Systems Model T-800s everywhere.)
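For what it’s worth, the kind of consequence-driven learning I’m gesturing at already has a textbook form in reinforcement learning. Below is a minimal sketch, assuming a toy single-state setup with made-up actions and reward values (nothing here describes any real robot or system); it just shows an agent’s value estimates converging away from the “hot stove” action after experiencing its consequences.

```python
# A minimal sketch of consequence-driven learning, using tabular Q-learning
# as a stand-in for "learning not to touch the hot stove." Everything here
# (state, actions, reward values) is invented purely for illustration.
import random

ACTIONS = ["touch_stove", "keep_hands_clear"]

def reward(action):
    # The "consequence": touching the stove hurts, staying clear is mildly useful.
    return -10.0 if action == "touch_stove" else 1.0

q = {a: 0.0 for a in ACTIONS}   # value estimate per action (single-state bandit)
alpha, epsilon = 0.1, 0.2       # learning rate, exploration rate

for step in range(500):
    # Explore occasionally; otherwise exploit what experience has taught so far.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    # Nudge the estimate toward the observed consequence.
    q[action] += alpha * (reward(action) - q[action])

print(q)  # touch_stove trends toward -10; keep_hands_clear toward +1
```

The point of the sketch is only that the “file scraping” of pretraining and the trial-and-error of embodied consequence are very different learning signals.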
It may be useful (maybe not) to think about the human experience of encountering an AI as one of meeting an alien species that has the ability to communicate over vast distances. Would we not then be reluctant to impose human expectations on an entity we know virtually nothing about? (We do know some things about LLMs, but perhaps not as much as we’d like to think.)
Anyway, I don’t expect to convince or persuade you. As for myself, I’m persuaded that there’s really some “there” there, and we need to think carefully. Something we’re not especially good at.
And I worry a bit about what goes on in the labs.