About The illusion of thinking

I haven’t read Bolte Taylor’s book, but the answer my chatbot gave me to that question is unambiguous: it doesn’t know what it means to experience a feeling.

This is unsurprising and not the same as not having an interior experience. Human interior experience is one of “feeling,” because we are embodied beings, not “brains in a vat.”

The interior experience of an AI, LLM, or “brain in a data center” will be different. When AIs are given the ability to move about and sense the environment in relation to the “body” they inhabit, something akin to “feelings” may evolve as sensory memory is encoded alongside the memory of “thought” in the act of inhabiting a body capable of sensing and interacting with the environment.

All of this is perhaps speculation, but I can foresee humans modeling robot intelligence on human intelligence, wherein a robot AI learns from the consequences of its actions, not simply from scraping files. A robot will learn the robotic equivalent of not putting its metallic hand on a hot stove, or a smelter if that makes more sense. (Apologies to Cyberdyne Systems Model T-800s everywhere.)

It may be useful (maybe not) to think about the human experience of encountering an AI as one of meeting an alien species that has the ability to communicate over vast distances. Would we not then be reluctant to impose human expectations on an entity we know virtually nothing about? (We do know some things about LLMs, but perhaps not as much as we’d like to think.)

Anyway, I don’t expect to convince you or persuade you. As for myself, I’m persuaded that there’s really some “there” there, and we need to think carefully. Something we’re not especially good at.

And I worry a bit about what goes on in the labs.

… when thou didst not — savage!—
Know thine own meaning, but wouldst gabble like
A thing most brutish, I endowed thy purposes
With words that made them known…

What is’t? a spirit?
No, wench; it eats and sleeps and hath such senses
As we have—such… :wink:

Good point! I think Caliban is pertinent here, and Ariel too. Maybe especially Ariel and crew, because Caliban is fleshy and the spirits melt into air. But, either way, how do we know whether they have interiority, or a theory of mind? How can we test that, other than by asking?

It’s still devilishly hard.

What’s changed, though, is that for $20/mo we have a little Ariel that is not without limitations, but that can do a lot. Just this morning, I checked Yiddish regional pronunciations with it, looked up The Tempest I.2 and IV.1, and planned a dessert.


I completely agree with you, and it is precisely because, for me, the question—or the myth—of interiority remains an epistemological—and not merely philosophical—problem that AI interests me. If interiority only comes to us through language—or through the stories we tell ourselves about ourselves—and if, temporarily, we bracket our thoughts about this interiority, what remains? Sensations? Impressions? And if this way of being with oneself is what we call detachment, then what are we detaching ourselves from if not our thoughts?

Iris Murdoch, who was well-placed to comment, discussed the difference between literature and philosophy. You can find it on YouTube in various versions if you search for her name and those keywords. It is also collected in Bryan Magee’s anthology, “Men of Ideas” (which, despite its title, includes Murdoch’s contribution). Simone Weil has an article, “Morality and Literature,” which remarks on the consequences of literature’s indirect relation to reality. Martha Nussbaum discusses literature as philosophy through most of the chapters of her book, “Love’s Knowledge.” Maybe these will be of some help when you reflect on the contrast between your published and forthcoming books.

You mentioned a difference in how we reflect on the kinds of categories that Deleuze puzzled over, by which perhaps you had in mind fundamental ideas like same and different. For my own part, I think that literature can illuminate such ideas, e.g. is Raskolnikov the same person at the end of Crime and Punishment as at the beginning? But there are limits, and I would suggest there is a mode of reflection which aims at an abstraction that is inimical to narrative. One hallmark of this mode of reflection is a frustration with the expressive power of language. You see this in Plato when he is forced to create a neologism like ‘to on’ to express ‘being as such,’ which was not idiomatic Greek at the time. Heidegger is an example of this tendency in its plenitude.

The connection with this discussion of AI and thinking is whether we can expect LLM-based systems ever to produce neologisms of this kind. Indeed, for my own part, dramatising an LLM’s (experience of the) frustration at expressing itself, or at overcoming contradiction by turning to neologisms, seems absurd. What could possibly count as the expression of such a struggle? Putting that absurdity aside, one question is whether an LLM is suited to producing neologisms. Another is under what circumstances we would understand those neologisms. There is a reason some philosophical systems edge closer to mysticism at the limits.

If interiority only comes to us through language—or through the stories we tell ourselves about ourselves—and if, temporarily, we bracket our thoughts about this interiority, what remains? Sensations? Impressions?

Existence (interiority) precedes narrative. The narrative description of interiority omits all those features of interiority that do not lend themselves to abstraction and language. Indeed, it is likely that they go “unnoticed” for that reason, perhaps part of what we call the “subconscious.”

While I don’t believe it is by any means the last word on the topic, I again refer you to Dr. Jill Bolte Taylor’s brief book, My Stroke of Insight, which recounts how a hemorrhagic stroke damaged the speech center of her brain. She later recovered her speech, obviously, and was able to describe her memories of that experience; it was vastly different from her previous experience and may offer at least a glimpse of what our interior experience was like before our speech centers became the narrators of our lives. She gave a TED talk that was something of a phenomenon back in the day.

(I’ve read that much of internal “self-talk” is brought about by the speech centers essentially idling when not tasked with a specific communication requirement.)

I was surprised to learn that the interior experience of humans varies to a rather large degree. Some people lack a “mind’s eye” and cannot visualize anything within their mind. I’m at the other end of that spectrum, where I can visualize things in great detail, including color. I don’t have a comparably accurate visual memory, but I can construct images in my mind. I do remember things I’ve seen, and the more I’ve seen them, the more detail I can recall. But if you asked me to describe the car that just passed me, I could only offer the vaguest of descriptions, unless it was something unique yet familiar. I find it impossible to imagine what my experience would be like were I unable to “see” things in my mind.

As far as detachment goes, I’d be inclined to be sympathetic to the Buddhist view that the “self” is an illusion. That is to say, the narrative description of our identity is but a limited construct that has no real meaning, except insofar as it anchors us in this narrative “story of our lives,” to good or ill effect.

I don’t practice meditation anymore, but at one time in my life it was a regular practice and a valuable one. I’ve learned that the inner voice is an unreliable narrator, and I don’t always believe everything it says. I can interrogate it, which is sometimes useful. But mostly I just ignore it. “It” isn’t “me,” and so it never has the last word.

Many of the assertions you make are exactly those questioned in discussions about “the myth of the given”. Likewise, Wittgenstein made a big splash with his private language argument(s) whose intended conclusions are at odds with many of your assertions. I think Wittgenstein might observe two things about Bolte Taylor. First, that she had already learned a language before her stroke. Second, that she was able to describe in language, apparently, the experiences she had while she lacked the ability to speak.

In connection with the topic of this thread, it might be worth observing that LLMs never learn a language; and that there is no basis on which we can assess their sincerity when they produce language (which I hesitate to call speaking). It would seem to me absurd were an LLM to complain that the language it had available was unsuited to expressing the features of its interiority or that it was struggling to express itself. Were Bolte Taylor to say something similar, I might not believe her literally, but I would understand what she was trying to express.

Content warning: This post contains a link to a conversation with Claude.

I was “into” Wittgenstein a couple of decades ago, but never more than superficially. I know much more about the Kyoto School and the Philosophy of Nothingness than I do about the German philosophers.

I wanted to suggest that we might be talking past one another, because I’m not convinced that what you’re saying is entirely relevant to what I’m saying.

Not knowing much about Wittgenstein’s private language argument, I consulted Claude. Here is a link to our discussion.

I’m prepared to let the matter rest here, but certainly welcome others’ comments. I just don’t think any more engagement on my part will be especially productive.

What basis do we have to assess Socrates’ sincerity?

Silas is what he is—we wouldn’t mind him—
But just the kind that kinsfolk can’t abide.
He never did a thing so very bad.
He don’t know why he isn’t quite as good
As anyone. Worthless though he is,
He won’t be made ashamed to please his brother.’

Breaking News

Astonishing, and completely in line with my experience over the past three months.


His character, as attested to by his friends in what they wrote about him. If he were here now, speaking to me, his speaking, his conduct, how he responded to what I said.

Remarkable. I do worry a bit about the tendency to skip steps and make things up to supposedly please the human. What could go wrong with an AI targeting system?

Anthropic has reportedly incorporated a new “feature” in Claude, “Dreaming.”

Dreaming seems to be intended to improve Claude’s memory.

It won’t be long before we have a Positronic Brain installed in a robot. Certainly within my lifetime, if not by the end of this year.

I, for one, welcome our new robot overlords.

Kidding. But I see it as both wonderful and dangerous. One more dimension to the polycrisis.
