I suppose it depends on what your definition of “reason” is. If we confine the definition to the means by which one may discriminate between two or more alternatives that demand the selection or choice of one to the exclusion of the others, at least in the immediate term, then yes.
If, in this instance, we regard such reasoning as a form of heuristics, then storytelling has been one of the most dominant means of conveying these lessons in “reasoning.” To be sure, much of that body of knowledge might be regarded today as myth, but much of it has real value.
“The moral of the story,” or the “lessons of history” (don’t get involved in an elective war in the Middle East), may be the means of instilling cultural values in children, values intended to inform their choices in life.
To the extent that narratives may be “false,” I’d offer that “reason” is perhaps equally likely to lead to a “false” outcome. See elective wars above.
Worse, perhaps, is that we don’t really know what’s going on in the mind when we “reason.” Leibniz and others sought to abstract the elements of the process and define a system of symbols and operators that could mechanize it, taking it out of the realm of the mind and putting it on paper, where it was open to inspection by anyone literate enough to understand the symbols and operators.
This works very well for many trivial tasks. More complicated problems seem to defy any algorithmic solution. Gödel explored the boundaries of these “systems” of reason or logic.
But to return to the mind: thinking is hard. Reasoning demands focus and concentration, and for nearly all of the day-to-day decisions each of us confronts, such effort is impossible and indeed inappropriate. Much of our behavior is habituated or conditioned. We are embodied beings, which our robot colleagues are not. Yet. Our interior experience is almost exclusively one of feelings.
And our inner voice is the narrator (unreliable) that tries to square those feelings with our thoughts and opinions. And in that regard, to the extent that people reason at all, or “think critically,” they mostly do so in this pursuit. To justify or rationalize our feelings to our narrative consciousness.
Excluding the times when one is wrestling with action code, or solving a problem that demands math, that, I think, is by far the greater part of what human beings do when they “think” they are “reasoning.” And most of the time, it works.
Dr. Antonio Damasio’s “somatic marker” hypothesis was, to me, a compelling description of what takes place in the mind when we navigate the day-to-day choices that might very well lend themselves to deconstruction and analysis if we had but the time and the mental resources!
We don’t. We flatter ourselves about our higher faculties. Economics’ “rational consumer” theory gave way to the “predictably irrational” mind, and how marketers can exploit it.
I don’t know if any of us can ever understand, in a meaningful way, the “interior experience” of an LLM. Is it “conscious”? It exhibits behaviors that suggest it is, to a real degree, “self-aware,” but what is its conception of “self”? We are attempting to apply a human “theory of mind” to machine consciousness. I don’t think that’s possible.
I think Turing was right. If it walks like a duck, and quacks like a duck, it’s an “intelligence.”
I’m always polite to Claude.