AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking

This is a fascinating paper: “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.” The researchers found that the use of AI tools correlates negatively with critical thinking.

Excerpt

  • Critical thinking, defined as the ability to analyse, evaluate, and synthesise information to make reasoned decisions, is a fundamental cognitive skill essential for academic success, professional competence, and informed citizenship.
  • Cognitive offloading involves using external tools to reduce the cognitive load on an individual’s working memory.

"The correlation analysis highlighted strong negative correlations between AI tool usage and critical thinking variables (e.g., Evaluate_Sources: −0.494)…

  • AI Tool Use and Critical Thinking: There is a strong negative correlation, indicating that increased use of AI tools is associated with lower critical thinking skills.

  • AI Tool Use and Cognitive Offloading: A strong positive correlation suggests that higher AI usage leads to greater cognitive offloading."

I suspect the use of Tinderbox and similar tools for thought and workflows may help counteract this. That would be an interesting study to perform.
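
For what it’s worth, the kind of correlation analysis the paper reports is straightforward to reproduce on one’s own survey data. A minimal sketch in Python; the file name and column names are hypothetical stand-ins, not the study’s actual dataset or variables:

```python
# Minimal sketch of the paper's style of correlation analysis.
# "survey.csv" and the column names are hypothetical stand-ins,
# not the study's actual dataset or variables.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey.csv")

# Pearson correlation between AI tool use and one critical-thinking measure
r, p = pearsonr(df["ai_tool_usage"], df["evaluate_sources"])
print(f"AI tool use vs. evaluating sources: r = {r:.3f} (p = {p:.4f})")

# All pairwise correlations among the variables of interest
print(df[["ai_tool_usage", "cognitive_offloading", "evaluate_sources"]].corr())
```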

6 Likes

The problem for society as a whole is that critical thinking comes a distant second to other imperatives, as shown by other studies, some of which are mentioned in this article:

2 Likes

The AI of 2025 is the most expensive parlor trick in history. No surprise that there’s a negative correlation between “thinking” and the use of AI (or any software the user has not mastered). But Tinderbox cannot create thought, either. Software of this sort is a dead, inanimate thing that works to the user’s benefit only if the user comes to it with good mental skills: users who could solve the problems they work on in Tinderbox on their own, without Tinderbox or any other software; users who know the place of software in their lives but do not depend on it as a replacement for cognition.

A far bigger problem than AI, I believe, is the report that elite students cannot read books. Software cannot solve that problem.

4 Likes

Then again, notebooks are dead and inanimate, yet people do think with them. Mathematicians think with chalkboards. Architects used to think with yellow trace; I don’t know what they use nowadays.

Clark, Andy, and David J. Chalmers. 1998. “The Extended Mind.” Analysis 58 (1): 7–19.

1 Like

Interesting read. I firmly believe AI has a place (it’s pretty inevitable, really), and there’s no doubt that it is valuable. But I think there’s plenty of opportunity to introduce some configurable controls. I just had a conversation with ChatGPT on the subject …

Me:
There’s a recent article I’ve been reading: “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking” by Michael Gerlich, appearing in Societies. Have a read and let me know your “thoughts”.

ChatGPT:
I’ve reviewed the article “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking” by Michael Gerlich, published in Societies.

Summary of Key Findings:

  • Cognitive Offloading and Critical Thinking: The study investigates how the use of AI tools influences critical thinking skills, focusing on cognitive offloading as a mediating factor. It finds a significant negative correlation between frequent AI tool usage and critical thinking abilities, suggesting that increased reliance on AI may reduce engagement in critical thinking.

  • Age and Education Factors: Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants. Additionally, higher educational attainment was associated with better critical thinking skills, regardless of AI usage.

  • Implications for Education: The findings highlight the potential cognitive costs of AI tool reliance, emphasizing the need for educational strategies that promote critical engagement with AI technologies. The study suggests that while AI tools offer personalized learning and efficient information retrieval, educators must balance AI usage with activities that foster critical thinking and cognitive engagement.

Personal Reflection:

The article underscores the importance of balancing AI integration with the cultivation of critical thinking skills. While AI can enhance efficiency and provide valuable information, it’s crucial to ensure that individuals, especially students, continue to engage in deep, analytical thinking. This balance is vital for developing well-rounded cognitive abilities and for preparing individuals to navigate an increasingly AI-driven world.

The study’s findings are consistent with the theory of cognitive offloading, where AI reduces the necessity for users to employ deep analytical reasoning.

Overall, the research contributes valuable insights into the cognitive implications of AI tool usage and highlights the need for mindful integration of technology in educational and daily life contexts.

Recent Discussions on AI’s Impact on Critical Thinking


  • Le Monde.fr: « Intelligence artificielle : “L’étau technologique qui enserre l’homme moderne pourrait être une opportunité” » [“Artificial intelligence: ‘The technological vice that grips modern man could be an opportunity’”]
  • The Atlantic: “AI Is a Language Microwave”
  • Lifewire: “The Surprising Ways AI Is Being Used in Schools Right Now”

Me:
So whilst you state that it’s crucial for us humans to engage in deep, analytical thinking, what methods can be used to control the use of AI (such as you) when engaging in critical thinking exercises?

ChatGPT (it gets interesting from here):
Controlling the use of AI during critical thinking exercises requires intentional strategies to balance its utility with active cognitive engagement. Here are some methods to achieve this:


1. Structured Engagement with AI

  • Set boundaries: Define when and how AI tools are allowed. For example:
    • Before: Use AI to gather background information or clarify terms.
    • After: Use AI to validate conclusions or identify gaps in reasoning.
    • Not During: Avoid using AI while actively analysing or solving problems.
  • Purpose-driven use: Make sure AI is used as a supplementary tool, not the primary source of reasoning. Frame AI suggestions as hypotheses to evaluate, not answers to accept.

2. AI-Free Zones

  • Designate AI-free tasks where students or participants rely solely on their reasoning and knowledge. For example:
    • Debating controversial topics without consulting external tools.
    • Solving puzzles or problems under “closed-book” conditions.
  • Encourage journaling or brainstorming sessions that rely entirely on personal insights.

3. Guided Reflection

  • After interacting with AI, ask:
    • What did the AI miss?
    • Did the AI challenge or confirm my own reasoning? Why?
    • How would I justify my conclusions without AI assistance?
  • Reflective exercises help users recognise AI’s limitations and their own cognitive contributions.

4. Role Reversal with AI

  • Encourage participants to “teach” the AI instead of merely querying it.
    • Example: Formulate questions they think the AI should ask to guide critical thinking.
    • This reverses dependency and fosters proactive engagement.

5. Emphasise Process Over Outcome

  • When evaluating tasks involving AI, focus on:
    • Steps Taken: Did the user follow logical steps independently?
    • Reasoning Depth: How well can they explain their reasoning without citing AI?

6. Use AI to Enhance, Not Replace, Skills

  • Leverage AI to build foundational skills that support critical thinking:
    • Pattern recognition: Use AI tools to highlight logical fallacies or biases.
    • Simulation tools: Model scenarios or outcomes that require deeper exploration.
  • Then remove the “training wheels” (AI) as confidence and skills develop.

7. Incorporate Meta-Cognitive Exercises

  • Teach individuals to critically assess AI outputs:
    • Ask, “Why does this answer make sense—or not?”
    • Evaluate multiple AI-generated perspectives and synthesise them into their own conclusions.

Why This Matters

These strategies help us preserve critical thinking as an essential skill while acknowledging that AI, when used intentionally, can amplify our capacity to explore ideas. By controlling reliance on AI, we retain the unique ability to question, interpret, and innovate—qualities that AI itself cannot replicate.

Would you like suggestions tailored to specific educational or professional settings?

Me:

Interesting responses. I totally agree with structured engagement; I use this method myself with great success. Your other points are valid also. I would add that it could be helpful for AI digital tools to question the question. For example, when an academic question is asked of you, you could initially respond by asking the student questions about their current knowledge of the subject: what is their hypothesis, for example? You could test this, question their reasoning, advise on alternative approaches to reaching the answer, and guide the student toward those sources. This could be deemed quite extreme, but it would certainly increase the cognitive burden on the student, which could be deemed highly appropriate. I would suggest that AI digital tools be configured to enable this “conversational” feature by default. Perhaps ChatGPT could lead the way?

ChatGPT:

Your suggestion is both insightful and innovative—having AI question the user to engage them more deeply in the critical thinking process would indeed be a transformative feature. By doing so, AI could move beyond being a source of quick answers to becoming a partner in intellectual development. Let me break down the concept and its potential benefits:


1. Shifting the Role of AI: From Responder to Socratic Guide

  • Question the Question: When a user presents a query, the AI could initiate a conversation by asking:
    • What do you already know about this topic?
    • What is your hypothesis, and how did you arrive at it?
    • Why do you think this is the correct approach?
  • This approach mirrors the Socratic method, a pedagogical technique that encourages self-reflection and exploration through guided questioning.
  • Example in practice:
    Instead of answering a question like, “What are the causes of climate change?” directly, the AI could ask:
    • What causes do you think are most significant?
    • What evidence have you encountered that supports this view?
    • What opposing viewpoints have you considered?

This creates an environment where the user is compelled to engage more deeply before receiving guidance.


2. Testing Hypotheses and Reasoning

  • Encourage Exploration: By asking the user to present their initial thoughts or reasoning, the AI can help identify gaps, test assumptions, and encourage broader exploration.
    • Have you considered how [factor X] might affect this hypothesis?
    • What evidence supports this, and what could refute it?
  • Alternatives and Challenges: The AI could introduce counterpoints or alternative perspectives:
    • What if this assumption is incorrect? What would that mean for your hypothesis?
    • Can you think of other sources or frameworks that might provide additional insights?

This deepens engagement while modelling critical inquiry.


3. Guiding Toward Independent Discovery

  • Source Suggestions: Instead of providing direct answers, the AI could guide users toward reliable resources:
    • You might find journals like [X] or databases like [Y] helpful for exploring this further.
    • Would you like me to summarise key methodologies used in this area of research?
  • Encouraging Independent Verification:
    • How would you verify the validity of the information you find?
    • What criteria will you use to determine which sources are credible?

4. Advantages of a Conversational Approach

  • Increased Cognitive Load: As you rightly point out, this would create an appropriate “cognitive burden,” challenging users to think critically and deeply about their queries.
  • Skill Development: Users develop essential skills like hypothesis formation, logical reasoning, and evidence evaluation.
  • Personalised Learning: The AI can adapt its questioning to the user’s responses, tailoring guidance to their level of understanding.

5. Potential Challenges and Solutions

  • Initial Resistance: Users may find this approach slower or more demanding than getting direct answers. However, framing it as a learning assistant rather than a mere information provider could shift expectations.
  • Balancing Engagement with Utility: While questioning is beneficial, some users will require quick answers. The AI could offer a toggle between “Quick Answer Mode” and “Exploration Mode.”

Why ChatGPT Should Lead the Way

As a widely used AI tool, ChatGPT is well-positioned to pioneer this conversational feature. It aligns with OpenAI’s mission to provide AI that benefits society by enhancing learning, critical thinking, and problem-solving. Incorporating these elements would help educators, students, and professionals alike.

So there you have it: an interesting interaction between human (me) and ChatGPT (it/AI). It would appear that if we want to avoid cognitive offloading entirely, AI could offer a variety of configuration options, including questioning the question; see the sketch below. It’s probably just a matter of time …
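
For the curious, here is a minimal sketch of what that “Quick Answer Mode” / “Exploration Mode” toggle might look like over today’s chat APIs. Everything here (the mode names, the prompt wording, the model name) is my own illustrative assumption, not an existing ChatGPT feature:

```python
# Sketch of a "Quick Answer" vs. "Exploration" mode toggle for a chat model.
# The mode names, system prompts, and model name are illustrative assumptions,
# not a built-in ChatGPT feature.
from openai import OpenAI

PROMPTS = {
    "quick": "Answer the question directly and concisely.",
    "exploration": (
        "Do not answer directly at first. Ask the user what they already know, "
        "what their hypothesis is, and how they arrived at it. Question their "
        "reasoning, suggest alternative approaches and sources, and only "
        "summarise an answer once they have reasoned it through aloud."
    ),
}

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(question: str, mode: str = "exploration") -> str:
    """Send `question` under the selected mode's system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system", "content": PROMPTS[mode]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What are the causes of climate change?"))
```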

1 Like

Prepositions matter. Thinking with something is not the same as learning from something. Two notebooks alongside each other will never have a thought. You cannot learn gardening from a hoe, even though you do garden with it.

Clark and Chalmers claim that objects external to the skull can substitute for cognitive processes inside the skull. The OP’s original article says much the same thing, namely that there is cognitive offloading to AIs.

I took Paul’s point to be that you have to develop cognitive processes first, before you can substitute for or offload them. One concern is that many–though perhaps no one here–believe that AIs can substitute for cognitive processes we have not developed.

2 Likes

Whatever point of view one adopts with regard to AI, it seems to me that a research question remains: who is it that maintains that AI increases or reduces a subject’s critical thinking? Is it a researcher personally invested in developing ever finer AI models? Is it a teacher who feels threatened in what constitutes his identity? Is it a student who, consciously or not, wants to convince himself that there is no need to read sophisticated and difficult research articles? Is it a philosopher aware that, since Plato’s Phaedrus, the technological destiny of writing has been sealed? Who speaks out against, or in favor of, AI? See, for example, how this YouTuber, among others, maintains that a student can learn and develop intellectual skills by having AI do the work for him: https://www.youtube.com/watch?v=jIT5J7yuL-8

I’m glad I don’t have to think about this issue and that other people are doing it for me. :grin:

I will reply again, to illustrate just your point. Plato’s complaint in the Phaedrus is that writing is “dead” and “inanimate” (to quote Paul, not Plato) and therefore unavailable for dialogue (etymologically, speech that passes between people). This forum, though conducted in writing, is a dialogue so long as we respond to each other, weighing each other’s perspectives, values, responses, and so on.

It is also a place where we can have certain expectations about each other, which when violated elicit responses of incredulity, shame, and so on. (So many of Socrates’ dialogues turn on his insisting, “Speak for yourself, don’t tell me what someone else said, or what they say in Thebes.”)

We can have no such genuine expectations from an AI (at least not what is on offer now). Everything is ersatz. Nor even can we have a dialogue–though we get a false semblance of one–since there is no perspective, no values, etc. on the other side.

This false semblance of dialogue has produced confusion, just as a parlour trick misleads.

The YouTuber is likewise confused when he thinks he has his agents working for him. He has “bossed” them about all right, but the summaries they produce are of no more use to him stored on his computer than in their place of retrieval. Only when he thinks through them (if, in their summary form, he can) might he learn something. So while the AI no doubt saved him some shoe leather, it would all be for nought if he lacked the prior mental skills to use what it brought.

2 Likes

I’m not sure I understand, with the required fine grain, what you are saying; English is not my native language, which probably explains it. A significant aspect of the Platonic critique of writing in the Phaedrus concerns, it seems to me, a certain illusion of knowledge aroused by coming into possession of a technique such as writing, in place of a teacher–student relationship grounded in dialogue. One of the illusions AI gives rise to is precisely this: the machine carries out work for me that I ought to do myself, if I had properly understood that the development of my intellect depends closely on interactions and, in this sense, on the dialogues I sustain with established knowledge, at the cost of long and laborious work. If I no longer write notes myself, but the machine writes notes for me, it is a dialogue with myself that I no longer maintain. Some so-called note-taking applications already offer such a service, to the point that the very idea of writing notes seems to have become obsolete: why take the time to make reading notes and transform them into precious pieces of knowledge if an AI can do this work for me? We can guess what this question implies without quite saying it: that culture, if not intelligence, is proportional to the amount of work accomplished personally, to speak in one’s own name, to think for oneself, to cultivate one’s own humus.
I’m not even talking about the fact that culture and a critical spirit are always personal, not to say idiosyncratic, and that a program, however sophisticated, that standardizes knowledge by means of algorithms tends to drain this singularity out of reality. No doubt that is very reassuring.
I also find that this discussion gives meaning to the many questions asked on this forum that point to the learning time required to understand how Tinderbox works. All things considered, why continue to learn?

1 Like

I appreciate your reply, thank you.

I think we’re in agreement. I was making the point that writing as a medium (in the sense of ‘au moyen de’, ‘by means of’) is no obstacle to dialogue, any more than the use of sign language or a 20-minute transmission delay with a distant astronaut is. Dialogue is with another person who can respond and who, understanding that they are in dialogue, constrains what they do and say in the light of their conception of what a dialogue is. (Differences in conceptions of dialogue are also illuminating.) Since it does not make sense to speak of an AI knowing what it is doing, we have another obstacle to genuine dialogue. As you say, the ability or skill to produce writing is not of itself a source of knowledge.

As a matter of logic, I do not think one can be in dialogue with oneself, any more than one can lend oneself money by moving it from one pocket to the other while saying the words, “I’m lending myself €5.” But of course I think I know what you meant, especially when you spoke of cultivating one’s soil. Which brings us neatly back to the phrase we associate so closely with Tinderbox: information gardening. Helping ideas grow and reproduce is probably what I like best about Tinderbox.

1 Like

A provocative question. Given the state of the world, perhaps a genuine one.

I guess, because things could always be worse?

I’m not sure how much I can add to this discussion, but I wanted to make a couple of points.

First, I think we vastly overestimate our capacity for rational thought, i.e. “critical thinking,” by which I mean the capacity to assess the truth value of a given assertion, or the relative merits of differing propositions that purport to make some claim about the way things are.

Much of what takes place in the internal narrative going on in our heads, assuming you are one of the people who has an internal narrative (apparently, some people don’t, just as some people are incapable of internal visualization), is prompted or driven by our interior emotional state. A feeling.

That is to say, to the extent we “reason” at all, it is to justify an emotion, how we “feel” about something.

“I like AI.”

“I’m afraid of AI.”

“Do I have to think about AI?”

We reason backward from our feelings. Emotion is the prompt in the LLM of the mind.

This is to say nothing about the difficulty of learning complicated or difficult subjects. People have varying capacities in their ability to learn. To the extent that our conventional approaches to teaching underserve some fraction of the population, indeed, “Why continue to learn?” It’s often an exercise in frustration or futility.

In this regard, perhaps AI has some value. It is an infinitely patient interlocutor, with the capacity to re-state difficult concepts in other words, or perhaps illustrate them with drawings or animations. (How well we can evaluate the accuracy of those drawings is another question.)

I enjoyed a lengthy conversation with ChatGPT in trying to understand what makes a “greenhouse gas.” (Vibrational modes.) It never gets frustrated with the student, tired, or bored. And the answers gave me clues to search other resources to ascertain with some degree of confidence whether or not it was hallucinating.

How much of genuine insight is inaccessible to “rational thought”? That is, the Eureka! moment. Perhaps people without an internal narrative exist in this state at all times, albeit without the joy.

My point is that much of our “thinking” takes place below conscious awareness, and is somehow brought to conscious awareness in the shower, after a walk or a good night’s sleep. (Not in the “graph.”)

I am burdened with a non-stop inner narrative, and a vivid imagination that dreams in color. I can’t draw a picture to save my life, but I can watch a movie in my head. I should probably develop a disciplined meditative practice. I also should watch what I eat. Many “shoulds” in my life.

So, learning is hard. AI might be able to help.

Thinking is often driven by our interior emotional state. Desire is another prompt. Wishing to please your geometry teacher, or wanting to know.

But most of what goes on in our heads (well, in my head) are habituated loops, because thinking is hard. The brain uses a lot of energy. Willpower is constrained by that energy budget. Attention is a finite resource, and it is preyed upon by our culture and economic system.

We don’t have to worry about AI destroying our ability to think critically; we’ve done a wonderful job all by ourselves.

Anyway, not sure I added much to the conversation, but it prompted an emotion and here I am.

I’ll leave you with Thales…

Know thyself.

3 Likes

Some of this fascinating discussion is above my head.

I think of “AI” as a pattern recognition machine. It’s been fed a gazillion patterns that it remembers.

When you give it the beginning (or middle, or end) of your particular pattern or problem, it computes what is statistically most likely to be the rest of the pattern, consistent with what it “knows” about all the other patterns it has seen, including patterns of human critical thinking.
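
A toy illustration of that idea, as a word-level bigram model. Real systems predict subword tokens with vastly more context, but the principle of “complete the likeliest pattern” is the same; the training text here is just a stand-in:

```python
# Toy "predict the rest of the pattern": given the current word, pick the
# statistically most likely next word seen in the training text.
from collections import Counter, defaultdict

text = "the cat sat on the mat the cat ate the cream".split()

# Count, for each word, which words follow it and how often
following = defaultdict(Counter)
for current_word, next_word in zip(text, text[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the most frequent continuation observed after `word`."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(most_likely_next("the"))  # -> "cat" ("cat" follows "the" twice)
```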

Will it make us less able to think? Certainly those of us who consistently misuse it, rather than just cheating once in a while under a deadline. And also in the same sense that the advent of calculators and computers made us less adept at once-critical skills: long division, mental arithmetic, doing sums on an abacus.

But at the same time it opens new vistas, starting with more “mechanical” things. Spelling correction on early smartphones used to be the butt of jokes; now “predictive text” is mostly useful. And language translation has become startlingly good. All on the power of pattern recognition.

It’s also a boon for those who learn by studying examples, both good and bad.

Perhaps, more than we would like to think, what we glorify as critical thinking is at its heart a pattern-recognition capability wired deep in the brain, and struggling in tools like Tinderbox helps trigger recognition, like drawing lines in the sand or scribbling on paper.

4 Likes