What makes for a great "Tool for Thought"? GPT Answers

I found this on Reddit. Here’s what was posted:

I asked ChatGPT to get an initial view. I find points (4) Incremental Formalization and (5) Transclusion and Embedding particularly insightful and not commonly discussed.


“Tools for thought” like Roam Research represent a significant evolution in the realm of personal knowledge management systems. They differ from traditional note-taking and organizational tools in several key ways:

  1. Bidirectional Linking: Unlike traditional systems that use hierarchical or linear structures, tools like Roam allow for bidirectional linking between notes or ideas. This means any piece of information can be directly linked to another, regardless of its position in a traditional hierarchy, enabling a more network-like structure of knowledge.
  2. Networked Thought: These tools facilitate a more natural representation of how our brains work. Instead of forcing information into predefined categories, they allow for a more organic, associative form of organization where ideas are interlinked, mimicking the way our brains recall and connect information.
  3. Non-linear and Flexible Organization: Traditional systems often require you to decide where to put something in a structure (like a folder or a document). In contrast, tools for thought allow for a more fluid and dynamic organization of information, accommodating non-linear and evolving understanding.
  4. Incremental Formalization: They support the gradual structuring of knowledge. You can start with loose notes and ideas, and over time, as your understanding and collection grow, you can organize and reorganize them without rigid constraints.
  5. Transclusion and Embedding: The ability to embed or transclude information (showing the same content in multiple places without duplication) is a key feature. This reduces redundancy and maintains a single source of truth for any piece of information.
  6. Personalization and Extensibility: These tools often offer extensive customization and extensibility, allowing users to tailor the experience to their specific needs and workflows. This might include custom templates, scripting, or integration with other tools and services.
  7. Focus on Thought and Idea Generation: Unlike traditional tools which are often geared towards storage, retrieval, or task management, tools for thought are designed with the generation and development of ideas as a central goal. They encourage exploration, connections, and the evolution of thoughts.
  8. Graph Views and Visual Representation: Many tools for thought provide graphical representations of your knowledge network, helping to visualize connections and discover new relationships between ideas.

In summary, “tools for thought” like Roam Research represent a paradigm shift in knowledge management, focusing on the associative, networked, and flexible organization of information that aligns more closely with human cognition and creativity.
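Points (1) and (5) are the easiest to see concretely. Here is a minimal sketch of my own (not from GPT’s answer, and not any real tool’s API — the `NoteStore` class and its markup conventions are invented for illustration) of how backlinks and transclusion might be modeled in a toy note store:

```python
import re

# Toy note store illustrating bidirectional links and transclusion.
# All names and markup conventions here are hypothetical.
class NoteStore:
    def __init__(self):
        self.notes = {}  # title -> body text

    def add(self, title, body):
        self.notes[title] = body

    def links_from(self, title):
        # Outgoing [[wiki-style]] links found in a note's body.
        return re.findall(r"\[\[(.+?)\]\]", self.notes.get(title, ""))

    def backlinks_to(self, title):
        # Bidirectional linking: every note linking here is found
        # automatically, without the target note doing anything.
        return [t for t in self.notes if title in self.links_from(t)]

    def render(self, title):
        # Transclusion: {{embed:Other}} shows Other's body in place
        # without copying it -- one source of text, many views.
        body = self.notes.get(title, "")
        return re.sub(r"\{\{embed:(.+?)\}\}",
                      lambda m: self.render(m.group(1)), body)

store = NoteStore()
store.add("Hypertext", "See [[Storyspace]] and [[Roam]].")
store.add("Storyspace", "Early hypertext system.")
store.add("Roam", "{{embed:Storyspace}} Plus backlinks.")

print(store.backlinks_to("Storyspace"))  # ['Hypertext']
print(store.render("Roam"))              # Early hypertext system. Plus backlinks.
```

The point of the sketch is that backlinks are derived, not authored: editing the embedded note updates every place it appears, which is what “a single source” means in practice.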

Is there any difference of opinion, or any addenda, in the Tinderbox community?
And where does AI come in? I, for one, find that offloading the writing frees me to figure out the more complex and interesting aspects of my thought work.

  1. Bidirectional linking. GPT is recapitulating a very old debate on link representation, and shoehorning it into the service of an attack on hierarchy. It ought to read the early Storyspace papers, as well as Cathy Marshall’s oeuvre.

  2. Networked thought. No: none of these tools represent how our brains work.

  3. GPT is correct.

  4. GPT is correct.

  5. Transclusion. I think GPT believes it’s echoing the Roam party line. It is, in fact, echoing Ted Nelson. The sentence about “a single source of truth” is obviously an error; GPT must be quoting someone, but who?

  6. Personalization. I suppose GPT is right here. I’m skeptical that this deserves such prominence.

  7. Thought Generation (!). GPT has lost the thread. Rhetorically, this is either the most important characteristic, and ought to be #1, or it’s part of the definition, and ought not to be part of the list.

  8. Graph views and visual representation. OK: GPT is not wrong. Then again, Conklin’s 1987 review said as much, and was just as tentative.

What you can see here is that writing is an interesting part of thought work. GPT has assembled a list of plausible elements, but has failed to place them in a useful relation to each other or to its claim.


Never send a computer to do a human’s job. The overlooked area where AI does yield value is précis and summarisation. Why? Because it’s not that bad, and it gives a machine’s-eye view of its understanding of what we write. That matters because today AI speaketh unto AI, so if it doesn’t understand our writing, sub-par outcomes may follow.

The essay shows the GPT student has a limited grasp of the subject matter. I’m not convinced Roam Research is a ‘paradigm shift’ (hyperbole much? It’s just software). At best it re-discovered and implemented some bits of hypertext left in the long grass when the Web cool kids grabbed the hypertext ball and ran off in a different direction.

But if I were selling pizzas or secondary glazing, GPT probably has some use: making text that no one really reads, or that doesn’t need to have real or truthful meaning. People see in GPT what they want to see. I’ve yet to find a meaningful use, at least for free, as real use generally has a hidden $ cost. The cost is not unreasonable, as someone has to pay for all those CPU cycles and I doubt AI shareholders want to.

Another question here: are we judging the number of typos/demotic errors, or the pertinence of the actual text? I would not hire this applicant based on their understanding of the subject matter.

I’ll wager AI 5.0 will be better than this—but consider waiting for AI 6.0, as allegedly it will be even better, etc… :slight_smile:


Well that’s a pretty negative take, but to each his own.

I use AI extensively now to help with brainstorming the design of a fairly involved database structure; whenever I get stuck, I pose my problem to the AI and it helps me find a workable solution.

The second big use case I use it for is as a documenting assistant for all my data-entry processes and there’s a fair bit of summarizing too. When was the last time you wanted to write 11,000 words about how to process text into custom data?

Certainly worth the $20 a month I pay to use it.


Thanks for the useful clarification. There are several aspects here, connected but distinct:

  • does it work
  • is it valuable
  • who should use it

Does it work? Well, it definitely works, in that there is output. Is the output (grammatical and) readable? Yes, and that is impressive; let’s not lose sight of that fact. Is the output accurate? Here things are cloudier. If the subject matter is commonplace and in the past (recalling the LLM’s training base), then in most cases the accuracy is good enough. On less broad matters, I find accuracy falls off fast. A complicating and connecting factor is that the AI model is not designed to fail in a useful fashion. Faced with missing information in its source data, it will make up citations. This makes it untrustworthy for matters where you don’t already know the answers. Which leads us to…

Is it valuable? In the last case above, no. But for everyday tasks where errors are benign and/or easily spotted, then likely yes. It isn’t a zero-sum judgement: it can be valuable to some and not to others, and both can be correct. So I can see it is of value to you, and I respect that, whilst it still has interest but no value for me.

In the context of the last point, some projects I’m involved in are beginning to see texts passed around that are described in terms of “See this: it’s only AI-generated, so not accurate, but there it is anyway”. And, no surprise, it isn’t accurate (so not that helpful). Though doubtless not the creator’s intent, the message received is “I couldn’t be bothered to do the work myself, so here’s what the AI thinks: although I don’t vouch for its quality, I still think you should read/use it”. This creates trust issues. If summarising for personal use, then there’s no problem, as one is both ‘author’ and reader and can judge accordingly.

It’s also worth noting the cost. The tech is generally discussed as if it were totally free, but it generally turns out that the output people value is pay-for (and that’s fair, as the public purse shouldn’t be totally funding private companies). So I’m happy if some find a $20/mo cost reasonable, and it’s a clear indication of a sense of value; that’s harder to gauge with something free, where only users’ interest and spare time are at play. Whether it’s worth $20/mo to everyone at this point is less clear-cut.

Who should use it? Unclear. Clearly, those who find it useful can and should. I’d suggest everyone—or those who can—should try it. Who uses it, in which context(s) and to what degree is thus in the eye of the beholder.

But if you had to choose? As a straight binary yes/no, I still fall in the ‘no’ camp for general unsupervised use. The tech is impressive and improving at pace. What’s lagging is the human aspect: understanding the tool as something more than a thing that does one’s homework for free. I think respectful use beyond the private/personal demands that AI output be indicated as such: passing off an AI’s guess as established truth can be harmful until we better understand the derivation of the output.

Don’t misread me as a tech Luddite. Far from it. But my early career involved hazardous work, and it made me cautious, with good cause, of tech over-claiming. Optimism is fine up to the point where lazy design/implementation damages people (q.v. the mixed blessings of social networks). Meanwhile, AI has impressively low error rates in a lot of image-recognition work and is in some of the tools I use daily.

TL;DR. Good for some, not so good for others.