Tinderbox Meetup Video - Saturday, June 10, 2023: A Discussion with Jerry Michalski on The Brain and Tinderbox

Correct. But Michalski reported that his practice was to avoid that, and to maintain links to things stored elsewhere in the interest of compactness and speed.


Which may or may not be necessary with the power of modern computers.

I would imagine a TBX file could hold a lot more notes if the notes linked to the note content elsewhere. I suppose this is getting back to having the data separate and accessing it from many tools. It’s just kinda nice to have everything in one place!

Realistically, a Tinderbox map is limited to thousands of notes. If you had a million notes, you’d likely have trouble visualizing it, even on a large array of large screens. At some point, you can’t see the big picture because it’s too big, and if you do, the individual notes are too small to be useful.

The Brain’s neighborhood view avoids that by not offering any way to see the big picture. You see the immediate link neighborhood, and that’s all.


That is really not an option when you’re working off a “single” brain (i.e., one TBX file) and curating assets (e.g., Zettels, atomic notes) across multiple projects (reports, books, articles, events, sales pipeline, etc.). All these projects pull from common atomic notes. When you start getting into the thousands of notes, some hierarchical structure in map view is useful. I suppose one could gravitate to adornments and agents, but this “feels” hard and difficult to scale.

To this point, this is where containers come in handy.

For those interested, the “Jerry’s Brain” discussion is also actively happening here: Tinderbox Meetup - Saturday, June 10, 2023: A Discussion with Jerry Michalski on The Brain and Tinderbox - #43 by eastgate.

I assume the hyperbolic view comes to the rescue in navigating large maps?

I think a fundamental confusion here is the assumption that The Brain equates to Tinderbox Map view. Not so! It equates to Tinderbox Hyperbolic view. The Brain shows only linked items, offers a limited degree of choice as to the style/display of items, and gives the user no control over the layout of the view. Tinderbox’s hyperbolic view uses a different layout algorithm, but functionally it is much closer to The Brain than Map view is. That might help stop us going ritualistically round the buoy on map view and containers.

I do think that people are confusing the piece with the performance. Jerry’s Brain is most impressive when Jerry’s ‘driving’ it. The latter doesn’t devalue the content, but it does alter how we experience it. Anyway, without the author, it’s someone else’s thoughts, with all the rich and meagre spots you’d expect when looking at parts we don’t ourselves know well. He’s spot on about how the inability to share/merge that data (or experience/knowledge) with that of others limits its use—but those are limits of current affordances rather than self-set limits. Also insightful is the author pointing out that he’s quite deliberately not using all The Brain’s features. Plus, c.35+ years’ experience with the tool is a relevant factor.

Taken together, author and ‘Brain’ are pretty impressive; apart, the data alone is less tractable. But the same holds for viewing someone else’s zettelkasten, second brain, GTD system, or simply a large TBX following no tightly-defined process. Too easily we overlook the human in the loop as the source of value, and the inconvenient fact that the closer the fit of our data to ourselves, the less well it fits others; it’s one reason processes travel less well than imagined.

†. It’s faster and more responsive in demos and videos because, I suspect, he’s using The Brain’s desktop app and not the Web interface, which—at least for me—runs glacially slowly. Of course, that’s an issue of the display, not the content.


Impressive, indeed – it was jaw-dropping to watch him do more than merely demo his Brain – he was actually adding to and enhancing his Brain as we interacted.

I was surprised by the features he does not use, such as the indexing and note functions – which for me are critical, since my use of the Brain is more limited and focused as a research tool. In a sense my Brain is less of a map and more of a bridge in my workflow. As my Bookends collection of references continues to grow (41,000+ as of today), and with DEVONthink indexing every entry, what my Brain provides is a dynamic perspective that reminds me of the different topics and ideas related to each of the bibliographic entries I’ve made over time.

Example: today I read an article on the difficulties the US federal system poses for more effective disaster relief. I made my Bookends entry (which was indexed in DEVONthink) and entered the article as a Thought in the Brain Plex, linking it to subject Thoughts such as federalism, disaster relief, FEMA, etc. – concepts that already existed from previous entries. In the process, related concepts and bibliographic entries popped up, leading me to other articles on collaborative arrangements among local governments and similar topics. Yes, I might have followed that path eventually, but the Brain facilitated the process visually.

No doubt TBX can be structured to do the same, but for me there is the “sunk costs” factor (I have thousands of Thoughts in that particular Plex). My hope is to use TBX on the composing and output side of the workflow…


Took a brief look at the SamePage site, and the pricing is ridiculous. The thresholds on the $10/mo plan are a fraction of the volume a serious second-brainer builds.

I see that you are using theBrain as a meta, top-level instrument to connect ideas from different sources. That is interesting.
So you do not store primary materials, such as the articles, in there. But where do you write your reflections on the content of an article?
Do you write your reflections – why the article is relevant to you or your specific project, how you are going to use it, etc. – within the Brain, or do you keep reading notes (files) inside DT?

Thanks for the question. Where TheBrain fits into my so-called “workflow” is the result of incremental changes in how I’ve handled the increasing number of items that have been generated over the past few decades on my main topic: accountable governance.

I track the topic via Google Scholar, and each week about a dozen publications are relevant enough to become part of my Bookends collection of references. Unfortunately this means I spend most of my time curating rather than reflecting or actually writing. For what it’s worth, this is (roughly) my process and how my SOURCES Brain fits into the flow.

  1. Each reference (paper, report, article, book) typically has an abstract or summary; if not, I generate one from a quick first read of the work and enter it in Bookends’ “Abstract” or “Notes” section. (In the meantime, DEVONthink is indexing my attachments in the monitored folder.)
  2. I then create a related Thought in my SOURCES_Brain and attach a copy of the item. (TheBrain gives you two options – to link to the item or to copy it into the Brain. I initially used the link option but now copy where possible. There seems to be no limit to how much TheBrain’s servers can handle…)
  3. I enter relevant bibliographic information and the Abstract into the Content (Notes) section. At this point TheBrain indexes the notes by highlighting any words or concepts or phrases that have associated Thoughts. It also provides a breakdown of related links.
  4. I next create a sibling Thought for each author and (in the case of articles) publication. (Where the item is a chapter in an edited book, I create a sibling Thought for the edited volume as well…)
  5. I then create or apply “content” Thoughts, which reflect keywords or tags relevant to the item. Typically these are parent Thoughts, unless the idea or concept originated with the reference (in which case they become child Thoughts).
  6. Other child Thoughts are any tables or figures copied from the item, or any items that are commentaries, reviews, etc of the focal item.
  7. Typically, I will then return to the attachment for a deeper read and use highlighting and annotations as needed. I save the modified item in Bookends and then update the Brain version, making changes to any content or links.
  8. As for “reflections” or ideas generated by the item, I usually keep “Drafts” handy and enter these “on the fly” – and eventually integrate them into some ongoing writing project (which may be in outline or word processing stage).

It is that last step where I think TBX would be most useful once I feel competent enough to make effective use of it.

Hope this all makes sense…

Mel


Thank you for the detailed reply.

I also have a large library of materials in Bookends. But I do not really read most of them; I have never even checked most of them. I collected them over the years, just as you did.
I have the references indexed in DT. And, so far, the only tool I am using to find associations across references (articles, books) is Foxtrot Search (proximity search).

But I know searching is not always the best way to discover connections between ideas, because search relies on specific English terms, while concepts are expressed in different ways (with different terms, paragraphs, graphs, pictures, etc.).

I am not a user of theBrain, nor do I intend to use it anytime soon. It is too expensive for me. But your method is fascinating, and I was wondering whether it could be emulated in Tinderbox. I had been attempting to use one Tinderbox file as a meta-file where I would view my references from a bird’s-eye view; kind of the way you are using theBrain.

I tried that many years ago. I found the project too massive to handle and left it. But I never totally gave up, because these references are on my drive for a reason; I need to check them and have a brief summary of each reference, at least.

It would be great to know if other users are using Tinderbox for that kind of purpose (to dissect, reflect on, and connect primary sources such as research articles, books, etc.).


A side question … what is DEVONthink indexing? The article linked to the Bookends record or a bookends:// URL?

This is an endlessly faceted arena. Some annotate the referenced articles themselves (the PDFs, at least). Some annotate in their reference manager. Some annotate a resource linked from the Ref Mgr in DEVONthink. Some annotate in Tinderbox, linked to one or several of the preceding. There’s certainly no right way. Indeed, many of the processes/habits we’ve acquired are pragmatic solutions to erstwhile limitations of our tools and their inter-operability (or lack thereof). A hard part—as the ongoing task of documenting Tinderbox reminds me—is grokking when my processes can/should update because a past limitation has been removed.


That means: the PDF article attached to a reference in Bookends is indexed in DEVONthink. You can read and annotate it within Bookends or within DEVONthink, or in some other software.

My interest is not in those articles that I actually read and annotate. It is in those that I cannot. I have limited time but a very large library of articles.
The question is: shall we leave it to search and chance for an article to be used (discovered) in the future, or can we attempt to at least partially organize them, so they can be useful, using a preliminary reading (plus the articles’ abstracts)?

I thought @mdubnick was using theBrain to organize his large library of references (articles) into useful categories/tags/links so that he can discover them when he needs them.


Thanks – most helpful. This reminds me that I could use DEVONthink’s features to make better use of the 1.2k PDFs supporting my main Bookends file (c.60% of the references). Bookends runs about 20+ queries looking for reference-data errors caused by lazy publishers not reviewing their data.

†. For published papers, where I’ve un-paywalled access, I’ll store a local PDF (I’ve learned the hard way not to trust publishers’ web sites to provide permanent permalinks!). Plus, firing up a VPN ‘just’ to check that a doc is the one I remember is a PITA; local files win out. Most other refs without PDFs are books or have no digital instance. As much of my subject area is papers from the 70s–90s, many PDFs have spectacularly poor OCR (on which most data extraction is based), so having a local copy means I can use modern OCR to get a much cleaner metadata extract.


Exactly. DT has the ABBYY FineReader engine, which does an amazing job of extracting text (OCR).

There is a lot of movement in the academic world these days toward publishing in open-access journals. In my field, that has had a huge impact. A free PDF is available for almost all the articles published in the last couple of years (in the form of a pre-print or the actual publication). Because of that, our PDF libraries are growing very fast every day.

Having better searching and organization tools for PDF files is very useful. Foxtrot’s various search options and DT’s AI do help to find important bits of information across your PDF library.


What fascinated me most about the demo of “The Brain” was the abundance of links. But at the same time, this is exactly the issue that would keep me from using the software. The demo was impressive to me because here a user has manually maintained connections over years and decades. I wouldn’t attribute that so much to the software - as a tool, it has to provide the functions so that you can do this manual work well and quickly.

But two problems remain:

  • it is still a huge amount of work
  • the links are static, and thus not at all like our brain

What I’m interested in is software that builds dynamic links and adjusts them with each new note - automatically if possible. And the basis for me must always be a note written by me. If I only capture the link to a source, then I have to acquire an understanding of this source again at a later time (if the link still works at all by then). This thinking about a source costs me a considerable amount of time, and I don’t want to have to do it two or three times. So the notes in my Zettelkasten in TBX are all created including this understanding - in my own words. The view on a topic changes with time, and thus I would have to adjust my written-down thoughts again - that’s bad enough, but I don’t want to start from scratch every time.

On top of that, I see problems with the user interface of “The Brain”. In Germany there is the phrase “not seeing the forest for the trees” - I have that feeling with more complex pages of “The Brain”. What’s the use of lots of links if I can’t read the labels of the found items until I move the mouse over them? That is not very intuitive from my point of view. But this is a very superficial and unfair assessment, because I really haven’t spent much time working with this software.


The Pro version of TheBrain has more layout options, like Mind Map or Outline, where you can see more of the titles. I still like the “normal” layout because, after a while of using it, I have some kind of spatial memory of my “brain”. If there’s too much clutter in a view, I just refactor the content.


This depends on what you’re trying to achieve. If you want automatic, computer/application-created links to every conceivable topic, look no further: you have the internet. If you want to grow your understanding of different topics, it takes time and effort to make notes and think through their connections and implications. This is super hard work. Jerry’s brain is a point of conversation because he has curated it personally over time. That is the value. And I’d add, the value is to him, as they’re his connections.

Static links (I’m assuming you mean manually curated links) are crucial. You only link where you see a connection. You will link differently from anyone else because you will see different connections. This too is important and hard work.

There is no shortcut to knowledge.
