Tinderbox Meetup Video - Saturday, June 10, 2023: A Discussion with Jerry Michalski on The Brain and Tinderbox

Brief look at the SamePage site and its ridiculous pricing. The usage thresholds on the $10/mo plan are a fraction of the volume a serious second-brainer builds.

I see that you are using theBrain as a meta-level, top-level instrument to connect ideas from different sources. That is interesting.
So, you do not store primary materials such as the articles in there. But where do you write your reflections on the content of an article?
Do you write your reflections (why the article is relevant to you or to a specific project, how you are going to use it, etc.) within the Brain, or do you keep reading notes (files) inside DT?

Thanks for the question. Where TheBrain fits into my so-called "workflow" is the result of incremental changes in how I've handled the increasing number of items that have been generated over the past few decades on my main topic: accountable governance.

I track the topic via Google Scholar, and each week about a dozen publications are relevant enough to become part of my Bookends collection of references. Unfortunately this means I spend most of my time curating rather than reflecting or actually writing. For what it's worth, this is (roughly) my process and how my SOURCES Brain fits into the flow (see the sketch after the list for the shape of the result).

  1. Each reference (paper, report, article, book) typically has an abstract or summary; if not, I generate one from a quick first read of the work and enter it in Bookends' "Abstract" or "Notes" section. (In the meantime, DEVONthink is indexing my attachments in the monitored folder.)
  2. I then create a related Thought in my SOURCES_Brain and attach a copy of the item. (TheBrain gives you two options: to link to the item or to copy it into the Brain. I initially used the link option but now copy it where possible. There seems to be no limit to how much the BRAIN servers can handle...)
  3. I enter relevant bibliographic information and the Abstract into the Content (Notes) section. At this point TheBrain indexes the notes, highlighting any words, concepts, or phrases that have associated Thoughts. It also provides a breakdown of related links.
  4. I next create a sibling Thought for each author and (in the case of articles) the publication. (Where the item is a chapter in an edited book, I create a sibling Thought for the edited volume as well...)
  5. I then create or apply "content" Thoughts that reflect keywords or tags relevant to the item. Typically these are parent Thoughts, unless the idea or concept originated with the reference (in which case they become child Thoughts).
  6. Other child Thoughts are any tables or figures copied from the item, or any items that are commentaries, reviews, etc. of the focal item.
  7. Typically, I will then return to the attachment for a deeper read and use highlighting and annotations as needed. I save the modified item in Bookends and then update the Brain version, changing any content or links as needed.
  8. As for "reflections" or ideas generated by the item, I usually keep "Drafts" handy and enter these "on the fly", eventually integrating them into some ongoing writing project (which may be at the outline or word-processing stage).
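
(As an illustration only: the steps above amount to building a small graph around each source. Here is a minimal Python sketch of that structure, with invented names and data; it is not TheBrain's data model or Mel's actual setup.)

```python
# Illustration only: a plain-Python sketch of the graph the steps above build
# around each source. "Thought" is a made-up class, not TheBrain's data model,
# and the example data is invented.
from dataclasses import dataclass, field

@dataclass
class Thought:
    name: str
    notes: str = ""                                  # bibliographic info + abstract (step 3)
    attachment: str = ""                             # copy of the item (step 2)
    parents: list = field(default_factory=list)      # keyword/tag Thoughts (step 5)
    children: list = field(default_factory=list)     # figures, commentaries, reviews (step 6)
    siblings: list = field(default_factory=list)     # authors, publications, edited volumes (step 4)

def add_source(title, abstract, pdf, authors, keywords):
    """Mirror steps 2-6 for one reference."""
    source = Thought(name=title, notes=abstract, attachment=pdf)
    source.siblings = [Thought(name=a) for a in authors]
    source.parents = [Thought(name=k) for k in keywords]
    return source

paper = add_source(
    title="An invented example paper on accountable governance",
    abstract="Abstract copied over from Bookends ...",
    pdf="/path/to/attachments/example.pdf",
    authors=["Example Author"],
    keywords=["accountable governance", "public administration"],
)
```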

It is that last step where I think TBX would be most useful once I feel competent enough to make effective use of it.

Hope this all makes sense...

Mel

4 Likes

Thank you for the detailed reply.

I also have a large library of materials in Bookends, but I have not really read, or even checked, most of them. I collected them over the years, just like you did.
I have the references indexed in DT. So far, the only tool I use to find associations across references (articles, books) is Foxtrot Search (proximity search).
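
(For illustration: "proximity search" here means finding terms that occur near each other in a document. A minimal Python sketch of the idea, not Foxtrot's actual implementation:)

```python
# Illustration of a proximity search: does term_a occur within `window` words
# of term_b? This mimics the idea behind Foxtrot's proximity operator, not its
# actual implementation.
import re

def near(text, term_a, term_b, window=10):
    words = re.findall(r"\w+", text.lower())
    pos_a = [i for i, w in enumerate(words) if w == term_a.lower()]
    pos_b = [i for i, w in enumerate(words) if w == term_b.lower()]
    return any(abs(a - b) <= window for a in pos_a for b in pos_b)

# e.g. near(open("article.txt").read(), "accountability", "trust", window=15)
```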

But I know searching is not always the best way to discover connections between ideas, because search relies on specific terms, while concepts are expressed in different ways (different terms, paragraphs, graphs, pictures, etc.).

I am not a user of theBrain, nor do I intend to use it anytime soon; it is too expensive for me. But your method is fascinating, and I was wondering whether it could be emulated in Tinderbox. I had been attempting to use one Tinderbox file as a meta-file where I could look at my references from a bird's-eye view, kind of the way you are using theBrain.

I tried that many years ago, found the project too massive to handle, and left it. But I never totally gave up, because these references are on my drive for a reason; I need to check them and have at least a brief summary of each reference.

It would be great to know whether other users are using Tinderbox for that kind of purpose (to dissect, reflect on, and connect primary sources such as research articles, books, etc.).

2 Likes

A side question... what is DEVONthink indexing? The article linked to the Bookends record, or a bookends:// URL?

This is an endlessly faceted arena. Some annotate the referenced articles themselves (the PDFs, at least). Some annotate in their reference manager. Some annotate a linked resource from the Ref Mgr in DEVONthink. Some annotate in Tinderbox, linked to one or several of the preceding. There's certainly no right way. Indeed, many of the processes/habits we've acquired are pragmatic solutions to erstwhile limitations of our tools and their inter-operability (or lack thereof). A hard part (as the ongoing task of documenting Tinderbox reminds me) is grokking when my processes can/should update because a past limitation has been removed.

2 Likes

That means the PDF article attached to a reference record in Bookends is indexed in DEVONthink. You can read and annotate it within Bookends, within DEVONthink, or in some other software.

My interest is not in the articles that I actually read and annotate; it is in those I cannot. I have limited time, but a very large library of articles.
The question is: shall we leave it to search and chance for an article to be used (discovered) in the future, or can we attempt to at least partially organize them, so that they can be useful, using a preliminary reading (plus the articles' abstracts)?

I thought @mdubnick was using theBrain to organize his large library of references (articles) into useful categories/tags/links so that he can rediscover them when he needs them.

1 Like

Thanks, most helpful. This reminds me I could use DEVONthink's features to make better use of the 1.2k PDFs supporting my main Bookends file (c.60% of the references†). My Bookends file runs 20+ queries looking for reference data errors caused by lazy publishers not reviewing their data.

†. For published papers where I've un-paywalled access, I'll store a local PDF (I've learned the hard way not to trust publishers' web sites to provide permanent permalinks!). Plus, firing up a VPN 'just' to check that a doc is the one I remember is a PITA; local files win out. Most other refs without PDFs are books or have no digital instance. As much of my subject area is papers from the 70s–90s, many PDFs have spectacularly poor OCR (on which most data extraction is based), so having a local copy means I can use modern OCR to get a much cleaner metadata extract.
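
(As an aside: here is a hedged sketch of re-OCRing an old scanned PDF outside DEVONthink, using the open-source tesseract engine via pdf2image and pytesseract as a stand-in for whatever OCR engine you actually use. The filename is invented, and the poppler and tesseract command-line tools must be installed for these libraries to work.)

```python
# Hypothetical alternative: re-OCR an old scanned PDF with the open-source
# tesseract engine (via pdf2image + pytesseract). Requires the poppler and
# tesseract command-line tools to be installed.
from pdf2image import convert_from_path
import pytesseract

def ocr_pdf(path, dpi=300):
    pages = convert_from_path(path, dpi=dpi)   # render each PDF page as an image
    return "\n".join(pytesseract.image_to_string(page) for page in pages)

text = ocr_pdf("example_1970s_paper.pdf")      # invented filename
print(text[:500])                              # cleaner text for metadata extraction or search
```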

1 Like

Exactly. DT has the ABBYY FineReader engine, which does an amazing job of extracting text (OCR).

There is a lot of movement in the academic world these days toward publishing in open-access journals. In my field, that has a huge impact. A free PDF is available for almost all articles published in the last couple of years (as a pre-print or the actual publication). Because of that, our PDF library grows very fast every day.

Having better search and organization tools for PDF files is very useful. Foxtrot's various search options and DT's AI do help find important bits of information across your PDF library.

1 Like

What fascinated me most about the demo of "The Brain" was the abundance of links. But at the same time, this is exactly the issue that would keep me from using the software. The demo was impressive to me because here a user has manually maintained connections over years and decades. I wouldn't attribute that so much to the software; as a tool, it only has to provide the functions so that you can do this manual work well and quickly.

But two problems remain:

  • it is still a huge amount of work
  • the links are static, and thus not at all like our brain

What I'm interested in is software that builds dynamic links and adjusts them with each new note, automatically if possible. And the basis for me must always be a note written by me. If I only capture the link to a source, then I have to acquire the understanding of that source again at a later time (if the link even still works by then). This thinking about a source costs me a considerable amount of time, and I don't want to have to do it two or three times. So my notes in my Zettelkasten in TBX are all created including this understanding, in my own words. The view on a topic changes with time, and thus I would have to adjust my written-down thoughts again; that's bad enough, but I don't want to start from scratch every time.
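
(For illustration of the "dynamic links" idea: a hedged Python sketch that suggests links from note-text similarity and can simply be re-run whenever a note is added, so the suggestions adjust with the corpus. The note texts and threshold are invented, and this is not a feature of TheBrain or Tinderbox.)

```python
# Sketch: suggest links between notes from text similarity (TF-IDF + cosine).
# Re-running it after each new note makes the suggested links "adjust"
# automatically; a human still decides which suggestions become real links.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

notes = {
    "zettel-001": "Accountability regimes in public administration ...",
    "zettel-002": "Trust, blame and answerability in governance ...",
    "zettel-003": "Hypertext tools and the cost of manual linking ...",
}

def suggest_links(notes, threshold=0.2):
    ids = list(notes)
    matrix = TfidfVectorizer(stop_words="english").fit_transform(notes.values())
    sims = cosine_similarity(matrix)
    return [(ids[i], ids[j], round(float(sims[i, j]), 2))
            for i in range(len(ids))
            for j in range(i + 1, len(ids))
            if sims[i, j] >= threshold]

print(suggest_links(notes))   # re-run after every new note
```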

On top of that, I see problems with the user interface of "The Brain". In Germany we have the phrase "not seeing the forest for the trees"; I get that feeling with the more complex pages of "The Brain". What's the use of lots of links if I can't read the labels of the found items until I move the mouse over them? That is not very intuitive from my point of view. But this is a very superficial and unfair assessment, because I really haven't spent much time working with this software.

3 Likes

The Pro version of TheBrain has more layout options, like MindMap or Outline, where you can see more of the title. I still like the "normal" layout because after a while of using it I have some kind of spatial memory of my "brain". If there's too much clutter in a view, I just refactor the content.

1 Like

This depends on what you're trying to achieve. If you want automatic computer/application-created links to every conceivable topic, look no further: you have the internet. If you want to grow your understanding of different topics, it takes time and effort to make notes and think through their connections and implications. This is super hard work. Jerry's brain is a point of conversation because he has curated it personally over time. That is the value. And I'd add, the value is to him, as they're his connections.

Static links (I'm assuming you mean manually curated links) are crucial. You only link where you see a connection. You will link differently from anyone else because you will see different connections. This too is important, and hard work.

There is no shortcut to knowledge.

2 Likes

@svsmailus I agree with you to a large extent. But the quality and nature of the links in my Zettelkasten in Tinderbox are not comparable to the links of a hypertext medium. Well: comparable, yes, but different :wink:

So far I have maintained all the links manually. This leads to high quality, but also to a lot of work. OpenAI sits somewhere between the hyperlinks on the web and my manual work. The quality of the keywords it finds (the basis for automatic linking between notes) is relatively high. If I took the trouble now to create my own model for OpenAI, the difference from manual linking would diminish further.
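
(A hedged sketch of that keyword step, calling the OpenAI chat completions endpoint over plain HTTP. The prompt, the model choice, and the idea of turning shared keywords into candidate links are illustrative assumptions, not a built-in feature of Tinderbox, TheBrain, or any of the other tools discussed. It expects an OPENAI_API_KEY environment variable.)

```python
# Sketch: ask OpenAI for linking keywords for one note. Shared keywords between
# two notes then become candidate links, still to be confirmed by hand.
import os, requests

def extract_keywords(note_text):
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": "Return 5-10 comma-separated keywords for the note."},
                {"role": "user", "content": note_text},
            ],
        },
        timeout=60,
    )
    content = resp.json()["choices"][0]["message"]["content"]
    return [k.strip() for k in content.split(",") if k.strip()]

# keywords = extract_keywords(open("zettel-001.txt").read())
```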

But besides the links themselves, the representation of the Zettelkasten is very important and is only marginally considered by most tools. A good user interface is a necessity for the success of any PKM solution but still gets insufficient attention in the development of the majority of such systems. The user interface should not only allow access to the contained information, but also support working with the data and finding new links... I really don't see that in the UI of "The Brain". A natural-language interface like the one I got with OpenAI is one way to go.

1 Like

I've had a VERY different experience with TBX. For me, 90% of my links and 50%+ of my note creation are automated, with two scripts. With the recent updates to the hyperbolic view I'm getting tantalizingly close to "Jerry's Brain" in Tinderbox + output. :slight_smile:

Yes, but what constitutes "good" is in the eye of the beholder. I find that as I get closer to my notes and embrace abstraction more and more, the application interface becomes less important; the export code, preview, and now posters play an exceedingly important and crucial role.

I spend a lot of time on the subject of usability (at work). At the same time, I often have to work with apps in a scientific context. These two things are often diametrically opposed. The IT industry is happy about every new technical function, which is then given an interface that, putting it positively, allows for maximum romantic memories of the 80s.
When I work with a lot of data, I want an innovative user interface, but I almost never get one. We talk about artificial intelligence and look at the Stone Age in terms of the UI. A bit exaggerated, but it's going in that direction. I remember a big Apple conference (Amsterdam, the "Apple University Congress", at the beginning of 1990). There Alan Kay talked about programming environments for children, and I haven't forgotten his phrase "it's all about the user interface"...
In this area TBX is not on the dark side of the force for me, at least after a steep adaptation curve :wink:

2 Likes

Amen. But I do see that the user interface we seek can be costly (in engineering terms) to make, not least as it can involve working around Apple framework bugs and missing features. As well, designing a UI intuitive to all is hard, i.e. there's a trade-off. This is reflected in the fact that there are lots of wonderful, very tightly feature-scoped utilities where the dev has the luck of a very narrow design context.

In the wider context it is hard. My current work is looking at better (structured, more re-usable) research data. The limit isn't software. The limit is the social engineering of getting us (yes, us, not 'them') to generate well-structured data when, of course, we all know this is a task that should/must be done by 'someone else', be it a human or a computer. The limiting factor has long since ceased to be the code. We need better humans, more considerate of others :open_mouth: .

The more I look into UI & process, the more I find it's less about the code and more about the context.

I don't think that disagrees with your post.

3 Likes