@naupaka’s description of Tinderbox hits the mark, and is also the reason that I use it when I need to customize my own manipulation and exploration of notes or other source data. I use Curio when I don’t need to manipulate or compute data but just need to organize it and present information. These are points along the spectrum of knowledge work. Tinderbox-class software and Curio-class software, in their own ways, broaden the spectrum for us, and that is a good thing.
Bingo… that is how I integrate Curio as well. More specifically, I reach for it when I need a rich set of mind-mapping and brainstorming tools that integrate with all kinds of media. It’s a one-stop shop for exploration. The infinite canvas and the ability to work with any media in raw form are its strengths for my work.
I also use Freeform when I need a similar tool that works across the Apple ecosystem (Curio is Mac-only), and I use Heptabase for the odd occasion when I need to work on the Windows platform.
I view all these tools as special infinite-canvas tools that integrate media very well. I get what I need from them, always with a purpose in mind. In the end I try to integrate this workflow back into DEVONthink… the mother of all knowledge for me.
Tom
Yes, this is what I’ve come to learn and love about Tinderbox—something that @mwra has been espousing since I’ve known him.
It would be cool if we could go back to one of Jenifer’s original questions (see above).
We’ve spent a lot of time talking about tools—Curio, emacs, Word, Freeform, DEVONthink, Obsidian, Pages, MindManager, EndNote, Zotero, TextSniper, TextExpander, Pandoc, SnagIt, Scrivener, etc.
What about the systemic framework/workflow question? Is there more that we can expose on that? I think @MartinBoycott-Brown (PARA reference) and @PaulWalters made a good start. PARA gives us a way to containerize our work. Paul’s images show that some tools might flow together in both a container and a workflow (collect, explore, draft, complete).
Is there a PKM matrix we can build here that aligns methods with objectives/outcomes? How might we divide methods into categories like outlining, mind-mapping, tabulation, etc., and then associate these categories with tools and systems? What might my objective/outcome categories look like? Explore—Synthesize—Ideate—Write/Create Imagery—Output—Contribute?
I’ve often thought that the most important part of the process is “making sense of it all”. The only tool for that is the brain – so it then becomes a question of what best supports the brain in that endeavour. For me, “making sense of it all” requires different tools at different times and may depend on a whole host of factors. And different people will no doubt be very different in how they approach “making sense of it all”.
I think we’re at the inevitable conclusion to the recurring issue of a well-intentioned but open-ended question getting an over-wide range of answers. Read carefully, we see differing users with differing needs/styles doing the same or different things in different ways, yet often with the same cluster of apps. There isn’t a right or best way, so the thread rolls on, as no one answer fits all readers.
That doesn’t devalue the useful insights above, but I make this point to explain why the OP’s question won’t be ‘correctly’ answered - it can’t be. They may spot an app choice/combo that seems to suit them, but like as not may be overwhelmed by the range of well-intentioned but slightly conflicting replies.
If there is a right place to start, it is to look at the task at hand and any strong personal process preferences, i.e. person A might use/store images one way, while person B doing the same broad overall task may use a completely different approach … and neither is right/wrong.
I think the shape of the task is the ‘best’ start: the broad endpoint to which one is working. What are you starting with (source data materials, just an idea you wish to research, etc.) and to where do you wish to get? Indeed, where to get to initially (lest one tries to boil the ocean when only step #1 is needed now). Within that frame it becomes easier to assess the pertinence of app features or complete tools.
That might suggest that we have a mechanical problem to solve. It isn’t. @MartinBoycott-Brown’s “it depends” is the only answer that makes sense to me. Depends on the person, depends on the goal, depends on the intent, depends on the skill, depends on available time … depends on so many things that a prescriptive answer is impossible.
Personal case stories are helpful. Here’s a story. Say I want to write about the evolution of the holdings of the “original proprietors”, from whom the lands comprising the District of Columbia were bought, into the topology of the District today. That’s the goal. So with the goal in mind, I’m going to first design (in writing) a program of research, structured note taking, interesting metadata to cover, artifacts to acquire, and outlines of what I might write. That’s my research roadmap.
Then the research moves into the field: archives, history centers, the Library of Congress, along with note-taking, filing artifacts, and tracing information found across different sources. Over time, research generates a growing corpus of notes: handwritten notes, annotations on documents, or text I create in Tinderbox. All of the non-Tinderbox artifacts are stored in DEVONthink, and I use DEVONthink’s standard and custom metadata to characterize and summarize content, authors, etc.
The thing about research at any stage is that piles of documents and notes expand exponentially (or seem to). I don’t go back and make sense of it after I collect it; I need to make sense of it while I collect it. For example, over time I make notes about anomalies in Baist’s 1903 survey, and a week later connect those notes to oddities in the 1800 L’Enfant-Ellicott map, and then days later I find related comments in the Notley Young deeds of 1781. These kinds of notes I keep in Tinderbox because there is power in linking these findings in ways that DEVONthink and Curio cannot provide. (This is also the kind of complex emergent sense-making that Luhmann engaged in, which many popular articles about Z get wrong. It’s not about making piles of notes; it’s about intentional and informed study, noticing, and thinking.)
Eventually I have the framework of information for my article(s). Information; I am not fond of “knowledge”, an overworked and misunderstood term. I use Curio to iterate toward a layout of the final working draft. Then revise it and use Tinderbox to draft.
Tinderbox soon becomes cumbersome because I like to work on the format of content and illustrations while I’m drafting, so I move over to Pages or Word (usually Word; it’s better for this sort of thing and I know how to corral it). Then off down the path of fact-checking, re-drafts, and finally the end. All through these final stages – through all stages, actually – I am curating and annotating my research. It’s not uncommon to go back to the initial research phases – and even design a sub-project – just to trace through a new conclusion about the theme.
This, for me, after decades that began with taking notes on Hollerith cards and ended up with iPhone capture of text on a page, is my method. It’s not prescriptive, just descriptive, which I think is the way to give advice on this thread’s topic.
I like this observation, as well as the point about “sense making.” Very keen.
One point I might add, however, is a thought I have about AI agents, both organizational and personal. As we move forward into the great new wave of AI and data, especially the data we use to train our personal AI—which will have delegated authority and may one day make legally binding decisions for us—it will be crucial for us to retain the provenance of the data that informed the “sense” that it, and honestly we as well, have made.
This is becoming more and more important in a world of extreme distrust. The old adage of “trust, but verify” is dead or dying; the new adage of zero trust, of distrust-but-verify, is upon us. From my research, one day, a day not too far off, the content we produce will be accompanied by our personal digital self-sovereign signatures—not just who created the content, but with what applications and devices, and how the content has evolved from its original source. We’ll be able to peel back the layers. This identity will be layered, too. We’ll have a spectrum of identification and authentication layers on our content—on the sense we contribute or publish—ranging from “I know this came from a human or non-human agent” to “I know exactly which human or non-human agent this came from.” As for the speaker’s voice, that too may be traced back to our personal data store (e.g., our iCloud, our Tinderbox files), to the PIMS (the application layer), and to the brains and nervous systems accessing and writing to the store. The new layer of distrust may very well be the precursor to a new era of “Extreme Trust.” We’ll see.
I really don’t know what that all means, sorry. We have the brains we have. AI thus far merely mimics patterns and gives the appearance of having “voice”.
We all are surrounded by liars and truth-tellers and either learn to deal with it, or don’t.
Fair enough… much will be unfolding in the months and years to come. As you point out, it is all about “sense making.” I’m still figuring “it” out too.
As @PaulWalters rightly notes, “Personal case stories are helpful.” I think the difference here re ‘mechanical’ work is the degree of repetition (and existing experience with things like templates). By comparison to Paul’s case, @satikusala is often running the same overall report process over and over, albeit with some different source data.
In that context, the ‘mechanical’ approach—i.e. using automation in Tinderbox with posters, etc.—can make sense if the scenario being mapped out will happen repeatedly. IOW, same process, same report, limited changing data. If the user is also comfy with using export templates and posters, then the extra effort to set up an in-Tinderbox dashboard can have a significant pay-off. This is likely to suit far fewer users in aggregate, but it is useful for those where the repetition amortises the effort spent in set-up. Otherwise, for more ad hoc or less frequent use, an approach using Curio alongside Tinderbox likely gets people to their outcome faster and for less effort.
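For anyone curious what that sort of in-Tinderbox automation might look like, here is a minimal sketch in Tinderbox action code. The prototype name (pReport) and the Status attribute are hypothetical, purely for illustration; they stand in for whatever naming a real report project would use.

Agent query (find report notes that are not yet finished):
$Prototype=="pReport" & $Status!="done"

Agent action (colour the matches so they stand out):
$Color="red";

Rule on a dashboard note (count what is outstanding):
$MyNumber=find($Prototype=="pReport" & $Status!="done").count;

An export template could then surface that count (e.g. via ^value($MyNumber)^) in the report the process produces each time it is run. The point is not this particular query but that, once built, the same query/template pair keeps paying off every time the same report recurs.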
Which choice makes most sense is then an individual choice based on expertise (can I configure the automation needed?) and likely repetition (am I going to be using this report process multiple times with little change?). But the choice is less a case of ‘best’ than a varying choice based on the task being done. The two approaches (and there are doubtless other alternatives too) address the same overall task yet with different constraints/needs, so they are complementary rather than in opposition.
My comment “[that] might suggest we have a mechanical problem to solve. It isn’t” is not a reference to (or disparagement of) doing repetitive or similar tasks in Tinderbox (or other software) using action code or other logic. The comment is different and broader, as I already explained. I believe prescriptive “workflows” (such an ugly term when used for human endeavors) suck the creativity out of learning and research.
Great observations @mwra, as always. A few more points I’d add to this calculus: 1) will working on the automation now teach me something new, give my brain something else to work on for the time being—something that may or may not be useful in the future (my experience has been that days, weeks, or even months later it is useful); 2) would this be a “fun” puzzle to figure out purely for the joy of figuring it out; and 3) if I spend the time figuring this out, might it save someone else time and angst in the future?