Is there a way to create several links simultaneously?

Which attributes do you usually use for each? Preferably not for technical-commercial types of work.

This makes sense. But in my perception (if I understood everything correctly) it not only slows down the review process, because you have to go inside the parent note to see child notes or constantly switch between outline and map views, but also does not allow connecting one review_child_note to another review_child_note on the same level. Of course, you can use aliases, but in my case that will lead to a situation where I have aliases on almost every review_child_note, because I have to have them on the same level. I should note that I don’t work on only one level, but the majority of notes are on one level in the context of, say, work dedicated to one book. A level deeper is for elaboration on some concept, which will have its own mini-map of linked notes.

This is, unwittingly perhaps, the wrong question to ask. Why? Most likely these will be user attributes, with names chosen by the user, and I’ve observed that the naming choice is not always the same for everyone. IOW, user B getting user A’s code may well rename all or some user attributes to names they find pertinent to their way of thinking.

More useful is to ask about the data type to be used, i.e. String or List? (single- or multi-value), List or Set? (de-duped/sorted, or not), String, Number or Date (for dates), etc.
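The List vs. Set distinction above can be sketched in plain Python (this is an illustration only, not Tinderbox’s internal implementation; the attribute values are made up):

```python
# Illustrating the multi-value attribute type distinction:
# List-type keeps duplicates and order; Set-type de-dupes and sorts.

raw_values = ["Theme", "Event", "Theme", "Category"]

# List-type attribute: multi-value, keeps duplicates and insertion order.
list_attr = list(raw_values)

# Set-type attribute: multi-value, de-duplicated and sorted.
set_attr = sorted(set(raw_values))

print(list_attr)  # ['Theme', 'Event', 'Theme', 'Category']
print(set_attr)   # ['Category', 'Event', 'Theme']
```

The practical consequence: if you re-add an existing value to a Set-type attribute, nothing changes, whereas a List-type attribute will happily accumulate repeats.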


Mostly I create my own. In one of my files, I have over 500 user-generated attributes, but typically I consistently use about 20–30, e.g. $Term (a set), $Category (a set), $Theme (a set), $People (a set), $Event (a set), $Regulation (a set), $Specification (a set), $Technology (a set), $Concept (a set), $DateInterviewed (a date), $IsNoShowTitle (a boolean), $IsNoShowTech (a boolean), $IsNoShowChild (a boolean), $ShortTitle (a string).

As you can see, I use some attributes for value collection and others for export code and output management.

Well, you don’t need to have the review as a child; you can keep them on the same level if you’d like. Note, this is why I’ve started another thread tacitly referred to as “flattening map view.”

Why not put an excerpt into $Text and your thoughts about it into the $Name attribute?

Maybe this thread might be helpful for you – a little bit outdated and overcomplicated, but it shows some of Tinderbox’s powerful features.

I’d agree if you only want to use the map as a fixed interpretive picture. Put another way: if you can’t see it, it isn’t there.

The biggest challenge, especially for those coming from a close reading/manual annotation tradition, is to see that while that is no worse than paper (or parchment), it perhaps falls short of the affordances offered by a digital environment.

Yet “show me good examples” is a self-defeating learning approach. Better is to ask yourself, “what have I struggled to do on paper, or within the rigid confines of close reading?”. Moving from the constraints of print-era thinking to the digital age is less about asking people who do different, richer things in different domains “why aren’t you sharing?”. Better is to ask, “how does my current legacy method fall short of what I need in the light of modern advances?”.

One thing I’ve learned is just to get started, not wait for the “perfect solution.”

Once I start mucking around with my data, my thinking, I find that my biased thinking gets challenged and new processes emerge, if I’m open to them. I’ve stopped asking, “why does TBX not work this way?” Rather, I’ve started asking, “I want to accomplish this, i.e., an output. Here are my inputs. How might I use the tools that TBX provides to transform my data, generate insight, and get from here to there? What additional data do I need? What can the different views give me? If there isn’t a view, what template do I need to produce on my own to create my own view? Again, what exactly am I trying to do? If I’m stuck, can I ask the community? …etc.”

Invariably, I find that as I go through these questions, biases are knocked down or smoothed, new processes emerge, or I really do need to go to the community for help. I find that as I’m asking my question to the community, about half the time the answer pops into my head and I don’t actually need to ask the question, or it will shortly after I ask it. Some questions are intractable and take a week or two (rarely longer) for me to find a path to the insight I want. VERY rarely do I have to approach @eastgate or backstage to request a feature or call out unexpected app behaviors (in some VERY rare cases, there is a real bug that is fixed VERY quickly). On backstage, in the community, with @TomD, or with @eastgate, more often than not I’m given a deeper perspective that helps me look at the problem in a totally new way, which leads to totally new solutions. And then the cycle repeats.

What is the result of this process?

  • 10 + 3 years ago I could accomplish nothing
  • 2 years ago I could write an article
  • 1 year ago I could draft a book
  • This summer I wrote over 30 articles, 100 posts, and over 1000 pages for 3 different reports (all in Tinderbox)
  • Today, I integrated Tinderbox and an App via the command line to manage time (local for me in TBX) and on an enterprise level for my clients via the app (Clockify).

This time horizon may sound daunting, but take heart. I’ve taught people in 6 weeks to do what has taken me a few years to learn. What I’ve learned is that progress is less about the tech and more about attitude. It is about focussing on input and output. It is about leveraging the community. It is about experimentation (i.e., failing until you get “it” done). It is about getting it to work to get something done (sometimes outside of Tinderbox if there is a time crunch). Then, once done, revisit to refine. I’ve learned, through Tinderbox, more than just Tinderbox. I’ve learned fundamental life skills: community, friendship, communication, parking the ego, experimentation, thinking, writing, and languages (English, RegEx, CSS, HTML, Command Line, some Javascript, etc.). All these skills have transcended Tinderbox; I use them in all that I do and am.

The journey is so very worth it!!!


This is actually quite easy to do.

The problem here is that you have a note inside the container 19th Century and you say to yourself, “I want to link this to a note inside 20th Century. What do I do now?”

  1. Drag a link from your note to the parking space.

  2. Switch to a view of the 20th Century, typically by switching tabs

  3. Drag the link from the parking space to your destination

  4. Back to the 19th century, by switching tabs

You can also do this with a ziplink, but that’s gilding the lily.

You are right, it was not clear enough. I mean the types of information that he captures in attributes, because without understanding this, the type of each attribute has no value.

Thank you for writing them! I will try again to use some of them. Actually, as I said, I experimented with some, for example your $Category. But @mwra and you are right: because I don’t have a clear picture of the end result, I abandoned it, as I don’t know how to implement it. As I said, at least partly this is because for now it is mostly to aid understanding. I don’t have to produce something out of it. Sometimes I think in that manner during translation, and that is a goal in itself. Anyway, seeing notes listed under some attribute in the Attribute Browser can sometimes be useful…
Do you use NLTags? I also experimented with that, but found it unreliable. Sometimes it does not mark notes which it should…

Yes. After I watched the recording of the last meeting, I grasped the idea a bit.

Actually, I do not have a consistent approach to that. Now I mostly put the concept name and maybe a very short description in $Name. Many notes actually have the excerpt in $Text and thoughts in $Name. But that is impossible if I need to write quite a lot about a particular excerpt.

Yes, I already noticed it. I just haven’t dived into it yet. Also, I don’t like Zotero and do not need it. I use DEVONthink and LiquidText.

It is exactly the lack of map view! )) I never used paper for that…

Yes, I totally agree with that. But it takes time to really grasp those affordances.

Maybe when I need to test some idea on the collected material, then I will get to this point, because then at least I will need to try to find new connections between sets of notes.

This I know, of course, but thank you for the reminder! I mean I need to see lines on the map between notes, not just links going off into some different dimension ) An approach for which I have been criticized here: they call it “don’t see it, it doesn’t exist” :slight_smile: Actually, I am always very grateful for constructive criticism because it allows me at least to try to see another perspective.

But also you see: if I don’t see something on the map which I should see, then I have to hold it in my mind, and I lose part of the exteriorization of thinking. Of course, I don’t put everything on the map, but if I put something there, then it should be there to free attention for further thinking. What I don’t need any more (which was interlinked notes on the same level) I sometimes pack into a container; I guess this is advice from “The Tinderbox Way”.

I do use them, but mostly when I’m doing qualitative data analysis. For my close readings I can pull what I want out of the text and populate an attribute.

Don’t get me wrong, I don’t always have a “clear” idea of what I want either. I don’t always know if I want a table or a report; often I just want to know, “what is the text telling me?” In this case, I start to atomize it and break it down into attributes (remember, $Text, or $Name for that matter, is just another attribute, a string filled with one or more characters).

Yes, this is what I do. I try to keep my $Names short, when possible.

To be clear, you’re not being criticized. Having a visual of a link is, and can be, an extremely powerful tool for thinking and knowledge association. The breakthrough for me happened when I realized that the link could be more than just visual: the link could also be a tool for passing attribute values back and forth between notes, e.g. through the path of links, I could pass an attribute value from Note A to Note D, through the link paths of Note B and Note C. Once I got this, the whole world opened up. I could have Tinderbox create Resource Notes that would be persistent and used throughout my work; I could create associations between notes, both visually, in terms of links, and in the values within the attributes. It would be great if you could share some sample data. Perhaps start with a piece of paper showing the type of map you’re looking to build, take a picture of it, and share it with us.
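The idea of passing an attribute value along a chain of links (A to D via B and C) can be sketched in plain Python. This is only an illustration of the concept, not Tinderbox action code; the note names and the $Theme attribute are made up:

```python
# Toy model: each note links to the next; a value set on "Note A"
# is propagated forward along the link path to "Note D".
links = {"Note A": "Note B", "Note B": "Note C", "Note C": "Note D"}
attrs = {"Note A": {"Theme": "emptiness"}}

# Walk the link path from "Note A", copying the value to each successor.
current = "Note A"
while current in links:
    nxt = links[current]
    attrs.setdefault(nxt, {})["Theme"] = attrs[current]["Theme"]
    current = nxt

print(attrs["Note D"]["Theme"])  # emptiness
```

In Tinderbox the same effect would be achieved with action code reading attribute values across linked notes; the sketch just shows why a link can carry data as well as a visual line.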

In many ways, it is. Based on this discussion, what I’m hearing is that you want to unpack your containers and the associated linked notes and, when done, be able to put the notes back away in the containers. I completely agree with you; that is why I’ve been pushing for what I loosely call a “flattened map view,” which I expect at its core would be doable, but there are many edge cases to think through. For now, you can get somewhat close to this by running an agent and using action code to reproduce the link associations on the aliases. This approach has several limitations, but it can help in the thinking process (I’d be happy to mock something up if you could provide some demo data).


Not criticism here, but rather embracing (self-imposed) constraints. Different people have different learning/analytical styles, and using a method not suited to oneself causes confusion and frustration. Tinderbox is not optimised for any particular style of use and is a toolbox: this is why the idea of a ‘right’ way is often self-defeating.

The point being made about a style where one needs to see, literally, anything of potential interest is that it does constrain one; some tools/views in the box won’t work well for such a person. Take, for instance, the point about requiring all links to start/end on the current map. That argues for using a large map without any nested notes (i.e. notes not in the current map), which does mean that unless you have a very big screen, you can’t see it all at once. This in turn brings a new stylistic balance to find: what visual elements are helping understanding and what is just for show (bear in mind that, regardless of defaults, all sorts of the visual map elements can be further configured). Possibly not using some features might allow a denser packing of objects. Similarly, if the full note title is required to be visible (without cropping/elision), then consider ways to make titles as short as possible, allowing more notes to be visible on screen at the same time.

It’s a matter of embracing (self-imposed) constraints. Seeing the finished process of another user who is working to a different set of personal constraints can confuse and draw us into unproductive discussions about the ‘right’ way to do something or ‘missing out’.

Whilst I generally advise people to respect their constraints, it is useful to actually try other techniques one’s personal style forbids, as it can lead to productive change. I recall when Attribute Browser view was first posited, I was a bit lukewarm about it, until I had a task that needed lots of review of notes grouped by a number of criteria. AB view makes that very easy, and it helped push me into adding metadata at early info-collection stages: even though I couldn’t see it in a map, for instance, it was there, ready to let me use AB view at a later stage. It was worth stopping to do all the categorisation I might not have done previously, as, being not visible, the benefit of the work wasn’t clear.

$NLTags and other Natural Language Processing techniques are only as good as the underlying Apple frameworks and the algorithms behind them. Machine Learning improves all the while. It can spot patterns, but it can’t read in the way a human does or understand the semantics of your writing (unless there is a discernible pattern). I’d suggest the NLP features here be used as an assist only and not relied on as a primary sort. It is helpful for spotting known patterns (use of a certain word or term) and might see a pattern we don’t. Conversely, it will not perform as well as a human versed in the subject material, because it doesn’t ‘understand’ language in a human manner.

I can’t commend this approach strongly enough. When finding your own style/use of the Tinderbox toolbox, try lots of tests. Fail early and fail often. Rather than depend on the chance of a demo that is a real match for your perspectives, testing will give you a feel for what works and what doesn’t. As importantly, it will make it easier to ask for help. It is far harder to help with an open-ended question like “How does process X work?” than one like “I tried doing X, as in this [example]. I expected this result but got this other result instead. Why is X not working for me?”.


@mwra and @satikusala I am sorry, I was not clear enough that the remark about criticism was a joke.

Ok, I am sending part of what I worked on, with the fullest, most complicated map of all those I have. There is nothing personal in it, except maybe the style of writing and of working with information in this environment. But, as I said, the main language is Russian. As I understand, that doesn’t matter much here, because you will see my approach anyway, including experiments with attributes. I don’t really need to say that it is messy, do I? :slight_smile:

example.tbx (3.7 MB)

Yes, thank you for sharing this experience. I will try to play with AB again.

What I just understood is that I didn’t actually understand how it works. As I always did, I defined a series of terms under one. Like this, for example: emptiness: emptiness;སྟོང་པ་ཉིད་;empty of;пустоты;пустота; So I need to write this to give the AI an example? So after some time it will start adding this tag to notes which do not contain even one of these predefined words?


Thanks. I’m really busy today/tomorrow, in terms of getting to a deeper comment, but this is a lovely example and I just wanted to say thanks for sharing. The Map tab ‘Abhidharma’ best shows the complexity a map can bring.

In your ‘NLTags’ tagger, a line like:

emptiness: emptiness;སྟོང་པ་ཉིད་;empty of;пустоты;пустота;

means that detecting any one of the individual terms after the colon will make that note add the value ‘emptiness’ to its NLTags attribute.
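As far as users can tell, this behaves like a term-to-tag lookup. A minimal Python sketch of that apparent behaviour (an illustration only; Tinderbox’s actual matching is done by the OS NLP libraries, and the sample sentence is invented):

```python
# One tagger line: tag name -> list of trigger terms.
tagger = {"emptiness": ["emptiness", "སྟོང་པ་ཉིད་", "empty of", "пустоты", "пустота"]}

def apply_tagger(text, tagger):
    """Return the set of tags for which any trigger term occurs in the
    text. Matching here is a simple case-insensitive substring test."""
    found = set()
    lowered = text.lower()
    for tag, terms in tagger.items():
        if any(term.lower() in lowered for term in terms):
            found.add(tag)
    return found

print(apply_tagger("Все явления lack inherent existence: пустота.", tagger))
# {'emptiness'}
```

One matching term is enough; the tag value is added once, which is consistent with the tagger attribute being Set-type.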

However, referring to my notes on Natural Language Processing and Taggers, which describe what’s known about the process, it’s not known (i.e. we users just can’t be sure) whether or not other scripts are supported. A quick experiment shows the above works, but:

  • If you make a custom tagger, Tinderbox does not create the associated Set-type attribute: the user must do this. The associated attribute name is the same as the tagger note’s $Name.
  • For the five built-in NLP taggers, the necessary attributes are already present as system attributes.
  • Only $Text is scanned by the NLP process: not $Name and not other attributes (e.g. Displayed Attributes).
  • The NLP process needs macOS 14+ (i.e. a more recent host OS than is needed to install and use the overall app).
  • I simply can’t tell whether the NLP understands non-Roman-script languages or not, nor do I know where to look (in Apple developer docs, perhaps?).

Intrigued by the last, and testing some of the above, I made a test document: tagger-test.tbx (188.9 KB)

To test the custom tagger aspect, instead of using a system-attribute-based tagger, I made a new one, ‘StateOfMind’, and sure enough I had to create the (Set-type) user attribute. I then added your tagger specimen statement (as above). I then added a few tests. I’m sorry if the test text is a bit silly, but I understand neither Cyrillic script nor (?)Tibetan.

As you will see, using test text that contains a single term from the tagger (i.e. that term must be the detection trigger) I managed a match in all 3 scripts used.

Note: the detection is not instant, as in doing a find or query. I’m not sure how the magic behind the curtain works. But, soon rather than instantly, your tagger attribute in a matched note should get a value. If in doubt, do as I have done here: build a minimal demo, only enough to test the precise task, and verify it works. That can be much harder in the context of a big working doc, as all sorts of other things might intrude. The point of the test is so you are confident the tagger process works; then, if it fails in a working doc, you can look for conflicts affecting the process, knowing the tagging process itself works.

A bonus point here is that you can add to the Taggers for built-in NLP attributes. This can give you the NLP assist, but without it being mixed with NLP-derived processes. Why is this better? My understanding is the built-in NLP processes work even if you don’t ‘see’ them. So ‘NLOrganizations’ may detect an organisation via NLP inference (not a string match of a user-defined term) and put a value in $NLOrganizations. However, I think the NLP aspect of Taggers is that they are more than a string match, and NLP is doing things like stemming words. So you may not need to define a word in its noun/verb/adjective/etc. forms as, for English at least, the NLP engine is trained to do that mapping of one word to its variants for you automatically. Now, does it do the same for a Tibetan word? My hunch is not. Hopefully someone more expert in NLP can chime in? :slight_smile:
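To make the stemming idea concrete, here is a deliberately naive sketch in Python: mapping word variants to a shared stem by stripping common English suffixes. Real NLP engines use trained models and are far more capable than this; the point is only to show what “mapping one word to its variants” means:

```python
# Naive suffix-stripping "stemmer" for illustration only.
def naive_stem(word):
    """Strip the first matching suffix, keeping at least 3 characters."""
    for suffix in ("iness", "ness", "ies", "ied", "ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

words = ["emptiness", "empties", "emptied"]
print([naive_stem(w) for w in words])  # ['empt', 'empt', 'empt']
```

Because all three variants reduce to the same stem, a stemming-aware matcher could treat them as one term, whereas a plain string match would need each variant listed explicitly in the tagger line.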

Note to self: when I get a moment free, I need to clarify the above two articles in aTbRef re tagging and use of NLP. For instance, I’d assumed the app made an attribute if I define a tagger. But in fact leaving that to the user makes common sense. If I define a tagger and misspell its $Name several times, I don’t want a litter of new attributes reflecting each mistake. Nor should deleting a custom tagger automatically delete the attribute: I might no longer need the tagger, but I might still want the attribute’s data. Further note: you can delete a tagger, but the associated attribute remains, read-only. You could only undo that by editing the XML of the TBX document (obscure, but not hard to do, if given instructions).


Wonderful, we can use your example as the basis of our map view discussion on Sunday.

Thank you very much for the clear example!

Yes, this is exactly what remains unclear. So we are left with using it just as a string match, as I understand it. I have also always used it just as a simple string match.

That’s interesting! But again, unfortunately, it looks like I will not be able to participate because of the time zone difference and the necessity of waking up early the next day. But I will definitely watch the recording.


Great, glad that was of help.

Right now I think any OS-derived AI/ML/NLP should be considered (a) essentially experimental (it’s still evolving) and (b) likely to work best on English-language text. The latter is because of the Large Language Model (LLM) training involved and because, by accident of the US role in early computing, English is the language with the biggest amount of digital text upon which to train. Compare that with, say, digitised Tibetan texts.

No one planned it this way; it’s just how things have evolved. Cyrillic texts are doubtless more prevalent than Tibetan ones, but still far fewer than English or Western European languages (partly reflecting the Cold War era embargo on transfer of computer tech to the USSR and Warsaw Pact countries). :roll_eyes:

Tinderbox can implement these OS offerings but doesn’t have much control over them or insight into them (AI tends to be ‘black box’ work: you get an answer but no context, derivation or accuracy estimate).

Currently, I believe the natural language system works in English, French, Spanish, Portuguese, Italian, and Chinese. It ought to work in Arabic and Russian, but I’m not convinced it does. (I get errors with Russian, and I read no Arabic at all)

The Tagger in Hints uses string matching, and should work in any language. But it’s not very bright.


Further to discussion of taggers, I have updated my pages on Taggers and Natural Language Processing.


I forgot to ask something essential about taggers… As I see it, when making a custom tagger as in your example file, it does not understand stemming of words. So can I use some special symbols when defining a custom tagger to include different endings of the word? The (*) wildcard does not seem to work.

No, for the reason documented here:

Taggers do not support regular expressions. Why? Because taggers work with the OS’s NLP libraries, and those do not understand regex.

So, as things stand (at v9.5.1), your tagger is doing a case-insensitive string match on any tagger-defined search terms. For any deeper explanation you’d need to write in to Eastgate.

Given that running a lot of regex (e.g. .contains() and similar) in a big doc can slow things down, having lots of regex-based taggers might have a similar effect.† IOW, we’d like the result, but not the cost of getting it.

HTH :slight_smile:

†. IOW, this is an issue for heavy users of big documents, not when just messing around with 30–40 notes.