Tinderbox overloading my server

I have a Tinderbox weblog file that handles a few areas of my website (and 3 domains) in one tbx file. Recently, I've been getting blocked by my ISP for an excessive, "non-standard number of IP connections".

I thought it was FileZilla, then VS Code, but unfortunately the culprit seems to be my Tinderbox file. I might be doing one thing seriously wrong: my templates and notes use absolute URLs for CSS and media. Is this why Tinderbox seems to be sending over 1,000 requests to my server per minute?

Is there a way around the issue, or is the only solution to convert the absolute URLs to relative paths?

I only have 3 agents, all set to occasional priority, and they do not request anything from the server (they just deliver random quotes and links to resources from internal lists).

Image: Inspector, tab 1
Image: Inspector, tab 2

That’s bizarre! I can’t imagine what could be going on here.

I suppose that, if you leave Tinderbox constantly on the Preview pane, and you're previewing a complex page with lots of images, that might generate a refresh after each agent update cycle. But that's only six refreshes a minute, so your page would need to require more than 150 images and CSS files. Not impossible, but that's a lot.

Again, if you're auto-fetching a huge number of pages, that might generate some traffic. But it seems unlikely that you're auto-fetching thousands of notes.

Are you identifying the Tinderbox requests via Little Snitch, the user agent, or some other way?

It appears you might have a lot of edicts. What are their jobs?

Thank you for your replies, Mark and Paul. The information about "over 1,000" pings comes from my provider's support, given as the reason for blocking my IP. Mark, I think there may be something to leaving Tinderbox on the Preview pane: I will check with Little Snitch whether going into Preview mode triggers the spikes and report back.

Paul, I use edicts to help generate a visual Roadmap in D3.js: a map of inbound and outbound connections. The edicts populate a small inline .js snippet in each HTML page with info about that particular note. Nothing too big: the edict gives the current note a name, labels it central, then identifies inbound and outbound notes and creates links on the displayed rectangles:

// Central node
$CurrentNote = '{ id: "' + $Name + '", type: "central", url: "' + $TechstyPath + '" }';

// Initialize nodes with the central node
$GraphNodes = $CurrentNote;

// Process inbound links
$InboundList = links.inbound..$Path;
$InboundList.each(X) {
    $GraphNodes = $GraphNodes + ',\n{ id: "' + $Name(X) + '", type: "incoming", url: "' + $TechstyPath(X) + '" }';
};

// Process outbound links
$OutboundList = links.outbound..$Path;
$OutboundList.each(X) {
    $GraphNodes = $GraphNodes + ',\n{ id: "' + $Name(X) + '", type: "outgoing", url: "' + $TechstyPath(X) + '" }';
};

// Initialize links
$GraphLinks = "";

// Links for inbound nodes
$InboundList.each(X) {
    $GraphLinks = $GraphLinks + '{ source: "' + $Name(X) + '", target: "' + $Name + '" },\n';
};

// Links for outbound nodes
$OutboundList.each(X) {
    $GraphLinks = $GraphLinks + '{ source: "' + $Name + '", target: "' + $Name(X) + '" },\n';
};
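For readers who don't write Tinderbox action code, the same assembly logic can be sketched in plain JavaScript. The function name `buildGraphStrings` and the input shapes are my own illustrative assumptions, not part of the site; the real work happens in the edict above.

```javascript
// Sketch (not the actual edict) of the same string-assembly logic.
// buildGraphStrings and the {name, url} input shape are illustrative only.
function buildGraphStrings(current, inbound, outbound) {
  // One object literal per note, matching the edict's output format.
  const node = (n, type) =>
    '{ id: "' + n.name + '", type: "' + type + '", url: "' + n.url + '" }';

  // Central node first, then inbound and outbound neighbours.
  const nodes = [node(current, "central")]
    .concat(inbound.map(n => node(n, "incoming")))
    .concat(outbound.map(n => node(n, "outgoing")))
    .join(",\n");

  // One link per neighbour, pointing toward or away from the central note.
  const links = inbound
    .map(n => '{ source: "' + n.name + '", target: "' + current.name + '" },')
    .concat(outbound.map(n => '{ source: "' + current.name + '", target: "' + n.name + '" },'))
    .join("\n");

  return { nodes: nodes, links: links };
}
```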

The result is a small visual Roadmap (actually 400 of them across the site), as below:

Image of D3.js map of inbound and outbound notes

Example page – map under the text (what edicts are doing before populating a small inline D3.js declaration)


Very nice approach – I like the site and the embedded maps.

I wondered about the edicts, because though they are low priority, 400 or so of them might result in a lot of background buzz.

What’s your method for getting the data from Tinderbox to your site?

Is the Tinderbox file scraping data from somewhere regularly?

Is the ISP block because the machine is doing a lot of POSTs or is it excessive GETs, or maybe both?

@Xanadu I have an HT'24 committee meeting shortly, so afterwards I'll see if I can't at least help flesh out some more context. This is indeed a head-scratcher of a problem and the first time I recall such a thing being an issue.

@Xanadu kindly showed me his TBX (helpful, though, as expected, most of its $Text is in Polish).

The site is exported locally and then uploaded. It uses a Bootstrap framework. The scripts for that, the D3, and CSS use absolute (i.e. full) URLs. The general template used by pages has no exotica I can see.

The heavy calls to the ISP seem to occur in conjunction with use of the Text/Preview pane, though that's not certain. We couldn't trigger the issue during our call.

Even if some of the JS libraries use external (web) URLs to fetch content, that should be needed only once per invocation of the Preview, which raises the question of what might be signalling so much.

That suggests getting a sample of the calls (and thus their type/origin) offers the best way ahead; @Xanadu is working on that.

HTH :slight_smile:

Thank you very much @mwra for going through my Tinderbox file. I now have Little Snitch installed, and I have also sent a follow-up message to my ISP asking, if possible, for more info about the nature of the requests.

@PaulWalters: the Tinderbox file is not scraping any data from outside and does not have any watch folders. As for the edicts, I showed above how they look in the Inspector. In the HTML, they populate an inline snippet of D3.js code responsible for displaying the map of inbound and outbound links, with a) URLs (again, absolute) and b) note names:

        const graph = {
            nodes: [{ id: "collage", type: "central", url: "https://techsty.art.pl/hipertekst/awangarda/collage.htm" },
{ id: "dunajko", type: "incoming", url: "https://techsty.art.pl/hipertekst/polska_e-literatura/liberatura/dunajko.htm" },
{ id: "burroughs", type: "outgoing", url: "https://techsty.art.pl/hipertekst/awangarda/burroughs.htm" }],

            links: [{ source: "dunajko", target: "collage" },
{ source: "collage", target: "burroughs" }]
        };
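As a side note, because the graph literal is assembled by string concatenation in the edicts, a tiny sanity check can confirm that every link endpoint matches a node id. The helper below is hypothetical (not part of the site), and the sample data only mirrors the structure above with shortened URLs.

```javascript
// Hypothetical helper: verify every link's source and target refer to a
// known node id, since a typo would break the D3 map silently.
function validateGraph(graph) {
  const ids = new Set(graph.nodes.map(n => n.id));
  return graph.links.every(l => ids.has(l.source) && ids.has(l.target));
}

// Minimal example mirroring the structure above (URLs shortened):
const sample = {
  nodes: [{ id: "collage", type: "central", url: "collage.htm" },
          { id: "dunajko", type: "incoming", url: "dunajko.htm" }],
  links: [{ source: "dunajko", target: "collage" }]
};
```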

Perhaps, when previewing, the map too can at times send requests excessively. It did not do that while Mark and I were looking at the code and exporting the HTML files.

Will keep updating the thread, thank you so much.


Regardless of the outcome of the sleuthing, you’ve indirectly showcased for the community an interesting “Connection Roadmap” (“Mapa połączeń”) generated from Tinderbox and embedded in your site. Well done!


I suspect that edicts, agents, and maps are all a red herring here :wink:


I’d love to believe so – herrings are our national dish in Poland. If I dropped some red variety here (these would be herrings in beetroot :slight_smile: ) I sincerely apologise.

The observation still is that whenever I’m in my workflow of exporting from Tinderbox to my server, I occasionally get blocked by my own provider.

I will keep gathering my data and thank you all for replies.


If iCloud Private Relay is active, or a VPN is in use, turn the service(s) off and see if the ISP’s problems reoccur. Not likely, but might be just another possible culprit to eliminate.

But if I recall correctly, you export to a local (hard) drive before uploading. Tinderbox can't do the upload itself, unless you are saving directly into some online-synced folder.

I wonder if this is your ISP mis-diagnosing a local-to-server sync of pages of a large static site. Many of today's support techs were born after static web pages became 'old school', so your ISP's techs likely don't understand that a legitimate customer might want to sync thousands of files. Over the last 20 years, and across several different ISPs (none of them cheap), this has happened to me regularly and is only solved by escalating the problem to a grown-up. It's normally due to a new junior hire 'optimising' the settings for spotting misbehaviour (without the necessary skills/expertise for the task).
