Will creating too many agents affect computer performance?

Excuse me, the file has already created 200 notes, and I need to create an agent for each note to collect information. So, 200 agents will be created. Will this affect the computer's running speed?

Probably not. But 200 agents is a lot!

What is the information you’re collecting?

What do you mean by “The file has already been created.” What created what?

Personally, I have found performance issues with “too many” agents. But my files are VERY complex, with over 7,000 rules and edicts running. With all those rules and edicts running in the background, action code triggered from templates, and the agents also applying themselves, I’ve found that hilarity can ensue. I’ve found that turning agents off when I don’t need them running is really helpful. All you need to do is set `$AgentPriority=-1`.
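As a minimal sketch of that on/off switch (the agent name here is hypothetical):

```
// Switch an agent off: -1 removes it from the update cycle
$AgentPriority("Find overdue tasks")=-1;

// Switch it back on: 0 restores normal update priority
$AgentPriority("Find overdue tasks")=0;
```

You can put lines like these in a stamp so a whole set of agents can be paused and resumed with one click.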

Also note that agents have many more uses than simple discovery. They are AWESOME for editing and transforming notes en masse (e.g., when updating notes to new prototypes), creating glossaries and indexes for reports, and much more!

Tinderbox is extremely forgiving. You can push it to an edge and then always back off. Invariably, I find that significant performance issues are the result of malformed action code or templates that I’ve written; they are rarely the fault of Tinderbox itself.

There are no hard numbers for a maximum number of agents, etc. If you’re (inadvertently) causing a slow-down you’ll see it, and it is a prompt to do some re-structuring. Happily, Tinderbox makes this much easier than many other tools do. It’s a one-off bit of work. But building for problems one hasn’t yet hit can also be confusing. Better, IMO, to deal with it as/when such issues of scale occur, as what needs to be done makes more sense in context.

If you’re just making agents to report the presence of a certain value in notes, you may well find Attribute Browser view can replace a lot of those agents (and their associated work behind the scenes).


I have a use case where I need to digest large PDFs and restructure the text in different ways. Is Tinderbox going to help me do this? Can I have 200 agents parsing the same text and piping different “blurbs” into different notes?

I’ve been doing this manually for two years, building the structure these notes end up in… but I’m looking to automate the dull stuff, like parsing the same text in multiple different ways. That would be great!

I still don’t see the need for 200 agents here, unless you have 200 different ways to parse these PDFs.

Do you need to reparse these pdfs frequently, as agents would do? Perhaps you want stamps, which are manually applied, rather than agents?
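To illustrate the difference: an agent’s query re-runs continuously in the background, while a stamp’s action runs only when you apply it to selected notes. A minimal sketch, assuming a hypothetical `pPDF` prototype and a user boolean attribute `$Parsed`:

```
// Agent query: continuously gathers PDF notes not yet parsed
$Prototype=="pPDF" & $Parsed==false

// Stamp action: run once, by hand, on the notes you select
$Parsed=true;
```

If the PDFs only need processing once, the stamp does the same work without the ongoing background cost of an agent.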

If my machine could handle 200 agents, I’m sure I’ll end up there eventually.

I will need to do this regularly, as I’m doing data entry of the same text through different contexts. I know the difference between agents and stamps, but I’m not sure about the details. Automation would be nice if I can get the outputs formatted right.

I wonder if Adobe PDF Extract → JSON would be a better first step, and then do the parsing?

It might help us understand if you were to share some of the queries you use in your agents. At the outset, it’s easy to make agents and keep those you’ve already used in case you might need them later. But, for instance, you can always store the queries and then delete the unused agents; if you’ve saved the query, they are very easy to re-create if/when needed.
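One hedged sketch of “storing the query”: a stamp applied to an agent could copy its query (held in the built-in `$AgentQuery` attribute) into a log note before you delete the agent. The “Saved queries” note name here is hypothetical:

```
// Append this agent's name and query to a "Saved queries" note
$Text("Saved queries")=$Text("Saved queries") + $Name + ": " + $AgentQuery + "\n";
```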

As well as finding notes through queries, are your agents using actions to extract/set any attributes? Or are the agents mainly a way for you to find (aliases of) notes you might want to review?