I'm beginning to use Tinderbox to build complex LLM prompts for GPT-5.
By "complex" I mean the following as a minimum spec for my prompts in production:
1200 words
Context engineering
YAML Header
Instruction Preamble
Definition of contextual meaning
Category Definitions
Pass A — Detection
Pass B — Evaluation
Evaluation Framework
Elements to Evaluate
Example Outputs
Non-Example Output
Is anyone else doing similar AI work? Right now my use of Tinderbox is very rudimentary, but even just using it for its visual interface is improving the prompt-building process. How else might Tinderbox's features improve natural-language coding of a complex and interconnected series of prompts?
In the above image you see my front panel for this work. Every adornment will end up in the final prompt, and each box within an adornment is a piece of the whole that needs special attention.
Backstage, a bunch of us are working on Tinderbox and Claude. This should be released in the next couple of weeks. It's based on MCP, so you might want to explore GPT-MCP solutions in advance.
The most significant development has been allowing the LLM to make notes in Tinderbox for use in subsequent discussions. This reduces the need, which I sense above, to define things like output formats in each prompt.
I look forward to seeing how you integrate LLMs.
I think I can see where you're going with this. I wonder if it would work by letting you pick and choose what context you carry forward via linking?
Yes! I find that encouraging Claude to keep hierarchical notes with well-chosen titles, and to only scan the note titles at the start of a session, helps conserve context space. This is all very new, of course, so we’re learning a lot every week.
This is a picture of my dashboard where I'm building prompts. As you can see, I have the prompt divided into chunks on the adornment "State: Depth".
This lets me build the prompt out one chunk at a time, and lets me visually rearrange the flow of the prompt to suit the LLM. The ordering of the text makes a big impact on the LLM's understanding.
This is very helpful, but comes with downsides.
Making changes to the code can be tricky, as I have to hunt around for it, clicking on notes and scrolling until I find it.
Compiling all the notes into a final prompt is tedious: clicking on a note, copying the text, and pasting it into another note, one at a time, to preserve the order.
I know there have to be ways to improve this workflow. Are there any obvious improvements you all see that would help me use this design to build long-context prompts?
Another approach would be to use an attribute to indicate whether this note is to be used in the prompt and, if so, where it should be placed. Then use an agent to gather and sequence those notes.
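As a sketch only (the attribute $UseInPrompt appears later in this thread; $PromptOrder is a Number attribute name I'm assuming for illustration, as is the exact setup), the agent might be configured like this:

```
// Agent query: gather every note flagged for inclusion in the prompt
$AgentQuery = "$UseInPrompt==true";
// Sort the agent's aliases by a user attribute recording each chunk's sequence
$Sort = "PromptOrder";
```

You could set these directly in the agent's Query and Sort panes rather than via action code; either way, the agent then holds an ordered set of aliases ready to be concatenated.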
I followed @eastgate's suggestion and had GPT step me through the process. I'm almost there, but I'm not getting any output yet. Here is a picture of what I set up; not sure if it's enough info.
Any suggestions?
What should be happening:
(Visual layout for reference.) The note "Element 1: Survey" should be aliased by the agent, piped through $PromptOut, and its text recolored to light blue.
The note "Prompt: Build" should see $PromptOut and print the light-blue text into its $Text field.
The green adornment "PromptAutomation" stamps the notes dragged in with the attributes the agent is looking for.
So as soon as a note gets dragged onto the adornment, it should be detected by the agent and piped out in light blue.
Also, I don't understand the error: the $UseInPrompt attribute exists, and though it defaults to "false", I manually changed it to true.
You have an error in your argument; you're calling $UserPrompt, but it does not exist.
It is not clear what you mean by "any output yet." It is not clear where you want the output to go.
`L.format` is not correct; you'd use `L + "\n\n";`
I can’t see the agent, so it is not clear what it is collecting.
*edit: I found that my path was wrong; glad I caught it. But even with the path fixed, it's still not quite right. I'm going to try closing the program and opening it back up to see if some state wasn't updating.
`List` pulls the IDs of each of the child notes and then loops through them to produce a string that you'll then place in the $Text of the note "Prompt: Build".
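That loop might be sketched in action code like this (a sketch only: "/PromptAgent" is a placeholder for your agent's actual path, the variable names are mine, and the exact syntax may need adjusting for your Tinderbox version):

```
// Stamp or rule on "Prompt: Build": concatenate the $Text of the
// agent's children, in order, separated by blank lines.
var:list theIDs = collect(children("/PromptAgent"), $ID);
var:string theText = "";
theIDs.each(anID) {
   theText = theText + $Text(anID) + "\n\n";
};
$Text = theText;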
You may run into some different snags, but this should point you in the right direction.