OpenAI - Assistant API - TBX demo

As demonstrated in today's meet-up, here is a demo showing how to use OpenAI's Assistant API with TBX.

OpenAI_Assistant-API.tbx (395.4 KB)

Thank you so much for this. I wish I’d been able to attend the meetup. I’d have learnt a lot.

I have a question: would it be possible to create a “generic assistant API” that could work with one’s model of choice, be it Claude AI, Gemini or Mistral?

From the little I’ve seen out there, it would entail creating some generic attributes such as $APIKey, $ModelName and $Path (to the assistant), or something like that. Just a thought in terms of further developments to the demo. I think that would be fantastic.

Thanks once again!

Hi Fidel,

sure, it is possible to create a generic interface to different APIs. But it is a lot of work and may be too complex for TBX code (no debugger, no OOP…) - not impossible, though.

The APIs differ a lot:

OpenAI and Google Gemini have the most advanced APIs. It depends on what you are trying to solve with the LLMs; that should be the main reason for choosing one of the AI models.

I already store the path, API key and model name in attributes of TBX notes. Still, the demo is limited to OpenAI.
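
To make the idea concrete outside of TBX, here is a minimal Python sketch of such a generic call, driven by the same three values the thread mentions as attributes ($Path, $APIKey, $ModelName). It uses the plain chat-completions endpoint rather than the Assistants API for brevity, and the function name ask_llm and the example values are illustrative, not part of the demo file:

    import json
    import urllib.request

    def ask_llm(path, api_key, model, prompt):
        # Generic call: any OpenAI-compatible server works,
        # as long as `path` points at its /v1 base URL.
        request = urllib.request.Request(
            path.rstrip("/") + "/chat/completions",
            data=json.dumps({
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            }).encode("utf-8"),
            headers={
                "Content-Type": "application/json",
                "Authorization": "Bearer " + api_key,
            },
        )
        with urllib.request.urlopen(request) as response:
            body = json.load(response)
        # The reply text sits in the first choice's message.
        return body["choices"][0]["message"]["content"]

    # OpenAI is just one possible target; only the three values change.
    print(ask_llm("https://api.openai.com/v1", "sk-...", "gpt-4", "Hello!"))

Swapping in another provider would then mean changing only $Path, $APIKey and $ModelName - with the caveat from above that not every vendor's API is actually compatible with this request shape.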

Noted with thanks! The way things are going, I won’t be surprised if the next major update of TBX has this functionality built in.

Just a little update if you like to play with AI and LLMs…
If you download LM Studio (a great free app, the ultimate playground for AI on the Mac), you get some new options for the TBX integration:

  • you can switch from OpenAI to any LLM you like (Meta, Microsoft…)
  • all your work with TBX and the AI will run locally on your Mac. It’s fast (if you have an M1, M2 or M3 Mac) and you don’t share your prompts with the cloud

If you add AnythingLLM to your applications, it becomes an even better experience. AnythingLLM is free, too. In LM Studio you can start a local server, and you will use this server from TBX just as you did before with the external OpenAI server. AnythingLLM will also connect to this server, and there you can add your own files to the LLM. The AI engine will then use all of your data to answer your questions, and everything runs locally - no sharing of your data with anyone.
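
To illustrate how small the switch is (reusing the hypothetical ask_llm sketch from the earlier post): LM Studio's local server speaks the same OpenAI-style protocol, so only the base URL and model change. The address below assumes LM Studio's default port of 1234 - check the Local Server tab in LM Studio - and the local server does not verify the API key:

    # Same call as before, now aimed at LM Studio's local server.
    print(ask_llm(
        "http://localhost:1234/v1",
        "lm-studio",        # dummy key; the local server ignores it
        "local-model",      # whichever model you have loaded locally
        "Summarise this note.",
    ))
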
I can’t find the time at the moment, but I will update my TBX demo to work with this setup. So far this post is just an inspiration :wink:

Looking forward to the update.

While you are waiting for my update, it might be a good idea to read this article about privacy and data security with ChatGPT.