Tinderbox and the OpenAI API

I did some experiments with this. The language part of CoreML was only thinly supported at that time, which made things difficult, and the training requirements seemed daunting. But it was easy enough to do…

1 Like

Hm, having met IRL the person supposedly pictured, I’m totally underwhelmed by the result of the above (the style is immaterial). Just because AI gives a result, it doesn’t mean it is a sensible result. ‘AI’ seems to have stopped us asking whether the results even make sense, a useful habit humans have acquired over centuries if not longer.

I’m not trying to be rude about the process here, in terms of interfacing Tinderbox to AI. But, in all seriousness, automated crud is still crud, just delivered to you more quickly (by AI) and without a warning as to quality. On a closed track with a trained driver, AI can give OK results, but in the wider context I have trust issues.

TL;DR: getting AI output != sensible result.

1 Like

I don’t think AI can or will change our behavior in the short term. Even without AI, we believe what we want to believe. Reportedly, only 18% of Brexit supporters are still positive about the outcome. But well over 50% of this group believe it will get better. That’s without AI (well - almost).
AI is simply a tool, and “a fool with a tool is still a fool” certainly applies here as well. Nevertheless, I see exciting prospects here and would call your assessment too negative. I found the results of DALL-E rather underwhelming; that looks different with the text-based models. But they were also fed with data from Twitter, and a large part of it was probably garbage. Now AI is also supposed to distinguish good content from trash and fakes. How will that turn out if the AI itself is fed material written by another AI?
But I still like the idea that the basis of AI use in Tinderbox is my own notes, so I’m not relying on content that comes 100% from AI. I want to assess, evaluate and add to my own content and selected third-party content.
Seen as an evolutionary process, Google was the successor to pure hypertext documents, and AI is now adding a step on top of Google.

1 Like

As said, I’m not critical of the process, just doubtful of the results. Confirmation bias means we’re overly receptive to things that look right (or even to anything from a process we’ve previously considered too hard). Software engineering has a particular blind spot in this regard: result == progress, and progress is generally good, therefore the result is good.

I’m not against LLM-based AI at all; I’m just conscious of how societally/culturally blind I am to its output quality. I don’t expect honesty from someone selling replacement windows (it’s nice if they are!), but I do expect my doctor to be honest when talking about a cancer treatment. A societal problem is that we are treating AI as the latter, not the former, in contexts where that trust is not warranted by results (as in, not all output is useful/truthful).

In the case of DALL-E, making pictures of imagined worlds seems reasonable and thought-provoking. Using it to make a ‘better’ picture of me for my passport/ID/etc., maybe not so good. But as all we get is ‘output’, how do we tell? I don’t pretend to know the answer, but it’s a challenge of the here and now, given the way AI boosters oversell its veracity.

I don’t think doing experiments with new tech should stop us pondering on the quality of the output—or stop us doing experiments. The former is not explicit criticism of the latter. :slight_smile:

2 Likes

I’m afraid studies of psychology (and history) would not seem to support the proposition that we have acquired the habit of asking if results make sense – certainly not all of us. And the problem existed well before AI. The Second World War and the attendant Holocaust are merely one example of what can happen when a completely daft idea takes hold among a large number of people.

Dead right. As psychologists have shown, it is not rationality or reason that is the most potent driver of human behaviour. This article from six years ago deals with some of the research into our difficulties with facts – and “facts” … E. Kolbert – Why facts don’t change our minds

Further, Tversky and Kahneman were looking at decision-making fifty years ago, and how biases could influence it. Much of their work is summarised in Kahneman’s book, Thinking, fast and slow.

As to whether AI is going to worsen the situation – well, it wasn’t that good before AI. There have been plenty of conspiracy theories through the centuries, unfortunately. The research that has been done up to the present would seem to show that the human mind is not optimised for using rationality when processing information. Would that it were so, but the evidence for it is not strong.

A further complication, which I believe is worth considering, is that the ways that people think can be influenced by culture, and that there are noticeable differences between western and eastern cultures and societies. This has been extensively studied by Nisbett and associates, for example in Culture and systems of thought: holistic versus analytic cognition. I’m wondering if there will be an “eastern” AI, and how it might differ from a western one. (I assume that ChatGPT is based on western notions of information processing.)

Good luck to us all!

4 Likes

This was actually a question brought up by @ArtRussell last weekend, i.e., western vs. eastern influence on AI.

ChatGPT seems to be disturbingly good at translating English to Chinese, Chinese to English. So it must have trained on masses of “western” and “eastern” documents. But wait. That assumes modern Chinese is “eastern.” Is that correct? It’s laced with a lot of Marxism these days. Is Marxism “eastern” or “western”?

It is a good bet that Chinese-style, ChatGPT-like services won’t train on dangerous “western” material that conflicts with (western?) Marxism. Complicated!

Anyway, great to see this discussion and these intriguing efforts to integrate various AI into Tinderbox.

3 Likes

This is an awesome tool. Thank you so much for sharing!!

And this is what I got after the OpenAI engine processed the notes:

[screenshot: the map view of the notes after processing, with automatically created links]

Were the links automatically put in place too? Am I wrong if I say that it seems they were manually arranged? I mean: has AI got any aesthetic sense?

All links are placed automatically!
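For anyone wondering what “automatically” means here: action code can create links itself via the linkTo operator. A minimal sketch (the note name and link type below are invented for illustration; the actual calls in the demo may differ):

// link the current note to the note named "walter benjamin", with link type "keyword"
linkTo("walter benjamin", "keyword");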

FWIW, there is currently no automated method to set the side of a note from/to which links attach. The position can be altered manually.

Note: a feature request/suggestion has already been made that the link’s stored attachment position metadata (sourcepad and destpad) be made readable/editable via eachLink().
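In the meantime, eachLink() can already read other per-link metadata. A rough sketch of the pattern, with the caveat that the "type" and "destination" dictionary keys here are quoted from memory, so verify them in aTbRef before relying on this:

// collect the destinations of all "keyword"-typed links on the current note
eachLink(aLink){
   if(aLink["type"]=="keyword"){
      $MyString = $MyString + aLink["destination"] + ";";
   };
}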

Damned! If it’s not aesthetic, is it logarithmic… mathematical… geometric? What else? Is it really a coincidence that the links have been laid out in such a concordant arrangement? Your notes seem to be at an equal distance from each other. Does that explain it?

I didn’t touch the links - but I arranged the notes by hand.

OK. That’s what I was “suspecting”. A mathematician, I guess, would explain this geometric arrangement.

The placement of link lines derives in part from the relative positions of the notes. The latter are, I think, already positioned in the demo file in an aligned, cross-shaped layout. Try a less gridded positioning of the notes and then see how the links are placed.

I suspect what you see is a happy accident—but it is no less pleasing for being so. :slight_smile:

1 Like

Pretty marvelous, Detlef.

Question: How would I add to the code if I wanted OpenAI to give me the concepts of the text in an attribute IN ADDITION to the keywords?

Tom

Hi Tom,

You need to make two calls to OpenAI, with two different prompts.

I tried “extracts the concepts of the text” and got “1. walter benjamin;10. literary conferences;11. special journal issues;12. rising interest in continental philosophy;13. british and american literature;14. new media studies;2. the arcades project;3. charles baudelaire;4. flâneur;5. high capitalism;6. the age of mechanical reproduction;7. undergraduate classroom;8. scholarly discourse;9. harvard university press”

for a sample text.

The function callOpenAI_db now uses the attribute $OpenAI_Prompt for the prompt. If you need two calls to OpenAI for one note, you need to modify the functions a little bit. Do you know how?

Detlef

No, I do not know how to do this; I am having trouble reading the code. If you could help by creating a new demo with more comments, using the concepts you just showed, I think it would especially help me. I will study the code and compare.

Thanks Detlef.

here you go…

Now callOpenAI_db(thePayLoad, theType, thePrompt) takes a third parameter: the prompt for OpenAI. Before, the function used the $OpenAI_prompt attribute, and that exists only once per note.

The demo works like before: there is an $Edict attached to the prototype, and there the function runOpenAIKeywordExtractor_db() is called. No change is needed here.
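(Presumably the $Edict itself is little more than the single call:

runOpenAIKeywordExtractor_db();

so all the interesting work happens inside the functions.)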

But now you have the option to call the function callOpenAI_db twice with different prompts. Like:

var:string result1 = callOpenAI_db($Text, 2, "Extract not more than 15 of the most significant keywords from this text as comma-separated values");
var:string result2 = callOpenAI_db($Text, 2, "extracts the concepts of the text");
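To store Tom’s concepts alongside the keywords, each result then gets assigned to its own attribute. A sketch, assuming $Concepts is a user attribute you have created yourself (the keyword attribute used in the demo may be named differently):

$Keywords = result1;
$Concepts = result2;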

OpenAI-Demo_Keywords_image.tbx (250.5 KB)

3 Likes

Thanks Detlef, I have to run some errands with my son, but will try it later this afternoon when I return. Many thanks for your assistance and thank you for your work with Tinderbox and OpenAI!

Tom