I'm curious if anyone is having luck leveraging AI to do the dirty work of programming. Personally I only dabble in coding, so I was getting really nervous about having to operate Tinderbox with regex and Perl; but now that AI can code, I'm really looking forward to trying out TB's most powerful features.
So far, in the experiments I've done asking various AI clients to write Tinderbox action code, the AI very happily writes what it says is action code, but it is in fact unworkable.
That part aside, I'm hesitant to think that AI is a good way for folks to get working program logic if the person who asked for the code does not understand the code. So, yes, ask AI to write code, but only in languages that one can at least parse for valid and efficient logic.
What's not clear is why AI, which has little (no?) semantic understanding of what we write, should be more trustworthy for being a machine. As stated above, I've found AI is more use in a context where I know the subject area and can see the AI's then-obvious bad guesses.
Noted above is that AI doesn't really get 'action code'. This should be expected. Why? Current ChatGPT-type AI runs off an LLM: a Large Language Model. The 'Large' part is pertinent. If you hoover up all the writing on Tinderbox action code, I doubt it equates to large. So the old computer rule applies: garbage in, garbage out. The 'garbage' in this case being to (unwittingly) use too small a corpus to train the AI. I suspect it does better at HTML code or ordering a takeaway meal (where the corpus is larger). AI will doubtless get better, but today it is quite limited outside a small number of areas. It only looks authoritative because, unlike a human, it is bad at showing its confidence in the answer (especially where confidence is based on knowledge rather than data, like the percentage of correctly delivered takeaways).
Tinderbox action code is a world away from MS Office 'wizards', i.e. canned solutions. Tinderbox is a big toolbox, so surprisingly often you can find yourself doing something not done before.
Given the comparatively small footprint of the app, if stuck with action code your best first solution should be to ask in this forum. You'll get a real answer from real people who actually understand the app.
Footnote: I've nothing against AI; it's massively impressive. I just think we're all a bit over-optimistic about where and how it is of use. And, yes, it's getting better all the time. This answer might be different in a decade's time.
Contrary to @PaulWalters, I can say that I have had very good experiences asking ChatGPT to help me write code. But that was in R (the statistics language) rather than in Tinderbox action code. The R community is larger and there is a lot of material on the 'net, which is why I think the quality may be better. It has worked really well: describing the problem in detail and stating that I want R code has led to good explanations as an answer, plus code that I could copy and paste into RStudio.
I feel like I have an inexhaustible research assistant who is at my beck and call whenever I feel the need, something that has escaped me in real life so far.
I was only referring to failed Tinderbox action code, which I expected, so no need to disagree with me. Other, more broadly documented and widely used languages: sure. But when it comes to code created by any LLM, trust but verify.
Regular expressions are good to learn: they ought to be part of your toolbox, especially if you do much work with text. You don't need to worry about the exotic parts, but the basic concept and notation are, I think, important for everyone.
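To make "the basic concept and notation" concrete, here is a minimal sketch in Python; the sample text and patterns are my own illustration, not something from this thread:

```python
import re

text = "Invoice 2023-10-05: total $42.50"

# \d+ means "one or more digits"; findall returns every non-overlapping match.
numbers = re.findall(r"\d+", text)
print(numbers)  # ['2023', '10', '05', '42', '50']

# Counted quantifiers: \d{4} is exactly four digits, so this
# pattern picks out a YYYY-MM-DD date from the surrounding text.
date = re.search(r"\d{4}-\d{2}-\d{2}", text)
print(date.group())  # 2023-10-05
```

The same notation (character classes, quantifiers) carries over to regexr.com, Tinderbox, and most other tools, which is why the basics repay learning once.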
Very simple action code will take you an afternoon or so to master. If you get stuck, ask on the forum and someone is bound to have an answer right away. Often, simple action code is all you need!
A lot of the talk about action code is about doing things that aren't simple but that are (now) possible. Cross those bridges when you come to them.
Paul, I didn't mean to disagree, just to mention my experience. In fact I fully agree as to the causes of the differences in results between TBX action code and R code. Apologies if my choice of words caused a misunderstanding.
Using AI to write correct regex statements works pretty well for me. The beauty is that the AI also explains the code to you. The same goes for CSS, which I also use quite a bit in my projects.
Interesting. Would it be possible for you to post some specimens of the sort of explanations given by the AI? I'm not asking because I doubt your report, but because it would be interesting to see the granularity of the description you get back. This sort of tightly scoped use of AI does seem to be an area where it can help us without adding uncertainty into the mix.
You asked for every word before the colon. Of course, if one didn't have a means to check the solution, one might just trust the AI. Oops!
That said, thanks for the example, as the explanation is informative, apart from not understanding the semantics of 'every'.
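As a hedged illustration of the 'every' pitfall (the AI's actual pattern isn't reproduced in the thread, so both patterns below are my own assumptions about the kind of mismatch described):

```python
import re

line = "First Middle Last: details follow"

# "Every word before the colon": capture the text up to the first
# colon, then split it into words.
before = re.match(r"([^:]*):", line).group(1)
all_words = before.split()
print(all_words)  # ['First', 'Middle', 'Last']

# A plausible near-miss: a lookahead like \w+(?=:) matches only
# the single word touching the colon, not every word before it.
last_word_only = re.findall(r"\w+(?=:)", line)
print(last_word_only)  # ['Last']
```

Both patterns look reasonable in an AI's explanation; only testing against sample input reveals which reading of "every" the pattern actually implements.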
This example seems consistent with reports from people using GPT to help with coding. It rarely gets the right answer, but it gets you most of the way there. Good for the practised user, who can likely see the errors. Not good, yet, for the novice. But the systems are getting better all the time, so this is encouraging.
Sorry, 'every' was only me writing in 'English' as a German native ;o)
I usually ask ChatGPT for the code first and then go to https://regexr.com/ to execute it. If there is any mismatch, I usually correct it directly in regexr.
I use regexr a lot to parse and transform text for my project before copying and pasting it into my TBX workflow. That's a very effective way of working for me.
ChatGPT can help a lot in speeding up coding. But you still need to know what is going on in the code. If ChatGPT runs into an error or a wrong concept, it will almost never fix it, but will repeat the wrong approach over and over again. ChatGPT also sticks to the simpler, more basic solutions. This may be fine for smaller projects but will break your architecture for larger ones. Still, it is very helpful and speeds up development.
So there was a major development last week with ChatGPT: OpenAI has added a feature that lets anyone create a custom chatbot with specialized knowledge added in by the user. If there is a library of special syntax, it can be uploaded and GPT will be able to produce code using it.
They will also be making a marketplace for people to buy and sell these specialized chatbots, called 'GPTs'. But beware: if you make a public GPT, you have to include directions telling the GPT not to share its instruction set; otherwise a user can ask for a printout of its programming.
Interesting. Few (any?) real bloopers, but it is easy to see the GPT doesn't have a semantic understanding of the training corpus. Some of the suggestions, especially under the later headings, are vague and not really relevant. For instance:
Visual Analysis: Employ visualisation tools in Tinderbox, such as charts or graphs, if available, to visually compare data trends.
Tinderbox doesn't have 'charts or graphs' as such, and I'm unconvinced this is deliberately, obliquely referring to posters. So the advice is generically useful but not really applicable to Tinderbox.
Still, for advice that costs $0, it's not egregiously wrong.
Pertinent: what is the training corpus for your 'pdf' input? IOW, what documents was it given to 'learn' Tinderbox? There isn't a lot of copy out there.
I fed the bot The Tinderbox Way; that was it, but I can add more later. I haven't started using the program yet, as I've been too busy traveling. I only have time to run little experiments like this and think about the high-level design of my first few projects.