So true; so sad.
In terms of local models:
Ollama works very well in simple command line scripts – e.g.
$ ollama run llama3.2:3b "Say hello, computer"
Hello! It's nice to meet you. How can I assist you today?
And you can put whatever you want in the quotes, or pipe input in from another program, like so:
$ ls | ollama run llama3.2:3b "What kind of project would have a directory with the following files?"
Based on the file names and structure, it appears that this is a R programming language project for research or academic purposes, possibly in ecology or biology. Here's why:
1. `.Rproj.user`, `.gitignore`, and other `.R`-related files suggest that the project uses R Studio.
2. The presence of an `aquatic_plant_clip_harvest.R` file implies that the project involves data analysis and manipulation related to aquatic plants.
3. Files like `.Final_Report.Rmd` (R Markdown), `.Final_Report.docx` (Microsoft Word), and `.README.md` suggest that the project involves writing research reports, possibly in a format suitable for academic publications or presentations.
4. The presence of a `DESCRIPTION` file, which is a common convention in R packages, suggests that this is a package-related project.
Overall, it seems that this is an individual's research project, possibly as part of a degree program or academic endeavor.
llama3.2 is Meta's newest open model, and the 3b tag gives the size (3 billion parameters). There are also larger versions that are slower and take much more memory (VRAM, or RAM on a Mac with unified memory), but are more accurate.
Here’s the current list of available ollama models.
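For example, to try a larger model (the tags below are just examples; check that list for what's currently available and roughly how much memory each needs):

```
$ ollama pull llama3.1:70b                      # much larger download, needs far more RAM/VRAM
$ ollama run llama3.1:70b "Say hello, computer"
$ ollama list                                   # see which models you have installed locally
```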
As you can imagine, all of this can also happen via runCommand() in Tinderbox, although runCommand doesn't like long-running processes, and Ollama streams its output to the terminal, so it is probably better to redirect the output to a text file and then read that file back in.
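Something like this, as a rough sketch (the file paths and prompt are just placeholders; the idea is to run Ollama in the background and collect the finished output from a file rather than waiting on the stream):

```
# Run in the background, capturing the full response to a file once it finishes
$ ollama run llama3.2:3b "Summarize the following notes" < notes.txt > /tmp/ollama_reply.txt 2>&1 &
# Later, read the reply back (e.g. from Tinderbox via runCommand)
$ cat /tmp/ollama_reply.txt
```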
Thanks, @naupaka, sorry for the dumb question, but should we install Ollama in the Applications folder, or would it be something like Docker? Or do we install it and then run it in Docker to isolate it? Once we've installed it, how do we make sure that it is secure and truly local, i.e., not siphoning off data?
You just install it like a regular application from the website, or via homebrew:
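For example, via Homebrew (a minimal sketch; depending on your setup you may prefer the app download from the website instead):

```
$ brew install ollama           # command-line tool + server
$ brew services start ollama    # keep the server running in the background
$ ollama run llama3.2:3b "Say hello, computer"
```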
In terms of tracking what it's doing: the whole idea is that it is local and that it only responds to the input you give it, either text piped in or else an image file, e.g.:
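A quick sketch, assuming you've pulled a multimodal model such as llava (the model name and image path here are just placeholders):

```
$ ollama run llava "What is in this image? ./photo.jpg"
```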
If you wanted to verify that it is not phoning home, you can use something like Little Snitch. Out of curiosity I just opened the LS viewer and interacted with a few models (llama3.2, phi4). No network connections out. The data in/out in the right pane is when I downloaded the phi4 model.
Or you could use lsof on the command line to track all files touched or opened by the ollama process id:
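Something along these lines (assuming the server process is named ollama):

```
$ lsof -c ollama                  # every file and network socket open by ollama processes
$ lsof -p "$(pgrep -x ollama)"    # or restrict to a specific process id
```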
In both cases I don't see any evidence that it's snurfling around anywhere it shouldn't be, on the net or locally, so I personally wouldn't worry too much. You could dockerize it or use it in a virtual machine, but I don't think it's necessary. That said, if you wanted to and had Docker installed, it's pretty easy:
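A minimal sketch following Ollama's Docker instructions (CPU-only; the volume and container names are just the suggested defaults):

```
# Start the Ollama server in a container, keeping downloaded models in a named volume
$ docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# Run a model inside that container
$ docker exec -it ollama ollama run llama3.2:3b "Say hello, computer"
```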