I’ve been playing around with Ollama in a VM on my machine and it is really useful.
To get started, first make sure you have capable hardware. You will need fairly recent hardware, so that old computer you have lying around may not be enough. I created a VM on my laptop with KVM and gave it 8 GB of RAM and 12 cores.
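If you also want to go the KVM route, a VM with those specs can be created with virt-install. This is just a sketch; the VM name, disk size, OS variant, and ISO path are placeholders you would adjust for your own setup.

```shell
# Hypothetical KVM VM matching the specs above: 8 GB RAM, 12 vCPUs.
# --name, --disk size, --os-variant, and the ISO path are placeholders.
virt-install \
  --name ollama-vm \
  --memory 8192 \
  --vcpus 12 \
  --disk size=64 \
  --os-variant ubuntu22.04 \
  --cdrom /path/to/ubuntu-server.iso
```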
Next, read the README. You can find it at the GitHub repo:
https://github.com/ollama/ollama
Once you run the install script, you will need to download models. I would start with Llama 2, Mistral, and LLaVA. As an example, you can pull down Llama 2 with ollama pull llama2
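Pulling all three models mentioned above looks like this (model names are the tags from the Ollama library; downloads are several GB each):

```shell
# Download the three models mentioned above from the Ollama library
ollama pull llama2
ollama pull mistral
ollama pull llava

# Quick sanity check from the CLI before wiring up a front end
ollama run llama2 "Why is the sky blue?"
```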
Ollama models are available in the online repo. You can see all of them here: https://ollama.com/library
Once they are downloaded, you need to set up Open WebUI. First, install Docker; I am going to assume you already know how to do that. Once Docker is installed, pull and deploy Open WebUI with this command. Notice it's a little different than the command in the Open WebUI docs. docker run -d --net=host -e OLLAMA_BASE_URL="http://localhost:11434" -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
Notice that the networking is shared with the host; this is needed so the container can reach Ollama. I am also setting the OLLAMA_BASE_URL environment variable to point Open WebUI at Ollama.
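If the container can't reach Ollama, you can check that Ollama itself is up first. Its API listens on port 11434 by default, and /api/tags lists the models you have pulled:

```shell
# Verify Ollama is reachable on its default port (11434).
# Returns a JSON list of locally pulled models if everything is working.
curl http://localhost:11434/api/tags
```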
Once that's done, open the host IP on port 8080 in your browser and create an account. After that you should be all set.
You can teach it things and upload documents for it to process.
Ollama is just the backend. You need Open WebUI or a similar application to use it.
Check out AnythingLLM, it's just an AppImage.
Not as maintainable long term, and it doesn't have user management.
There's a Dockerized version if you need those features:
https://github.com/Mintplex-Labs/anything-llm/blob/master/docker/HOW_TO_USE_DOCKER.md
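For reference, the Dockerized deployment is roughly of this shape. This is a sketch from memory, not the canonical command; check the linked HOW_TO_USE_DOCKER.md for the current image name, ports, and required mounts.

```shell
# Rough sketch of running AnythingLLM in Docker (port and paths assumed;
# see the linked docs for the authoritative command)
export STORAGE_LOCATION=$HOME/anythingllm
mkdir -p "$STORAGE_LOCATION"
docker run -d -p 3001:3001 \
  -v "$STORAGE_LOCATION":/app/server/storage \
  -e STORAGE_DIR="/app/server/storage" \
  --name anythingllm \
  mintplexlabs/anythingllm
```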
So why is it better than Open WebUI? It seems like each has its own use case.
I'll give it a try just for fun, but it doesn't seem to be better as far as I can tell.
No idea if it's better; it's the thing I tried, and it was pretty seamless to set up. With my aging hardware and AMD GPU, I have pretty much been sitting on the sidelines with this whole LLM thing.