- Ollama Hallucinations for Simple Questions : r/ollama - Reddit
Recently I installed Ollama and started to test its chatting skills. Unfortunately, so far, the results were very strange. Basically, I'm getting too…
- Ollama GPU Support : r/ollama - Reddit
I've just installed Ollama on my system and chatted with it a little. Unfortunately, the response time is very slow even for lightweight models like…
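The thread excerpt doesn't include any diagnostics, but a quick way to check whether a slow model is actually running on the GPU is built into recent Ollama releases (the model name `llama3` below is just an example):

```bash
# List loaded models; the PROCESSOR column shows CPU vs. GPU placement.
ollama ps

# --verbose prints timing stats (eval rate in tokens/s) after each reply,
# which makes CPU-only inference easy to spot.
ollama run llama3 --verbose
```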
- Training a model with my own data : r/LocalLLaMA - Reddit
I'm using ollama to run my models. I want to use the mistral model, but create a LoRA to act as an assistant that primarily references data I've supplied during training. This data will include things like test procedures, diagnostics help, and general process flows for what to do in different scenarios.
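Ollama itself doesn't do the training; the LoRA would be trained with a separate fine-tuning toolkit and then attached to the base model. As a rough sketch, assuming the adapter has already been exported in a format Ollama accepts, a Modelfile's ADAPTER instruction does the wiring (the adapter path and model name below are placeholders, not from the thread):

```bash
# Modelfile: layer a pre-trained LoRA adapter on top of the mistral base model.
cat > Modelfile <<'EOF'
FROM mistral
ADAPTER ./mistral-assistant-lora
SYSTEM You are an assistant that answers from the supplied test procedures and process flows.
EOF

# Build the combined model under a new name and chat with it.
ollama create mistral-assistant -f Modelfile
ollama run mistral-assistant
```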
- Request for Stop command for Ollama Server : r/ollama - Reddit
OK, so ollama doesn't have a stop or exit command. We have to manually kill the process, and this is not very useful, especially because the server respawns immediately. So there should be a stop command as well. Edit: yes, I know and use these commands, but they are all system commands which vary from OS to OS. I am talking about a single command.
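For reference, the OS-specific commands the poster is alluding to look roughly like this (assuming Ollama was installed as a service in the usual way on each platform):

```bash
# Linux (systemd install): stop the service; disable it to prevent respawn at boot.
sudo systemctl stop ollama
sudo systemctl disable ollama

# macOS: quit the menu-bar app (which otherwise restarts the server), or:
pkill ollama

# Windows (PowerShell): Stop-Process -Name ollama
```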
- Need help installing ollama :( : r/ollama - Reddit
Properly stop the Ollama server: to properly stop the Ollama server, use Ctrl+C while the ollama serve process is in the foreground. This sends a termination signal to the process and stops the server. Alternatively, if Ctrl+C doesn't work, you can manually find and terminate the Ollama server process using the following…
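The snippet is cut off, but the manual find-and-terminate step it leads into is typically a pgrep/kill pair along these lines:

```bash
# Find the PID of the running server process...
pgrep -f "ollama serve"

# ...and send it a termination signal (substitute the PID printed above).
kill <pid>

# Or combine both steps:
pkill -f "ollama serve"
```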
- Completely Local RAG with Ollama Web UI, in Two Docker… - Reddit
Here's what's new in ollama-webui: 🔍 Completely Local RAG Support - dive into rich, contextualized responses with our newly integrated Retrieval-Augmented Generation (RAG) feature, all processed locally for enhanced privacy and speed.
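The excerpt doesn't quote the two Docker commands, but they are approximately the defaults from the project's README (the web UI was later renamed open-webui; ports and volume names below are its documented defaults, not values from this thread):

```bash
# 1) Start the Ollama server, persisting downloaded models in a named volume.
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# 2) Start the web UI and point it at Ollama running on the host.
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui \
  ghcr.io/open-webui/open-webui:main
```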
- How safe are models from ollama? : r/ollama - Reddit
Models in Ollama do not contain any "code"; they are just mathematical weights. Like any software, Ollama will have vulnerabilities that a bad actor can exploit, so deploy Ollama in a safe manner, e.g.: deploy in an isolated VM or on isolated hardware; deploy via docker compose and limit access to the local network; keep the OS, Docker, and Ollama updated.
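A minimal sketch of the docker compose deployment the commenter describes, with the API bound to the loopback interface so it isn't reachable from the rest of the network (volume and port values are common defaults, not taken from the thread):

```bash
cat > docker-compose.yml <<'EOF'
services:
  ollama:
    image: ollama/ollama
    ports:
      - "127.0.0.1:11434:11434"   # loopback-only: no LAN exposure
    volumes:
      - ollama:/root/.ollama      # persist downloaded models
    restart: unless-stopped
volumes:
  ollama:
EOF

docker compose up -d
```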
- Stop ollama from running in GPU : r/ollama - Reddit
I need to run ollama and whisper simultaneously. As I have only 4 GB of VRAM, I am thinking of running whisper on the GPU and ollama on the CPU. How do I force ollama to stop using the GPU and only use the CPU? Alternatively, is there any way to force ollama to not use VRAM?
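Two levers that usually achieve this, neither quoted in the thread itself: hide the GPU from the server process, or request zero GPU-offloaded layers per call (num_gpu is the number of layers offloaded to the GPU, so 0 means pure CPU):

```bash
# Option 1 (NVIDIA): hide all CUDA devices so the server falls back to CPU.
CUDA_VISIBLE_DEVICES="" ollama serve

# Option 2: ask for zero GPU layers on a single request via the REST API.
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "hello",
  "options": { "num_gpu": 0 }
}'
```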