- Ollama GPU Support : r/ollama - Reddit
I've just installed Ollama on my system and chatted with it a little. Unfortunately, the response time is very slow even for lightweight models like…
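A quick way to narrow down slow responses is to confirm whether Ollama is using the GPU at all. A minimal check, assuming an NVIDIA card and a reasonably recent Ollama build (older versions do not have "ollama ps"):

```sh
# Check that the NVIDIA driver sees the card at all
nvidia-smi

# Load a model, then check which processor Ollama is using
# (the PROCESSOR column reports e.g. "100% GPU" or "100% CPU")
ollama run mistral "hello" > /dev/null
ollama ps
```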
- Training a model with my own data : r/LocalLLaMA - Reddit
I'm using Ollama to run my models. I want to use the Mistral model, but create a LoRA to act as an assistant that primarily references data I've supplied during training. This data will include things like test procedures, diagnostics help, and general process flows for what to do in different scenarios.
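Once the LoRA has been trained separately (with any standard fine-tuning toolkit) and exported in a format Ollama accepts, it can be layered onto the base model through a Modelfile. A rough sketch, where ./mistral-assistant-lora and the "field-assistant" name are hypothetical:

```sh
# Modelfile: Mistral base plus a locally trained LoRA adapter (hypothetical path)
cat > Modelfile <<'EOF'
FROM mistral
ADAPTER ./mistral-assistant-lora
SYSTEM "You are an assistant for internal test procedures, diagnostics, and process flows."
EOF

# Build the combined model and try it
ollama create field-assistant -f Modelfile
ollama run field-assistant "Walk me through the startup diagnostic."
```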
- Local Ollama Text to Speech? : r/robotics - Reddit
Yes, I was able to run it on a RPi. Ollama works great. Mistral and some of the smaller models work; Llava takes a bit of time, but works. For text to speech, you'll have to run an API from ElevenLabs, for example. I haven't found a fast text-to-speech / speech-to-text option that's fully open source yet. If you find one, please keep us in the loop.
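For a fully offline stand-in for the ElevenLabs API, the model's output can be piped into a local TTS engine. A minimal sketch assuming espeak-ng is installed (open source and fast enough for a Pi, though the voice is noticeably robotic):

```sh
# Generate a short reply with Ollama and speak it locally with espeak-ng
ollama run mistral "Give me a one-sentence status update." | espeak-ng --stdin
```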
- Ollama running on Ubuntu 24.04 : r/ollama - Reddit
I have an Nvidia 4060 Ti on Ubuntu 24.04 and can't get Ollama to leverage my GPU. I can confirm it because running nvidia-smi does not show the GPU. I've googled this for days and installed drivers to no avail. Has anyone else gotten this to work, or does anyone have recommendations?
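If nvidia-smi itself doesn't show the card, the NVIDIA driver isn't loaded, so Ollama can only fall back to CPU. A rough recovery sequence on Ubuntu, with the driver selection and log check treated as assumptions that vary by setup:

```sh
# Install the recommended NVIDIA driver, then reboot
sudo ubuntu-drivers install
sudo reboot

# After reboot the card should show up here
nvidia-smi

# Reinstall Ollama so it detects CUDA, then inspect the service log for GPU lines
curl -fsSL https://ollama.com/install.sh | sh
journalctl -u ollama -e
```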
- How to Uninstall models? : r/ollama - Reddit
To get rid of the model I needed to install Ollama again and then run "ollama rm llama2". It should be transparent where models are installed, so I can remove them later.
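For reference, removing a model doesn't require reinstalling Ollama; the CLI handles it, and the model blobs live in a predictable directory (the exact path depends on how Ollama was installed):

```sh
# List installed models, then remove one by name
ollama list
ollama rm llama2

# Default model store for a per-user install; the Linux systemd install
# keeps models under /usr/share/ollama/.ollama/models instead
ls ~/.ollama/models
```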
- What is the best small (4b-14b) uncensored model you know and use?
Hey guys, I am mainly running my models with Ollama and I am looking for suggestions when it comes to uncensored models that I can use with it. Since there are a lot already, I feel a bit overwhelmed. For me the perfect model would have the following properties…
- Running LLMs on Ryzen AI NPU? : r/ollama - Reddit
Hi everyone. I'm pretty new to using Ollama, but I managed to get the basic config going using WSL, and have since gotten the Mixtral 8x7B model to work without any errors. For now it's only on CPU, and I have thought about getting it to work on my GPU, but honestly I'm more interested in getting it to work on the NPU.
- Stop ollama from running in GPU : r/ollama - Reddit
I need to run Ollama and Whisper simultaneously. As I have only 4GB of VRAM, I am thinking of running Whisper on the GPU and Ollama on the CPU. How do I force Ollama to stop using the GPU and only use the CPU? Alternatively, is there any way to force Ollama to not use VRAM?
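One way to keep Ollama's requests off the GPU is to set the num_gpu option to 0, which offloads zero layers to VRAM. A sketch against the local HTTP API (the same option can also be set interactively with "/set parameter num_gpu 0" inside "ollama run"):

```sh
# Request a completion with no layers offloaded to the GPU (num_gpu 0 = CPU only)
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Summarize the meeting notes.",
  "options": { "num_gpu": 0 },
  "stream": false
}'
```

Setting CUDA_VISIBLE_DEVICES to an invalid ID before starting the server is another commonly cited way to force CPU, but the per-request option is less disruptive when another workload (Whisper here) still needs the card.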