Ollama: Get up and running with large language models.
Download Ollama on Windows (requires Windows 10 or later).
library · Ollama: Browse Ollama's library of models. OLMo 2 is a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.
Download Ollama on macOS (requires macOS 14 Sonoma or later).
Ollama Search: Search for models on Ollama, e.g. olmo2.
Ollama is now available as an official Docker image · Ollama Blog: Ollama is now available as an official Docker-sponsored open-source image, making it simpler to get up and running with large language models using Docker containers.
Blog · Ollama: The initial versions of the Ollama Python and JavaScript libraries are now available, making it easy to integrate your Python, JavaScript, or TypeScript app with Ollama in a few lines of code.
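A minimal sketch of a single chat turn with the Ollama Python library (`pip install ollama`). It assumes an Ollama server is running locally and that the model name ("llama3") has already been pulled; nothing is sent until `ask()` is called.

```python
messages = [{"role": "user", "content": "Why is the sky blue?"}]

def ask(model: str = "llama3") -> str:
    """Send a single chat turn to a locally running Ollama server."""
    import ollama  # third-party client (pip install ollama)
    response = ollama.chat(model=model, messages=messages)
    return response["message"]["content"]

# Usage (requires a running server and a pulled model):
#   print(ask())
```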
OpenAI compatibility · Ollama Blog: Ollama now has initial compatibility with the OpenAI Chat Completions API, making it possible to use existing tooling built for OpenAI with local models via Ollama.
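The compatibility layer means the official OpenAI Python client can be pointed at a local Ollama server by overriding its base URL. A hedged sketch, assuming the default Ollama port (11434) and a pulled "llama3" model; the API key is required by the client but ignored by Ollama.

```python
# Point the official OpenAI Python client (pip install openai) at Ollama.
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def chat(prompt: str, model: str = "llama3") -> str:
    from openai import OpenAI
    # api_key must be set but is not checked by Ollama
    client = OpenAI(base_url=OLLAMA_BASE_URL, api_key="ollama")
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

# Usage: print(chat("Why is the sky blue?"))
```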
Ollama's new app · Ollama Blog: Ollama's new app supports file drag-and-drop, making it easier to reason over text or PDFs. For processing large documents, Ollama's context length can be increased in the settings.
llama3

CLI: Open the terminal and run:

    ollama run llama3

API: Example using curl:

    curl -X POST http://localhost:11434/api/generate -d '{
      "model": "llama3",
      "prompt": "Why is the sky blue?"
    }'

See the API documentation for details.

Model variants: Instruct is fine-tuned for chat and dialogue use cases (examples: ollama run llama3, ollama run llama3:70b). Pre-trained is the base model.
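The same generate call can be made from Python with only the standard library. This sketch builds the request shown in the curl example above; the `"stream": False` field asks the server to return one JSON object instead of a stream of chunks, and actually sending the request requires a running Ollama server.

```python
import json
import urllib.request

def build_generate_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build the same POST to /api/generate as the curl example."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def generate(prompt: str, model: str = "llama3") -> str:
    # Requires a running Ollama server on localhost:11434.
    with urllib.request.urlopen(build_generate_request(prompt, model)) as resp:
        return json.loads(resp.read())["response"]
```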