- Ollama
Get up and running with large language models
- Ollama’s new app · Ollama Blog
Ollama’s new app supports file drag and drop, making it easier to reason with text or PDFs. For processing large documents, Ollama’s context length can be increased in the settings.
- Ollama is now available as an official Docker image
We are excited to share that Ollama is now available as an official Docker sponsored open-source image, making it simpler to get up and running with large language models using Docker containers.
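As an unofficial sketch of using the containerized server: the image exposes Ollama's REST API on port 11434, so any HTTP client can drive it once the port is published. The docker run invocation in the comment and the llama3.2 model tag are assumptions; pull the model inside the container before querying it.

```python
# Minimal sketch: query an Ollama container's REST API over HTTP.
# Assumes the image is running with port 11434 published, for example:
#   docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# and that the model has already been pulled (e.g. docker exec -it ollama ollama pull llama3.2).
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3.2",           # assumed model tag
        "prompt": "Why is the sky blue?",
        "stream": False,               # ask for a single JSON response
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```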
- Download Ollama on Windows
Download Ollama for Windows. Requires Windows 10 or later.
- Download Ollama on Linux
Download Ollama for Linux.
- Blog · Ollama
The initial versions of the Ollama Python and JavaScript libraries are now available, making it easy to integrate your Python, JavaScript, or TypeScript app with Ollama in a few lines of code.
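For illustration, a minimal sketch with the Python library (installed via pip install ollama); the llama3.2 model tag is an assumption and any locally pulled model works:

```python
# Minimal sketch: chat with a local model through the Ollama Python library.
# Assumes the Ollama server is running locally and the model has been pulled.
import ollama

response = ollama.chat(
    model="llama3.2",  # assumed model tag; replace with any pulled model
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```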
- Structured outputs · Ollama Blog
Ollama now supports structured outputs, making it possible to constrain a model’s output to a specific format defined by a JSON schema. The Ollama Python and JavaScript libraries have been updated to support structured outputs.
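A hedged sketch of how this can look from Python, assuming a Pydantic model is used to define the schema; the Country model and the llama3.2 tag are illustrative:

```python
# Minimal sketch: constrain a model's reply to a JSON schema via the format parameter.
# The Country model and the llama3.2 tag are illustrative assumptions.
from ollama import chat
from pydantic import BaseModel

class Country(BaseModel):
    name: str
    capital: str
    languages: list[str]

response = chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Tell me about Canada."}],
    format=Country.model_json_schema(),  # pass the JSON schema to constrain the output
)
country = Country.model_validate_json(response.message.content)
print(country)
```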
- Llama 3.2 Vision · Ollama Blog
Run the model with ollama run llama3.2-vision:90b. To add an image to the prompt, drag and drop it into the terminal, or add a path to the image to the prompt on Linux. Note: Llama 3.2 Vision 11B requires at least 8 GB of VRAM, and the 90B model requires at least 64 GB of VRAM. Examples include handwriting, optical character recognition (OCR), charts and tables, and image Q&A.
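The same workflow is available from the Python library; a minimal sketch, where the image path and the llama3.2-vision tag are placeholders:

```python
# Minimal sketch: ask Llama 3.2 Vision about a local image via the Python library.
# The image path is a placeholder; point it at an existing file.
import ollama

response = ollama.chat(
    model="llama3.2-vision",  # 11B by default; use llama3.2-vision:90b for the larger model
    messages=[{
        "role": "user",
        "content": "What is in this image?",
        "images": ["./example.png"],  # placeholder path
    }],
)
print(response["message"]["content"])
```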