- The Llama 4 herd: The beginning of a new era of natively multimodal AI …
We’re introducing Llama 4 Scout and Llama 4 Maverick, the first open-weight natively multimodal models with unprecedented context support and our first built using a mixture-of-experts (MoE) architecture.
- The future of AI: Built with Llama
This year, we developed Llama Stack, an interface for canonical toolchain components to customize Llama models and build agentic applications. We believe that offering the best simplified tool for building with Llama will only accelerate the incredible adoption we’ve already witnessed across sectors.
- Introducing LLaMA: A foundational, 65-billion-parameter language model
Like other large language models, LLaMA works by taking a sequence of words as input and predicting the next word to recursively generate text. To train our model, we chose text from the 20 languages with the most speakers, focusing on those with Latin and Cyrillic alphabets.
- Introducing Meta Llama 3: The most capable openly available LLM to date
Today, we’re excited to share the first two models of the next generation of Llama, Meta Llama 3, available for broad use. This release features pretrained and instruction-fine-tuned language models with 8B and 70B parameters that can support a broad range of use cases.
- Introducing Llama 3.1: Our most capable models to date
Bringing open intelligence to all, our latest models expand context length, add support across eight languages, and include Meta Llama 3.1 405B, the first frontier-level open source AI model.
- AI at Meta
Experience personal AI and bring your imagination to life with new ways to restyle your videos, all built with our latest models.
- Everything we announced at our first-ever LlamaCon
Llama API provides easy one-click API key creation and interactive playgrounds to explore different Llama models, including the Llama 4 Scout and Llama 4 Maverick models we announced earlier this month.
- Llama 3.2: Revolutionizing edge AI and vision with open, customizable …
Today, we’re releasing Llama 3.2, which includes small and medium-sized vision LLMs, and lightweight, text-only models that fit onto edge and mobile devices.