- The Llama 4 herd: The beginning of a new era of natively multimodal AI …
We’re introducing Llama 4 Scout and Llama 4 Maverick, the first open-weight natively multimodal models with unprecedented context support, and our first built using a mixture-of-experts (MoE) architecture.
- Introducing LLaMA: A foundational, 65-billion-parameter language model
Today, we’re releasing our LLaMA (Large Language Model Meta AI) foundational model under a gated release. LLaMA is more efficient than, and competitive with, previously published models of a similar size on existing benchmarks.
- Introducing Meta Llama 3: The most capable openly available LLM to date
Today, we’re excited to share the first two models of the next generation of Llama, Meta Llama 3, available for broad use. This release features pretrained and instruction-fine-tuned language models with 8B and 70B parameters that can support a broad range of use cases.
- The future of AI: Built with Llama
This year, we saw momentum on AWS as customers seeking choice, customization, and cost efficiency turned to Llama to build, deploy, and scale generative AI applications. In one case, Arcee AI enabled its customers to fine-tune Llama models on their own data, resulting in a 47% reduction in total cost of ownership compared to closed LLMs.
- Introducing Llama 3.1: Our most capable models to date
Bringing open intelligence to all, our latest models expand context length, add support across eight languages, and include Meta Llama 3.1 405B, the first frontier-level open source AI model.
- AI at Meta
Experience personal AI and bring your imagination to life with new ways to restyle your videos, all built with our latest models.
- Meta and Microsoft Introduce the Next Generation of Llama
We’re now ready to open source the next version of Llama 2 and are making it available free of charge for research and commercial use. We’re including model weights and starting code for the pretrained model and conversational fine-tuned versions too.
- Llama 3.2: Revolutionizing edge AI and vision with open, customizable …
Today, we’re releasing Llama 3.2, which includes small and medium-sized vision LLMs, and lightweight, text-only models that fit onto edge and mobile devices.