- Introducing LLaMA: A foundational, 65-billion-parameter language model
Like other large language models, LLaMA works by taking a sequence of words as input and predicting the next word, recursively generating text (a minimal sketch of this decoding loop appears after the list below). To train our model, we chose text from the 20 languages with the most speakers, focusing on those with Latin and Cyrillic alphabets.
- Introducing Meta Llama 3: The most capable openly available LLM to date
Today, we’re excited to share the first two models of the next generation of Llama, Meta Llama 3, available for broad use. This release features pretrained and instruction-fine-tuned language models with 8B and 70B parameters that can support a broad range of use cases.
- The Llama 4 herd: The beginning of a new era of natively multimodal AI ...
We’re introducing Llama 4 Scout and Llama 4 Maverick, the first open-weight natively multimodal models with unprecedented context support and our first built using a mixture-of-experts (MoE) architecture (see the routing sketch after the list).
- Introducing Llama 3.1: Our most capable models to date
Bringing open intelligence to all, our latest models expand context length, add support across eight languages, and include Meta Llama 3.1 405B, the first frontier-level open source AI model.
- The future of AI: Built with Llama
Built with Llama 3.1, the chatbot operates on AWS and employs various tools and services during customization and inference to ensure scalability and robustness. Spotify uses Llama to help deliver contextualized recommendations that boost artist discovery and create an even richer user experience.
- Meta and Microsoft Introduce the Next Generation of Llama
Today, we’re introducing the availability of Llama 2, the next generation of our open source large language model. Llama 2 is free for research and commercial use.
- AI at Meta
Create immersive videos, discover our latest AI technology, and see how we bring personal superintelligence to everyone.
- Llama 3.2: Revolutionizing edge AI and vision with open, customizable ...
Today, we’re releasing Llama 3.2, which includes small and medium-sized vision LLMs, and lightweight, text-only models that fit onto edge and mobile devices.
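The LLaMA entry above describes autoregressive generation: the model maps a token sequence to a distribution over the next token, and each predicted token is fed back in as input. Below is a minimal, self-contained sketch of that loop; the toy vocabulary, `toy_next_token_logits`, and greedy decoding are illustrative stand-ins, not LLaMA's actual model or decoding setup.

```python
# Sketch of the autoregressive loop described in the LLaMA announcement:
# score the next token given the current sequence, pick one, append, repeat.

import math
import random

VOCAB = ["the", "llama", "eats", "grass", "quietly", "<eos>"]

def toy_next_token_logits(tokens: list[str]) -> list[float]:
    """Hypothetical stand-in for a trained model's forward pass:
    returns one logit per vocabulary entry for the next position."""
    rng = random.Random(hash(tuple(tokens)) & 0xFFFF)  # stable within a run
    return [rng.uniform(-1.0, 1.0) for _ in VOCAB]

def softmax(logits: list[float]) -> list[float]:
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt: list[str], max_new_tokens: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        probs = softmax(toy_next_token_logits(tokens))
        next_token = VOCAB[probs.index(max(probs))]  # greedy decoding
        if next_token == "<eos>":
            break
        tokens.append(next_token)  # feed the prediction back in
    return tokens

print(" ".join(generate(["the", "llama"])))
```

Real systems swap greedy decoding for sampling strategies (temperature, top-p), but the feed-the-output-back-in structure is the same.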
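The Llama 4 entry mentions a mixture-of-experts architecture. The sketch below shows the general token-level MoE routing idea under common assumptions (a learned linear router, top-k expert selection, probability-weighted mixing of expert outputs); all dimensions, weights, and the top-2 choice are placeholders, not Llama 4's actual configuration.

```python
# Sketch of mixture-of-experts routing: a router scores each expert for a
# token, only the top-k experts run, and their outputs are mixed by weight.

import math
import random

DIM, NUM_EXPERTS, TOP_K = 8, 4, 2
rng = random.Random(0)

# Each "expert" is a stand-in feed-forward block: one random weight matrix.
experts = [[[rng.gauss(0, 0.1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(NUM_EXPERTS)]
router_w = [[rng.gauss(0, 0.1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

def matvec(w: list[list[float]], x: list[float]) -> list[float]:
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def softmax(xs: list[float]) -> list[float]:
    m = max(xs)
    es = [math.exp(v - m) for v in xs]
    s = sum(es)
    return [e / s for e in es]

def moe_layer(x: list[float]) -> list[float]:
    probs = softmax(matvec(router_w, x))  # router: one score per expert
    top = sorted(range(NUM_EXPERTS), key=lambda i: probs[i], reverse=True)[:TOP_K]
    # Only the selected experts run, so per-token compute scales with TOP_K
    # rather than with the total parameter count.
    norm = sum(probs[i] for i in top)
    out = [0.0] * DIM
    for i in top:
        y = matvec(experts[i], x)
        out = [o + (probs[i] / norm) * yi for o, yi in zip(out, y)]
    return out

token = [rng.gauss(0, 1) for _ in range(DIM)]
print(moe_layer(token))
```

The appeal of this design is that total capacity (all experts' parameters) grows much faster than the compute spent per token, which only touches the selected experts.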