Meta has officially introduced Llama 4, its most advanced AI model to date, marking a significant leap in its ongoing commitment to advancing artificial intelligence.
In a video on Instagram, Meta CEO Mark Zuckerberg shared the company’s bold AI ambitions. “Our goal is to build the world’s leading AI, open source it, and make it universally accessible… I’ve said for a while that open-source AI will lead the way, and with Llama 4, we’re starting to see that happen.”
The company has launched two models, Llama 4 Scout and Llama 4 Maverick, which are now available for download via Meta's website and Hugging Face. These models are now integrated across WhatsApp, Instagram, Messenger, and the web.
Additionally, Meta has previewed Llama 4 Behemoth, describing it as its smartest large language model (LLM) yet. It is considered the most powerful version the company has developed, intended to help train and guide future models. This launch also marks the first time Meta has used a mixture-of-experts (MoE) architecture in the Llama family.
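Meta's exact routing implementation is not detailed in the announcement, but the general MoE idea is this: a learned router scores each token and sends it to a small subset of "expert" sub-networks, so only a fraction of the model's total parameters are active per token. A toy NumPy sketch of top-1 routing (all names and sizes here are illustrative, not Meta's):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MoELayer:
    """Toy mixture-of-experts layer: a router scores each token, and only
    the top-scoring expert's weights are applied to that token."""
    def __init__(self, d_model, n_experts):
        self.router = rng.standard_normal((d_model, n_experts)) * 0.02
        self.experts = [rng.standard_normal((d_model, d_model)) * 0.02
                        for _ in range(n_experts)]

    def forward(self, tokens):
        # tokens: (n_tokens, d_model)
        gate = softmax(tokens @ self.router)   # routing probabilities
        choice = gate.argmax(axis=-1)          # top-1 expert per token
        out = np.empty_like(tokens)
        for i, tok in enumerate(tokens):
            e = choice[i]
            # Only one expert's parameters run for this token,
            # weighted by its routing probability.
            out[i] = (tok @ self.experts[e]) * gate[i, e]
        return out, choice

layer = MoELayer(d_model=8, n_experts=16)
x = rng.standard_normal((4, 8))
y, routed = layer.forward(x)
```

The payoff is efficiency: total capacity scales with the number of experts, while per-token compute stays roughly constant.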
Model Highlights
- Llama 4 is designed to enhance human-AI interaction, making it easier for businesses to integrate AI into their operations.
- One feature of Llama 4 that stands out is its adaptability: the model can be fine-tuned to fit specific industries, offering a tailored experience for users and organizations alike.
- Llama 4 Scout is built with 17 billion active parameters and 16 experts, offering a 10-million-token context window.
- It operates on a single GPU, making it a lightweight, high-performance model similar to Google’s recent Gemma 3.
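Why a single GPU is plausible comes down to weight memory. The 17 billion figure counts *active* parameters per token; Meta's reported total for Scout is roughly 109 billion, all of which must fit in memory. A back-of-the-envelope sketch (the 109B figure and byte-per-bit arithmetic are approximations, not an official sizing guide):

```python
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate memory needed for model weights alone,
    ignoring activations, KV cache, and framework overhead."""
    return n_params * bits_per_param / 8 / 1e9

# Assumed total parameter count for Scout (~109B, per Meta's reported figure).
total_params = 109e9
for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{weight_memory_gb(total_params, bits):.1f} GB")
```

At 16-bit precision the weights alone need over 200 GB, but at 4-bit quantization they drop to roughly 55 GB, which fits within a single 80 GB data-center GPU.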
“This is just the start of the Llama 4 lineup,” Meta wrote in a blog post. “We believe the most advanced AI systems must be capable of taking generalized actions, engaging in natural conversations, and tackling problems they’ve never encountered before.”