
Meta releases Llama 4, a new crop of flagship AI models


Meta has released a new collection of AI models, Llama 4, in its Llama family — on a Saturday, no less.

There are three new models: Llama 4 Scout, Llama 4 Maverick, and Llama 4 Behemoth. All were trained on "large amounts of unlabeled text, image, and video data" to give them "broad visual understanding," Meta says.

The success of open models from Chinese AI lab DeepSeek, which reportedly perform as well as or better than Meta's previous flagship Llama models, is said to have kicked Llama development into high gear. Meta reportedly scrambled to decipher how DeepSeek lowered the cost of running and deploying models like R1 and V3.

Scout and Maverick are available on Llama.com and from Meta's partners, including the AI dev platform Hugging Face, while Behemoth is still in training. Meta AI, the company's AI-powered assistant across apps including WhatsApp, Messenger, and Instagram, has been updated to use Llama 4 in 40 countries. Multimodal features are limited to the U.S. in English for now.

Some developers may take issue with the Llama 4 license.

Users and companies "domiciled" or with a "principal place of business" in the EU are prohibited from using or distributing the models, likely a result of governance requirements imposed by the region's AI and data privacy laws. (Meta has previously criticized these laws as overly burdensome.) In addition, as with prior Llama releases, companies with more than 700 million monthly active users must request a special license from Meta.

"These Llama 4 models mark the beginning of a new era for the Llama ecosystem," Meta wrote in a blog post. "This is just the beginning for the Llama 4 collection."

Image Credits: Meta

Llama 4 is Meta's first cohort of models to use a mixture-of-experts (MoE) architecture, which is more computationally efficient for training and for answering queries. MoE architectures break data processing tasks down into subtasks and then delegate them to smaller, specialized "expert" models.

Maverick, for example, has 400 billion total parameters, but only 17 billion active parameters spread across 128 "experts." (Parameters roughly correspond to a model's problem-solving skills.) Scout has 17 billion active parameters, 16 experts, and 109 billion total parameters.
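The "active vs. total parameters" split can be made concrete with a toy sketch of top-k expert routing. This is a hypothetical illustration, not Meta's implementation: the expert count mirrors Maverick's reported 128 experts, but the dimensions and gating network are invented for brevity.

```python
import numpy as np

# Toy mixture-of-experts routing sketch (illustrative only).
rng = np.random.default_rng(0)
n_experts, d_model, top_k = 128, 64, 2

# Each "expert" here is just a small weight matrix.
experts = rng.normal(size=(n_experts, d_model, d_model))
router = rng.normal(size=(d_model, n_experts))  # gating network

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router                    # score every expert for this token
    top = np.argsort(logits)[-top_k:]      # keep only the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts
    # Only top_k expert matrices run; the other 126 stay inactive,
    # which is why active parameters are far fewer than total parameters.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
out = moe_layer(token)
print(out.shape)  # (64,)
```

The key property the sketch shows: every token touches only a small, input-dependent slice of the total weights, so total parameter count can grow without proportionally growing inference cost.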

According to the company's internal testing, Maverick, which Meta says is best for "general assistant and chat" use cases like creative writing, beats models such as OpenAI's GPT-4o and Google's Gemini 2.0 on certain coding, reasoning, multilingual, long-context, and image benchmarks. However, Maverick doesn't measure up to more capable recent models like Google's Gemini 2.5 Pro, Anthropic's Claude 3.7 Sonnet, and OpenAI's GPT-4.5.

Scout's strengths lie in tasks like document summarization and reasoning over large codebases. Uniquely, it has a very large context window: 10 million tokens. ("Tokens" represent bits of raw text, e.g., the word "fantastic" split into "fan," "tas," and "tic.") In plain terms, Scout can take in millions of words, allowing it to process and work with extremely lengthy documents.
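To put a 10-million-token window in perspective, a quick back-of-the-envelope conversion helps. The words-per-token ratio below is a common rule of thumb for English text, not a property of Llama 4's actual tokenizer, so treat the figure as a rough order of magnitude.

```python
# Rough conversion from tokens to words (assumption: ~0.75 words
# per token, a common rule of thumb for English text; the real
# ratio depends on the tokenizer and the language).
CONTEXT_WINDOW_TOKENS = 10_000_000
WORDS_PER_TOKEN = 0.75

approx_words = int(CONTEXT_WINDOW_TOKENS * WORDS_PER_TOKEN)
print(f"~{approx_words:,} words fit in Scout's context window")
# → ~7,500,000 words
```

Under that assumption, the window holds on the order of 7.5 million words, enough for entire codebases or book-length document collections in a single prompt.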

Scout can run on a single Nvidia H100 GPU, while Maverick requires an Nvidia H100 DGX system or equivalent.

Meta's unreleased Behemoth will need even beefier hardware. According to the company, Behemoth has 288 billion active parameters, 16 experts, and nearly two trillion total parameters. Meta's internal benchmarking has Behemoth outperforming GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Pro (but not 2.5 Pro) on several evaluations of skills like math problem solving.

It should be noted that none of the Llama 4 models is a proper "reasoning" model along the lines of OpenAI's o1 and o3-mini. Reasoning models fact-check their answers and generally respond to questions more reliably, but as a consequence take longer than traditional, "non-reasoning" models to deliver answers.

Interestingly, Meta says it tuned all of its Llama 4 models to refuse to answer "contentious" questions less often. According to the company, Llama 4 responds to "debated" political and social topics that the previous crop of Llama models wouldn't. The company also says Llama 4 is more balanced in which prompts it flat-out declines to entertain.

"[Y]ou can count on [Llama 4] to provide helpful, factual responses without judgment," a Meta spokesperson told TechCrunch. "[W]e're continuing to make Llama more responsive so that it answers more questions, can respond to a variety of different viewpoints […] and doesn't favor some views over others."

Those tweaks come as some White House allies accuse AI chatbots of being politically biased.

Many of President Donald Trump's close confidants, including Elon Musk and crypto and AI "czar" David Sacks, have alleged that popular AI chatbots censor conservative viewpoints. Sacks has historically singled out OpenAI's ChatGPT in particular, claiming it is "programmed to be woke" and untruthful about politically sensitive subjects.

In truth, bias in AI is an intractable technical problem. Musk's own AI company, xAI, has struggled to create a chatbot that doesn't endorse some political views over others.

That hasn't stopped companies, including OpenAI, from adjusting their models to answer more questions than they previously would have, particularly questions on controversial political subjects.



