Meta Releases ‘Largest’ Llama 3.1 AI Mannequin That Beats OpenAI’s GPT-4o

Meta on Tuesday launched its newest and largest synthetic intelligence (AI) mannequin to the general public. Referred to as Meta Llama 3.1 405B, the corporate says the open-source mannequin outperforms main closed AI fashions corresponding to GPT-4, GPT-4o, and Claude 3.5 Sonnet throughout a number of benchmarks. The beforehand launched Llama Three 8B and 70B AI fashions have additionally been upgraded. The newer variations had been distilling from the 405B mannequin and now supply a 1,28,000 tokens context window. Meta claims each of those fashions at the moment are the main open-source massive language fashions (LLMs) for his or her sizes.

Asserting the brand new AI mannequin in a weblog publish, the expertise conglomerate mentioned, “Llama 3.1 405B is the primary brazenly accessible mannequin that rivals the highest AI fashions in relation to state-of-the-art capabilities generally data, steerability, math, device use, and multilingual translation.”

Notably, 405B right here refers to 405 billion parameters, which might be understood because the LLM’s variety of data nodes. The upper the parameter dimension, the more proficient an AI mannequin is in dealing with complicated queries. The context window of the mannequin is 128,000 tokens. It helps English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai languages.

The corporate claims the Llama 3.1 405B was evaluated on greater than 150 benchmark assessments throughout a number of experience. Primarily based on the info shared within the publish, Meta’s AI mannequin scored 96.eight within the Grade Faculty Math 8K (GSM8K) GPT-4’s 94.2, GPT-4o’s 96.1, and Claude 3.5 Sonnet’s 96.4. It additionally outperformed these fashions within the AI2’s Reasoning Problem (ARC) benchmark for science proficiency, Nexus for device use, and the Multilingual Grade Faculty Math (MGSM) benchmark.

Meta’s largest AI mannequin was educated on greater than 15 trillion tokens with greater than 16 thousand Nvidia H100 GPUs. One of many main introductions within the Llama 3.1 405B is the official assist for tool-calling which can permit builders to make use of Courageous Seek for net searches, Wolfram Alpha to carry out complicated mathematical calculations, and Code Interpreter to generate Python code.

For the reason that Meta Llama 3.1 405B is on the market in open supply, people can entry it from both the corporate’s web site or from its Hugging Face itemizing. Nevertheless, being a big mannequin, it requires roughly 750GB of disk cupboard space to run. For inferencing, two nodes on Mannequin Parallel 16 (MP16) may also be needed. Mannequin Parallelism 16 is a selected implementation of mannequin parallelism the place a big neural community is separated into 16 units or processors.

Other than being accessible publicly, the mannequin can also be accessible on main AI platforms by AWS, Nvidia, Databricks, Groq, Dell, Azure, Google Cloud, Snowflake, and extra. The corporate says a complete of 25 such platforms shall be powered by Llama 3.1 405B. For security and safety, the corporate has used Llama Guard Three and Immediate Guards, two new instruments that safeguard the LLM from potential hurt and abuse.