LmCast :: Stay tuned in

Meta's Omnilingual MT for 1,600 Languages

Recorded: March 21, 2026, 10 p.m.

Original Summarized

Omnilingual MT: Machine Translation for 1,600 Languages | Research - AI at Meta

RESEARCH / NLP
March 17, 2026

Abstract

Advances made through No Language Left Behind (NLLB) demonstrated that high-quality machine translation (MT) can scale to 200 languages. Large Language Models (LLMs) have since been adopted for MT, improving quality but not necessarily extending language coverage. Current systems remain constrained by limited coverage and a persistent generation bottleneck: while cross-lingual transfer enables models to understand many undersupported languages to some degree, they often cannot reliably generate them, leaving most of the world's 7,000 languages, especially endangered and marginalized ones, outside the reach of modern MT. Early explorations in extreme scaling offered promising proofs of concept but did not yield sustained solutions.

We present Omnilingual Machine Translation (OMT), the first MT system supporting more than 1,600 languages. This scale is enabled by a comprehensive data strategy that integrates large public multilingual corpora with newly created datasets, including manually curated MeDLEY bitext, synthetic backtranslation, and mined data, substantially expanding coverage across long-tail languages, domains, and registers. To ensure both reliable and expansive evaluation, we combined standard metrics with a suite of evaluation artifacts: the BLASER 3 reference-free quality estimation model, the OmniTOX toxicity classifier, the BOUQuET dataset (a newly created, largest-to-date multilingual evaluation collection built from scratch and manually extended across a wide range of linguistic families), and the Met-BOUQuET dataset (for faithful multilingual quality estimation at scale). We explore two ways of specializing an LLM for machine translation: as a decoder-only model (OMT-LLaMA) or as a module in an encoder–decoder architecture (OMT-NLLB). The former is built on LLaMA3, with multilingual continual pretraining and retrieval-augmented translation for inference-time adaptation. The latter is built on top of a multilingual aligned embedding space (OmniSONAR, itself also based on LLaMA3) and introduces a training methodology that can exploit non-parallel data, allowing us to incorporate the decoder-only continual pretraining data into the training of an encoder–decoder architecture. Notably, all our 1B to 8B parameter models match or exceed the MT performance of a 70B LLM baseline, revealing a clear specialization advantage and enabling strong translation quality in low-compute settings. Moreover, our evaluation of English-to-1,600 translations further shows that while baseline models can interpret undersupported languages, they frequently fail to generate them with meaningful fidelity; OMT-LLaMA models substantially expand the set of languages for which coherent generation is feasible.
Additionally, OMT models improve in cross-lingual transfer, coming close to solving the "understanding" part of the puzzle in MT for the 1,600 languages evaluated. Beyond strong out-of-the-box performance, we find that finetuning and retrieval-augmented generation offer additional pathways to improve quality for a given subset of languages when targeted data or domain knowledge is available. Our leaderboard and main human-created evaluation datasets (BOUQuET and Met-BOUQuET) are dynamically evolving towards omnilinguality and are freely available.

AUTHORS: Omnilingual MT Team: Belen Alastruey, Niyati Bafna, Andrea Caciolai, Kevin Heffernan, Artyom Kozhevnikov, Christophe Ropers, Eduardo Sánchez, Charles-Eric Saint-James, Ioannis Tsiamas, Chierh Cheng, Joe Chuang, Paul-Ambroise Duquenne, Mark Duppenthaler, Nate Ekberg, Cynthia Gao, Pere Lluís Huguet Cabot, João Maria Janeiro, Jean Maillard, Gabriel Mejia Gonzalez, Holger Schwenk, Edan Toledo, Arina Turkatenko, Albert Ventayol-Boada, Rashel Moritz, Alexandre Mourachko, Surya Parimi, Mary Williamson, Shireen Yates, David Dale, Marta R. Costa-jussa

Publisher: arXiv
Research Topics: Natural Language Processing (NLP)

Omnilingual Machine Translation (OMT) represents a significant advancement in the field of machine translation, achieving unprecedented scale by supporting translation across 1,600 languages. This achievement builds on the successes of the No Language Left Behind (NLLB) project, which demonstrated the feasibility of high-quality MT at a scale of 200 languages. Subsequent applications of Large Language Models (LLMs) to MT improved quality but struggled to broaden language coverage significantly. The core challenge remained the limited reach of current systems, specifically their inability to reliably generate translations for a substantial portion of the world's roughly 7,000 languages, particularly "long-tail," endangered, and marginalized languages.

The research team at Meta AI addressed this bottleneck with OMT, employing a comprehensive data strategy. This strategy integrated publicly available multilingual corpora with newly created datasets, including meticulously curated MeDLEY bitext, synthetic backtranslation, and mined data. These efforts substantially expanded the system's coverage, focusing on languages and domains that were previously underrepresented. To rigorously evaluate OMT's performance, the team combined standard metrics with innovative evaluation artifacts. These included the BLASER 3 quality estimation model, a reference-free system; the OmniTOX toxicity classifier for assessing harmful content; and the BOUQuET dataset, a novel, largest-to-date multilingual evaluation collection built entirely from scratch and augmented manually across numerous linguistic families. A related dataset, Met-BOUQuET, provided faithful multilingual quality estimation at scale.
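The backtranslation idea mentioned above can be illustrated with a minimal sketch. This is not OMT's pipeline; `toy_backtranslate` is a hypothetical stand-in for a real target-to-source MT model, and the word-reversal "translation" exists purely so the example runs end to end.

```python
def toy_backtranslate(sentence: str) -> str:
    """Placeholder for a target->source MT model (here: word reversal)."""
    return " ".join(reversed(sentence.split()))


def build_synthetic_bitext(monolingual_target: list[str]) -> list[tuple[str, str]]:
    """Pair each monolingual target sentence with a synthetic source sentence.

    Training then uses (synthetic source, authentic target) pairs, so the
    model always learns to generate genuine target-language text even when
    no real parallel data exists for that language.
    """
    return [(toy_backtranslate(t), t) for t in monolingual_target]


corpus = ["la casa es azul", "el gato duerme"]
bitext = build_synthetic_bitext(corpus)
# Each pair: (synthetic source, authentic target)
```

The key design point, preserved even in this toy, is the asymmetry: noise from the reverse model lands on the source side, while the generation side stays clean.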

The development of OMT utilized two primary LLM specialization strategies. The first produced OMT-LLaMA, a decoder-only model built upon the LLaMA3 architecture, incorporating multilingual continual pretraining and retrieval-augmented translation for adaptation at inference time. The second, OMT-NLLB, was built on top of the OmniSONAR multilingual aligned embedding space (itself based on LLaMA3) and implemented a training methodology that exploits non-parallel data, enabling the decoder-only continual pretraining data to be incorporated into the training of an encoder–decoder architecture. Notably, models ranging from 1B to 8B parameters consistently matched or exceeded the performance of a 70B LLM baseline. This specialization advantage enabled strong translation quality even with lower compute requirements.
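Retrieval-augmented translation, as described for OMT-LLaMA, can be sketched at a high level: retrieve the most relevant stored translation pairs and prepend them as few-shot examples before the sentence to translate. Everything here is an assumption for illustration; the similarity measure (word overlap), the prompt template, and the function names are hypothetical, not OMT's actual retrieval or prompting scheme.

```python
def overlap_score(a: str, b: str) -> int:
    """Crude relevance measure: shared-word count. A real system would
    use dense multilingual embeddings instead."""
    return len(set(a.lower().split()) & set(b.lower().split()))


def build_rat_prompt(source: str, bitext_store: list[tuple[str, str]], k: int = 2) -> str:
    """Build a few-shot prompt from the k most relevant stored pairs."""
    ranked = sorted(bitext_store, key=lambda p: overlap_score(source, p[0]),
                    reverse=True)
    lines = [f"English: {src}\nTranslation: {tgt}" for src, tgt in ranked[:k]]
    lines.append(f"English: {source}\nTranslation:")
    return "\n\n".join(lines)


store = [("the cat sleeps", "el gato duerme"),
         ("the house is blue", "la casa es azul"),
         ("good morning", "buenos dias")]
prompt = build_rat_prompt("the blue house", store)
```

The benefit this models is inference-time adaptation: swapping in a domain-specific or language-specific bitext store changes behavior without retraining any weights.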

Evaluations of English-to-1,600 language translations revealed a key distinction between OMT and previous systems. While baseline models often correctly interpreted undersupported languages, they frequently failed to generate coherent, faithful translations in them. The OMT-LLaMA models significantly expanded the range of languages for which reliable generation was feasible. Further, OMT models demonstrated improved cross-lingual transfer, coming close to solving the "understanding" aspect of MT for the extensive set of languages evaluated. Beyond out-of-the-box performance, finetuning and retrieval-augmented generation further enhanced quality when targeted data or domain-specific knowledge was available. The dynamically evolving BOUQuET and Met-BOUQuET evaluation datasets, alongside the OMT models, are intended to drive broader multilingual capabilities, and the researchers are committed to keeping these resources freely available, facilitating continued progress toward omnilinguality.
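Reference-free quality estimation of the kind BLASER 3 performs can be sketched in miniature: embed the source and the translation hypothesis in a shared space and score their similarity, so no human reference translation is needed. In this toy, `toy_embed` is a hypothetical stand-in for a multilingual sentence encoder (in BLASER's case, a SONAR-style shared embedding space); bag-of-words counts over same-language text are used only so the arithmetic is visible, and none of this reflects BLASER 3's actual model.

```python
import math
from collections import Counter


def toy_embed(sentence: str) -> Counter:
    """Stand-in for a multilingual sentence encoder: bag-of-words counts."""
    return Counter(sentence.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def reference_free_qe(source: str, hypothesis: str) -> float:
    """Score a translation hypothesis directly against the source sentence,
    with no reference translation required."""
    return cosine(toy_embed(source), toy_embed(hypothesis))
```

The property this captures is the one that makes reference-free QE scale to 1,600 languages: evaluation needs only source–hypothesis pairs, not curated reference translations for every language.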