Unbabel Releases Tower: A Multilingual 7B Parameter Large Language Model (LLM) Optimized for Translation-Related Tasks

With the rise of large language models, natural language processing has been revolutionized. Many LLMs, like GPT-3.5, LLaMA, and Mixtral, came out last year and helped tackle diverse language tasks. Even though there are many such LLMs now, the open-source ecosystem lacks a reliable model for translation tasks. Thorough research has been done to tackle this challenge.

Consequently, a collaboration between researchers at Unbabel, the SARDINE Lab at Instituto Superior Técnico, and the MICS lab at CentraleSupélec, University of Paris-Saclay, has created a new multilingual model, Tower. This Llama 2-based multilingual LLM has 7B parameters and is specifically designed for translation-related tasks. The major highlight of this model is that, unlike other open-source models, which are predominantly built with English data, Tower supports 10 languages: English, German, French, Spanish, Chinese, Portuguese, Italian, Russian, Korean, and Dutch.

Along with multilingual translation, the model also covers tasks ranging from pre-translation activities, like grammar improvement, to translation assessment jobs, like machine translation evaluation and automatic post-editing. The researchers of this collaboration found that the model performed better than state-of-the-art counterparts in translation and better than alternative open-source solutions, including ALMA 13B and LLaMA-2 70B.

The researchers built Tower in two stages: continued pre-training and instruction tuning. They emphasized that continued pre-training enhances LLaMA 2's proficiency in non-English languages, while instruction tuning improves its performance in addressing particular problems without prior experience. For continued pre-training, they used a dataset of 20 billion tokens evenly distributed among the different languages. Two-thirds of the tokens were sourced from monolingual data, and one-third came from publicly accessible bilingual datasets, such as OPUS.
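
To make the data mixture concrete, here is a minimal, purely illustrative sketch of how a 20-billion-token continued pre-training budget could be split evenly across the 10 languages, with two-thirds monolingual and one-third parallel data. The numbers and structure are assumptions for illustration only, not the authors' actual pipeline.

```python
# Illustrative sketch of the continued pre-training token budget described above:
# ~20B tokens split evenly across 10 languages, two-thirds monolingual and
# one-third parallel (bilingual) data such as OPUS. Purely hypothetical numbers.

LANGUAGES = ["en", "de", "fr", "es", "zh", "pt", "it", "ru", "ko", "nl"]
TOTAL_TOKENS = 20_000_000_000

tokens_per_language = TOTAL_TOKENS // len(LANGUAGES)        # ~2B tokens per language
monolingual_budget = (2 * tokens_per_language) // 3         # two-thirds monolingual
parallel_budget = tokens_per_language - monolingual_budget  # one-third parallel

mixture = {
    lang: {
        "monolingual_tokens": monolingual_budget,
        "parallel_tokens": parallel_budget,
    }
    for lang in LANGUAGES
}

for lang, budget in mixture.items():
    print(f"{lang}: {budget['monolingual_tokens']:,} monolingual, "
          f"{budget['parallel_tokens']:,} parallel tokens")
```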

The second step, instruction tuning, enhanced the model's ability to handle specific tasks at a higher level in a zero-shot fashion. They developed a dataset named TowerBlocks for supervised fine-tuning. The dataset comprises code instructions, conversational data, and task-specific records. This dataset helped the model maintain competency across various translation-related tasks by providing prompts for all tasks, including zero- and few-shot templates.
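
As a rough illustration of the zero-shot usage this instruction tuning enables, the sketch below loads the instruction-tuned checkpoint through the Hugging Face transformers pipeline and asks for a Portuguese-to-English translation. The model id (Unbabel/TowerInstruct-7B-v0.1) and the chat-template prompt format are assumptions based on the publicly released checkpoints, not details given in this article.

```python
# Minimal sketch: zero-shot translation with the released instruction-tuned
# checkpoint via the Hugging Face transformers pipeline.
# Assumes the model id below and that `accelerate` is installed for device_map="auto".
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Unbabel/TowerInstruct-7B-v0.1",  # assumed released checkpoint name
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# A single zero-shot translation instruction, formatted with the model's chat template.
messages = [
    {
        "role": "user",
        "content": (
            "Translate the following text from Portuguese into English.\n"
            "Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas "
            "relacionadas com tradução.\nEnglish:"
        ),
    }
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
```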

In conclusion, TowerInstruct can be a significant step in multilingual machine translation, as it outperforms GPT-3.5 and Mixtral 8x7B models. Its features, including automatic post-editing, named-entity recognition, and source error correction, can be very helpful in this domain. As the researchers focus on enhancing the model's efficiency, this model could be a revolutionary stride in multilingual translation. The researchers of this collaboration are also looking forward to the release of TowerEval, an evaluation repository focused on machine translation and related tasks. It will help users reproduce benchmarks and assess the performance of their language models against Tower's standards.


Check out the Model and Reference Blog. All credit for this research goes to the researchers of this project.


Rachit Ranjan is a consulting intern at MarktechPost. He is currently pursuing his B.Tech from the Indian Institute of Technology (IIT), Patna. He is actively shaping his career in the field of Artificial Intelligence and Data Science and is passionate about and dedicated to exploring these fields.

