
This AI Paper from the University of Washington Proposes Cross-lingual Expert Language Models (X-ELM): A Recent Frontier in Overcoming Multilingual Model Limitations


Large-scale multilingual language models are the foundation of many cross-lingual and non-English Natural Language Processing (NLP) applications. These models are trained on massive volumes of text in multiple languages. The downside of this widespread approach is that, because so many languages are packed into a single model, they compete for its limited capacity, which leads to lower performance on individual languages than monolingual models achieve. This problem, known as the curse of multilingualism, hits low-resource languages hardest.

To overcome the problem of multilingual language models (LMs) performing worse than monolingual ones due to inter-language competition for model parameters, a team of researchers from the University of Washington, Charles University in Prague, and the Allen Institute for Artificial Intelligence has proposed Cross-lingual Expert Language Models (X-ELM). The approach trains separate language models on different portions of a multilingual corpus.

The central goal of X-ELM is to reduce inter-language competition for model parameters by letting each language model in the ensemble specialize independently on a particular subset of the multilingual data. The method aims to preserve the efficiency of the ensemble while tailoring each model's proficiency to a specific language.

Each X-ELM is trained independently on a distinct subset of a multilingual corpus, and the resulting ensemble scales model capacity so that the languages of the full corpus are covered more accurately. To train X-ELMs, the team also introduces x-BTM, an extension of the Branch-Train-Merge (BTM) paradigm designed for the more heterogeneous multilingual setting.
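
As a rough illustration of the branch-and-train step, the sketch below copies a shared seed model once per data cluster and trains each copy in isolation. The `train_fn` hook and the surrounding structure are illustrative assumptions, not the paper's actual training code.

```python
from copy import deepcopy

def train_x_elm_experts(seed_model, clustered_corpora, train_fn):
    """Branch one expert per data cluster from a shared seed model and
    train each branch independently (no cross-expert synchronization).

    seed_model:        any trainable model object (e.g., a torch module)
    clustered_corpora: dict mapping cluster id -> iterable of documents
    train_fn:          caller-supplied routine that fine-tunes a model copy
                       on one cluster; a placeholder, not the paper's code
    """
    experts = {}
    for cluster_id, corpus in clustered_corpora.items():
        expert = deepcopy(seed_model)                    # "branch" from the seed
        experts[cluster_id] = train_fn(expert, corpus)   # "train" in isolation
    return experts

# Toy usage with stand-in objects, just to show the control flow.
experts = train_x_elm_experts(
    seed_model={"params": [0.0]},
    clustered_corpora={"cluster_a": ["doc1", "doc2"], "cluster_b": ["doc3"]},
    train_fn=lambda model, corpus: {**model, "seen_docs": len(list(corpus))},
)
print(sorted(experts))
```

Because the branches never exchange gradients, each expert can be trained on separate hardware and at different times, which is what makes the approach easy to scale.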

x-BTM extends existing BTM methods by introducing a balanced multilingual data clustering technique based on typological similarity. It also includes Hierarchical Multi-Round (HMR) training, a method for efficiently training new experts specialized in previously unseen languages or other multilingual data distributions.
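
The article does not spell out the clustering procedure, but a minimal sketch of a balanced, typology-aware clustering step could look like the following. The toy feature vectors and the greedy capacity rule are illustrative assumptions rather than the authors' exact method.

```python
import numpy as np

def balanced_typological_clusters(lang_vectors, k, seed=0):
    """Greedily assign languages to k roughly equal-sized clusters,
    grouping typologically similar languages together.

    lang_vectors: dict mapping language code -> typological feature vector
                  (hypothetical features standing in for real typology data)
    """
    rng = np.random.default_rng(seed)
    langs = list(lang_vectors)
    vecs = np.stack([lang_vectors[lang] for lang in langs])
    # Pick k random languages as initial cluster centroids.
    centroids = vecs[rng.choice(len(langs), size=k, replace=False)]
    capacity = int(np.ceil(len(langs) / k))          # balance constraint
    clusters = [[] for _ in range(k)]
    for i in rng.permutation(len(langs)):
        dists = np.linalg.norm(centroids - vecs[i], axis=1)
        for c in np.argsort(dists):                  # nearest centroid first
            if len(clusters[c]) < capacity:          # respect the balance cap
                clusters[c].append(langs[i])
                break
    return clusters

# Example: toy 2-D "typology" vectors for four languages, two clusters.
toy = {"en": np.array([0.0, 1.0]), "de": np.array([0.1, 0.9]),
       "hi": np.array([1.0, 0.0]), "ur": np.array([0.9, 0.1])}
print(balanced_typological_clusters(toy, k=2))
```

The balance constraint matters because, without it, high-resource languages would dominate most clusters and low-resource languages would again be starved of dedicated capacity.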

The paper shows that once the initial X-ELMs are trained, experts can be selected dynamically at inference time. Further rounds of x-BTM, with new experts branched from existing X-ELMs, adapt the models to new settings and grow the X-ELM set without modifying the existing experts.
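
One common way to realize this kind of dynamic expert selection, and a plausible reading of how an X-ELM ensemble could be used at inference time, is to pick the top-scoring experts for the current context and mix their next-token distributions. The scoring rule and the top-k choice below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def ensemble_next_token(expert_logprobs, expert_scores, top_k=2):
    """Combine per-expert next-token distributions into one prediction.

    expert_logprobs: array of shape (n_experts, vocab_size) holding each
                     expert's log-probabilities for the next token.
    expert_scores:   unnormalized relevance scores for the experts, e.g.
                     how well each expert models the current context
                     (the exact scoring rule is an assumption here).
    top_k:           number of experts activated per prediction.
    """
    scores = np.asarray(expert_scores, dtype=float)
    keep = np.argsort(scores)[-top_k:]             # dynamically pick experts
    w = np.exp(scores[keep] - scores[keep].max())
    w /= w.sum()                                   # softmax over chosen experts
    probs = np.exp(expert_logprobs[keep])          # back to probability space
    return (w[:, None] * probs).sum(axis=0)        # weighted mixture

# Toy example: three experts over a five-token vocabulary.
logits = np.log(np.array([[0.7, 0.1, 0.1, 0.05, 0.05],
                          [0.2, 0.5, 0.1, 0.1, 0.1],
                          [0.1, 0.1, 0.6, 0.1, 0.1]]))
print(ensemble_next_token(logits, expert_scores=[2.0, 0.5, 1.5], top_k=2))
```

Selecting only a few experts per prediction keeps inference cost close to that of a single dense model while still drawing on the specialized knowledge of the ensemble.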

The experiments cover twenty languages, with adaptation to four additional languages, and show that X-ELMs outperform dense language models trained with the same compute budget across several experimental conditions. The perplexity gains are distributed evenly across languages regardless of how well resourced they are. HMR training also proves more efficient at adapting the models to new languages than traditional language-adaptive pretraining techniques.

The studies show that, given the same computational resources, X-ELM outperforms jointly trained multilingual models in all languages considered. Its gains also carry over to downstream tasks, demonstrating the model's usefulness in real-world scenarios. Because new experts can be added to the ensemble iteratively, the model can also adapt to new languages without suffering catastrophic forgetting of previously learned ones.

In conclusion, this research addresses the difficulties of using massive multilingual language models (LMs) and presents Cross-lingual Expert Language Models (X-ELM) as a promising solution.


Check out the Paper. All credit for this research goes to the researchers of this project. Also, don't forget to follow us on Twitter. Join our 36k+ ML SubReddit, 41k+ Facebook Community, Discord Channel, and LinkedIn Group.

If you like our work, you will love our newsletter.

Don't forget to join our Telegram Channel.


Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical thinking skills, along with an ardent interest in acquiring new skills, leading teams, and managing work in an organized manner.


