
Stanford and Mila Researchers Propose Hyena: An Attention-Free Drop-in Alternative to the Core Building Block of Many Large-Scale Language Models


As everyone knows, the race to develop and deliver mind-blowing generative models such as ChatGPT and Bard, and their underlying technology such as GPT-3 and GPT-4, has taken the AI world by storm. Still, there remain many challenges around the accessibility, training, and practical feasibility of these models in the many use cases that touch our daily problems.

Anyone who has played around with such sequence models has likely run into one sure-shot problem that dampened their excitement: the length of the input they can send in to prompt the model.

For enthusiasts who want to dig into the core of these technologies and train a custom model, the optimization process makes it a nearly impossible task.


At the center of these problems lies the quadratic nature of the attention mechanisms that sequence models rely on. The computational cost of such algorithms, and the resources needed to run them, are among the biggest culprits. Scaling them up becomes extremely expensive, which leaves only a handful of well-resourced organizations with a real understanding of, and real control over, such algorithms.

Simply put, attention incurs a cost that grows quadratically with sequence length, which limits the amount of context a model can access and makes scaling it a costly affair.
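To see where that quadratic cost comes from, consider a minimal, naive self-attention computation in PyTorch (a toy sketch, not the implementation of any particular model): the score matrix alone has L × L entries, so doubling the context length roughly quadruples compute and memory.

```python
import torch

def toy_self_attention(x):
    """Naive single-head self-attention over an (L, d) input.

    The scores tensor has shape (L, L), so compute and memory both
    grow quadratically with the sequence length L.
    """
    L, d = x.shape
    q, k, v = x, x, x                        # toy case: no learned projections
    scores = q @ k.T / d ** 0.5              # (L, L)  <- the quadratic term
    weights = torch.softmax(scores, dim=-1)
    return weights @ v                       # (L, d)

toy_self_attention(torch.randn(1024, 64))    # ~1M attention scores
toy_self_attention(torch.randn(8192, 64))    # ~67M scores: 8x the context, 64x the work
```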

Worry not, though: a new architecture called Hyena is now making waves in the NLP community, and many are hailing it as the rescuer we all need. It challenges the dominance of existing attention mechanisms, and the research paper demonstrates its potential to displace them.

Developed by a team of researchers from Stanford and Mila, Hyena boasts impressive performance on a range of NLP tasks while remaining subquadratic in optimization cost. In this article, we take a close look at Hyena's claims.

The paper suggests that subquadratic operators can match the quality of attention models at scale without the same cost in parameters and optimization. Based on targeted reasoning tasks, the authors distill the three most important properties contributing to its performance:

  1. Data control
  2. Sublinear parameter scaling
  3. Unrestricted context

With these points in mind, they introduce the Hyena hierarchy, a new operator that combines long convolutions and element-wise multiplicative gating to match the quality of attention at scale while reducing the computational cost.
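To make the recipe concrete, here is a heavily simplified, order-2 sketch of a Hyena-style operator in PyTorch. It illustrates the two ingredients named above, FFT-based long convolutions interleaved with element-wise multiplicative gating, but it is not the authors' implementation: the class name HyenaStyleBlock and the explicit per-channel filters are assumptions made for brevity, whereas the paper parameterizes its long filters implicitly.

```python
import torch
import torch.nn as nn

class HyenaStyleBlock(nn.Module):
    """Simplified, order-2 Hyena-like operator: long convolutions evaluated
    with FFTs, interleaved with element-wise gating. Illustration only."""

    def __init__(self, d_model, seq_len):
        super().__init__()
        self.in_proj = nn.Linear(d_model, 3 * d_model)   # value + two gates
        self.out_proj = nn.Linear(d_model, d_model)
        # Explicit per-channel filters spanning the whole sequence length.
        self.filters = nn.Parameter(torch.randn(2, d_model, seq_len) * 0.02)

    def long_conv(self, u, h):
        # Causal convolution over the full sequence via FFT: O(L log L).
        L = u.shape[1]
        u_f = torch.fft.rfft(u, n=2 * L, dim=1)                  # (B, L+1, D)
        h_f = torch.fft.rfft(h.transpose(0, 1), n=2 * L, dim=0)  # (L+1, D)
        return torch.fft.irfft(u_f * h_f, n=2 * L, dim=1)[:, :L]

    def forward(self, u):                                  # u: (B, L, D)
        v, g1, g2 = self.in_proj(u).chunk(3, dim=-1)
        v = g1 * self.long_conv(v, self.filters[0])        # long conv, then gate
        v = g2 * self.long_conv(v, self.filters[1])        # repeated once per order
        return self.out_proj(v)

block = HyenaStyleBlock(d_model=64, seq_len=1024)
y = block(torch.randn(2, 1024, 64))                        # y: (2, 1024, 64)
```

Because the long convolutions are evaluated with FFTs, their cost grows as O(L log L) in the sequence length rather than the O(L²) of attention.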

The experiments reveal impressive results.

  1. Language modeling. 

Hyena's scaling was tested on autoregressive language modeling. Evaluated on perplexity on the benchmark datasets WikiText103 and The Pile, Hyena is the first attention-free, convolution-based architecture to match GPT quality with a 20% reduction in total FLOPs.

Table: Perplexity on WikiText103 (same tokenizer). ∗ marks results from Dao et al. (2022c). Deeper and thinner models (Hyena-slim) achieve lower perplexity.

Table: Perplexity on The Pile for models trained to a total number of tokens, e.g., 5 billion (a different run for each token total). All models use the same tokenizer (GPT-2). The FLOP count is for the 15 billion token run.

  2. Large-scale image classification

The paper also demonstrates the potential of Hyena as a general deep-learning operator for image classification. The authors drop-in replace the attention layers in the Vision Transformer (ViT) with the Hyena operator and match ViT's performance.
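As a rough illustration of what such a drop-in replacement could look like, the snippet below swaps each attention module of a ViT-style model for the HyenaStyleBlock sketched earlier. The attribute names (vit_model.blocks, block.attn, attn.qkv) are assumptions that vary across ViT codebases, so treat this as a pattern rather than working integration code.

```python
import torch.nn as nn

def replace_attention_with_hyena(vit_model: nn.Module, seq_len: int) -> nn.Module:
    """Swap every attention layer in a ViT-style model for a Hyena-style operator.

    Assumes the model stores its transformer blocks in `vit_model.blocks` and
    each block keeps its attention module in `block.attn` with a `qkv` linear
    layer -- hypothetical attribute names used purely for illustration.
    """
    for block in vit_model.blocks:
        d_model = block.attn.qkv.in_features             # the block's model width
        block.attn = HyenaStyleBlock(d_model, seq_len)   # operator sketched above
    return vit_model
```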

On CIFAR-2D, the authors test a 2D version of the Hyena long convolution filters in a standard convolutional architecture, which improves on the 2D long convolutional model S4ND (Nguyen et al., 2022) in accuracy, with an 8% speedup and 25% fewer parameters.

The promising results at the sub-billion parameter scale suggest that attention may not be all we need, and that simpler subquadratic designs such as Hyena, informed by simple guiding principles and evaluation on mechanistic interpretability benchmarks, can form the basis for efficient large models.

With the waves this architecture is making in the community, it will be interesting to see whether Hyena has the last laugh.


Check out the Paper and GitHub link. Don't forget to join our 20k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com



Data scientist currently working for S&P Global Market Intelligence. Has worked as a data scientist for AI product startups. A reader and learner at heart.

