Causal, transformer-based language models generate tokens one after another in rapid succession. Given the K preceding tokens, the model computes K intermediate vectors in each hidden layer to produce the (K+1)-th token. Each vector is the output of a module that operates on the previous layer's output vectors. Despite the complexity of the whole procedure, one unusual restriction must hold: the number of operations available to determine the next token is limited by the number of tokens seen so far.
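The constraint above can be made concrete with a toy causal layer. This is a sketch only, reducing attention to a causal prefix average; `causal_layer` is an illustrative name, not part of any real transformer implementation. The point is that a layer over K input positions produces exactly K intermediate vectors, each depending only on positions up to its own.

```python
# Toy illustration (not a real transformer): a single "layer" where each
# of the K positions may only look at positions <= itself, here reduced
# to a causal prefix mean over scalar "embeddings". The layer emits K
# intermediate values, and the (K+1)-th token would be read off the K-th.

def causal_layer(xs):
    out = []
    total = 0.0
    for i, x in enumerate(xs):
        total += x
        out.append(total / (i + 1))  # mean of positions 0..i only
    return out
```

Feeding in more tokens yields proportionally more intermediate vectors per layer, which is exactly the computational budget the pause-token idea tries to widen.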
A recent study by Carnegie Mellon University and Google investigates delaying a decoder-only model's output by adding dummy tokens to its input. The authors pick a (learnable) pause token and append one or more copies of it to the input sequence. To obtain the model's answer, they simply ignore the corresponding outputs until the last pause token has been seen.
Importantly, the researchers consider inserting such delays not only at inference but also during downstream fine-tuning and pretraining. What effect this seemingly small adjustment might have in practice cannot be known in advance. The delay creates a potentially "wider" computational channel, which the Transformer may use to its advantage. A simpler outcome would be that the model ignores the delaying tokens entirely and behaves as before. After all, neither the tokens themselves nor the small number of new parameters introduced by embedding a single token can encode any additional information from the training data. Worse, these meaningless tokens might obscure useful signals and weaken the model.
The team undertook an empirical assessment of introducing (appended) delays in all training and inference phases. They study pause training on 1B- and 130M-parameter decoder-only models pretrained on C4 (Raffel et al., 2019) and then fine-tuned on nine downstream tasks covering extractive question answering, reasoning, general understanding, and fact recall. Most notably, the method raises the 1B model's exact-match score by 18% on the SQuAD extractive question-answering task. Similarly, they observe an 8% improvement on the general-understanding task CommonSenseQA and a 1% accuracy gain on the reasoning task GSM8K over the standard model's accuracy of 7.5%.
On the other hand, when pause tokens are introduced only during the final fine-tuning stage (starting from a standard pretrained model), improvements appear in only a small fraction of cases. The team also conducted a series of key ablations, including:
- Finding that appending pause tokens is generally superior to prepending them.
- Finding that there is an optimal number of pause tokens for each downstream task.
- Finding that decreasing the number of inference-time pause tokens results in graceful performance degradation.
The team believes the essential next step is developing ways to make delays directly helpful on a standard pretrained model. They envision several new theoretical and applied research directions opening up as their work expands the paradigm of delayed next-token prediction.
Check out the Paper. All credit for this research goes to the researchers on this project.
Dhanshree Shenwai is a Computer Science Engineer with solid experience in FinTech companies covering the Financial, Cards & Payments, and Banking domains, and a keen interest in applications of AI. She is passionate about exploring new technologies and advancements in today's evolving world.