Microsoft AI Open-Sources DeepSpeed Chat: An End-To-End RLHF Pipeline To Train ChatGPT-like Models

It is no exaggeration to say that ChatGPT-like models have had a revolutionary effect on the digital world. For that reason, the AI open-source community is working on projects (such as ChatLLaMa, Alpaca, etc.) that aim to make ChatGPT-style models more widely available. These models are extremely flexible and can execute tasks such as summarization, coding, and translation at or above human levels of expertise.

Despite these impressive efforts, there is still no publicly available end-to-end RLHF pipeline capable of training a robust ChatGPT-like model. Even when access to such computing resources is available, training efficiency is frequently below 5% of what the hardware can deliver. And despite access to multi-GPU clusters, existing systems cannot support simple, fast, and affordable training of state-of-the-art ChatGPT-like models with billions of parameters.

These restrictions stem from the fact that the sophisticated RLHF training pipeline used by InstructGPT is not well supported by existing DL systems, which are optimized for more conventional pre-training and fine-tuning pipelines. To make ChatGPT-like models more widely available and RLHF training more easily accessible, the Microsoft team is releasing DeepSpeed-Chat, which offers an end-to-end RLHF pipeline to train ChatGPT-like models. It has the following features:


1. A Convenient Environment for Training and Inferring ChatGPT-Like Models: InstructGPT-style training can be executed on a pre-trained Hugging Face model with a single script using the DeepSpeed-RLHF system. This enables users to generate their own ChatGPT-like model. After the model is trained, an inference API can be used to try out conversational interactions (see the usage sketch after this list).

2. The DeepSpeed-RLHF Pipeline: The DeepSpeed-RLHF pipeline largely replicates the training pipeline from the InstructGPT paper. The team ensured full and exact correspondence between the three steps: a) Supervised Fine-tuning (SFT), b) Reward Model Fine-tuning, and c) Reinforcement Learning with Human Feedback (RLHF). In addition, they provide tools for data abstraction and blending that make it possible to train using data from multiple sources.

3. The DeepSpeed-RLHF System: The Hybrid Engine (DeepSpeed-HE) for RLHF is a powerful and sophisticated system that combines the training and inference capabilities of DeepSpeed. The Hybrid Engine can seamlessly switch between RLHF's inference and training modes, benefiting from DeepSpeed-Inference's optimizations such as tensor parallelism and high-performance transformer kernels for generation, as well as RLHF's many memory optimization strategies such as ZeRO and LoRA. To further optimize memory management and data transfer across the various stages of RLHF, DeepSpeed-HE is also aware of the entire RLHF pipeline. The DeepSpeed-RLHF system achieves unprecedented efficiency at scale, allowing the AI community to access training of complex RLHF models quickly, cheaply, and conveniently.

4. Efficiency and Affordability: Because DeepSpeed-HE is over 15 times faster than conventional systems, RLHF training can be completed quickly and cheaply.

5. Excellent Scalability: DeepSpeed-HE's strong scalability on multi-node, multi-GPU systems allows it to accommodate models with hundreds of billions of parameters.

6. Expanding Access to RLHF Training: DeepSpeed-HE enables data scientists without access to multi-GPU systems to build not just toy RLHF models but large, powerful ones that can be deployed in real-world settings, all with only a single GPU for training.
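As a rough illustration of the first point, the sketch below shows how one might try out a trained actor model through standard Hugging Face APIs. The checkpoint path and prompt format are illustrative assumptions, not DeepSpeed-Chat's actual layout; the repository also ships its own scripts for launching training and for interactive chatting.

```python
# Minimal sketch: chatting with an actor model produced by DeepSpeed-Chat-style
# training, loaded via Hugging Face transformers. The checkpoint path and the
# prompt template below are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "./output/actor-model"  # hypothetical path to the trained actor checkpoint

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
)
model.to("cuda" if torch.cuda.is_available() else "cpu")
model.eval()

prompt = "Human: Explain RLHF in one paragraph.\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        top_p=0.9,
        temperature=0.7,
    )

# Print only the newly generated continuation, not the echoed prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```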

The researchers have included a complete end-to-end training pipeline in DeepSpeed-Chat and modeled it after InstructGPT to make the training process as streamlined as possible.

The training process consists of three stages:

1. The pretrained language models are fine-tuned via supervised fine-tuning (SFT) on carefully selected human responses to various queries.

2. Next, the team performs “reward model fine-tuning,” which involves training a separate (usually smaller than the SFT) model (RW) on a dataset that includes human-provided rankings of multiple answers to the same query.

3. Lastly, in RLHF training, the Proximal Policy Optimization (PPO) algorithm is used to further adjust the SFT model with the reward feedback from the RW model (a simplified sketch of this step follows the list).
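The sketch below is a generic illustration of the PPO-style update used in the third stage, not DeepSpeed-Chat's actual implementation. It assumes Hugging Face-style causal LMs, and the function and tensor names (ppo_step, old_logprobs, advantages, etc.) are hypothetical; the advantages are assumed to be precomputed from the RW model's reward scores, and the frozen SFT model serves as the reference for the KL penalty.

```python
# Minimal, self-contained sketch of one PPO-style RLHF update with a KL penalty
# against the frozen SFT reference model. This illustrates the general technique;
# it is not DeepSpeed-Chat's code, and all names here are hypothetical.
import torch
import torch.nn.functional as F


def ppo_step(actor, ref_model, optimizer, query_ids, response_ids,
             old_logprobs, advantages, clip_eps=0.2, kl_coef=0.1):
    """One clipped PPO update on a batch of (query, response) token sequences."""
    input_ids = torch.cat([query_ids, response_ids], dim=1)
    q_len = query_ids.shape[1]

    # Per-token log-probs of the sampled response under the current actor.
    logits = actor(input_ids).logits[:, q_len - 1:-1, :]
    logprobs = torch.gather(
        F.log_softmax(logits, dim=-1), 2, response_ids.unsqueeze(-1)
    ).squeeze(-1)

    # KL penalty keeps the actor close to the supervised fine-tuned reference.
    with torch.no_grad():
        ref_logits = ref_model(input_ids).logits[:, q_len - 1:-1, :]
        ref_logprobs = torch.gather(
            F.log_softmax(ref_logits, dim=-1), 2, response_ids.unsqueeze(-1)
        ).squeeze(-1)
    kl_penalty = kl_coef * (logprobs - ref_logprobs).mean()

    # Clipped PPO objective; advantages are derived from the RW model's rewards.
    ratio = torch.exp(logprobs - old_logprobs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()

    loss = policy_loss + kl_penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The clipping and the KL term together prevent the actor from drifting too far from the SFT model in a single update, which is the standard way PPO keeps RLHF training stable.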

Thanks to its open-source release, the AI community can now access DeepSpeed-Chat. On the DeepSpeed GitHub page, the researchers invite users to report issues, submit PRs, and participate in discussions.


Check out the Code. All Credit For This Research Goes To the Researchers on This Project. Also, don't forget to join our 18k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.




Tanushree Shenwai is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Bhubaneswar. She is a Data Science enthusiast and has a keen interest in the scope of application of artificial intelligence in various fields. She is passionate about exploring new advancements in technologies and their real-life applications.

