Techniques for training large neural networks

Pipeline parallelism splits a model “vertically” by layer. It is also possible to “horizontally” split certain operations within a layer, which is usually called tensor parallel training. For many modern models (such as the Transformer), the computation bottleneck is multiplying an activation batch matrix by a large weight matrix. Matrix multiplication can be thought of as dot products between pairs of rows and columns; it is possible to compute independent dot products on different GPUs, or to compute parts of each dot product on different GPUs and sum up the results. With either strategy, we can slice the weight matrix into even-sized “shards”, host each shard on a different GPU, and use that shard to compute the relevant part of the overall matrix product before later communicating to combine the results.
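As a concrete illustration of these two sharding strategies, the NumPy sketch below simulates GPU shards with plain arrays; the names, shapes, and shard count are assumptions for the example, and the concatenate/sum steps stand in for the all-gather/all-reduce collectives used on real hardware.

```python
# Minimal NumPy sketch of the two sharding strategies for Y = X @ W.
# Shards are simulated as plain arrays; on real hardware each shard would
# live on a different GPU and the final concat/sum would be a collective.
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out, n_shards = 4, 8, 6, 2  # illustrative sizes (assumptions)

X = rng.normal(size=(batch, d_in))   # activation batch
W = rng.normal(size=(d_in, d_out))   # weight matrix to be sharded

# Strategy 1: split W by columns -> each "GPU" computes a full, independent
# slice of the output (independent dot products); results are concatenated
# (all-gather).
col_shards = np.split(W, n_shards, axis=1)
Y_col = np.concatenate([X @ shard for shard in col_shards], axis=1)

# Strategy 2: split W by rows (and X by columns to match) -> each "GPU"
# computes a partial result for every output element; results are summed
# (all-reduce).
row_shards = np.split(W, n_shards, axis=0)
X_shards = np.split(X, n_shards, axis=1)
Y_row = sum(x @ w for x, w in zip(X_shards, row_shards))

# Both strategies recover the unsharded matrix product.
assert np.allclose(Y_col, X @ W)
assert np.allclose(Y_row, X @ W)
```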

One example is Megatron-LM, which parallelizes matrix multiplications within the Transformer’s self-attention and MLP layers. PTD-P uses tensor, data, and pipeline parallelism; its pipeline schedule assigns multiple non-consecutive layers to each device, reducing bubble overhead at the cost of more network communication.
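The sketch below follows the general Megatron-LM pattern for the MLP block, splitting the first weight matrix by columns and the second by rows so the nonlinearity stays local to each shard and only one combining sum (an all-reduce on real hardware) is needed. It is a simplified NumPy illustration under those assumptions, not Megatron-LM’s actual implementation, and all names and shapes are invented for the example.

```python
# Megatron-style split of a Transformer MLP block (two matmuls with a
# nonlinearity in between), with GPU shards simulated as NumPy arrays.
import numpy as np

rng = np.random.default_rng(1)
batch, d_model, d_ff, n_shards = 4, 8, 16, 2  # illustrative sizes (assumptions)

X = rng.normal(size=(batch, d_model))
W1 = rng.normal(size=(d_model, d_ff))
W2 = rng.normal(size=(d_ff, d_model))

# tanh approximation of GELU, applied elementwise
gelu = lambda x: 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

W1_shards = np.split(W1, n_shards, axis=1)  # column-parallel first matmul
W2_shards = np.split(W2, n_shards, axis=0)  # row-parallel second matmul

# Each shard computes its slice of the block end-to-end, with no
# communication until the partial outputs are summed (all-reduce).
partials = [gelu(X @ w1) @ w2 for w1, w2 in zip(W1_shards, W2_shards)]
Y = sum(partials)

assert np.allclose(Y, gelu(X @ W1) @ W2)
```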

Sometimes the input to the network can be parallelized across a dimension with a high degree of parallel computation relative to cross-communication. Sequence parallelism is one such idea, where an input sequence is split across time into multiple sub-examples, proportionally decreasing peak memory consumption by allowing the computation to proceed with more granularly-sized examples.
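A minimal sketch of that idea, assuming a per-token operation with no cross-token dependencies: splitting the sequence along the time dimension lets the computation proceed chunk by chunk, shrinking the peak activation size in proportion to the chunk length while producing the same result. Names and shapes are illustrative.

```python
# Splitting an input sequence across time into smaller sub-examples.
import numpy as np

rng = np.random.default_rng(2)
seq_len, d_model, n_chunks = 12, 8, 4  # illustrative sizes (assumptions)

X = rng.normal(size=(seq_len, d_model))      # one long input sequence
W = rng.normal(size=(d_model, d_model))

# A per-token operation: each position is processed independently.
per_token = lambda x: np.maximum(x @ W, 0.0)

# Full-sequence pass: peak activation is (seq_len, d_model).
Y_full = per_token(X)

# Chunked pass: peak activation is (seq_len / n_chunks, d_model).
Y_chunked = np.concatenate(
    [per_token(chunk) for chunk in np.split(X, n_chunks, axis=0)]
)

assert np.allclose(Y_full, Y_chunked)
```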
