Introduction
Transformer models consume a large amount of computational power and memory to train effectively. This is due to the depth and complexity of their architectures, which require billions of parameters to be learned from large amounts of training data. Their training process is therefore time-consuming and generates a huge carbon footprint. Researchers have proposed several techniques to improve training efficiency. This article compiles such techniques, spanning improvements to data, model, memory usage, and hardware acceleration, which together facilitate the efficient training of huge transformers.
Large Language Models
Most of the newest large language models are based on huge transformer architectures. BERT, developed by Google, was the forerunner of them all; the BERT-large version has 340 million parameters. With the advent of generative AI, LLMs with human interaction capabilities have become increasingly popular. OpenAI's ChatGPT, which is heavily used to generate textual content such as poems, recipes, and explanations of mathematical formulae, is built on a model with 175 billion parameters. Its open-source counterpart, Meta's Llama, has up to 65 billion parameters.
Approach to Training Huge LLMs
Transformer models are usually trained on massive amounts of text data to learn the huge number of parameters involved. The traditional way to train a transformer model is pre-training, whereby the model learns to predict the next word in a sequence and updates its weights to minimize the prediction error. While pre-trained models work well for generic tasks such as next-sentence prediction, they may not work well on specialized domains, such as medical literature, for which the model saw no training data. In such cases, fine-tuning is useful: the pre-trained model is extended with additional layers specific to the task at hand and trained further on task-specific data. In this form of fine-tuning, only the weights of the newly added layers are updated, while the weights of the original pre-trained layers remain frozen.
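As a rough illustration, the snippet below sketches this style of fine-tuning using PyTorch and the Hugging Face `transformers` library; the model name (`bert-base-uncased`), the two-class head, and the toy batch are illustrative assumptions rather than a prescribed recipe. The pre-trained encoder is frozen and only the newly added classification layer is trained.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Load a pre-trained encoder (model name chosen only for illustration).
encoder = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Freeze all pre-trained weights so they stay unchanged during fine-tuning.
for param in encoder.parameters():
    param.requires_grad = False

# Newly added task-specific layer: only its weights will be updated.
classifier = nn.Linear(encoder.config.hidden_size, 2)
optimizer = torch.optim.AdamW(classifier.parameters(), lr=1e-4)

# One illustrative training step on a toy batch.
batch = tokenizer(["an example sentence", "another example"],
                  padding=True, return_tensors="pt")
labels = torch.tensor([0, 1])

hidden = encoder(**batch).last_hidden_state[:, 0]   # [CLS] representation
logits = classifier(hidden)
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```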
With the recent advancements in generative AI, several new methods have been introduced to train huge LLMs. One such method is Reinforcement Learning from Human Feedback (RLHF). In this method, the answers returned by a chat-based model are ranked by humans, and this ranking is used to train a reinforcement learning-based reward model. Once the reward model reaches a certain level of accuracy, it replaces the human feedback: the outputs returned by the LLM are ranked by the reward model, which in turn is used to train the LLM to produce outputs that align with human expectations.
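To make the reward-model step more concrete, the following is a minimal sketch of how such a model can be trained from human rankings with a pairwise loss; the small scalar-reward network and the random stand-in representations are illustrative assumptions, and a real system would score the tokenized LLM outputs themselves.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Maps a (pooled) representation of an answer to a scalar reward."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, answer_repr: torch.Tensor) -> torch.Tensor:
        return self.score(answer_repr).squeeze(-1)

reward_model = RewardModel(hidden_size=128)
optimizer = torch.optim.AdamW(reward_model.parameters(), lr=1e-4)

# Toy stand-ins for encoded answers: "chosen" was ranked higher by humans.
chosen_repr = torch.randn(8, 128)
rejected_repr = torch.randn(8, 128)

# Pairwise ranking loss: reward(chosen) should exceed reward(rejected).
loss = -F.logsigmoid(reward_model(chosen_repr)
                     - reward_model(rejected_repr)).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```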
Why Do Existing Approaches Fail?
Current approaches face several challenges that impede efficient training. The amount of computing power needed to learn billions of parameters from trillions of tokens is enormous. One energy study estimated that training a transformer-based model with neural architecture search produces a carbon footprint equivalent to the lifetime emissions of five cars. The growth in computing power required to train these models thus outpaces the gains delivered by advances in hardware architecture and technology, and training huge LLMs remains feasible only for a handful of companies that can afford the computational cost. On a similar note, budget and carbon-footprint constraints force many transformer models to halt training before the training loss saturates; the training loss curve of the open-source Llama 2 model, for example, still shows room for further improvement.
Another important challenge is the huge amount of memory required to store the intermediate tensors and weights for the billions of parameters these models contain. Hence, memory-efficient training approaches are required to scale LLMs to complex tasks. Yet another challenge is the ethical considerations arising from the massive datasets used for training LLMs. These datasets generally require extensive cleansing and have to be carefully examined for issues related to fairness and data privacy.
How to Efficiently Train Huge Transformers
Several techniques have been proposed in the literature to improve the training efficiency of LLMs. These approaches relate to hardware acceleration, optimization, and parallelism in computations. A survey paper by Bohan Zhuang et al. discusses techniques to boost the training efficiency of attention-based models, categorized into methods for computational efficiency, memory efficiency, and hardware/algorithm co-design. In this section, we highlight some of the techniques that could be employed. Implementations of many of the techniques discussed in the survey are publicly available.
Improving Computational Efficiency
- Optimization: Nesterov Accelerated Gradient [1] and the adaptive learning-rate optimizer AdamW [2] are commonly used to speed up convergence of the gradient descent algorithm used in neural networks. Another recently proposed optimizer is Lion (Evolved Sign Momentum) [3], which only keeps track of the momentum of the first-order gradient and is therefore more memory-efficient than AdamW while achieving comparable or better accuracy. With the complex parameter spaces of huge LLMs, minimizing the training loss alone may not lead to effective generalization to unseen data. Sharpness-aware minimization (SAM) [4] therefore minimizes both the training loss and the loss sharpness: it searches for parameters that lie in neighborhoods with uniformly low loss and formulates this as an efficient min-max optimization problem (a minimal sketch of a SAM-style update follows this list).
- Weight Initialization: The generalization ability and convergence rate of a neural network depend on a good initialization of its parameters. Approaches that improve initialization include Fixup [5], ReZero [6], and SkipInit [7]; they aim to mitigate vanishing gradients and remove the need for normalization layers in deep networks. ConViT [8] applies a related idea to vision transformers, leveraging the relationship between attention modules and convolutional layers through a soft convolutional inductive bias.
- Sparse Training: Sparse training methods identify sub-networks or connections that most influence the training loss and train only those sub-networks instead of the entire network. Methods for sparse training can be found in [9] and [10].
- Overparameterization: Huge LLMs are heavily overparameterized, i.e., their parameter count is much larger than the number of training samples. Since overparameterized models tend to optimize more easily, training is performed on a large model and stopped early to limit the use of computational and hardware resources; the overparameterized model is then compressed and further trained to obtain reasonable performance at lower computational cost. A related effort is the ongoing TinyLlama project, which aims to produce a compact model based on the Llama 2 architecture.
- Large Batch Training: Using a large batch size reduces the number of parameter updates per epoch and improves hardware utilization. However, it has to be combined with careful tuning of the learning rate to achieve good generalization performance. Methods such as LARS [11] and LAMB [12] are widely used to scale the learning rate layer-wise for large batches.
- Incremental Learning: These techniques learn a model progressively, for example by stacking layers, initializing a larger model from a smaller one, or progressively dropping layers to speed up optimization.
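To illustrate the optimization ideas above, here is a minimal sketch of a SAM-style two-step update [4]; the toy linear model, the data, and the neighborhood size `rho` are illustrative assumptions. The weights are first perturbed toward the direction of steepest loss increase within a small neighborhood, and the gradient at that perturbed point is then used for the actual update.

```python
import torch
import torch.nn as nn

# Toy model and data purely for illustration.
model = nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
base_optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
rho = 0.05  # size of the neighborhood in which sharpness is measured

def loss_fn():
    return nn.functional.mse_loss(model(x), y)

# Step 1: ascend to the (approximate) worst-case point in the neighborhood.
loss_fn().backward()
grad_norm = torch.norm(torch.stack(
    [p.grad.norm() for p in model.parameters() if p.grad is not None]))
perturbations = []
with torch.no_grad():
    for p in model.parameters():
        e = rho * p.grad / (grad_norm + 1e-12)
        p.add_(e)
        perturbations.append(e)
model.zero_grad()

# Step 2: gradient at the perturbed weights, then restore and update.
loss_fn().backward()
with torch.no_grad():
    for p, e in zip(model.parameters(), perturbations):
        p.sub_(e)
base_optimizer.step()
model.zero_grad()
```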
Improving Data Efficiency
Since LLMs are trained on massive datasets, techniques that improve data efficiency can significantly speed up the training process.
- Token Masking: This technique is widely used in the self-supervised pre-training of LLMs with masked language modeling, whereby some tokens of the input are randomly masked and the LLM is trained to predict them. Token masking (and the equivalent image-patch masking for vision tasks) reduces the effective input length and thereby the computational resources required (a minimal sketch follows this list).
- Importance Sampling: This technique chooses informative examples for training, which helps speed up the convergence rate. Approaches utilizing the gradient norms are widely used to estimate per-sample importance.
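The following is a minimal sketch of the token-masking step used in masked language modeling; the 15% masking ratio, the vocabulary size, and the `MASK_ID`/`PAD_ID` values are illustrative assumptions.

```python
import torch

VOCAB_SIZE = 30000
MASK_ID = 103      # id of the [MASK] token (assumed for illustration)
PAD_ID = 0         # id of the padding token (assumed for illustration)
MASK_PROB = 0.15   # fraction of tokens to mask

def mask_tokens(input_ids: torch.Tensor):
    """Randomly mask tokens; return (masked inputs, labels for the loss)."""
    labels = input_ids.clone()
    # Choose positions to mask, never masking padding.
    mask = (torch.rand(input_ids.shape) < MASK_PROB) & (input_ids != PAD_ID)
    masked = input_ids.clone()
    masked[mask] = MASK_ID
    # Positions that were not masked are ignored by the loss (-100 in PyTorch).
    labels[~mask] = -100
    return masked, labels

# Toy batch of token ids.
batch = torch.randint(1, VOCAB_SIZE, (4, 16))
masked_inputs, labels = mask_tokens(batch)
# A masked-LM head would now predict the original ids at the masked positions;
# cross_entropy(logits.view(-1, VOCAB_SIZE), labels.view(-1)) ignores the -100s.
```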
Improving Memory Efficiency
Memory consumption is a key bottleneck in the training of LLMs, as memory has to be allocated for the model parameters, gradients, optimizer states, and activations. Memory usage can be optimized by the techniques given below.
- Parallelism: Parallelism can be applied to different aspects of training. Data parallelism splits a minibatch across multiple devices, model parallelism divides the model's computation graph across multiple workers, and tensor parallelism splits individual tensor operations across workers.
- Quantized Training: Quantized training reduces the precision of floating-point operations. A method commonly used for transformers is Automatic Mixed Precision (AMP) [13], in which a master copy of the weights is kept in full precision for the updates while reduced precision is used for the arithmetic operations.
- Rematerialization and Offloading: Rematerialization techniques store only a portion of the activations and weights and recompute the rest during backpropagation. Offloading refers to using host (CPU) memory as an extension of GPU memory.
- Parameter-Efficient Tuning: These techniques are an alternative to full fine-tuning of LLMs, whereby only a small subset of parameters is updated while the rest of the pre-trained model is frozen. An example in this category is LoRA [14], which models the update to the self-attention weight matrices as the product of two low-rank matrices and trains only those factors (a minimal sketch follows this list).
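The sketch below illustrates the LoRA idea [14] for a single linear layer; the rank, scaling factor, and layer sizes are illustrative assumptions. The pre-trained weight is frozen, and only the small low-rank factors are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (illustrative)."""
    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # frozen pre-trained weight
        self.base.bias.requires_grad_(False)
        # Low-rank factors: only these are trained.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update: (W + s * B A) x + b
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(768, 768, rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # only the low-rank factors
```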
Hardware/Algorithm Co-Design
Hardware accelerators are another way to improve training efficiency. Some of the hardware acceleration methods are given below:
- Sparse Matrix Multiplication: These techniques exploit the sparsity of attention matrices to reduce the number of operations required when multiplying them with dense matrices (a small illustration follows this list).
- Hardware-Aware Low-Precision: In this method, low-precision floating-point arithmetic is implemented directly in hardware components such as adders, multipliers, and memory blocks to reduce power consumption and achieve speedups.
- Hardware-Aware Efficient Attention: In this case, efficient attention mechanisms are implemented in the hardware.
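As a small software-level illustration of the sparse-multiplication idea, the sketch below builds a mostly-zero attention-style matrix in PyTorch and multiplies it with a dense matrix using a sparse layout; the sizes and the roughly 90% sparsity level are illustrative assumptions, and dedicated hardware goes further by skipping zero entries at the circuit level.

```python
import torch

# A mostly-zero "attention" matrix: keep only ~10% of the entries.
dense_attn = torch.rand(512, 512)
dense_attn[dense_attn < 0.9] = 0.0
values = torch.rand(512, 64)  # dense value matrix

# Convert to a sparse (COO) layout so zero entries are not stored or multiplied.
sparse_attn = dense_attn.to_sparse()
out_sparse = torch.sparse.mm(sparse_attn, values)

# The result matches the dense computation but uses far fewer multiplications.
out_dense = dense_attn @ values
print(torch.allclose(out_sparse, out_dense, atol=1e-4))
```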
Conclusion
Several open-source frameworks implement combinations of the techniques discussed above to improve training efficiency; examples include NVIDIA's Megatron-LM and Microsoft's DeepSpeed. As discussed in this article, many factors contribute to training efficiency, and research to further advance these areas is ongoing.
References
[1] Nesterov Accelerated Gradient.
[2] Decoupled Weight Decay Regularization (AdamW).
[3] Symbolic Discovery of Optimization Algorithms.
[4] Sharpness-Aware Minimization for Efficiently Improving Generalization.
[5] Fixup Initialization: Residual Learning Without Normalization.
[6] ReZero is All You Need: Fast Convergence at Large Depth.
[7] Batch Normalization in Deep Networks.
[8] ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases.
[9] Single-Shot Network Pruning Based on Connection Sensitivity.
[10] Drawing Early-Bird Tickets: Towards More Efficient Training of Deep Networks.
[11] Large Batch Training of Convolutional Networks.
[12] Large Batch Optimization for Deep Learning: Training BERT in 76 Minutes.
[13] Mixed Precision Training.
[14] LoRA: Low-Rank Adaptation of Large Language Models.