
DeepMind’s PEER scales language models with millions of tiny experts


Parameter-Efficient Expert Retrieval (PEER) is a technique that allows LLMs to scale to millions of tiny experts while remaining resource-efficient.
Mixture-of-Experts (MoE) has become a popular technique for scaling large language models (LLMs) without exploding computational costs. Instead of using the entire model capacity for every input, MoE architectures route each input to small but specialized “expert” modules. This lets LLMs increase their parameter count while keeping inference costs low. MoE is used in several popular LLMs, including Mixtral, DBRX, Grok and reportedly GPT-4.
However, current MoE techniques have limitations that restrict them to a relatively small number of experts. In a new paper, Google DeepMind introduces Parameter-Efficient Expert Retrieval (PEER), a novel architecture that can scale MoE models to millions of experts, further improving the performance-compute tradeoff of large language models.

The challenge of scaling LLMs
The past few years have shown that scaling language models by increasing their parameter count leads to improved performance and new capabilities. However, there is a limit to how much you can scale a model before running into computational and memory bottlenecks.
Every transformer block used in LLMs has attention layers and feedforward (FFW) layers. The attention layer computes the relations between the tokens in the sequence fed to the transformer block, while the feedforward network is responsible for storing the model’s knowledge. FFW layers account for two-thirds of the model’s parameters and are one of the bottlenecks of scaling transformers. In the classic transformer architecture, all of the FFW’s parameters are used during inference, which makes their computational footprint directly proportional to their size.
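To make this concrete, here is a minimal PyTorch-style sketch of a standard transformer block with a dense FFW. The dimensions and module layout are illustrative assumptions, not the configuration of any specific model.

```python
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Standard transformer block: attention + dense feedforward (FFW).
    Every FFW parameter is touched for every token, so compute grows
    linearly with the FFW's size. All sizes below are illustrative."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # Dense FFW: two projections that hold most of the block's parameters.
        self.ffw = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
        )

    def forward(self, x):
        # Attention relates the tokens in the sequence to one another.
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        # The dense FFW is applied to every token with all of its weights.
        return self.norm2(x + self.ffw(x))
```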
MoE addresses this challenge by replacing the single dense FFW layer with sparsely activated expert modules. Each expert contains a fraction of the parameters of the full dense layer and specializes in certain areas. A router assigns each input to the few experts that are most likely to provide the most accurate answer, as illustrated in the sketch below.
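As a rough illustration of this routing step, the following sketch swaps the dense FFW for a handful of small expert MLPs selected by a top-k softmax router. The expert count, sizes, and gating scheme are simplified assumptions for exposition; this is not PEER's retrieval mechanism, which the paper scales to millions of experts.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Simplified Mixture-of-Experts layer: a linear router scores all
    experts for each token, and only the top-k experts are evaluated."""
    def __init__(self, d_model=512, d_ff=256, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # gating scores
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff),
                          nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                       # x: (n_tokens, d_model)
        scores = self.router(x)                 # (n_tokens, n_experts)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk_scores, dim=-1)
        out = torch.zeros_like(x)
        # Each token is processed only by its k selected experts,
        # so most expert parameters stay untouched for any given input.
        for token in range(x.size(0)):
            for slot in range(self.k):
                expert = self.experts[int(topk_idx[token, slot])]
                out[token] += weights[token, slot] * expert(x[token])
        return out
```

The per-token loop is written for clarity; production MoE implementations batch tokens by expert, but the compute saving is the same: only k of the n_experts modules run for each input.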
