Structured Pruning of Large Language Models

This paper is an incremental improvement over Learning Sparse Neural Networks through L0 Regularization. The L_0 approach can only control the resulting model size indirectly through the regularization coefficient, which is hard to tune. This paper makes the pruned size directly controllable by formulating the size budget as a constraint and enforcing it with a Lagrangian method.

\newcommand{\bm}[1]{\boldsymbol{\mathbf{#1}}} \begin{align*} g(\lambda, \bm{\alpha}) = \lambda_1 \cdot (s(\bm{\alpha})-t) + \lambda_2 \cdot (s(\bm{\alpha})-t)^2 \end{align*}

where \lambda_1, \lambda_2 \in \mathbb{R} are two Lagrange multipliers that are jointly updated during training, and \bm{\alpha} are the parameters of the Hard Concrete distribution, whose samples are drawn as:

\newcommand{\mask}{\mathbf{z}} \begin{align*} & \mathbf{u} \sim U(0,1) \\ & \ \ \mathbf{s} = \text{sigmoid}(\log \mathbf{u} - \log(1-\mathbf{u}) + \bm{\alpha}) \\ & \ \ \bar{\mathbf{s}} = \mathbf{s} \times (r - l) + l \\ & \ \ \mask = \min(1, \max(0, \bar{\mathbf{s}})) \end{align*}
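
The sampling above is easy to implement with the reparameterization trick. Below is a minimal PyTorch sketch (not the authors' code); the stretch interval (l, r) = (-0.1, 1.1) and the function name are assumptions, following the usual choice in the L_0-regularization paper.

```python
import torch

def sample_hard_concrete(alpha: torch.Tensor, l: float = -0.1, r: float = 1.1) -> torch.Tensor:
    """Draw one gate z in [0, 1] per entry of alpha, differentiable w.r.t. alpha."""
    u = torch.rand_like(alpha)                                    # u ~ U(0, 1)
    s = torch.sigmoid(torch.log(u) - torch.log(1.0 - u) + alpha)  # binary Concrete sample
    s_bar = s * (r - l) + l                                       # stretch to (l, r)
    return s_bar.clamp(min=0.0, max=1.0)                          # hard-clip to [0, 1]
```

At test time the stochastic gates are typically replaced by a deterministic estimate (e.g., the clipped stretched mean); that detail is omitted here.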

s(\bm{\alpha}) is the expected number of parameters kept by the Hard Concrete gates, and t is the target budget.
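
For the Hard Concrete distribution above (temperature 1, matching the sampling equations), the probability that a gate is nonzero has a closed form, so s(\bm{\alpha}) and the penalty g(\lambda, \bm{\alpha}) can be computed exactly. A hedged sketch follows; params_per_gate (how many weights each gate controls) and the helper names are illustrative assumptions, not from the paper.

```python
import math
import torch

def expected_size(alpha: torch.Tensor, params_per_gate: torch.Tensor,
                  l: float = -0.1, r: float = 1.1) -> torch.Tensor:
    """s(alpha): expected number of parameters kept by the gates."""
    # P(z_k != 0) = sigmoid(alpha_k - log(-l / r)), the closed form from the L0 paper.
    p_nonzero = torch.sigmoid(alpha - math.log(-l / r))
    return (p_nonzero * params_per_gate).sum()

def lagrangian_penalty(lam1, lam2, alpha, params_per_gate, target):
    """g(lambda, alpha) = lam1 * (s(alpha) - t) + lam2 * (s(alpha) - t)^2."""
    gap = expected_size(alpha, params_per_gate) - target
    return lam1 * gap + lam2 * gap ** 2
```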

The random variables \mask can serve as gates over parameter blocks, or be collected into a diagonal matrix \mathbf{G} = \text{diag}(z_1, \cdots, z_r) for factorization-based pruning:

\begin{align*} \mathbf{W} = \mathbf{PGQ} = \sum_{k=1}^r z_k \, (\mathbf{p}_k \mathbf{q}_k) \end{align*}

where \mathbf{p}_k is the k-th column of \mathbf{P} and \mathbf{q}_k is the k-th row of \mathbf{Q}, so each gate switches one rank-one component of \mathbf{W} on or off.
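
To make the factorized form concrete, the sketch below gates the rank-one components of a factorized weight matrix; the dimensions, the random P and Q, and the reuse of sample_hard_concrete from the sketch above are illustrative assumptions.

```python
import torch

d_in, d_out, rank = 512, 512, 64
P = torch.randn(d_in, rank)                    # columns p_k
Q = torch.randn(rank, d_out)                   # rows q_k
alpha = torch.zeros(rank, requires_grad=True)  # one gate parameter per rank-one component

z = sample_hard_concrete(alpha)                # gates z_1, ..., z_r
W = P @ torch.diag(z) @ Q                      # W = P G Q = sum_k z_k * p_k q_k

# After training, components whose gates have collapsed to zero can be dropped,
# leaving smaller P and Q with a reduced effective rank.
keep = z > 0
P_pruned, Q_pruned = P[:, keep], Q[keep, :]
```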

The overall training objective is a minimax (adversarial) game:

\newcommand{\param}{\bm{\theta}} \begin{align*} \max_{\lambda_1, \lambda_2}\, \min_{\param, \bm{\alpha}}\, \mathbb{E}_{\mathbf{u}} \left[\frac{1}{D}\sum_{i=1}^D\mathcal{L}(\mathbf{x}_i,\mathbf{y}_i;\tilde{\param})\right] + g(\lambda, \bm{\alpha}). \end{align*}

Because \lambda_1, \lambda_2 are updated by gradient ascent, any violation of the budget lets the penalty grow without bound, so the first-order (saddle-point) conditions push training toward satisfying the equality s(\bm{\alpha}) = t.
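
A hedged sketch of the resulting saddle-point updates is below: gradient descent on the model parameters and \bm{\alpha}, gradient ascent on the multipliers. The model, data loader, task loss, params_per_gate, target, and the helpers from the earlier sketches are assumed placeholders, and the optimizer choices are illustrative.

```python
import torch

lam = torch.zeros(2, requires_grad=True)   # lambda_1, lambda_2
opt_model = torch.optim.Adam(list(model.parameters()) + [alpha], lr=1e-4)
opt_lam = torch.optim.Adam([lam], lr=1e-3, maximize=True)  # ascent (needs a recent PyTorch)

for x, y in loader:
    z = sample_hard_concrete(alpha)                          # sample gates
    loss = task_loss(model(x, gates=z), y)                   # data term
    penalty = lagrangian_penalty(lam[0], lam[1], alpha, params_per_gate, target)
    objective = loss + penalty

    opt_model.zero_grad()
    opt_lam.zero_grad()
    objective.backward()
    opt_model.step()   # min over theta and alpha
    opt_lam.step()     # max over lambda_1 and lambda_2
```

Setting maximize=True on the multiplier optimizer is just a convenient way to flip the sign of its gradient; simultaneous or alternating updates are both reasonable choices here.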

Overall Recommendation

  • 5: Transformative: This paper is likely to change our field. It should be considered for a best paper award.
  • 4.5: Exciting: It changed my thinking on this topic. I would fight for it to be accepted.
  • 4: Strong: I learned a lot from it. I would like to see it accepted.
  • 3.5: Leaning positive: It can be accepted more or less in its current form. However, the work it describes is not particularly exciting and/or inspiring, so it will not be a big loss if people don’t see it in this conference.
  • 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., I didn’t learn much from it, evaluation is not convincing, it describes incremental work). I believe it can significantly benefit from another round of revision, but I won’t object to accepting it if my co-reviewers are willing to champion it.
  • 2.5: Leaning negative: I am leaning towards rejection, but I can be persuaded if my co-reviewers think otherwise.
  • 2: Mediocre: I would rather not see it in the conference.
  • 1.5: Weak: I am pretty confident that it should be rejected.
  • 1: Poor: I would fight to have it rejected.