This paper is an incremental improvement to Learning Sparse Neural Networks through L_0 Regularization. The original L_0 method can only control the model size indirectly, through the penalty coefficient, which is hard to tune. This paper enables pruning to an explicit target size through a Lagrangian method.
The size constraint is enforced through the penalty

g(\lambda_1, \lambda_2, \bm{\alpha}) = \lambda_1 \cdot (s(\bm{\alpha}) - t) + \lambda_2 \cdot (s(\bm{\alpha}) - t)^2

where \lambda_1, \lambda_2 \in \mathbb{R} are two Lagrangian multipliers that are jointly updated during training, and \bm{\alpha} is the parameter of the Hard Concrete distribution, defined as:

\bm{u} \sim U(0, 1), \quad \bm{s} = \sigma\big((\log \bm{u} - \log(1 - \bm{u}) + \log \bm{\alpha}) / \beta\big), \quad \bm{z} = \min\big(\bm{1}, \max\big(\bm{0}, \bm{s} \cdot (\zeta - \gamma) + \gamma\big)\big)

with temperature \beta and stretch interval (\gamma, \zeta). Here s(\bm{\alpha}) is the expected number of remaining parameters under this distribution, and t is the target budget.
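To make the sampling concrete, here is a minimal PyTorch sketch of the Hard Concrete gate and the expected-size term s(\bm{\alpha}). The stretch limits \gamma = -0.1, \zeta = 1.1 and temperature \beta = 2/3 are the defaults from the L_0 paper; the function names are mine, not the authors'.

```python
import math
import torch

# Hard Concrete hyperparameters (defaults from the L0 paper; treat as assumptions).
GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0

def sample_hard_concrete(log_alpha: torch.Tensor) -> torch.Tensor:
    """Draw gates z in [0, 1] via the reparameterized Hard Concrete."""
    u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / BETA)
    s_bar = s * (ZETA - GAMMA) + GAMMA        # stretch to (gamma, zeta)
    return s_bar.clamp(0.0, 1.0)              # hard-rectify into [0, 1]

def expected_size(log_alpha: torch.Tensor, params_per_gate: float) -> torch.Tensor:
    """s(alpha): expected number of remaining parameters.
    Uses P(z > 0) = sigmoid(log_alpha - beta * log(-gamma / zeta))."""
    p_open = torch.sigmoid(log_alpha - BETA * math.log(-GAMMA / ZETA))
    return p_open.sum() * params_per_gate
```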
The random gate variables \bm{z} can be applied directly to blocks of parameters, or collected into the diagonal matrix \mathbf{G} = \text{diag}(z_1, \cdots, z_r) used in factorization-based pruning.
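For the factorization case, the gates simply scale the inner dimension of the two factors, so zeroed gates remove whole rank-1 components. A toy forward pass, reusing the sketch above (shapes and names assumed):

```python
def gated_factorized_forward(x, P, Q, log_alpha):
    """Forward pass through W ~ P @ diag(z) @ Q for P: (d_in, r), Q: (r, d_out)."""
    z = sample_hard_concrete(log_alpha)   # shape (r,)
    return ((x @ P) * z) @ Q              # broadcasting z == multiplying by diag(z)
```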
The overall training optimization is an adversarial game,

\max_{\lambda_1, \lambda_2} \; \min_{\bm{\theta}, \bm{\alpha}} \; \mathcal{L}(\bm{\theta}, \bm{\alpha}) + g(\lambda_1, \lambda_2, \bm{\alpha})

where the first-order optimality conditions for \lambda_1, \lambda_2 ensure the equality constraint s(\bm{\alpha}) = t is satisfied: any residual gap lets the multipliers grow the penalty without bound, so the inner minimization must drive s(\bm{\alpha}) to t.
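A hypothetical training step for this min-max game alternates gradient descent on (\bm{\theta}, \bm{\alpha}) with gradient ascent on (\lambda_1, \lambda_2). The optimizers and the task_loss helper below are assumptions for illustration, not the paper's exact recipe:

```python
import torch

# Multipliers are trained by ascent; a plain SGD step ascends after a sign flip.
lambda1 = torch.zeros(1, requires_grad=True)
lambda2 = torch.zeros(1, requires_grad=True)
opt_max = torch.optim.SGD([lambda1, lambda2], lr=1.0)

def train_step(model, log_alpha, opt_min, opt_max, batch, t, params_per_gate):
    """One alternating update: descend on (theta, alpha), ascend on lambdas.
    opt_min is assumed to hold model.parameters() and log_alpha."""
    loss = task_loss(model, batch, log_alpha)            # task_loss: assumed helper
    gap = expected_size(log_alpha, params_per_gate) - t  # s(alpha) - t
    lagrangian = loss + lambda1 * gap + lambda2 * gap ** 2

    opt_min.zero_grad()
    opt_max.zero_grad()
    lagrangian.backward()
    opt_min.step()                        # gradient descent on model and gate params
    for lam in (lambda1, lambda2):        # negate gradients so the SGD step ascends
        lam.grad.neg_()
    opt_max.step()
```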
Overall Recommendation
- 5: Transformative: This paper is likely to change our field. It should be considered for a best paper award.
- 4.5: Exciting: It changed my thinking on this topic. I would fight for it to be accepted.
- 4: Strong: I learned a lot from it. I would like to see it accepted.
- 3.5: Leaning positive: It can be accepted more or less in its current form. However, the work it describes is not particularly exciting and/or inspiring, so it will not be a big loss if people don’t see it in this conference.
- 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., I didn’t learn much from it, evaluation is not convincing, it describes incremental work). I believe it can significantly benefit from another round of revision, but I won’t object to accepting it if my co-reviewers are willing to champion it.
- 2.5: Leaning negative: I am leaning towards rejection, but I can be persuaded if my co-reviewers think otherwise.
- 2: Mediocre: I would rather not see it in the conference.
- 1.5: Weak: I am pretty confident that it should be rejected.
- 1: Poor: I would fight to have it rejected.