Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese

This technical paper presents Mengzi, a lightweight pre-trained NLG model for Chinese. The authors' method involves an unpublished trick called Dynamic Gradient Correction; beyond that, the pre-training stage is mostly a combination of the vanilla T5 architecture and semi-supervised POS/NER learning.

In the fine-tuning part, the authors focused heavily on leaderboard scores and applied a range of tricks to boost them. They showed that Mengzi significantly outperforms recent PLMs such as Pangu and Motian while using as few as 1B parameters.

Comments

  • Zhou et al. (2020) reported that POS/NER objectives provide very little gain on downstream fine-tuning (-0.3 to +0.3) and that the POS results were below SOTA. Why did you choose to adopt these two tasks? Do you have an ablation study to support this choice?
  • Your CMRC 2018 score falls well behind Pangu, Motian, and ShenZhou, so why did you highlight it in bold? Do you have any idea why Mengzi performed so poorly on this task?
  • Without these fine-tuning tricks, Mengzi seemed to perform poorly on the dev sets. Do you have the corresponding test-set results so we can compare? Does this mean it is the fine-tuning tricks that won you first place on the leaderboard?
Rating

  • 5: Transformative: This paper is likely to change our field. It should be considered for a best paper award.
  • 4.5: Exciting: It changed my thinking on this topic. I would fight for it to be accepted.
  • 4: Strong: I learned a lot from it. I would like to see it accepted.
  • 3.5: Leaning positive: It can be accepted more or less in its current form. However, the work it describes is not particularly exciting and/or inspiring, so it will not be a big loss if people don’t see it in this conference.
  • 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., I didn’t learn much from it, evaluation is not convincing, it describes incremental work). I believe it can significantly benefit from another round of revision, but I won’t object to accepting it if my co-reviewers are willing to champion it.
  • 2.5: Leaning negative: I am leaning towards rejection, but I can be persuaded if my co-reviewers think otherwise.
  • 2: Mediocre: I would rather not see it in the conference.
  • 1.5: Weak: I am pretty confident that it should be rejected.
  • 1: Poor: I would fight to have it rejected.