Dynamic Sampling Strategies for Multi-Task Reading Comprehension

Multi-task learning often suffers from catastrophic forgetting due to imbalanced datasets. This paper proposes a simple dynamic sampling strategy that samples instances in proportion to the gap between each task's current performance and its best single-task performance. Combined with heterogeneous batches that mix samples across tasks, the method yields substantial gains over prior MTL sampling strategies.
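The gap-proportional sampling idea can be sketched in a few lines. This is a minimal illustration, not the paper's code: the function names, the dict interface, and the uniform fallback when all gaps close are assumptions.

```python
import random

def dynamic_sampling_weights(current_perf, single_task_perf):
    """Weight each task by its gap to a single-task reference score.

    Both arguments map task name -> metric (higher is better). This is
    an illustrative sketch of gap-proportional sampling, not the
    paper's actual API.
    """
    gaps = {t: max(single_task_perf[t] - current_perf[t], 0.0)
            for t in current_perf}
    total = sum(gaps.values())
    if total == 0.0:
        # Every task has caught up with its reference: fall back to uniform.
        return {t: 1.0 / len(gaps) for t in gaps}
    return {t: g / total for t, g in gaps.items()}

def sample_task(weights, rng=random):
    """Draw the next task to train on, proportional to its gap weight."""
    tasks = list(weights)
    return rng.choices(tasks, weights=[weights[t] for t in tasks], k=1)[0]
```

Tasks that lag their single-task reference the most are sampled most often, so training effort shifts toward whatever is currently being forgotten.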


  • Unbelievable that such a simple trick brings such significant gains. ML practitioners should definitely give it a try.


Making it dynamic is a good point.
However, tuning based on val_metrics feels like "add water when there is too much flour, add flour when there is too much water": it effectively lumps all tasks together. Often we do not care about the average val_metrics across all tasks, but about the val_metrics of one specific task; the other tasks only serve as regularizers.
In my experience, adjusting the per-task loss weights based on the gap in training_metrics is very effective against overfitting.
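The recipe above can be sketched as follows. Taking the gap against a per-task reference training metric (e.g., single-task training performance) is one plausible reading of the comment; the function name, the `floor` parameter, and the mean-one normalization are assumptions, not the commenter's exact code.

```python
def loss_weights_from_train_gap(train_metrics, reference_metrics, floor=0.05):
    """Hypothetical sketch: up-weight the loss of tasks whose training
    metric still lags a per-task reference (higher metric = better).

    `floor` keeps every weight strictly positive so no task is dropped
    entirely; weights are normalized to average 1 so the overall loss
    scale stays stable across steps.
    """
    gaps = {t: max(reference_metrics[t] - train_metrics[t], floor)
            for t in train_metrics}
    mean_gap = sum(gaps.values()) / len(gaps)
    return {t: g / mean_gap for t, g in gaps.items()}
```

A task that has already fit its training data gets its loss down-weighted, which is where the anti-overfitting effect comes from.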


I also use loss re-weighting fairly often, but my objective is somewhat different: mainly to prevent one or two tasks from dominating and to keep all tasks balanced. In this setting, some tasks' losses are much larger than others', so I re-weight based on the loss values. Re-weighting based on training metrics might be more principled, but decoding for many structured prediction tasks is slow, and computing the metrics takes longer than training itself, so I stick with loss-based weighting.
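A minimal sketch of this magnitude-based balancing (the names and the running-average window are assumptions, not the commenter's exact code):

```python
def magnitude_balanced_weights(recent_losses, eps=1e-8):
    """Scale each task's loss by the inverse of its recent average
    magnitude, so a task with a naturally large loss cannot dominate
    the combined objective.

    `recent_losses` maps task name -> list of recent loss values.
    """
    avg = {t: sum(ls) / len(ls) for t, ls in recent_losses.items()}
    return {t: 1.0 / (a + eps) for t, a in avg.items()}

def combined_loss(task_losses, weights):
    """Weighted sum of per-task losses for one training step."""
    return sum(weights[t] * loss for t, loss in task_losses.items())
```

With these weights, each task contributes roughly equally to the combined loss regardless of its raw scale, which is exactly the "no single task dominates" goal described above.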
