Data Augmentation for Cross-Domain Named Entity Recognition

This paper adapts the back-translation trick commonly used in text style transfer to generate low-resource-domain training data from high-resource-domain data. The authors first linearize the NER data by inserting B-/I- label tokens before each entity token, then train a seq2seq model with two copies of the word embeddings, one for the source domain and one for the target domain, under a denoising objective. Swapping the embeddings makes the model generate cross-domain data, and swapping them again "back-translates" that data into the original domain. With this detransformation objective, the seq2seq learns to generate cross-domain training data, which is finally used to train a tagger in the low-resource domain.
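For concreteness, here is a minimal sketch of the linearization step, assuming plain BIO tags as input; the exact label-token format in the paper may differ:

```python
def linearize(tokens, tags):
    """Insert B-/I- label tokens before each entity token so the
    sequence can be fed to a plain seq2seq model."""
    out = []
    for token, tag in zip(tokens, tags):
        if tag != "O":          # entity token: prepend its label as a token
            out.append(tag)
        out.append(token)
    return out

def delinearize(sequence):
    """Recover (token, tag) pairs from a linearized (possibly generated) sequence."""
    pairs, pending = [], "O"
    for item in sequence:
        if item.startswith(("B-", "I-")):
            pending = item      # label applies to the next token
        else:
            pairs.append((item, pending))
            pending = "O"
    return pairs

# Example:
# linearize(["John", "lives", "in", "Paris"], ["B-PER", "O", "O", "B-LOC"])
# -> ["B-PER", "John", "lives", "in", "B-LOC", "Paris"]
```

Delinearizing the model's generated output recovers synthetic token/tag pairs that can be used directly as training data for the target-domain tagger.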

Comments

  • Good idea. I like this one.
  • The method does not use pretrained language models (PLMs). It is unclear whether it would still help when combined with a PLM-based tagger.

Rating

  • 5: Transformative: This paper is likely to change our field. It should be considered for a best paper award.
  • 4.5: Exciting: It changed my thinking on this topic. I would fight for it to be accepted.
  • 4: Strong: I learned a lot from it. I would like to see it accepted.
  • 3.5: Leaning positive: It can be accepted more or less in its current form. However, the work it describes is not particularly exciting and/or inspiring, so it will not be a big loss if people don’t see it in this conference.
  • 3: Ambivalent: It has merits (e.g., it reports state-of-the-art results, the idea is nice), but there are key weaknesses (e.g., I didn’t learn much from it, evaluation is not convincing, it describes incremental work). I believe it can significantly benefit from another round of revision, but I won’t object to accepting it if my co-reviewers are willing to champion it.
  • 2.5: Leaning negative: I am leaning towards rejection, but I can be persuaded if my co-reviewers think otherwise.
  • 2: Mediocre: I would rather not see it in the conference.
  • 1.5: Weak: I am pretty confident that it should be rejected.
  • 1: Poor: I would fight to have it rejected.