This paper adapts the Break-It-Fix-It (BIFI) framework from the code repair task to grammatical error correction (GEC) by replacing the oracle compiler with the proposed LM-Critic. LM-Critic judges a sentence grammatical if it has the highest LM probability among a set of local neighbors generated by a heuristic perturbation function. After using LM-Critic to split the unlabeled data into good (grammatical) and bad (ungrammatical) sentences, they apply a pre-trained fixer to the bad sentences and keep the outputs that LM-Critic judges good. They then train a breaker on the resulting paired data and apply it to the good sentences. Finally, they retrain the fixer on the paired data produced in the first and third steps, and these steps are repeated for multiple cycles.
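The local-optimum criterion can be illustrated with a minimal sketch. Note this is an assumption-laden simplification: `perturb` here uses simple word-level edits rather than the paper's actual perturbation function, and `log_prob` stands for any sentence scorer (e.g., log-probability under a pretrained LM).

```python
import random

def perturb(sentence, n_neighbors=8, seed=0, max_tries=100):
    """Generate local neighbors via word-level edits (swap / drop / duplicate).
    A simplified stand-in for the paper's perturbation function."""
    rng = random.Random(seed)
    words = sentence.split()
    neighbors = set()
    tries = 0
    while len(neighbors) < n_neighbors and tries < max_tries and len(words) > 1:
        tries += 1
        op = rng.choice(["swap", "drop", "dup"])
        i = rng.randrange(len(words))
        w = list(words)
        if op == "swap" and i + 1 < len(w):
            w[i], w[i + 1] = w[i + 1], w[i]
        elif op == "drop":
            del w[i]
        else:
            w.insert(i, w[i])
        candidate = " ".join(w)
        if candidate != sentence:
            neighbors.add(candidate)
    return neighbors

def lm_critic(sentence, log_prob, n_neighbors=8):
    """Label `sentence` grammatical iff it is a local optimum of `log_prob`:
    strictly more probable than every perturbed neighbor."""
    score = log_prob(sentence)
    return all(score > log_prob(n) for n in perturb(sentence, n_neighbors))
```

With a real LM scorer plugged in as `log_prob`, a grammatical sentence should outscore its corrupted neighbors and pass the check, while an ungrammatical one typically has some neighbor with equal or higher probability and fails it.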
- The intuition is appealing: naively corrupted good sentences often do not match the real distribution of errors, whereas the learned breaker produces corruptions that match it more closely.
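The round-based training loop summarized in the review can be sketched as follows. This is a hypothetical skeleton, not the authors' code: the `fixer`, `breaker_train`, and `fixer_train` callables stand in for actual seq2seq models and training runs.

```python
def bifi_round(data, fixer, breaker_train, fixer_train, critic):
    """One simplified BIFI round with an LM-Critic-style `critic`.
    `fixer` maps a bad sentence to a candidate fix; the `*_train`
    callables return a new model (callable) trained on (input, output) pairs."""
    good = [x for x in data if critic(x)]
    bad = [x for x in data if not critic(x)]
    # Step 1: fix bad sentences; keep only pairs whose output passes the critic.
    fixed_pairs = [(b, fixer(b)) for b in bad]
    fixed_pairs = [(b, f) for (b, f) in fixed_pairs if critic(f)]
    # Step 2: train a breaker on reversed pairs (good input -> bad output).
    breaker = breaker_train([(f, b) for (b, f) in fixed_pairs])
    # Step 3: break the good sentences to create additional training pairs.
    broken_pairs = [(breaker(g), g) for g in good]
    # Step 4: retrain the fixer on the pairs from steps 1 and 3, then repeat.
    fixer = fixer_train(fixed_pairs + broken_pairs)
    return fixer, breaker
```

The key design point the review highlights lives in steps 2–3: because the breaker is trained on corruptions the fixer actually repaired, its synthetic errors resemble the real error distribution more closely than random noising would.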