Topic | Replies | Views | Date
About Language Model | 0 | 365 | August 28, 2020
An Embarrassingly Simple Method to Mitigate Undesirable Properties of... | 0 | 51 | December 27, 2022
Deduplicating Training Data Makes Language Models Better | 0 | 58 | December 25, 2022
Prompt-free and Efficient Few-shot Learning with Language Models | 0 | 59 | December 15, 2022
Making Transformers Solve Compositional Tasks | 0 | 84 | November 16, 2022
Adaptive Testing and Debugging of NLP Models | 0 | 124 | September 26, 2022
How to implement pinyin-to-Chinese-character conversion | 0 | 191 | August 25, 2022
Sparse Progressive Distillation: Resolving Overfitting under... | 0 | 180 | May 29, 2022
SKEP: Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis | 1 | 194 | February 17, 2022
Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to... | 0 | 247 | January 24, 2022
Finetuning Pretrained Transformers into RNNs | 0 | 143 | January 16, 2022
Block Pruning For Faster Transformers | 0 | 154 | January 16, 2022
What’s in Your Head? Emergent Behaviour in Multi-Task Transformer Models | 0 | 193 | December 30, 2021
AdapterDrop: On the Efficiency of Adapters in Transformers | 0 | 310 | December 30, 2021
Frustratingly Simple Pretraining Alternatives to Masked Language Modeling | 0 | 350 | November 30, 2021
Condenser: a Pre-training Architecture for Dense Retrieval | 0 | 292 | November 21, 2021
How to Train BERT with an Academic Budget | 0 | 262 | November 17, 2021
The Power of Scale for Parameter-Efficient Prompt Tuning | 0 | 223 | November 15, 2021
Constrained Language Models Yield Few-Shot Semantic Parsers | 0 | 282 | November 15, 2021
ConvFiT: Conversational Fine-Tuning of Pretrained Language Models | 0 | 210 | November 10, 2021
#EMNLP21# The Stem Cell Hypothesis: Neural Networks Also Have Stem Cells but Struggle to Be Generalists | 0 | 341 | November 6, 2021
Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting | 0 | 209 | November 6, 2021
Lower Perplexity is Not Always Human-Like | 0 | 244 | November 5, 2021
Bird’s Eye: Probing for Linguistic Graph Structures with a Simple Information-Theoretic Approach | 0 | 171 | November 2, 2021
When Do You Need Billions of Words of Pretraining Data? | 0 | 219 | October 26, 2021
CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language... | 1 | 274 | October 18, 2021
LIMIT-BERT: Linguistics Informed Multi-Task BERT | 0 | 216 | October 18, 2021
Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese | 0 | 260 | October 18, 2021
What Context Features Can Transformer Language Models Use? | 0 | 272 | September 22, 2021
A question: why do ltp and hanlp both use electra rather than other pretrained models? | 2 | 669 | September 11, 2021