EQA-PMR-large is initialized from PMR-large and further fine-tuned on the training splits of 6 Extractive Question Answering (EQA) datasets from MRQA. Its performance on the in-domain dev sets is shown below. Note that RoBERTa-large and PMR-large are single-task fine-tuned models, while EQA-PMR-large is fine-tuned in a multi-task manner.

Because it is fine-tuned on multiple datasets, we believe that EQA-PMR-large generalizes better to other EQA tasks than PMR-large and RoBERTa-large.
| Model | SQuAD | NewsQA | HotpotQA | NaturalQuestions | TriviaQA | SearchQA |
|---|---|---|---|---|---|---|
| RoBERTa-large (single-task model) | 94.2 | 73.8 | 81.6 | 83.3 | 85.1 | 85.7 |
| PMR-large (single-task model) | 94.5 | 74.0 | 83.6 | 83.8 | 85.1 | 88.3 |
| EQA-PMR-large (multi-task model) | 94.2 | 73.7 | 66.9 | 82.3 | 85.4 | 88.7 |
## How to use

You can try the code from this repo for both training and inference.
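As a minimal sketch (not the repo's official script), the snippet below loads the checkpoint with Hugging Face `transformers` and runs extractive QA. It assumes the checkpoint is published on the Hub under the model id `DAMO-NLP-SG/EQA-PMR-large` and is compatible with the standard extractive QA head; if the PMR extraction head differs from what `AutoModelForQuestionAnswering` expects, use the training and inference scripts in this repo instead.

```python
# Minimal inference sketch. Assumptions: the checkpoint is available on the
# Hugging Face Hub as "DAMO-NLP-SG/EQA-PMR-large" (hypothetical id) and is
# loadable with the standard extractive QA head. Otherwise, fall back to the
# scripts in this repo.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "DAMO-NLP-SG/EQA-PMR-large"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "What is the capital of France?"
context = "Paris is the capital and most populous city of France."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode the answer span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)  # expected: "Paris"
```

For batched evaluation on MRQA-style data, the repo's own inference script is the supported path, since it handles the dataset's long-context chunking.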
## BibTeX entry and citation info
```bibtex
@article{xu2022clozing,
  title={From Clozing to Comprehending: Retrofitting Pre-trained Language Model to Pre-trained Machine Reader},
  author={Xu, Weiwen and Li, Xin and Zhang, Wenxuan and Zhou, Meng and Bing, Lidong and Lam, Wai and Si, Luo},
  journal={arXiv preprint arXiv:2212.04755},
  year={2022}
}
```