NER-PMR-large is initialized with PMR-large and further fine-tuned on four NER training datasets, namely CoNLL, WNUT17, ACE2004, and ACE2005. The model's performance on the test sets is shown in the table below. Note that the RoBERTa-large and PMR-large results come from single-task fine-tuning, while NER-PMR-large is a multi-task fine-tuned model.
Because it is fine-tuned on multiple datasets, we believe that NER-PMR-large has better generalization capability to other NER tasks than PMR-large and RoBERTa-large. You can try the code from this repo for both training and inference.
| Model | CoNLL | WNUT17 | ACE2004 | ACE2005 |
|---|---|---|---|---|
| RoBERTa-large (single-task model) | 92.8 | 57.1 | 86.3 | 87.0 |
| PMR-large (single-task model) | 93.6 | 60.8 | 87.5 | 87.4 |
| NER-PMR-large (multi-task model) | 92.9 | 54.7 | 87.8 | 88.4 |
How to use
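The full training and inference code is provided in the authors' repository. As a rough, non-authoritative sketch of how the released weights might be loaded with the transformers library (the Hub identifier below and the query wording are assumptions, and the actual PMR span-extraction head is only available in the repo):

```python
from transformers import AutoTokenizer, AutoModel

# Assumed checkpoint identifier; replace with the actual Hub path for this card.
model_name = "DAMO-NLP-SG/NER-PMR-large"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)  # loads the RoBERTa-style encoder weights

# PMR frames NER as extractive machine reading comprehension:
# an entity-type query is paired with the context, and entity mentions
# are extracted as answer spans by the PMR extractor head (from the repo).
query = "Person. Person entities are the names of people."  # illustrative query only
context = "Barack Obama was born in Hawaii."
inputs = tokenizer(query, context, return_tensors="pt")
outputs = model(**inputs)  # encoder hidden states; span extraction is done by the repo's head
print(outputs.last_hidden_state.shape)
```

Since PMR casts NER as query-context span extraction, the extractor head and the span-decoding logic should be taken from the repository rather than from this sketch.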
BibTeX entry and citation info
@article{xu2022clozing,
title={From Clozing to Comprehending: Retrofitting Pre-trained Language Model to Pre-trained Machine Reader},
author={Xu, Weiwen and Li, Xin and Zhang, Wenxuan and Zhou, Meng and Bing, Lidong and Lam, Wai and Si, Luo},
journal={arXiv preprint arXiv:2212.04755},
year={2022}
}