# Erlangshen-SimCSE-110M-Chinese

## 简介 Brief Introduction

基于SimCSE无监督版本,用搜集整理的中文NLI数据进行SimCSE有监督任务的训练,在中文句子对任务上有良好的效果。

Based on the unsupervised version of SimCSE, we trained the supervised SimCSE task on collected and curated Chinese NLI data. The model performs well on Chinese sentence-pair tasks.

## 模型分类 Model Taxonomy

| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | Bert | 110M | 中文 Chinese |

## 模型信息 Model Information

为了获得一个通用句子向量表征的模型,我们基于bert-base模型,用大量的无监督数据和有监督数据进行对比学习,最终获得了一个无需微调、直接利用模型输出的[CLS]向量就能进行相似度判断的模型。与先用bert模型针对任务微调、再做句子相似度任务不同,我们的模型在预训练完成后就直接具备提取句子向量的能力。在一些任务上的测评效果如下:

In order to obtain a general sentence-embedding model, we performed contrastive learning on top of the bert-base model with a large amount of unsupervised and supervised data, and finally obtained a model whose [CLS] output can be used to judge sentence similarity without any fine-tuning. Unlike the usual pipeline of fine-tuning a BERT model on the target task before running sentence-similarity tasks, our model can extract sentence vectors directly after pre-training. The evaluation results on several tasks are as follows:

| 模型 Model | LCQMC | BQ | PAWSX | ATEC | STS-B |
| :---- | :----: | :----: | :----: | :----: | :----: |
| Bert | 62 | 38.62 | 17.38 | 28.98 | 68.27 |
| Bert-large | 63.78 | 37.51 | 18.63 | 30.24 | 68.87 |
| RoBerta | 67.3 | 39.89 | 16.79 | 30.57 | 69. |
| RoBerta large | 67.25 | 38.39 | 19.09 | 30.85 | 69.36 |
| RoFormer | 63.58 | 39.9 | 17.52 | 29.37 | 67.32 |
| SimBERT | 73.43 | 40.98 | 15.87 | 31.24 | 72 |
| Erlangshen-SimCSE-110M-Chinese | 74.94 | 56.97 | 21.84 | 34.12 | 70.5 |

备注:我们的模型直接用[CLS]向量,无whitening;其余模型为last avg + whitening。

ps: Our model uses the [CLS] vector directly, with no whitening; the other models use last-layer averaging plus whitening.
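For reference, the "last avg + whitening" recipe used by the baselines averages the last-layer hidden states and then whitens the pooled vectors. Below is a minimal numpy sketch of the whitening step (mean-centering plus an SVD-derived kernel, as in BERT-whitening); the `whitening` helper and its interface are our own illustration, not part of this repository:

```python
import numpy as np

def whitening(embeddings: np.ndarray, dim: int = 768):
    """Whiten sentence embeddings so their covariance is (approximately) the identity.

    embeddings: (N, hidden) matrix of last-layer-averaged sentence vectors,
    ideally with N much larger than hidden so the covariance is well estimated.
    """
    mu = embeddings.mean(axis=0, keepdims=True)   # (1, hidden) mean vector
    cov = np.cov((embeddings - mu).T)             # (hidden, hidden) covariance
    u, s, _ = np.linalg.svd(cov)                  # cov = u @ diag(s) @ u.T
    W = (u / np.sqrt(s))[:, :dim]                 # whitening kernel, optionally truncated
    return (embeddings - mu) @ W
```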
## 使用 Usage

### 加载模型 Loading Models

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained('IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese')
tokenizer = AutoTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese')
```
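Note that the checkpoint ships with a masked-LM head, hence `AutoModelForMaskedLM` above. If you only need sentence vectors, it should also work to load the bare encoder with `AutoModel` and read `last_hidden_state` directly; this variant is our sketch, not part of the original card:

```python
from transformers import AutoModel

# load only the BERT encoder; the unused MLM head weights are dropped with a warning
encoder = AutoModel.from_pretrained('IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese')
inputs = tokenizer('今天天气真不错,我们去散步吧!', return_tensors="pt")
cls_embedding = encoder(**inputs).last_hidden_state[:, 0, :].squeeze()
```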
### 使用示例 Usage Examples

```python
import torch
from sklearn.metrics.pairwise import cosine_similarity

texta = '今天天气真不错,我们去散步吧!'
textb = '今天天气真糟糕,还是在宅家里写bug吧!'

inputs_a = tokenizer(texta, return_tensors="pt")
inputs_b = tokenizer(textb, return_tensors="pt")

# 用torch.no_grad()包住前向计算 (run the forward passes under torch.no_grad())
with torch.no_grad():
    outputs_a = model(**inputs_a, output_hidden_states=True)
    # 取最后一层隐状态的[CLS]向量作为句向量 (the [CLS] vector of the last hidden layer is the sentence embedding)
    texta_embedding = outputs_a.hidden_states[-1][:, 0, :].squeeze()

    outputs_b = model(**inputs_b, output_hidden_states=True)
    textb_embedding = outputs_b.hidden_states[-1][:, 0, :].squeeze()

# if you use cuda, the embeddings should be converted with .cpu().numpy() first
similarity_score = cosine_similarity(texta_embedding.reshape(1, -1), textb_embedding.reshape(1, -1))[0][0]
print(similarity_score)
```
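If the tensors live on the GPU, sklearn needs the explicit `.cpu().numpy()` conversion noted above. As a device-agnostic alternative (our suggestion, not part of the original example), the same score can be computed directly in torch:

```python
import torch.nn.functional as F

# cosine similarity computed in torch; works unchanged for CPU or CUDA tensors
similarity = F.cosine_similarity(texta_embedding.unsqueeze(0), textb_embedding.unsqueeze(0)).item()
print(similarity)
```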
## 引用 Citation

如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):

If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):

```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```

也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```