Randeng-T5-77M-MultiTask-KF-Chinese 燃灯-T5-77M-多任务-中文-KF
Note: the "model.safetensors" weights file is fairly large and is hosted on Hugging Face; please download it yourself.
huggingface: https://huggingface.co/zhaoxiaopang111/Randeng-T5-77M-MultiTask-KF-Chinese/
Usage: git clone this repository, then download the "model.safetensors" weights from Hugging Face.
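Alternatively, the large weights file can be fetched programmatically. A minimal sketch using the huggingface_hub library (hf_hub_download is its standard download helper; the repo_id comes from the link above):

```python
# Minimal sketch: download the large weights file programmatically.
# Assumes the huggingface_hub package is installed (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

weights_path = hf_hub_download(
    repo_id="zhaoxiaopang111/Randeng-T5-77M-MultiTask-KF-Chinese",
    filename="model.safetensors",  # the large file mentioned above
)
print(weights_path)  # local cache path of the downloaded file
```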
Brief Introduction
Based on Fengshenbang/Randeng-T5-77M-MultiTask-Chinese, this model was further fine-tuned with supervised task pre-training on a corpus of 5,000 customer-service dialogues covering two tasks: dialogue summarization and "user intent + agent response" generation.
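Concretely, the two tasks are distinguished by prompt templates. The templates below are the same ones used in the Usage code further down, shown here only for orientation:

```python
# The two task prompts used during fine-tuning (copied from the Usage code below).
# {0} is replaced with the raw customer-service dialogue.
SUMMARY_PROMPT = "摘要下面对话任务:【{0}】这段文本对话的摘要是什么?"
INTENT_PROMPT = "用户意图和客服回答任务:【{0}】这段文本对话的用户意图和客服回答是什么?"
```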
Model Taxonomy
| Demand | Task | Series | Model | Parameter | Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| General | Natural Language Transformation (NLT) | Randeng | MultiTask | 77M | MultiTask-Chinese |
Model Information
Reference paper: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Starting from Randeng-T5-77M, the base model was fine-tuned on a collection of 100+ Chinese multi-task datasets (from which 300k+ samples were drawn) to obtain this multi-task version. The tasks include sentiment analysis, news classification, text classification, intent recognition, natural language inference, multiple choice, coreference resolution, extractive reading comprehension, named entity recognition, keyword extraction, and abstractive summarization.
Usage
```python
# load tokenizer and model
import torch
from transformers import T5Tokenizer, T5Config, T5ForConditionalGeneration

pretrained_model = "zhaoxiaopang111/Randeng-T5-77M-MultiTask-KF-Chinese"

# register the <extra_id_0> ... <extra_id_1023> sentinel tokens used by T5
special_tokens = ["<extra_id_{}>".format(i) for i in range(1024)]
tokenizer = T5Tokenizer.from_pretrained(
    pretrained_model,
    do_lower_case=True,
    max_length=1024,
    truncation=True,
    additional_special_tokens=special_tokens,
)
config = T5Config.from_pretrained(pretrained_model)
model = T5ForConditionalGeneration.from_pretrained(pretrained_model, config=config)
model.resize_token_embeddings(len(tokenizer))  # account for the added special tokens
model.eval()
def modelPredictMain_ZhaiYao(text):
    """Dialogue summarization: returns a summary of the given dialogue."""
    text = "摘要下面对话任务:【{0}】这段文本对话的摘要是什么?".format(text)
    encode_dict = tokenizer(text, max_length=1024, padding='max_length', truncation=True)
    inputs = {
        "input_ids": torch.tensor([encode_dict['input_ids']]).long(),
        "attention_mask": torch.tensor([encode_dict['attention_mask']]).long(),
    }
    # generate answer
    with torch.no_grad():
        outputs = model.generate(
            input_ids=inputs['input_ids'],
            attention_mask=inputs['attention_mask'],
            max_length=1024,
            do_sample=True,
        )
    # drop the leading decoder start token, then decode the generated ids
    generated_ids = outputs[:, 1:]
    predict_label = [tokenizer.decode(i, skip_special_tokens=True) for i in generated_ids][0]
    return predict_label
def modelPredictMain_YiYu(text):
    """User intent + agent response: returns the user's intent and a suggested reply."""
    text = "用户意图和客服回答任务:【{0}】这段文本对话的用户意图和客服回答是什么?".format(text)
    encode_dict = tokenizer(text, max_length=1024, padding='max_length', truncation=True)
    inputs = {
        "input_ids": torch.tensor([encode_dict['input_ids']]).long(),
        "attention_mask": torch.tensor([encode_dict['attention_mask']]).long(),
    }
    # generate answer
    with torch.no_grad():
        outputs = model.generate(
            input_ids=inputs['input_ids'],
            attention_mask=inputs['attention_mask'],
            max_length=1024,
            do_sample=True,
        )
    # drop the leading decoder start token, then decode the generated ids
    generated_ids = outputs[:, 1:]
    predict_label = [tokenizer.decode(i, skip_special_tokens=True) for i in generated_ids][0]
    return predict_label
if __name__ == "__main__":
    print("Procedures begin to execute!")
    # fill in the dialogue to analyze
    input_text = ""
    # prompts: dialogue summarization / user intent and agent response
    result_text = modelPredictMain_YiYu(text=input_text)
    # result_text = modelPredictMain_ZhaiYao(text=input_text)
    print(result_text)
```
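For a quick smoke test, the helpers above can be called on a sample transcript. A minimal sketch (the dialogue below is a made-up example, and because `do_sample=True` the output varies between runs):

```python
# Hypothetical example dialogue; replace with a real customer-service transcript.
dialogue = "用户:我昨天下的订单到现在还没有发货。客服:您好,我帮您查一下订单状态。"
print(modelPredictMain_ZhaiYao(dialogue))  # dialogue summary
print(modelPredictMain_YiYu(dialogue))     # user intent + agent response
```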
checkpoint-20240327
This model is a fine-tuned version of Randeng-T5-77M-MultiTask-Chinese (loaded from a local checkpoint) on the customer-service corpus described above. It achieves the following results on the evaluation set:
- Loss: 0.9802
- Rouge1: 35.5908
- Rouge2: 16.2325
- Rougel: 35.4323
- Rougelsum: 35.4673
- Gen Len: 48.1477
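For reference, ROUGE scores of this kind are typically computed with the Hugging Face evaluate library. A minimal sketch (not the author's evaluation script; the predictions and references lists are placeholders):

```python
# Sketch of how ROUGE scores like those above are usually computed.
# Assumes: pip install evaluate rouge_score; lists below are placeholders.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["..."]  # model-generated summaries
references = ["..."]   # gold summaries from the evaluation set
print(rouge.compute(predictions=predictions, references=references))
# keys: rouge1, rouge2, rougeL, rougeLsum
```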
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8.0
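For orientation, these settings map onto the standard transformers training configuration roughly as follows (a hedged sketch, not the author's actual training script; output_dir and predict_with_generate are assumptions):

```python
# Sketch of how the listed hyperparameters map onto transformers training config.
# Adam betas/epsilon above are the transformers defaults, so they need no override.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="checkpoint-20240327",   # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8.0,
    predict_with_generate=True,         # assumption: needed for ROUGE evaluation
)
```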
Training results
Framework versions
- Transformers 4.39.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2