This is a re-trained 3-layer RoBERTa-wwm-ext-large model (Chinese BERT with Whole Word Masking).
250 · pytorch · bert
Example code:
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
pipeline(…)
260 · pytorch
Chinese BERT with Whole Word Masking. For further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking.
450 · pytorch
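Whole word masking differs from standard masking in that when one WordPiece of a word is selected, every piece of that word is masked together. A minimal sketch of the idea (illustrative only, not the HFL training code; all names here are made up):

```python
def group_whole_words(tokens):
    # WordPieces prefixed with "##" continue the previous word,
    # so group token indices into whole words.
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])
    return words

def whole_word_mask(tokens, masked_words):
    # Mask every WordPiece of each selected word, never a lone piece.
    words = group_whole_words(tokens)
    out = list(tokens)
    for w in masked_words:
        for i in words[w]:
            out[i] = "[MASK]"
    return out

tokens = ["the", "phil", "##ammon", "sang"]
print(whole_word_mask(tokens, masked_words=[1]))
# ['the', '[MASK]', '[MASK]', 'sang']
```

Note that masking word 1 replaces both of its WordPieces ("phil" and "##ammon"), whereas per-piece masking could leave half the word visible.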
This model is trained on 180G of data; we recommend using it over the original version. Chinese ELECTRA.
220 · pytorch
This model is trained on 180G of data; we recommend using it over the original version. Chinese ELECTRA.
240 · pytorch
Please use 'Bert'-related functions to load this model! Chinese BERT with Whole Word Masking.
270 · pytorch · bert
Please use 'Bert'-related functions to load this model! Chinese BERT with Whole Word Masking.
250 · pytorch · bert
Chinese BERT with Whole Word Masking. For further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking.
290 · pytorch
This model is specifically designed for the legal domain. Chinese ELECTRA, based on the ELECTRA pre-training method released by Google and Stanford University.
200 · pytorch
Please use 'Bert'-related functions to load this model! Example code:
from modelscope.models import Model
if…
250 · pytorch
Please use ElectraForPreTraining for the discriminator and ElectraForMaskedLM for the generator if you are re-training these models.
210 · pytorch
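ELECTRA's discriminator/generator split reflects its pre-training objective: a small generator (an MLM) fills masked positions with plausible tokens, and the discriminator labels every token as original or replaced. A toy, non-neural sketch of that data flow, with the lookup-table "generator" standing in for a real ElectraForMaskedLM (all names and the example data are illustrative):

```python
def corrupt(tokens, masked_positions, generator):
    # Generator step: fill each masked position with a plausible token.
    corrupted = list(tokens)
    for i in masked_positions:
        corrupted[i] = generator(tokens, i)
    return corrupted

def rtd_labels(original, corrupted):
    # Discriminator targets: 1 where the token was replaced, else 0.
    # A real ElectraForPreTraining predicts these labels for every token.
    return [int(a != b) for a, b in zip(original, corrupted)]

# Toy generator: a fixed substitution table instead of a trained MLM.
toy_generator = lambda toks, i: {"cooked": "ate"}.get(toks[i], toks[i])

original = ["the", "chef", "cooked", "the", "meal"]
corrupted = corrupt(original, masked_positions=[2], generator=toy_generator)
print(corrupted)                        # ['the', 'chef', 'ate', 'the', 'meal']
print(rtd_labels(original, corrupted))  # [0, 0, 1, 0, 0]
```

Because the replaced token ("ate") is still fluent, the discriminator must learn fine-grained distinctions over all tokens, not just the masked ones.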
This is a re-trained 4-layer RoBERTa-wwm-ext model (Chinese BERT with Whole Word Masking).
270 · pytorch · bert
Please use 'Bert'-related functions to load this model! ALL English models are UNCASED (lowercase=True).
230 · pytorch
Please use ElectraForPreTraining for the discriminator and ElectraForMaskedLM for the generator if you are re-training these models.
230 · pytorch
This model is trained on 180G of data; we recommend using it over the original version. Chinese ELECTRA.
220 · pytorch
Please use ElectraForPreTraining for the discriminator and ElectraForMaskedLM for the generator if you are re-training these models.
200 · pytorch
This model is specifically designed for the legal domain. Chinese ELECTRA, based on the ELECTRA pre-training method released by Google and Stanford University.
250 · pytorch
Please use ElectraForPreTraining for the discriminator and ElectraForMaskedLM for the generator if you are re-training these models.
200 · pytorch
This model is specifically designed for the legal domain. Chinese ELECTRA, based on the ELECTRA pre-training method released by Google and Stanford University.
250 · pytorch
CINO: Pre-trained Language Models for Chinese Minority Languages, a multilingual pre-trained language model.
240 · pytorch
161,667 projects in total.