dnabert2-conservation

Anonymous user, July 31, 2024

Technical Information

Homepage
https://github.com/zhangtaolab
Model repository
https://modelscope.cn/models/zhangtaolab/dnabert2-conservation
License
CC-BY-NC-SA-4.0

Details

植物基础DNA大语言模型 (Plant foundation DNA large language models)

The plant DNA large language models (LLMs) comprise a series of foundation models built on different model architectures and pre-trained on various plant reference genomes.
All the models have a comparable size between 90 MB and 150 MB; a BPE tokenizer is used for tokenization, and the vocabulary contains 8,000 tokens.
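The BPE tokenization mentioned above can be illustrated with a toy merge loop. This is a simplified sketch for intuition only: the released models use a learned 8,000-token vocabulary, and `bpe_merges` is a hypothetical helper written for this illustration, not part of the model's code.

```python
from collections import Counter

def bpe_merges(sequence, num_merges):
    """Toy byte-pair encoding: repeatedly merge the most frequent
    adjacent symbol pair. Illustrative only -- not the model's tokenizer."""
    symbols = list(sequence)
    merges = []
    for _ in range(num_merges):
        # count adjacent symbol pairs in the current segmentation
        pairs = Counter(zip(symbols, symbols[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merges.append(a + b)
        # re-segment the sequence with the new merged symbol
        merged, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == (a, b):
                merged.append(a + b)
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        symbols = merged
    return symbols, merges

tokens, merges = bpe_merges('ACACACAG', 2)
print(merges)  # ['AC', 'ACAC']
print(tokens)  # ['ACAC', 'AC', 'A', 'G']
```

After two merges the frequent dinucleotide `AC` and then the pair `AC`+`AC` become single vocabulary entries, which is how BPE builds multi-base tokens over DNA.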

Developer: zhangtaolab

Model Sources

  • Repository: Plant DNA LLMs
  • Manuscript: [Versatile applications of foundation DNA large language models in plant genomes]()

Architecture

The model is trained based on the zhihan1996/DNABERT-2-117M model with a modified tokenizer.

This model is fine-tuned for predicting sequence conservation.

How to use

Install the runtime library first:

pip install transformers

Here is a simple code example for inference:

from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model_name = 'dnabert2-conservation'
# load model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained(f'zhangtaolab/{model_name}', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(f'zhangtaolab/{model_name}', trust_remote_code=True)

# inference
sequences = ['ACATGCTAAATTAGTTGGCAATTTTTTCTCAGGTAGCTGGGCACAATTTGGTAGTCCAGTTGAACAAAATCCATTAGCTTCTTTTAGCAAGTCCCCTGGTTTGGGCCCTGCCAGTCCCATTAATACCAACCATTTGTCTGGATTGGCTGCAATTCTTTCCCCACAAGCAACAACCTCTACCAAGATTGCACCGATTGGCAAGGACCCTGGAAGGGCTGCAAATCAGATGTTTTCTAACTCTGGATCAACACAAGGAGCAGCTTTTCAGCATTCTATATCCTTTCCTGAGCAAAATGTAAAGGCAAGTCCTAGGCCTATATCTACTTTTGGTGAATCAAGTTCTAGTGCATCAAGTATTGGAACACTGTCCGGTCCTCAATTTCTTTGGGGAAGCCCAACTCCTTACTCTGAGCATTCAAACACTTCTGCCTGGTCTTCATCTTCGGTGGGGCTTCCATTTACATCTAGTGTCCAAAGGCAGGGTTTCCCATATACTAGTAATCACAGTCCTTTTCTTGGCTCCCACTCTCATCATCATGTTGGATCTGCTCCATCTGGCCTTCCGCTTGATAGGCATTTTAGCTACTTCCCTGAGTCACCTGAAGCTTCTCTCATGAGCCCGGTTGCATTTGGGAATTTAAATCACGGTGATGGGAATTTTATGATGAACAACATTAGTGCTCGTGCATCTGTAGGAGCCGGTGTTGGTCTTTCTGGAAATACCCCTGAAATTAGTTCACCCAATTTCAGAATGATGTCTCTGCCTAGGCATGGTTCCTTGTTCCATGGAAATAGTTTGTATTCTGGACCTGGAGCAACTAACATTGAGGGATTAGCTGAACGTGGACGAAGTAGACGACCTGAAAATGGTGGGAACCAAATTGATAGTAAGAAGCTGTACCAGCTTGATCTTGACAAAATCGTCTGTGGTGAAGATACAAGGACTACTTTAATGATTAAAAACATTCCTAACAAGTAAGAATAACTAAACATCTATCCT',
             'GTCGCAAAAATTGGGCCACTTGCAGTTCAATCTGTTTAATCAAAATTGCATGTGTATCAACTTTTTGCCCAATACTAGCTATATCACACCTCAACTCTTTAATGTGTTCATCACTAGTGTCGAACCTCCTCATCATTTTGTCCAACATATCCTCAACTCGCGCCATACTATCTCCACCATCCCTAGGAGTAACTTCACGATTTTGAGGAGGGACATAGGGCCCATTCCTGTCGTTTCTATTAGCATAGTTACTCCTGTTAAAGTTGTTGTCGCGGTTGTAGTTTCCATCACGTACATAATGACTCTCACGGTTGTAGTTACCATAGTTCCGACCTGGGTTCCCTTGAACTTGGCGCCAGTTATCCTGATTTGAGCCTTGGGCGCTTGGTCGGAAACCCCCTGTCTGCTCATTTACTGCATAAGTGTCCTCCGCGTAACATCATTAGGAGGTGGTGGTTTAGCAAAGTAGTTGACTGCATTTATCTTTTCTGCACCCCCTGTGACATTTTTTAGTACCAACCCAAGCTCAGTTCTCATCTGAGACATTTCTTCTCGAATCTCATCTGTGGCTCGGTTGTGAGTGGACTGCACTACGAAGGTGTTTTTCCCTGTATCAAACTTCCTAGTACTCCAAGCTTTGTTATTTCGGGAGATTTTCTCTAGTTTTTCTGCAATCTCAACATAAGTGCATTCTCCATAAGATCCACCTGCTATAGTGTCCAACACCGCTTTATTGTTATCATCCTGTCCCCGATAGAAGTATTCCTTCAGTGACTCATCATCTATACGGTGATTTAGAACACTTCTCAAGAATGAGGTGAATCTATCCCAAGAACTACTAACTAACTCTCCTGGTAGTGCCACAAAGCTGTTCACCCTTTCTTTGTGGTTTAACTTCTTGGAGATCGGATAGTAGCGTGCTAAGAAGACATCCCTTAGTTGGTTCCAAGTGAATATGGAGTTGTATGCGAGCTTAGTGAACCACATTGCAGCCTCTCCC']
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer,
                trust_remote_code=True, top_k=None)
results = pipe(sequences)
print(results)
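With `top_k=None`, a text-classification pipeline returns, for each input sequence, a list of label/score dictionaries. Here is a minimal post-processing sketch; the label names below are illustrative placeholders, not taken from this model.

```python
# Illustrative pipeline output for two sequences; the label names
# ('conserved' / 'not_conserved') are placeholders, not the model's real labels.
results = [
    [{'label': 'conserved', 'score': 0.91}, {'label': 'not_conserved', 'score': 0.09}],
    [{'label': 'not_conserved', 'score': 0.67}, {'label': 'conserved', 'score': 0.33}],
]

# Keep the top-scoring entry for each sequence.
predictions = [max(scores, key=lambda d: d['score']) for scores in results]
print([p['label'] for p in predictions])  # ['conserved', 'not_conserved']
```

Replace the placeholder `results` with the actual output of `pipe(sequences)` to get one predicted label per input sequence.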

Training data

We use BertForSequenceClassification to fine-tune the model.
The detailed training procedure can be found in our manuscript.

Hardware

The model was trained on an NVIDIA GTX 1080 Ti GPU (11 GB).

