Model Card for Mistral-7B-Instruct-v0.2

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1. For full details of this model please read our paper and release blog post.

Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by [INST] and [/INST] tokens. The very first instruction should begin with a begin-of-sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.

E.g.

text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
This format is available as a chat template via the apply_chat_template() method:

from modelscope import AutoModelForCausalLM, AutoTokenizer
import torch

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("AI-ModelScope/Mistral-7B-Instruct-v0.2", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("AI-ModelScope/Mistral-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

# apply_chat_template renders the [INST] ... [/INST] format and returns token ids
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
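Note that decoded[0] above contains the prompt together with the reply; to keep only the newly generated text, you can slice off the prompt tokens first (a minimal sketch, not part of the original example):

# generated_ids includes the prompt; drop the first model_inputs.shape[1] tokens
new_tokens = generated_ids[:, model_inputs.shape[1]:]
reply = tokenizer.batch_decode(new_tokens, skip_special_tokens=True)[0]
print(reply)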
Model Architecture

This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:

- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
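These choices can be inspected on the loaded model's configuration; a small sketch, reusing the model object from the snippet above (attribute names follow the transformers MistralConfig):

cfg = model.config               # model loaded in the snippet above
print(cfg.num_attention_heads)   # number of query heads
print(cfg.num_key_value_heads)   # fewer key/value heads than query heads => grouped-query attention
print(cfg.sliding_window)        # sliding-window attention width (may be None for v0.2)
print(cfg.vocab_size)            # vocabulary size of the byte-fallback BPE tokenizer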
Troubleshooting

If you see the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'

Installing transformers from source should solve the issue:

pip install git+https://github.com/huggingface/transformers

This should not be required after transformers-v4.33.4.
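To check which version you are running before reinstalling, a quick sketch (packaging is already a dependency of transformers):

import transformers
from packaging import version

print(transformers.__version__)
if version.parse(transformers.__version__) <= version.parse("4.33.4"):
    print("Upgrade needed: pip install git+https://github.com/huggingface/transformers")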
Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.