Phi-2-super (SFT + cDPO)
Base Model: microsoft/phi-2
How to run inference:
import transformers
import torch

if __name__ == "__main__":
    model_name = "abacaj/phi-2-super"
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

    model = (
        transformers.AutoModelForCausalLM.from_pretrained(
            model_name,
        )
        .to("cuda:0")
        .eval()
    )

    messages = [
        {"role": "user", "content": "Hello, who are you?"}
    ]
    inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
    # Remember where the prompt ends so we can decode only the new tokens
    input_ids_cutoff = inputs.size(dim=1)

    with torch.no_grad():
        generated_ids = model.generate(
            input_ids=inputs,
            use_cache=True,
            max_new_tokens=512,
            temperature=0.2,
            top_p=0.95,
            do_sample=True,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id,
        )

    completion = tokenizer.decode(
        generated_ids[0][input_ids_cutoff:],  # drop the prompt tokens
        skip_special_tokens=True,
    )

    print(completion)
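For interactive use you can print tokens as they are generated instead of waiting for the full completion. A minimal sketch using the standard transformers TextStreamer (the streamer itself is regular transformers API, not something specific to this model card); it reuses the tokenizer, model, and inputs from the snippet above:

from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced;
# skip_prompt hides the input prompt from the output
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.no_grad():
    model.generate(
        input_ids=inputs,
        streamer=streamer,
        max_new_tokens=512,
        temperature=0.2,
        top_p=0.95,
        do_sample=True,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )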
Chat template
The model uses the same chat template as found in Mistral instruct models:
text = "<|edoftext|>[INST] What is your favourite codimet? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemo juice. It adds just the right amout of zesty flavour to whatever I'm cookig up i the kitche!<|edoftext|> "
"[INST] Do you have mayoaise recipes? [/INST]"
You don't need to do it manually if you use the HF transformers tokenizer:

messages = [
    {"role": "user", "content": "Hello, who are you?"},
    {"role": "assistant", "content": "I am ..."}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
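To check what the tokenizer actually produces, you can render the conversation to a string instead of token IDs. A small illustrative sketch using standard apply_chat_template arguments (tokenize=False and add_generation_prompt are regular transformers parameters, shown here as an assumption about how you might inspect the output, not part of the original card):

# Render the conversation to text to verify it matches the Mistral-style format above
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,              # return the formatted string, not token IDs
    add_generation_prompt=True,  # append the cue for the assistant's next turn
)
print(prompt)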
MT-bench / heval
[benchmark results figure]