Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 13B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.

Model capabilities:

*Note: Use of this model is governed by the Meta license.

Meta developed and publicly released the Code Llama family of large language models (LLMs). All variants are available in sizes of 7B, 13B and 34B parameters. All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the research paper for details). See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-user-guide.

Model Use
import torch
from modelscope import Model, AutoTokenizer
model = Model.from_pretrained("AI-ModelScope/CodeLlama-13b-hf", revision='v1.0.1', device_map='cuda:0', torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("AI-ModelScope/CodeLlama-13b-hf", revision='v1.0.1')
prompt = 'import socket\ndef ping_exponential_backoff(host: str):'
inputs = tokenizer(prompt, padding=False, add_special_tokens=False, return_tensors="pt")
# Generate
generate_ids = model.generate(
    inputs.input_ids.to(model.device),
    attention_mask=inputs['attention_mask'].to(model.device),
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
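The prompt above asks the model to complete a `ping_exponential_backoff` function. For reference, a hand-written sketch of the kind of completion one might expect is shown below; this is an illustration only (the function body, `backoff_delays` helper, and default parameters are assumptions, not output from the model or part of the model card):

```python
import socket
import time

def backoff_delays(base_delay: float, max_retries: int) -> list:
    """Seconds to wait after each failed attempt: base_delay * 2**attempt."""
    return [base_delay * (2 ** attempt) for attempt in range(max_retries)]

def ping_exponential_backoff(host: str, port: int = 80,
                             max_retries: int = 5,
                             base_delay: float = 1.0) -> int:
    """Attempt a TCP connection to host:port, sleeping with exponentially
    growing delays between failures. Returns the 1-based attempt number on
    success; raises ConnectionError once all retries are exhausted."""
    for attempt, delay in enumerate(backoff_delays(base_delay, max_retries)):
        try:
            with socket.create_connection((host, port), timeout=2):
                return attempt + 1
        except OSError:
            time.sleep(delay)
    raise ConnectionError(f"could not reach {host} after {max_retries} attempts")
```

Comparing the model's generated completion against a reference like this is a quick sanity check on the quality of the sampled output.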
Model Details
Intended Use
Hardware and Software
Training Data
Evaluation Results
Ethical Considerations and Limitations