Qwen2-1.5B-Instruct-AWQ


Technical Information

Open-source repository
https://modelscope.cn/models/qwen/Qwen2-1.5B-Instruct-AWQ
License
apache-2.0


Introduction

Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 1.5B Qwen2 model.

Compared with the state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.

For more details, please refer to our blog, GitHub, and Documentation.

Model Details

Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
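
As a concrete illustration of the group query attention mentioned above, the checkpoint's configuration exposes fewer key/value heads than query heads. A minimal sketch, assuming AutoConfig is available from the modelscope package as in the Quickstart below:

from modelscope import AutoConfig

# Fetch only the model configuration, not the weights.
config = AutoConfig.from_pretrained("qwen/Qwen2-1.5B-Instruct-AWQ")

# Under group query attention, several query heads share one key/value
# head, so num_key_value_heads is smaller than num_attention_heads.
print(config.num_attention_heads, config.num_key_value_heads)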

Training details

We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.

Requirements

The code of Qwen2 has been in the latest Hugging Face transformers, and we advise you to install transformers>=4.37.0, or you might encounter the following error:

KeyError: 'qwen2'
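
A typical way to satisfy this, assuming a pip-based environment (the modelscope package is used in the Quickstart below, and loading AWQ checkpoints through transformers generally also requires autoawq):

pip install "transformers>=4.37.0" modelscope autoawq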

Quickstart

Below is a code snippet showing how to load the tokenizer and model, and how to generate content with apply_chat_template.

from modelscope import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto

# Load the AWQ-quantized model and its tokenizer from ModelScope.
model = AutoModelForCausalLM.from_pretrained(
    "qwen/Qwen2-1.5B-Instruct-AWQ",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("qwen/Qwen2-1.5B-Instruct-AWQ")

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
# Render the chat messages into the model's prompt format, appending the
# assistant turn marker so the model knows to start generating.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens are decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
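
If you prefer to stream tokens to stdout as they are produced rather than decoding everything at the end, transformers provides TextStreamer, which can be passed to generate. A minimal sketch reusing the variables from the snippet above:

from transformers import TextStreamer

# Print decoded tokens as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(model_inputs.input_ids, max_new_tokens=512, streamer=streamer)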

Benchmark and Speed

To compare the generation performance between bfloat16 (bf16) and quantized models such as GPTQ-Int8, GPTQ-Int4, and AWQ, please consult our Benchmark of Quantized Models. This benchmark provides insights into how different quantization techniques affect model performance.

For those interested in understanding the inference speed and memory consumption when deploying these models with either transformers or vLLM, we have compiled an extensive Speed Benchmark.
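
For reference, the AWQ checkpoint can also be loaded through vLLM's offline Python API. The following is a minimal sketch, not an official example; it assumes a vLLM build with AWQ support and uses the mirrored Hugging Face model id Qwen/Qwen2-1.5B-Instruct-AWQ:

from vllm import LLM, SamplingParams

# Load the AWQ-quantized weights with vLLM's AWQ kernels.
llm = LLM(model="Qwen/Qwen2-1.5B-Instruct-AWQ", quantization="awq")

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=512)

# Prompts should normally be rendered with the chat template for an
# instruct model; a raw string is used here only to keep the sketch short.
outputs = llm.generate(["Give me a short introduction to large language model."], sampling_params)
print(outputs[0].outputs[0].text)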

Citation

If you find our work helpful, feel free to give us a cite.

@article{qwen2,
  title={Qwen2 Technical Report},
  year={2024}
}
