Qwen2-0.5B-Instruct-MLX

Anonymous user · July 31, 2024 · 70 views

Technical Information

Open-source repository
https://modelscope.cn/models/qwen/Qwen2-0.5B-Instruct-MLX
License
apache-2.0

Details

Qwen2-0.5B-Instruct-MLX

Introduction

Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 0.5B Qwen2 model.

Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.

For more details, please refer to our blog and GitHub. This is the MLX quantized model of Qwen2-0.5B-Instruct.

Model Details

Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
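The paragraph above mentions group query attention, where several query heads share one key/value head. As a rough illustration of the mechanism (a minimal NumPy sketch only; the head counts and dimensions are made up and this is not the model's actual implementation):

```python
import numpy as np

def grouped_query_attention(q, k, v):
    """Minimal grouped-query attention: n_q_heads query heads share
    n_kv_heads key/value heads (n_q_heads must be divisible by n_kv_heads)."""
    n_q_heads, seq_len, head_dim = q.shape
    n_kv_heads = k.shape[0]
    group = n_q_heads // n_kv_heads
    # Broadcast each KV head across its group of query heads.
    k = np.repeat(k, group, axis=0)
    v = np.repeat(v, group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(head_dim)
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 4, 16))   # 8 query heads, seq len 4, dim 16
k = rng.normal(size=(2, 4, 16))   # only 2 shared KV heads
v = rng.normal(size=(2, 4, 16))
out = grouped_query_attention(q, k, v)
print(out.shape)  # (8, 4, 16)
```

Sharing KV heads shrinks the KV cache, which is the main memory cost at inference time.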

Training details

We pretrained the models with a large amount of data, and we post-trained the models with both supervised fine-tuning and direct preference optimization.
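Direct preference optimization trains the policy to widen the log-probability margin between a chosen and a rejected answer, relative to a frozen reference model. A minimal sketch of the per-pair loss (illustrative only; `beta` and the log-probability values are invented):

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * ((pi_c - ref_c) - (pi_r - ref_r)))."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# At zero margin the loss is log(2); it falls as the policy prefers the
# chosen answer more strongly than the reference model does.
print(dpo_loss(-10.0, -14.0, -12.0, -12.0))
```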

Requirements

Run the following command to install the required MLX packages.

pip install mlx-lm mlx -U

Quickstart

Here is a code snippet with apply_chat_template showing how to load the tokenizer and model and how to generate content.

from mlx_lm import load, generate
from modelscope import snapshot_download

# Download the model from ModelScope, then load it with MLX.
model_dir = snapshot_download("qwen/Qwen2-0.5B-Instruct-MLX")

model, tokenizer = load(model_dir, tokenizer_config={"eos_token": "<|im_end|>"})

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=text, verbose=True, top_p=0.8, temp=0.7, repetition_penalty=1.05, max_tokens=512)
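For reference, `apply_chat_template` for Qwen2 renders the message list in the ChatML format implied by the `<|im_end|>` EOS token used above. A hand-built sketch of that rendering (an assumption for illustration; the authoritative template ships inside the tokenizer config):

```python
def chatml_prompt(messages, add_generation_prompt=True):
    """Render a message list in ChatML form, roughly matching what
    apply_chat_template produces for Qwen2 chat models."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        # Open an assistant turn so generation continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language model."},
]
print(chatml_prompt(messages))
```

Generation stops when the model emits `<|im_end|>`, which is why the snippet above sets it as the EOS token.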

Citation

If you find our work helpful, feel free to cite us.

@article{qwen2,
  title={Qwen2 Technical Report},
  year={2024}
}

