The model mlx-community/Qwen2-7B-Instruct-8bit was converted to MLX format from Qwen/Qwen2-7B-Instruct using mlx-lm.
Use with mlx
pip install mlx-lm
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen2-7B-Instruct-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
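Since this is an instruct-tuned model, prompts generally work best when formatted with the tokenizer's chat template rather than passed as raw text. A minimal sketch, assuming the loaded tokenizer exposes the standard apply_chat_template method:

from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2-7B-Instruct-8bit")

# Wrap the user message in the chat format expected by the instruct model.
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Generate a response from the templated prompt.
response = generate(model, tokenizer, prompt=prompt, verbose=True)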