Meta-Llama-3-8B-Instruct-GPTQ

Anonymous user · July 31, 2024
Categories: ai, llama, region:us, text-generation-infe, has_space, endpoints_compatible, autotrain_compatible, license:other, en, conversational
Repository: https://modelscope.cn/models/coin1860/Meta-Llama-3-8B-Instruct-GPTQ
License: apache-2.0

Details

Description

MaziyarPanahi/Meta-Llama-3-8B-Instruct-GPTQ is a quantized (GPTQ, 4-bit) version of meta-llama/Meta-Llama-3-8B-Instruct.
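To get a feel for why the 4-bit quantization matters, here is a rough back-of-the-envelope estimate of weight memory. The numbers below are assumptions for illustration, not figures from the model card: they treat all 8B parameters as quantized, while real checkpoints keep embeddings in fp16 and add metadata.

```python
# Back-of-the-envelope weight memory for an 8B-parameter model.
# Assumption: every parameter is quantized (real checkpoints differ).
params = 8e9

fp16_gib = params * 2 / 2**30    # 16 bits (2 bytes) per weight
int4_gib = params * 0.5 / 2**30  # 4 bits per weight, packed

# group_size=128 stores one fp16 scale and one zero-point per
# group of 128 weights (2 values x 2 bytes each).
overhead_gib = params / 128 * 2 * 2 / 2**30

print(f"fp16 weights: {fp16_gib:.1f} GiB")
print(f"GPTQ 4-bit:   {int4_gib + overhead_gib:.1f} GiB")
```

Roughly a 4x reduction, which is what brings an 8B model within reach of a single consumer GPU.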

How to use

Install the necessary packages

pip install --upgrade accelerate auto-gptq transformers

Example Python code

from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "MaziyarPanahi/Meta-Llama-3-8B-Instruct-GPTQ"

# Quantization settings matching the checkpoint: 4-bit weights,
# group size 128, no activation-order reordering.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=False,
)

# Load the pre-quantized safetensors weights onto the first GPU.
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    use_safetensors=True,
    device="cuda:0",
    quantize_config=quantize_config,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.1,
)

outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
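The raw-string prompt above works, but Llama-3-Instruct models are trained on a specific chat format, and wrapping the question with the tokenizer's chat template usually yields better answers. A hedged sketch, continuing from the `tokenizer` and `pipe` objects above (the commented calls assume the tokenizer ships a chat template, as Llama-3 tokenizers do):

```python
# Build a chat-formatted conversation instead of a raw string.
messages = [
    {"role": "system", "content": "You are a helpful, concise assistant."},
    {"role": "user", "content": "What is a large language model?"},
]

# Continuing from the example above (tokenizer/pipe already created):
# prompt = tokenizer.apply_chat_template(
#     messages, tokenize=False, add_generation_prompt=True
# )
# outputs = pipe(prompt)
# print(outputs[0]["generated_text"])
```

`add_generation_prompt=True` appends the header that cues the model to answer as the assistant rather than continue the user's turn.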