cogvlm2-llama3-chinese-chat-19B-tgi


Technical Information

Official website
https://www.zhipu.ai
Open-source repository
https://modelscope.cn/models/ZhipuAI/cogvlm2-llama3-chinese-chat-19B-tgi
License
other

Details

CogVLM2

Join us on WeChat

Experience the larger-scale CogVLM model on the ZhipuAI Open Platform.

Model Introduction

We launch a new generation of the CogVLM2 series of models and open-source two models built with Meta-Llama-3-8B-Instruct, improving on the previous generation of CogVLM open-source models.

This is the model in TGI (Text Generation Inference) format.
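
Once the TGI server is running, you can confirm it is up and serving the expected model before sending chat requests. The following is a minimal sketch, assuming the server listens at the same local address used in the Quick Start below; /info is a standard text-generation-inference route.

import requests

# Query the TGI server's /info route; it reports the served model and its limits.
# The address is an assumption matching the Quick Start script below.
info = requests.get("http://127.0.0.1:8080/info").json()
print(info.get("model_id"), info.get("max_total_tokens"))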

Quick Start

Here is a simple example of how to chat with the model by sending a request to the CogVLM2 TGI server.

import requests
import json
import base64
import os


# Suppress the warning triggered by verify=False in the request below.
requests.packages.urllib3.disable_warnings()
BAD_RESPONSE = "<error></error>"

def image_to_base64(image_path):
    # Read the image file and encode it as a UTF-8 base64 string.
    with open(image_path, "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read())
        return encoded_string.decode('utf-8')

def history_to_prompt(query):
    # Wrap the user query in the Question/Answer template the model expects.
    answer_format = 'Answer:'
    prompt = ''
    prompt += 'Question: {} {}'.format(query, answer_format)
    return prompt

def get_response(image_path, question):
    image_extension = os.path.splitext(image_path)[1][1:]

    base64_img = image_to_base64(image_path)

    url = 'http://127.0.0.1:8080'
    headers = {
        'Content-Type': 'application/json',
    }

    # The image is embedded inline as a base64 data URI, followed by the text prompt.
    prompt = history_to_prompt(question)
    payload = {
        "inputs": f"![](data:image/{image_extension};base64,{base64_img}){prompt}",
        "stream": False,
        "parameters": {
            "best_of": 1,
            "decoder_input_details": False,
            "details": False,
            "repetition_penalty": 1.1,
            "do_sample": True,
            "max_new_tokens": 1000,
            "return_full_text": False,
            "temperature": 0.8,
            "top_p": 0.4,
            "top_k": 1
        }
    }
    # Retry up to three times before giving up with BAD_RESPONSE.
    try_times = 0
    while try_times < 3:
        try_times += 1
        try:
            response = requests.post(url, headers=headers, stream=False, data=json.dumps(payload), verify=False)
            if response.status_code == 200:
                try:
                    output = response.json()[0]["generated_text"].strip()
                    return output
                except Exception:
                    pass
            else:
                print(f"Received bad status code: {response.status_code}")
        except requests.exceptions.ConnectionError as errc:
            print("Error Connecting:", errc)
        except requests.exceptions.Timeout as errt:
            print("Timeout Error:", errt)
        except requests.exceptions.RequestException as err:
            print("Something Else:", err)
    return BAD_RESPONSE

if __name__ == "__main__":
    from glob import glob
    files = glob("demo.jpeg")
    for file in files:
        print(file)
        print(get_response(
            image_path=file,
            question="who is this",
        ))
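
For token-by-token output, the same endpoint can be used in streaming mode. The sketch below is an illustrative variant, not part of the original script: it assumes the server exposes TGI's standard server-sent-events streaming on the same route, and it reuses the image_to_base64 and history_to_prompt helpers defined above; the stream_response name is hypothetical.

def stream_response(image_path, question, url='http://127.0.0.1:8080'):
    # Hypothetical helper: same request as get_response, but with
    # "stream": True the TGI server answers with server-sent events,
    # one JSON object per generated token.
    image_extension = os.path.splitext(image_path)[1][1:]
    payload = {
        "inputs": f"![](data:image/{image_extension};base64,"
                  f"{image_to_base64(image_path)}){history_to_prompt(question)}",
        "stream": True,
        "parameters": {"max_new_tokens": 1000, "do_sample": True,
                       "temperature": 0.8, "top_p": 0.4, "top_k": 1},
    }
    with requests.post(url, json=payload, stream=True, verify=False) as response:
        response.raise_for_status()
        for line in response.iter_lines():
            if not line:
                continue
            decoded = line.decode("utf-8")
            if decoded.startswith("data:"):
                event = json.loads(decoded[len("data:"):])
                # Each event carries the text of one newly generated token.
                yield event["token"]["text"]

For example: print("".join(stream_response("demo.jpeg", "who is this"))).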

License

This model is released under the CogVLM2 LICENSE. For models built with Meta Llama 3, please also adhere to the LLAMA3_LICENSE.

Citation

If you find our work helpful, please consider citing the following paper:

@misc{wang2023cogvlm,
      title={CogVLM: Visual Expert for Pretrained Language Models}, 
      author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},
      year={2023},
      eprint={2311.03079},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
