API Server

We support various kinds of API servers to integrate with popular frontends. Extra dependencies can be installed by:
```sh
pip install 'chatglm-cpp[api]'
```

Remember to add the corresponding `CMAKE_ARGS` to enable acceleration.
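For example, to rebuild the binding with a GPU backend (a sketch; the exact `-DGGML_*` flags depend on your chatglm.cpp version and hardware, so check the project's build options before relying on them):

```sh
# NVIDIA GPU via cuBLAS (assumed flag; see the project's build options)
CMAKE_ARGS="-DGGML_CUBLAS=ON" pip install 'chatglm-cpp[api]'

# Apple Silicon via Metal (assumed flag)
CMAKE_ARGS="-DGGML_METAL=ON" pip install 'chatglm-cpp[api]'
```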
Start the API server for LangChain:

```sh
MODEL=./chatglm-ggml.bin uvicorn chatglm_cpp.langchain_api:app --host 127.0.0.1 --port 8000
```
Test the API endpoint with `curl`:

```sh
curl http://127.0.0.1:8000 -H 'Content-Type: application/json' -d '{"prompt": "你好"}'
```
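The same check from Python, using `requests` (a minimal sketch; the `response` field name mirrors the ChatGLM-style HTTP API that LangChain expects and is an assumption here):

```python
import requests

# POST a prompt to the LangChain-compatible endpoint
resp = requests.post(
    "http://127.0.0.1:8000",
    json={"prompt": "你好"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json().get("response"))  # assumed field name, mirroring ChatGLM's HTTP API
```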
Run with LangChain:

```python
>>> from langchain.llms import ChatGLM
>>>
>>> llm = ChatGLM(endpoint_url="http://127.0.0.1:8000")
>>> llm.predict("你好")
'你好👋!我是人工智能助手 ChatGLM2-6B,很高兴见到你,欢迎问我任何问题。'
```

For more options, please refer to examples/langchain_client.py and LangChain ChatGLM Integration.
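As one hedged illustration of such options, the LangChain wrapper also exposes sampling parameters (field names below follow LangChain's ChatGLM class and may differ across LangChain versions, so verify them against your installation):

```python
from langchain.llms import ChatGLM

# Tune sampling on the client side; these field names are assumptions
# based on LangChain's ChatGLM wrapper.
llm = ChatGLM(
    endpoint_url="http://127.0.0.1:8000",
    max_token=2048,    # assumed field name
    top_p=0.7,         # assumed field name
    temperature=0.95,  # assumed field name
)
print(llm.predict("你好"))
```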
Start an API server compatible with the OpenAI chat completions protocol:

```sh
MODEL=./chatglm-ggml.bin uvicorn chatglm_cpp.openai_api:app --host 0.0.0.0 --port 8000
```

Or serve it with daphne:

```sh
MODEL=./chatglm-ggml.bin daphne chatglm_cpp.openai_api:app -b 127.0.0.1 --port 8000
```
Test your endpoint with `curl`:

```sh
curl http://127.0.0.1:8000/v1/chat/completions -H 'Content-Type: application/json' \
    -d '{"messages": [{"role": "user", "content": "你好"}]}'
```
Use the OpenAI client to chat with your model:

```python
>>> import openai
>>>
>>> openai.api_base = "http://127.0.0.1:8000/v1"
>>> response = openai.ChatCompletion.create(model="default-model", messages=[{"role": "user", "content": "你好"}])
>>> response["choices"][0]["message"]["content"]
'你好👋!我是人工智能助手 ChatGLM2-6B,很高兴见到你,欢迎问我任何问题。'
```
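Streaming can also be consumed directly in the client; a minimal sketch using the same pre-1.0 `openai` package interface as above (the incremental `delta` format follows the standard chat completions stream and is assumed to be implemented by this server):

```python
import openai

openai.api_key = "EMPTY"  # placeholder; a local server typically ignores it (assumption)
openai.api_base = "http://127.0.0.1:8000/v1"

# Each streamed chunk carries an incremental "delta" of the assistant message.
for chunk in openai.ChatCompletion.create(
    model="default-model",
    messages=[{"role": "user", "content": "你好"}],
    stream=True,
):
    print(chunk["choices"][0]["delta"].get("content", ""), end="", flush=True)
print()
```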
For stream response, check out the example client script:

```sh
OPENAI_API_BASE=http://127.0.0.1:8000/v1 python3 examples/openai_client.py --stream --prompt 你好
```

With this API server as backend, ChatGLM.cpp models can be seamlessly integrated into any frontend that uses the OpenAI-style API, including mckaywrigley/chatbot-ui, fuergaosi233/wechat-chatgpt, Yidadaa/ChatGPT-Next-Web, and more.
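As one hedged illustration of the frontend wiring (environment variable names and image tags vary by project and version; `OPENAI_API_HOST` follows chatbot-ui's convention at the time of writing and should be verified):

```sh
# Point mckaywrigley/chatbot-ui at the local ChatGLM.cpp server.
# Env var names are chatbot-ui conventions — verify before use.
docker run -p 3000:3000 \
    -e OPENAI_API_KEY="sk-placeholder" \
    -e OPENAI_API_HOST="http://host.docker.internal:8000" \
    ghcr.io/mckaywrigley/chatbot-ui:main
# On Linux, also pass: --add-host=host.docker.internal:host-gateway
```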