CodeQwen1.5-7B-OpenDevin

Introduction

CodeQwen1.5-7B-OpenDevin is a code-specific model targeting OpenDevin agent tasks. The model is finetuned from CodeQwen1.5-7B, the code-specific large language model based on Qwen1.5 and pretrained on large-scale code data. CodeQwen1.5-7B is strongly capable of understanding and generating code, and it supports a context length of 65,536 tokens (for more information about CodeQwen1.5, please refer to the blog post and GitHub repo). The finetuned model, CodeQwen1.5-7B-OpenDevin, shares similar features, while it is designed for rapid development, debugging, and iteration.

Performance

We evaluate CodeQwen1.5-7B-OpenDevin on SWE-Bench-Lite by implementing the model on OpenDevin CodeAct 1.3 and following the OpenDevin evaluation pipeline. CodeQwen1.5-7B-OpenDevin successfully solves 4 problems by committing pull requests targeting the issues.

Requirements

The code of Qwen1.5 has been in the latest Hugging Face transformers, and we advise you to install transformers>=4.37.0, or you might encounter the following error: KeyError: 'qwen2'.
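As a quick sanity check before launching anything, you can compare the installed transformers version against the requirement. This is a minimal sketch; the meets_requirement helper is our own illustration, not part of transformers:

```python
from importlib.metadata import PackageNotFoundError, version

def meets_requirement(installed: str, required: str = "4.37.0") -> bool:
    """Return True if a plain dotted version string is at least the required one.

    Note: this simple parser assumes purely numeric components
    (it would fail on dev/rc suffixes such as "4.37.0.dev0").
    """
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(required)

try:
    ok = meets_requirement(version("transformers"))
except PackageNotFoundError:
    ok = False  # transformers is not installed at all
print("transformers>=4.37.0:", ok)
```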
Quickstart

To use local models to run OpenDevin, we advise you to deploy CodeQwen1.5-7B-OpenDevin on a GPU device and access it through the OpenAI API:
python -m vllm.entrypoints.openai.api_server --model OpenDevin/CodeQwen1.5-7B-OpenDevin --dtype auto --api-key token-abc123
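Once the server is up, any OpenAI-compatible client can talk to it. As an illustrative sketch using only the Python standard library (the build_chat_request helper is our own, not part of vLLM or OpenDevin), a chat completion request against the endpoint above would look like:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000/v1"  # vLLM's OpenAI-compatible endpoint
API_KEY = "token-abc123"               # must match the --api-key passed to vLLM

def build_chat_request(prompt: str,
                       model: str = "OpenDevin/CodeQwen1.5-7B-OpenDevin") -> urllib.request.Request:
    """Assemble an OpenAI-style /chat/completions request for the local server."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )

req = build_chat_request("Write a hello-world in Python.")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
# To actually send it (requires the server to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```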
For more details, please refer to the official documentation of vLLM for the OpenAI-compatible server. After the deployment, following the guidance of OpenDevin, run the following commands to set up environment variables:

# The directory you want OpenDevin to work with. MUST be an absolute path!
export WORKSPACE_BASE=$(pwd)/workspace;
export LLM_API_KEY=token-abc123;
export LLM_MODEL=OpenDevin/CodeQwen1.5-7B-OpenDevin;
export LLM_BASE_URL=http://localhost:8000/v1;
and run the docker command:

docker run \
-it \
--pull=always \
-e SANDBOX_USER_ID=$(id -u) \
-e LLM_BASE_URL=$LLM_BASE_URL \
-e LLM_API_KEY=$LLM_API_KEY \
-e LLM_MODEL=$LLM_MODEL \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
ghcr.io/opendevin/opendevin:0.5
Now you should be able to connect to http://localhost:3000/. Set up the configuration at the frontend by clicking the button at the bottom right, and input the right model name and API key. Then, you can enjoy playing with OpenDevin based on CodeQwen1.5-7B-OpenDevin!

Note

This is just a finetuning experiment, and we admit that the performance of the model is still lagging far behind GPT-4. In the future, we will update our datasets for agent-specific finetuning and provide better and larger models. Stay tuned!
Citation
@misc{codeqwen1.5,
title = {Code with CodeQwen1.5},
url = {https://qwenlm.github.io/blog/codeqwen1.5/},
author = {Qwen Team},
month = {April},
year = {2024}
}