CodeQwen1_5-7B-OpenDevin

Anonymous user, 2024-07-31

Technical Information

Open-source address
https://modelscope.cn/models/OpenDevin/CodeQwen1_5-7B-OpenDevin
License
Apache License 2.0

Details

CodeQwen1.5-7B-OpenDevin

Introduction

CodeQwen1.5-7B-OpenDevin is a code-specific model targeting OpenDevin Agent tasks. The model is finetuned from CodeQwen1.5-7B, the code-specific large language model based on Qwen1.5 and pretrained on large-scale code data. CodeQwen1.5-7B is strongly capable of understanding and generating code, and it supports a context length of 65,536 tokens (for more information about CodeQwen1.5, please refer to the blog post and GitHub repo). The finetuned model, CodeQwen1.5-7B-OpenDevin, shares similar features, while it is designed for rapid development, debugging, and iteration.

Performance

We evaluate CodeQwen1.5-7B-OpenDevin on SWE-Bench-Lite by implementing the model on OpenDevin CodeAct 1.3 and following the OpenDevin evaluation pipeline. CodeQwen1.5-7B-OpenDevin successfully solves 4 problems by committing pull requests targeting the issues.

Requirements

The code of Qwen1.5 has been merged into the latest Hugging Face transformers, and we advise you to install transformers>=4.37.0, or you might encounter the following error:

KeyError: 'qwen2'
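The 'qwen2' model type only ships with transformers 4.37.0 and later, so it can be worth checking the installed version before loading the model. The helper below is a sketch for that check, not part of transformers itself:

```python
# Check whether an installed transformers version is new enough for the
# 'qwen2' model type (>= 4.37.0). If it is not, upgrade with:
#   pip install -U "transformers>=4.37.0"
def version_at_least(installed: str, required: str = "4.37.0") -> bool:
    """Compare dotted release versions numerically, e.g. '4.9.0' < '4.37.0'."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(required)

# Example with the real library (uncomment once transformers is installed):
# import transformers
# assert version_at_least(transformers.__version__), "please upgrade transformers"
print(version_at_least("4.36.2"))  # older release -> False
print(version_at_least("4.40.0"))  # recent release -> True
```

Note that a plain string comparison would get this wrong ("4.9" > "4.37" lexicographically), which is why the helper compares numeric tuples.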

Quickstart

To use local models to run OpenDevin, we advise you to deploy CodeQwen1.5-7B-OpenDevin on a GPU device and access it through an OpenAI-compatible API:

python -m vllm.entrypoints.openai.api_server --model OpenDevin/CodeQwen1.5-7B-OpenDevin --dtype auto --api-key token-abc123

For more details, please refer to the official documentation of vLLM for the OpenAI-compatible server.
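Once the server is up, it can be queried like any OpenAI chat-completions endpoint. The sketch below builds such a request with only the standard library; the base URL, model name, and API key mirror the deployment command above, while the helper function itself is illustrative and not part of vLLM:

```python
import json

# Values matching the vLLM deployment command above.
BASE_URL = "http://localhost:8000/v1"
API_KEY = "token-abc123"
MODEL = "OpenDevin/CodeQwen1.5-7B-OpenDevin"

def build_chat_request(prompt: str) -> tuple[str, dict, bytes]:
    """Return (url, headers, body) for a POST to the OpenAI-compatible API."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    }
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, headers, body

url, headers, body = build_chat_request("Write a hello-world in Python.")
# To actually send the request once the server is running:
#   import urllib.request
#   req = urllib.request.Request(url, data=body, headers=headers)
#   reply = json.load(urllib.request.urlopen(req))
#   print(reply["choices"][0]["message"]["content"])
print(url)
```

The same endpoint is what OpenDevin talks to via the LLM_BASE_URL variable set in the next step.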

After the deployment, follow the guidance of OpenDevin and run the following commands to set up environment variables:

# The directory you want OpenDevin to work with. MUST be an absolute path!
export WORKSPACE_BASE=$(pwd)/workspace;
export LLM_API_KEY=token-abc123;
export LLM_MODEL=OpenDevin/CodeQwen1.5-7B-OpenDevin;
export LLM_BASE_URL=http://localhost:8000/v1;

and run the docker command:

docker run \
    -it \
    --pull=always \
    -e SANDBOX_USER_ID=$(id -u) \
    -e LLM_BASE_URL=$LLM_BASE_URL \
    -e LLM_API_KEY=$LLM_API_KEY \
    -e LLM_MODEL=$LLM_MODEL \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    ghcr.io/opendevin/opendevin:0.5

Now you should be able to connect to http://localhost:3000/. Set up the configuration at the frontend by clicking the button at the bottom right, and input the right model name and API key. Then, you can enjoy playing with OpenDevin based on CodeQwen1.5-7B-OpenDevin!

Note

This is just a finetuning experiment, and we admit that the performance of the model still lags far behind GPT-4. In the future, we will update our datasets for agent-specific finetuning and provide better and larger models. Stay tuned!

Citation

@misc{codeqwen1.5,
    title = {Code with CodeQwen1.5},
    url = {https://qwenlm.github.io/blog/codeqwen1.5/},
    author = {Qwen Team},
    month = {April},
    year = {2024}
}

