deepseek-llm-67b-chat-SOTA2bit-imatrix-GGUF

Anonymous user · July 31, 2024
Category: AI, Other
Repository: https://modelscope.cn/models/whatever1983/deepseek-llm-67b-chat-SOTA2bit-imatrix-GGUF
License: Apache License 2.0

Model Details

This is llama.cpp's official SOTA 2-bit quantization using an importance matrix (imatrix), bringing the model down to as little as 16.95 GB in size, for the best overall bilingual model with a 73.8 HumanEval score!
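
For reference, this is roughly how llama.cpp produces such an imatrix-based quant; this is a sketch only, the file names are placeholders, and on older llama.cpp builds the binary is called quantize rather than llama-quantize:

 # Quantize an FP16 GGUF to IQ2_XS, guided by a precomputed importance matrix
 ./llama-quantize --imatrix imatrix.dat deepseek-llm-67b-chat-f16.gguf deepseek-llm-67b-chat-IQ2_XS.gguf IQ2_XS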

It turns out that computing the imatrix, even with GPU acceleration, is very compute-intensive.
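
The imatrix itself is produced by running the model over a calibration text with llama.cpp's imatrix tool; a minimal sketch, assuming a recent llama.cpp build and placeholder file names:

 # Compute an importance matrix over a calibration corpus, offloading all layers to the GPU
 ./llama-imatrix -m deepseek-llm-67b-chat-Q6_K.gguf -f calibration.txt -o imatrix.dat -ngl 99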

Testing shows that the perplexity difference between an imatrix computed from the Q6_K model and one computed from the Q2_K model is negligible.
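
Such perplexity comparisons can be reproduced with llama.cpp's perplexity tool; a sketch, assuming a raw-text test set such as wikitext and placeholder file names:

 # Measure perplexity of a quantized model on a raw-text test set
 ./llama-perplexity -m deepseek-llm-67b-chat-IQ2_XS.gguf -f wiki.test.raw -ngl 99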

Now it is possible to run Q2_K_S on a 24 GB VRAM GPU with a 2K context size, IQ2_XS on a 22 GB VRAM GPU, and IQ2_XXS on a 20 GB VRAM GPU.
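
To run fully offloaded within those VRAM budgets, an invocation along these lines should work; the binary name (main on older llama.cpp builds, llama-cli on newer ones) and the file name are assumptions:

 # Run the IQ2_XS quant with a 2048-token context, all layers on the GPU
 ./llama-cli -m deepseek-llm-67b-chat-IQ2_XS.gguf -c 2048 -ngl 99 -p "Hello"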

Clone with HTTP

 git clone https://www.modelscope.cn/whatever1983/deepseek-llm-67b-chat-SOTA2bit-imatrix-GGUF.git
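
The GGUF files are large; if the repository stores them via Git LFS (as ModelScope repositories typically do), you may need to fetch them explicitly after cloning:

 git lfs install
 git lfs pull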