Ollama
Official site: https://ollama.com/
View the Ollama serve help on Windows:
ollama.exe serve --help
Default local directories for downloaded models:
- On Mac, models are downloaded to ~/.ollama/models
- On Linux (or WSL), models are stored at /usr/share/ollama/.ollama/models
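If you need the models directory programmatically, a small helper can map the defaults above to the current OS. A sketch; the Windows fallback path is an assumption, so verify it against your own install:

```python
import platform
from pathlib import Path

def default_models_dir() -> Path:
    """Return the default Ollama models directory for the current OS,
    based on the defaults listed above."""
    system = platform.system()
    if system == "Darwin":                      # macOS
        return Path.home() / ".ollama" / "models"
    if system == "Linux":                       # Linux or WSL
        return Path("/usr/share/ollama/.ollama/models")
    # Windows: assumed to live under the user profile; check your install
    return Path.home() / ".ollama" / "models"
```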
Common commands
#Pull a model
ollama pull llama2
#Remove a model
ollama rm llama2
#Copy a model
ollama cp llama2 my-llama2
#List models on your computer
ollama list
#Show model information
ollama show --modelfile mistral
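For scripting, the tabular output of `ollama list` can be parsed with a short helper. A sketch, assuming the usual NAME/ID/SIZE/MODIFIED columns; the exact layout may vary between Ollama versions:

```python
def parse_ollama_list(output: str):
    """Parse `ollama list` output into dicts (columns assumed:
    NAME ID SIZE MODIFIED)."""
    rows = []
    for line in output.strip().splitlines()[1:]:  # skip the header row
        parts = line.split()
        if len(parts) >= 4:
            rows.append({
                "name": parts[0],
                "id": parts[1],
                "size": " ".join(parts[2:4]),     # e.g. "3.8 GB"
            })
    return rows
```

Feed it the captured stdout of `ollama list` (e.g. via `subprocess.run`).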
How to import a model
Ollama supports importing GGUF models via a Modelfile:
#1.Create a file named Modelfile with a FROM instruction pointing to the local file path of the model you want to import.
FROM ./vicuna-33b.Q4_0.gguf
#2.Create the model in Ollama
ollama create example -f Modelfile
#3.Run the model
ollama run example
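Steps 1–3 above can be scripted. A minimal sketch that assembles the Modelfile text; the `make_modelfile` helper and its parameters are illustrative, but FROM is the only required instruction, while PARAMETER and SYSTEM are optional Modelfile instructions:

```python
from typing import Optional

def make_modelfile(gguf_path: str,
                   system: Optional[str] = None,
                   temperature: Optional[float] = None) -> str:
    """Build a minimal Modelfile string; FROM is the only required line."""
    lines = [f"FROM {gguf_path}"]
    if temperature is not None:
        lines.append(f"PARAMETER temperature {temperature}")
    if system:
        lines.append(f'SYSTEM """{system}"""')
    return "\n".join(lines) + "\n"
```

Write the returned string to a file named `Modelfile`, then run `ollama create example -f Modelfile` as above.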
REST API
Ollama has a REST API for running and managing models.
Generate a response
curl http://localhost:11434/api/generate -d '{
"model": "llama2",
"prompt":"Why is the sky blue?"
}'
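By default, the generate endpoint streams newline-delimited JSON objects, each carrying a "response" text fragment and a final object with "done": true. A sketch of reassembling such a stream into the full reply (the helper name is illustrative):

```python
import json

def assemble_stream(ndjson_lines) -> str:
    """Concatenate the "response" fragments from a streamed
    /api/generate reply (newline-delimited JSON)."""
    text = []
    for line in ndjson_lines:
        if not line.strip():
            continue
        obj = json.loads(line)
        text.append(obj.get("response", ""))
        if obj.get("done"):          # final object signals completion
            break
    return "".join(text)
```

Pass `"stream": false` in the request body instead if you prefer a single JSON response.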
Chat with a model
curl http://localhost:11434/api/chat -d '{
"model": "mistral",
"messages": [
{ "role": "user", "content": "why is the sky blue?" }
]
}'
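The chat endpoint is stateless: each request must carry the full conversation in "messages". A sketch of building a multi-turn payload; the helper and the sample message contents are illustrative:

```python
def add_turn(messages, role, content):
    """Append one turn; /api/chat expects the whole history each call."""
    messages.append({"role": role, "content": content})
    return messages

# Accumulate the conversation, then send the complete list every request
messages = []
add_turn(messages, "user", "why is the sky blue?")
add_turn(messages, "assistant", "Because of Rayleigh scattering.")
add_turn(messages, "user", "and at sunset?")
payload = {"model": "mistral", "messages": messages}
```

POST `payload` as JSON to http://localhost:11434/api/chat, then append the returned assistant message before the next turn.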
The contributors of this model have not provided a more detailed description. For the model files and weights, see the "Model files" page.
You can download the model with the git clone command below, or with the ModelScope SDK.
SDK download
#Install ModelScope
pip install modelscope
#Download the model via the SDK
from modelscope import snapshot_download
model_dir = snapshot_download('liush99/ollama_models')
Git download
#Download the model via Git
git clone https://www.modelscope.cn/liush99/ollama_models.git
If you are a contributor to this model, please complete the model card according to the model contribution documentation.