⚡️ MiniCPM-V can be deployed on end devices such as mobile phones. MiniCPM-V achieves strong results among models of comparable size on benchmarks such as MME and MMBench. MiniCPM-V is an efficient 3B multimodal model (i.e., OmniLMM-3B).
Click here to try out the Demo of MiniCPM-V. Currently, MiniCPM-V (i.e., OmniLMM-3B) can be deployed on mobile phones with Android and Harmony operating systems. Try it out here. Inference with Huggingface transformers is supported on Nvidia GPUs and on Macs with MPS (Apple silicon or AMD GPUs); the requirements below are tested on python 3.10.
News
Evaluation

| Model | Size | MME | MMB dev (en) | MMB dev (zh) | MMMU val | CMMMU val |
|---|---|---|---|---|---|---|
| LLaVA-Phi | 3.0B | 1335 | 59.8 | - | - | - |
| MobileVLM | 3.0B | 1289 | 59.6 | - | - | - |
| Imp-v1 | 3B | 1434 | 66.5 | - | - | - |
| Qwen-VL-Chat | 9.6B | 1487 | 60.6 | 56.7 | 35.9 | 30.7 |
| CogVLM | 17.4B | 1438 | 63.7 | 53.8 | 32.1 | - |
| MiniCPM-V | 3B | 1452 | 67.9 | 65.3 | 37.2 | 32.1 |
Examples
Demo
Deployment on Mobile Phone
Usage
Pillow==10.1.0
timm==0.9.10
torch==2.1.2
torchvision==0.16.2
transformers==4.36.0
sentencepiece==0.1.99
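The pinned versions above can be installed in one step with pip; a minimal sketch (the `requirements.txt` filename is just a convention, not from the original instructions):

```shell
# Write the pinned dependency list above to a requirements file.
cat > requirements.txt <<'EOF'
Pillow==10.1.0
timm==0.9.10
torch==2.1.2
torchvision==0.16.2
transformers==4.36.0
sentencepiece==0.1.99
EOF

# Then install everything at once (network access required):
#   pip install -r requirements.txt
echo "Run: pip install -r requirements.txt"
```

Pinning exact versions keeps the environment reproducible, which matters here because the model code is loaded with `trust_remote_code=True` and was tested against transformers 4.36.0.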
# test.py
import torch
from PIL import Image
from modelscope import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('openbmb/MiniCPM-V', trust_remote_code=True, torch_dtype=torch.bfloat16)
# For Nvidia GPUs that support BF16 (like A100, H100, RTX3090)
model = model.to(device='cuda', dtype=torch.bfloat16)
# For Nvidia GPUs that do NOT support BF16 (like V100, T4, RTX2080)
#model = model.to(device='cuda', dtype=torch.float16)
# For Mac with MPS (Apple silicon or AMD GPUs).
# Run with `PYTORCH_ENABLE_MPS_FALLBACK=1 python test.py`
#model = model.to(device='mps', dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V', trust_remote_code=True)
model.eval()

image = Image.open('xx.jpg').convert('RGB')
question = 'What is in the image?'
msgs = [{'role': 'user', 'content': question}]

answer, context, _ = model.chat(
    image=image,
    msgs=msgs,
    context=None,
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7
)
print(answer)
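Since `model.chat` takes a list of role-tagged messages, a follow-up turn can be sketched by appending to that list before calling `chat` again. This is a hypothetical helper, not part of the MiniCPM-V API, and the `'assistant'` role name is an assumption (only `'user'` appears in the snippet above):

```python
def extend_msgs(msgs, answer, follow_up):
    """Append the model's previous answer and the next user question
    to the running message list. The 'assistant' role is an assumption."""
    return msgs + [
        {'role': 'assistant', 'content': answer},
        {'role': 'user', 'content': follow_up},
    ]

# Build a second-turn message list from the first exchange.
msgs = [{'role': 'user', 'content': 'What is in the image?'}]
msgs = extend_msgs(msgs, 'A dog on a beach.', 'What breed is it?')
print(len(msgs))  # 3
```

The extended `msgs` (together with the returned `context`) would then be passed back into `model.chat` for the next turn.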
License
Model License
Statement