Model Description
Here we provide a model based on the industry-leading object detection framework DAMO-YOLO.
Model Evaluation

| Model | size | mAP val 0.5:0.95 | Latency (ms) T4-TRT-FP16 | FLOPs (G) | Parameters (M) |
|-------------|-----|------|------|------|------|
| YOLOX-S     | 640 | 40.5 | 3.20 | 26.8 | 9.0  |
| YOLOv5-S    | 640 | 37.4 | 3.04 | 16.5 | 7.2  |
| YOLOv6-S    | 640 | 43.5 | 3.10 | 44.2 | 17.0 |
| PP-YOLOE-S  | 640 | 43.0 | 3.21 | 17.4 | 7.9  |
| DAMO-YOLO-S | 640 | 46.8 | 3.83 | 37.8 | 16.3 |
Usage Scope
This model has a fairly broad range of applicability: it can localize most foreground objects (the 80 COCO classes) contained in an image.
How to Use
On the ModelScope framework, the current model can be used through a simple pipeline call by providing an input image. A concrete code example follows:
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

object_detect = pipeline(Tasks.image_object_detection, model='damo/cv_tinynas_object-detection_damoyolo')
img_path = 'https://modelscope.oss-cn-beijing.aliyuncs.com/test/images/image_detection.jpg'
result = object_detect(img_path)
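The pipeline returns a dictionary of detections. A minimal sketch of reading it out, assuming the standard ModelScope output keys 'scores', 'labels' and 'boxes' (check the returned dict if your version differs):

# Iterate over the detections returned by the pipeline call above.
for score, label, box in zip(result['scores'], result['labels'], result['boxes']):
    # box is expected to be [x1, y1, x2, y2] in pixel coordinates
    print(f'{label}: {score:.2f} at {box}')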
Training Example
DAMO-YOLO now supports training with custom data; you are welcome to try it. If you run into any problems, please report them to xianzhe.xxz@alibaba-inc.com.
Training DAMO-YOLO with custom data on ModelScope involves three key steps; a simple example follows. First, prepare your labels as a COCO-format annotation file, for example:
{
    "categories": [{
        "supercategory": "person",
        "id": 1,
        "name": "person"
    }],
    "images": [{
        "license": 1,
        "file_name": "000000425226.jpg",
        "coco_url": "http://images.cocodataset.org/val2017/000000425226.jpg",
        "height": 640,
        "width": 480,
        "date_captured": "2013-11-14 21:48:51",
        "flickr_url": "http://farm5.staticflickr.com/4055/4546463824_bc40e0752b_z.jpg",
        "id": 1
    }],
    "annotations": [{
        "image_id": 1,
        "category_id": 1,
        "segmentation": [],
        "area": 47803.279549999985,
        "iscrowd": 0,
        "bbox": [73.35, 206.02, 300.58, 372.5],
        "id": 1
    }]
}
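If your labels live in another format, a small script can assemble the same structure. The sketch below is only an illustration (the my_labels list and its layout are hypothetical, not part of the model's tooling); it writes a COCO-style annotation file matching the example above:

import json

# Hypothetical in-memory labels: (file_name, width, height, [x, y, w, h], category_id)
my_labels = [
    ('000000425226.jpg', 480, 640, [73.35, 206.02, 300.58, 372.5], 1),
]

coco = {
    'categories': [{'supercategory': 'person', 'id': 1, 'name': 'person'}],
    'images': [],
    'annotations': [],
}
for img_id, (file_name, width, height, bbox, cat_id) in enumerate(my_labels, start=1):
    coco['images'].append(
        {'file_name': file_name, 'width': width, 'height': height, 'id': img_id})
    coco['annotations'].append({
        'image_id': img_id,
        'category_id': cat_id,
        'segmentation': [],
        'iscrowd': 0,
        'bbox': bbox,              # COCO boxes are [x, y, width, height]
        'area': bbox[2] * bbox[3],
        'id': len(coco['annotations']) + 1,
    })

with open('toy_sample.json', 'w') as f:
    json.dump(coco, f, indent=2)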
Then, you can organize your custom data into the following structure:

├── custom_data
│   ├── annotations
│   │   └── toy_sample.json
│   ├── images
│   │   └── 000000425226.jpg
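Before launching training, it can help to confirm that every image referenced in the annotation file is actually present under images/. A minimal sketch, assuming the directory layout above:

import json
import os

root = 'custom_data'
with open(os.path.join(root, 'annotations', 'toy_sample.json')) as f:
    ann = json.load(f)

# List any annotated images that are missing from the images/ directory.
missing = [img['file_name'] for img in ann['images']
           if not os.path.exists(os.path.join(root, 'images', img['file_name']))]
print('missing images:', missing or 'none')

With the data in place, training can be launched through the ModelScope trainer: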
from modelscope.metainfo import Trainers
from modelscope.trainers import build_trainer

kwargs = dict(
    model='damo/cv_tinynas_object-detection_damoyolo',
    gpu_ids=[  # GPUs used for training
        0, 1, 2, 3, 4, 5, 6, 7
    ],
    batch_size=2,
    max_epochs=3,
    num_classes=10,  # number of classes in the custom dataset
    train_image_dir='./data/visdrone/VisDrone2019-DET-train/images',  # path to training images
    val_image_dir='./data/visdrone/VisDrone2019-DET-val/images',  # path to validation images
    train_ann='./data/visdrone/VisDrone2019-DET-train/annotations/visdrone_train.json',  # training annotation file
    val_ann='./data/visdrone/VisDrone2019-DET-val/annotations/visdrone_val.json',  # validation annotation file
    work_dir='./workdirs',
)
trainer = build_trainer(
    name=Trainers.tinynas_damoyolo, default_args=kwargs)
trainer.train()  # training logs will be saved to ./workdirs/damoyolo_s/train_log.txt
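A second example follows, this time fine-tuning from pretrained weights: cfg_file points at a local configuration.json under cache_path, load_pretrain enables loading the pretrain_model placed in the same directory, and the resulting checkpoint is evaluated at the end.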
import os

from modelscope.metainfo import Trainers
from modelscope.trainers import build_trainer

cache_path = './custom'
kwargs = dict(
    cfg_file=os.path.join(cache_path, 'configuration.json'),
    gpu_ids=[
        0,
    ],
    batch_size=2,
    max_epochs=3,
    num_classes=80,
    load_pretrain=True,
    pretrain_model='pretrain_weight.pth',  # pretrained weights; the file must be placed under cache_path,
                                           # and this setting only takes effect when load_pretrain=True
    base_lr_per_img=0.001,
    cache_path=cache_path,
    train_image_dir='./data/test/images/image_detection/images',
    val_image_dir='./data/test/images/image_detection/images',
    train_ann='./data/test/images/image_detection/annotations/coco_sample.json',
    val_ann='./data/test/images/image_detection/annotations/coco_sample.json',
)
trainer = build_trainer(
    name=Trainers.tinynas_damoyolo, default_args=kwargs)
trainer.train()
trainer.evaluate(
    checkpoint_path=os.path.join(cache_path, 'damoyolo_tinynasL25_S.pt'))  # evaluate model accuracy
Industrial Application Models
We provide a series of DAMO-YOLO models built for real-world industrial scenarios; you are welcome to try them. Stay tuned: more major models will be released soon!
Model Visualization
Citation
@article{damoyolo,
  title={DAMO-YOLO: A Report on Real-Time Object Detection Design},
  author={Xianzhe Xu, Yiqi Jiang, Weihua Chen, Yilun Huang, Yuan Zhang and Xiuyu Sun},
  journal={arXiv preprint arXiv:2211.15444v2},
  year={2022}
}