
Technical Information

Source URL
https://modelscope.cn/models/AI-ModelScope/dino-vitb16
License
Apache License 2.0

Model Details

Vision Transformer (base-sized model, patch size 16) trained using DINO

Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper Emerging Properties in Self-Supervised Vision Transformers by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in this repository.

Disclaimer: The team releasing DINO did not write a model card for this model, so this model card has been written by the Hugging Face team.

Model description

The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels.

Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of the sequence to use it for classification tasks, as well as absolute position embeddings, before feeding the sequence to the layers of the Transformer encoder.
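For a 224x224 input, the resulting sequence length can be checked with a quick calculation:

num_patches = (224 // 16) ** 2   # 196 non-overlapping 16x16 patches
seq_len = num_patches + 1        # 197 tokens once the [CLS] token is prepended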

Note that this model does not include any fine-tuned heads.

By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
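A minimal sketch of such a linear head on top of the frozen DINO encoder (the head, class count, and training setup here are illustrative assumptions, not part of the released model):

import torch
from transformers import ViTModel

# Load the pre-trained DINO backbone and freeze it; only the linear head would be trained.
backbone = ViTModel.from_pretrained('facebook/dino-vitb16')
for p in backbone.parameters():
    p.requires_grad = False

num_classes = 10  # hypothetical number of labels in your dataset
head = torch.nn.Linear(backbone.config.hidden_size, num_classes)

def classify(pixel_values):
    # last_hidden_state has shape (batch, 197, 768); index 0 is the [CLS] token
    cls_embedding = backbone(pixel_values=pixel_values).last_hidden_state[:, 0]
    return head(cls_embedding)  # logits of shape (batch, num_classes)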

Intended uses & limitations

You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you.

How to use

Here is how to use this model:

from transformers import ViTImageProcessor, ViTModel
from PIL import Image
import requests

# Load an example image from the COCO validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = ViTImageProcessor.from_pretrained('facebook/dino-vitb16')
model = ViTModel.from_pretrained('facebook/dino-vitb16')

# Preprocess the image and run it through the encoder
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
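last_hidden_states has shape (1, 197, 768): the first position is the [CLS] embedding and the remaining 196 positions correspond to the image patches. For example, to use the [CLS] vector as a single image feature:

cls_feature = last_hidden_states[:, 0]       # shape (1, 768): [CLS] representation of the image
patch_features = last_hidden_states[:, 1:]   # shape (1, 196, 768): one vector per 16x16 patch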

BibTeX entry and citation info

@article{DBLP:journals/corr/abs-2104-14294,
  author    = {Mathilde Caron and
               Hugo Touvron and
               Ishan Misra and
               Herv{\'{e}} J{\'{e}}gou and
               Julien Mairal and
               Piotr Bojanowski and
               Armand Joulin},
  title     = {Emerging Properties in Self-Supervised Vision Transformers},
  journal   = {CoRR},
  volume    = {abs/2104.14294},
  year      = {2021},
  url       = {https://arxiv.org/abs/2104.14294},
  archivePrefix = {arXiv},
  eprint    = {2104.14294},
  timestamp = {Tue, 04 May 2021 15:12:43 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2104-14294.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

