dpo-sdxl-text2image-v1

Anonymous user, July 31, 2024

Technical Information

Repository
https://modelscope.cn/models/AI-ModelScope/dpo-sdxl-text2image-v1
License
Apache License 2.0

Model Details

Diffusion Model Alignment Using Direct Preference Optimization


Direct Preference Optimization (DPO) for text-to-image diffusion models is a method to align diffusion models to human preferences by directly optimizing on human comparison data. Please see our paper, Diffusion Model Alignment Using Direct Preference Optimization.
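As a rough illustration of what "directly optimizing on human comparison data" means here: a Diffusion-DPO-style objective scores each (preferred, rejected) image pair by how much the fine-tuned model lowers its denoising error on the preferred image, relative to a frozen reference model. The sketch below works on scalar errors for one pair; the function name, scalar inputs, and default beta are illustrative assumptions, not the paper's implementation.

```python
import math

def diffusion_dpo_pair_loss(err_w, err_w_ref, err_l, err_l_ref, beta=5000.0):
    """Sketch of a pairwise DPO-style loss on scalar denoising errors.

    err_w / err_l: squared noise-prediction errors of the model being trained
    on the preferred (w) and rejected (l) images of one comparison pair;
    err_w_ref / err_l_ref are the same errors under the frozen reference
    model. beta scales how strongly the preference is enforced.
    """
    # Negative margin = the trained model improved more on the preferred image.
    margin = (err_w - err_w_ref) - (err_l - err_l_ref)
    # loss = -log sigmoid(-beta * margin) == softplus(beta * margin),
    # written in a numerically stable softplus form.
    z = beta * margin
    return max(z, 0.0) + math.log1p(math.exp(-abs(z)))
```

The loss is small when the trained model reduces the error on the preferred image more than on the rejected one (margin < 0), and grows as the preference is violated.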

This model is fine-tuned from stable-diffusion-xl-base-1.0 on the offline human preference data pickapic_v2.

Code

Code will come soon!

SD1.5

We also have a model fine-tuned from stable-diffusion-v1-5 available at dpo-sd1.5-text2image-v1.

A quick example

from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel
from modelscope import snapshot_download
import torch

# load the base SDXL pipeline
model_id_base = "AI-ModelScope/stable-diffusion-xl-base-1.0"
local_base = snapshot_download(model_id_base, revision='master')
pipe = StableDiffusionXLPipeline.from_pretrained(local_base, torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("cuda")

# load the DPO-finetuned UNet and swap it into the pipeline
unet_id = "AI-ModelScope/dpo-sdxl-text2image-v1"
local_unet = snapshot_download(unet_id, revision='master')
unet = UNet2DConditionModel.from_pretrained(local_unet, subfolder="unet", torch_dtype=torch.float16)
pipe.unet = unet
pipe = pipe.to("cuda")

prompt = "Two cats playing chess on a tree branch"
image = pipe(prompt, guidance_scale=5).images[0].resize((512, 512))

image.save("cats_playing_chess.png")

More details coming soon.

