Getting Started with the Diffusers Library

Date: 2023-02-23 12:12:20

The goals of the diffusers library are to:

  • Consolidate diffusion models into a single, long-term maintained project
  • Reproduce high-impact machine learning systems such as DALL-E and Imagen in a way that is accessible to the public
  • Make it easy for developers to use the API to train new models or to run inference with existing ones

The core of diffusers consists of three components:

  • Pipelines: high-level classes for quickly generating samples from popular diffusion models in a user-friendly way
  • Models: popular architectures for training new diffusion models, such as UNet
  • Schedulers: various techniques for generating images from noise at inference time, and for producing noised images during training (a sketch using Models and Schedulers directly follows this list)
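
To make the split between Models and Schedulers concrete, here is a minimal denoising-loop sketch that uses the two lower-level components directly (the checkpoint "google/ddpm-cat-256" and the 50-step setting are illustrative choices, not from the original text):

import torch
from diffusers import UNet2DModel, DDPMScheduler

# Models: a UNet that predicts the noise present in an image at a given timestep
model = UNet2DModel.from_pretrained("google/ddpm-cat-256")
# Schedulers: the rule for stepping from pure noise toward a clean image
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
scheduler.set_timesteps(50)

sample = torch.randn(1, 3, 256, 256)  # start from pure Gaussian noise
for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample  # predict the noise residual
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # one denoising step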
Installing diffusers
pip install diffusers
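
Note that the Stable Diffusion pipeline used below also depends on the transformers library (it provides the CLIP text encoder and tokenizer), so in practice you will usually install both:

pip install diffusers transformers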
Let's look at inference first

Import the pipeline class and load a model with from_pretrained(); this can be a local model, or one downloaded automatically from the Hugging Face Hub.

from diffusers import StableDiffusionPipeline

image_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
# To load a local model instead:
# image_pipe = StableDiffusionPipeline.from_pretrained("./models/Stablediffusion/stable-diffusion-v1-4")
image_pipe.to("cuda")

prompt = "a photograph of an astronaut riding a horse"
pipe_out = image_pipe(prompt)

image = pipe_out.images[0]
# you can save the image with
# image.save(f"astronaut_rides_horse.png")

Let's take a look at what image_pipe contains:

StableDiffusionPipeline {
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.10.2",
  "feature_extractor": [
    "transformers",
    "CLIPFeatureExtractor"
  ],
  "requires_safety_checker": true,
  "safety_checker": [
    "stable_diffusion",
    "StableDiffusionSafetyChecker"
  ],
  "scheduler": [
    "diffusers",
    "PNDMScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModel"
  ],
  "tokenizer": [
    "transformers",
    "CLIPTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}
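
This config maps each attribute name to a (library, class) pair, and every component is accessible as an attribute of the pipeline object, for example:

print(type(image_pipe.unet))       # UNet2DConditionModel from diffusers
print(type(image_pipe.scheduler))  # PNDMScheduler from diffusers
print(type(image_pipe.tokenizer))  # CLIPTokenizer from transformers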

Now look at the structure of pipe_out:

StableDiffusionPipelineOutput(
images=[<PIL.Image.Image image mode=RGB size=512x512 at 0x1A14BDD7730>], 
nsfw_content_detected=[False])

From this we can see that pipe_out contains two parts: the first is the list of generated images (when there is only one image, pipe_out.images[0] retrieves it), and the second, nsfw_content_detected, is the list of safety-checker flags.
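
As a small illustration of the second field (StableDiffusionPipeline's safety checker replaces flagged outputs with blank images, so the flag tells you whether that happened):

image = pipe_out.images[0]
if pipe_out.nsfw_content_detected[0]:
    print("The safety checker flagged this image and blanked it out.")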

What if we want to generate multiple images in one call? We only need to pass a list of prompts; its length determines how many images are generated. The code is as follows.

from diffusers import StableDiffusionPipeline

image_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

image_pipe.to("cuda")
prompt = ["a photograph of an astronaut riding a horse"] * 3
out_images = image_pipe(prompt).images
for i, out_image in enumerate(out_images):
    out_image.save(f"astronaut_rides_horse{i}.png")
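
Alternatively, newer diffusers releases accept a num_images_per_prompt argument in the pipeline call (check the documentation for your installed version; this sketch assumes it is available), which achieves the same effect with a single prompt:

out_images = image_pipe("a photograph of an astronaut riding a horse",
                        num_images_per_prompt=3).images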

When generating images with image_pipe, the default precision is float32. If local GPU memory is insufficient, you may hit an Out of memory error; this can be resolved by loading the model in float16 precision instead.

Note: If you are limited by GPU memory and have less than 10GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision as done above.

You can do so by loading the weights from the fp16 branch and by telling diffusers to expect the weights to be in float16 precision:

import torch

image_pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", revision="fp16", torch_dtype=torch.float16
)

Every pipeline has some pipeline-specific options. Besides the required prompt argument, StableDiffusionPipeline can also be configured with parameters such as the following (a usage sketch follows this list):

  • num_inference_steps: int = 50
  • guidance_scale: float = 7.5
  • generator: Optional[torch.Generator] = None
  • and so on
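
A minimal sketch of passing these parameters (the values 25 and 9.0 are illustrative): fewer inference steps trade quality for speed, while a higher guidance_scale pushes the output to follow the prompt more closely.

image = image_pipe(
    "a photograph of an astronaut riding a horse",
    num_inference_steps=25,
    guidance_scale=9.0,
).images[0]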

Example: if you want identical results on every run, seed the generator with the same value each time (a torch.Generator's state advances as it is consumed, so re-seed it before each run you want to reproduce):

generator = torch.Generator("cuda").manual_seed(1024)
prompt = ["a photograph of an astronaut riding a horse"] * 3
out_images = image_pipe(prompt, generator=generator).images
Now let's look at training