
Releases: huggingface/diffusers

v0.2.4: Patch release

22 Aug 17:09

This patch release allows the Stable Diffusion pipelines to be loaded with float16 precision:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=True
)
pipe = pipe.to("cuda")

The resulting models take up less than 6900 MiB of GPU memory.
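
As a rough check of that figure, you can measure the peak GPU memory PyTorch allocates during a single generation. Below is a minimal sketch; the prompt is arbitrary and the exact number will vary with GPU, driver and image size:

import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

# Sketch: measure peak GPU memory used by one fp16 generation.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=True
).to("cuda")

torch.cuda.reset_peak_memory_stats()
with autocast("cuda"):
    image = pipe("a photo of an astronaut riding a horse on mars")["sample"][0]
print(f"Peak GPU memory: {torch.cuda.max_memory_allocated() / 1024**2:.0f} MiB")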

v0.2.3: Stable Diffusion public release

22 Aug 08:59

🎨 Stable Diffusion public release

The Stable Diffusion checkpoints are now public and can be loaded by anyone! 🥳

  • Make sure to accept the license terms on the model page first (requires login): https://huggingface.co/CompVis/stable-diffusion-v1-4
  • Install the required packages: pip install diffusers==0.2.3 transformers scipy
  • Log in on your machine using the huggingface-cli login command.

from torch import autocast
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

# this will substitute the default PNDM scheduler for K-LMS  
lms = LMSDiscreteScheduler(
    beta_start=0.00085, 
    beta_end=0.012, 
    beta_schedule="scaled_linear"
)

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", 
    scheduler=lms,
    use_auth_token=True
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt)["sample"][0]  
    
image.save("astronaut_rides_horse.png")

The safety checker

Following the model authors' guidelines and code, Stable Diffusion inference results are now filtered to exclude unsafe content. Any image classified as unsafe is returned as a blank image. To check programmatically whether the safety module was triggered, inspect the nsfw_content_detected flag like so:

outputs = pipe(prompt)
image = outputs["sample"][0]
if any(outputs["nsfw_content_detected"]):
    print("Potential unsafe content was detected in one or more images. Try again with a different prompt and/or seed.")

Improvements and bugfixes

Full Changelog: v0.2.2...v0.2.3

v0.2.2

16 Aug 17:59

This patch release fixes an import error affecting the StableDiffusionPipeline.

[K-LMS Scheduler] fix import by @patrickvonplaten in #191

v0.2.1 Patch release

16 Aug 16:32

This patch release fixes a small bug in the StableDiffusionPipeline.

v0.2.0: Stable Diffusion early access, K-LMS sampling

16 Aug 15:52

Stable Diffusion

Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. It is trained on 512x512 images from a subset of the LAION-5B database and uses a frozen CLIP ViT-L/14 text encoder to condition generation on text prompts. With its 860M-parameter UNet and 123M-parameter text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB of VRAM.
See the model card for more information.

The Stable Diffusion weights are currently only available to universities, academics, research institutions and independent researchers. Please request access by applying via this form.

from torch import autocast
from diffusers import StableDiffusionPipeline

# make sure you're logged in with `huggingface-cli login`
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-3-diffusers", use_auth_token=True)  

prompt = "a photograph of an astronaut riding a horse"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7)["sample"][0]  # image here is in PIL format
    
image.save(f"astronaut_rides_horse.png")

K-LMS sampling

The new LMSDiscreteScheduler is a port of k-lms from k-diffusion by Katherine Crowson.
The scheduler can be easily swapped into existing pipelines like so:

from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

model_id = "CompVis/stable-diffusion-v1-3-diffusers"
# Use the K-LMS scheduler here instead
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, use_auth_token=True)

Integration test with text-to-image script of Stable-Diffusion

#182 and #186 ensure that the DDIM and PNDM/PLMS schedulers yield results identical to those of the original Stable Diffusion implementation.
Try it out yourself:

In Stable-Diffusion:

python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --n_samples 4 --n_iter 1 --fixed_code --plms

or

python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --n_samples 4 --n_iter 1 --fixed_code

In diffusers:

from diffusers import StableDiffusionPipeline, DDIMScheduler
import os
from time import time
from PIL import Image
from einops import rearrange
import numpy as np
import torch
from torch import autocast
from torchvision.utils import make_grid

torch.manual_seed(42)

prompt = "a photograph of an astronaut riding a horse"
#prompt = "a photograph of the eiffel tower on the moon"
#prompt = "an oil painting of a futuristic forest gives"

# uncomment to use DDIM
# scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False)
# pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-3-diffusers", use_auth_token=True, scheduler=scheduler)  # make sure you're logged in with `huggingface-cli login`

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-3-diffusers", use_auth_token=True)  # make sure you're logged in with `huggingface-cli login`

all_images = []
num_rows = 1
num_columns = 4
for _ in range(num_rows):
    with autocast("cuda"):
        images = pipe(num_columns * [prompt], guidance_scale=7.5, output_type="np")["sample"]  # with output_type="np", images is a numpy array of shape (batch, height, width, channels)
        all_images.append(torch.from_numpy(images))

# additionally, save as grid
grid = torch.stack(all_images, 0)
grid = rearrange(grid, 'n b h w c -> (n b) h w c')
grid = rearrange(grid, 'n h w c -> n c h w')
grid = make_grid(grid, nrow=num_rows)

# to image
grid = 255. * rearrange(grid, 'c h w -> h w c').cpu().numpy()
image = Image.fromarray(grid.astype(np.uint8))

image.save(f"./images/diffusers/{'_'.join(prompt.split())}_{round(time())}.png")

Improvements and bugfixes

Full Changelog: 0.1.3...v0.2.0

0.1.3 Patch release

28 Jul 09:04

This patch release refactors the model architecture of VQModel and AutoencoderKL, including the weight naming. The official weights of the CompVis organization have therefore been re-uploaded, see:

Corresponding PR: #137

Please make sure to upgrade diffusers so that those models run correctly: pip install --upgrade diffusers

Bug fixes

  • Fix FileNotFoundError: 'model_card_template.md' #136

Initial release of 🧨 Diffusers

21 Jul 14:52

These are the release notes of the 🧨 Diffusers library

Introducing Hugging Face's new library for diffusion models.

Diffusion models have proven very effective at generative synthesis, even beating GANs on image generation. Because of that, they have gained traction in the machine learning community and play an important role in systems like DALL-E 2 and Imagen, which generate photorealistic images from text prompts.

While the most prolific successes of diffusion models have been in the computer vision community, these models have also achieved remarkable results in other domains, such as:

and more.

Goals

The goals of diffusers are:

  • to centralize the research of diffusion models from independent repositories into a clear and maintained project,
  • to reproduce high-impact machine learning systems such as DALL-E and Imagen in a manner that is accessible to the public, and
  • to create an easy-to-use API that enables anyone to train their own models or reuse checkpoints from other repositories for inference.

Release overview

Quickstart:

Diffusers aims to be a modular toolbox for diffusion techniques, with a focus on the following categories:

🚄 Inference pipelines

Inference pipelines are a collection of end-to-end diffusion systems that can be used out-of-the-box. The goal is for them to stick as close as possible to their original implementation, and they can include components of other libraries (such as text encoders).
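
For illustration, running one of these pipelines end to end only takes a few lines. Below is a minimal sketch, assuming the publicly hosted google/ddpm-celebahq-256 checkpoint and the dict-style outputs of the pipelines in this release, where "sample" holds the raw generated batch in [-1, 1]:

import numpy as np
import PIL.Image
from diffusers import DDPMPipeline

# Sketch: run an out-of-the-box unconditional pipeline end to end.
ddpm = DDPMPipeline.from_pretrained("google/ddpm-celebahq-256")
image = ddpm()["sample"]  # tensor of shape (batch, channels, height, width) in [-1, 1]

# post-process to an 8-bit PIL image and save it
image_processed = (image.cpu().permute(0, 2, 3, 1) + 1.0) * 127.5
image_pil = PIL.Image.fromarray(image_processed.numpy().astype(np.uint8)[0])
image_pil.save("ddpm_generated_image.png")

Note that DDPM sampling runs the full 1000 denoising steps, so this is slow without a GPU.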

The original release contains the following pipelines:

We are currently working on enabling other pipelines for different modalities. The following pipelines are expected to land in a subsequent release:

  • BDDMPipeline for spectrogram-to-sound vocoding
  • GLIDEPipeline to support OpenAI's GLIDE model
  • Grad-TTS for text to audio generation / conditional audio generation
  • A reinforcement learning pipeline (happening in #105)

⏰ Schedulers

  • Schedulers are the algorithms used to run diffusion models at inference time as well as during training. They include the noise schedules and define algorithm-specific diffusion steps.
  • Schedulers can be swapped interchangeably between diffusion models at inference time to find the preferred trade-off between speed and generation quality.
  • Schedulers are available in numpy, but can easily be transformed into PyTorch.

The goal is for each scheduler to provide one or more step() functions that are called iteratively to unroll the diffusion loop during the forward pass. The schedulers are framework-agnostic, but offer conversion methods for easy use with PyTorch utilities.
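
To make the role of step() concrete, here is a minimal sketch of the loop it unrolls. The model is randomly initialized so the snippet is self-contained (a real use case would load pretrained weights), and it assumes the tensor_format argument and the dict-style outputs ("sample", "prev_sample") of the models and schedulers shipped in this release:

import torch
from diffusers import UNet2DModel, DDPMScheduler

# Sketch: the reverse-diffusion loop that a scheduler's step() function drives.
model = UNet2DModel(sample_size=32, in_channels=3, out_channels=3)       # randomly initialized, for illustration only
scheduler = DDPMScheduler(num_train_timesteps=1000, tensor_format="pt")  # keep everything as torch tensors

sample = torch.randn(1, 3, 32, 32)  # start from pure Gaussian noise

for t in reversed(range(1000)):
    with torch.no_grad():
        noise_pred = model(sample, t)["sample"]                    # predict the noise residual
    sample = scheduler.step(noise_pred, t, sample)["prev_sample"]  # one denoising step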

The initial release contains the following schedulers:

🏭 Models

Models are hosted in the src/diffusers/models folder.

For the initial release, you'll get to see a few building blocks, as well as some resulting models:

  • UNet2DModel is an implementation of the UNet architecture used in recent diffusion papers. It is the unconditional counterpart of the conditional model described next.
  • UNet2DConditionModel is similar to the UNet2DModel, but is conditional: its downsample and upsample blocks use a cross-attention mechanism so that generation can be conditioned on embeddings produced by other models (for example, a text encoder). An example of a pipeline using a conditional UNet model is the latent diffusion pipeline (see the sketch after this list).
  • AutoencoderKL and VQModel are still experimental models that are prone to breaking changes in the near future. However, they can already be used as part of the Latent Diffusion pipelines.
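
To see these building blocks working together, here is a minimal sketch that runs the latent diffusion text-to-image pipeline, assuming the CompVis/ldm-text2im-large-256 checkpoint and the dict-style pipeline outputs used at the time of this release:

from diffusers import DiffusionPipeline

# Sketch: the latent diffusion pipeline combines a conditional UNet, an autoencoder and a text encoder.
ldm = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")

prompt = "A painting of a squirrel eating a burger"
images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6)["sample"]

for idx, image in enumerate(images):
    image.save(f"squirrel-{idx}.png")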

📃 Training example

The first release contains a dataset-agnostic unconditional example and a training notebook:
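
To give an idea of what such a training loop does, below is a minimal sketch of a single unconditional training step. The data is a random stand-in for a real batch, the hyperparameters are illustrative, and it assumes the scheduler's add_noise method and the dict-style model outputs of this release (the real example additionally handles datasets, EMA, logging, etc.):

import torch
import torch.nn.functional as F
from diffusers import UNet2DModel, DDPMScheduler

# Sketch: one training step of an unconditional diffusion model.
model = UNet2DModel(sample_size=32, in_channels=3, out_channels=3)
noise_scheduler = DDPMScheduler(num_train_timesteps=1000, tensor_format="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

clean_images = torch.randn(4, 3, 32, 32)  # stand-in for a batch of real training images
noise = torch.randn_like(clean_images)
timesteps = torch.randint(0, 1000, (clean_images.shape[0],), dtype=torch.long)

# forward diffusion: corrupt the clean images with the sampled noise
noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)

# the model is trained to predict the noise that was added
noise_pred = model(noisy_images, timesteps)["sample"]
loss = F.mse_loss(noise_pred, noise)

loss.backward()
optimizer.step()
optimizer.zero_grad()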

Credits

This library concretizes previous work by many different authors and would not have been possible without their great research and implementations. We'd like to thank, in particular, the following implementations, which have helped us in our development and without which the API could not have been as polished as it is today:

We also want to thank @heejkoo for the very helpful overview of papers, code and resources on diffusion models, available here.