We present ControlNet, a neural network architecture that adds spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models and reuses their deep and robust encoding layers, pretrained with billions of images, as a strong backbone for learning a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero, ensuring that no harmful noise affects the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, and human pose, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.
Source
Adding Conditional Control to Text-to-Image Diffusion Models
http://arxiv.org/abs/2302.05543
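The zero-convolution mechanism described in the abstract is straightforward to illustrate. Below is a minimal PyTorch sketch of one controlled block, assuming a frozen pretrained block and a trainable copy joined by two zero-initialized 1x1 convolutions; the names `ControlledBlock`, `zero_conv`, and `locked_block` are illustrative, not the paper's official API. Because both zero convolutions output zero at the first training step, the block initially reproduces the locked model exactly.

```python
import copy
import torch
import torch.nn as nn


def zero_conv(channels: int) -> nn.Conv2d:
    """1x1 convolution with weight and bias initialized to zero
    (a "zero convolution"): its output is zero at the start of training."""
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv


class ControlledBlock(nn.Module):
    """One locked pretrained block plus a trainable copy, joined by
    zero convolutions (illustrative names, not the paper's API)."""

    def __init__(self, locked_block: nn.Module, channels: int):
        super().__init__()
        self.copy = copy.deepcopy(locked_block)   # trainable copy
        self.locked = locked_block
        for p in self.locked.parameters():        # lock the pretrained weights
            p.requires_grad_(False)
        self.zero_in = zero_conv(channels)        # injects the condition
        self.zero_out = zero_conv(channels)       # gates the copy's output

    def forward(self, x, condition):
        # At step 0 both zero convs output 0, so the copy sees exactly x
        # and the residual added below is exactly 0: the block behaves
        # identically to the original locked model.
        h = self.copy(x + self.zero_in(condition))
        return self.locked(x) + self.zero_out(h)


# Usage: before any training, the controlled block matches the locked block.
block = ControlledBlock(nn.Conv2d(64, 64, 3, padding=1), channels=64)
x = torch.randn(1, 64, 32, 32)
c = torch.randn(1, 64, 32, 32)
assert torch.allclose(block(x, c), block.locked(x))
```

In the full architecture this pattern is applied to the encoder blocks of Stable Diffusion's U-Net, with the zero-convolution outputs added back to the locked model's skip connections, so finetuning starts from the pretrained model's exact behavior.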