DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model

CVPR 2023

1Dept. of Electrical and Computer Engineering, 2INMC & IPAI
Seoul National University, Korea

Video

DATID-3D succeeds in text-guided domain adaptation of 3D-aware generative models, preserving the diversity inherent in the text prompt while enabling high-quality, pose-controlled image synthesis with excellent text-image correspondence.

Abstract

Recent 3D generative models have achieved remarkable performance in synthesizing high-resolution, photorealistic images with view consistency and detailed 3D shapes, but training them for diverse domains is challenging since it requires a massive number of training images along with their camera distribution information.

Text-guided domain adaptation methods have shown impressive performance in converting a 2D generative model trained on one domain into models for other domains with different styles by leveraging CLIP (Contrastive Language-Image Pre-training), rather than collecting massive datasets for those domains. However, one drawback is that the sample diversity of the original generative model is not well preserved in the domain-adapted generative models due to the deterministic nature of the CLIP text encoder. Text-guided domain adaptation is even more challenging for 3D generative models, not only because of catastrophic diversity loss, but also because of inferior text-image correspondence and poor image quality.

Here we propose DATID-3D, a novel pipeline of text-guided domain adaptation tailored for 3D generative models using text-to-image diffusion models that can synthesize diverse images per text prompt, without collecting additional images and camera information for the target domain. Unlike 3D extensions of prior text-guided domain adaptation methods, our novel pipeline is able to fine-tune the state-of-the-art 3D generator of the source domain to synthesize high-resolution, multi-view consistent images in text-guided target domains without additional data, outperforming existing text-guided domain adaptation methods in diversity and text-image correspondence. Furthermore, we propose and demonstrate diverse 3D image manipulations, such as one-shot instance-selected adaptation and single-view manipulated 3D reconstruction, to fully exploit the diversity in text.

Overview

{"overview"}

Overview of DATID-3D, our novel pipeline for text-guided domain adaptation of 3D generative models. We first construct a target-domain dataset using a pre-trained text-to-image diffusion model, then refine the dataset through a filtering process, and finally fine-tune the 3D generator using an adversarial loss and density regularization.
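For intuition, here is a minimal Python sketch of the three stages described above. The source 3D generator (`source_generator`), pose sampler (`sample_camera_pose`), CLIP-based filter (`passes_filter`), and adversarial fine-tuning routine (`finetune_3d_generator`) are hypothetical placeholders for illustration, not the released API; only the diffusers img2img call reflects a real library interface, and all hyperparameters are assumptions.

```python
# Hedged sketch of the DATID-3D pipeline (stage names follow the overview above).
# `source_generator`, `sample_camera_pose`, `passes_filter`, and
# `finetune_3d_generator` are hypothetical placeholders for illustration.
import torch
from diffusers import StableDiffusionImg2ImgPipeline

prompt = "a 3D render of a face in Pixar style"

# Stage 1: shift pose-conditioned source-domain renders to the target domain
# with a pre-trained text-to-image diffusion model (img2img-style translation).
sd = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

target_dataset = []
for _ in range(10_000):                      # dataset size is illustrative
    z = torch.randn(1, 512, device="cuda")   # latent for the source 3D GAN
    pose = sample_camera_pose()              # hypothetical camera pose sampler
    src_img = source_generator(z, pose)      # pose-conditioned source render
    tgt_img = sd(prompt=prompt, image=src_img, strength=0.7,
                 guidance_scale=7.5).images[0]
    target_dataset.append((tgt_img, pose))   # keep the pose label for training

# Stage 2: filter out samples with poor pose or text-image correspondence,
# e.g. via CLIP similarity and pose-consistency checks (hypothetical filter).
target_dataset = [(img, pose) for img, pose in target_dataset
                  if passes_filter(img, pose, prompt)]

# Stage 3: fine-tune the 3D generator on the filtered image-pose pairs with an
# adversarial loss plus density regularization (hypothetical trainer).
adapted_generator = finetune_3d_generator(source_generator, target_dataset)
```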

Interactive demo

{"result1"}

We provide an interactive Gradio app as well as a Colab demo for exploring the results of DATID-3D.
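As an illustration of what such a demo might wrap, here is a minimal Gradio sketch; `generate_pose_controlled_image` is a hypothetical helper around a domain-adapted 3D generator, not the actual app code, and the slider ranges are assumptions.

```python
# Minimal Gradio sketch; `generate_pose_controlled_image` is a hypothetical
# wrapper around a domain-adapted 3D generator, not the released demo code.
import gradio as gr


def generate_pose_controlled_image(prompt: str, seed: int, yaw: float):
    # In the real demo this would sample a latent from `seed`, set the camera
    # yaw, and render an image from the generator adapted to `prompt`.
    raise NotImplementedError


demo = gr.Interface(
    fn=generate_pose_controlled_image,
    inputs=[
        gr.Textbox(label="Target domain prompt"),
        gr.Slider(0, 2**31 - 1, step=1, label="Seed"),
        gr.Slider(-0.5, 0.5, label="Camera yaw (radians)"),
    ],
    outputs=gr.Image(label="Pose-controlled sample"),
)

if __name__ == "__main__":
    demo.launch()
```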

Pose-controlled images and 3D shapes

{"result1"}

Instance-selected domain adaptation

{"instance_selected"}

One-shot fine-tuning of the text-to-image diffusion model for instance-selected domain adaptation. The resulting text-to-image diffusion model is then used in Stage 1 of our pipeline.
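The one-shot fine-tuning step can be pictured as a DreamBooth-style update of the text-to-image diffusion model on the single selected instance. The sketch below uses standard diffusers components, but the prompt, image path, learning rate, and number of steps are illustrative assumptions rather than the paper's released recipe.

```python
# DreamBooth-style one-shot fine-tuning sketch on a single selected instance.
# The prompt, image path, and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline, DDPMScheduler
from PIL import Image
from torchvision import transforms

device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
unet, vae, text_encoder, tokenizer = pipe.unet, pipe.vae, pipe.text_encoder, pipe.tokenizer
noise_scheduler = DDPMScheduler.from_config(pipe.scheduler.config)

vae.requires_grad_(False)
text_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(unet.parameters(), lr=2e-6)

# Single selected instance and an identifier prompt (both hypothetical).
image = transforms.ToTensor()(
    Image.open("selected_pixar_sample.png").convert("RGB").resize((512, 512)))
image = (image * 2 - 1).unsqueeze(0).to(device)
prompt = "a 3D render of a sks face in Pixar style"

ids = tokenizer(prompt, padding="max_length",
                max_length=tokenizer.model_max_length,
                truncation=True, return_tensors="pt").input_ids.to(device)
text_emb = text_encoder(ids)[0]

for step in range(200):  # a few hundred steps usually suffice for one shot
    latents = vae.encode(image).latent_dist.sample() * 0.18215
    noise = torch.randn_like(latents)
    t = torch.randint(0, noise_scheduler.config.num_train_timesteps, (1,), device=device)
    noisy_latents = noise_scheduler.add_noise(latents, noise, t)
    noise_pred = unet(noisy_latents, t, encoder_hidden_states=text_emb).sample
    loss = F.mse_loss(noise_pred, noise)  # predict the added noise
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```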

{"result1"}

Results of instance-selected domain adaptation, where a single Pixar-style sample is selected and used to generate further diverse samples of that instance.

Single-view 3D manipulated reconstruction

{"result1"}

Going beyond prior 2D text-guided image manipulation methods, our approach enables (1) lifting text-guided manipulated images to 3D and (2) choosing one result among the diverse outputs produced from a single text prompt.
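One way to picture the lifting step is as GAN inversion into the domain-adapted 3D generator via latent optimization. The sketch below is only a conceptual illustration under that assumption; `load_manipulated_image`, `adapted_generator`, `frontal_pose`, and `lpips_loss` are hypothetical names, and the losses and step counts are not the paper's exact procedure.

```python
# Sketch of lifting a manipulated single-view image to 3D via latent
# optimization (GAN inversion) into the domain-adapted generator.
# `load_manipulated_image`, `adapted_generator`, `frontal_pose`, and
# `lpips_loss` are hypothetical placeholders.
import torch

target = load_manipulated_image()            # the 2D text-guided manipulated view
w = adapted_generator.mean_latent().clone().requires_grad_(True)
opt = torch.optim.Adam([w], lr=1e-2)

for step in range(500):
    render = adapted_generator.synthesis(w, frontal_pose)  # render at the input view
    loss = lpips_loss(render, target) + 0.1 * (render - target).abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After optimization, render `w` from novel camera poses to obtain a
# view-consistent 3D reconstruction of the manipulated image.
```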

Wide range of text-guided adaptation results

{"result1"}
{"result1"}
{"result1"}

BibTeX

@misc{kim2022datid3d,
      title={DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model},
      author={Gwanghyun Kim and Se Young Chun},
      year={2022},
      eprint={2211.16374},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}