2023:
- Li et al., 3D-CLFusion: Fast Text-to-3D Rendering with Contrastive Latent Diffusion, arXiv, 2023.
- Hong et al., Debiasing Scores and Prompts of 2D Diffusion for Robust Text-to-3D Generation, arXiv, 2023.
- Kim et al., PODIA-3D: Domain Adaptation of 3D Generative Model Across Large Domain Gap Using Pose-Preserved Text-to-Image Diffusion, arXiv, 2023.
- Singer et al., Text-To-4D Dynamic Scene Generation, arXiv, 2023.
- Liu et al., Zero-1-to-3: Zero-shot One Image to 3D Object, arXiv, 2023.
- Seo et al., Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation, arXiv, 2023.
- Tang et al., Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior, arXiv, 2023.
- Zhang et al., Text-to-image Diffusion Models in Generative AI: A Survey, arXiv, 2023.
- Lin et al., Magic3D: High-Resolution Text-to-3D Content Creation, CVPR, 2023.
- Poole et al., DreamFusion: Text-to-3D using 2D Diffusion, ICLR, 2023.
- Richardson et al., TEXTure: Text-Guided Texturing of 3D Shapes, arXiv, 2023.
- Haque et al., Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions, arXiv, 2023.
- Jun and Nichol, Shap-E: Generating Conditional 3D Implicit Functions, arXiv, 2023.
- Chen et al., Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation, arXiv, 2023.