A curated list of awesome AIGC 3D papers, inspired by awesome-NeRF.
- Generative AI meets 3D: A Survey on Text-to-3D in AIGC Era, Li et al., arxiv 2023 | bibtex
## Object
### High Quality
- DreamFields: Zero-Shot Text-Guided Object Generation with Dream Fields, Jain et al., CVPR 2022 | github | bibtex
- DreamFusion: Text-to-3D using 2D Diffusion, Poole et al., ICLR 2023 | github | bibtex
- Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation, Wang et al., CVPR 2023 | github | bibtex
- RealFusion: 360° Reconstruction of Any Object from a Single Image, Melas-Kyriazi et al., CVPR 2023 | github | bibtex
- 3DFuse: Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation, Seo et al., arxiv 2023 | github | bibtex
- Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models, Xu et al., CVPR 2023 | bibtex
- Magic3D: High-Resolution Text-to-3D Content Creation, Lin et al., CVPR 2023 | bibtex
- Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation, Chen et al., ICCV 2023 | github | bibtex
- Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior, Tang et al., ICCV 2023 | github | bibtex
- HiFA: High-fidelity Text-to-3D with Advanced Diffusion Guidance, Zhu et al., arxiv 2023 | github | bibtex
- ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation, Wang et al., NeurIPS 2023 | github | bibtex
- DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior, Sun et al., arxiv 2023 | github | bibtex
- LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching, Liang et al., arxiv 2023 | github | bibtex
- RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D, Qiu et al., arxiv 2023 | github | bibtex
- X-Dreamer: Creating High-quality 3D Content by Bridging the Domain Gap Between Text-to-2D and Text-to-3D Generation, Ma et al., arxiv 2023 | github | bibtex
- StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D, Guo et al., arxiv 2023 | bibtex
- CAD: Photorealistic 3D Generation via Adversarial Distillation, Wan et al., arxiv 2023 | github | bibtex
- BiDiff: Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors, Ding et al., arxiv 2023 | github | bibtex
- SSD: Stable Score Distillation for High-Quality 3D Generation, Tang et al., arxiv 2023 | bibtex
- UniDream: Unifying Diffusion Priors for Relightable Text-to-3D Generation, Liu et al., arxiv 2023 | github | bibtex
- Text-to-3D with Classifier Score Distillation, Yu et al., arxiv 2023 | github | bibtex
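Most of the entries above optimize a 3D representation through Score Distillation Sampling (SDS), introduced by DreamFusion. The sketch below is a minimal NumPy illustration of one SDS step, not any paper's implementation; `predict_noise` is a toy placeholder standing in for a pretrained text-conditioned diffusion U-Net.

```python
import numpy as np

def predict_noise(noisy_image, t):
    """Toy stand-in for a diffusion model's noise predictor
    eps_phi(x_t, t); a real system would run a pretrained,
    text-conditioned U-Net here."""
    return 0.5 * noisy_image  # hypothetical: pulls values toward zero

def sds_gradient(rendered, alphas_cumprod, rng):
    """One SDS step: noise the rendered image at a random timestep t,
    query the diffusion model, and use w(t) * (eps_pred - eps) as the
    gradient w.r.t. the render (the U-Net Jacobian is dropped, as in
    DreamFusion)."""
    t = rng.integers(0, len(alphas_cumprod))
    a_bar = alphas_cumprod[t]
    eps = rng.standard_normal(rendered.shape)
    noisy = np.sqrt(a_bar) * rendered + np.sqrt(1.0 - a_bar) * eps
    w = 1.0 - a_bar  # a common timestep weighting
    return w * (predict_noise(noisy, t) - eps)

rng = np.random.default_rng(0)
alphas_cumprod = np.linspace(0.999, 0.01, 1000)  # toy noise schedule
rendered = rng.standard_normal((8, 8, 3))        # e.g. a NeRF render
grad = sds_gradient(rendered, alphas_cumprod, rng)
```

In practice this gradient is backpropagated through a differentiable renderer into NeRF or Gaussian parameters; variants in the list (VSD in ProlificDreamer, ISM in LucidDreamer, CSD) mainly change how this gradient is formed.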
### Multi-view Consistent
- Zero-1-to-3: Zero-shot One Image to 3D Object, Liu et al., ICCV 2023 | github | bibtex
- ConRad: Image Constrained Radiance Fields for 3D Generation from a Single Image, Purushwalkam et al., NeurIPS 2023 | bibtex
- One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization, Liu et al., NeurIPS 2023 | github | bibtex
- Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors, Qian et al., arxiv 2023 | github | bibtex
- SyncDreamer: Generating Multiview-consistent Images from a Single-view Image, Liu et al., arxiv 2023 | github | bibtex
- MVDream: Multi-view Diffusion for 3D Generation, Shi et al., arxiv 2023 | github | bibtex
- Consistent-1-to-3: Consistent Image to 3D View Synthesis via Geometry-aware Diffusion Models, Ye et al., 3DV 2024 | bibtex
- Consistent123: One Image to Highly Consistent 3D Asset Using Case-Aware Diffusion Priors, Lin et al., arxiv 2024 | bibtex
- Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model, Shi et al., arxiv 2023 | github | bibtex
- Wonder3D: Single Image to 3D using Cross-Domain Diffusion, Long et al., arxiv 2023 | github | bibtex
- SweetDreamer: Aligning Geometric Priors in 2D Diffusion for Consistent Text-to-3D, Li et al., arxiv 2023 | github | bibtex
- One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion, Liu et al., arxiv 2023 | github | bibtex
- TOSS: High-quality Text-guided Novel View Synthesis from a Single Image, Shi et al., arxiv 2023 | bibtex
- Direct2.5: Diverse Text-to-3D Generation via Multi-view 2.5D Diffusion, Lu et al., arxiv 2023 | bibtex
- GeoDream: Disentangling 2D and Geometric Priors for High-Fidelity and Consistent 3D Generation, Ma et al., arxiv 2023 | github | bibtex
- DreamComposer: Controllable 3D Object Generation via Multi-View Conditions, Yang et al., arxiv 2023 | github | bibtex
- Cascade-Zero123: One Image to Highly Consistent 3D with Self-Prompted Nearby Views, Chen et al., arxiv 2023 | github | bibtex
- Free3D: Consistent Novel View Synthesis without 3D Representation, Zheng et al., arxiv 2023 | github | bibtex
- Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable 2D Repainting, Zhang et al., arxiv 2023 | github | bibtex
- Splatter Image: Ultra-Fast Single-View 3D Reconstruction, Szymanowicz et al., arxiv 2023 | github | bibtex
- Carve3D: Improving Multi-view Reconstruction Consistency for Diffusion Models with RL Finetuning, Xie et al., arxiv 2023 | bibtex
- HarmonyView: Harmonizing Consistency and Diversity in One-Image-to-3D, Woo et al., arxiv 2023 | github | bibtex
- ImageDream: Image-Prompt Multi-view Diffusion for 3D Generation, Wang et al., arxiv 2023 | github | bibtex
- iFusion: Inverting Diffusion for Pose-Free Reconstruction from Sparse Views, Wu et al., arxiv 2023 | github | bibtex
### Faster
- DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation, Tang et al., arxiv 2023 | github | bibtex
- Gsgen: Text-to-3D using Gaussian Splatting, Chen et al., arxiv 2023 | github | bibtex
- LRM: Large Reconstruction Model for Single Image to 3D, Hong et al., arxiv 2023 | bibtex
- Instant3D: Fast Text-to-3D with Sparse-View Generation and Large Reconstruction Model, Li et al., arxiv 2023 | bibtex
- DMV3D: Denoising Multi-View Diffusion using 3D Large Reconstruction Model, Xu et al., arxiv 2023 | bibtex
- Instant3D: Instant Text-to-3D Generation, Li et al., arxiv 2023 | bibtex
- HyperFields: Towards Zero-Shot Generation of NeRFs from Text, Babu et al., arxiv 2023 | github | bibtex
- GaussianDreamer: Fast Generation from Text to 3D Gaussians by Bridging 2D and 3D Diffusion Models, Yi et al., arxiv 2023 | github | bibtex
- CG3D: Compositional Generation for Text-to-3D via Gaussian Splatting, Vilesov et al., arxiv 2023 | bibtex
- ZeroRF: Fast Sparse View 360° Reconstruction with Zero Pretraining, Shi et al., arxiv 2023 | github | bibtex
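Many of the fast methods above swap NeRF for 3D Gaussian Splatting, whose speed comes from rasterizing Gaussians rather than marching rays. A minimal NumPy sketch of the EWA-style covariance projection at the heart of splatting is below, assuming for simplicity that the Gaussian is already expressed in camera coordinates (the world-to-camera rotation term is omitted):

```python
import numpy as np

def project_covariance(cov3d, mean_cam, focal):
    """Map a 3D Gaussian's covariance into screen space via the
    Jacobian of the pinhole projection (f*x/z, f*y/z), as in
    EWA splatting / 3D Gaussian Splatting."""
    x, y, z = mean_cam
    # Jacobian of the projection w.r.t. (x, y, z)
    J = np.array([[focal / z, 0.0, -focal * x / z**2],
                  [0.0, focal / z, -focal * y / z**2]])
    return J @ cov3d @ J.T  # 2x2 screen-space covariance

cov3d = np.eye(3) * 0.01  # small isotropic Gaussian
cov2d = project_covariance(cov3d, np.array([0.0, 0.0, 2.0]), focal=500.0)
```

The resulting 2x2 covariance defines the elliptical footprint each Gaussian covers on screen; splats are then depth-sorted and alpha-composited per tile.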
### Editing
- DreamBooth3D: Subject-Driven Text-to-3D Generation, Raj et al., ICCV 2023 | bibtex
- TECA: Text-Guided Generation and Editing of Compositional 3D Avatars, Zhang et al., arxiv 2023 | github | bibtex
- Control4D: Dynamic Portrait Editing by Learning 4D GAN from 2D Diffusion-based Editor, Shao et al., arxiv 2023 | bibtex
- Progressive3D: Progressively Local Editing for Text-to-3D Content Creation with Complex Semantic Prompts, Cheng et al., arxiv 2023 | github | bibtex
- GaussianEditor: Swift and Controllable 3D Editing with Gaussian Splatting, Chen et al., arxiv 2023 | github | bibtex
- GaussianEditor: Editing 3D Gaussians Delicately with Text Instructions, Fang et al., arxiv 2023 | bibtex
- Gaussian Grouping: Segment and Edit Anything in 3D Scenes, Ye et al., arxiv 2023 | github | bibtex
- AGAP: Learning Naturally Aggregated Appearance for Efficient 3D Editing, Cheng et al., arxiv 2023 | github | bibtex
### Conditional Control
- Control3D: Towards Controllable Text-to-3D Generation, Chen et al., ACM Multimedia 2023 | bibtex
- IPDreamer: Appearance-Controllable 3D Object Generation with Image Prompts, Zeng et al., arxiv 2023 | bibtex
- ControlDreamer: Stylized 3D Generation with Multi-View ControlNet, Oh et al., arxiv 2023 | github | bibtex
- DreamControl: Control-Based Text-to-3D Generation with 3D Self-Prior, Huang et al., arxiv 2023 | github | bibtex
- MVControl: Adding Conditional Control to Multi-view Diffusion for Controllable Text-to-3D Generation, Li et al., arxiv 2023 | github | bibtex
## Scene
- Text2Light: Zero-Shot Text-Driven HDR Panorama Generation, Chen et al., TOG 2022 | github | bibtex
- SceneScape: Text-Driven Consistent Scene Generation, Fridman et al., arxiv 2023 | github | bibtex
- Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models, Höllein et al., ICCV 2023 | github | bibtex
- Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields, Zhang et al., arxiv 2023 | github | bibtex
- Ctrl-Room: Controllable Text-to-3D Room Meshes Generation with Layout Constraints, Fang et al., arxiv 2023 | github | bibtex
- ZeroNVS: Zero-Shot 360-Degree View Synthesis from a Single Real Image, Sargent et al., arxiv 2023 | github | bibtex
- LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes, Chung et al., arxiv 2023 | bibtex
- Pyramid Diffusion for Fine 3D Large Scene Generation, Liu et al., arxiv 2023 | github | bibtex
- GraphDreamer: Compositional 3D Scene Synthesis from Scene Graphs, Gao et al., arxiv 2023 | github | bibtex
- RoomDesigner: Encoding Anchor-latents for Style-consistent and Shape-compatible Indoor Scene Generation, Zhao et al., 3DV 2024 | github | bibtex
- ControlRoom3D: Room Generation using Semantic Proxy Rooms, Schult et al., arxiv 2023 | bibtex
- AnyHome: Open-Vocabulary Generation of Structured and Textured 3D Homes, Wen et al., arxiv 2023 | bibtex
- Inpaint3D: 3D Scene Content Generation using 2D Inpainting Diffusion, Prabhu et al., arxiv 2023 | bibtex
- SceneWiz3D: Towards Text-guided 3D Scene Composition, Zhang et al., arxiv 2023 | github | bibtex
- Text2Immersion: Generative Immersive Scene with 3D Gaussians, Ouyang et al., arxiv 2023 | bibtex
- ShowRoom3D: Text to High-Quality 3D Room Generation Using 3D Priors, Mao et al., arxiv 2023 | github | bibtex
## Procedural 3D Modeling
- ProcTHOR: Large-Scale Embodied AI Using Procedural Generation, Deitke et al., NeurIPS 2022 | github | bibtex
- 3D-GPT: Procedural 3D Modeling with Large Language Models, Sun et al., arxiv 2023 | github | bibtex
## Human
- Rodin: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion, Wang et al., CVPR 2023 | bibtex
- HumanNorm: Learning Normal Diffusion Model for High-quality and Realistic 3D Human Generation, Huang et al., arxiv 2023 | github | bibtex
- HeadArtist: Text-conditioned 3D Head Generation with Self Score Distillation, Liu et al., arxiv 2023 | bibtex
- 3DGS-Avatar: Animatable Avatars via Deformable 3D Gaussian Splatting, Qian et al., arxiv 2023 | github | bibtex
## Dynamic
- TADA! Text to Animatable Digital Avatars, Liao et al., 3DV 2024 | github | bibtex
- Consistent4D: Consistent 360° Dynamic Object Generation from Monocular Video, Jiang et al., arxiv 2023 | github | bibtex
- Text-To-4D Dynamic Scene Generation, Singer et al., arxiv 2023 | bibtex
- MAS: Multi-view Ancestral Sampling for 3D motion generation using 2D diffusion, Kapon et al., arxiv 2023 | github | bibtex
- AnimatableDreamer: Text-Guided Non-rigid 3D Model Generation and Reconstruction with Canonical Score Distillation, Wang et al., arxiv 2023 | bibtex
- Virtual Pets: Animatable Animal Generation in 3D Scenes, Cheng et al., arxiv 2023 | github | bibtex
- Align Your Gaussians: Text-to-4D with Dynamic 3D Gaussians and Composed Diffusion Models, Ling et al., arxiv 2023 | bibtex
- Ponymation: Learning 3D Animal Motions from Unlabeled Online Videos, Sun et al., arxiv 2023 | bibtex
- 4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency, Yin et al., arxiv 2023 | github | bibtex
- DreamGaussian4D: Generative 4D Gaussian Splatting, Ren et al., arxiv 2023 | github | bibtex
## 3D Representation
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, Mildenhall et al., ECCV 2020 | github | bibtex
- 3D Gaussian Splatting for Real-Time Radiance Field Rendering, Kerbl et al., TOG 2023 | github | bibtex
- Uni3D: Exploring Unified 3D Representation at Scale, Zhou et al., arxiv 2023 | github | bibtex
- SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration, Duckworth et al., arxiv 2023 | bibtex
- Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D Reconstruction with Transformers, Zou et al., arxiv 2023 | bibtex
- SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes, Huang et al., arxiv 2023 | github | bibtex
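Most methods in this list optimize one of the representations above by rendering it differentiably. NeRF's volume-rendering quadrature from Mildenhall et al., C = Σᵢ Tᵢ(1 − e^(−σᵢδᵢ))cᵢ with Tᵢ = Πⱼ₍ⱼ₌₁…ᵢ₋₁₎(1 − αⱼ), can be sketched in a few lines of NumPy; this is a toy single-ray example, not the paper's implementation:

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite density/color samples along one ray
    (the discrete NeRF rendering equation)."""
    alphas = 1.0 - np.exp(-sigmas * deltas)  # per-sample opacity
    # accumulated transmittance T_i before each sample
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)  # expected color
    return rgb, weights

# toy ray with 4 samples: empty space, then green, then dense blue
sigmas = np.array([0.0, 5.0, 50.0, 0.1])
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
deltas = np.full(4, 0.1)  # spacing between samples
rgb, weights = composite_ray(sigmas, colors, deltas)
```

Because every step is differentiable, gradients from a photometric or SDS loss on `rgb` flow back into the per-sample densities and colors, which is what makes these representations optimizable.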
## 3D Native Generative Models
- GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images, Gao et al., NeurIPS 2022 | github | bibtex
- LION: Latent Point Diffusion Models for 3D Shape Generation, Zeng et al., NeurIPS 2022 | github | bibtex
- Diffusion-SDF: Conditional Generative Modeling of Signed Distance Functions, Chou et al., ICCV 2023 | github | bibtex
- SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation, Cheng et al., CVPR 2023 | github | bibtex
- DiffRF: Rendering-guided 3D Radiance Field Diffusion, Müller et al., CVPR 2023 | bibtex
- Point-E: A System for Generating 3D Point Clouds from Complex Prompts, Nichol et al., arxiv 2022 | github | bibtex
- 3DShape2VecSet: A 3D Shape Representation for Neural Fields and Generative Diffusion Models, Zhang et al., TOG 2023 | github | bibtex
- MeshDiffusion: Score-based Generative 3D Mesh Modeling, Liu et al., ICLR 2023 | github | bibtex
- 3DGen: Triplane Latent Diffusion for Textured Mesh Generation, Gupta et al., arxiv 2023 | bibtex
- 3D VADER - AutoDecoding Latent 3D Diffusion Models, Ntavelis et al., arxiv 2023 | github | bibtex
- HoloDiffusion: Training a 3D Diffusion Model using 2D Images, Karnewar et al., CVPR 2023 | github | bibtex
- HyperDiffusion: Generating Implicit Neural Fields with Weight-Space Diffusion, Erkoç et al., ICCV 2023 | github | bibtex
- Shap-E: Generating Conditional 3D Implicit Functions, Jun et al., arxiv 2023 | github | bibtex
- LAS-Diffusion: Locally Attentional SDF Diffusion for Controllable 3D Shape Generation, Zheng et al., TOG 2023 | github | bibtex
- Michelangelo: Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation, Zhao et al., arxiv 2023 | github | bibtex
- ARGUS: Visualization of AI-Assisted Task Guidance in AR, Castelo et al., arxiv 2023 | bibtex
- WildFusion: Learning 3D-Aware Latent Diffusion Models in View Space, Schwarz et al., arxiv 2023 | bibtex
- MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers, Siddiqui et al., arxiv 2023 | github | bibtex
- SPiC·E: Structural Priors in 3D Diffusion Models using Cross-Entity Attention, Sella et al., arxiv 2023 | github | bibtex
- XCube: Large-Scale 3D Generative Modeling using Sparse Voxel Hierarchies, Ren et al., arxiv 2023 | bibtex
## Material
- Generating Parametric BRDFs from Natural Language Descriptions, Memery et al., arxiv 2023 | bibtex
- MATLABER: Material-Aware Text-to-3D via LAtent BRDF auto-EncodeR, Xu et al., arxiv 2023 | github | bibtex
## Texture
- StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions, Höllein et al., CVPR 2022 | github | bibtex
- TANGO: Text-driven PhotoreAlistic aNd Robust 3D Stylization via LiGhting DecompOsition, Chen et al., NeurIPS 2022 | github | bibtex
- CLIP-Mesh: Generating textured meshes from text using pretrained image-text models, Khalid et al., SIGGRAPH Asia 2022 | github | bibtex
- Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures, Metzer et al., CVPR 2023 | github | bibtex
- TEXTure: Text-Guided Texturing of 3D Shapes, Richardson et al., SIGGRAPH 2023 | github | bibtex
- Text2Tex: Text-driven Texture Synthesis via Diffusion Models, Chen et al., ICCV 2023 | github | bibtex
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models, Cao et al., ICCV 2023 | bibtex
- MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion, Tang et al., NeurIPS 2023 | github | bibtex
- RoomDreamer: Text-Driven 3D Indoor Scene Synthesis with Coherent Geometry and Texture, Song et al., ACM Multimedia 2023 | bibtex
- 3DStyle-Diffusion: Pursuing Fine-grained Text-driven 3D Stylization with 2D Diffusion Models, Yang et al., ACM Multimedia 2023 | github | bibtex
- ITEM3D: Illumination-Aware Directional Texture Editing for 3D Models, Liu et al., arxiv 2023 | github | bibtex
- DreamSpace: Dreaming Your Room Space with Text-Driven Panoramic Texture Propagation, Yang et al., arxiv 2023 | github | bibtex
- Text-Guided Texturing by Synchronized Multi-View Diffusion, Liu et al., arxiv 2023 | bibtex
- SceneTex: High-Quality Texture Synthesis for Indoor Scenes via Diffusion Priors, Chen et al., arxiv 2023 | github | bibtex
- TeMO: Towards Text-Driven 3D Stylization for Multi-Object Meshes, Zhang et al., arxiv 2023 | bibtex
- Single Mesh Diffusion Models with Field Latents for Texture Generation, Mitchel et al., arxiv 2023 | bibtex
- Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering, Youwang et al., arxiv 2023 | github | bibtex
- Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models, Zeng et al., arxiv 2023 | github | bibtex
## Datasets
- Objaverse-XL, Deitke et al., NeurIPS 2023 | github | bibtex
## Talks
- AI 3D Generation, explained, Jia-Bin Huang
- 3D Generation, bilibili, Leo
- 3D AIGC Algorithm Trends and Industry Implementation, Ding Liang
## Implementations
- Threestudio, Yuan-Chen Guo, 2023 | bibtex
- stable-dreamfusion, Jiaxiang Tang, 2023 | bibtex
- Dream Textures, Carson Katri, 2023
## License
Awesome AIGC 3D is released under the MIT license.
contact: [email protected]