<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<meta name="generator" content="jemdoc, see http://jemdoc.jaboc.net/" />
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
<link rel="stylesheet" href="jemdoc.css" type="text/css" />
</head>
<!-- <div id="layout-content">
<div id="toptitle">
<h1 align="center">3D-aware Image Synthesis – Papers, Codes and Datasets</h1>
</div> -->
<div id="layout-content">
<!-- <div id="toptitle"> -->
<p align="center">
<h1 align="center">
A Survey on Deep Generative 3D-aware Image Synthesis
</h1>
<p align="center">
ACM Computing Surveys, 2023 <br />
<a href="https://weihaox.github.io/"><strong>Weihao Xia</strong></a> ·
<a href="http://www.homepages.ucl.ac.uk/~ucakjxu/"><strong>Jing-Hao
Xue</strong></a>
</p>
<p align="center">
<a href='https://arxiv.org/abs/2210.14267'>
<img src='https://img.shields.io/badge/Paper-Paper-green?style=flat&logo=arxiv&logoColor=green' alt='arxiv Paper'>
</a>
<a href='https://weihaox.github.io/Gen3D/' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/Project-Page-blue?style=flat&logo=Google%20chrome&logoColor=blue' alt='Project Page'>
</a>
<a href='https://dl.acm.org/doi/10.1145/3626193' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/CSUR-Paper-red?style=flat&logoColor=red' alt='CSUR Paper'>
</a>
</p>
</p>
<!-- </div> -->
<h2 id="introduction">Introduction</h2>
<p>This project lists representative papers/codes/datasets about deep
<strong><a href="https://weihaox.github.io/3D-aware-Gen">3D-aware image
synthesis</a></strong>. Besides <strong>3D-aware Generative
Models</strong> (GANs and Diffusion Models) discussed in this <a
href="https://arxiv.org/abs/2210.14267">survey</a>, this project
additionally covers novel view synthesis studies, especially those based
on <a
href="https://github.com/weihaox/awesome-neural-rendering#implicit-neural-representation-and-rendering">implicit
neural representations</a> such as NeRF.</p>
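<p>As a rough, illustrative sketch (ours, not taken from any listed
paper) of what rendering from such an implicit representation involves,
the Python snippet below implements the standard NeRF volume-rendering
quadrature: a pixel color is the transmittance-weighted sum of the
colors predicted at samples along a camera ray.</p>
<pre><code># Minimal sketch of NeRF's volume-rendering quadrature (numpy only).
# C(r) = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
# where T_i = exp(-sum_{j&lt;i} sigma_j * delta_j) is the transmittance.
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite N samples along one ray into a single RGB value.

    sigmas: (N,) non-negative volume densities at the samples.
    colors: (N, 3) RGB values predicted at the samples.
    deltas: (N,) distances between adjacent samples.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)          # per-segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]  # T_i
    weights = trans * alphas                         # contribution of sample i
    return (weights[:, None] * colors).sum(axis=0)   # final pixel color C(r)
</code></pre>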
<p>We aim to keep this list updated with the latest relevant papers and
help the community track this topic. Please feel free to join us and <a
href="https://github.com/weihaox/3D-aware-Gen/blob/main/CONTRIBUTING.md">contribute</a>
to the project, and do not hesitate to reach out if you have any
questions or suggestions.</p>
<h2 id="survey-paper">Survey paper</h2>
<ul>
<li><a href="https://arxiv.org/abs/2210.14267">A Survey on Deep
Generative 3D-aware Image Synthesis</a><br />
Weihao Xia and Jing-Hao Xue. <em>ACM Computing Surveys</em>, 2023.</li>
</ul>
<h2 id="d-control-of-2d-gans">3D Control of 2D GANs</h2>
<h3 id="d-control-latent-directions">3D Control Latent Directions</h3>
<p>For 3D control over diffusion models, similar to that over <a
href="https://github.com/weihaox/GAN-Inversion#gan-latent-space-editing">GANs</a>,
please refer to <a
href="https://github.com/weihaox/GAN-Inversion#semantic-editing-in-diffusion-latent-spaces">diffusion
latent editing</a>. A minimal sketch of such a latent-space edit follows
the list below.</p>
<ul>
<li><p><strong>SeFa: Closed-Form Factorization of Latent Semantics in
GANs.</strong><br> <em>Yujun Shen, Bolei Zhou.</em><br> CVPR 2021. [<a
href="https://arxiv.org/abs/2007.06600">Paper</a>] [<a
href="https://genforce.github.io/sefa/">Project</a>] [<a
href="https://github.com/genforce/sefa">Code</a>]</p></li>
<li><p><strong>GANSpace: Discovering Interpretable GAN
Controls.</strong><br> <em>Erik Härkönen, Aaron Hertzmann, Jaakko
Lehtinen, Sylvain Paris.</em><br> NeurIPS 2020. [<a
href="https://arxiv.org/abs/2004.02546">Paper</a>] [<a
href="https://github.com/harskish/ganspace">Code</a>]</p></li>
<li><p><strong>Interpreting the Latent Space of GANs for Semantic Face
Editing.</strong><br> <em><a href="http://shenyujun.github.io/">Yujun
Shen</a>, <a href="http://www.jasongt.com/">Jinjin Gu</a>, <a
href="http://www.ie.cuhk.edu.hk/people/xotang.shtml">Xiaoou Tang</a>, <a
href="http://bzhou.ie.cuhk.edu.hk/">Bolei Zhou</a>.</em><br> CVPR 2020.
[<a href="https://arxiv.org/abs/1907.10786">Paper</a>] [<a
href="https://genforce.github.io/interfacegan/">Project</a>] [<a
href="https://github.com/genforce/interfacegan">Code</a>]</p></li>
<li><p><strong>Unsupervised Discovery of Interpretable Directions in the
GAN Latent Space.</strong><br> <em>Andrey Voynov, Artem
Babenko.</em><br> ICML 2020. [<a
href="https://arxiv.org/abs/2002.03754">Paper</a>] [<a
href="https://github.com/anvoynov/GANLatentDiscovery">Code</a>]</p></li>
<li><p><strong>On the “steerability” of generative adversarial
networks.</strong><br> <em>Ali Jahanian, Lucy Chai, Phillip
Isola.</em><br> ICLR 2020. [<a
href="https://arxiv.org/abs/1907.07171">Paper</a>] [<a
href="https://ali-design.github.io/gan_steerability/">Project</a>] [<a
href="https://github.com/ali-design/gan_steerability">Code</a>]</p></li>
</ul>
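<p>As a minimal, hypothetical sketch of the recipe shared by the methods
above: once a latent direction associated with a 3D attribute such as
head pose has been discovered (e.g., via SeFa's closed-form factorization
or InterFaceGAN's linear boundaries), editing reduces to a straight walk
in latent space. The names <code>generator</code> and
<code>pose_dir</code> below are placeholders, not APIs from any listed
codebase.</p>
<pre><code># Minimal sketch of 3D control via a discovered latent direction.
import numpy as np

def edit_latent(w, direction, alpha):
    """Move latent code w by a signed step alpha along `direction`."""
    direction = direction / np.linalg.norm(direction)  # unit-normalize
    return w + alpha * direction

# Usage (placeholders): pose_dir is a discovered pose direction and
# generator is a pretrained GAN; sweeping alpha rotates the rendered face.
# rotated = generator(edit_latent(w, pose_dir, alpha=3.0))
</code></pre>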
<h3 id="d-parameters-as-controls">3D Parameters as Controls</h3>
<ul>
<li><p><strong>3D-FM GAN: Towards 3D-Controllable Face
Manipulation.</strong><br> <em><a
href="https://lychenyoko.github.io/">Yuchen Liu</a>, Zhixin Shu, Yijun
Li, Zhe Lin, Richard Zhang, and Sun-Yuan Kung.</em><br> ECCV 2022. [<a
href="https://arxiv.org/abs/2208.11257">Paper</a>] [<a
href="https://lychenyoko.github.io/3D-FM-GAN-Webpage/">Project</a>]</p></li>
<li><p><strong>GAN-Control: Explicitly Controllable GANs.</strong><br>
<em>Alon Shoshan, Nadav Bhonker, Igor Kviatkovsky, Gerard
Medioni.</em><br> ICCV 2021. [<a
href="https://arxiv.org/abs/2101.02477">Paper</a>] [<a
href="https://alonshoshan10.github.io/gan_control/">Project</a>] [<a
href="https://github.com/amazon-science/gan-control">Code</a>]</p></li>
<li><p><strong>CONFIG: Controllable Neural Face Image
Generation.</strong><br> <em>Marek Kowalski, Stephan J. Garbin, Virginia
Estellers, Tadas Baltrušaitis, Matthew Johnson, Jamie Shotton.</em><br>
ECCV 2020. [<a href="https://arxiv.org/abs/2005.02671">Paper</a>] [<a
href="https://github.com/microsoft/ConfigNet">Code</a>]</p></li>
<li><p><strong>DiscoFaceGAN: Disentangled and Controllable Face Image
Generation via 3D Imitative-Contrastive Learning.</strong><br> <em>Yu
Deng, Jiaolong Yang, Dong Chen, Fang Wen, Xin Tong.</em><br> CVPR 2020.
[<a href="https://arxiv.org/Paper/2004.11660.Paper">Paper</a>] [<a
href="https://github.com/microsoft/DiscoFaceGAN">Code</a>]</p></li>
<li><p><strong>StyleRig: Rigging StyleGAN for 3D Control over Portrait
Images.</strong><br> <em>Ayush Tewari, Mohamed Elgharib, Gaurav Bharaj,
Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zollhöfer,
Christian Theobalt.</em><br> CVPR 2020 (oral). [<a
href="https://arxiv.org/abs/2004.00121">Paper</a>] [<a
href="https://gvv.mpi-inf.mpg.de/projects/StyleRig/">Project</a>]</p></li>
<li><p><strong>PIE: Portrait Image Embedding for Semantic
Control.</strong><br> <em><a
href="http://people.mpi-inf.mpg.de/~atewari/">Ayush Tewari</a>, Mohamed
Elgharib, Mallikarjun B R., Florian Bernard, Hans-Peter Seidel, Patrick
Pérez, Michael Zollhöfer, Christian Theobalt.</em><br> TOG (SIGGRAPH
Asia) 2020. [<a
href="http://gvv.mpi-inf.mpg.de/projects/PIE/data/paper.Paper">Paper</a>]
[<a href="http://gvv.mpi-inf.mpg.de/projects/PIE/">Project</a>]</p></li>
</ul>
<h3 id="d-prior-knowledge-as-constraints">3D Prior Knowledge as
Constraints</h3>
<ul>
<li><p><strong>3D-Aware Indoor Scene Synthesis with Depth
Priors.</strong><br> <em>Zifan Shi, Yujun Shen, Jiapeng Zhu, Dit-Yan
Yeung, Qifeng Chen.</em><br> ECCV 2022 (oral). [<a
href="https://arxiv.org/abs/2202.08553">Paper</a>] [<a
href="https://vivianszf.github.io/depthgan/">Project</a>] [<a
href="https://github.com/vivianszf/depthgan">Code</a>]</p></li>
<li><p><strong>NGP: Towards a Neural Graphics Pipeline for Controllable
Image Generation.</strong><br> <em>Xuelin Chen, Daniel Cohen-Or, Baoquan
Chen, Niloy J. Mitra.</em><br> Eurographics 2021. [<a
href="https://arxiv.org/abs/2006.10569">Paper</a>] [<a
href="http://geometry.cs.ucl.ac.uk/projects/2021/ngp">Code</a>]</p></li>
<li><p><strong>Lifting 2D StyleGAN for 3D-Aware Face
Generation.</strong><br> <em><a
href="https://seasonsh.github.io/">Yichun Shi</a>, Divyansh Aggarwal, <a
href="http://www.cse.msu.edu/~jain/">Anil K. Jain</a>.</em><br> CVPR
2021. [<a href="https://arxiv.org/abs/2011.13126">Paper</a>] [<a
href="https://github.com/seasonSH/LiftedGAN">Code</a>]</p></li>
<li><p><strong>RGBD-GAN: Unsupervised 3D Representation Learning From
Natural Image Datasets via RGBD Image Synthesis.</strong><br>
<em>Atsuhiro Noguchi, Tatsuya Harada.</em><br> ICLR 2020. [<a
href="https://arxiv.org/abs/1909.12573">Paper</a>] [<a
href="https://github.com/nogu-atsu/RGBD-GAN">Code</a>]</p></li>
<li><p><strong>Visual Object Networks: Image Generation with
Disentangled 3D Representation.</strong><br> <em>Jun-Yan Zhu, Zhoutong
Zhang, Chengkai Zhang, Jiajun Wu, Antonio Torralba, Joshua B. Tenenbaum,
William T. Freeman.</em><br> NeurIPS 2018. [<a
href="https://arxiv.org/abs/1812.02725">Paper</a>] [<a
href="http://von.csail.mit.edu/">Project</a>] [<a
href="https://github.com/junyanz/VON">Code</a>]</p></li>
<li><p><strong>3D Shape Induction from 2D Views of Multiple
Objects.</strong><br> <em>Matheus Gadelha, Subhransu Maji, Rui
Wang.</em><br> 3DV 2017. [<a
href="https://arxiv.org/abs/1612.05872">Paper</a>] [<a
href="http://mgadelha.me/prgan/index.html">Project</a>] [<a
href="https://github.com/matheusgadelha/PrGAN">Code</a>]</p></li>
<li><p><strong>Generative Image Modeling using Style and Structure
Adversarial Networks.</strong><br> <em>Xiaolong Wang, Abhinav
Gupta.</em><br> ECCV 2016. [<a
href="https://arxiv.org/abs/1603.05631">Paper</a>] [<a
href="https://github.com/facebook/eyescream">Project</a>] [<a
href="https://github.com/xiaolonw/ss-gan">Code</a>]</p></li>
</ul>
<h2 id="d-aware-gans-for-a-single-image-category">3D-aware GANs for a
Single Image Category</h2>
<h3 id="unconditional-3d-generative-models">Unconditional 3D Generative
Models</h3>
<ul>
<li><p><strong>BallGAN: 3D-aware Image Synthesis with a Spherical
Background.</strong><br> <em>Minjung Shin, Yunji Seo, Jeongmin Bae,
Young Sun Choi, Hyunsu Kim, Hyeran Byun, Youngjung Uh.</em><br> ICCV
2023. [<a href="https://arxiv.org/abs/2301.09091">Paper</a>] [<a
href="https://minjung-s.github.io/ballgan/">Project</a>] [<a
href="https://github.com/minjung-s/BallGAN">Code</a>]</p></li>
<li><p><strong>Mimic3D: Thriving 3D-Aware GANs via 3D-to-2D
Imitation.</strong><br> <em>Xingyu Chen, Yu Deng, Baoyuan Wang.</em><br>
ICCV 2023. [<a href="https://arxiv.org/abs/2303.09036">Paper</a>] [<a
href="https://seanchenxy.github.io/Mimic3DWeb/">Project</a>]</p></li>
<li><p><strong>GRAM-HD: 3D-Consistent Image Generation at High
Resolution with Generative Radiance Manifolds.</strong><br> <em>Jianfeng
Xiang, Jiaolong Yang, Yu Deng, Xin Tong.</em><br> ICCV 2023. [<a
href="https://arxiv.org/abs/2206.07255">Paper</a>] [<a
href="https://jeffreyxiang.github.io/GRAM-HD/">Project</a>]</p></li>
<li><p><strong>Live 3D Portrait: Real-Time Radiance Fields for
Single-Image Portrait View Synthesis.</strong><br> <em>Alex Trevithick,
Matthew Chan, Michael Stengel, Eric R. Chan, Chao Liu, Zhiding Yu, Sameh
Khamis, Manmohan Chandraker, Ravi Ramamoorthi, Koki Nagano.</em><br> TOG
(SIGGRAPH) 2023. [<a
href="https://research.nvidia.com/labs/nxp/lp3d//media/paper.Paper">Paper</a>]
[<a
href="https://research.nvidia.com/labs/nxp/lp3d//">Project</a>]</p></li>
<li><p><strong>VoxGRAF: Fast 3D-Aware Image Synthesis with Sparse Voxel
Grids.</strong><br> <em>Katja Schwarz, Axel Sauer, Michael Niemeyer,
Yiyi Liao, Andreas Geiger.</em><br> NeurIPS 2022. [<a
href="https://arxiv.org/abs/2206.07695">Paper</a>] [<a
href="https://katjaschwarz.github.io/voxgraf">Project</a>] [<a
href="https://github.com/autonomousvision/voxgraf">Code</a>]</p></li>
<li><p><strong>GeoD: Improving 3D-aware Image Synthesis with A
Geometry-aware Discriminator.</strong><br> <em>Zifan Shi, Yinghao Xu,
Yujun Shen, Deli Zhao, Qifeng Chen, Dit-Yan Yeung.</em><br> NeurIPS
2022. [<a href="https://arxiv.org/abs/2209.15637">Paper</a>] [<a
href="https://vivianszf.github.io/geod">Project</a>]</p></li>
<li><p><strong>EpiGRAF: Rethinking training of 3D GANs.</strong><br>
<em><a href="https://universome.github.io/">Ivan Skorokhodov</a>, <a
href="http://www.stulyakov.com/">Sergey Tulyakov</a>, <a
href="https://sites.google.com/view/yiqun-wang/home">Yiqun Wang</a>, <a
href="https://peterwonka.net/">Peter Wonka</a>.</em><br> NeurIPS 2022.
[<a href="https://arxiv.org/abs/2206.10535">Paper</a>] [<a
href="https://universome.github.io/epigraf">Project</a>] [<a
href="https://github.com/universome/epigraf">Code</a>]</p></li>
<li><p><strong>Injecting 3D Perception of Controllable NeRF-GAN into
StyleGAN for Editable Portrait Image Synthesis.</strong><br>
<em>Jeong-gi Kwak, Yuanming Li, Dongsik Yoon, Donghyeon Kim, David Han,
Hanseok Ko.</em><br> ECCV 2022. [<a
href="https://arxiv.org/abs/2207.10257">Paper</a>] [<a
href="https://jgkwak95.github.io/surfgan/">Project</a>] [<a
href="https://github.com/jgkwak95/SURF-GAN">Code</a>]</p></li>
<li><p><strong>Generative Multiplane Images: Making a 2D GAN
3D-Aware.</strong><br> <em><a href="https://xiaoming-zhao.com/">Xiaoming
Zhao</a>, <a href="https://fangchangma.github.io/">Fangchang Ma</a>, <a
href="https://scholar.google.com/citations?user=bckYvFkAAAAJ&hl=en">David
Güera</a>, <a href="https://jrenzhile.com/">Zhile Ren</a>, <a
href="https://www.alexander-schwing.de/">Alexander G. Schwing</a>, <a
href="https://www.colburn.org/">Alex Colburn</a>.</em><br> ECCV 2022.
[<a href="https://arxiv.org/abs/2207.10642">Paper</a>] [<a
href="https://xiaoming-zhao.github.io/projects/gmpi/">Project</a>] [<a
href="https://github.com/apple/ml-gmpi">Code</a>]</p></li>
<li><p><strong>EG3D: Efficient Geometry-aware 3D Generative Adversarial
Networks.</strong><br> <em><a
href="https://ericryanchan.github.io/">Eric R. Chan</a>, <a
href="https://connorzlin.com/">Connor Z. Lin</a>, <a
href="https://matthew-a-chan.github.io/">Matthew A. Chan</a>, <a
href="https://luminohope.org/">Koki Nagano</a>, <a
href="https://cs.stanford.edu/~bxpan/">Boxiao Pan</a>, <a
href="https://research.nvidia.com/person/shalini-gupta">Shalini De
Mello</a>, <a href="https://oraziogallo.github.io/">Orazio Gallo</a>, <a
href="https://geometry.stanford.edu/member/guibas/">Leonidas Guibas</a>,
<a href="https://research.nvidia.com/person/jonathan-tremblay">Jonathan
Tremblay</a>, <a href="https://www.samehkhamis.com/">Sameh Khamis</a>,
<a href="https://research.nvidia.com/person/tero-karras">Tero
Karras</a>, <a href="https://stanford.edu/~gordonwz/">Gordon
Wetzstein</a>.</em><br> CVPR 2022. [<a
href="https://arxiv.org/abs/2112.07945">Paper</a>] [<a
href="https://matthew-a-chan.github.io/EG3D">Project</a>] [<a
href="https://github.com/NVlabs/eg3d">Code</a>]</p></li>
<li><p><strong>StylizedNeRF: Consistent 3D Scene Stylization as Stylized
NeRF via 2D-3D Mutual Learning.</strong><br> <em>Yi-Hua Huang, Yue He,
Yu-Jie Yuan, Yu-Kun Lai, Lin Gao.</em><br> CVPR 2022. [<a
href="https://arxiv.org/abs/2205.12183">Paper</a>]</p></li>
<li><p><strong>Multi-View Consistent Generative Adversarial Networks for
3D-aware Image Synthesis.</strong><br> <em>Xuanmeng Zhang, Zhedong
Zheng, Daiheng Gao, Bang Zhang, Pan Pan, Yi Yang.</em><br> CVPR 2022.
[<a href="https://arxiv.org/abs/2204.06307">Paper</a>] [<a
href="https://github.com/Xuanmeng-Zhang/MVCGAN">Code</a>]</p></li>
<li><p><strong>Disentangled3D: Learning a 3D Generative Model with
Disentangled Geometry and Appearance from Monocular Images.</strong><br>
<em><a href="https://ayushtewari.com/">Ayush Tewari</a>, Mallikarjun B
R, Xingang Pan, Ohad Fried, Maneesh Agrawala, Christian
Theobalt.</em><br> CVPR 2022. [<a
href="https://people.mpi-inf.mpg.de/~atewari/projects/D3D/data/paper.Paper">Paper</a>]
[<a
href="https://people.mpi-inf.mpg.de/~atewari/projects/D3D/">Project</a>]</p></li>
<li><p><strong>GIRAFFE HD: A High-Resolution 3D-aware Generative
Model.</strong><br> <em>Yang Xue, Yuheng Li, Krishna Kumar Singh, Yong
Jae Lee.</em><br> CVPR 2022. [<a
href="https://arxiv.org/abs/2203.14954">Paper</a>] [<a
href="https://github.com/AustinXY/GIRAFFEHD">Code</a>]</p></li>
<li><p><strong>StyleSDF: High-Resolution 3D-Consistent Image and
Geometry Generation.</strong><br> <em><a
href="https://homes.cs.washington.edu/~royorel/">Roy Or-El</a>, <a
href="https://roxanneluo.github.io/">Xuan Luo</a>, Mengyi Shan, Eli
Shechtman, Jeong Joon Park, Ira Kemelmacher-Shlizerman.</em><br> CVPR
2022. [<a href="https://arxiv.org/abs/2112.11427">Paper</a>] [<a
href="https://stylesdf.github.io/">Project</a>] [<a
href="https://github.com/royorel/StyleSDF">Code</a>]</p></li>
<li><p><strong>FENeRF: Face Editing in Neural Radiance
Fields.</strong><br> <em>Jingxiang Sun, Xuan Wang, Yong Zhang, Xiaoyu
Li, Qi Zhang, Yebin Liu, Jue Wang.</em><br> CVPR 2022. [<a
href="https://arxiv.org/abs/2111.15490">Paper</a>] [<a
href="https://github.com/MrTornado24/FENeRF">Code</a>]</p></li>
<li><p><strong>LOLNeRF: Learn from One Look.</strong><br> <em><a
href="https://vision.cs.ubc.ca/team/">Daniel Rebain</a>, Mark Matthews,
Kwang Moo Yi, Dmitry Lagun, Andrea Tagliasacchi.</em><br> CVPR 2022. [<a
href="https://arxiv.org/abs/2111.09996">Paper</a>] [<a
href="https://ubc-vision.github.io/lolnerf/">Project</a>]</p></li>
<li><p><strong>GRAM: Generative Radiance Manifolds for 3D-Aware Image
Generation.</strong><br> <em><a href="https://yudeng.github.io/">Yu
Deng</a>, <a href="https://jlyang.org/">Jiaolong Yang</a>, <a
href="http://www.xtong.info/">Jianfeng Xiang</a>, <a href="">Xin
Tong</a>.</em><br> CVPR 2022. [<a
href="https://arxiv.org/abs/2112.08867">Paper</a>] [<a
href="https://yudeng.github.io/GRAM/">Project</a>] [<a
href="https://yudeng.github.io/GRAM/">Code</a>]</p></li>
<li><p><strong>VolumeGAN: 3D-aware Image Synthesis via Learning
Structural and Textural Representations.</strong><br> <em>Yinghao Xu,
Sida Peng, Ceyuan Yang, Yujun Shen, Bolei Zhou.</em><br> CVPR 2022. [<a
href="https://arxiv.org/abs/2112.10759">Paper</a>] [<a
href="https://genforce.github.io/volumegan/">Project</a>] [<a
href="https://github.com/genforce/VolumeGAN">Code</a>]</p></li>
<li><p><strong>Generating Videos with Dynamics-aware Implicit Generative
Adversarial Networks.</strong><br> <em>Sihyun Yu, Jihoon Tack, Sangwoo
Mo, Hyunsu Kim, Junho Kim, Jung-Woo Ha, Jinwoo Shin.</em><br> ICLR 2022.
[<a href="https://openreview.net/forum?id=Czsdv-S4-w9">Paper</a>] [<a
href="https://sihyun-yu.github.io/digan/">Project</a>] [<a
href="https://github.com/sihyun-yu/digan">Code</a>]</p></li>
<li><p><strong>StyleNeRF: A Style-based 3D-Aware Generator for
High-resolution Image Synthesis.</strong><br> <em><a
href="http://jiataogu.me/">Jiatao Gu</a>, <a
href="https://lingjie0206.github.io/">Lingjie Liu</a>, <a
href="https://totoro97.github.io/about.html">Peng Wang</a>, <a
href="http://people.mpi-inf.mpg.de/~theobalt/">Christian
Theobalt</a>.</em><br> ICLR 2022. [<a
href="https://arxiv.org/abs/2110.08985">Paper</a>] [<a
href="http://jiataogu.me/style_nerf/">Project</a>]</p></li>
<li><p><strong>MOST-GAN: 3D Morphable StyleGAN for Disentangled Face
Image Manipulation.</strong><br> <em>Safa C. Medin, Bernhard Egger,
Anoop Cherian, Ye Wang, Joshua B. Tenenbaum, Xiaoming Liu, Tim K.
Marks.</em><br> AAAI 2022. [<a
href="https://arxiv.org/abs/2111.01048">Paper</a>]</p></li>
<li><p><strong>A Shading-Guided Generative Implicit Model for
Shape-Accurate 3D-Aware Image Synthesis.</strong><br> <em>Xingang Pan,
Xudong Xu, Chen Change Loy, Christian Theobalt, Bo Dai.</em><br> NeurIPS
2021. [<a href="https://arxiv.org/abs/2110.15678">Paper</a>]</p></li>
<li><p><strong>pi-GAN: Periodic Implicit Generative Adversarial Networks
for 3D-Aware Image Synthesis.</strong><br> <em><a
href="https://ericryanchan.github.io/">Eric R. Chan</a>, <a
href="https://marcoamonteiro.github.io/pi-GAN-website/">Marco
Monteiro</a>, <a href="https://kellnhofer.xyz/">Petr Kellnhofer</a>, <a
href="https://jiajunwu.com/">Jiajun Wu</a>, <a
href="https://stanford.edu/~gordonwz/">Gordon Wetzstein</a>.</em><br>
CVPR 2021. [<a href="https://arxiv.org/abs/2012.00926">Paper</a>] [<a
href="https://marcoamonteiro.github.io/pi-GAN-website/">Project</a>] [<a
href="https://github.com/lucidrains/pi-GAN-pytorch">Code</a>]</p></li>
<li><p><strong>GIRAFFE: Representing Scenes as Compositional Generative
Neural Feature Fields.</strong><br> <em>Michael Niemeyer, Andreas
Geiger.</em><br> CVPR 2021 (Best Paper). [<a
href="https://arxiv.org/abs/2011.12100">Paper</a>] [<a
href="https://m-niemeyer.github.io/project-pages/giraffe/index.html">Project</a>]
[<a
href="https://github.com/autonomousvision/giraffe">Code</a>]</p></li>
<li><p><strong>BlockGAN: Learning 3D Object-aware Scene Representations
from Unlabelled Images.</strong><br> <em>Thu Nguyen-Phuoc, Christian
Richardt, Long Mai, Yong-Liang Yang, Niloy Mitra.</em><br> NeurIPS 2020.
[<a href="https://arxiv.org/abs/2002.08988">Paper</a>] [<a
href="https://www.monkeyoverflow.com/#/blockgan/">Project</a>] [<a
href="https://github.com/thunguyenphuoc/BlockGAN">Code</a>]</p></li>
<li><p><strong>GRAF: Generative Radiance Fields for 3D-Aware Image
Synthesis.</strong><br> <em><a
href="https://katjaschwarz.github.io/">Katja Schwarz</a>, <a
href="https://yiyiliao.github.io/">Yiyi Liao</a>, <a
href="https://m-niemeyer.github.io/">Michael Niemeyer</a>, <a
href="http://www.cvlibs.net/">Andreas Geiger</a>.</em><br> NeurIPS 2020.
[<a href="https://arxiv.org/abs/2007.02442">Paper</a>] [<a
href="https://avg.is.tuebingen.mpg.de/publications/schwarz2020neurips">Project</a>]
[<a href="https://github.com/autonomousvision/graf">Code</a>]</p></li>
<li><p><strong>HoloGAN: Unsupervised learning of 3D representations from
natural images.</strong><br> <em><a
href="https://monkeyoverflow.com/about/">Thu Nguyen-Phuoc</a>, <a
href="https://lambdalabs.com/blog/author/chuan/">Chuan Li</a>, Lucas
Theis, <a href="https://richardt.name/">Christian Richardt</a>, <a
href="http://yongliangyang.net/">Yong-liang Yang</a>.</em><br> ICCV
2019. [<a href="https://arxiv.org/abs/1904.01326">Paper</a>] [<a
href="https://www.monkeyoverflow.com/hologan-unsupervised-learning-of-3d-representations-from-natural-images/">Project</a>]
[<a href="https://github.com/thunguyenphuoc/HoloGAN">Code</a>]</p></li>
</ul>
<h3 id="conditional-3d-generative-models">Conditional 3D Generative
Models</h3>
<ul>
<li><p><strong>3D-aware Conditional Image Synthesis.</strong><br> <em><a
href="https://dunbar12138.github.io/">Kangle Deng</a>, <a
href="https://gengshan-y.github.io/">Gengshan Yang</a>, <a
href="https://www.cs.cmu.edu/~deva/">Deva Ramanan</a>, <a
href="https://www.cs.cmu.edu/~junyanz/">Jun-Yan Zhu</a>.</em><br> CVPR
2023. [<a href="https://arxiv.org/abs/2302.08509">Paper</a>] [<a
href="https://www.cs.cmu.edu/~pix2pix3D/">Project</a>] [<a
href="https://github.com/dunbar12138/pix2pix3D">Code</a>]</p></li>
<li><p><strong>Sem2NeRF: Converting Single-View Semantic Masks to Neural
Radiance Fields.</strong><br> <em><a
href="https://donydchen.github.io/">Yuedong Chen</a>, <a
href="https://wuqianyi.top/">Qianyi Wu</a>, <a
href="https://www.chuanxiaz.com/">Chuanxia Zheng</a>, <a
href="https://personal.ntu.edu.sg/astjcham/">Tat-Jen Cham</a>, <a
href="https://jianfei-cai.github.io/">Jianfei Cai</a>.</em><br> ECCV
2022. [<a href="https://arxiv.org/abs/2203.10821">Paper</a>] [<a
href="https://donydchen.github.io/sem2nerf">Project</a>] [<a
href="https://github.com/donydchen/sem2nerf">Code</a>]</p></li>
<li><p><strong>IDE-3D: Interactive Disentangled Editing for
High-Resolution 3D-aware Portrait Synthesis.</strong><br> <em><a
href="https://github.com/MrTornado24">Jingxiang Sun</a>, <a
href="https://mrtornado24.github.io/IDE-3D/">Xuan Wang</a>, <a
href="https://seasonsh.github.io/">Yichun Shi</a>, <a
href="https://lizhenwangt.github.io/">Lizhen Wang</a>, <a
href="https://juewang725.github.io/">Jue Wang</a>, <a
href="https://liuyebin.com/">Yebin Liu</a>.</em><br> SIGGRAPH Asia 2022.
[<a href="https://arxiv.org/abs/2205.15517">Paper</a>] [<a
href="https://mrtornado24.github.io/IDE-3D/">Project</a>] [<a
href="https://github.com/MrTornado24/IDE-3D">Code</a>]</p></li>
<li><p><strong>NeRFFaceEditing: Disentangled Face Editing in Neural
Radiance Fields.</strong><br> <em>Kaiwen Jiang, <a
href="http://people.geometrylearning.com/csy/">Shu-Yu Chen</a>, <a
href="http://people.geometrylearning.com/lfl/">Feng-Lin Liu</a>, <a
href="http://sweb.cityu.edu.hk/hongbofu/">Hongbo Fu</a>, <a
href="http://www.geometrylearning.com/cn/">Lin Gao</a>.</em><br>
SIGGRAPH Asia 2022. [<a
href="https://arxiv.org/abs/2211.07968">Paper</a>] [<a
href="http://geometrylearning.com/NeRFFaceEditing/">Project</a>]</p></li>
<li><p><strong>GANcraft: Unsupervised 3D Neural Rendering of Minecraft
Worlds.</strong><br> <em>Zekun Hao, Arun Mallya, Serge Belongie, Ming-Yu
Liu.</em><br> ICCV 2021. [<a
href="https://arxiv.org/abs/2104.07659">Paper</a>] [<a
href="https://nvlabs.github.io/GANcraft/">Project</a>] [<a
href="https://github.com/NVlabs/imaginaire">Code</a>]</p></li>
</ul>
<h2 id="d-aware-diffusion-models-for-a-single-image-category">3D-aware
Diffusion Models for a Single Image Category</h2>
<ul>
<li><p><strong>Generating Images with 3D Annotations Using Diffusion
Models.</strong><br> <em><a href="https://wufeim.github.io/">Wufei
Ma</a>, <a href="https://qihao067.github.io/">Qihao Liu</a>, <a
href="https://jiahaoplus.github.io/">Jiahao Wang</a>, Angtian Wang,
Xiaoding Yuan, Yi Zhang, Zihao Xiao, Guofeng Zhang, Beijia Lu, Ruxiao
Duan, Yongrui Qi, <a href="https://adamkortylewski.com/">Adam
Kortylewski</a>, <a href="https://www.cs.jhu.edu/~yyliu/">Yaoyao
Liu</a>, <a href="https://www.cs.jhu.edu/~ayuille/">Alan
Yuille</a>.</em><br> ICLR 2024. [<a
href="https://arxiv.org/abs/2306.08103">Paper</a>] [<a
href="https://ccvl.jhu.edu/3D-DST/">Project</a>] [<a
href="https://github.com/wufeim/DST3D">Code</a>]</p></li>
<li><p><strong>GeNVS: Generative Novel View Synthesis with 3D-Aware
Diffusion Models.</strong><br> <em>Eric R. Chan, Koki Nagano, Matthew A.
Chan, Alexander W. Bergman, Jeong Joon Park, Axel Levy, Miika Aittala,
Shalini De Mello, Tero Karras, Gordon Wetzstein.</em><br> ICCV 2023. [<a
href="https://arxiv.org/abs/2304.02602">Paper</a>] [<a
href="https://nvlabs.github.io/genvs/">Project</a>] [<a
href="https://github.com/NVlabs/genvs">Code</a>]</p></li>
<li><p><strong>Single-Stage Diffusion NeRF: A Unified Approach to 3D
Generation and Reconstruction.</strong><br> <em>Hansheng Chen, Jiatao
Gu, Anpei Chen, Wei Tian, Zhuowen Tu, Lingjie Liu, Hao Su.</em><br> ICCV
2023. [<a href="http://arxiv.org/abs/2304.06714">PDF</a>] [<a
href="https://lakonik.github.io/ssdnerf">Project</a>] [<a
href="https://github.com/Lakonik/SSDNeRF">Code</a>]</p></li>
<li><p><strong>3D-aware Image Generation using 2D Diffusion
Models.</strong><br> <em><a
href="https://jeffreyxiang.github.io/">Jianfeng Xiang</a>, Jiaolong
Yang, Binbin Huang, Xin Tong.</em><br> ICCV 2023. [<a
href="https://arxiv.org/abs/2303.17905">Paper</a>] [<a
href="https://jeffreyxiang.github.io/ivid/">Project</a>] [<a
href="https://github.com/JeffreyXiang/ivid">Code</a>]</p></li>
<li><p><strong>HoloFusion: Towards Photo-realistic 3D Generative
Modeling.</strong><br> <em>Animesh Karnewar, Niloy J. Mitra, Andrea
Vedaldi, David Novotny.</em><br> ICCV 2023. [<a
href="http://arxiv.org/abs/2308.14244">Paper</a>] [<a
href="https://holodiffusion.github.io/holofusion">Project</a>]</p></li>
<li><p><strong>HyperDiffusion: Generating Implicit Neural Fields with
Weight-Space Diffusion.</strong><br> <em><a
href="https://ziyaerkoc.com/">Ziya Erkoç</a>, <a
href="https://fangchangma.github.io/">Fangchang Ma</a>, <a
href="http://shanqi.github.io/">Qi Shan</a>, <a
href="https://niessnerlab.org/members/matthias_niessner/profile.html">Matthias
Nießner</a>, <a href="https://www.3dunderstanding.org/team.html">Angela
Dai</a>.</em><br> ICCV 2023. [<a
href="https://arxiv.org/abs/2303.17015">Paper</a>] [<a
href="https://ziyaerkoc.com/hyperdiffusion/">Project</a>]</p></li>
<li><p><strong>LatentSwap3D: Semantic Edits on 3D Image
GANs.</strong><br> <em>Enis Simsar, Alessio Tonioni, Evin Pınar Örnek,
Federico Tombari.</em><br> ICCV 2023 Workshop on AI3DCC. [<a
href="https://arxiv.org/abs/2212.01381">Paper</a>]</p></li>
<li><p><strong>DiffusioNeRF: Regularizing Neural Radiance Fields with
Denoising Diffusion Models.</strong><br> <em><a
href="https://scholar.google.com/citations?user=ASP-uu4AAAAJ&hl=en&oi=ao">Jamie
Wynn</a> and <a
href="https://scholar.google.com/citations?user=ELFm0CgAAAAJ&hl=en&oi=ao">Daniyar
Turmukhambetov</a>.</em><br> CVPR 2023. [<a
href="https://arxiv.org/abs/2302.12231">Paper</a>] [<a
href="https://storage.googleapis.com/niantic-lon-static/research/diffusionerf/diffusionerf_supplemental.Paper">Supplementary
material</a>] [<a
href="https://github.com/nianticlabs/diffusionerf">COde</a>]</p></li>
<li><p><strong>NeuralField-LDM: Scene Generation with Hierarchical
Latent Diffusion Models.</strong><br> <em>Seung Wook Kim, Bradley Brown,
Kangxue Yin, Karsten Kreis, Katja Schwarz, Daiqing Li, Robin Rombach,
Antonio Torralba, Sanja Fidler.</em><br> CVPR 2023. [<a
href="https://arxiv.org/abs/2304.09787">Paper</a>] [<a
href="https://research.nvidia.com/labs/toronto-ai/NFLDM/">Project</a>]</p></li>
<li><p><strong>Rodin: A Generative Model for Sculpting 3D Digital
Avatars Using Diffusion.</strong><br> <em>Tengfei Wang, Bo Zhang, Ting
Zhang, Shuyang Gu, Jianmin Bao, Tadas Baltrusaitis, Jingjing Shen, Dong
Chen, Fang Wen, Qifeng Chen, Baining Guo.</em><br> CVPR 2023. [<a
href="https://arxiv.org/abs/2212.06135">Paper</a>] [<a
href="https://3d-avatar-diffusion.microsoft.com/">Project</a>]</p></li>
<li><p><strong>DiffRF: Rendering-guided 3D Radiance Field
Diffusion.</strong><br> <em><a
href="https://niessnerlab.org/members/norman_mueller/profile.html">Norman
Müller</a>, <a
href="https://niessnerlab.org/members/yawar_siddiqui/profile.html">Yawar
Siddiqui</a>, <a
href="https://scholar.google.com/citations?user=vW1gaVEAAAAJ">Lorenzo
Porzi</a>, <a
href="https://scholar.google.com/citations?hl=de&user=484sccEAAAAJ">Samuel
Rota Bulò</a>, <a
href="https://scholar.google.com/citations?user=CxbDDRMAAAAJ&hl=en">Peter
Kontschieder</a>, <a
href="https://niessnerlab.org/members/matthias_niessner/profile.html">Matthias
Nießner</a>.</em><br> CVPR 2023 (Highlight). [<a
href="https://arxiv.org/abs/2212.01206">Paper</a>] [<a
href="https://sirwyver.github.io/DiffRF/">Project</a>]</p></li>
<li><p><strong>RenderDiffusion: Image Diffusion for 3D Reconstruction,
Inpainting and Generation.</strong><br> <em>Titas Anciukevičius, Zexiang
Xu, Matthew Fisher, Paul Henderson, Hakan Bilen, Niloy J. Mitra, Paul
Guerrero.</em><br> CVPR 2023. [<a
href="https://arxiv.org/abs/2211.09869">Paper</a>] [<a
href="https://holodiffusion.github.io/">Project</a>] [<a
href="https://github.com/Anciukevicius/RenderDiffusion">Code</a>]</p></li>
<li><p><strong>SparseFusion: Distilling View-conditioned Diffusion for
3D Reconstruction.</strong><br> <em><a
href="https://www.zhiz.dev/">Zhizhuo Zhou</a>, <a
href="https://shubhtuls.github.io/">Shubham Tulsiani</a>.</em><br> CVPR
2023. [<a href="https://arxiv.org/abs/2212.00792">Paper</a>] [<a
href="https://sparsefusion.github.io/">Project</a>] [<a
href="https://github.com/zhizdev/sparsefusion">Code</a>]</p></li>
<li><p><strong>HoloDiffusion: Training a 3D Diffusion Model using 2D
Images.</strong><br> <em>Animesh Karnewar, Andrea Vedaldi, David
Novotny, Niloy Mitra.</em><br> CVPR 2023. [<a
href="https://arxiv.org/abs/2303.16509">Paper</a>] [<a
href="https://3d-diffusion.github.io/">Project</a>]</p></li>
<li><p><strong>3DiM: Novel View Synthesis with Diffusion
Models.</strong><br> <em>Daniel Watson, William Chan, Ricardo
Martin-Brualla, Jonathan Ho, Andrea Tagliasacchi, Mohammad
Norouzi.</em><br> ICLR 2023. [<a
href="https://arxiv.org/abs/2210.04628">Paper</a>] [<a
href="https://3d-diffusion.github.io/">Project</a>]</p></li>
<li><p><strong>3DShape2VecSet: A 3D Shape Representation for Neural
Fields and Generative Diffusion Models.</strong><br> <em><a
href="https://1zb.github.io/">Biao Zhang</a>, <a
href="https://tangjiapeng.github.io/">Jiapeng Tang</a>, <a
href="https://www.niessnerlab.org/">Matthias Niessner</a>, <a
href="http://peterwonka.net/">Peter Wonka</a>.</em><br> SIGGRAPH 2023.
[<a href="https://arxiv.org/abs/2301.11445">Paper</a>] [<a
href="https://1zb.github.io/3DShape2VecSet/">Project</a>] [<a
href="https://github.com/1zb/3DShape2VecSet">Code</a>]</p></li>
<li><p><strong>GAUDI: A Neural Architect for Immersive 3D Scene
Generation.</strong><br> <em>Miguel Angel Bautista, Pengsheng Guo,
Samira Abnar, Walter Talbott, Alexander Toshev, Zhuoyuan Chen, Laurent
Dinh, Shuangfei Zhai, Hanlin Goh, Daniel Ulbricht, Afshin Dehghan, Josh
Susskind.</em><br> NeurIPS 2022. [<a
href="https://arxiv.org/abs/2212.01381">Paper</a>] [<a
href="https://github.com/apple/ml-gaudi">Project</a>]</p></li>
<li><p><strong>Learning a Diffusion Prior for NeRFs.</strong><br>
<em>Guandao Yang, Abhijit Kundu, Leonidas J. Guibas, Jonathan T. Barron,
Ben Poole.</em><br> arxiv 2023. [<a
href="https://arxiv.org/abs/2304.14473">Paper</a>]</p></li>
<li><p><strong>3D-LDM: Neural Implicit 3D Shape Generation with Latent
Diffusion Models.</strong><br> <em>Gimin Nam, Mariem Khlifi, Andrew
Rodriguez, Alberto Tono, Linqi Zhou, Paul Guerrero.</em><br> arxiv 2022.
[<a href="https://arxiv.org/abs/2212.00842">Paper</a>]</p></li>
</ul>
<h2 id="d-aware-generative-models-on-imagenet">3D-Aware Generative
Models on ImageNet</h2>
<ul>
<li><p><strong>VQ3D: Learning a 3D-Aware Generative Model on
ImageNet.</strong><br> <em>Kyle Sargent, Jing Yu Koh, Han Zhang, Huiwen
Chang, Charles Herrmann, Pratul Srinivasan, Jiajun Wu, Deqing
Sun.</em><br> ICCV 2023 (Oral). [<a
href="https://arxiv.org/abs/2302.06833">Paper</a>] [<a
href="http://kylesargent.github.io/vq3d">Project</a>]</p></li>
<li><p><strong>3D Generation on ImageNet.</strong><br> <em>Ivan
Skorokhodov, Aliaksandr Siarohin, Yinghao Xu, Jian Ren, Hsin-Ying Lee,
Peter Wonka, Sergey Tulyakov.</em><br> ICLR 2023 (Oral). [<a
href="https://openreview.net/forum?id=U2WjB9xxZ9q">Paper</a>] [<a
href="https://u2wjb9xxz9q.github.io/">Project</a>] [<a
href="https://justimyhxu.github.io/pub.html">Code</a>]</p></li>
</ul>
<h2 id="d-aware-video-synthesis">3D-aware Video Synthesis</h2>
<ul>
<li><p><strong>3D-Aware Video Generation.</strong><br> <em><a
href="https://sherwinbahmani.github.io/">Sherwin Bahmani</a>, <a
href="https://jjparkcv.github.io/">Jeong Joon Park</a>, <a
href="https://paschalidoud.github.io/">Despoina Paschalidou</a>, <a
href="https://scholar.google.com/citations?user=9zJkeEMAAAAJ&hl=en/">Hao
Tang</a>, <a href="https://stanford.edu/~gordonwz/">Gordon
Wetzstein</a>, <a
href="https://geometry.stanford.edu/member/guibas/">Leonidas Guibas</a>,
<a
href="https://ee.ethz.ch/the-department/faculty/professors/person-detail.OTAyMzM=.TGlzdC80MTEsMTA1ODA0MjU5.html/">Luc
Van Gool</a>, <a
href="https://ee.ethz.ch/the-department/people-a-z/person-detail.MjAxNjc4.TGlzdC8zMjc5LC0xNjUwNTg5ODIw.html/">Radu
Timofte</a>.</em><br> TMLR 2023. [<a
href="https://arxiv.org/abs/2206.14797">Paper</a>] [<a
href="https://sherwinbahmani.github.io/3dvidgen/">Project</a>] [<a
href="https://github.com/sherwinbahmani/3dvideogeneration/">Code</a>]</p></li>
<li><p><strong>Streaming Radiance Fields for 3D Video
Synthesis.</strong><br> <em>Lingzhi Li, Zhen Shen, Zhongshu Wang, Li
Shen, Ping Tan.</em><br> NeurIPS 2022. [<a
href="https://arxiv.org/abs/2210.14831">Paper</a>] [<a
href="https://github.com/AlgoHunt/StreamRF">Code</a>]</p></li>
</ul>
<h2 id="inr-based-3d-novel-view-synthesis">INR-based 3D Novel View
Synthesis</h2>
<h3 id="neural-scene-representations">Neural Scene Representations</h3>
<ul>
<li><p><strong>Scene Representation Transformer: Geometry-Free Novel
View Synthesis Through Set-Latent Scene Representations.</strong><br>
<em>Mehdi S. M. Sajjadi, Henning Meyer, Etienne Pot, Urs Bergmann, Klaus
Greff, Noha Radwan, Suhani Vora, Mario Lucic, Daniel Duckworth, Alexey
Dosovitskiy, Jakob Uszkoreit, Thomas Funkhouser, Andrea
Tagliasacchi.</em><br> CVPR 2022. [<a
href="https://arxiv.org/abs/2111.13152">Paper</a>] [<a
href="https://srt-paper.github.io/">Project</a>] [<a
href="https://github.com/stelzner/srt">Code</a>]</p></li>
<li><p><strong>Light Field Networks: Neural Scene Representations with
Single-Evaluation Rendering.</strong><br> <em>Vincent Sitzmann, Semon
Rezchikov, William T. Freeman, Joshua B. Tenenbaum, Fredo
Durand.</em><br> NeurIPS 2021 (Spotlight). [<a
href="https://arxiv.org/abs/2106.02634">Paper</a>] [<a
href="https://vsitzmann.github.io/lfns/">Project</a>] [<a
href="https://github.com/vsitzmann/light-field-networks">Code</a>]</p></li>
<li><p><strong>Mip-NeRF: A Multiscale Representation for Anti-Aliasing
Neural Radiance Fields.</strong><br> <em><a
href="https://jonbarron.info/">Jonathan T. Barron</a>, <a
href="https://bmild.github.io/">Ben Mildenhall</a>, <a
href="https://www.matthewtancik.com/">Matthew Tancik</a>, <a
href="https://phogzone.com/cv.html">Peter Hedman</a>, <a
href="http://ricardomartinbrualla.com/">Ricardo Martin-Brualla</a>, <a
href="https://pratulsrinivasan.github.io/">Pratul P.
Srinivasan</a>.</em><br> ICCV 2021. [<a
href="https://arxiv.org/abs/2103.13415">Paper</a>] [<a
href="http://jonbarron.info/mipnerf">Project</a>] [<a
href="https://github.com/google/mipnerf">Github</a>]</p></li>
<li><p><strong>NeRF: Representing Scenes as Neural Radiance Fields for
View Synthesis.</strong><br> <em><a
href="http://people.eecs.berkeley.edu/~bmild/">Ben Mildenhall</a>, <a
href="https://people.eecs.berkeley.edu/~pratul/">Pratul P.
Srinivasan</a>, <a href="http://www.matthewtancik.com/">Matthew
Tancik</a>, <a href="https://jonbarron.info/">Jonathan T. Barron</a>, <a
href="http://cseweb.ucsd.edu/~ravir/">Ravi Ramamoorthi</a>, <a
href="https://www2.eecs.berkeley.edu/Faculty/Homepages/yirenng.html">Ren
Ng</a>.</em><br> ECCV 2020. [<a
href="https://arxiv.org/abs/2003.08934">Paper</a>] [<a
href="http://tancik.com/nerf">Project</a>] [<a
href="https://github.com/bmild/nerf">Gtihub-Tensorflow</a>] [<a
href="https://github.com/krrish94/nerf-pytorch">krrish94-PyTorch</a>]
[<a
href="https://github.com/yenchenlin/nerf-pytorch">yenchenlin-PyTorch</a>]</p></li>
<li><p><strong>Differentiable Volumetric Rendering: Learning Implicit 3D
Representations without 3D Supervision.</strong><br> <em>Michael
Niemeyer, Lars Mescheder, Michael Oechsle, Andreas Geiger.</em><br> CVPR
2020. [<a
href="http://www.cvlibs.net/publications/Niemeyer2020CVPR.Paper">Paper</a>]
[<a
href="https://github.com/autonomousvision/differentiable_volumetric_rendering">Code</a>]</p></li>
<li><p><strong>Scene Representation Networks: Continuous
3D-Structure-Aware Neural Scene Representations.</strong><br> <em><a
href="https://vsitzmann.github.io/">Vincent Sitzmann</a>, Michael
Zollhöfer, Gordon Wetzstein.</em><br> NeurIPS 2019 (Oral, Honorable
Mention “Outstanding New Directions”). [<a
href="http://arxiv.org/abs/1906.01618">Paper</a>] [<a
href="https://github.com/vsitzmann/scene-representation-networks">Project</a>]
[<a
href="https://github.com/vsitzmann/scene-representation-networks">Code</a>]
[<a
href="https://drive.google.com/drive/folders/1OkYgeRcIcLOFu1ft5mRODWNQaPJ0ps90?usp=sharing">Dataset</a>]</p></li>
<li><p><strong>LLFF: Local Light Field Fusion: Practical View Synthesis
with Prescriptive Sampling Guidelines.</strong><br> <em><a
href="http://people.eecs.berkeley.edu/~bmild/">Ben Mildenhall</a>,
Pratul Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi
Ramamoorthi, Ren Ng, Abhishek Kar.</em><br> SIGGRAPH 2019. [<a
href="https://arxiv.org/abs/1905.00889">Paper</a>] [<a
href="https://people.eecs.berkeley.edu/~bmild/llff/">Project</a>] [<a
href="https://github.com/Fyusion/LLFF">Code</a>]</p></li>
<li><p><strong>DeepVoxels: Learning Persistent 3D Feature
Embeddings.</strong><br> <em>Vincent Sitzmann, Justus Thies, Felix
Heide, Matthias Nießner, Gordon Wetzstein, Michael Zollhöfer.</em><br>
CVPR 2019 (Oral). [<a href="https://arxiv.org/abs/1812.01024">Paper</a>]
[<a href="http://vsitzmann.github.io/deepvoxels/">Project</a>] [<a
href="https://github.com/vsitzmann/deepvoxels">Code</a>]</p></li>
</ul>
<h3 id="acceleration">Acceleration</h3>
<ul>
<li><p><strong>Instant Neural Graphics Primitives with a Multiresolution
Hash Encoding.</strong><br> <em><a href="https://tom94.net/">Thomas
Müller</a>, <a href="https://research.nvidia.com/person/alex-evans">Alex
Evans</a>, <a
href="https://research.nvidia.com/person/christoph-schied">Christoph
Schied</a>, <a
href="https://research.nvidia.com/person/alex-keller">Alexander
Keller</a>.</em><br> SIGGRAPH (TOG) 2022. [<a
href="https://nvlabs.github.io/instant-ngp/assets/mueller2022instant.Paper">Paper</a>]
[<a href="https://nvlabs.github.io/instant-ngp">Project</a>] [<a
href="https://github.com/NVlabs/instant-ngp">Code</a>]</p></li>
<li><p><strong>DIVeR: Real-time and Accurate Neural Radiance Fields with
Deterministic Integration for Volume Rendering.</strong><br> <em><a
href="https://lwwu2.github.io/">Liwen Wu</a>, <a
href="https://jyl.kr/">Jae Yong Lee</a>, <a
href="https://anandbhattad.github.io/">Anand Bhattad</a>, <a
href="https://yxw.web.illinois.edu/">Yuxiong Wang</a>, <a
href="http://luthuli.cs.uiuc.edu/~daf/">David A. Forsyth</a>.</em><br>
CVPR 2022. [<a href="https://arxiv.org/abs/2111.10427">Paper</a>] [<a
href="https://lwwu2.github.io/diver/">Project</a>] [<a
href="https://github.com/lwwu2/diver">Code</a>]</p></li>
<li><p><strong>KiloNeRF: Speeding up Neural Radiance Fields with
Thousands of Tiny MLPs.</strong><br> <em>Christian Reiser, Songyou Peng,
Yiyi Liao, Andreas Geiger.</em><br> ICCV 2021. [<a
href="https://arxiv.org/abs/2103.13744">Paper</a>] [<a
href="https://github.com/creiser/kilonerf">Code</a>]</p></li>
<li><p><strong>FastNeRF: High-Fidelity Neural Rendering at
200FPS.</strong><br> <em>Stephan J. Garbin, Marek Kowalski, Matthew
Johnson, Jamie Shotton, Julien Valentin.</em><br> ICCV 2021. [<a
href="https://arxiv.org/abs/2103.10380">Paper</a>]</p></li>
<li><p><strong>PlenOctrees for Real-time Rendering of Neural Radiance
Fields.</strong><br> <em><a href="https://alexyu.net/">Alex Yu</a>, <a
href="https://www.liruilong.cn/">Ruilong Li</a>, <a
href="https://www.matthewtancik.com/">Matthew Tancik</a>, <a
href="https://www.hao-li.com/">Hao Li</a>, <a
href="https://www2.eecs.berkeley.edu/Faculty/Homepages/yirenng.html">Ren
Ng</a>, <a href="https://people.eecs.berkeley.edu/~kanazawa/">Angjoo
Kanazawa</a>.</em><br> ICCV 2021. [<a
href="https://arxiv.org/abs/2103.14024">Paper</a>] [<a
href="https://alexyu.net/plenoctrees/">Project</a>] [<a
href="https://github.com/sxyu/plenoctree">Code</a>]</p></li>
<li><p><strong>Baking Neural Radiance Fields for Real-Time View
Synthesis.</strong><br> <em><a href="https://phogzone.com/">Peter
Hedman</a>, <a href="https://pratulsrinivasan.github.io/">Pratul P.
Srinivasan</a>, <a href="https://bmild.github.io/">Ben Mildenhall</a>,
<a href="https://jonbarron.info/">Jonathan T. Barron</a>, <a
href="https://www.pauldebevec.com/">Paul Debevec</a>.</em><br> ICCV 2021
(oral). [<a href="https://arxiv.org/abs/2103.14645">Paper</a>] [<a
href="https://nerf.live/">Project</a>] [<a
href="https://github.com/google-research/google-research/tree/master/snerg">Code</a>]</p></li>
<li><p><strong>AutoInt: Automatic Integration for Fast Neural Volume
Rendering.</strong><br> <em>David B. Lindell, Julien N. P. Martel,
Gordon Wetzstein.</em><br> CVPR 2021 (oral). [<a
href="https://arxiv.org/abs/2012.01714">Paper</a>] [<a
href="http://www.computationalimaging.org/publications/automatic-integration/">Project</a>]
[<a
href="https://github.com/computational-imaging/automatic-integration">Code</a>]</p></li>
<li><p><strong>NSVF: Neural Sparse Voxel Fields.</strong><br> <em><a
href="https://lingjie0206.github.io/">Lingjie Liu</a>, Jiatao Gu, Kyaw
Zaw Lin, Tat-Seng Chua, Christian Theobalt.</em><br> NeurIPS 2020. [<a
href="https://arxiv.org/abs/2007.11571">Paper</a>] [<a
href="https://lingjie0206.github.io/papers/NSVF/">Project</a>] [<a
href="https://github.com/facebookresearch/NSVF">Code</a>]</p></li>
</ul>
<h3 id="from-constrained-to-in-the-wild-conditions">From Constrained to
In-the-wild Conditions</h3>
<h4 id="few-images">Few Images</h4>
<ul>
<li><p><strong>GRF: Learning a General Radiance Field for 3D
Representation and Rendering.</strong><br> <em>Alex Trevithick, Bo
Yang.</em><br> ICCV 2021. [<a
href="https://openaccess.thecvf.com/content/ICCV2021/html/Trevithick_GRF_Learning_a_General_Radiance_Field_for_3D_Representation_and_ICCV_2021_paper.html">Paper</a>]
[<a href="https://github.com/alextrevithick/GRF">Code</a>]</p></li>
<li><p><strong>MVSNeRF: Fast Generalizable Radiance Field Reconstruction
from Multi-View Stereo.</strong><br> <em><a
href="https://apchenstu.github.io/">Anpei Chen</a>, <a
href="http://cseweb.ucsd.edu/~zex014/">Zexiang Xu</a>, Fuqiang Zhao,
Xiaoshuai Zhang, <a href="https://www.fbxiang.com/">Fanbo Xiang</a>, <a
href="http://vic.shanghaitech.edu.cn/vrvc/en/people/">Jingyi Yu</a>, <a
href="https://cseweb.ucsd.edu/~haosu/">Hao Su</a>.</em><br> ICCV 2021.
[<a href="https://arxiv.org/abs/2103.15595">Paper</a>] [<a
href="https://apchenstu.github.io/mvsnerf/">Project</a>] [<a
href="https://github.com/apchenstu/mvsnerf">Code</a>]</p></li>
<li><p><strong>CodeNeRF: Disentangled Neural Radiance Fields for Object
Categories.</strong><br> <em>Wonbong Jang, Lourdes Agapito.</em><br>
ICCV 2021. [<a href="https://arxiv.org/abs/2109.01750">Paper</a>] [<a
href="https://sites.google.com/view/wbjang/home/codenerf">Project</a>]
[<a href="https://github.com/wayne1123/code-nerf">Code</a>]</p></li>
<li><p><strong>pixelNeRF: Neural Radiance Fields from One or Few
Images.</strong><br> <em><a href="https://alexyu.net/">Alex Yu</a>,
Vickie Ye, Matthew Tancik, Angjoo Kanazawa.</em><br> CVPR 2021. [<a
href="https://arxiv.org/abs/2012.02190">Paper</a>] [<a
href="https://alexyu.net/pixelnerf">Project</a>] [<a
href="https://github.com/sxyu/pixel-nerf">Code</a>]</p></li>
<li><p><strong>IBRNet: Learning Multi-View Image-Based
Rendering.</strong><br> <em>Qianqian Wang, Zhicheng Wang, Kyle Genova,
Pratul Srinivasan, Howard Zhou, Jonathan T. Barron, Ricardo
Martin-Brualla, Noah Snavely, Thomas Funkhouser.</em><br> CVPR 2021. [<a
href="https://arxiv.org/abs/2102.13090">Paper</a>] [<a
href="https://ibrnet.github.io/">Project</a>] [<a
href="https://github.com/googleinterns/IBRNet">Code</a>]</p></li>
<li><p><strong>NeRF-VAE: A Geometry Aware 3D Scene Generative
Model.</strong><br> <em>Adam R. Kosiorek, Heiko Strathmann, Daniel
Zoran, Pol Moreno, Rosalia Schneider, Soňa Mokrá, Danilo J.
Rezende.</em><br> ICML 2021. [<a
href="https://arxiv.org/abs/2104.00587">Paper</a>]</p></li>
</ul>
<h4 id="pose-free">Pose-free</h4>
<ul>
<li><p><strong>Self-Calibrating Neural Radiance Fields.</strong><br>
<em>Yoonwoo Jeong, Seokjun Ahn, Christopher Choy, Animashree Anandkumar,
Minsu Cho, Jaesik Park.</em><br> ICCV 2021. [<a
href="https://arxiv.org/abs/2108.13826">Paper</a>] [<a
href="https://postech-cvlab.github.io/SCNeRF/">Project</a>] [<a
href="https://github.com/POSTECH-CVLab/SCNeRF">Code</a>]</p></li>
<li><p><strong>BARF: Bundle-Adjusting Neural Radiance
Fields.</strong><br> <em><a
href="https://chenhsuanlin.bitbucket.io/">Chen-Hsuan Lin</a>, <a
href="http://people.csail.mit.edu/weichium/">Wei-Chiu Ma</a>, Antonio
Torralba, Simon Lucey.</em><br> ICCV 2021. [<a
href="https://arxiv.org/abs/2104.06405">Paper</a>] [<a
href="https://github.com/chenhsuanlin/bundle-adjusting-NeRF">Code</a>]</p></li>
<li><p><strong>NeRF--: Neural Radiance Fields Without Known Camera
Parameters.</strong><br> <em><a
href="https://scholar.google.com/citations?user=zCBKqa8AAAAJ&hl=en">Zirui
Wang</a>, <a href="http://elliottwu.com">Shangzhe Wu</a>, <a
href="https://weidixie.github.io/weidi-personal-webpage/">Weidi Xie</a>,
<a href="https://sites.google.com/site/drminchen/home">Min Chen</a>, <a
href="https://eng.ox.ac.uk/people/victor-prisacariu/">Victor Adrian
Prisacariu</a>.</em><br> arxiv 2021. [<a
href="https://arxiv.org/abs/2102.07064">Paper</a>] [<a
href="http://nerfmm.active.vision/">Project</a>] [<a
href="https://github.com/ActiveVisionLab/nerfmm">Code</a>]</p></li>
</ul>
<h4 id="varying-appearance">Varying Appearance</h4>
<ul>
<li><p><strong>NeRFReN: Neural Radiance Fields with
Reflections.</strong><br> <em>Yuan-Chen Guo, Di Kang, Linchao Bao, Yu
He, Song-Hai Zhang.</em><br> CVPR 2022. [<a
href="https://arxiv.org/abs/2111.15234">Paper</a>]
[<a href="https://bennyguo.github.io/nerfren/">Project</a>]</p></li>
<li><p><strong>NeRF in the Wild: Neural Radiance Fields for
Unconstrained Photo Collections.</strong><br> <em><a
href="http://www.ricardomartinbrualla.com/">Ricardo Martin-Brualla</a>,
<a
href="https://scholar.google.com/citations?user=g98QcZUAAAAJ&hl=en">Noha
Radwan</a>, <a href="https://research.google/people/105804/">Mehdi S. M.
Sajjadi</a>, <a href="https://jonbarron.info/">Jonathan T. Barron</a>,
<a
href="https://scholar.google.com/citations?user=FXNJRDoAAAAJ&hl=en">Alexey
Dosovitskiy</a>, <a
href="http://www.stronglyconvex.com/about.html">Daniel
Duckworth</a>.</em><br> CVPR 2021 (oral). [<a
href="https://arxiv.org/abs/2008.02268">Paper</a>] [<a
href="https://nerf-w.github.io/">Code</a>]</p></li>
</ul>
<h4 id="large-scale-scene">Large-scale Scene</h4>
<ul>
<li><p><strong>Grid-guided Neural Radiance Fields for Large Urban
Scenes.</strong><br> <em>Linning Xu, Yuanbo Xiangli, Sida Peng, Xingang
Pan, Nanxuan Zhao, Christian Theobalt, Bo Dai, Dahua Lin.</em><br> CVPR
2023. [<a href="https://arxiv.org/abs/2303.14001">Paper</a>] [<a
href="https://city-super.github.io/gridnerf/">Project</a>]</p></li>
<li><p><strong>S3-NeRF: Neural Reflectance Field from Shading and Shadow
under a Single Viewpoint.</strong><br> <em><a
href="https://ywq.github.io/">Wenqi Yang</a>, <a
href="https://guanyingc.github.io/">Guanying Chen</a>, <a
href="http://chaofengc.github.io/">Chaofeng Chen</a>, <a
href="https://zfchenunique.github.io/">Zhenfang Chen</a>, <a
href="http://i.cs.hku.hk/~kykwong/">Kwan-Yee K. Wong</a>.</em><br>
NeurIPS 2022. [<a href="https://arxiv.org/abs/2210.08936">Paper</a>] [<a
href="https://ywq.github.io/s3nerf">Project</a>]</p></li>
<li><p><strong>BungeeNeRF: Progressive Neural Radiance Field for Extreme
Multi-scale Scene Rendering.</strong><br> <em>Yuanbo Xiangli, Linning
Xu, Xingang Pan, Nanxuan Zhao, Anyi Rao, Christian Theobalt, Bo Dai,
Dahua Lin.</em><br> ECCV 2022. [<a
href="https://arxiv.org/abs/2112.05504">Paper</a>] [<a
href="https://city-super.github.io/citynerf">Project</a>]</p></li>
<li><p><strong>Block-NeRF: Scalable Large Scene Neural View
Synthesis.</strong><br> <em>Matthew Tancik, Vincent Casser, Xinchen Yan,
Sabeek Pradhan, Ben Mildenhall, Pratul P. Srinivasan, Jonathan T.
Barron, Henrik Kretzschmar.</em><br> CVPR 2022. [<a
href="https://arxiv.org/abs/2202.05263">Paper</a>] [<a
href="https://waymo.com/research/block-nerf/">Project</a>]</p></li>
<li><p><strong>Urban Radiance Fields.</strong><br> <em><a
href="http://www.krematas.com/">Konstantinos Rematas</a>, Andrew Liu,
Pratul P. Srinivasan, Jonathan T. Barron, Andrea Tagliasacchi, Thomas
Funkhouser, Vittorio Ferrari.</em><br> CVPR 2022. [<a
href="https://arxiv.org/abs/2111.14643">Paper</a>] [<a
href="https://urban-radiance-fields.github.io/">Project</a>]</p></li>
<li><p><strong>Mega-NeRF: Scalable Construction of Large-Scale NeRFs for
Virtual Fly-Throughs.</strong><br> <em>Haithem Turki, Deva Ramanan,
Mahadev Satyanarayanan.</em><br> CVPR 2022. [<a
href="https://openaccess.thecvf.com/content/CVPR2022/html/Turki_Mega-NERF_Scalable_Construction_of_Large-Scale_NeRFs_for_Virtual_Fly-Throughs_CVPR_2022_paper.html">Paper</a>]
[<a href="https://github.com/cmusatyalab/mega-nerf">Code</a>]</p></li>
<li><p><strong>Shadow Neural Radiance Fields for Multi-view Satellite
Photogrammetry.</strong><br> <em>Dawa Derksen, Dario Izzo.</em><br> CVPR
2021. [<a href="https://arxiv.org/abs/2104.09877">Paper</a>] [<a
href="https://github.com/esa/snerf">Code</a>]</p></li>
</ul>
<h4 id="dynamic-scene">Dynamic Scene</h4>
<ul>
<li><p><strong>NeRFPlayer: A Streamable Dynamic Scene Representation
with Decomposed Neural Radiance Fields.</strong><br> <em>Liangchen Song,
Anpei Chen, Zhong Li, Zhang Chen, Lele Chen, Junsong Yuan, Yi Xu,
Andreas Geiger.</em><br> TVCG 2023. [<a
href="https://arxiv.org/abs/2210.15947">Paper</a>] [<a
href="https://lsongx.github.io/projects/nerfplayer.html">Project</a>]</p></li>
<li><p><strong>Generative Deformable Radiance Fields for Disentangled
Image Synthesis of Topology-Varying Objects.</strong><br> <em>Ziyu Wang,
Yu Deng, Jiaolong Yang, Jingyi Yu, Xin Tong.</em><br> Pacific Graphics
2022. [<a href="https://arxiv.org/abs/2209.04183">Paper</a>] [<a
href="https://ziyuwang98.github.io/GDRF/">Code</a>]</p></li>
<li><p><strong>Neural Surface Reconstruction of Dynamic Scenes with
Monocular RGB-D Camera.</strong><br> <em><a
href="https://rainbowrui.github.io/">Hongrui Cai</a>, <a
href="https://github.com/WanquanF">Wanquan Feng</a>, <a
href="https://scholar.google.com/citations?hl=en&user=5G-2EFcAAAAJ">Xuetao
Feng</a>, <a href="">Yan Wang</a>, <a
href="http://staff.ustc.edu.cn/~juyong/">Juyong Zhang</a>.</em><br>
NeurIPS 2022. [<a href="https://arxiv.org/abs/2206.15258">Paper</a>] [<a
href="https://ustc3dv.github.io/ndr/">Project</a>] [<a
href="https://github.com/USTC3DV/NDR-code">Code</a>]</p></li>
<li><p><strong>LoRD: Local 4D Implicit Representation for High-Fidelity
Dynamic Human Modeling.</strong><br> <em>Boyan Jiang, Xinlin Ren,
Mingsong Dou, Xiangyang Xue, Yanwei Fu, Yinda Zhang.</em><br> ECCV 2022.
[<a href="https://arxiv.org/abs/2208.08622">Paper</a>] [<a
href="https://boyanjiang.github.io/LoRD/">Code</a>]</p></li>
<li><p><strong>Fourier PlenOctrees for Dynamic Radiance Field Rendering
in Real-time.</strong><br> <em><a
href="https://aoliao12138.github.io/">Liao Wang</a>, <a
href="https://jiakai-zhang.github.io/">Jiakai Zhang</a>, Xinhang Liu,
Fuqiang Zhao, Yanshun Zhang, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi
Yu.</em><br> CVPR 2022 (Oral). [<a
href="https://arxiv.org/abs/2202.08614">Paper</a>] [<a
href="https://aoliao12138.github.io/FPO/">Project</a>]</p></li>
<li><p><strong>CoNeRF: Controllable Neural Radiance Fields.</strong><br>
<em>Kacper Kania, Kwang Moo Yi, Marek Kowalski, Tomasz Trzciński, Andrea
Tagliasacchi.</em><br> CVPR 2022. [<a
href="https://arxiv.org/abs/2112.01983">Paper</a>] [<a
href="https://conerf.github.io/">Project</a>]</p></li>
<li><p><strong>Non-Rigid Neural Radiance Fields: Reconstruction and
Novel View Synthesis of a Deforming Scene from Monocular
Video.</strong><br> <em>Edgar Tretschk, Ayush Tewari, Vladislav
Golyanik, Michael Zollhöfer, Christoph Lassner, Christian
Theobalt.</em><br> ICCV 2021. [<a
href="https://arxiv.org/abs/2012.12247">Paper</a>] [<a
href="https://gvv.mpi-inf.mpg.de/projects/nonrigid_nerf/">Project</a>]
[<a
href="https://github.com/facebookresearch/nonrigid_nerf">Code</a>]</p></li>
<li><p><strong>NeRFlow: Neural Radiance Flow for 4D View Synthesis and
Video Processing.</strong><br> <em>Yilun Du, Yinan Zhang, Hong-Xing Yu,
Joshua B. Tenenbaum, Jiajun Wu.</em><br> ICCV 2021. [<a
href="https://arxiv.org/abs/2012.09790">Paper</a>] [<a
href="https://yilundu.github.io/nerflow/">Project</a>]</p></li>
<li><p><strong>Nerfies: Deformable Neural Radiance Fields.</strong><br>
<em><a href="https://keunhong.com/">Keunhong Park</a>, <a
href="https://utkarshsinha.com/">Utkarsh Sinha</a>, <a
href="https://jonbarron.info/">Jonathan T. Barron</a>, <a
href="http://sofienbouaziz.com/">Sofien Bouaziz</a>, <a
href="https://www.danbgoldman.com/">Dan B Goldman</a>, <a
href="https://homes.cs.washington.edu/~seitz/">Steven M. Seitz</a>, <a
href="http://www.ricardomartinbrualla.com/">Ricardo-Martin
Brualla</a>.</em><br> ICCV 2021. [<a
href="https://arxiv.org/abs/2011.12948">Paper</a>] [<a
href="https://nerfies.github.io/">Project</a>] [<a
href="https://github.com/google/nerfies">Code</a>]</p></li>
<li><p><strong>D-NeRF: Neural Radiance Fields for Dynamic
Scenes.</strong><br> <em><a
href="https://www.albertpumarola.com/">Albert Pumarola</a>, <a
href="https://www.iri.upc.edu/people/ecorona/">Enric Corona</a>, <a
href="http://virtualhumans.mpi-inf.mpg.de/">Gerard Pons-Moll</a>, <a
href="http://www.iri.upc.edu/people/fmoreno/">Francesc
Moreno-Noguer</a>.</em><br> CVPR 2021. [<a
href="https://arxiv.org/abs/2011.13961">Paper</a>] [<a
href="https://www.albertpumarola.com/research/D-NeRF/index.html">Project</a>]
[<a href="https://github.com/albertpumarola/D-NeRF">Code</a>] [<a
href="https://www.dropbox.com/s/0bf6fl0ye2vz3vr/data.zip?dl=0">Data</a>]</p></li>
<li><p><strong>Dynamic Neural Radiance Fields for Monocular 4D Facial
Avatar Reconstruction.</strong><br> <em>Guy Gafni, Justus Thies, Michael
Zollhöfer, Matthias Nießner.</em><br> CVPR 2021. [<a
href="https://arxiv.org/abs/2012.03065">Paper</a>] [<a
href="https://gafniguy.github.io/4D-Facial-Avatars/">Project</a>] [<a
href="https://youtu.be/m7oROLdQnjk">Video</a>]</p></li>
<li><p><strong>NSFF: Neural Scene Flow Fields for Space-Time View
Synthesis of Dynamic Scenes.</strong><br> <em><a
href="https://www.cs.cornell.edu/~zl548/">Zhengqi Li</a>, <a
href="https://sniklaus.com/welcome">Simon Niklaus</a>, <a
href="https://www.cs.cornell.edu/~snavely/">Noah Snavely</a>, <a
href="https://research.adobe.com/person/oliver-wang/">Oliver
Wang</a>.</em><br> CVPR 2021. [<a
href="https://arxiv.org/abs/2011.13084">Paper</a>] [<a
href="http://www.cs.cornell.edu/~zl548/NSFF">Project</a>] [<a
href="https://github.com/zhengqili/Neural-Scene-Flow-Fields">Code</a>]</p></li>
<li><p><strong>Space-time Neural Irradiance Fields for Free-Viewpoint
Video.</strong><br> <em><a
href="https://www.cs.cornell.edu/~wenqixian/">Wenqi Xian</a>, <a
href="https://filebox.ece.vt.edu/~jbhuang/">Jia-Bin Huang</a>, <a