Commit

Update README.md
HaoWeiHsueh authored Aug 24, 2024
1 parent 50da3b8 commit bade6e8
Showing 1 changed file with 1 addition and 1 deletion: README.md
@@ -10,7 +10,7 @@ In this [supplementary material](https://github.com/HaoWeiHsueh/LGNet/blob/main/
- **Detailed experiments**
- **Discussion**

- Note that all the notation and abbreviations here are consistent with the main manuscript.
+ Note that all the notation and abbreviations here are consistent with [the main manuscript](https://github.com/HaoWeiHsueh/LGNet/blob/main/LGNet_Local-and-Global%20Feature%20Adaptive%20Network%20for%203D%20Interacting%20Hand%20Mesh%20Reconstruction.pdf).

### Abstract
Accurate 3D interacting hand mesh reconstruction from RGB images is crucial for applications such as robotics, augmented reality (AR), and virtual reality (VR). Especially in the field of robotics, accurate interacting hand mesh reconstruction can significantly improve the accuracy and naturalness of human-robot interaction. This task requires accurate understanding of complex interactions between two hands and ensuring reasonable alignment of the hand mesh with the image. Recent Transformer-based methods directly utilise the features of the two hands as input tokens, ignoring the correlation between local and global features of the interacting hands, leading to hand ambiguity, self-obscuration and self-similarity problems. We propose LGNet, a Local and Global Feature Adaptive Network, which decouples the hand mesh reconstruction task into three stages: a joint stage for predicting hand joints; a mesh stage for predicting a rough hand mesh; and a refine stage for fine-tuning mesh-image alignment using an offset mesh. LGNet enables high-quality fingertip-level mesh-image alignment, effectively models the spatial relationship between two hands, and supports real-time prediction. Extensive quantitative and qualitative results on benchmark datasets show that LGNet outperforms state-of-the-art methods in terms of mesh accuracy and image alignment, and demonstrates strong generalisation capabilities in experiments on in-the-wild images. Our source code will be made available to the community.
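The three-stage decoupling described above (joints, then a coarse mesh, then an offset-based refinement) can be sketched as follows. This is a minimal illustrative pipeline only: the function names, shapes, and random linear "regressors" are placeholders, not LGNet's actual architecture; the 21 joints and 778 vertices per hand follow the common MANO hand-model convention, which the abstract does not explicitly state.

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_stage(features):
    """Stage 1 (placeholder): predict 3D joints for both hands from image features."""
    # 2 hands x 21 joints x 3 coordinates (MANO-style joint layout, assumed)
    W = rng.standard_normal((features.shape[-1], 2 * 21 * 3))
    return (features @ W).reshape(2, 21, 3)

def mesh_stage(joints):
    """Stage 2 (placeholder): lift joints to a rough per-hand mesh."""
    # 778 vertices per hand as in the MANO hand model (assumed)
    U = rng.standard_normal((21, 778))
    return np.einsum('hjc,jv->hvc', joints, U)

def refine_stage(coarse_mesh, features):
    """Stage 3 (placeholder): predict a per-vertex offset mesh for mesh-image alignment."""
    W = rng.standard_normal((features.shape[-1], 2 * 778 * 3))
    offsets = (features @ W).reshape(2, 778, 3)
    return coarse_mesh + 0.01 * offsets

features = rng.standard_normal(256)   # stand-in for backbone image features
joints = joint_stage(features)        # (2, 21, 3)  hand joints
coarse = mesh_stage(joints)           # (2, 778, 3) rough hand mesh
final = refine_stage(coarse, features)  # (2, 778, 3) refined mesh
```

The point of the decomposition is that each stage solves a simpler, better-conditioned subproblem, and the final offset stage can correct fingertip-level misalignment without re-predicting the whole mesh.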
