- Music and Audio Computing Lab
- Daejeon, South Korea
- https://www.kirak.kim
- @_kirak_kim
Stars
Generating Talking Face Landmarks from Speech
🐜🐀🐒🚶 A toolkit for robust markerless 3D pose estimation
MANO hand model in PyTorch (anatomically consistent, anchors, etc.)
Large dataset of hand-object contact, hand- and object-pose, and 2.9 M RGB-D grasp images.
Text2HOI: Text-guided 3D Motion Generation for Hand-Object Interaction
TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting.
A Python toolkit/library for reality-centric machine/deep learning and data mining on partially-observed time series, including SOTA neural network models for scientific analysis tasks of imputatio…
Tutorial data for KSMPC Summer School 2024, Session 3
A python package for handling modern staff notation of music
3D Piano in Unity that can playback MIDI file songs
[SIGGRAPH'24] 2D Gaussian Splatting for Geometrically Accurate Radiance Fields
Implementation of "Analyzing and Improving the Training Dynamics of Diffusion Models"
A family of diffusion models for text-to-audio generation.
Official PyTorch implementation of "FineControlNet: Fine-level Text Control for Image Generation with Spatially Aligned Text Control Injection", 2023
HaMeR: Reconstructing Hands in 3D with Transformers
MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
[ECCV2022] D2M-GAN for music generation from dance videos
C++ implementation of a ScienceDirect paper "An accelerating cpu-based correlation-based image alignment for real-time automatic optical inspection"
An OpenMMLab toolbox for human pose estimation, skeleton-based action recognition, and action synthesis.
BeatNet is a state-of-the-art real-time and offline joint music beat, downbeat, tempo, and meter tracking system using a CRNN and particle filtering (ISMIR 2021 paper implementation).
🎶 Music-Driven Conducting Motion Generation (IEEE ICME'21 Best Demo)