
How to run model on wild video #14

Open
Victcode opened this issue Aug 12, 2021 · 9 comments

Comments

@Victcode

No description provided.

@Victcode Victcode changed the title How to run model How to run model on wild video Aug 12, 2021
@Victcode Victcode reopened this Aug 12, 2021
@Victcode
Author

Thanks for your great work.
I used the VideoPose3D code to run on a wild video, but I got a wrong result. Do I need to make changes to the original VideoPose3D code?

@jimmybuffi

jimmybuffi commented Sep 21, 2021

Bumping this, would love to have an example of how to run this in the wild when you have a chance! I've tried many different iterations, but I can't seem to get the pretrained models to work; the output 3D points just seem random. Can you confirm whether the format of the 2D input must be ground-truth H36M keypoints or COCO keypoints?

@jimmybuffi

Realized my issue was likely either not having the environment set up properly, or trying to run it on a CPU instead of a GPU.
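
If anyone else hits the same wall, a quick environment sanity check is worth doing before debugging the model itself (plain PyTorch, nothing repo-specific):

import torch

# The pretrained models are intended to run on a GPU; confirm that
# the installed PyTorch build can actually see one.
print(torch.__version__)
print(torch.cuda.is_available())   # should print True on a GPU machine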

@vicentowang

vicentowang commented Nov 10, 2021

@jimmybuffi
The same problem occurred. I guess VideoPose3D uses COCO keypoints, which have a different joint order from the H36M keypoints. Have you solved the problem?
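
For context, the two 17-keypoint orderings differ substantially. A reference listing follows; the H36M order below is the common 17-joint convention used by VideoPose3D-style code, so verify it against the dataset class you are actually using:

# COCO order, as produced by Detectron2's keypoint head
COCO_ORDER = ['nose', 'l_eye', 'r_eye', 'l_ear', 'r_ear',
              'l_shoulder', 'r_shoulder', 'l_elbow', 'r_elbow',
              'l_wrist', 'r_wrist', 'l_hip', 'r_hip',
              'l_knee', 'r_knee', 'l_ankle', 'r_ankle']

# Common 17-joint H36M order (an assumption here; check your own code)
H36M_ORDER = ['pelvis', 'r_hip', 'r_knee', 'r_ankle',
              'l_hip', 'l_knee', 'l_ankle', 'spine', 'thorax',
              'neck_nose', 'head', 'l_shoulder', 'l_elbow', 'l_wrist',
              'r_shoulder', 'r_elbow', 'r_wrist']

Note that H36M also contains joints (pelvis, spine, thorax, head) with no direct COCO counterpart, so a pure reordering is not enough; see the conversion script linked later in this thread.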

@jimmybuffi

@vicentowang using their gt81f model, the required input was the H36M keypoint format. I'm not exactly sure what my issue was originally, but when I set up the environment exactly as they specified, on a GPU machine, using the H36M keypoint format, it worked and the issue was resolved.
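
For anyone following along, here is a minimal sketch of feeding H36M-format 2D keypoints to the 81-frame model. This is not the authors' script: the checkpoint and input filenames are placeholders, the PoseTransformer constructor arguments may differ between commits, and the normalization helper is reproduced from VideoPose3D's common/camera.py convention.

import numpy as np
import torch
from common.model_poseformer import PoseTransformer

def normalize_screen_coordinates(X, w, h):
    # VideoPose3D convention: map pixel coords so x lies in [-1, 1],
    # preserving the aspect ratio.
    return X / w * 2 - [1, h / w]

# (num_frames, 17, 2) pixel coordinates in H36M joint order (hypothetical file)
kpts_2d = np.load('my_h36m_keypoints.npy')
kpts_2d = normalize_screen_coordinates(kpts_2d, w=1920, h=1080)

model = PoseTransformer(num_frame=81, num_joints=17, in_chans=2)
ckpt = torch.load('gt81f.bin', map_location='cpu')   # example filename
model.load_state_dict(ckpt.get('model_pos', ckpt))   # some checkpoints nest the weights
model.eval()

# The model consumes a fixed 81-frame window and predicts the 3D pose
# of its centre frame; slide the window over the sequence for a video.
window = torch.from_numpy(kpts_2d[:81]).float().unsqueeze(0)   # (1, 81, 17, 2)
with torch.no_grad():
    pred_3d = model(window)   # (1, 1, 17, 3), root-relative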

@vicentowang

vicentowang commented Nov 11, 2021

@jimmybuffi how do I get the H36M keypoint format? I used Detectron2 following https://github.com/facebookresearch/VideoPose3D/blob/main/INFERENCE.md, which outputs the COCO format, so I get a wrong result:

cd inference
python infer_video_d2.py \
    --cfg COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml \
    --output-dir output_directory \
    --image-ext mp4 \
    input_directory


@vicentowang

vicentowang commented Nov 17, 2021

@jimmybuffi
I tried it, but got the wrong result. What about your experiment? Thanks anyway.
[screenshot: incorrect 3D pose result]

@TomCatOnline

I used this script from the GAST-Net repo to do the conversion:

https://github.com/fabro66/GAST-Net-3DPoseEstimation/blob/946f6b701452204d23969a43dae348b69eca9bd9/lib/pose/hrnet/lib/utils/coco_h36m.py#L9
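
The core of that conversion looks roughly like the sketch below. This is an approximation of the idea, not a copy of the linked file; the exact interpolation rules for the joints COCO lacks (pelvis, spine, thorax, head) are in the script itself.

import numpy as np

def coco_to_h36m(kpts):
    # kpts: (..., 17, 2) keypoints in COCO order.
    # Returns (..., 17, 2) keypoints in H36M order, synthesizing the
    # joints COCO does not have from neighbouring COCO joints.
    h36m = np.zeros_like(kpts)
    h36m[..., 0, :] = (kpts[..., 11, :] + kpts[..., 12, :]) / 2   # pelvis = hip midpoint
    h36m[..., 1, :] = kpts[..., 12, :]                            # right hip
    h36m[..., 2, :] = kpts[..., 14, :]                            # right knee
    h36m[..., 3, :] = kpts[..., 16, :]                            # right ankle
    h36m[..., 4, :] = kpts[..., 11, :]                            # left hip
    h36m[..., 5, :] = kpts[..., 13, :]                            # left knee
    h36m[..., 6, :] = kpts[..., 15, :]                            # left ankle
    h36m[..., 8, :] = (kpts[..., 5, :] + kpts[..., 6, :]) / 2     # thorax = shoulder midpoint
    h36m[..., 7, :] = (h36m[..., 0, :] + h36m[..., 8, :]) / 2     # spine = pelvis/thorax midpoint
    h36m[..., 9, :] = kpts[..., 0, :]                             # neck/nose = nose
    h36m[..., 10, :] = (kpts[..., 1, :] + kpts[..., 2, :]) / 2    # head ~ eye midpoint
    h36m[..., 11, :] = kpts[..., 5, :]                            # left shoulder
    h36m[..., 12, :] = kpts[..., 7, :]                            # left elbow
    h36m[..., 13, :] = kpts[..., 9, :]                            # left wrist
    h36m[..., 14, :] = kpts[..., 6, :]                            # right shoulder
    h36m[..., 15, :] = kpts[..., 8, :]                            # right elbow
    h36m[..., 16, :] = kpts[..., 10, :]                           # right wrist
    return h36m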

@jimmybuffi
Can you specify exactly what I should do to make PoseFormer inference in the wild run? Which code should I modify?
Thanks very much!
