
Question About Test Time? #7

Open
CocoRLin opened this issue Nov 29, 2019 · 3 comments
Comments

@CocoRLin
CocoRLin commented Nov 29, 2019

Hi!
I ran this code on 1 GPU (Nvidia 1080Ti). As far as I can tell, your code processes 1 to 4 frames at a time, right? But when I time the forward pass alone (only the time around `outputs, _ = model(*inputs)`), it takes about 230 ms with batch size 4, which is about 60 ms per frame. That differs from the speed reported in your paper... Am I doing something wrong?

Here is my code (I added timing code in RANet_lib.py):

```python
inputs = [torch.cat(Img)[index_select], torch.cat(KFea)[index_select],
          torch.cat(KMsk)[index_select], torch.cat(PMsk)[index_select]]

torch.cuda.synchronize()
stime = time.time()
outputs, _ = model(*inputs)
torch.cuda.synchronize()
etime = time.time()

model_time = (etime - stime) * 1000  # milliseconds
print('time:', model_time)
```
@Storife
Owner

Storife commented Nov 30, 2019

Hi,
Thanks for your question.

Your code has no problem. To investigate, we tested the speed again using the default settings on two GPUs, a 1080Ti and a 2080Ti, with pytorch=1.0.1 and cuda=10.0.
Here are the latest results (seconds per forward pass):

| Batch size | 1 | 2 | 4 |
| --- | --- | --- | --- |
| 1080Ti | 0.045 | 0.04 | 0.038 |
| 2080Ti | 0.037 | 0.025 | 0.023 |

However, the speed does not reach the speed in the original paper when using batch size 1. The reason is that I did not call `torch.cuda.synchronize()` when timing the model for the paper, so the CPU-side timer stopped before the GPU had finished and the measured time was shorter than the model actually takes. I'm terribly sorry about this problem, and we will correct it soon.

@Guptajakala

@Storife
Hi, what is the effect of `torch.cuda.synchronize()`? Do you mean that omitting it makes the measured inference time appear faster?

@Storife
Owner

Storife commented Jan 29, 2020

@Guptajakala
No. The network runs at the same speed either way, but the measured time appears shorter if the CPU does not wait for the GPU to finish before stopping the timer, because CUDA kernel launches are asynchronous and return to the CPU immediately.
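For readers following along: the effect can be reproduced without a GPU. The sketch below is an analogy, not RANet code; the hypothetical `fake_kernel` submitted to a thread pool stands in for an asynchronous CUDA kernel launch, and `future.result()` plays the role of `torch.cuda.synchronize()`. Stopping the timer before synchronizing measures only the launch overhead, not the work itself:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_kernel():
    """Stand-in for GPU work: sleeps 50 ms."""
    time.sleep(0.05)

pool = ThreadPoolExecutor(max_workers=1)

# Without synchronization: the "launch" returns immediately, so the
# timer stops long before the work is done (like reading the clock
# without calling torch.cuda.synchronize()).
start = time.time()
future = pool.submit(fake_kernel)
t_no_sync = time.time() - start
future.result()  # drain the queue before the next measurement

# With synchronization: wait for the work to finish before stopping
# the timer (analogous to calling torch.cuda.synchronize()).
start = time.time()
pool.submit(fake_kernel).result()
t_sync = time.time() - start

print('no sync: %.1f ms, with sync: %.1f ms'
      % (t_no_sync * 1000, t_sync * 1000))
pool.shutdown()
```

The first measurement typically reports a fraction of a millisecond while the second reports roughly the full 50 ms, which is the same gap the benchmark numbers above show.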
