I wanted to ask you for some clarification on the results. I ran the code for ~72 hours on a Core i7 with an NVIDIA GeForce GTX 1070 Ti, and here is what I got as a result (I pressed Ctrl+C after 72 hours). I have a few specific questions:
What's the difference between Policy and Policy+MCTS? According to Table 2 in the paper, I guess Policy+MCTS is AlphaTSP and Greedy is Nearest Neighbour. Am I right?
Why does the exact solution differ in each iteration? It starts at 4.43 for the first testing phase, then goes up to 4.61, and then comes back down to 4.57. What is the difference between the testing phases below?
I checked the GPU usage with nvtop on Ubuntu, and the code barely used the GPU (<10% on average), even though PyTorch had been set up correctly. I was expecting to see more GPU usage during execution. What do you think?
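For context on the Greedy question above: my understanding is that the Greedy baseline is the standard nearest-neighbour construction heuristic, which always moves to the closest unvisited city. A minimal sketch of that interpretation (function names are my own, not from the repo):

```python
import math

def nearest_neighbour_tour(points):
    """Greedy TSP heuristic: start at city 0 and repeatedly
    visit the closest unvisited city."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour (returns to the start city)."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# Toy instance: the four corners of the unit square.
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
tour = nearest_neighbour_tour(pts)
print(tour, tour_length(pts, tour))  # [0, 1, 2, 3] 4.0
```

If that reading is right, the interesting comparison in Table 2 is how much Policy+MCTS closes the gap between this greedy construction and the exact solution.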
Thanks again for your interesting and useful contribution.