I want to know how you visualize your attention map #23
Hi, thanks for your attention.
Thanks for your reply!
How do you visualize the average of the four attention layers? Do you element-wise add all four N*N attention maps, divide by 4, and then visualize the result? OK, it seems that PCT is much stronger than SPCT, haha. Thank you for your reply!
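For reference, the averaging described above can be sketched as follows. This is my own illustration of "element-wise add and divide by 4", with assumed names (`sa1`..`sa4`) and an assumed map size; it is not the repository's code:

```python
import numpy as np

# Sketch (assumed shapes and names, not the repository's code):
# element-wise add the four N x N attention maps and divide by 4.
N = 256
rng = np.random.default_rng(0)
sa1, sa2, sa3, sa4 = (rng.random((N, N)) for _ in range(4))

avg_attention = (sa1 + sa2 + sa3 + sa4) / 4.0

# Equivalent to stacking the maps and averaging along the new axis:
assert np.allclose(avg_attention, np.stack([sa1, sa2, sa3, sa4]).mean(axis=0))
print(avg_attention.shape)  # (256, 256)
```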
Hi! Can you share your visualization.py? I have tried many times but failed. Many thanks!
I use matplotlib to print the map, and here is the code. Tell me if you have any problems.

```python
def Visualize_Attention_Map(u, xyz, attention, axis):
    # input: u, xyz, attention: (N), axis
    ...
```
Thanks!! But I have some problems. The attention tensor has shape (Batch, 256, 256) (the same for sa1, sa2, sa3, sa4), while your instructions say attention: (N). Should the attention map have shape (N,)? How should I change it?
You can just call this function from your model. Here is my partial code:

```python
#################################### partial code for model ####################################
#################################### full code for visualize ##################################
for i in range(2048):
    ...
```

I simply return the attention map and use a for loop to visualize the attention map for each point.
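Since the full visualization.py never made it into the thread, here is a minimal matplotlib sketch of what a per-point attention coloring might look like. The function name follows the comment above, but the simplified signature, body, and colormap are my assumptions, not the author's code:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the figure can be saved without a display
import matplotlib.pyplot as plt

def visualize_attention_map(xyz, attention, out_path="attn.png"):
    """Sketch (assumed, not the author's code): color each point of an
    (N, 3) cloud by its attention weight (N,) for one query point."""
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    sc = ax.scatter(xyz[:, 0], xyz[:, 1], xyz[:, 2], c=attention, cmap="jet", s=5)
    fig.colorbar(sc)
    fig.savefig(out_path)
    plt.close(fig)

# Toy usage with random data; in practice attention would be one row
# of the N x N map, e.g. attention_map[query_index].
xyz = np.random.rand(2048, 3)
attn = np.random.rand(2048)
visualize_attention_map(xyz, attn)
```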
I used this code to visualize the attention map, but the results are not the same as in the paper. Were you able to reproduce the results? I look forward to your reply!
Could you release the part segmentation code you reproduced in PyTorch? My own reproduction performs very poorly; if possible, I would like to see your reproduced code. Thank you very much.
Hi,
First of all, I want to thank you for the proposed method, which has benefited me a lot. I reproduced your code in PyTorch and tried to visualize the attention map in the part segmentation task, but when I use a point on the right wing as the query point, it does not attend to the left wing the way your visualization in the paper shows. So I want to know how you produced the visualization in the paper.
In addition, another issue points out that the dimension of the softmax is wrong: since your multiplication is Value * Attention, I think the softmax dimension in the attention should be 1, not -1 (i.e., 2); please correct me if I am mistaken. Also, the dimensions of the softmax and the L1 norm are different (the softmax uses -1 but the L1 norm uses 1); why?
Line 211:

```python
self.softmax = nn.Softmax(dim=-1)
```

Line 220:

```python
attention = attention / (1e-9 + attention.sum(dim=1, keepdims=True))
```
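As a sanity check on the axis question, here is a toy NumPy illustration (my own, not the repository's code) of how the chosen axis changes which dimension of a (B, N, N) attention map sums to 1; it does not settle which choice the paper intends, only makes the difference concrete:

```python
import numpy as np

# Toy illustration (not the repository's code): the normalization axis
# determines whether rows or columns of a (batch, N, N) map sum to 1.
def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
energy = rng.normal(size=(1, 4, 4))  # (batch, N, N) attention energies

a_last = softmax(energy, axis=-1)  # dim=-1: each row sums to 1
a_dim1 = softmax(energy, axis=1)   # dim=1: each column sums to 1

print(np.allclose(a_last.sum(axis=-1), 1.0))  # True
print(np.allclose(a_dim1.sum(axis=1), 1.0))   # True

# The quoted L1 normalization (dim=1) likewise makes columns sum to 1:
l1 = a_last / (1e-9 + a_last.sum(axis=1, keepdims=True))
print(np.allclose(l1.sum(axis=1), 1.0))       # True
```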
Also, I want to know how you do neighbor embedding in part segmentation. The paper says the number of output points is N, which means you do not downsample the points even though the SG (sampling and grouping) module runs twice. But when I reproduced the same method, I got a CUDA out-of-memory error on an RTX 2080 Ti (11 GB of VRAM). Is my VRAM not big enough, or have I misunderstood the paper's description?
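On the out-of-memory point, a rough back-of-the-envelope estimate (my own figures and assumed batch size, not from the paper) shows why dense N x N attention without downsampling gets expensive once layers and batches multiply it, even before gradients:

```python
# Back-of-the-envelope estimate (my own, not from the paper) of the
# activation memory for dense N x N attention maps with no downsampling.
N = 2048             # points per cloud (no sampling, as described)
bytes_per_float = 4  # float32
layers = 4           # sa1..sa4
batch = 32           # assumed batch size

one_map = N * N * bytes_per_float
total = one_map * layers * batch  # forward pass only; backward roughly doubles it

print(one_map / 2**20)  # 16.0  -> MiB per map
print(total / 2**30)    # 2.0   -> GiB for the batch, before gradients and features
```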
I'm looking forward to your reply, and thank you for your contribution.