Thanks for sharing your cool snippets, and for the paper, which is very nicely written as well. I was trying to understand the implementation of DLSH in the `coders.py` file, and below are my queries:
First, why do we apply `GaussianRandomProjection` beforehand? Is it simply because we want to reduce higher-dimensional embeddings (like vectors of dim 768, 512, 1024, etc.)?
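For reference, here is a minimal NumPy-only sketch of what a Gaussian random projection does as a preprocessing step (the 768-dim input and the target dim of 32 are just illustrative assumptions, not values from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 768))  # e.g. 100 transformer embeddings

# Gaussian random projection: multiply by a random matrix with i.i.d.
# N(0, 1/k) entries. By the Johnson-Lindenstrauss lemma, pairwise
# distances are approximately preserved in the lower-dimensional space,
# which is what makes it a cheap front-end for LSH-style coding.
k = 32
R = rng.standard_normal((768, k)) / np.sqrt(k)
X_low = X @ R
print(X_low.shape)  # (100, 32)
```

My (possibly wrong) reading is that the projection serves this purpose: shrink the vectors while keeping distances roughly intact, so the hashing stage works on fewer dimensions.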
Secondly, in `transform_to_absolute_codes`, why do we adjust the offsets? I'm not sure I understood that part clearly.
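To make the question concrete, here is my guess at the general pattern (this is a hypothetical illustration, not the actual `coders.py` logic): quantized LSH bucket indices can be negative, so shifting each dimension by its minimum (the "offset") gives non-negative absolute codes that can then be combined into a single flat code per vector.

```python
import numpy as np

# Hypothetical example: per-dimension signed bucket indices.
codes = np.array([[-2, 1], [0, -3], [3, 2]])

offsets = codes.min(axis=0)            # per-dimension minimum
absolute = codes - offsets             # shift so all codes are >= 0
widths = absolute.max(axis=0) + 1      # number of buckets per dimension

# Combine the per-dimension codes into one absolute code per vector,
# like row-major indexing into a flat array.
flat = absolute[:, 0] * widths[1] + absolute[:, 1]
print(absolute)  # [[0 4] [2 0] [5 5]]
print(flat)      # [ 4 12 35]
```

Is that roughly what the offset adjustment is doing, or is it something else?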
Thanks again for your paper and clean snippets!
Best,
Aditya.