
question about adapter initialization for segmentation vida #6

Open
hhhyyeee opened this issue Mar 22, 2024 · 1 comment

@hhhyyeee

Hi, I have a question about the initialization of the adapters in the ViDA segmentation model.

According to the ViDA OpenReview page for NeurIPS '23 in this link, I noticed that you tried three versions of adapter initialization for the Cityscapes-to-ACDC experiments: from scratch, ImageNet-pretrained, and source-pretrained.
(I think I read this table somewhere inside a paper, maybe in the supplementary material for ICLR '24(?), but I cannot find the paper right now.)

Since the experiments section of the paper clearly states that you used the SegFormer Mix Transformer as the backbone of your segmentation model, I am curious how you pretrained the Mix Transformer encoder on the ImageNet dataset, given that it is designed specifically for image classification tasks.

It seems possible to extract image features with the Mix Transformer encoder and then feed them into an MLP head for image classification, but I wanted to make sure.
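For what it's worth, the idea in the question can be sketched without any deep-learning framework: global-average-pool the encoder's last-stage feature map into one vector per image, then apply a linear (MLP) head over that vector. This is a minimal, framework-free illustration of that pooling-plus-head pattern; the shapes and function names are my own assumptions, not the authors' or SegFormer's actual code.

```python
# Illustrative sketch: turning a hierarchical segmentation encoder's feature
# map into classification logits, as commonly done for ImageNet pretraining.
# Pure Python with toy shapes; real code would use tensors and a framework.

def global_average_pool(feature_map):
    """feature_map: C x H x W nested lists -> length-C pooled vector."""
    return [
        sum(sum(row) for row in channel) / (len(channel) * len(channel[0]))
        for channel in feature_map
    ]

def linear_head(pooled, weights, bias):
    """Plain linear classifier: logits[k] = sum_c W[k][c] * pooled[c] + b[k]."""
    return [
        sum(w * x for w, x in zip(row, pooled)) + b
        for row, b in zip(weights, bias)
    ]

# Toy example: 2 feature channels of 2x2 spatial extent, 3 classes.
features = [
    [[1.0, 3.0], [5.0, 7.0]],   # channel 0, spatial mean 4.0
    [[2.0, 2.0], [2.0, 2.0]],   # channel 1, spatial mean 2.0
]
pooled = global_average_pool(features)   # [4.0, 2.0]
W = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
b = [0.0, 1.0, -1.0]
logits = linear_head(pooled, W, b)       # [4.0, 3.0, 2.0]
```

The segmentation decoder is simply dropped during such pretraining; only the encoder weights carry over.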

Thank you!

@Yangsenqiao
Owner

Thank you for your interest in our work. We pretrain ViDA in the same way as SegFormer, only adding the ViDA adapters to the encoder.
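To make the answer concrete, "adding ViDA to the encoder" can be read as injecting two parallel adapter branches (a low-rank, domain-shared one and a higher-dimensional, domain-specific one, per the ViDA paper) alongside each adapted linear layer, with their outputs scaled and summed into the base output. The sketch below is a pure-Python toy under those assumptions; the function names, ranks, and scale factors are illustrative, not taken from this repository.

```python
# Toy sketch of a ViDA-style adapted linear layer:
# y = W x + lam_low * (A_low @ B_low @ x) + lam_high * (A_high @ B_high @ x)
# where B_low has a rank-1 bottleneck and the high branch keeps full dimension.

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def vida_linear(x, W, A_low, B_low, A_high, B_high, lam_low, lam_high):
    base = matvec(W, x)                       # frozen base projection
    low = matvec(A_low, matvec(B_low, x))     # low-rank (bottleneck) branch
    high = matvec(A_high, matvec(B_high, x))  # high-dimensional branch
    return [b + lam_low * l + lam_high * h for b, l, h in zip(base, low, high)]

# Toy 2-dimensional example.
x = [1.0, 2.0]
W = [[1.0, 0.0], [0.0, 1.0]]            # identity base weight
B_low = [[1.0, 1.0]]                    # 2 -> 1 (rank-1 bottleneck)
A_low = [[0.5], [0.5]]                  # 1 -> 2
B_high = [[1.0, 0.0], [0.0, 1.0]]       # 2 -> 2 (full dimension)
A_high = [[1.0, 0.0], [0.0, 1.0]]
y = vida_linear(x, W, A_low, B_low, A_high, B_high, 0.1, 0.1)
# y == [1.25, 2.35]
```

Because the branches are additive on top of the base weights, pretraining with adapters attached proceeds exactly like pretraining the plain SegFormer encoder, which matches the reply above.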
