can't test "High Quality Segmentation for Ultra High-resolution Images" #21
I suggest you insert a pdb breakpoint or a print statement before line 138 of dataset/offline_dataset_crm_pad32.py to see whether you can load the image manually. Thanks.
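A minimal sketch of that probe, assuming the __getitem__ structure visible in the traceback quoted later in this issue (self.im_list, self.load_tuple, and the im[:-7] + '_seg.png' path are taken from that traceback):

import pdb

# just before line 138 of dataset/offline_dataset_crm_pad32.py
im_path = self.im_list[idx]
print(im_path)                        # which image is being loaded
print(im_path[:-7] + '_seg.png')      # the coarse-mask path load_tuple will try to open
pdb.set_trace()                       # drop into the debugger and inspect by hand
im, seg, gt = self.load_tuple(im_path)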
I loaded the img variable from the image file.png, but for the seg variable the dataset is missing the "_seg.png" file.
So, if you don't want to modify the dataset code, please change your dataset format. Or keep your dataset and modify the code that loads seg.png. Thanks.
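For the second option, a hedged sketch of what the change inside load_tuple of dataset/offline_dataset_crm_pad32.py could look like. The original line is the one shown in the traceback at the end of this issue; the "coarse/" sub-folder layout is purely a hypothetical example of keeping your own dataset and pointing the loader at it:

import os
from PIL import Image

# original (traceback, line 110):
#   seg = Image.open(im[:-7] + '_seg.png').convert('L')
# hypothetical alternative: coarse masks stored in <dir>/coarse/<basename>.png
def open_coarse_mask(im):
    base = os.path.splitext(os.path.basename(im))[0]           # e.g. "sun_abzsyxfgntlvd_im"
    seg_path = os.path.join(os.path.dirname(im), 'coarse', base + '.png')
    return Image.open(seg_path).convert('L')                   # single-channel coarse mask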
Thank you for your answer. What is the meaning of the seg variable and the "_seg.png" image? I want to modify the code that generates the seg variable to produce the "coord" and "cell" variables.
Hi, have you run the Entity coarse segmentation network yet? I am getting the following error when running it with the instances2017 dataset: ERROR [08/05 15:18:06 d2.engine.train_loop]: Exception during training:
I've seen it mentioned in a lot of issues, but I don't know what detectron2-main means. What does it do?
I'm working with Entity. The first picture shows the README file of the Entity network. Which one have you worked with?
“High Quality Segmentation for Ultra High-resolution Images” doesn't need detectron2. Thanks.
I got it, Thanks
I tried to test using model_10000 and loaded seg as a grayscale image, but the result is very weird: seg = Image.open(im[:-4]+'.png').convert('L'); seg = self.resize_bi(crop_lambda(Image.open(im).convert('L')))
So, what's your problem? The coarse mask from a segmentation model is needed. Thanks.
I used the coarse mask from PSPNet.
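The coarse mask can come from any segmentation model; the commenter above used PSPNet. Below is a minimal sketch that uses torchvision's pretrained DeepLabV3 purely as a stand-in for PSPNet, binarizes its prediction into a single-channel mask, and saves it under the _seg.png name the test loader opens (see the traceback at the end of this issue). The example path and the "anything not background counts as foreground" rule are assumptions, not part of the repository:

import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(pretrained=True).eval()   # stand-in for PSPNet

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

im_path = './data/DUTS-TE/sun_abzsyxfgntlvd_im.jpg'   # hypothetical example image
img = Image.open(im_path).convert('RGB')

with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))['out'][0]     # (21, H, W) class logits
fg = (logits.argmax(0) != 0).to(torch.uint8) * 255             # crude foreground mask

# save next to the image with the suffix load_tuple expects (im[:-7] + '_seg.png')
Image.fromarray(fg.numpy(), mode='L').save(im_path[:-7] + '_seg.png')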
I understood seg.png as the raw mask from another segmentation neural network. So "High Quality Segmentation for Ultra High-resolution Images" is a post-processing step, right?
Yeah!
I ran test.py but hit this error: the "_seg.png" file is missing.
python test.py --dir ./data/DUTS-TE --model ./model_10000 --output ./output --clear
/home/hoang/anaconda3/envs/tranSOD/lib/python3.6/site-packages/torchvision/io/image.py:11: UserWarning: Failed to load image Python extension: /home/hoang/anaconda3/envs/tranSOD/lib/python3.6/site-packages/torchvision/image.so: undefined symbol: _ZNK2at10TensorBase21__dispatch_contiguousEN3c1012MemoryFormatE
warn(f"Failed to load image Python extension: {e}")
before_Parser_time: 1659253874.6776164
Hyperparameters: {'dir': './data/DUTS-TE', 'model': './model_10000', 'output': './output', 'global_only': False, 'L': 900, 'stride': 450, 'clear': True, 'ade': False}
ASPP_4level
12 images found
before_for_time: 1659253881.0989463 ; before_for_time - before_Parser_time: 6.421329975128174
Traceback (most recent call last):
File "test.py", line 106, in
for im, seg, gt, name, crm_data in progressbar.progressbar(val_loader):
File "/home/hoang/anaconda3/envs/tranSOD/lib/python3.6/site-packages/progressbar/shortcuts.py", line 10, in progressbar
for result in progressbar(iterator):
File "/home/hoang/anaconda3/envs/tranSOD/lib/python3.6/site-packages/progressbar/bar.py", line 547, in next
value = next(self._iterable)
File "/home/hoang/anaconda3/envs/tranSOD/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 521, in next
data = self._next_data()
File "/home/hoang/anaconda3/envs/tranSOD/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/hoang/anaconda3/envs/tranSOD/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/hoang/anaconda3/envs/tranSOD/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/hoang/Desktop/luanvan/implicitmodel/4cham/Entity/High-Quality-Segmention/dataset/offline_dataset_crm_pad32.py", line 138, in getitem
im, seg, gt = self.load_tuple(self.im_list[idx])
File "/home/hoang/Desktop/luanvan/implicitmodel/4cham/Entity/High-Quality-Segmention/dataset/offline_dataset_crm_pad32.py", line 110, in load_tuple
seg = Image.open(im[:-7]+'_seg.png').convert('L')
File "/home/hoang/anaconda3/envs/tranSOD/lib/python3.6/site-packages/PIL/Image.py", line 2975, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: './data/DUTS-TE/sun_abzsyxfgntlvd_seg.png'
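The path in the exception shows the layout load_tuple expects: the loader strips the last seven characters of each image path (presumably an "_im.jpg" suffix in this run) and opens <name>_seg.png next to it. A small hedged helper that copies already-computed coarse masks from a separate folder (a hypothetical location, with basenames matching the images) into that layout:

import glob, os, shutil

data_dir = './data/DUTS-TE'      # directory passed to test.py via --dir
mask_dir = './coarse_masks'      # hypothetical folder holding your coarse masks

for im_path in glob.glob(os.path.join(data_dir, '*_im.jpg')):
    prefix = im_path[:-7]                          # mirrors im[:-7] in load_tuple
    name = os.path.basename(prefix)                # e.g. "sun_abzsyxfgntlvd"
    src = os.path.join(mask_dir, name + '.png')    # your existing coarse mask
    if os.path.exists(src):
        shutil.copyfile(src, prefix + '_seg.png')  # the file the loader tries to open
    else:
        print('missing coarse mask for', name)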