Shearlets #40
base: master
Conversation
Let's make the shearlet addition opt-in rather than something you have to use. And before merging, we should fix the high-pass bug.
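A minimal sketch of the opt-in idea (the `use_shearlets` flag, class name, and helper are hypothetical, not from this PR):

```python
class LearnletAnalysisSketch:
    """Illustrative only: the shearlet branch is built only when requested,
    so existing users are unaffected by default."""

    def __init__(self, use_shearlets=False):
        self.use_shearlets = use_shearlets
        if self.use_shearlets:
            self._build_shearlet_tiling()

    def _build_shearlet_tiling(self):
        # would create the fixed shearlet convolutions here
        pass
```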
self.kernel_size,
activation='linear',
padding='same',
kernel_initializer='glorot_uniform',
use_bias=tiling_use_bias,
kernel_constraint=constraint,
trainable=False,
name=f'{tiling_prefix}_{str(K.get_uid(tiling_prefix))}',
Suggested change:
-name=f'{tiling_prefix}_{str(K.get_uid(tiling_prefix))}',
+name=f'{tiling_prefix}_fixed_{str(K.get_uid(tiling_prefix))}',
@kevinmicha I don't understand why you resolved this
It was a mistake. Now it's really solved.
-details_tiled = self.convs_detail_tiling[i_scale](wav_detail)
+details_tiled_fixed = self.convs_detail_tiling_fixed[i_scale](wav_detail)
+details_tiled_train = self.convs_detail_tiling_train[i_scale](wav_detail)
+details_tiled = Concatenate()([details_tiled_fixed, details_tiled_train])
Do not use the Concatenate layer here: it will create a layer at inference time, which we want to avoid, especially when running in eager mode. I think it's best to use tf.concat.
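A minimal sketch of the suggested replacement (the placeholder shapes are illustrative; the channel axis is the last one, which is also Concatenate's default):

```python
import tensorflow as tf

details_tiled_fixed = tf.zeros((1, 32, 32, 85))   # placeholder tensors for illustration
details_tiled_train = tf.zeros((1, 32, 32, 171))

# tf.concat is a plain op rather than a Layer, so no new layer object
# is created each time this line runs in eager mode
details_tiled = tf.concat([details_tiled_fixed, details_tiled_train], axis=-1)
```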
same for below
solved
@@ -2,12 +2,16 @@
 import os.path as op
 import time
 
+import cadmos_lib as cl
Let's make this import optional
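A common pattern for an optional dependency (a sketch, not the PR's actual code):

```python
try:
    import cadmos_lib as cl
except ImportError:
    cl = None  # shearlet features are unavailable without cadmos_lib
```

Call sites that need shearlets can then check `cl is None` and raise a clear error.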
done
inputs = [tf.zeros((1, 32, 32, 1)), tf.zeros((1, 1))]
model(inputs)

shearlets, _ = cl.get_shearlets(512, 512, 6)
Why 6?
Put the answer in a comment if it makes sense.
fixed
n_shearlets = np.shape(shearlets)[0]
total_size = np.shape(shearlets)[1]
half_filters = n_filters//2
Wait, I don't get it... Does it mean that you do not always consider all the shearlet filters? I think this is a mistake: you cannot just take the shearlet filters you want, you need them all. Surely there is a way to require fewer shearlet filters, though.
fixed
filter_size = analysis_kernel_size
crop_min = total_size//2 - filter_size//2
crop_max = total_size//2 + filter_size//2 + 1
resized_shearlets = np.zeros((5, filter_size, filter_size, 1, half_filters))
Surely here it's not 5 but n_scales or something similar. Same for below: the 6 and 11 shouldn't be hardcoded.
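A sketch of the de-hardcoded line (assuming `n_scales` is available in this scope, as suggested above):

```python
resized_shearlets = np.zeros((n_scales, filter_size, filter_size, 1, half_filters))
```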
fixed
changed verbose
Co-authored-by: Zaccharie Ramzi <[email protected]>
trainable param adapted to pep8
Co-authored-by: Zaccharie Ramzi <[email protected]>
There is indeed a bug in the shearlets part
 if self.mixing_details:
     mixing_prefix = 'details_mixing'
-    self.convs_detail_mixing = [
+    self.convs_detail_mixing_fixed = [
Suggested change:
-self.convs_detail_mixing_fixed = [
+self.convs_detail_mixing = [

I think this part doesn't need to be changed.
right
@@ -203,7 +223,7 @@ def get_config(self):
 
 class LearnletSynthesis(Layer):
     __name__ = 'learnlet_synthesis'
-    def __init__(self, normalize=True, n_scales=4, n_channels=1, synthesis_use_bias=False, synthesis_norm=False, res=False, kernel_size=5, wav_type='starlet'):
+    def __init__(self, normalize=True, n_scales=5, n_tiling=256, n_channels=1, synthesis_use_bias=False, synthesis_norm=False, res=True, kernel_size=13, wav_type='starlet', n_shearlets=85):
For backward compatibility's sake, let's not change the defaults you don't need to change.
Suggested change:
-def __init__(self, normalize=True, n_scales=5, n_tiling=256, n_channels=1, synthesis_use_bias=False, synthesis_norm=False, res=True, kernel_size=13, wav_type='starlet', n_shearlets=85):
+def __init__(self, normalize=True, n_scales=4, n_tiling=256, n_channels=1, synthesis_use_bias=False, synthesis_norm=False, res=False, kernel_size=5, wav_type='starlet', n_shearlets=85):
use_bias=synthesis_use_bias,
kernel_constraint=constraint,
trainable=False,
name=f'{groupping_prefix}_{str(K.get_uid(groupping_prefix))}',
Suggested change:
-name=f'{groupping_prefix}_{str(K.get_uid(groupping_prefix))}',
+name=f'{groupping_prefix}_fixed_{str(K.get_uid(groupping_prefix))}',
now it's fine
padding='same',
kernel_initializer='glorot_uniform',
use_bias=tiling_use_bias,
kernel_constraint=constraint,
This might be a bug: you don't want to use the unit norm constraint on shearlet filters, because they should stay as is.
This constraint might make them evolve during training.
Suggested change:
-kernel_constraint=constraint,
+kernel_constraint=None,
great!
I also removed the constraint for the synthesis shearlet filters
image = self.convs_groupping_fixed[i_scale](detail[...,0:self.n_shearlets]) + image
image = self.convs_groupping_train[i_scale](detail[...,self.n_shearlets:self.n_tiling]) + image
Suggested change:
-image = self.convs_groupping_fixed[i_scale](detail[...,0:self.n_shearlets]) + image
-image = self.convs_groupping_train[i_scale](detail[...,self.n_shearlets:self.n_tiling]) + image
+image = self.convs_groupping_fixed[i_scale](detail[..., :self.n_shearlets]) + image
+image = self.convs_groupping_train[i_scale](detail[..., self.n_shearlets:]) + image

We had mentioned this bug actually. The problem here is that the total number of details is n_shearlets + n_tiling. Let's do a quick example: imagine detail = [0, 1, 2, 3, 4, 5], n_shearlets = 2 and n_tiling = 4. Then we have:

details[0:n_shearlets] == details[0:2] == [0, 1]
details[n_shearlets:n_tiling] == details[2:4] == [2, 3]

so the last two details, [4, 5], are silently dropped, whereas details[n_shearlets:] == [2, 3, 4, 5] picks up all the trainable channels. This also means that you don't need to store n_tiling in this class.
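A runnable illustration of the slicing bug (plain NumPy; the names follow the example above):

```python
import numpy as np

detail = np.arange(6)         # stands in for the 6 detail channels
n_shearlets, n_tiling = 2, 4  # total channels = n_shearlets + n_tiling

print(detail[:n_shearlets])          # [0 1]     -> fixed shearlet channels
print(detail[n_shearlets:n_tiling])  # [2 3]     -> buggy slice: drops channels 4 and 5
print(detail[n_shearlets:])          # [2 3 4 5] -> the intended trainable channels
```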
this has been discussed
Now it's possible to add fixed shearlet filters using the alpha-transform package.