
Shearlets #40 (Open)

wants to merge 28 commits into base: master
Conversation

kevinmicha (Contributor):

Now it's possible to add fixed shearlet filters using the alpha-transform package.

@zaccharieramzi (Owner) left a comment:

Let's make the shearlet addition an option rather than something you're forced to use.

And before merging, we should fix the high-pass bug.

learning_wavelets/models/learnlet_layers.py (outdated)
self.kernel_size,
activation='linear',
padding='same',
kernel_initializer='glorot_uniform',
use_bias=tiling_use_bias,
kernel_constraint=constraint,
trainable=False,
name=f'{tiling_prefix}_{str(K.get_uid(tiling_prefix))}',
zaccharieramzi (Owner):

Suggested change:
- name=f'{tiling_prefix}_{str(K.get_uid(tiling_prefix))}',
+ name=f'{tiling_prefix}_fixed_{str(K.get_uid(tiling_prefix))}',

zaccharieramzi (Owner):

@kevinmicha I don't understand why you resolved this

kevinmicha (Contributor, Author):

It was a mistake. Now it's really solved.

details_tiled = self.convs_detail_tiling[i_scale](wav_detail)
details_tiled_fixed = self.convs_detail_tiling_fixed[i_scale](wav_detail)
details_tiled_train = self.convs_detail_tiling_train[i_scale](wav_detail)
details_tiled = Concatenate()([details_tiled_fixed, details_tiled_train])
zaccharieramzi (Owner):

Do not use the Concatenate layer here: it will create a layer at inference time, which we want to avoid, especially when running in eager mode.

I think it's best to use tf.concat
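As a sketch of the suggested fix (the attribute names are taken from the diff above, and the wrapping function is an assumption for illustration), `tf.concat` is a plain op, so no new `Layer` object is created on each forward pass:

```python
import tensorflow as tf

# Hedged sketch of the suggested fix: concatenate the fixed and trainable
# tiling outputs along the channel axis with the functional op tf.concat,
# instead of instantiating a Concatenate() layer inside call().
def tile_details(conv_fixed, conv_train, wav_detail):
    details_tiled_fixed = conv_fixed(wav_detail)
    details_tiled_train = conv_train(wav_detail)
    # tf.concat creates no layer object and no extra state
    return tf.concat([details_tiled_fixed, details_tiled_train], axis=-1)
```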

zaccharieramzi (Owner):

same for below

kevinmicha (Contributor, Author):

solved

@@ -2,12 +2,16 @@
import os.path as op
import time

import cadmos_lib as cl
zaccharieramzi (Owner):

Let's make this import optional
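A minimal sketch of one common way to make such a dependency optional (the guard function and its error message are assumptions for illustration, not the PR's actual code):

```python
# Hedged sketch: defer the hard failure until shearlets are actually requested,
# so users without cadmos_lib can still use the rest of the package.
try:
    import cadmos_lib as cl
except ImportError:
    cl = None

def get_shearlets_or_raise(*args, **kwargs):
    """Hypothetical guard: only fail when shearlet filters are needed."""
    if cl is None:
        raise ImportError(
            'cadmos_lib is required for shearlet filters; '
            'install it to use the shearlet option.'
        )
    return cl.get_shearlets(*args, **kwargs)
```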

kevinmicha (Contributor, Author):

done

learning_wavelets/training_scripts/learnlet_training.py (outdated)
inputs = [tf.zeros((1, 32, 32, 1)), tf.zeros((1, 1))]
model(inputs)

shearlets, _ = cl.get_shearlets(512, 512, 6)
zaccharieramzi (Owner):

Why 6?

zaccharieramzi (Owner):

Put the answer in a comment if it makes sense.

kevinmicha (Contributor, Author):

fixed


n_shearlets = np.shape(shearlets)[0]
total_size = np.shape(shearlets)[1]
half_filters = n_filters//2
zaccharieramzi (Owner):

Wait, I don't get it...
Does it mean that you do not always consider all the shearlet filters?
I think this is a mistake: you cannot just take the shearlet filters you want, you need them all.
Surely there is a way to require fewer shearlet filters, though.

kevinmicha (Contributor, Author):

fixed

filter_size = analysis_kernel_size
crop_min = total_size//2 - filter_size//2
crop_max = total_size//2 + filter_size//2 + 1
resized_shearlets = np.zeros((5, filter_size, filter_size, 1, half_filters))
zaccharieramzi (Owner):

Surely here it's not 5 but n_scales or something similar.
Same for below: the 6 and 11 shouldn't be hardcoded either.

kevinmicha (Contributor, Author):

fixed

@zaccharieramzi (Owner) left a comment:

There is indeed a bug in the shearlets part

if self.mixing_details:
mixing_prefix = 'details_mixing'
self.convs_detail_mixing = [
self.convs_detail_mixing_fixed = [
zaccharieramzi (Owner):

Suggested change:
- self.convs_detail_mixing_fixed = [
+ self.convs_detail_mixing = [

I think this part doesn't need to be changed

kevinmicha (Contributor, Author):

right

@@ -203,7 +223,7 @@ def get_config(self):

class LearnletSynthesis(Layer):
__name__ = 'learnlet_synthesis'
def __init__(self, normalize=True, n_scales=4, n_channels=1, synthesis_use_bias=False, synthesis_norm=False, res=False, kernel_size=5, wav_type='starlet'):
def __init__(self, normalize=True, n_scales=5, n_tiling=256, n_channels=1, synthesis_use_bias=False, synthesis_norm=False, res=True, kernel_size=13, wav_type='starlet', n_shearlets=85):
zaccharieramzi (Owner):

For backward compatibility's sake, let's not change the defaults you don't need to change.

Suggested change:
- def __init__(self, normalize=True, n_scales=5, n_tiling=256, n_channels=1, synthesis_use_bias=False, synthesis_norm=False, res=True, kernel_size=13, wav_type='starlet', n_shearlets=85):
+ def __init__(self, normalize=True, n_scales=4, n_tiling=256, n_channels=1, synthesis_use_bias=False, synthesis_norm=False, res=False, kernel_size=5, wav_type='starlet', n_shearlets=85):

use_bias=synthesis_use_bias,
kernel_constraint=constraint,
trainable=False,
name=f'{groupping_prefix}_{str(K.get_uid(groupping_prefix))}',
zaccharieramzi (Owner):

Suggested change:
- name=f'{groupping_prefix}_{str(K.get_uid(groupping_prefix))}',
+ name=f'{groupping_prefix}_fixed_{str(K.get_uid(groupping_prefix))}',

kevinmicha (Contributor, Author):

now it's fine

padding='same',
kernel_initializer='glorot_uniform',
use_bias=tiling_use_bias,
kernel_constraint=constraint,
zaccharieramzi (Owner):

This might be a bug: you don't want to use the unit-norm constraint on the shearlet filters, because they should stay as they are.

This constraint might make them evolve during training.

zaccharieramzi (Owner):

Suggested change:
- kernel_constraint=constraint,
+ kernel_constraint=None,
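A hedged sketch of what a fixed tiling convolution would look like after this suggestion (assuming `tf.keras.layers.Conv2D` as in the surrounding diff; the concrete `filters`/`kernel_size` values here are placeholders): `trainable=False` excludes the kernel from gradient updates, and `kernel_constraint=None` keeps Keras from re-projecting (e.g. unit-norm renormalizing) the shearlet weights after each step.

```python
import tensorflow as tf

# Hedged sketch: a fixed, unconstrained convolution holding shearlet filters.
fixed_conv = tf.keras.layers.Conv2D(
    filters=8,                # placeholder value
    kernel_size=13,           # placeholder value
    activation='linear',
    padding='same',
    use_bias=False,
    kernel_constraint=None,   # do not renormalize the fixed shearlet filters
    trainable=False,          # exclude the kernel from gradient updates
)
```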

kevinmicha (Contributor, Author):

great!

kevinmicha (Contributor, Author) — Jul 23, 2021:

I also removed the constraint for the synthesis shearlet filters

Comment on lines +289 to +290
image = self.convs_groupping_fixed[i_scale](detail[...,0:self.n_shearlets]) + image
image = self.convs_groupping_train[i_scale](detail[...,self.n_shearlets:self.n_tiling]) + image
zaccharieramzi (Owner):

Suggested change:
- image = self.convs_groupping_fixed[i_scale](detail[...,0:self.n_shearlets]) + image
- image = self.convs_groupping_train[i_scale](detail[...,self.n_shearlets:self.n_tiling]) + image
+ image = self.convs_groupping_fixed[i_scale](detail[..., :self.n_shearlets]) + image
+ image = self.convs_groupping_train[i_scale](detail[..., self.n_shearlets:]) + image

We had mentioned this bug actually.

Here the problem is that the total number of details is n_shearlets + n_tiling.
Let's do a quick example: imagine detail = [0, 1, 2, 3, 4, 5], n_shearlets = 2 and n_tiling = 4.
Then we have:

details[0:n_shearlets] == details[0:2] == [0, 1]
details[n_shearlets:n_tiling] == details[2:4] == [2, 3]

so the last channels ([4, 5]) are silently dropped, whereas details[n_shearlets:] == [2, 3, 4, 5] keeps them all.

This also means that you don't need to store n_tiling in this class.
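The worked example above can be reproduced on a plain list (the list stands in for the channel axis of `detail`):

```python
# Reproducing the slicing bug from the review comment above.
detail = [0, 1, 2, 3, 4, 5]   # n_shearlets + n_tiling channels in total
n_shearlets, n_tiling = 2, 4

fixed_part = detail[:n_shearlets]            # [0, 1] -- correct
buggy_train = detail[n_shearlets:n_tiling]   # [2, 3] -- drops [4, 5]
train_part = detail[n_shearlets:]            # [2, 3, 4, 5] -- all remaining
```

The open-ended slice needs no upper bound, which is why `n_tiling` no longer has to be stored on the synthesis class.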

kevinmicha (Contributor, Author):

this has been discussed
