final submission
thatblueboy committed Apr 29, 2024
1 parent 230bcf8 commit dfc5811
Showing 9 changed files with 13 additions and 1,452 deletions.
69 changes: 0 additions & 69 deletions =0.17.2

This file was deleted.

18 changes: 13 additions & 5 deletions README.md
@@ -7,29 +7,37 @@ This repository contains a re-implementation of https://arxiv.org/abs/2103.06255


The authors of _Involution: Inverting the Inherence of Convolution for Visual Recognition_ propose a novel involution layer, which aims to enhance the representational power of convolutional neural networks by inverting the inherent properties of the convolution operation. As such, these kernels are channel-agnostic and spatial-specific.
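To make "channel-agnostic and spatial-specific" concrete, here is a minimal sketch of an involution layer in PyTorch. This is an illustration of the idea, not the repository's implementation; the layer name, argument names, and the reduce-span kernel-generation structure are assumptions based on the paper.

```python
import torch
import torch.nn as nn

class Involution2d(nn.Module):
    """Minimal involution sketch: the kernel is generated from the input
    itself (spatial-specific) and shared across channels within a group
    (channel-agnostic)."""

    def __init__(self, channels, kernel_size=3, groups=1, reduction=4, stride=1):
        super().__init__()
        self.k, self.g, self.s = kernel_size, groups, stride
        # Kernel-generation function: a bottleneck of two 1x1 convolutions.
        self.reduce = nn.Conv2d(channels, channels // reduction, 1)
        self.span = nn.Conv2d(channels // reduction, kernel_size**2 * groups, 1)
        self.pool = nn.AvgPool2d(stride, stride) if stride > 1 else nn.Identity()
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2, stride=stride)

    def forward(self, x):
        b, c, h, w = x.shape
        h_out, w_out = h // self.s, w // self.s
        # One K*K kernel per output location and group, produced from x itself.
        kernel = self.span(self.reduce(self.pool(x)))               # B, K*K*G, H', W'
        kernel = kernel.view(b, self.g, 1, self.k**2, h_out, w_out)
        # Unfold K*K neighbourhoods; channels within a group share the kernel.
        patches = self.unfold(x).view(b, self.g, c // self.g, self.k**2, h_out, w_out)
        # Multiply-accumulate over the spatial neighbourhood dimension.
        return (kernel * patches).sum(dim=3).view(b, c, h_out, w_out)
```

Note the contrast with convolution: a convolution kernel is fixed across spatial locations but distinct per channel, whereas here the kernel varies per location and is shared across channels in a group.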

### Folders

```models``` folder contains the main backbone implementations of the models used, as well as classification heads and a Lightning class for easy training and logging

```slides``` contains presentation slides

```data``` contains the data module and custom dataset

### Training

```
git clone https://github.com/thatblueboy/involution.git # clone the repo
cd involution                                           # enter the repo
git checkout submission_branch                          # switch to the submission branch
```
Edit the ```train.py``` file in the main folder.

- Here you can change various hyperparameters in the config dict. Note that changing ```lr_scheduler``` will require a corresponding change in ```lr_scheduler_kwargs```.

- Note: We use a random split on Caltech256. For uniformity we store this split in ```data_module.pth``` and load it for every training run. This behaviour can be disabled by setting the ```'data_module_path'``` value in the config dict to ```None```.
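For illustration, the config dict might look something like the following. The key names and values here are assumptions for the sketch, not the repository's exact schema; check ```train.py``` for the real keys.

```python
# Hypothetical config dict for train.py; the actual keys in the repo may differ.
config = {
    "lr": 1e-3,
    "batch_size": 32,
    "max_epochs": 50,
    "lr_scheduler": "StepLR",               # changing this requires matching kwargs below
    "lr_scheduler_kwargs": {"step_size": 10, "gamma": 0.1},
    "data_module_path": "data_module.pth",  # set to None to draw a fresh random split
}
```

The point of ```data_module_path``` is reproducibility: loading the stored split means every run trains and validates on the same images.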


- To switch from training to testing mode, change the last line in ```train.py``` from
```
trainer.fit(model, data_module)
```
to
```
trainer.test(model, data_module)
```
- After making all the necessary changes, run:
```
wandb login
python train.py
```
Binary file removed images/channel_agnostic.PNG
Binary file not shown.
Binary file removed images/spatial_specific.gif
Binary file not shown.
