Pytorch code for CVPR 2021 paper: Learning Tensor Low-Rank Prior for Hyperspectral Image Reconstruction


DTLP

PyTorch code for the paper: Shipeng Zhang, Lizhi Wang, Lei Zhang, and Hua Huang, "Learning Tensor Low-Rank Prior for Hyperspectral Image Reconstruction," CVPR 2021. [Link]

Abstract

Snapshot hyperspectral imaging has been developed to capture the spectral information of dynamic scenes. In this paper, we propose a deep neural network by learning the tensor low-rank prior of hyperspectral images (HSI) in the feature domain to promote the reconstruction quality. Our method is inspired by the canonical-polyadic (CP) decomposition theory, where a low-rank tensor can be expressed as a weight summation of several rank-1 component tensors. Specifically, we first learn the tensor low-rank prior of the image features with two steps: (a) we generate rank-1 tensors with discriminative components to collect the contextual information from both spatial and channel dimensions of the image features; (b) we aggregate those rank-1 tensors into a low-rank tensor as a 3D attention map to exploit the global correlation and refine the image features. Then, we integrate the learned tensor low-rank prior into an iterative optimization algorithm to obtain an end-to-end HSI reconstruction. Experiments on both synthetic and real data demonstrate the superiority of our method.
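To make the CP idea above concrete, here is a minimal NumPy sketch (not the paper's implementation; shapes and rank are arbitrary illustrative choices) of expressing a low-rank tensor as a weighted sum of rank-1 component tensors:

```python
import numpy as np

# Illustrative shapes only: an H x W x C tensor built from R rank-1
# components a_r (x) b_r (x) c_r with weights w_r, i.e. the CP form.
rng = np.random.default_rng(0)
H, W, C, R = 8, 8, 4, 3

w = rng.random(R)        # component weights
A = rng.random((H, R))   # spatial factors (rows)
B = rng.random((W, R))   # spatial factors (columns)
Cf = rng.random((C, R))  # channel (spectral) factors

# Weighted sum of R rank-1 tensors: T = sum_r w_r * a_r (x) b_r (x) c_r
T = np.einsum('r,hr,wr,cr->hwc', w, A, B, Cf)

# By construction the CP rank is at most R; any matrix unfolding of T
# therefore has matrix rank at most R as well.
assert np.linalg.matrix_rank(T.reshape(H, W * C)) <= R
```

In the paper, the rank-1 components play the role of contextual features and the aggregated low-rank tensor acts as a 3D attention map; this sketch only shows the underlying CP construction.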

Data

In the paper, two benchmarks are used for training and testing; of these, the Harvard Dataset is used for reproduction. In addition, an extra experiment following TSA-Net is conducted on the CAVE Dataset and KAIST Dataset. To start, build HDF5 files of the same length and place them in the correct paths. The file structure is as follows:

```
--data/
  --Havard_train/
    --trainset_1.h5
    ...
    --trainset_n.h5
    --train_files.txt
    --validset_1.h5
    ...
    --validset_n.h5
    --valid_files.txt
  --Havard_test/
    --test1/
    ...
    --testn/
```

Brief descriptions of the datasets can be found in the README. Note that each test image is saved as several 2D images, one per spectral channel.
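The HDF5 files can be produced with `h5py`. Below is a minimal sketch; the dataset key `'data'`, the patch count, and the 48 × 48 × 31 patch shape are assumptions for illustration, so match them to whatever `utils.py` in this repo actually expects:

```python
import h5py
import numpy as np

# Hypothetical example: 100 patches of size 48 x 48 x 31 per file.
# The key 'data' is an assumption -- check the loader in utils.py.
patches = np.random.rand(100, 48, 48, 31).astype(np.float32)

with h5py.File('trainset_1.h5', 'w') as f:
    f.create_dataset('data', data=patches, compression='gzip')

# train_files.txt lists the HDF5 files, one filename per line.
with open('train_files.txt', 'w') as f:
    f.write('trainset_1.h5\n')
```

Each `trainset_*.h5` / `validset_*.h5` file should contain the same number of patches ("HDF5 files of the same length" above).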

Environment

Python 3.6.2
CUDA 10.0
Torch 1.7.0
OpenCV 4.5.4
h5py 3.1.0
TensorboardX 2.4
spectral 0.22.4

Usage

  1. Download this repository via git, or download the ZIP file manually.
git clone https://github.com/wang-lizhi/DTLP_Pytorch.git
  2. Download the pre-trained models if needed.
  3. Prepare the datasets and place them in the correct paths. Then adjust the settings in utils.py according to your data.
  4. Run main.py to train a model.
  5. Run test_for_paper.py and test_for_kaist.py to test models.

Results

1. Reproducing Results on Harvard Dataset

The results below were reproduced on the Harvard Dataset. In this stage, the mask is randomly generated for each batch, and the patch size is 48 × 48 × 31. Only the central 256 × 256 × 31 areas are compared in testing.

| Metric | Paper | Reproduced |
|--------|-------|------------|
| PSNR   | 32.43 | 32.22      |
| SSIM   | 0.941 | 0.936      |
| SAM    | 0.090 | 0.067      |
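For reference, a minimal sketch of the evaluation protocol described above (central crop plus PSNR/SAM); this is an illustrative re-implementation, not the repo's own metric code, and the SAM-in-radians convention is an assumption:

```python
import numpy as np

def center_crop(img, size=256):
    """Crop the central size x size spatial area of an H x W x C cube."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB, assuming intensities in [0, peak]."""
    mse = np.mean((ref - rec) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def sam(ref, rec, eps=1e-8):
    """Mean spectral angle (radians) over all pixels of two H x W x C cubes."""
    dot = np.sum(ref * rec, axis=-1)
    norm = np.linalg.norm(ref, axis=-1) * np.linalg.norm(rec, axis=-1)
    return np.mean(np.arccos(np.clip(dot / (norm + eps), -1.0, 1.0)))
```

Usage: crop both the ground truth and the reconstruction to the central 256 × 256 × 31 region with `center_crop`, then compare them with `psnr` and `sam`.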

2. Results of Extra-Experiments on CAVE&KAIST Datasets

For academic reference, we have added comparisons with recent methods on the CAVE Dataset and KAIST Dataset. The compared methods are TSA, DGSM, and DSSP, and our experimental setup is fully consistent with theirs. We also compare the use of different masks. In "Real-mask", a given real-valued mask in the range 0-1, provided by TSA, is used. In "Binary-mask", that real mask is rounded to a binary mask. During training, a 48 × 48 sub-mask is randomly cropped from the given real mask for each batch. Note that images of size 256 × 256 × 28, matching the given real mask, are used for comparison.

| Method | Mask        | PSNR  | SSIM  | SAM   |
|--------|-------------|-------|-------|-------|
| TSA    | Real-mask   | 31.46 | 0.894 | -     |
| DGSM   | Real-mask   | 32.63 | 0.917 | -     |
| DSSP   | Real-mask   | 32.39 | 0.971 | 0.177 |
| DSSP   | Binary-mask | 32.84 | 0.974 | 0.163 |
| DTLP   | Real-mask   | 33.88 | 0.926 | 0.099 |
| DTLP   | Binary-mask | 34.07 | 0.929 | 0.097 |
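The mask handling described above can be sketched as follows; the 256 × 256 mask here is randomly generated stand-in data, not the actual mask provided by TSA:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the given real-valued mask in [0, 1] (the actual one is
# provided by TSA); shape 256 x 256 matches the test images.
real_mask = rng.random((256, 256))

# "Binary-mask": round the real mask to {0, 1}.
binary_mask = np.round(real_mask)

def random_submask(mask, size=48, rng=rng):
    """Randomly crop a size x size sub-mask for one training batch."""
    top = rng.integers(0, mask.shape[0] - size + 1)
    left = rng.integers(0, mask.shape[1] - size + 1)
    return mask[top:top + size, left:left + size]

sub = random_submask(real_mask)
```

A fresh sub-mask is drawn for every batch during training, while the full 256 × 256 mask is used at test time.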

Citation

@inproceedings{DTLP,
  title={Learning Tensor Low-Rank Prior for Hyperspectral Image Reconstruction},
  author={Zhang, Shipeng and Wang, Lizhi and Zhang, Lei and Huang, Hua},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={12006--12015},
  year={2021}
}
