diff --git a/MANIFEST.in b/MANIFEST.in new file mode 100644 index 0000000..104f047 --- /dev/null +++ b/MANIFEST.in @@ -0,0 +1 @@ +recursive-include spas *.txt diff --git a/README.md b/README.md index 000fb16..1be1f4b 100644 --- a/README.md +++ b/README.md @@ -1,26 +1,31 @@ -# Single-Pixel Acquisition Software (SPAS) - Python version +# Single-Pixel Acquisition Software (SPAS) -A python toolbox for acquisition of images based on the single-pixel framework. -It has been tested using a Digital Light Processor [DLP7000](https://www.vialux.de/en/hi-speed-v-modules.html) from ViALUX GmbH and a Spectrometer [AvaSpec-ULS2048CL-EVO](https://www.avantes.com/products/spectrometers/starline/avaspec-uls2048cl-evo/) from Avantes, but may as well work for similar equipment with a few minor changes. +SPAS is a Python package designed for single-pixel acquisition. -## Installation (Windows only) +SPAS has been tested for controlling a [DLP7000](https://www.vialux.de/en/hi-speed-v-modules.html) Spatial Light Modulator and an [AvaSpec-ULS2048CL-EVO](https://www.avantes.com/products/spectrometers/starline/avaspec-uls2048cl-evo/) spectrometer. It should also work for similar equipment with a few changes. -1. Create a new environment (tested under conda) +SPAS is a companion package to the [SPyRiT](https://github.com/openspyrit/spyrit) package. -```powershell -conda create --name my_spas_env -conda activate my_spas_env -conda install -c anaconda pip -``` -2. Install the [SPyRiT](https://github.com/openspyrit/spyrit) package (tested with version 1.0.0). Typically +# Installation + +## For users +The SPAS package can be installed on Linux, macOS and Windows.
```powershell -pip install requests torch==1.8.0+cpu torchvision==0.9.0+cpu -f https://download.pytorch.org/whl/torch_stable.html -pip install spyrit==1.0.0 +pip install git+https://github.com/openspyrit/spas.git +``` + +Check your installation: +``` python +from spas import read_metadata ``` +This function reads the metadata of an existing acquisition (e.g., available on [SPIHIM](https://pilot-warehouse.creatis.insa-lyon.fr/)). -2. Clone the SPAS repository +## For developers +The SPAS package can be installed on Linux, macOS and Windows. However, it will be fully functional on Windows only, due to the DLL dependencies required for hardware control. + +* Clone the SPAS repository ```powershell git clone git@github.com:openspyrit/spas.git @@ -33,16 +38,15 @@ pip install -r requirements.txt pip install -e . ``` -3. Add DLLs +* Add DLLs (optional, for instrumentation control only) -The following dynamic-link libraries (DLLs) are required + The following dynamic-link libraries (DLLs) are required to control our instrumentation -* `avaspecx64.dll` provided by your Avantes distributor -* `alpV42.dll` available [here](https://www.vialux.de/en/hi-speed-download.html) by installing the entire ALP4 library + * `avaspecx64.dll`, provided by your Avantes distributor + * `alpV42.dll`, available [here](https://www.vialux.de/en/hi-speed-download.html) by installing the entire ALP4 library -They should be placed inside the `lib` folder -4. The typical directory structure is +* The DLLs should be placed inside the `lib` folder. The typical directory structure is ``` ├───lib @@ -62,12 +66,30 @@ │ ├───Cov_64x64.npy ``` +# API Documentation +https://spas.readthedocs.io/ + +# Contributors (alphabetical order) +* Thomas Baudier +* Guilherme Beneti-Martin +* [Nicolas Ducros](https://www.creatis.insa-lyon.fr/~ducros/WebPage/index.html) +* Laurent Mahieu Williame + +# How to cite?
+When using SPAS in scientific publications, please cite the following paper: + +* G. Beneti-Martin, L. Mahieu-Williame, T. Baudier, N. Ducros, "OpenSpyrit: an Ecosystem for Reproducible Single-Pixel Hyperspectral Imaging," Optics Express, Vol. 31, No. 10, 2023. https://doi.org/10.1364/OE.483937 + +# License +This project is licensed under the LGPL-3.0 license - see the [LICENSE.md](LICENSE.md) file for details. + +# Getting started ## Preparation (just once) ### 1. Creating Walsh-Hadamard patterns Run in Python: ``` python -from spas import walsh_patterns +from spas.generate import walsh_patterns walsh_patterns(save_data=True) ``` By default the patterns are 1024x768 PNG images saved in `./Walsh_64_64/`. diff --git a/license.md b/license.md new file mode 100644 index 0000000..65c5ca8 --- /dev/null +++ b/license.md @@ -0,0 +1,165 @@ + GNU LESSER GENERAL PUBLIC LICENSE + Version 3, 29 June 2007 + + Copyright (C) 2007 Free Software Foundation, Inc. + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + + This version of the GNU Lesser General Public License incorporates +the terms and conditions of version 3 of the GNU General Public +License, supplemented by the additional permissions listed below. + + 0. Additional Definitions. + + As used herein, "this License" refers to version 3 of the GNU Lesser +General Public License, and the "GNU GPL" refers to version 3 of the GNU +General Public License. + + "The Library" refers to a covered work governed by this License, +other than an Application or a Combined Work as defined below. + + An "Application" is any work that makes use of an interface provided +by the Library, but which is not otherwise based on the Library. +Defining a subclass of a class defined by the Library is deemed a mode +of using an interface provided by the Library. + + A "Combined Work" is a work produced by combining or linking an +Application with the Library.
The particular version of the Library +with which the Combined Work was made is also called the "Linked +Version". + + The "Minimal Corresponding Source" for a Combined Work means the +Corresponding Source for the Combined Work, excluding any source code +for portions of the Combined Work that, considered in isolation, are +based on the Application, and not on the Linked Version. + + The "Corresponding Application Code" for a Combined Work means the +object code and/or source code for the Application, including any data +and utility programs needed for reproducing the Combined Work from the +Application, but excluding the System Libraries of the Combined Work. + + 1. Exception to Section 3 of the GNU GPL. + + You may convey a covered work under sections 3 and 4 of this License +without being bound by section 3 of the GNU GPL. + + 2. Conveying Modified Versions. + + If you modify a copy of the Library, and, in your modifications, a +facility refers to a function or data to be supplied by an Application +that uses the facility (other than as an argument passed when the +facility is invoked), then you may convey a copy of the modified +version: + + a) under this License, provided that you make a good faith effort to + ensure that, in the event an Application does not supply the + function or data, the facility still operates, and performs + whatever part of its purpose remains meaningful, or + + b) under the GNU GPL, with none of the additional permissions of + this License applicable to that copy. + + 3. Object Code Incorporating Material from Library Header Files. + + The object code form of an Application may incorporate material from +a header file that is part of the Library. 
You may convey such object +code under terms of your choice, provided that, if the incorporated +material is not limited to numerical parameters, data structure +layouts and accessors, or small macros, inline functions and templates +(ten or fewer lines in length), you do both of the following: + + a) Give prominent notice with each copy of the object code that the + Library is used in it and that the Library and its use are + covered by this License. + + b) Accompany the object code with a copy of the GNU GPL and this license + document. + + 4. Combined Works. + + You may convey a Combined Work under terms of your choice that, +taken together, effectively do not restrict modification of the +portions of the Library contained in the Combined Work and reverse +engineering for debugging such modifications, if you also do each of +the following: + + a) Give prominent notice with each copy of the Combined Work that + the Library is used in it and that the Library and its use are + covered by this License. + + b) Accompany the Combined Work with a copy of the GNU GPL and this license + document. + + c) For a Combined Work that displays copyright notices during + execution, include the copyright notice for the Library among + these notices, as well as a reference directing the user to the + copies of the GNU GPL and this license document. + + d) Do one of the following: + + 0) Convey the Minimal Corresponding Source under the terms of this + License, and the Corresponding Application Code in a form + suitable for, and under terms that permit, the user to + recombine or relink the Application with a modified version of + the Linked Version to produce a modified Combined Work, in the + manner specified by section 6 of the GNU GPL for conveying + Corresponding Source. + + 1) Use a suitable shared library mechanism for linking with the + Library. 
A suitable mechanism is one that (a) uses at run time + a copy of the Library already present on the user's computer + system, and (b) will operate properly with a modified version + of the Library that is interface-compatible with the Linked + Version. + + e) Provide Installation Information, but only if you would otherwise + be required to provide such information under section 6 of the + GNU GPL, and only to the extent that such information is + necessary to install and execute a modified version of the + Combined Work produced by recombining or relinking the + Application with a modified version of the Linked Version. (If + you use option 4d0, the Installation Information must accompany + the Minimal Corresponding Source and Corresponding Application + Code. If you use option 4d1, you must provide the Installation + Information in the manner specified by section 6 of the GNU GPL + for conveying Corresponding Source.) + + 5. Combined Libraries. + + You may place library facilities that are a work based on the +Library side by side in a single library together with other library +facilities that are not Applications and are not covered by this +License, and convey such a combined library under terms of your +choice, if you do both of the following: + + a) Accompany the combined library with a copy of the same work based + on the Library, uncombined with any other library facilities, + conveyed under the terms of this License. + + b) Give prominent notice with the combined library that part of it + is a work based on the Library, and explaining where to find the + accompanying uncombined form of the same work. + + 6. Revised Versions of the GNU Lesser General Public License. + + The Free Software Foundation may publish revised and/or new versions +of the GNU Lesser General Public License from time to time. Such new +versions will be similar in spirit to the present version, but may +differ in detail to address new problems or concerns. 
+ + Each version is given a distinguishing version number. If the +Library as you received it specifies that a certain numbered version +of the GNU Lesser General Public License "or any later version" +applies to it, you have the option of following the terms and +conditions either of that published version or of any later version +published by the Free Software Foundation. If the Library as you +received it does not specify a version number of the GNU Lesser +General Public License, you may choose any version of the GNU Lesser +General Public License ever published by the Free Software Foundation. + + If the Library as you received it specifies that a proxy can decide +whether future versions of the GNU Lesser General Public License shall +apply, that proxy's public statement of acceptance of any version is +permanent authorization for you to choose that version for the +Library. diff --git a/scripts/acquisition-recon-multiprocessing.py b/scripts/acquisition-recon-multiprocessing.py deleted file mode 100644 index 8aaf3f1..0000000 --- a/scripts/acquisition-recon-multiprocessing.py +++ /dev/null @@ -1,83 +0,0 @@ -""" -Example of an acquisition of 1/4 of the Hadamard patterns and then performs a -reconstruction using 1/4 of the patterns a DenoiCompNet model and a noise model -in "real-time", using multiprocessing. 
-""" -import spyrit.misc.walsh_hadamard as wh -from spas import * - -if __name__ == '__main__': - -#%% Init - spectrometer, DMD, DMD_initial_memory = init() - - #%% Setup - metadata = MetaData( - output_directory='../data/.../', - pattern_order_source='../communication/...', - pattern_source='../Patterns/.../', - pattern_prefix='Hadamard_64x64', - experiment_name='...', - light_source='...', - object='...', - filter='...', - description='...') - - acquisition_parameters = AcquisitionParameters( - pattern_compression=0.25, - pattern_dimension_x=64, - pattern_dimension_y=64) - - spectrometer_params, DMD_params = setup( - spectrometer=spectrometer, - DMD=DMD, - DMD_initial_memory=DMD_initial_memory, - metadata=metadata, - acquisition_params=acquisition_parameters, - integration_time=1.0) - - network_params = { - 'img_size': 64, - 'CR': 1024, - 'net_arch': 0, - 'denoise': True, - 'epochs': 20, - 'learning_rate': 1e-3, - 'step_size': 10, - 'gamma': 0.5, - 'batch_size': 256, - 'regularization': 1e-7, - 'N0': 2500, - 'sig': 0.5 - } - - cov_path = './...' - mean_path = './...' - H = wh.walsh2_matrix(64)/64 - model_root = './...' 
- - model, device = setup_reconstruction(cov_path, mean_path, H, model_root, network_params) - noise = load_noise('../noise-calibration/fit_model.npz') - - reconstruction_params = { - 'model': model, - 'device': device, - 'batches': 1, - 'noise': noise, - } - #%% Acquire - - spectral_data = acquire( - ava=spectrometer, - DMD=DMD, - metadata=metadata, - spectrometer_params=spectrometer_params, - DMD_params=DMD_params, - acquisition_params=acquisition_parameters, - repetitions=3, - verbose=True, - reconstruct=True, - reconstruction_params=reconstruction_params) - - #%% Disconnect - disconnect(spectrometer, DMD) \ No newline at end of file diff --git a/scripts/acquisition-recon-sequential.py b/scripts/acquisition-recon-sequential.py deleted file mode 100644 index 90584df..0000000 --- a/scripts/acquisition-recon-sequential.py +++ /dev/null @@ -1,145 +0,0 @@ -""" -Example of an acquisition followed by a reconstruction using 100 % of the -Hadamard patterns and then, a reconstruction using 1/4 of the patterns -(subsampled) with a DenoiCompNet model and a noise model. Reconstructions are -performed after the acquisition and not in "real-time". 
-""" - -from spas import * -import os -import numpy as np -import spyrit.misc.walsh_hadamard as wh -import spas.transfer_data_to_girder as transf -from spas import plot_spec_to_rgb_image as plt_rgb -from matplotlib import pyplot as plt - -#%% Init -spectrometer, DMD, DMD_initial_memory = init() - -#%% Setup acquisition and send pattern to the DMD -setup_version = 'setup_v1.2' -data_folder_name = '2021-12-15_test_3ieme' -data_name = 'tomato_slice' - -if not os.path.exists('../data/' + data_folder_name): - os.makedirs('../data/' + data_folder_name) - -subfolder_path = '../data/' + data_folder_name + '/' + data_name -overview_path = subfolder_path + '/overview' -if not os.path.exists(overview_path): - os.makedirs(overview_path) - -data_path = subfolder_path + '/' + data_name -had_reco_path = data_path + '_had_reco.npz' -nn_reco_path = data_path + '_nn_reco.npz' -fig_had_reco_path = overview_path + '/' + 'HAD_RECO_' + data_name -fig_nn_reco_path = overview_path + '/' + 'NN_RECO_' + data_name - -metadata = MetaData( - output_directory=subfolder_path, - pattern_order_source='../stats/pattern_order.npz',#'../communication/raster.txt',# - pattern_source='../Patterns/PosNeg/DMD_Walsh_64x64',#'../Patterns/RasterScan_64x64',# - pattern_prefix='Walsh_64x64',#'RasterScan_64x64_1',# - experiment_name=data_name, - light_source='White LED light',#'Nothing',#'HgAr multilines Source (HG-1 Oceanoptics)',# - object='nothing',#'Nothing',#'USAF',#'Star Sector',#'Nothing' - filter='Diffuser',#' linear colored filter + OD#0',#'Nothing',#'OD#0',# - description='system description : DMD_120mm_f50_10mm_MOx20_0mm_redFiber') - -acquisition_parameters = AcquisitionParameters( - pattern_compression=1.0, - pattern_dimension_x=64, - pattern_dimension_y=64) - -spectrometer_params, DMD_params = setup( - spectrometer=spectrometer, - DMD=DMD, - DMD_initial_memory=DMD_initial_memory, - metadata=metadata, - acquisition_params=acquisition_parameters, - integration_time=1.0,) - -#%% Setup reconstruction - 
-network_params = ReconstructionParameters( - img_size=64, - CR=1024, - denoise=True, - epochs=40, - learning_rate=1e-3, - step_size=20, - gamma=0.2, - batch_size=256, - regularization=1e-7, - N0=50.0, - sig=0.0, - arch_name='c0mp',) - -cov_path = '../stats/new-nicolas/Cov_64x64.npy' -mean_path = '../stats/new-nicolas/Average_64x64.npy' -model_root = '../models/new-nicolas/' -H = wh.walsh2_matrix(64)/64 - -model, device = setup_reconstruction(cov_path, mean_path, H, model_root, network_params) -noise = load_noise('../noise-calibration/fit_model2.npz') - -reconstruction_params = { - 'model': model, - 'device': device, - 'batches': 1, - 'noise': noise, -} - -#%% Acquire -spectral_data = acquire( - ava=spectrometer, - DMD=DMD, - metadata=metadata, - spectrometer_params=spectrometer_params, - DMD_params=DMD_params, - acquisition_params=acquisition_parameters, - repetitions=1, - reconstruct=False) - -#%% Reconstruction without NN -Q = wh.walsh2_matrix(64) - -GT = reconstruction_hadamard(acquisition_parameters.patterns, 'walsh', Q, spectral_data) - -F_bin, wavelengths_bin, bin_width = spectral_binning(GT.T, acquisition_parameters.wavelengths, 530, 730, 8) -F_bin_rot = np.rot90(F_bin, axes=(1,2)) -F_bin_flip = F_bin_rot[:,::-1,:] - - -F_bin_1px, wavelengths_bin, bin_width = spectral_slicing(GT.T, acquisition_parameters.wavelengths, 530, 730, 8) - - -plot_color(F_bin_flip, wavelengths_bin) -plot_color(F_bin_1px, wavelengths_bin) - - -#%% RGB view -image_arr = plt_rgb.plot_spec_to_rgb_image(GT, acquisition_parameters.wavelengths) -plt.imshow(image_arr) - -#%% Reconstruct with NN -F_bin, wavelengths_bin, bin_width, noise_bin = spectral_binning(spectral_data.T, acquisition_parameters.wavelengths, 530, 730, 8, noise) -recon = reconstruct(model, device, F_bin[:,0:8192//4], 1, noise_bin) -plot_color(recon, wavelengths_bin) -plt.show() - - -plt.imshow(np.sum(recon, axis=0)) -plt.title('NN reco, sum of all wavelengths') -plt.show() - -F_bin, wavelengths_bin, bin_width, noise_bin = 
spectral_slicing(spectral_data.T, acquisition_parameters.wavelengths, 514, 751, 8, noise) -recon2 = reconstruct(model, device, F_bin[:,0:8192//4], 4, noise_bin) -plot_color(recon2, wavelengths_bin) - -#%% transfer data to girder -transf.transfer_data(metadata, acquisition_parameters, spectrometer_params, DMD_params, - setup_version, data_folder_name, data_name) - -#%% Disconnect -disconnect(spectrometer, DMD) \ No newline at end of file diff --git a/scripts/acquisiton_script.py b/scripts/acquisiton_script.py deleted file mode 100644 index 8ccab2b..0000000 --- a/scripts/acquisiton_script.py +++ /dev/null @@ -1,43 +0,0 @@ -from spas import * - -#%% Init -spectrometer, DMD, DMD_initial_memory = init() # just once, two consecutively returns an error - -#%% Setup -metadata = MetaData( - output_directory='../meas/', - pattern_order_source='../stats_download/Cov_64x64.npy', # covariance matrix or - pattern_source='../Walsh_64x64/', - pattern_prefix='Walsh_64x64', - experiment_name='my_first_measurement', - light_source='white_lamp', - object='no_object', - filter='no_filter', - description='my_first_description') - -acquisition_parameters = AcquisitionParameters( - pattern_compression=1.0, - pattern_dimension_x=64, - pattern_dimension_y=64) - -spectrometer_params, DMD_params = setup( - spectrometer=spectrometer, - DMD=DMD, - DMD_initial_memory=DMD_initial_memory, - metadata=metadata, - acquisition_params=acquisition_parameters, - integration_time=1.0) - -#%% Acquire -spectral_data = acquire( - ava=spectrometer, - DMD=DMD, - metadata=metadata, - spectrometer_params=spectrometer_params, - DMD_params=DMD_params, - acquisition_params=acquisition_parameters, - repetitions=1, - reconstruct=False) - -#%% Disconnect -disconnect(spectrometer, DMD) \ No newline at end of file diff --git a/scripts/generate_patterns_and_order_pattern.py b/scripts/generate_patterns_and_order_pattern.py new file mode 100644 index 0000000..768b2cb --- /dev/null +++ 
b/scripts/generate_patterns_and_order_pattern.py @@ -0,0 +1,155 @@ +# -*- coding: utf-8 -*- +""" +Created on Tue Sep 27 09:49:52 2022 + +@author: mahieu +""" + +############################################################################### +# Program to generate patterns and their acquisition order. +# A covariance matrix is required to generate Walsh patterns. +############################################################################### + +from spas.generate import walsh_patterns, raster_patterns # raster_patterns is needed for the 'Raster' scan mode +from spas.generate import generate_hadamard_order +import numpy as np +import os +import time + +################################## INPUT ###################################### +Np_tab = [8]#[32, 64, 128] # Number of pixels in one dimension of the image (image: NpxNp) +scan_mode_tab = ['Walsh']#, 'Raster'] +zoom = [] # leave empty to generate all possible zoom levels, otherwise specify one or more zoom values +DMD_minor_size = 768 # minor size of the DMD +spas_path = 'C:/openspyrit/spas/' +######################## CREATE ALL THE POSSIBLE ZOOM ######################### +stop = 0 +if len(zoom) == 0: + if max(Np_tab) > DMD_minor_size: + print('Error: the image size cannot be larger than the minor size of the DMD') + print('max(Np) <= DMD_minor_size') + print('program stop') + stop = 1 + else: + zoom_vector = [1, 2, 3, 4] + z = 3 + while True: + z = z*2 + zoom_vector.append(z) + if z >= DMD_minor_size: + break +else: + zoom_vector = zoom +################################## BEGIN ###################################### +t0 = time.time() +if stop == 0: + for scan_mode in scan_mode_tab: + for Np in Np_tab: + max_zoom = DMD_minor_size//Np + zoom_tab = zoom_vector[:zoom_vector.index(DMD_minor_size//Np)+1] + for zoom in zoom_tab: + t00 = time.time() + print('zoom = ' + str(zoom) + ' || Np = ' + str(Np) + ' || scan mode : ' + scan_mode) + ############################# PATH ################################ + pattern_order_source = spas_path + 'stats/pattern_order_' + scan_mode + '_' + str(Np) + 'x' + 
str(Np) + '.npz' + pattern_source = spas_path + 'Patterns_test/Zoom_x' + str(zoom) + '/' + scan_mode + '_' + str(Np) + 'x' + str(Np) + pattern_prefix = scan_mode + '_' + str(Np) + 'x' + str(Np) + ########################## CREATE PATH ############################ + if not os.path.isdir(pattern_source): + if not os.path.isdir(spas_path + 'Patterns_test/Zoom_x' + str(zoom)): + os.mkdir(spas_path + 'Patterns_test/Zoom_x' + str(zoom)) + os.mkdir(pattern_source) + else: + os.mkdir(pattern_source) + ##################### generate patterns ####################### + if scan_mode == 'Walsh': + walsh_patterns(N = Np, save_data = True, path = pattern_source + '/', N_DMD = DMD_minor_size//zoom) + elif scan_mode == 'Raster': + raster_patterns(N = Np, save_data = True, path = pattern_source + '/', N_DMD = DMD_minor_size//zoom) + #################### delay of the loop ######################## + hours, rem = divmod(time.time()-t00, 3600) + minutes, seconds = divmod(rem, 60) + print(" delay : {:0>2}h{:0>2}m{:0>2}s".format(int(hours),int(minutes),int(seconds))) + ###################### elapsed time ########################### + hours, rem = divmod(time.time()-t0, 3600) + minutes, seconds = divmod(rem, 60) + print("elapsed time : {:0>2}h{:0>2}m{:0>2}s".format(int(hours),int(minutes),int(seconds))) + ####################### generate pattern order #################### + if scan_mode == 'Walsh': + cov_path = spas_path + 'stats/Cov_' + str(Np) + 'x' + str(Np) + '.npy' + generate_hadamard_order(N = Np, name = 'pattern_order_' + scan_mode + '_' + str(Np) + 'x' + str(Np), cov_path = cov_path, pos_neg = True) + elif scan_mode == 'Raster': + pattern_order = np.arange(0, Np**2, dtype=np.uint16) + np.savez(pattern_order_source[:-4], pattern_order = pattern_order, pos_neg = False) # np.savez appends the '.npz' extension +elapsed = time.time() - t0 +print('FINISHED') +################################### END ####################################### + + +# #%% generate inverted patterns and their order from 
existed files +# from PIL import Image, ImageOps +# from matplotlib import pyplot as plt +# import shutil +# import os + +# ################################## INPUT ###################################### +# Np = 32 # Number of pixels in one dimension of the image (image: NpxNp) +# scan_mode_orig = 'Walsh' +# zoom = 1 +# ############################# PATH ################################ +# scan_mode = scan_mode_orig + '_inv' +# spas_path = 'C:/openspyrit/spas/' +# pattern_order_source_orig = spas_path + 'stats/pattern_order_' + scan_mode_orig + '_' + str(Np) + 'x' + str(Np) + '.npz' +# pattern_source_orig = spas_path + 'Patterns/Zoom_x' + str(zoom) + '/' + scan_mode_orig + '_' + str(Np) + 'x' + str(Np) +# pattern_prefix_orig = scan_mode_orig + '_' + str(Np) + 'x' + str(Np) +# pattern_order_source = spas_path + 'stats/pattern_order_' + scan_mode + '_' + str(Np) + 'x' + str(Np) + '.npz' +# pattern_source = spas_path + 'Patterns/Zoom_x' + str(zoom) + '/' + scan_mode + '_' + str(Np) + 'x' + str(Np) +# pattern_prefix = scan_mode + '_' + str(Np) + 'x' + str(Np) +# ########################## CREATE PATH ############################ + +# if os.path.isdir(pattern_source) == False: +# if os.path.isdir(spas_path + 'Patterns/Zoom_x' + str(zoom)) == False: +# os.mkdir(spas_path + 'Patterns/Zoom_x' + str(zoom)) +# os.mkdir(pattern_source) +# else: +# os.mkdir(pattern_source) +# ######################## load fig and inverted it ##################### +# if scan_mode_orig == 'Walsh': +# fac = 2 +# else: +# fac = 1 + +# for ind in range(Np**2*fac): +# im = Image.open(pattern_source_orig + '/' + pattern_prefix_orig + '_'+str(ind) + '.png') + +# # plt.figure() +# # plt.imshow(im) +# # plt.colorbar() + +# im_invert = ImageOps.invert(im) + +# # plt.figure() +# # plt.imshow(im_invert) +# # plt.colorbar() + +# im_invert.save(pattern_source + '/' + pattern_prefix + '_'+str(ind) + '.png', quality=95) + +# ####################### copy pattern order #################### +# 
shutil.copyfile(pattern_order_source_orig, pattern_order_source) + + + + + + + + + + + + + + + + + diff --git a/scripts/acquisition-recon-sequential-girder.py b/scripts/main_SPC2D_1arm.py similarity index 98% rename from scripts/acquisition-recon-sequential-girder.py rename to scripts/main_SPC2D_1arm.py index c84a3aa..1fd8d85 100644 --- a/scripts/acquisition-recon-sequential-girder.py +++ b/scripts/main_SPC2D_1arm.py @@ -16,7 +16,7 @@ import ctypes as ct import ALP4 #%% Init -spectrometer, DMD, DMD_initial_memory = init() +spectrometer, DMD, DMD_initial_memory = init(dmd_lib_version = '4.2') #%% Setup acquisition and send pattern to the DMD setup_version = 'setup_v1.3' data_folder_name = '2022-06-17_test_oldProg' diff --git a/scripts/main_SPC2D_2arms.py b/scripts/main_SPC2D_2arms.py new file mode 100644 index 0000000..de6a97f --- /dev/null +++ b/scripts/main_SPC2D_2arms.py @@ -0,0 +1,195 @@ +""" +Example of an acquisition followed by a reconstruction using 100 % of the +Hadamard patterns and then, a reconstruction using 1/4 of the patterns +(subsampled) with a DenoiCompNet model and a noise model. Reconstructions are +performed after the acquisition and not in "real-time". 
+""" + +from spas.acquisition_SPC2D import init_2arms, setup_cam, AcquisitionParameters, setup_2arms, setup, acquire, acquire_2arms, snapshot, disconnect_2arms, captureVid, displaySpectro, setup_tuneSpectro, change_patterns +from spas.metadata_SPC2D import MetaData, func_path, save_metadata_2arms +from spas.reconstruction import reconstruction_hadamard +from spas.reconstruction_nn import ReconstructionParameters, setup_reconstruction, reorder_subsample +from spas.noise import load_noise +from spas.visualization import snapshotVisu, displayVid, plot_reco_without_NN, plot_reco_with_NN, extract_ROI_coord +from spas.transfer_data_to_girder import transfer_data_2arms +import spyrit.misc.walsh_hadamard as wh +from spas import reconstruct +import time +from pathlib import Path +import numpy as np +#%% Initialize hardware +spectrometer, DMD, DMD_initial_memory, camPar = init_2arms(dmd_lib_version = '4.2') # possible version : '4.1', '4.2' or '4.3' +#%% Define the AOI of the camera +# Warning, not all values are allowed for Width and Height (max: 2076x3088 | ex: 768x544) +camPar.rectAOI.s32X.value = 1100# // X +camPar.rectAOI.s32Y.value = 640# // Y +camPar.rectAOI.s32Width.value = 768#1544#3088#1544 # 3000#3088# // Width must be multiple of 8 +camPar.rectAOI.s32Height.value = 544#1038#2076#1730#2000## 1038# 2076 // Height + +camPar = captureVid(camPar) +#%% Set Camera Parameters +# It is advice to execute this cell twice to take into account the parameter changement +camPar = setup_cam(camPar, + pixelClock = 474, # Allowed values : [118, 237, 474] (MHz) + fps = 220, # FrameRate boundary : [1 - No value(depend of the AOI size)] + Gain = 0, # Gain boundary : [0 100] + gain_boost = 'OFF', # set1"ON"to activate gain boost, "OFF" to deactivate + nGamma = 1, # Gamma boundary : [1 - 2.2] + ExposureTime = .65,# Exposure Time (ms) boudary : [0.013 - 56.221] + black_level = 4) # lack Level boundary : [0 255] + +snapshotVisu(camPar) +#%% Display video in continuous mode for optical 
tuning +displayVid(camPar) +#%% Display the spectrum in continuous mode for optical tuning +pattern_to_display = 'white' # or 'gray', 'black' +ti = 1 # Integration time of the spectrometer +zoom = 1 # Numerical zoom applied in the DMD + +metadata, spectrometer_params, DMD_params, acquisition_parameters = setup_tuneSpectro(spectrometer = spectrometer, DMD = DMD, DMD_initial_memory = DMD_initial_memory, + pattern_to_display = pattern_to_display, ti = ti, zoom = zoom, xw_offset = 128, yh_offset = 0) +displaySpectro(ava = spectrometer, DMD = DMD, metadata = metadata, spectrometer_params = spectrometer_params, DMD_params = DMD_params, acquisition_params = acquisition_parameters) +#%% Setup acquisition and send pattern to the DMD +setup_version = 'setup_v1.3.1' +collection_access = 'public' # or 'private' +Np = 64 # Number of pixels in one dimension of the image (image: NpxNp) +ti = 1 # Integration time of the spectrometer +zoom = 2 # Numerical zoom applied in the DMD +xw_offset = 428 # Default = 128 +yh_offset = 26 # Default = 0 +pattern_compression = 1 +scan_mode = 'Walsh' # other options: 'Walsh_inv', 'Raster', 'Raster_inv' +source = 'white_LED' # other options: 'White_Zeiss_lamp', 'No-light', 'Bioblock', 'Thorlabs_White_halogen_lamp', 'Laser_405nm_1.2W_A_0.14', 'HgAr multilines Source (HG-1 Oceanoptics)' +object_name = 'cat_roi_fh' # other options: 'Arduino_box_position_1', 'biopsy-9-posterior-margin', 'GP-without-sample' +data_folder_name = '2025-01-16_myFirstAcq2' # e.g. 'Patient-69_exvivo_LGG_BU' +data_name = 'obj_' + object_name + '_source_' + source + '_' + scan_mode + '_im_'+str(Np)+'x'+str(Np)+'_ti_'+str(ti)+'ms_zoom_x'+str(zoom) + +camPar.acq_mode = 'snapshot' # or 'video' +camPar.vidFormat = 'avi' # or 'bin' +camPar.insert_patterns = 0 # 0: no insertion / 1: insert white patterns for the camera / in snapshot mode, keep 0 to avoid a bad reconstruction +camPar.gate_period = 16 # a multiple of the integration time of the spectro, between [2 - 16] (2: insert one white pattern between each pattern)
+camPar.black_pattern_num = 1 # picture number (in the pattern_source folder) of the pattern you want to insert +all_path = func_path(data_folder_name, data_name, ask_overwrite = True) +if 'mask_index' not in locals(): mask_index = []; x_mask_coord = []; y_mask_coord = [] # execute "mask_index = []" to not apply the mask + +if not all_path.aborted: + metadata = MetaData( + output_directory = all_path.subfolder_path, + pattern_order_source = 'C:/openspyrit/spas/stats/pattern_order_' + scan_mode + '_' + str(Np) + 'x' + str(Np) + '.npz', + pattern_source = 'C:/openspyrit/spas/Patterns/' + scan_mode + '_' + str(Np) + 'x' + str(Np), + pattern_prefix = scan_mode + '_' + str(Np) + 'x' + str(Np), + experiment_name = data_name, + light_source = source, + object = object_name, + filter = 'Diffuser', # other options: 'No filter', 'linear colored filter', 'Orange filter (600nm)', 'Dichroic_420nm', 'HighPass_500nm + LowPass_750nm + Dichroic_560nm', 'BandPass filter 560nm Dl=10nm', 'Diffuser + HighPass_500nm + LowPass_750nm', 'Microscope objective x40', 'OD#0' + description = 'test with pinhole to have a point source' + # description = 'two positions of the lens 80mm, P1:12cm (zoom=0.5), P2:22cm (zoom=1.5) from the DMD. 
Dichroic plate (T:>420nm, R:<420nm), HighPass_500nm in front of the cam, GP: Glass Plate, OP: other position, OA: out of anapath',
+        )
+    try: change_patterns(DMD = DMD, acquisition_params = acquisition_parameters, zoom = zoom, xw_offset = xw_offset, yh_offset = yh_offset,
+                         force_change = False)
+    except Exception: pass
+
+    acquisition_parameters = AcquisitionParameters(pattern_compression = pattern_compression, pattern_dimension_x = Np, pattern_dimension_y = Np,
+                                                   zoom = zoom, xw_offset = xw_offset, yh_offset = yh_offset, mask_index = mask_index,
+                                                   x_mask_coord = x_mask_coord, y_mask_coord = y_mask_coord)
+
+    spectrometer_params, DMD_params, camPar = setup_2arms(spectrometer = spectrometer, DMD = DMD, camPar = camPar, DMD_initial_memory = DMD_initial_memory,
+                                                          metadata = metadata, acquisition_params = acquisition_parameters, DMD_output_synch_pulse_delay = 0,
+                                                          integration_time = ti)
+
+    if DMD_params.patterns is not None:
+        print('Total expected acq time : ' + str(int(acquisition_parameters.pattern_amount*(ti+0.356)/1000 // 60)) + ' min ' +
+              str(round(acquisition_parameters.pattern_amount*(ti+0.356)/1000 % 60)) + ' s')
+else:
+    print('setup aborted')
+#%% Acquire
+# time.sleep(0)
+if camPar.acq_mode == 'video':
+    spectral_data = acquire_2arms(
+        ava = spectrometer,
+        DMD = DMD,
+        camPar = camPar,
+        metadata = metadata,
+        spectrometer_params = spectrometer_params,
+        DMD_params = DMD_params,
+        acquisition_params = acquisition_parameters,
+        repetitions = 1,
+        reconstruct = False)
+elif camPar.acq_mode == 'snapshot':
+    snapshot(camPar, all_path.pathIDSsnapshot, all_path.pathIDSsnapshot_overview)
+    spectral_data = acquire(
+        ava = spectrometer,
+        DMD = DMD,
+        metadata = metadata,
+        spectrometer_params = spectrometer_params,
+        DMD_params = DMD_params,
+        acquisition_params = acquisition_parameters,
+        repetitions = 1,
+        reconstruct = False)
+
+    save_metadata_2arms(metadata, DMD_params, spectrometer_params, camPar, acquisition_parameters)
+#%% Hadamard Reconstruction
+Q = 
wh.walsh2_matrix(Np)
+GT = reconstruction_hadamard(acquisition_parameters, 'walsh', Q, spectral_data, Np)
+plot_reco_without_NN(acquisition_parameters, GT, all_path)
+#%% Neural Network setup (execute it just once)
+network_param = ReconstructionParameters(
+    # Reconstruction network
+    M = Np*Np,             # Number of measurements
+    img_size = 128,        # Image size of the NN reconstruction
+    arch = 'dc-net',       # Main architecture
+    denoi = 'unet',        # Image domain denoiser (set to None to skip denoising)
+    subs = 'rect',         # Subsampling scheme
+
+    # Training
+    data = 'imagenet',     # Training database
+    N0 = 10,               # Intensity (max of ph./pixel)
+
+    # Optimisation (from train2.py)
+    num_epochs = 30,       # Number of training epochs
+    learning_rate = 0.001, # Learning Rate
+    step_size = 10,        # Scheduler Step Size
+    gamma = 0.5,           # Scheduler Decrease Rate
+    batch_size = 256,      # Size of the training batch
+    regularization = 1e-7  # Regularisation Parameter
+    )
+
+cov_folder = 'C:/openspyrit/stat/ILSVRC2012_v10102019/'
+cov_path = Path(cov_folder) / f'Cov_8_{network_param.img_size}x{network_param.img_size}.npy'
+model_folder = 'C:/openspyrit/models/'
+model, device = setup_reconstruction(cov_path, model_folder, network_param)
+#%% Neural Network Reconstruction
+plot_reco_with_NN(acquisition_parameters, spectral_data, model, device, network_param, all_path, cov_path)
+#%% Transfer data to girder
+transfer_data_2arms(metadata, acquisition_parameters, spectrometer_params, DMD_params, camPar,
+                    setup_version, data_folder_name, data_name, collection_access, upload_metadata = 1)
+#%% Draw a ROI
+# Comment out data_folder_name & data_name to draw a ROI in the current acquisition; otherwise specify the acquisition to load
+data_folder_name = '2025-01-16_myFirstAcq'
+data_name = 'obj_cat_source_white_LED_Walsh_im_64x64_ti_1ms_zoom_x1'
+mask_index, x_mask_coord, y_mask_coord = extract_ROI_coord(DMD_params, acquisition_parameters, all_path,
+                                                           data_folder_name, data_name, GT, ti, Np)
+#%% Disconnect 
+disconnect_2arms(spectrometer, DMD, camPar)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/scripts/main_SPIM1D.py b/scripts/main_SPIM1D.py
new file mode 100644
index 0000000..c4a816a
--- /dev/null
+++ b/scripts/main_SPIM1D.py
@@ -0,0 +1,196 @@
+"""
+Example of an acquisition followed by a reconstruction using 100 % of the
+Hadamard patterns, then a reconstruction using 1/4 of the patterns
+(subsampled) with a DenoiCompNet model and a noise model. Reconstructions are
+performed after the acquisition, not in "real time".
+"""
+
+from spas.acquisition_SPIM1D import init, setup_cam, AcquisitionParameters, setup_2arms, setup, acquire, acquire_2arms, snapshot, disconnect_2arms, captureVid, displaySpectro, setup_tuneSpectro
+from DMD_module import change_patterns
+from spas.metadata_SPC2D import MetaData, func_path, save_metadata_2arms
+from spas.reconstruction import reconstruction_hadamard
+from spas.reconstruction_nn import ReconstructionParameters, setup_reconstruction, reorder_subsample
+from spas.noise import load_noise
+from spas.visualization import snapshotVisu, displayVid, plot_reco_without_NN, plot_reco_with_NN, extract_ROI_coord
+from spas.transfer_data_to_girder import transfer_data_2arms
+import spyrit.misc.walsh_hadamard as wh
+from spas import reconstruct
+import time
+from pathlib import Path
+import numpy as np
+#%% Initialize hardware
+spectrometer, DMD, DMD_initial_memory, camPar = init(dmd_lib_version = '4.2') # possible versions : '4.1', '4.2' or '4.3'
+#%% Define the AOI of the camera
+# Warning: not all values are allowed for Width and Height (max: 2076x3088 | ex: 768x544)
+camPar.rectAOI.s32X.value = 1100# // X
+camPar.rectAOI.s32Y.value = 640# // Y
+camPar.rectAOI.s32Width.value = 768#1544#3088#1544 # 3000#3088# // Width must be multiple of 8
+camPar.rectAOI.s32Height.value = 544#1038#2076#1730#2000## 1038# 2076 // Height
+
+camPar = captureVid(camPar)
+#%% Set Camera Parameters
+# It is advised to execute this cell twice to take
the parameter changes into account
+camPar = setup_cam(camPar,
+                   pixelClock = 474,   # Allowed values : [118, 237, 474] (MHz)
+                   fps = 220,          # FrameRate boundary : [1 - no fixed value (depends on the AOI size)]
+                   Gain = 0,           # Gain boundary : [0 100]
+                   gain_boost = 'OFF', # set "ON" to activate gain boost, "OFF" to deactivate
+                   nGamma = 1,         # Gamma boundary : [1 - 2.2]
+                   ExposureTime = .65, # Exposure Time (ms) boundary : [0.013 - 56.221]
+                   black_level = 4)    # Black Level boundary : [0 255]
+
+snapshotVisu(camPar)
+#%% Display video in continuous mode for optical tuning
+displayVid(camPar)
+#%% Display the spectrum in continuous mode for optical tuning
+pattern_to_display = 'white' #'gray'#'black'
+ti = 1 # Integration time of the spectrometer
+zoom = 1 # Numerical zoom applied in the DMD
+
+metadata, spectrometer_params, DMD_params, acquisition_parameters = setup_tuneSpectro(spectrometer = spectrometer, DMD = DMD, DMD_initial_memory = DMD_initial_memory,
+                        pattern_to_display = pattern_to_display, ti = ti, zoom = zoom, xw_offset = 128, yh_offset = 0)
+displaySpectro(ava = spectrometer, DMD = DMD, metadata = metadata, spectrometer_params = spectrometer_params, DMD_params = DMD_params, acquisition_params = acquisition_parameters)
+#%% Setup acquisition and send patterns to the DMD
+setup_version = 'setup_v1.3.1'
+collection_access = 'public' #'private'#
+Np = 64 # Number of pixels in one dimension of the image (image: NpxNp)
+ti = 1 # Integration time of the spectrometer
+zoom = 2 # Numerical zoom applied in the DMD
+xw_offset = 428 # Default = 128
+yh_offset = 26 # Default = 0
+pattern_compression = 1
+scan_mode = 'Walsh' #'Walsh_inv' #'Raster_inv' #'Raster' #
+source = 'white_LED'#White_Zeiss_lamp'#No-light'#'Bioblock'#'Thorlabs_White_halogen_lamp'#'Laser_405nm_1.2W_A_0.14'#'''#' + white LED might'#'HgAr multilines Source (HG-1 Oceanoptics)'
+object_name = 'cat_roi_fh'#'Arduino_box_position_1'#'biopsy-9-posterior-margin'#GP-without-sample'##-OP'#
+data_folder_name = 
'2025-01-16_myFirstAcq2'#'Patient-69_exvivo_LGG_BU'
+data_name = 'obj_' + object_name + '_source_' + source + '_' + scan_mode + '_im_'+str(Np)+'x'+str(Np)+'_ti_'+str(ti)+'ms_zoom_x'+str(zoom)
+
+camPar.acq_mode = 'snapshot'# 'video' #
+camPar.vidFormat = 'avi' #'bin'#
+camPar.insert_patterns = 0 # 0: no insertion / 1: insert white patterns for the camera / In the case of snapshot, put 0 to avoid bad reco
+camPar.gate_period = 16 # a multiple of the integration time of the spectro, between [2 - 16] (2: insert one white pattern between each pattern)
+camPar.black_pattern_num = 1 # insert the picture number (in the pattern_source folder) of the pattern you want to insert
+all_path = func_path(data_folder_name, data_name, ask_overwrite = True)
+if 'mask_index' not in locals(): mask_index = []; x_mask_coord = []; y_mask_coord = [] # execute "mask_index = []" to not apply the mask
+
+if not all_path.aborted:
+    metadata = MetaData(
+        output_directory = all_path.subfolder_path,
+        pattern_order_source = 'C:/openspyrit/spas/stats/pattern_order_' + scan_mode + '_' + str(Np) + 'x' + str(Np) + '.npz',
+        pattern_source = 'C:/openspyrit/spas/Patterns/' + scan_mode + '_' + str(Np) + 'x' + str(Np),
+        pattern_prefix = scan_mode + '_' + str(Np) + 'x' + str(Np),
+        experiment_name = data_name,
+        light_source = source,
+        object = object_name,
+        filter = 'Diffuser', #+ OD=0.3',''No filter',#'linear colored filter',#'Orange filter (600nm)',#'Dichroic_420nm',#'HighPass_500nm + LowPass_750nm + Dichroic_560nm',#'BandPass filter 560nm Dl=10nm',#'None', # + , #'Nothing',#'Diffuser + HighPass_500nm + LowPass_750nm',##'Microscope objective x40',#'' linear colored filter + OD#0',#'Nothing',#
+        description = 'test with pinhole to have a point source'
+        # description = 'two positions of the lens 80mm, P1:12cm (zoom=0.5), P2:22cm (zoom=1.5) from the DMD. 
Dichroic plate (T:>420nm, R:<420nm), HighPass_500nm in front of the cam, GP: Glass Plate, OP: other position, OA: out of anapath',
+        )
+    try: change_patterns(DMD = DMD, acquisition_params = acquisition_parameters, zoom = zoom, xw_offset = xw_offset, yh_offset = yh_offset,
+                         force_change = False)
+    except Exception: pass
+
+    acquisition_parameters = AcquisitionParameters(pattern_compression = pattern_compression, pattern_dimension_x = Np, pattern_dimension_y = Np,
+                                                   zoom = zoom, xw_offset = xw_offset, yh_offset = yh_offset, mask_index = mask_index,
+                                                   x_mask_coord = x_mask_coord, y_mask_coord = y_mask_coord)
+
+    spectrometer_params, DMD_params, camPar = setup_2arms(spectrometer = spectrometer, DMD = DMD, camPar = camPar, DMD_initial_memory = DMD_initial_memory,
+                                                          metadata = metadata, acquisition_params = acquisition_parameters, DMD_output_synch_pulse_delay = 0,
+                                                          integration_time = ti)
+
+    if DMD_params.patterns is not None:
+        print('Total expected acq time : ' + str(int(acquisition_parameters.pattern_amount*(ti+0.356)/1000 // 60)) + ' min ' +
+              str(round(acquisition_parameters.pattern_amount*(ti+0.356)/1000 % 60)) + ' s')
+else:
+    print('setup aborted')
+#%% Acquire
+# time.sleep(0)
+if camPar.acq_mode == 'video':
+    spectral_data = acquire_2arms(
+        ava = spectrometer,
+        DMD = DMD,
+        camPar = camPar,
+        metadata = metadata,
+        spectrometer_params = spectrometer_params,
+        DMD_params = DMD_params,
+        acquisition_params = acquisition_parameters,
+        repetitions = 1,
+        reconstruct = False)
+elif camPar.acq_mode == 'snapshot':
+    snapshot(camPar, all_path.pathIDSsnapshot, all_path.pathIDSsnapshot_overview)
+    spectral_data = acquire(
+        ava = spectrometer,
+        DMD = DMD,
+        metadata = metadata,
+        spectrometer_params = spectrometer_params,
+        DMD_params = DMD_params,
+        acquisition_params = acquisition_parameters,
+        repetitions = 1,
+        reconstruct = False)
+
+    save_metadata_2arms(metadata, DMD_params, spectrometer_params, camPar, acquisition_parameters)
+#%% Hadamard Reconstruction
+Q = 
wh.walsh2_matrix(Np)
+GT = reconstruction_hadamard(acquisition_parameters, 'walsh', Q, spectral_data, Np)
+plot_reco_without_NN(acquisition_parameters, GT, all_path)
+#%% Neural Network setup (execute it just once)
+network_param = ReconstructionParameters(
+    # Reconstruction network
+    M = Np*Np,             # Number of measurements
+    img_size = 128,        # Image size of the NN reconstruction
+    arch = 'dc-net',       # Main architecture
+    denoi = 'unet',        # Image domain denoiser (set to None to skip denoising)
+    subs = 'rect',         # Subsampling scheme
+
+    # Training
+    data = 'imagenet',     # Training database
+    N0 = 10,               # Intensity (max of ph./pixel)
+
+    # Optimisation (from train2.py)
+    num_epochs = 30,       # Number of training epochs
+    learning_rate = 0.001, # Learning Rate
+    step_size = 10,        # Scheduler Step Size
+    gamma = 0.5,           # Scheduler Decrease Rate
+    batch_size = 256,      # Size of the training batch
+    regularization = 1e-7  # Regularisation Parameter
+    )
+
+cov_folder = 'C:/openspyrit/stat/ILSVRC2012_v10102019/'
+cov_path = Path(cov_folder) / f'Cov_8_{network_param.img_size}x{network_param.img_size}.npy'
+model_folder = 'C:/openspyrit/models/'
+model, device = setup_reconstruction(cov_path, model_folder, network_param)
+#%% Neural Network Reconstruction
+plot_reco_with_NN(acquisition_parameters, spectral_data, model, device, network_param, all_path, cov_path)
+#%% Transfer data to girder
+transfer_data_2arms(metadata, acquisition_parameters, spectrometer_params, DMD_params, camPar,
+                    setup_version, data_folder_name, data_name, collection_access, upload_metadata = 1)
+#%% Draw a ROI
+# Comment out data_folder_name & data_name to draw a ROI in the current acquisition; otherwise specify the acquisition to load
+data_folder_name = '2025-01-16_myFirstAcq'
+data_name = 'obj_cat_source_white_LED_Walsh_im_64x64_ti_1ms_zoom_x1'
+mask_index, x_mask_coord, y_mask_coord = extract_ROI_coord(DMD_params, acquisition_parameters, all_path,
+                                                           data_folder_name, data_name, GT, ti, Np)
+#%% Disconnect 
+disconnect_2arms(spectrometer, DMD, camPar) + + + + + + + + + + + + + + + + + + + + + + diff --git a/scripts/main_seq_2arms.py b/scripts/main_seq_2arms.py deleted file mode 100644 index b7413f1..0000000 --- a/scripts/main_seq_2arms.py +++ /dev/null @@ -1,148 +0,0 @@ -""" -Example of an acquisition followed by a reconstruction using 100 % of the -Hadamard patterns and then, a reconstruction using 1/4 of the patterns -(subsampled) with a DenoiCompNet model and a noise model. Reconstructions are -performed after the acquisition and not in "real-time". -""" - -# from spas import * -from spas.acquisition import init_2arms, setup_cam, AcquisitionParameters, setup_2arms, acquire, acquire_2arms, snapshot, disconnect_2arms, captureVid -from spas.metadata import MetaData, func_path -from spas.reconstruction import reconstruction_hadamard -from spas.reconstruction_nn import ReconstructionParameters, setup_reconstruction, reorder_subsample -from spas.noise import load_noise -from spas.visualization import snapshotVisu, displayVid, plot_reco_without_NN, plot_reco_with_NN -from spas.transfer_data_to_girder import transfer_data_2arms -import spyrit.misc.walsh_hadamard as wh -from spas import reconstruct -import time -#%% Initialize hardware -spectrometer, DMD, DMD_initial_memory, camPar = init_2arms() -#%% Define the AOI -# Warning, not all values are allowed for Width and Height (max: 2076x3088 | ex: 768x544) -camPar.rectAOI.s32X.value = 1100#0 #0# // X -camPar.rectAOI.s32Y.value = 765#0#0#800# // Y -camPar.rectAOI.s32Width.value = 768# 1544 #3000#3088#3088# // Width must be multiple of 8 -camPar.rectAOI.s32Height.value = 544 #1730#2000## 1038# 2076 // Height - -camPar = captureVid(camPar) -#%% Set Camera Parameters -# It is advice to execute this cell twice to take into account the parameter changement -camPar = setup_cam(camPar, - pixelClock = 474, # Allowed values : [118, 237, 474] (MHz) - fps = 220, # FrameRate boundary : [1 - No value(depend of the AOI size)] - Gain = 0, # 
Gain boundary : [0 100] - gain_boost = 'OFF', # set "ON" to activate gain boost, "OFF" to deactivate - nGamma = 1, # Gamma boundary : [1 - 2.2] - ExposureTime = 0.04,# Exposure Time (ms) boudary : [0.013 - 56.221] - black_level = 5) # Black Level boundary : [0 255] - -snapshotVisu(camPar) -#%% Display video in continous mode for optical tuning -displayVid(camPar) -#%% Setup acquisition and send pattern to the DMD -setup_version = 'setup_v1.3.1' -Np = 64 # Number of pixels in one dimension of the image (image: NpxNp) -zoom = 1 # Numerical zoom applied in the DMD -ti = 1 # Integration time of the spectrometer -scan_mode = 'Walsh' #'Raster' #'Walsh_inv' #'Raster_inv' # -data_folder_name = '2023-05-12_test_ALP4' -data_name = 'cat_' + scan_mode + '_im_'+str(Np)+'x'+str(Np)+'_ti_'+str(ti)+'ms_zoom_x'+str(zoom) - -camPar.acq_mode = 'snapshot'#'video' # -camPar.vidFormat = 'avi' #'bin'# -camPar.insert_patterns = 0 # 0: no insertion / 1: insert white patterns for the camera -camPar.gate_period = 1 # a multiple of the integration time of the spectro, between [2 - 16] (2: insert one white pattern between each pattern) -camPar.black_pattern_num = 1 # insert the picture number (in the pattern_source folder) of the pattern you want to insert -all_path = func_path(data_folder_name, data_name) - -metadata = MetaData( - output_directory = all_path.subfolder_path, - pattern_order_source = 'C:/openspyrit/spas/stats/pattern_order_' + scan_mode + '_' + str(Np) + 'x' + str(Np) + '.npz', - pattern_source = 'C:/openspyrit/spas/Patterns/Zoom_x' + str(zoom) + '/' + scan_mode + '_' + str(Np) + 'x' + str(Np), - pattern_prefix = scan_mode + '_' + str(Np) + 'x' + str(Np), - - experiment_name = data_name, - light_source = 'White LED light',#'Zeiss KL2500 white lamp',#'LED Laser 385nm + optical fiber 600µm, P = 30 mW',#'the sun',#'IKEA lamp 10W LED1734G10',#or BlueLaser 50 mW',#' (74) + 'Bioblock power: II',#'HgAr multilines Source (HG-1 Oceanoptics)',#'Nothing',# - object = 'Cat',#two little 
tubes containing PpIX at 634 and 620 state',#'Apple',#',#'USAF',#'Nothing''color checker' - filter = 'None', #'BandPass filter 560nm Dl=10nm',#'HighPass_500nm + LowPass_750nm',# + optical density = 0.1', #'Nothing',#'Diffuser + HighPass_500nm + LowPass_750nm',##'Microsope objective x40',#'' linear colored filter + OD#0',#'Nothing',# - description = 'test after changing metadata.py and acquisition.py to be used on Linux and MacOs plateform. We would like to be sure that SPAS for Windows is ok') - -acquisition_parameters = AcquisitionParameters( - pattern_compression = 1.0, - pattern_dimension_x = Np, - pattern_dimension_y = Np) - -spectrometer_params, DMD_params, camPar = setup_2arms( - spectrometer = spectrometer, - DMD = DMD, - camPar = camPar, - DMD_initial_memory = DMD_initial_memory, - metadata = metadata, - acquisition_params = acquisition_parameters, - DMD_output_synch_pulse_delay = 42, - integration_time = ti) -#%% Acquire -time.sleep(0) -if camPar.acq_mode == 'video': - spectral_data = acquire_2arms( - ava = spectrometer, - DMD = DMD, - camPar = camPar, - metadata = metadata, - spectrometer_params = spectrometer_params, - DMD_params = DMD_params, - acquisition_params = acquisition_parameters, - repetitions = 1, - reconstruct = False) -elif camPar.acq_mode == 'snapshot': - snapshot(camPar, all_path.pathIDSsnapshot, all_path.pathIDSsnapshot_overview) - spectral_data = acquire( - ava = spectrometer, - DMD = DMD, - metadata = metadata, - spectrometer_params = spectrometer_params, - DMD_params = DMD_params, - acquisition_params = acquisition_parameters, - repetitions = 1, - reconstruct = False) -#%% Hadamard Reconstruction -Q = wh.walsh2_matrix(Np) -GT = reconstruction_hadamard(acquisition_parameters.patterns, 'walsh', Q, spectral_data, Np) -plot_reco_without_NN(acquisition_parameters, GT, Q, all_path) -#%% Neural Network Reconstruction -t0 = time.time() -network_param = ReconstructionParameters( - # Reconstruction network - M = 64*64, # Number of measurements - 
img_size = 128, # Image size - arch = 'dc-net', # Main architecture - denoi = 'unet', # Image domain denoiser - subs = 'rect', # Subsampling scheme - - # Training - data = 'imagenet', # Training database - N0 = 10, # Intensity (max of ph./pixel) - - # Optimisation (from train2.py) - num_epochs = 30, # Number of training epochs - learning_rate = 0.001, # Learning Rate - step_size = 10, # Scheduler Step Size - gamma = 0.5, # Scheduler Decrease Rate - batch_size = 256, # Size of the training batch - regularization = 1e-7 # Regularisation Parameter - ) - -cov_path = 'C:/openspyrit/stat/ILSVRC2012_v10102019/Cov_8_128x128.npy' -model_folder = 'C:/openspyrit/models/' -model, device = setup_reconstruction(cov_path, model_folder, network_param) -meas = reorder_subsample(spectral_data.T, acquisition_parameters, network_param) # Reorder and subsample -reco = reconstruct(model, device, meas) # Reconstruction -plot_reco_with_NN(acquisition_parameters, reco, all_path) -print('elapsed time = ' + str(round(time.time()-t0)) + ' s') -#%% transfer data to girder -transfer_data_2arms(metadata, acquisition_parameters, spectrometer_params, DMD_params, camPar, - setup_version, data_folder_name, data_name, upload_metadata = 1) -#%% Disconnect -disconnect_2arms(spectrometer, DMD, camPar) - diff --git a/scripts/reconstruction_script.py b/scripts/reconstruction_script.py index 6bf22e7..fd8f781 100644 --- a/scripts/reconstruction_script.py +++ b/scripts/reconstruction_script.py @@ -1,52 +1,500 @@ # -*- coding: utf-8 -*- __author__ = 'Guilherme Beneti Martins' +#%% Package + +import os import numpy as np -from scipy.io import loadmat -from matplotlib import pyplot as plt -from spyrit.misc.statistics import Hadamard_Transform_Matrix -from spas import read_metadata, reconstruction_hadamard +import spyrit.misc.walsh_hadamard as wh +from spas.visualization import plot_reco_without_NN, plot_reco_with_NN +from spas.metadata import read_metadata, read_metadata_2arms, func_path +from 
spas.transfer_data_to_girder import transfer_data_2arms, transfer_data +from spas.reconstruction_nn import ReconstructionParameters, setup_reconstruction +from spas.reconstruction import reconstruction_hadamard + +import time +import pickle +import csv + +#%% INPUT +nb_loop = 1000 +delete_old_fig = 0 +read_spectral_data = 0 +read_had_reco = 0 +read_nn_reco = 1 + +transfer_matched = 0 +tranfer = 0 +upload_metadata = 0 +check_data_exist_in_girder = 0 +write_to_csv_file = 0 +#%% Begin +t_tot_0 = time.time() +############################ CSV file ########################## +csv_file_path = 'data_in_Girder/data.csv' +csv_exist = os.path.isfile(csv_file_path) +fieldnames = ['setup_version', 'data_folder_name', 'data_name', 'transfered_to_girder', 'had_reco', 'nn_reco', 'check_data_exist_in_girder', 'delete_old_fig'] +########################## to be change ############################ +setup_version = 'setup_v1.3.1' +# data_folder_name = '2023-03-17_test_Leica_microscope_HCL_Bron'# +# data_folder_name = '2023-04-05_PpIX_at_lab_to_compare_to_hospital' +# data_folder_name = '2023-04-07_PpIX_at_lab_to_compare_to_hospital' +# data_folder_name = '2023-11-21_Arduino_hologram' +# data_folder_name = '2024-02-02_test_chromaticity' +data_folder_name = '2024-02-02_test_reco' + +data_file_list = os.listdir('../data/' + data_folder_name) +# data_file_list = ['red_and_black_ink_im_64x64_ti_20ms_zoom_x2'] +# data_file_list = ['obj_Arduino_hologram_pos_1_source_White_Zeiss_KL-2500-LCD_lamp_f80mm-P2_Walsh_im_64x64_ti_30ms_zoom_x1'] +# data_file_list = ['obj_cat_source_white_LED_f80mm-P2_Walsh_im_64x64_ti_1ms_zoom_x1'] +data_file_list = ['obj_cat_source_white_LED_f80mm-P2_Walsh_im_64x64_ti_1ms_zoom_x1'] +############################ Beginning ########################## +inc = 0 +for data_name in data_file_list: + ########################### path ################################### + print('data folder : '+data_folder_name) + print(' -' + data_name) + all_path = func_path(data_folder_name, 
data_name) + ###################### delete old figures ########################### + if delete_old_fig == 1: + fig_list = os.listdir(all_path.overview_path) + for fig in fig_list: + if fig.find('HAD_RECO') >=0 or fig.find('NN_RECO') >=0: + print(fig) + os.remove(all_path.overview_path + '/' + fig) + ####################### check if files exist ############ + if os.path.isfile(all_path.had_reco_path): + exist_had_reco = 1 + else: + exist_had_reco = 0 + + if os.path.isfile(all_path.nn_reco_path): + exist_nn_reco = 1 + else: + exist_nn_reco = 0 + + if read_nn_reco == 1: + read_spectral_data = 1 + + + # data_overview_list = os.listdir(all_path.overview_path) + # if len(data_overview_list) > 1 and had_reco == 0: + # had_reco = 0 + # had_reco_for_csv = 1 + # else: + # had_reco = 1 + # had_reco_for_csv = 0 + + # if os.path.isfile(all_path.nn_reco_path): + # nn_reco_for_csv = 1 + # else: + # nn_reco_for_csv = 0 + ########################## read spectral data ########################### + if read_spectral_data == 1: + print('--- read spectral data') + file_spectral_data = np.load(all_path.data_path+'_spectraldata.npz') + try: + spectral_data = file_spectral_data['spectral_data'] + # print('npz item : spectral_data') + # for k in file.data_name: + # print(k) + except: + spectral_data = file_spectral_data['arr_0'] + print('npz item : arr_0') + ########################### read metadata ########################### + metadata_path = all_path.data_path + '_metadata.json' + metadata, acquisition_parameters, spectrometer_parameters, DMD_parameters = read_metadata(metadata_path) + + wavelengths = acquisition_parameters.wavelengths + meas_size = acquisition_parameters.pattern_dimension_x * acquisition_parameters.pattern_dimension_y * 2 + Np = acquisition_parameters.pattern_dimension_x + #################### Hadamard reconstruction ####################### + if read_had_reco == 1 and exist_had_reco == 1: + print('--- read had reco matrix') + file_had_reco = 
np.load(all_path.data_path+'_had_reco.npz')
+        GT = file_had_reco['arr_0']
+        GT = np.rot90(GT, 2)
+        had_reco_for_csv = 1
+    elif read_had_reco == 1:
+        print('--- reconstruct had reco matrix from spectral data')
+        # subsampling
+        nsub = 1
+        M_sub = spectral_data[:8192//nsub,:]
+        patterns_sub = acquisition_parameters.patterns[:8192//nsub]
+
+        ### Hadamard Reconstruction
+        Q = wh.walsh2_matrix(Np)
+        GT = reconstruction_hadamard(patterns_sub, 'walsh', Q, M_sub, Np)
+        had_reco_for_csv = 1
+    else:
+        had_reco_for_csv = 0
+
+    if read_had_reco == 1:
+        plot_reco_without_NN(acquisition_parameters, GT, all_path)
+
+    ######################### read cam metadata ########################
+    try:
+        cam_metadata_path = all_path.data_path + '_metadata_cam.pkl'
+
+        file = open(cam_metadata_path,'rb')
+        cam_metadata = pickle.load(file)
+        file.close()
+        metadata_cam = 1
+    except Exception:
+        print('metadata of the cam does not exist')
+        cam_metadata = []
+        metadata_cam = 0
+
+    camPar = cam_metadata
+
+    #%% Neural Network Reconstruction
+    if read_nn_reco == 1:
+        print('---------- nn reco ----------')
+        t0 = time.time()
+        network_param = ReconstructionParameters(
+            # Reconstruction network
+            M = round(meas_size/2), #64*64, # Number of measurements
+            img_size = 128,        # Image size
+            arch = 'dc-net',       # Main architecture
+            denoi = 'unet',        # Image domain denoiser
+            subs = 'rect',         # Subsampling scheme
+
+            # Training
+            data = 'imagenet',     # Training database
+            N0 = 10,               # Intensity (max of ph./pixel)
+
+            # Optimisation (from train2.py)
+            num_epochs = 30,       # Number of training epochs
+            learning_rate = 0.001, # Learning Rate
+            step_size = 10,        # Scheduler Step Size
+            gamma = 0.5,           # Scheduler Decrease Rate
+            batch_size = 256,      # Size of the training batch
+            regularization = 1e-7  # Regularisation Parameter
+            )
+
+        cov_path = 'C:/openspyrit/stat/ILSVRC2012_v10102019/Cov_8_128x128.npy'
+        model_folder = 'C:/openspyrit/models/'
+        model, device = setup_reconstruction(cov_path, model_folder, network_param)
+        
plot_reco_with_NN(acquisition_parameters, spectral_data, model, device, network_param, all_path, cov_path) + print('elapsed time = ' + str(round(time.time()-t0)) + ' s') + nn_reco_for_csv = 1 + #%% transfer data to girder + if tranfer == 1: + if metadata_cam == 0: + transfer_data(metadata, acquisition_parameters, spectrometer_parameters, DMD_parameters, + setup_version, data_folder_name, data_name, upload_metadata) + else: + transfer_data_2arms(metadata, acquisition_parameters, spectrometer_parameters, DMD_parameters, camPar, + setup_version, data_folder_name, data_name, upload_metadata) + + + inc = inc + 1 + if nb_loop == inc: + print('stop loop') + break + #%% Write dataLog in csv file + if write_to_csv_file == 1: + rows = [ + {'setup_version': setup_version, + 'data_folder_name': data_folder_name, + 'data_name': data_name, + 'transfered_to_girder': transfer_matched, + 'had_reco': had_reco_for_csv, + 'nn_reco': nn_reco_for_csv, + 'check_data_exist_in_girder': check_data_exist_in_girder, + 'delete_old_fig': delete_old_fig} + ] + + with open(csv_file_path, 'a', encoding='UTF8', newline='') as f: + writer = csv.DictWriter(f, fieldnames=fieldnames) + if csv_exist == False: + writer.writeheader() + writer.writerows(rows) + +print('total elapsed time = ' + str(round(time.time()-t_tot_0)) + ' s') +# #%%####################### spectra ##################################### +# from scipy.signal import savgol_filter + +# plot_raw_spectrum = 0 +# plot_smooth_spectra = 1 +# plot_norm_smooth_spectra = 1 + + +# if data_name == 'Painting_Don_Quichotte_64x64_ti_150ms': +# y1 = np.mean(np.mean(GT[8:11,29:34,:], axis=1), axis=0) #'white ref' +# y2 = np.mean(np.mean(GT[25:28,27:30,:], axis=1), axis=0) #'base de la queue' +# y3 = np.mean(GT[22,33:37,:], axis=0) #'milieu de la queue' +# y4 = np.mean(np.mean(GT[17:21,19:24,:], axis=1), axis=0) #'background' +# elif data_name == 'Painting_St-Tropez_64x64_ti_100ms': +# y1 = np.mean(np.mean(GT[19:33,54:57,:], axis=1), axis=0) #'white ref' 
+# y2 = np.mean(np.mean(GT[15:19,21:28,:], axis=1), axis=0) #'left house' +# y3 = np.mean(np.mean(GT[20:22,41:50,:], axis=1), axis=0) #'right house' +# y4 = np.mean(np.mean(GT[45:56,15:33,:], axis=1), axis=0) #'boat' + + +# window_size = 51 +# polynomial_order = 4 +# ysm1 = savgol_filter(y1, window_size, polynomial_order) +# ysm2 = savgol_filter(y2, window_size, polynomial_order) +# ysm3 = savgol_filter(y3, window_size, polynomial_order) +# ysm4 = savgol_filter(y4, window_size, polynomial_order) + +# if plot_raw_spectrum == 1: +# plt.figure() +# plt.plot(wavelengths, y1, color = 'blue') +# plt.plot(wavelengths, ysm1, color = 'red') +# plt.title('white ref') +# plt.grid() + +# plt.figure() +# plt.plot(wavelengths, y2, color = 'blue') +# plt.plot(wavelengths, ysm2, color = 'red') +# plt.title('base de la queue') +# plt.grid() + +# plt.figure() +# plt.plot(wavelengths, y3, color = 'blue') +# plt.plot(wavelengths, ysm3, color = 'red') +# plt.title('milieu de la queue') +# plt.grid() + +# plt.figure() +# plt.plot(wavelengths, y4, color = 'blue') +# plt.plot(wavelengths, ysm4, color = 'red') +# plt.title('background') +# plt.grid() + +# if plot_smooth_spectra == 1: +# plt.figure() +# plt.plot(wavelengths, ysm1, color = 'green') +# plt.plot(wavelengths, ysm2, color = 'blue') +# plt.plot(wavelengths, ysm3, color = 'red') +# plt.plot(wavelengths, ysm4, color = 'black') +# plt.legend(['1', '2', '3', '4']) +# plt.grid() + +# if plot_norm_smooth_spectra == 1: +# cut = 10 +# ym1 = ysm1[cut:-cut]/np.amax(ysm1[cut:-cut]) +# ym2 = ysm2[cut:-cut]/ym1 +# ym3 = ysm3[cut:-cut]/ym1 +# ym4 = ysm4[cut:-cut]/ym1 + +# plt.figure() +# plt.plot(wavelengths[cut:-cut], ym2, color = 'blue') +# plt.plot(wavelengths[cut:-cut], ym3, color = 'red') +# plt.plot(wavelengths[cut:-cut], ym4, color = 'black') +# plt.legend(['2', '3', '4']) +# plt.grid() + +# if data_name == 'Falcon_620_WhiteLight_OFF_BlueLaser_ON_im_32x32_Zoom_x1_ti_1000ms#_tc_10.0ms': +# # 620nm +# plt.figure() +# plt.plot(wavelengths, 
np.mean(np.mean(GT[3:26,5:25,:], axis=1), axis=0)) +# plt.axvline(x = 620, color = 'b', label = 'axvline - full height') +# plt.axvline(x = 634, color = 'r', label = 'axvline - full height') +# plt.title('620 nm') +# plt.grid() + +# if data_name == 'Falcon_634_WhiteLight_OFF_BlueLaser_ON_im_64x64_Zoom_x1_ti_50ms#_tc_4.619ms': +# plt.figure() +# plt.plot(wavelengths, np.mean(np.mean(GT[15:45,15:45,:], axis=1), axis=0)) +# plt.axvline(x = 620, color = 'b', label = 'axvline - full height') +# plt.axvline(x = 634, color = 'r', label = 'axvline - full height') +# plt.title('634 nm') +# plt.grid() + +# plt.figure() +# plt.plot(wavelengths, GT[32,32,:]) +# plt.axvline(x = 620, color = 'b', label = 'axvline - full height') +# plt.axvline(x = 634, color = 'r', label = 'axvline - full height') +# plt.title('634 nm - single pixel') +# plt.grid() + +# if data_name == 'WhiteLight_OFF_BlueLaser_ON_im_64x64_Zoom_x1_ti_100ms#_tc_4.619ms': +# # 620nm +# plt.figure() +# plt.plot(wavelengths, np.mean(np.mean(GT[6:22,19:32,:], axis=1), axis=0)) +# plt.axvline(x = 620, color = 'b', label = 'axvline - full height') +# plt.axvline(x = 634, color = 'r', label = 'axvline - full height') +# plt.title('620 nm') +# plt.grid() + +# # 634nm +# plt.figure() +# plt.plot(wavelengths, np.mean(np.mean(GT[20:50,37:54,:], axis=1), axis=0)) +# plt.axvline(x = 620, color = 'b', label = 'axvline - full height') +# plt.axvline(x = 634, color = 'r', label = 'axvline - full height') +# plt.title('634 nm') +# plt.grid() + +# plt.figure() +# plt.plot(wavelengths, GT[25,44,:]) +# plt.axvline(x = 620, color = 'b', label = 'axvline - full height') +# plt.axvline(x = 634, color = 'r', label = 'axvline - full height') +# plt.title('634 nm - single pixel') +# plt.grid() + +# # S0 +# plt.figure() +# plt.plot(wavelengths, np.mean(np.mean(GT[27:52,8:30,:], axis=1), axis=0)) +# plt.axvline(x = 620, color = 'b', label = 'axvline - full height') +# plt.axvline(x = 634, color = 'r', label = 'axvline - full height') +# 
plt.title('S_0') +# plt.grid() +######################### subsampling ####################################### +# ============================================================================= +# N = 64 +# nsub = 2 +# M_sub = M[:8192//nsub,:] +# acquisition_parameters.patterns_sub = acquisition_parameters.patterns[:8192//nsub] +# GT_sub = reconstruction_hadamard(acquisition_parameters.patterns_sub, 'walsh', Q, M_sub) +# F_bin_sub, wavelengths_bin, bin_width = spectral_binning(GT_sub.T, acquisition_parameters.wavelengths, 530, 730, 8) +# +# +# +# plot_color(F_bin_sub, wavelengths_bin) +# plt.savefig(fig_had_reco_path + '_wavelength_binning_subsamplig=' + str(nsub) + '.png') +# plt.show() +# ============================================================================= + +# ============================================================================= +# plt.figure +# plt.imshow(GT[:,:,0]) +# plt.title(f'lambda = {wavelengths[0]:.2f} nm') +# plt.savefig(fig_had_reco_path + '_' + f'lambda = {wavelengths[0]:.2f} nm.png') +# plt.show() +# +# plt.figure +# plt.imshow(GT[:,:,410]) +# plt.title(f'lambda = {wavelengths[410]:.2f} nm') +# plt.savefig(fig_had_reco_path + '_' + f'lambda = {wavelengths[410]:.2f} nm.png') +# plt.show() +# +# plt.figure +# plt.imshow(GT[:,:,820]) +# plt.title(f'lambda = {wavelengths[820]:.2f} nm') +# plt.savefig(fig_had_reco_path + '_' + f'lambda = {wavelengths[820]:.2f} nm.png') +# plt.show() +# +# plt.figure +# plt.imshow(GT[:,:,1230]) +# plt.title(f'lambda = {wavelengths[1230]:.2f} nm') +# plt.savefig(fig_had_reco_path + '_' + f'lambda = {wavelengths[1230]:.2f} nm.png') +# plt.show() +# +# plt.figure +# plt.imshow(GT[:,:,2047]) +# plt.title(f'lambda = {wavelengths[2047]:.2f} nm') +# plt.savefig(fig_had_reco_path + '_' + f'lambda = {wavelengths[2047]:.2f} nm.png') +# plt.show() +# +# plt.figure +# plt.imshow(np.sum(GT,axis=2)) +# plt.title('Sum of all wavelengths') +# plt.savefig(fig_had_reco_path + '_sum_of_wavelengths.png') +# plt.show() +# +# 
plt.figure +# plt.scatter(wavelengths, np.mean(np.mean(GT,axis=1),axis=0)) +# plt.grid() +# plt.xlabel('Lambda (nm)') +# plt.title('Spectral view in the spatial mean') +# plt.savefig(fig_had_reco_path + '_spectral_axe_of_the_hypercube.png') +# plt.show() +# +# indx = np.where(GT == np.max(GT)) +# sp = GT[indx[0],indx[1],:] +# plt.figure +# plt.scatter(wavelengths, sp.T) +# plt.grid() +# plt.xlabel('Lambda (nm)') +# plt.title('Spectral view of the max intensity') +# plt.savefig(fig_had_reco_path + '_spectral_axe_of_max_intensity.png') +# plt.show() +# ============================================================================= +# #%% Reconstruct with NN + +# #%% Setup reconstruction +# network_params = ReconstructionParameters( +# img_size = Np, +# CR = 1024, +# denoise = True, +# epochs = 40, +# learning_rate = 1e-3, +# step_size = 20, +# gamma = 0.2, +# batch_size = 256, +# regularization = 1e-7, +# N0 = 50.0, +# sig = 0.0, +# arch_name = 'c0mp') + +# cov_path = 'C:/openspyrit/spas/stats/Cov_'+str(Np)+'x'+str(Np)+'.npy' +# mean_path = 'C:/openspyrit/spas/stats/Average_'+str(Np)+'x'+str(Np)+'.npy' +# model_root = 'C:/openspyrit/spas/models/new-nicolas/' +# H = wh.walsh2_matrix(Np)/Np + +# model, device = setup_reconstruction(cov_path, mean_path, H, model_root, network_params) +# noise = load_noise('C:/openspyrit/spas/noise-calibration/fit_model2.npz') + +# reconstruction_params = { +# 'model' : model, +# 'device' : device, +# 'batches': 1, +# 'noise' : noise} + +# # network_params = ReconstructionParameters( +# # img_size=64, +# # CR=1024, +# # denoise=True, +# # epochs=40, +# # learning_rate=1e-3, +# # step_size=20, +# # gamma=0.2, +# # batch_size=256, +# # regularization=1e-7, +# # N0=50.0, +# # sig=0.0, +# # arch_name='c0mp',) + +# # cov_path = '../stats/new-nicolas/Cov_64x64.npy' +# # mean_path = '../stats/new-nicolas/Average_64x64.npy' +# # H_path = '../stats/new-nicolas/H.npy' +# # model_root = '../models/new-nicolas/' + +# # model, device = 
setup_reconstruction(cov_path, mean_path, H_path, model_root, network_params) +# # noise = load_noise('../noise-calibration/fit_model2.npz') -# Matlab patterns -#file = loadmat('../data/matlab.mat') -#Q = file['Q'] +# # reconstruction_params = { +# # 'model': model, +# # 'device': device, +# # 'batches': 1, +# # 'noise': noise, +# # } -# fht patterns -Q = Hadamard_Transform_Matrix(64) +# F_bin, wavelengths_bin, bin_width, noise_bin = spectral_binning(M.T, acquisition_parameters.wavelengths, 530, 730, 8, 0, noise) +# recon = reconstruct(model, device, F_bin[0:8192//4,:], 1, noise_bin) +# plot_color(recon, wavelengths_bin) +# plt.savefig(nn_reco_path + '_reco_wavelength_binning.png') +# plt.show() -data_path = '../data/22-04-2021-test-acq/fht_patterns' -file = np.load(data_path+'_spectraldata.npz') -M = file['spectral_data'] +# #%% transfer data to girder +# transf.transfer_data_to_girder(metadata, acquisition_parameters, spectrometer_params, DMD_params, setup_version, data_folder_name, data_name) +#%% delete plots +# Question = input("Do you want to delete the figures yes [y] ? 
") +# if Question == ("y") or Question == ("yes"): +# shutil.rmtree(overview_path) +# print ("==> figures deleted") -metadata, acquisition_parameters, spectrometer_params, DMD_params = read_metadata(data_path+'_metadata.json') -N = 64 -frames = reconstruction_hadamard(acquisition_parameters.patterns, 'fht', Q, M) -plt.figure(1) -plt.imshow(frames[:,:,0]) -plt.title(f'lambda = {wavelengths[0]:.2f} nm') -plt.figure(2) -plt.imshow(frames[:,:,410]) -plt.title(f'lambda = {wavelengths[410]:.2f} nm') -plt.figure(3) -plt.imshow(frames[:,:,820]) -plt.title(f'lambda = {wavelengths[820]:.2f} nm') -plt.figure(4) -plt.imshow(frames[:,:,1230]) -plt.title(f'lambda = {wavelengths[1230]:.2f} nm') -plt.figure(5) -plt.imshow(frames[:,:,2047]) -plt.title(f'lambda = {wavelengths[2047]:.2f} nm') -plt.figure(6) -plt.imshow(np.sum(frames,axis=2)) -plt.title('Sum of all wavelengths') \ No newline at end of file diff --git a/setup.py b/setup.py index ccaaf47..af0e4ea 100644 --- a/setup.py +++ b/setup.py @@ -5,19 +5,20 @@ setup( name='spas', - version='0.0.1', + version='1.4.0', + include_package_data=True, description='A python toolbox for acquisition of images based on the single-pixel framework.', author='Guilherme Beneti Martins', url='https://github.com/openspyrit/spas', long_description=readme, long_description_content_type = "text/markdown", install_requires=[ - 'ALP4lib @ git+https://github.com/guilhermebene/ALP4lib.git@7e35abf3a5c2e31652f7cfb2e4243b279b6a3a47', + 'ALP4lib @ git+https://github.com/openspyrit/ALP4lib@3db7bec88b260e5396626b1b185d7a2f678e9bbf', 'dataclasses-json (==0.5.2)', 'certifi', 'cycler', 'kiwisolver', - 'matplotlib', + 'matplotlib', #==3.7.5 'numpy', 'msl-equipment @ git+https://github.com/MSLNZ/msl-equipment.git', 'Pillow', @@ -30,7 +31,10 @@ 'spyrit', 'wincertstore', 'pyueye', - 'girder-client' + 'tensorboard', + 'girder-client', + 'plotter', + 'tikzplotlib' ], packages=find_packages() ) diff --git a/spas/DMD_module.py b/spas/DMD_module.py new file mode 100644 
index 0000000..a4969cd --- /dev/null +++ b/spas/DMD_module.py @@ -0,0 +1,699 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- +""" +Created on Tue Mar 4 09:08:24 2025 + +@author: mahieu +""" + +from time import perf_counter_ns +from pathlib import Path +from enum import IntEnum +from dataclasses import dataclass, InitVar +from typing import Optional, List, Tuple +from dataclasses_json import dataclass_json +import numpy as np +from tqdm import tqdm + +##### DLL for the DMD +try: + from ALP4 import ALP4, ALP_FIRSTFRAME, ALP_LASTFRAME + from ALP4 import ALP_AVAIL_MEMORY, ALP_DEV_DYN_SYNCH_OUT1_GATE, tAlpDynSynchOutGate + # print('ALP4 is ok in Acquisition file') +except: + class ALP4: + pass + +from spas.metadata_SPC2D import MetaData, AcquisitionParameters + +# connect to the DMD +def init_DMD(dmd_lib_version: str = '4.2') -> Tuple[ALP4, int]: + """Initialize a DMD and clean its allocated memory from a previous use. + + Args: + dmd_lib_version [str]: the version of the DMD library + + Returns: + Tuple[ALP4, int]: Tuple containing initialized DMD object and DMD + initial available memory. 
+ """ + + # Initializing DMD + stop_init = False + if dmd_lib_version == '4.1': + print('DMD library version ' + dmd_lib_version + ' is not installed; please install it at "openspyrit/spas/alpV41"') + stop_init = True + elif dmd_lib_version == '4.2': + dll_path = Path(__file__).parent.parent.joinpath('lib/alpV42').__str__() + DMD = ALP4(version='4.2',libDir=dll_path) + elif dmd_lib_version == '4.3': + dll_path = Path(__file__).parent.parent.joinpath('lib/alpV43').__str__() + DMD = ALP4(version='4.3',libDir=dll_path) + else: + print('unknown DMD library version') + stop_init = True + + if stop_init == False: + DMD.Initialize(DeviceNum=None) + + #print(f'DMD initial available memory: {DMD.DevInquire(ALP_AVAIL_MEMORY)}') + print('DMD connected') + + return DMD, DMD.DevInquire(ALP_AVAIL_MEMORY) + else: + print('DMD initialization aborted') + + +# create the class DMDParameters +class DMDTypes(IntEnum): + """Enumeration of DMD types and respective codes.""" + ALP_DMDTYPE_XGA = 1 + ALP_DMDTYPE_SXGA_PLUS = 2 + ALP_DMDTYPE_1080P_095A = 3 + ALP_DMDTYPE_XGA_07A = 4 + ALP_DMDTYPE_XGA_055A = 5 + ALP_DMDTYPE_XGA_055X = 6 + ALP_DMDTYPE_WUXGA_096A = 7 + ALP_DMDTYPE_WQXGA_400MHZ_090A = 8 + ALP_DMDTYPE_WQXGA_480MHZ_090A = 9 + ALP_DMDTYPE_WXGA_S450 = 12 + ALP_DMDTYPE_DISCONNECT = 255 + + +@dataclass_json +@dataclass +class DMDParameters: + """Class containing DMD configurations and status. + + Further information: ALP-4.2 API Description (14/04/2020). + + Attributes: + add_illumination_time_us (int): + Extra time in microseconds to account for the spectrometer's + "dead time". + initial_memory (int): + Initial memory available before sending patterns to DMD. + dark_phase_time_us (int, optional): + Time in microseconds taken by the DMD mirrors to completely tilt. + Minimum time for XGA type DMD is 44 us. + illumination_time_us (int, optional): + Duration of the display of one pattern in a DMD sequence. Units in + microseconds. 
+ picture_time_us (int, optional): + Time between the start of two consecutive pictures (i.e. this + parameter defines the image display rate). Units in microseconds. + synch_pulse_width_us (int, optional): + Duration of DMD's frame synch output pulse. Units in microseconds. + synch_pulse_delay (int, optional): + Time in microseconds between start of the frame synch output pulse + and the start of the pattern display (in master mode). + device_number (int, optional): + Serial number of the ALP device. + ALP_version (int, optional): + Version number of the ALP device. + id (int, optional): + ALP device identifier for a DMD provided by the API. + synch_polarity (str, optional): + Frame synch output signal polarity: 'High' or 'Low'. + trigger_edge (str, optional): + Trigger input signal slope. Can be a 'Falling' or 'Rising' edge. + type (str, optional): + Digital light processing (DLP) chip present in DMD. + usb_connection (bool, optional): + True if USB connection is ok. + ddc_fpga_temperature (float, optional): + Temperature of the DDC FPGA (IC4) at DMD connection. Units in °C. + apps_fpga_temperature (float, optional): + Temperature of the Applications FPGA (IC3) at DMD connection. Units + in °C. + pcb_temperature (float, optional): + Internal temperature of the temperature sensor IC (IC2) at DMD + connection. Units in °C. + display_height (int, optional): + DMD display height in pixels. + display_width (int, optional): + DMD display width in pixels. + patterns (int, optional): + Number of patterns uploaded to DMD. + unused_memory (int, optional): + Memory available after sending patterns to DMD. + bitplanes (int, optional): + Bit depth of the patterns to be displayed. Values supported from 1 + to 8. + DMD (InitVar[ALP4.ALP4], optional): + Initialization DMD object. Can be used to automatically fill most of + the DMDParameters' attributes. Unnecessary if reconstructing object + from JSON file. Default is None. 
+ class_description (str): + Class description used to improve readability when dumped to JSON + file. Default is 'DMD parameters'. + """ + + add_illumination_time_us: int + initial_memory: int + + dark_phase_time_us: Optional[int] = None + illumination_time_us: Optional[int] = None + picture_time_us: Optional[int] = None + synch_pulse_width_us: Optional[int] = None + synch_pulse_delay: Optional[int] = None + + device_number: Optional[int] = None + ALP_version: Optional[int] = None + id: Optional[int] = None + + synch_polarity: Optional[str] = None + trigger_edge: Optional[str] = None + + # synch_polarity_OUT1: Optional[str] = None + # synch_period_OUT1: Optional[str] = None + # synch_gate_OUT1: Optional[str] = None + + type: Optional[str] = None + usb_connection: Optional[bool] = None + + ddc_fpga_temperature: Optional[float] = None + apps_fpga_temperature: Optional[float] = None + pcb_temperature: Optional[float] = None + + display_height: Optional[int] = None + display_width: Optional[int] = None + + patterns: Optional[int] = None + patterns_wp: Optional[int] = None + unused_memory: Optional[int] = None + bitplanes: Optional[int] = None + + DMD: InitVar[ALP4.ALP4] = None + + class_description: str = 'DMD parameters' + + + def __post_init__(self, DMD: Optional[ALP4.ALP4] = None): + """ Post initialization of attributes. + + Receives a DMD object and directly asks it for its configurations and + status, then sets the majority of DMDParameters' attributes. + During reconstruction from JSON, DMD is set to None and the function + does nothing, leaving initialization to the standard __init__ function. + + Args: + DMD (ALP4.ALP4, optional): + Connected DMD. Defaults to None. 
+ """ + if DMD == None: + pass + + else: + self.device_number = DMD.DevInquire(ALP4.ALP_DEVICE_NUMBER) + self.ALP_version = DMD.DevInquire(ALP4.ALP_VERSION) + self.id = DMD.ALP_ID.value + + polarity = DMD.DevInquire(ALP4.ALP_SYNCH_POLARITY) + if polarity == 2006: + self.synch_polarity = 'High' + elif polarity == 2007: + self.synch_polarity = 'Low' + + edge = DMD.DevInquire(ALP4.ALP_TRIGGER_EDGE) + if edge == 2008: + self.trigger_edge = 'Falling' + elif edge == 2009: + self.trigger_edge = 'Rising' + + # synch_polarity_OUT1 = + + self.type = DMDTypes(DMD.DevInquire(ALP4.ALP_DEV_DMDTYPE)) + + if DMD.DevInquire(ALP4.ALP_USB_CONNECTION) == 0: + self.usb_connection = True + else: + self.usb_connection = False + + # Temperatures converted to °C + self.ddc_fpga_temperature = DMD.DevInquire( + ALP4.ALP_DDC_FPGA_TEMPERATURE)/256 + self.apps_fpga_temperature = DMD.DevInquire( + ALP4.ALP_APPS_FPGA_TEMPERATURE)/256 + self.pcb_temperature = DMD.DevInquire( + ALP4.ALP_PCB_TEMPERATURE)/256 + + self.display_width = DMD.nSizeX + self.display_height = DMD.nSizeY + + + def update_memory(self, unused_memory: int): + + self.unused_memory = unused_memory + self.patterns = self.initial_memory - unused_memory + + + def update_sequence_parameters(self, add_illumination_time, + DMD: Optional[ALP4.ALP4] = None): + + self.bitplanes = DMD.SeqInquire(ALP4.ALP_BITPLANES) + self.illumination_time_us = DMD.SeqInquire(ALP4.ALP_ILLUMINATE_TIME) + self.picture_time_us = DMD.SeqInquire(ALP4.ALP_PICTURE_TIME) + self.dark_phase_time_us = self.picture_time_us - self.illumination_time_us + self.synch_pulse_width_us = DMD.SeqInquire(ALP4.ALP_SYNCH_PULSEWIDTH) + self.synch_pulse_delay = DMD.SeqInquire(ALP4.ALP_SYNCH_DELAY) + self.add_illumination_time_us = add_illumination_time + + + +# setup +def calculate_timings(integration_time: float = 1, + integration_delay: int = 0, + add_illumination_time: int = 300, + synch_pulse_delay: int = 0, + dark_phase_time: int = 44, + ) -> Tuple[int, int, int]: + 
"""Calculate spectrometer and DMD dependent timings. + + Args: + integration_time (float) [ms]: + Spectrometer exposure time during one scan in milliseconds. + Default is 1 ms. + integration_delay (int) [µs]: + Parameter used to start the integration time not immediately after + the measurement request (or on an external hardware trigger), but + after a specified delay. Unit is based on internal FPGA clock cycle. + Default is 0 us. + add_illumination_time (int) [µs]: + Extra time in microseconds to account for the spectrometer's + "dead time". Default is 300 us. + synch_pulse_delay (int) [µs]: + Time in microseconds between start of the frame synch output pulse + and the start of the pattern display (in master mode). Default is + 0 us. + dark_phase_time (int) [µs]: + Time in microseconds taken by the DMD mirrors to completely tilt. + Minimum time for XGA type DMD is 44 us. Default is 44 us. + + Returns: + [Tuple]: DMD timings which depend on spectrometer's parameters. + synch_pulse_width: Duration of DMD's frame synch output pulse. Units + in microseconds. + illumination_time: Duration of the display of one pattern in a DMD + sequence. Units in microseconds. + picture_time: Time between the start of two consecutive pictures + (i.e. this parameter defines the image display rate). Units in + microseconds. + """ + + illumination_time = (integration_delay/1000 + integration_time*1000 + + add_illumination_time) + picture_time = illumination_time + dark_phase_time + synch_pulse_width = round(illumination_time/2 + synch_pulse_delay) + illumination_time = round(illumination_time) + picture_time = round(picture_time) + + return synch_pulse_width, illumination_time, picture_time + + +def setup_DMD(DMD: ALP4, + add_illumination_time: int, + initial_memory: int + ) -> DMDParameters: + """Create DMD metadata. + + Creates basic DMD metadata, but leaves most of its fields empty to be set + later. Sets up the initial free memory present in the DMD. 
+ This function's name is used to create cohesion between spectrometer and DMD + related functions. + + Args: + DMD (ALP4): + Connected DMD object. + add_illumination_time (int): + Extra time in microseconds to account for the spectrometer's + "dead time". + initial_memory (int): + Initial memory available in DMD after initialization. + + Returns: + DMDParameters: + DMD metadata object. + """ + + return DMDParameters( + add_illumination_time_us=add_illumination_time, + initial_memory=initial_memory, + DMD=DMD) + + +def _sequence_limits(DMD: ALP4, + pattern_compression: int, + sequence_lenght: int, + pos_neg: bool = True) -> int: + """Set sequence limits based on a sequence already uploaded to DMD. + + Args: + DMD (ALP4): + Connected DMD object. + pattern_compression (int): + Percentage of total available patterns to be present in an + acquisition sequence. + sequence_lenght (int): + Number of patterns present in DMD memory. + pos_neg (bool): + Boolean indicating if sequence is formed by positive and negative + patterns. Default is True. + + Returns: + frames (int): + Number of patterns to be used from a sequence based on the pattern + compression. + """ + + # Choosing beginning of the sequence + # DMD.SeqControl(ALP_BITNUM, 1) + DMD.SeqControl(ALP_FIRSTFRAME, 0) + + # Choosing the end of the sequence + if (round(pattern_compression * sequence_lenght) % 2 == 0) or not (pos_neg): + frames = round(pattern_compression * sequence_lenght) + else: + frames = round(pattern_compression * sequence_lenght) + 1 + + DMD.SeqControl(ALP_LASTFRAME, frames - 1) + + return frames + + +def _update_sequence(DMD: ALP4, + DMD_params: DMDParameters, + acquisition_params: AcquisitionParameters, + pattern_source: str, + pattern_prefix: str, + pattern_order: List[int], + bitplanes: int = 1): + """Send new complete pattern sequence to DMD. + + Args: + DMD (ALP4): + Connected DMD object. 
+ DMD_params (DMDParameters): + DMD metadata object to be updated with pattern related data and with + memory available after patterns are sent to DMD. + acquisition_params (AcquisitionParameters): + Acquisition related metadata object. User must partially fill up + with pattern_compression, pattern_dimension_x, pattern_dimension_y, + zoom, x and y offset of patterns displayed on the DMD. + pattern_source (str): + Pattern source folder. + pattern_prefix (str): + Prefix used in pattern naming. + pattern_order (List[int]): + List of the pattern indices in a certain order for upload to DMD. + bitplanes (int, optional): + Pattern bitplanes. Defaults to 1. + """ + + import cv2 + + path_base = Path(pattern_source) + + seqId = DMD.SeqAlloc(nbImg=len(pattern_order), + bitDepth=bitplanes) + + zoom = acquisition_params.zoom + x_offset = acquisition_params.xw_offset + y_offset = acquisition_params.yh_offset + Np = acquisition_params.pattern_dimension_x + + dmd_height = DMD_params.display_height + dmd_width = DMD_params.display_width + len_im = int(dmd_height / zoom) + + t = perf_counter_ns() + + # for adaptive patterns into a ROI + apply_mask = False + mask_index = acquisition_params.mask_index + + if len(mask_index) > 0: + apply_mask = True + Npx = acquisition_params.pattern_dimension_x + Npy = acquisition_params.pattern_dimension_y + mask_element_nbr = len(mask_index) + x_mask_coord = acquisition_params.x_mask_coord + y_mask_coord = acquisition_params.y_mask_coord + x_mask_length = x_mask_coord[1] - x_mask_coord[0] + y_mask_length = y_mask_coord[1] - y_mask_coord[0] + + first_pass = True + for index,pattern_name in enumerate(tqdm(pattern_order, unit=' patterns', total=len(pattern_order))): + # read numpy patterns + path = path_base.joinpath(f'{pattern_prefix}_{pattern_name}.npy') + im = np.load(path) + + patterns = np.zeros((dmd_height, dmd_width), dtype=np.uint8) + + if apply_mask == True: # for adaptive patterns into a ROI + pat_mask_all = 
np.zeros(y_mask_length*x_mask_length) # initialize a vector of length = size of the cropped mask + pat_mask_all[mask_index] = im[:mask_element_nbr] #pat_re_vec[:mask_element_nbr] # put the pattern into the vector + pat_mask_all_mat = np.reshape(pat_mask_all, [y_mask_length, x_mask_length]) # reshape the vector into a matrix of the 2d cropped mask + # resize the matrix to the DMD size + pat_mask_all_mat_DMD = cv2.resize(pat_mask_all_mat, (int(dmd_height*x_mask_length/(Npx*zoom)), int(dmd_height*y_mask_length/(Npy*zoom))), interpolation = cv2.INTER_NEAREST) + + if first_pass == True: + first_pass = False + len_im3 = pat_mask_all_mat_DMD.shape + + patterns[y_offset:y_offset+len_im3[0], x_offset:x_offset+len_im3[1]] = pat_mask_all_mat_DMD + else: # send the entire square pattern without the mask + im_mat = np.reshape(im, [Np,Np]) + im_HD = cv2.resize(im_mat, (int(dmd_height/zoom), int(dmd_height/zoom)), interpolation = cv2.INTER_NEAREST) + + if first_pass == True: + len_im = im_HD.shape + first_pass = False + + patterns[y_offset:y_offset+len_im[0], x_offset:x_offset+len_im[1]] = im_HD + + # if pattern_name == 800: + # plt.figure() + # # plt.imshow(pat_c_re) + # # plt.imshow(pat_mask_all_mat) + # # plt.imshow(pat_mask_all_mat_DMD) + # plt.imshow(np.rot90(patterns,2)) + # plt.colorbar() + # plt.title('pattern n°' + str(pattern_name)) + + patterns = patterns.ravel() + + DMD.SeqPut( + imgData=patterns.copy(), + PicOffset=index, + PicLoad=1) + + print(f'\nTime for sending all patterns: ' + f'{(perf_counter_ns() - t)/1e+9} s') + + +def setup_patterns(DMD: ALP4, + metadata: MetaData, + DMD_params: DMDParameters, + acquisition_params: AcquisitionParameters, + cov_path: str = None, + pattern_to_display: str = 'white', + loop: bool = False) -> None: + """Read and send patterns to DMD. + + Reads patterns from a file and sends a percentage of them to the DMD, + considering positive and negative Hadamard patterns, which should be even in + number. 
+ Prints time taken to read all patterns and send the requested ones + to DMD. + Updates available memory in DMD metadata object (DMD_params). + + Args: + DMD (ALP4): + Connected DMD object. + metadata (MetaData): + Metadata concerning the experiment, paths, file inputs and file + outputs. Must be created and filled up by the user. + DMD_params (DMDParameters): + DMD metadata object to be updated with pattern related data and with + memory available after patterns are sent to DMD. + acquisition_params (AcquisitionParameters): + Acquisition related metadata object. User must partially fill up + with pattern_compression, pattern_dimension_x, pattern_dimension_y, + zoom, x and y offset of patterns displayed on the DMD. + pattern_to_display (str): + Pattern to display in loop mode: 'white', 'black' or 'gray'. + Default is 'white'. + loop (bool): + If True, project one or a few patterns continuously (see + AlpProjStartCont in the ALP documentation for details). Default is + False. + + """ + + file = np.load(Path(metadata.pattern_order_source)) + pattern_order = file['pattern_order'] + pos_neg = file['pos_neg'] + + if loop == True: + pos_neg = False + if pattern_to_display == 'white': + pattern_order = np.array(pattern_order[0:1], dtype=np.int16) + elif pattern_to_display == 'black': + pattern_order = np.array(pattern_order[1:2], dtype=np.int16) + elif pattern_to_display == 'gray': + index = int(np.where(pattern_order == 1953)[0]) + print(index) + pattern_order = np.array(pattern_order[index:index+1], dtype=np.int16) + + bitplanes = 1 + DMD_params.bitplanes = bitplanes + + if (DMD_params.initial_memory - DMD.DevInquire(ALP_AVAIL_MEMORY) == + len(pattern_order)) and loop == False: + print('Reusing patterns from previous acquisition') + acquisition_params.pattern_amount = _sequence_limits( + DMD, + acquisition_params.pattern_compression, + len(pattern_order), + pos_neg=pos_neg) + + else: + if (DMD.Seqs): + DMD.FreeSeq() + + _update_sequence(DMD, DMD_params, acquisition_params, metadata.pattern_source, metadata.pattern_prefix, + pattern_order, bitplanes) + print(f'DMD available memory after 
sequence allocation: ' + f'{DMD.DevInquire(ALP_AVAIL_MEMORY)}') + acquisition_params.pattern_amount = _sequence_limits( + DMD, + acquisition_params.pattern_compression, + len(pattern_order), + pos_neg=pos_neg) + + acquisition_params.patterns = ( + pattern_order[0:acquisition_params.pattern_amount]) + + # Confirm memory allocated in DMD + DMD_params.update_memory(DMD.DevInquire(ALP_AVAIL_MEMORY)) + + +def setup_timings(DMD: ALP4, + DMD_params: DMDParameters, + picture_time: int, + illumination_time: int, + synch_pulse_delay: int, + synch_pulse_width: int, + trigger_in_delay: int, + add_illumination_time: int) -> None: + """Setup pattern sequence timings in DMD. + + Send previously user-defined plus calculated timings to DMD. + Updates DMD metadata with sequence and timing related data. + This function has no default values for timings and lets the burden of + setting them to the setup function. + + Args: + DMD (ALP4): + Connected DMD object. + DMD_params (DMDParameters): + DMD metadata object to be updated with pattern related data and with + memory available after patterns are sent to DMD. + picture_time (int): + Time between the start of two consecutive pictures (i.e. this + parameter defines the image display rate). Units in microseconds. + illumination_time (int): + Duration of the display of one pattern in a DMD sequence. + Units in microseconds. + synch_pulse_delay (int): + Time in microseconds between start of the frame synch output pulse + and the start of the pattern display (in master mode). + synch_pulse_width (int): + Duration of DMD's frame synch output pulse. Units in microseconds. + trigger_in_delay (int): + Time in microseconds between the incoming trigger edge and the start + of the pattern display on DMD (slave mode). + add_illumination_time (int): + Extra time in microseconds to account for the spectrometer's + "dead time". 
+ """ + + DMD.SetTiming(illuminationTime=illumination_time, + pictureTime=picture_time, + synchDelay=synch_pulse_delay, + synchPulseWidth=synch_pulse_width, + triggerInDelay=trigger_in_delay) + + DMD_params.update_sequence_parameters(add_illumination_time, DMD=DMD) + + +def change_patterns(DMD: ALP4, + acquisition_params: AcquisitionParameters, + zoom: int = 1, + xw_offset: int = 0, + yh_offset: int = 0, + force_change: bool = False + ): + """ + Delete patterns from the DMD memory when the zoom or the (x, y) offset changes. + + Args: + DMD (ALP4): + Connected DMD. + acquisition_params (AcquisitionParameters): + Acquisition related metadata object. User must partially fill up + with pattern_compression, pattern_dimension_x, pattern_dimension_y, + zoom, x and y offset of patterns displayed on the DMD. + zoom (int): + Digital zoom. Default is 1. + xw_offset (int): + Offset in the width direction of the patterns for zoom > 1. + Default is 0. + yh_offset (int): + Offset in the height direction of the patterns for zoom > 1. + Default is 0. + force_change (bool): + Force the pattern sequence to be replaced. Default is False. 
+ """ + + if acquisition_params.zoom != zoom or acquisition_params.xw_offset != xw_offset or acquisition_params.yh_offset != yh_offset or force_change == True: + if (DMD.Seqs): + DMD.FreeSeq() + + +def disconnect_DMD(DMD: ALP4): + if DMD is not None: + # Stop the sequence display + try: + DMD.Halt() + # Free the sequence from the onboard memory (if any is present) + if (DMD.Seqs): + DMD.FreeSeq() + + DMD.Free() + print('DMD disconnected') + + except: + print('problem halting the DMD') + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/spas/__init__.py b/spas/__init__.py index a711ef3..f0f2ee2 100644 --- a/spas/__init__.py +++ b/spas/__init__.py @@ -1,14 +1,14 @@ # -*- coding: utf-8 -*- __author__ = 'Guilherme Beneti Martins' -#from .acquisition import init, setup, acquire, disconnect #, init_2arms, disconnect_2arms -from .metadata import MetaData, AcquisitionParameters -from .metadata import DMDParameters, SpectrometerParameters -from .metadata import read_metadata, save_metadata -#from .generate import * -from .reconstruction import * -from .visualization import * -from .noise import * -from .reconstruction_nn import * -from .convert_spec_to_rgb import * -from .plot_spec_to_rgb_image import * \ No newline at end of file +# #from .acquisition import init, setup, acquire, disconnect #, init_2arms, disconnect_2arms +# from .metadata import MetaData, AcquisitionParameters +# from .metadata import DMDParameters, SpectrometerParameters +# from .metadata import read_metadata, save_metadata +# #from .generate import * +# from .reconstruction import * +# from .visualization import * +# from .noise import * +# from .reconstruction_nn import * +# from .convert_spec_to_rgb import * +# from .plot_spec_to_rgb_image import * \ No newline at end of file diff --git a/spas/acquisition.py b/spas/acquisition_SPC2D.py similarity index 68% rename from spas/acquisition.py rename to spas/acquisition_SPC2D.py index 9476fbc..de2cf75 100644 --- a/spas/acquisition.py +++ 
b/spas/acquisition_SPC2D.py @@ -3,47 +3,8 @@ """Acquisition utility functions. -Typical usage example: - -# Initialization -spectrometer, DMD, DMD_initial_memory = init() - -# Acquisition -metadata = MetaData( - output_directory=Path('./data/...'), - pattern_order_source=Path('./communication/communication.txt'), - pattern_source=Path('./Patterns/...), - pattern_prefix='Hadamard_64x64' - experiment_name='...', - light_source='...', - object='...', - filter='...', - description='...') - -acquisition_parameters = AcquisitionParameters( - pattern_compression=1.0, - pattern_dimension_x=64, - pattern_dimension_y=64) - -spectrometer_params, DMD_params, wavelenghts = setup( - spectrometer=spectrometer, - DMD=DMD, - DMD_initial_memory=DMD_initial_memory, - metadata=metadata, - acquisition_params=acquisition_parameters, - integration_time=1.0,) - -acquire( - spectrometer, - DMD, - metadata, - spectrometer_params, - DMD_params, - acquisition_parameters, - wavelenghts) - -# Disconnect -disconnect(spectrometer, DMD) + The acquisition module is a generic module that calls functions for the different setups (SPC2D_1arm, SPC2D_2arms, SPC1D and SPIM). + """ import warnings @@ -53,6 +14,7 @@ from pathlib import Path from multiprocessing import Process, Queue import shutil +import math import numpy as np from PIL import Image @@ -72,8 +34,8 @@ class ALP4: pass from tqdm import tqdm -from spas.metadata import DMDParameters, MetaData, AcquisitionParameters -from spas.metadata import SpectrometerParameters, save_metadata, CAM, save_metadata_2arms +from spas.metadata_SPC2D import DMDParameters, MetaData, AcquisitionParameters +from spas.metadata_SPC2D import SpectrometerParameters, save_metadata, CAM, save_metadata_2arms from spas.reconstruction_nn import reconstruct_process, plot_recon, ReconstructionParameters # DLL for the IDS CAMERA @@ -82,17 +44,12 @@ class ALP4: except: print('ueye DLL not installed') -# from pyueye import ueye, ueye_tools from matplotlib import pyplot as plt -from PIL import 
Image +from IPython import get_ipython import ctypes as ct -import math import logging import time import threading -#from spas.visualization import snapshotVisu - - def _init_spectrometer() -> Avantes: @@ -120,8 +77,11 @@ def _init_spectrometer() -> Avantes: return ava -def _init_DMD() -> Tuple[ALP4, int]: +def _init_DMD(dmd_lib_version: str = '4.2') -> Tuple[ALP4, int]: """Initialize a DMD and clean its allocated memory from a previous use. + + Args: + dmd_lib_version (str): Version of the DMD library. Returns: Tuple[ALP4, int]: Tuple containing initialized DMD object and DMD @@ -129,21 +89,36 @@ def _init_DMD() -> Tuple[ALP4, int]: """ # Initializing DMD - - dll_path = Path(__file__).parent.parent.joinpath('lib/alpV42').__str__() + stop_init = False + if dmd_lib_version == '4.1': + print('DMD library version ' + dmd_lib_version + ' is not installed; please install it at the location: "openspyrit/spas/alpV41"') + stop_init = True + elif dmd_lib_version == '4.2': + dll_path = Path(__file__).parent.parent.joinpath('lib/alpV42').__str__() + DMD = ALP4(version='4.2',libDir=dll_path) + elif dmd_lib_version == '4.3': + dll_path = Path(__file__).parent.parent.joinpath('lib/alpV43').__str__() + DMD = ALP4(version='4.3',libDir=dll_path) + else: + print('Unknown version of the DMD library') + stop_init = True + + if stop_init == False: + DMD.Initialize(DeviceNum=None) - DMD = ALP4(version='4.2',libDir=dll_path) - DMD.Initialize(DeviceNum=None) - - #print(f'DMD initial available memory: {DMD.DevInquire(ALP_AVAIL_MEMORY)}') - print('DMD connected') - - return DMD, DMD.DevInquire(ALP_AVAIL_MEMORY) - + #print(f'DMD initial available memory: {DMD.DevInquire(ALP_AVAIL_MEMORY)}') + print('DMD connected') + + return DMD, DMD.DevInquire(ALP_AVAIL_MEMORY) + else: + print('DMD initialisation aborted') -def init() -> Tuple[Avantes, ALP4, int]: +def init(dmd_lib_version: str = '4.2') -> Tuple[Avantes, ALP4, int]: """Call functions to initialize spectrometer and DMD. 
+ + Args: + dmd_lib_version (str): Version of the DMD library. Returns: Tuple[Avantes, ALP4, int]: Tuple containing equipments and DMD initial @@ -156,11 +131,15 @@ def init() -> Tuple[Avantes, ALP4, int]: Initial memory available in DMD after initialization. """ - DMD, DMD_initial_memory = _init_DMD() + DMD, DMD_initial_memory = _init_DMD(dmd_lib_version) return _init_spectrometer(), DMD, DMD_initial_memory -def init_2arms() -> Tuple[Avantes, ALP4, int]: + +def init_2arms(dmd_lib_version: str = '4.2') -> Tuple[Avantes, ALP4, int]: """Call functions to initialize spectrometer and DMD. + + Args: + dmd_lib_version (str): Version of the DMD library. Returns: Tuple[Avantes, ALP4, int]: Tuple containing equipments and DMD initial @@ -173,10 +152,11 @@ def init_2arms() -> Tuple[Avantes, ALP4, int]: Initial memory available in DMD after initialization. """ - DMD, DMD_initial_memory = _init_DMD() + DMD, DMD_initial_memory = _init_DMD(dmd_lib_version) camPar = _init_CAM() return _init_spectrometer(), DMD, DMD_initial_memory, camPar + def _calculate_timings(integration_time: float = 1, integration_delay: int = 0, add_illumination_time: int = 300, @@ -268,7 +248,7 @@ def _setup_spectrometer(ava: Avantes, # Get the number of pixels that the spectrometer has initial_available_pixels = ava.get_num_pixels() - print(f'\nThe spectrometer has {initial_available_pixels} pixels') + # print(f'\nThe spectrometer has {initial_available_pixels} pixels') # Enable the 16-bit AD converter for High-Resolution ava.use_high_res_adc(True) @@ -360,9 +340,10 @@ def _setup_DMD(DMD: ALP4, DMD=DMD) -def _sequence_limits(DMD: ALP4, pattern_compression: int, - sequence_lenght: int, - pos_neg: bool = True) -> int: +def _sequence_limits(DMD: ALP4, + pattern_compression: int, + sequence_lenght: int, + pos_neg: bool = True) -> int: """Set sequence limits based on a sequence already uploaded to DMD. 
Args: @@ -384,6 +365,7 @@ def _sequence_limits(DMD: ALP4, pattern_compression: int, """ # Choosing beggining of the sequence + # DMD.SeqControl(ALP_BITNUM, 1) DMD.SeqControl(ALP_FIRSTFRAME, 0) # Choosing the end of the sequence @@ -398,6 +380,8 @@ def _update_sequence(DMD: ALP4, + DMD_params: DMDParameters, + acquisition_params: AcquisitionParameters, pattern_source: str, pattern_prefix: str, pattern_order: List[int], @@ -407,6 +391,13 @@ def _update_sequence(DMD: ALP4, Args: DMD (ALP4): Connected DMD object. + DMD_params (DMDParameters): + DMD metadata object to be updated with pattern related data and with + memory available after patterns are sent to DMD. + acquisition_params (AcquisitionParameters): + Acquisition related metadata object. User must partially fill up + with pattern_compression, pattern_dimension_x, pattern_dimension_y, + zoom, x and y offset of patterns displayed on the DMD. pattern_source (str): Pattern source folder. pattern_preffix (str): Pattern number prefix. pattern_order (List[int]): List of the pattern indices in a certain order. bitplanes (int, optional): Pattern bitplanes. Defaults to 1. 
""" + import cv2 + path_base = Path(pattern_source) seqId = DMD.SeqAlloc(nbImg=len(pattern_order), bitDepth=bitplanes) - - print(f'Pattern order size: {len(pattern_order)}') + + zoom = acquisition_params.zoom + x_offset = acquisition_params.xw_offset + y_offset = acquisition_params.yh_offset + Np = acquisition_params.pattern_dimension_x + + dmd_height = DMD_params.display_height + dmd_width = DMD_params.display_width + len_im = int(dmd_height / zoom) + t = perf_counter_ns() - for index,pattern_name in enumerate(tqdm(pattern_order, unit=' patterns')): - path = path_base.joinpath(f'{pattern_prefix}_{pattern_name}.png') - image = Image.open(path) + + # for adaptative patterns into a ROI + apply_mask = False + mask_index = acquisition_params.mask_index + + if len(mask_index) > 0: + apply_mask = True + Npx = acquisition_params.pattern_dimension_x + Npy = acquisition_params.pattern_dimension_y + mask_element_nbr = len(mask_index) + x_mask_coord = acquisition_params.x_mask_coord + y_mask_coord = acquisition_params.y_mask_coord + x_mask_length = x_mask_coord[1] - x_mask_coord[0] + y_mask_length = y_mask_coord[1] - y_mask_coord[0] + + first_pass = True + for index,pattern_name in enumerate(tqdm(pattern_order, unit=' patterns', total=len(pattern_order))): + # read numpy patterns + path = path_base.joinpath(f'{pattern_prefix}_{pattern_name}.npy') + im = np.load(path) + + patterns = np.zeros((dmd_height, dmd_width), dtype=np.uint8) + + if apply_mask == True: # for adaptative patterns into a ROI + pat_mask_all = np.zeros(y_mask_length*x_mask_length) # initialize a vector of lenght = size of the cropped mask + pat_mask_all[mask_index] = im[:mask_element_nbr] #pat_re_vec[:mask_element_nbr] # put the pattern into the vector + pat_mask_all_mat = np.reshape(pat_mask_all, [y_mask_length, x_mask_length]) # reshape the vector into a matrix of the 2d cropped mask + # resize the matrix to the DMD size + pat_mask_all_mat_DMD = cv2.resize(pat_mask_all_mat, 
(int(dmd_height*x_mask_length/(Npx*zoom)), int(dmd_height*y_mask_length/(Npy*zoom))), interpolation = cv2.INTER_NEAREST) + + if first_pass == True: + first_pass = False + len_im3 = pat_mask_all_mat_DMD.shape + + patterns[y_offset:y_offset+len_im3[0], x_offset:x_offset+len_im3[1]] = pat_mask_all_mat_DMD + else: # send the entire square pattern without the mask + im_mat = np.reshape(im, [Np,Np]) + im_HD = cv2.resize(im_mat, (int(dmd_height/zoom), int(dmd_height/zoom)), interpolation = cv2.INTER_NEAREST) + + if first_pass == True: + len_im = im_HD.shape + first_pass = False + + patterns[y_offset:y_offset+len_im[0], x_offset:x_offset+len_im[1]] = im_HD + + # if pattern_name == 800: + # plt.figure() + # # plt.imshow(pat_c_re) + # # plt.imshow(pat_mask_all_mat) + # # plt.imshow(pat_mask_all_mat_DMD) + # plt.imshow(np.rot90(patterns,2)) + # plt.colorbar() + # plt.title('pattern n°' + str(pattern_name)) + + patterns = patterns.ravel() - # Converting image to nparray and transforming it to a 1D vector (ravel) - patterns = np.array(image,dtype=np.uint8).ravel() DMD.SeqPut( imgData=patterns.copy(), PicOffset=index, PicLoad=1) - + print(f'\nTime for sending all patterns: ' f'{(perf_counter_ns() - t)/1e+9} s') -def _setup_patterns(DMD: ALP4, metadata: MetaData, DMD_params: DMDParameters, - acquisition_params: AcquisitionParameters, - cov_path: str = None) -> None: +def _setup_patterns(DMD: ALP4, + metadata: MetaData, + DMD_params: DMDParameters, + acquisition_params: AcquisitionParameters, + cov_path: str = None, + pattern_to_display: str = 'white', + loop: bool = False) -> None: """Read and send patterns to DMD. Reads patterns from a file and sends a percentage of them to the DMD, @@ -462,18 +517,33 @@ def _setup_patterns(DMD: ALP4, metadata: MetaData, DMD_params: DMDParameters, memory available after patterns are sent to DMD. acquisition_params (AcquisitionParameters): Acquisition related metadata object. 
User must partially fill up - with pattern_compression, pattern_dimension_x, pattern_dimension_y. + with pattern_compression, pattern_dimension_x, pattern_dimension_y, + zoom, x and y offset of patterns displayed on the DMD. + loop (bool): + Whether to project one or a few patterns continuously in a loop (see AlpProjStartCont + in the ALP documentation for more detail). Default is False. """ - + file = np.load(Path(metadata.pattern_order_source)) pattern_order = file['pattern_order'] pos_neg = file['pos_neg'] - + + if loop == True: + pos_neg = False + if pattern_to_display == 'white': + pattern_order = np.array(pattern_order[0:1], dtype=np.int16) + elif pattern_to_display == 'black': + pattern_order = np.array(pattern_order[1:2], dtype=np.int16) + elif pattern_to_display == 'gray': + index = int(np.where(pattern_order == 1953)[0]) + print(index) + pattern_order = np.array(pattern_order[index:index+1], dtype=np.int16) + bitplanes = 1 DMD_params.bitplanes = bitplanes if (DMD_params.initial_memory - DMD.DevInquire(ALP_AVAIL_MEMORY) == - len(pattern_order)): + len(pattern_order)) and loop == False: print('Reusing patterns from previous acquisition') acquisition_params.pattern_amount = _sequence_limits( DMD, @@ -485,7 +555,7 @@ def _setup_patterns(DMD: ALP4, metadata: MetaData, DMD_params: DMDParameters, if (DMD.Seqs): DMD.FreeSeq() - _update_sequence(DMD, metadata.pattern_source, metadata.pattern_prefix, + _update_sequence(DMD, DMD_params, acquisition_params, metadata.pattern_source, metadata.pattern_prefix, pattern_order, bitplanes) print(f'DMD available memory after sequence allocation: ' f'{DMD.DevInquire(ALP_AVAIL_MEMORY)}') @@ -494,7 +564,7 @@ def _setup_patterns(DMD: ALP4, metadata: MetaData, DMD_params: DMDParameters, acquisition_params.pattern_compression, len(pattern_order), pos_neg=pos_neg) - + acquisition_params.patterns = ( pattern_order[0:acquisition_params.pattern_amount]) @@ -502,9 +572,12 @@ 
DMD_params.update_memory(DMD.DevInquire(ALP_AVAIL_MEMORY)) -def _setup_patterns_2arms(DMD: ALP4, metadata: MetaData, DMD_params: DMDParameters, - acquisition_params: AcquisitionParameters, camPar: CAM, - cov_path: str = None) -> None: +def _setup_patterns_2arms(DMD: ALP4, + metadata: MetaData, + DMD_params: DMDParameters, + acquisition_params: AcquisitionParameters, + camPar: CAM, + cov_path: str = None) -> None: """Read and send patterns to DMD. Reads patterns from a file and sends a percentage of them to the DMD, @@ -525,7 +598,15 @@ def _setup_patterns_2arms(DMD: ALP4, metadata: MetaData, DMD_params: DMDParamete memory available after patterns are sent to DMD. acquisition_params (AcquisitionParameters): Acquisition related metadata object. User must partially fill up - with pattern_compression, pattern_dimension_x, pattern_dimension_y. + with pattern_compression, pattern_dimension_x, pattern_dimension_y, + zoom, x and y offset of patterns displayed on the DMD. + camPar (CAM): + Metadata object of the IDS monochrome camera + cov_path (str): + Path to the covariance matrix used for reconstruction. + It must be a .npy (numpy) or .pt (pytorch) file. It is converted to + a torch tensor for reconstruction. 
+ """ file = np.load(Path(metadata.pattern_order_source)) @@ -533,11 +614,14 @@ def _setup_patterns_2arms(DMD: ALP4, metadata: MetaData, DMD_params: DMDParamete pattern_order = pattern_order.astype('int32') # copy the black pattern image (png) to the number = -1 - black_pattern_dest_path = Path( metadata.pattern_source + '/' + metadata.pattern_prefix + '_' + '-1.png' ) + # black_pattern_dest_path = Path( metadata.pattern_source + '/' + metadata.pattern_prefix + '_' + '-1.png' ) + black_pattern_dest_path = Path( metadata.pattern_source + '/' + metadata.pattern_prefix + '_' + '-1.npy' ) if black_pattern_dest_path.is_file() == False: + # black_pattern_orig_path = Path( metadata.pattern_source + '/' + metadata.pattern_prefix + '_' + + # str(camPar.black_pattern_num) + '.png' ) black_pattern_orig_path = Path( metadata.pattern_source + '/' + metadata.pattern_prefix + '_' + - str(camPar.black_pattern_num) + '.png' ) + str(camPar.black_pattern_num) + '.npy' ) shutil.copyfile(black_pattern_orig_path, black_pattern_dest_path) @@ -547,7 +631,7 @@ def _setup_patterns_2arms(DMD: ALP4, metadata: MetaData, DMD_params: DMDParamete while True: try: pattern_order[inc] # except error from the end of array to stop the loop - if (inc % 16) == 0: + if (inc % camPar.gate_period) == 0:#16) == 0: pattern_order = np.insert(pattern_order, inc, -1) # double white pattern is required if integration time is shorter than 3.85 ms if camPar.int_time_spect < 3.85: pattern_order = np.insert(pattern_order, inc+1, -1) @@ -568,6 +652,7 @@ def _setup_patterns_2arms(DMD: ALP4, metadata: MetaData, DMD_params: DMDParamete # # pattern_order = np.insert(pattern_order, 0, -1) if (len(pattern_order)%2) != 0: # Add one pattern at the end of the sequence if the pattern number is even pattern_order = np.insert(pattern_order, len(pattern_order), -1) + print('pattern order is odd => a black image is automaticly insert, need to be deleted in the case for tuning the spectrometer') pos_neg = file['pos_neg'] @@ -587,7 
+672,7 @@ def _setup_patterns_2arms(DMD: ALP4, metadata: MetaData, DMD_params: DMDParamete if (DMD.Seqs): DMD.FreeSeq() - _update_sequence(DMD, metadata.pattern_source, metadata.pattern_prefix, + _update_sequence(DMD, DMD_params, acquisition_params, metadata.pattern_source, metadata.pattern_prefix, pattern_order, bitplanes) print(f'DMD available memory after sequence allocation: ' f'{DMD.DevInquire(ALP_AVAIL_MEMORY)}') @@ -605,10 +690,14 @@ def _setup_patterns_2arms(DMD: ALP4, metadata: MetaData, DMD_params: DMDParamete # Confirm memory allocated in DMD DMD_params.update_memory(DMD.DevInquire(ALP_AVAIL_MEMORY)) -def _setup_timings(DMD: ALP4, DMD_params: DMDParameters, picture_time: int, - illumination_time: int, synch_pulse_delay: int, - synch_pulse_width: int, trigger_in_delay: int, - add_illumination_time: int) -> None: +def _setup_timings(DMD: ALP4, + DMD_params: DMDParameters, + picture_time: int, + illumination_time: int, + synch_pulse_delay: int, + synch_pulse_width: int, + trigger_in_delay: int, + add_illumination_time: int) -> None: """Setup pattern sequence timings in DMD. Send previously user-defined plus calculated timings to DMD. @@ -648,16 +737,7 @@ def _setup_timings(DMD: ALP4, DMD_params: DMDParameters, picture_time: int, triggerInDelay=trigger_in_delay) DMD_params.update_sequence_parameters(add_illumination_time, DMD=DMD) - - -# class mytAlpDynSynchOutGate(ct.Structure): -# # For ControlType ALP_DEV_DYN_TRIG_OUT[1..3]_GATE of function AlpDevControlEx -# # Configure compiler to not insert padding bytes! (e.g. 
#pragma pack) - _pack_ = 1 - _fields_ = [("Period", ct.c_ubyte), # Period=1..16 enables output; 0: tri-state - ("Polarity", ct.c_ubyte), # 0: active pulse is low, 1: high - ("Gate", ct.c_ubyte * 16), - ("byref", ct.c_ubyte * 18)] + def setup(spectrometer: Avantes, DMD: ALP4, @@ -672,6 +752,8 @@ def setup(spectrometer: Avantes, add_illumination_time: int = 356, dark_phase_time: int = 44, DMD_trigger_in_delay: int = 0, + pattern_to_display: str = 'white', + loop: bool = False ) -> Tuple[SpectrometerParameters, DMDParameters]: """Setup everything needed to start an acquisition. @@ -718,6 +800,12 @@ def setup(spectrometer: Avantes, DMD_trigger_in_delay (int): Time in microseconds between the incoming trigger edge and the start of the pattern display on DMD (slave mode). Default is 0 us. + pattern_to_display (str): + Pattern to display on the DMD to tune the spectrometer. Default is the white + pattern. + loop (bool): + Whether to project one or a few patterns continuously in a loop (see AlpProjStartCont + in the ALP documentation for more detail). Default is False. Raises: ValueError: Sum of dark phase and additional illumination time is lower than 400 us. @@ -731,9 +819,10 @@ def setup(spectrometer: Avantes, DMD metadata object with DMD configurations. 
""" - path = Path(metadata.output_directory) - if not path.exists(): - path.mkdir() + if loop == False: + path = Path(metadata.output_directory) + if not path.exists(): + path.mkdir() if dark_phase_time + add_illumination_time < 350: raise ValueError(f'Sum of dark phase and additional illumination time ' @@ -759,12 +848,14 @@ def setup(spectrometer: Avantes, start_pixel, stop_pixel) - acquisition_params.wavelengths = np.asarray(wavelenghts, dtype=np.float32) + acquisition_params.wavelengths = np.asarray(wavelenghts, dtype=np.float64) DMD_params = _setup_DMD(DMD, add_illumination_time, DMD_initial_memory) _setup_patterns(DMD=DMD, metadata=metadata, DMD_params=DMD_params, - acquisition_params=acquisition_params) + acquisition_params=acquisition_params, loop=loop, + pattern_to_display=pattern_to_display) + _setup_timings(DMD, DMD_params, picture_time, illumination_time, DMD_output_synch_pulse_delay, synch_pulse_width, DMD_trigger_in_delay, add_illumination_time) @@ -772,7 +863,39 @@ def setup(spectrometer: Avantes, return spectrometer_params, DMD_params +def change_patterns(DMD: ALP4, + acquisition_params: AcquisitionParameters, + zoom: int = 1, + xw_offset: int = 0, + yh_offset: int = 0, + force_change: bool = False + ): + """ + Delete patterns in the memory of the DMD in the case where the zoom or (x,y) offset change + + DMD (ALP4): + Connected DMD. + acquisition_params (AcquisitionParameters): + Acquisition related metadata object. User must partially fill up + with pattern_compression, pattern_dimension_x, pattern_dimension_y, + zoom, x and y offest of patterns displayed on the DMD. + zoom (int): + digital zoom. Deafult is x1. + xw_offset (int): + offset int he width direction of the patterns for zoom > 1. + Default is 0. + yh_offset (int): + offset int he height direction of the patterns for zoom > 1. + Default is 0. + force_change (bool): + to force the changement of the pattern sequence. Default is False. 
+ """ + + if acquisition_params.zoom != zoom or acquisition_params.xw_offset != xw_offset or acquisition_params.yh_offset != yh_offset or force_change == True: + if (DMD.Seqs): + DMD.FreeSeq() + def setup_2arms(spectrometer: Avantes, DMD: ALP4, camPar: CAM, @@ -786,7 +909,7 @@ def setup_2arms(spectrometer: Avantes, DMD_output_synch_pulse_delay: int = 0, add_illumination_time: int = 356, dark_phase_time: int = 44, - DMD_trigger_in_delay: int = 0, + DMD_trigger_in_delay: int = 0 ) -> Tuple[SpectrometerParameters, DMDParameters]: """Setup everything needed to start an acquisition. @@ -798,6 +921,8 @@ def setup_2arms(spectrometer: Avantes, Connected spectrometer (Avantes object). DMD (ALP4): Connected DMD. + camPar (CAM): + Metadata object of the IDS monochrome camera DMD_initial_memory (int): Initial memory available in DMD after initialization. metadata (MetaData): @@ -805,7 +930,8 @@ def setup_2arms(spectrometer: Avantes, outputs. Must be created and filled up by the user. acquisition_params (AcquisitionParameters): Acquisition related metadata object. User must partially fill up - with pattern_compression, pattern_dimension_x, pattern_dimension_y. + with pattern_compression, pattern_dimension_x, pattern_dimension_y, + zoom, x and y offest of patterns displayed on the DMD. start_pixel (int): Initial pixel data received from spectrometer. Default is 0. stop_pixel (int, optional): @@ -833,6 +959,7 @@ def setup_2arms(spectrometer: Avantes, DMD_trigger_in_delay (int): Time in microseconds between the incoming trigger edge and the start of the pattern display on DMD (slave mode). Default is 0 us. + Raises: ValueError: Sum of dark phase and additional illumination time is lower than 400 us. 
@@ -849,7 +976,7 @@ def setup_2arms(spectrometer: Avantes, path = Path(metadata.output_directory) if not path.exists(): path.mkdir() - + if dark_phase_time + add_illumination_time < 350: raise ValueError(f'Sum of dark phase and additional illumination time ' f'is {dark_phase_time + add_illumination_time}.' @@ -859,7 +986,7 @@ def setup_2arms(spectrometer: Avantes, warnings.warn(f'Sum of dark phase and additional illumination time ' f'is {dark_phase_time + add_illumination_time}.' f' It is recomended to choose at least 400 µs.') - + synch_pulse_width, illumination_time, picture_time = _calculate_timings( integration_time, integration_delay, @@ -897,10 +1024,10 @@ def setup_2arms(spectrometer: Avantes, camPar.gate_period = gate_period camPar.int_time_spect = integration_time - acquisition_params.wavelengths = np.asarray(wavelenghts, dtype=np.float32) + acquisition_params.wavelengths = np.asarray(wavelenghts, dtype=np.float64) DMD_params = _setup_DMD(DMD, add_illumination_time, DMD_initial_memory) - + _setup_patterns_2arms(DMD=DMD, metadata=metadata, DMD_params=DMD_params, acquisition_params=acquisition_params, camPar=camPar) @@ -909,6 +1036,7 @@ def setup_2arms(spectrometer: Avantes, DMD_trigger_in_delay, add_illumination_time) return spectrometer_params, DMD_params, camPar + def _calculate_elapsed_time(start_measurement_time: int, measurement_time: np.ndarray, @@ -980,16 +1108,17 @@ def _save_acquisition(metadata: MetaData, path = path / f'{metadata.experiment_name}_spectraldata.npz' np.savez_compressed(path, spectral_data=spectral_data) - # Saving metadata - save_metadata(metadata, - DMD_params, - spectrometer_params, - acquisition_parameters) + # 'save_metadata' is commented out because 'save_metadata_2arms' is executed after the 'acquire' function in the "main_seq_2arms.py" program + # # Saving metadata + # save_metadata(metadata, + # DMD_params, + # spectrometer_params, + # acquisition_parameters) def _save_acquisition_2arms(metadata: MetaData, 
DMD_params: DMDParameters, spectrometer_params: SpectrometerParameters, - camPar, + camPar: CAM, acquisition_parameters: AcquisitionParameters, spectral_data: np.ndarray) -> None: """Save all acquisition data and metadata. @@ -1002,6 +1131,8 @@ def _save_acquisition_2arms(metadata: MetaData, DMD metadata object with DMD configurations. spectrometer_params (SpectrometerParameters): Spectrometer metadata object with spectrometer configurations. + camPar (CAM): + Metadata object of the IDS monochrome camera acquisition_parameters (AcquisitionParameters): Acquisition related metadata object. spectral_data (ndarray): @@ -1021,11 +1152,13 @@ def _save_acquisition_2arms(metadata: MetaData, camPar, acquisition_parameters) + def _acquire_raw(ava: Avantes, DMD: ALP4, spectrometer_params: SpectrometerParameters, DMD_params: DMDParameters, - acquisition_params: AcquisitionParameters + acquisition_params: AcquisitionParameters, + loop: bool = False ) -> NamedTuple: """Raw data acquisition. @@ -1043,6 +1176,9 @@ def _acquire_raw(ava: Avantes, DMD metadata object with DMD configurations. acquisition_params (AcquisitionParameters): Acquisition related metadata object. + loop (bool): + If True, project the pattern continuously (see the AlpProjStartCont function). + If False, project the pattern sequence once (see the AlpProjStart function). Default is False. Returns: NamedTuple: NamedTuple containig spectral data and measurement timings. @@ -1070,9 +1206,9 @@ def register_callback(measurement_time, timestamps, def measurement_callback(handle, info): # If we want to reconstruct during callback; can use it in here. Add function as parameter. 
nonlocal spectrum_index nonlocal saturation_detected - - measurement_time[spectrum_index] = perf_counter_ns() + measurement_time[spectrum_index] = perf_counter_ns() + if info.contents.value >= 0: timestamp,spectrum = ava.get_data() spectral_data[spectrum_index,:] = ( @@ -1104,33 +1240,79 @@ def measurement_callback(handle, info): # If we want to reconstruct during callb saturation_detected = False spectrum_index = 0 # Accessed as nonlocal variable inside the callback - - #spectro.register_callback(-2,acquisition_params.pattern_amount,pixel_amount) - callback = register_callback(measurement_time, timestamps, - spectral_data, ava) - measurement_callback = MeasureCallback(callback) - ava.measure_callback(-2, measurement_callback) - # Run the whole sequence only once - DMD.Run(loop=False) + if loop == False: + #spectro.register_callback(-2,acquisition_params.pattern_amount,pixel_amount) + callback = register_callback(measurement_time, timestamps, + spectral_data, ava) + measurement_callback = MeasureCallback(callback) + ava.measure_callback(-2, measurement_callback) + else: + ava.measure(-1) + + + DMD.Run(loop=loop) # if loop=False : Run the whole sequence only once, if loop=True : Run continuously one pattern start_measurement_time = perf_counter_ns() - #sleep(13) - while(True): - if(spectrum_index >= acquisition_params.pattern_amount): - break - elif((perf_counter_ns() - start_measurement_time) / 1e+6 > - (2 * acquisition_params.pattern_amount * - DMD_params.picture_time_us / 1e+3)): - print('Stopping measurement. 
One of the equipments may be blocked ' - 'or disconnected.') - break - else: - sleep(acquisition_params.pattern_amount * - DMD_params.picture_time_us / 1e+6 / 10) - + + if loop == False: + while(True): + if(spectrum_index >= acquisition_params.pattern_amount) and loop == False: + break + elif((perf_counter_ns() - start_measurement_time) / 1e+6 > + (2 * acquisition_params.pattern_amount * + DMD_params.picture_time_us / 1e+3)) and loop == False: + print('Stopping measurement. One of the equipments may be blocked ' + 'or disconnected.') + break + else: + sleep(acquisition_params.pattern_amount * + DMD_params.picture_time_us / 1e+6 / 10) + DMD.Halt() + else: + sleep(0.1) + + timestamp, spectrum = ava.get_data() + spectral_data_1 = (np.ctypeslib.as_array(spectrum[0:pixel_amount])) + + get_ipython().run_line_magic('matplotlib', 'qt') + plt.ion() # create GUI + figure, ax = plt.subplots(figsize=(10, 8)) + line1, = ax.plot(acquisition_params.wavelengths, spectral_data_1) + + plt.title("Tune the Spectrometer", fontsize=20) + plt.xlabel("Lambda (nm)") + plt.ylabel("counts") + plt.xticks(fontsize=14) + plt.yticks(fontsize=14) + plt.grid() + printed = False + while(True): + try: + timestamp, spectrum = ava.get_data() + spectral_data_1 = (np.ctypeslib.as_array(spectrum[0:pixel_amount])) + + line1.set_xdata(acquisition_params.wavelengths) + line1.set_ydata(spectral_data_1) # updating data values + + figure.canvas.draw() # drawing updated values + figure.canvas.flush_events() # flush prior plot + + if not printed: + print('Press "Ctrl + c" to exit') + if np.amax(spectral_data_1) >= 65535: + print('!!!!!!!!!! 
Saturation detected in the spectro !!!!!!!!!!') + printed = True + + except KeyboardInterrupt: + if (DMD.Seqs): + DMD.Halt() + DMD.FreeSeq() + plt.close() + get_ipython().run_line_magic('matplotlib', 'inline') + break + ava.stop_measure() - DMD.Halt() - + AcquisitionResult = namedtuple('AcquisitionResult', [ 'spectral_data', 'spectrum_index', @@ -1146,6 +1328,7 @@ def measurement_callback(handle, info): # If we want to reconstruct during callb start_measurement_time, saturation_detected) + def acquire(ava: Avantes, DMD: ALP4, metadata: MetaData, @@ -1210,6 +1393,8 @@ def acquire(ava: Avantes, Units in milliseconds. """ + loop = False # if True, project a single pattern continuously to tune the spectrometer + if reconstruct == True: print('Creating reconstruction processes') @@ -1246,7 +1431,7 @@ def acquire(ava: Avantes, (acquisition_params.pattern_amount * repetitions)) timestamps = np.zeros( ((acquisition_params.pattern_amount - 1) * repetitions), - dtype=np.float32) + dtype=np.float64) spectral_data = np.zeros( (acquisition_params.pattern_amount * repetitions,pixel_amount), dtype=np.float64) @@ -1259,12 +1444,12 @@ def acquire(ava: Avantes, print(f"Acquisition {repetition}") AcquisitionResults = _acquire_raw(ava, DMD, spectrometer_params, - DMD_params, acquisition_params) + DMD_params, acquisition_params, loop) (data, spectrum_index, timestamp, time, start_measurement_time, saturation_detected) = AcquisitionResults - print('Data acquired') + print('Acquisition ' + str(repetition) + ' finished') if reconstruct == True: queue_to_recon.put(data.T) @@ -1285,11 +1470,11 @@ def acquire(ava: Avantes, acquisition_params.acquired_spectra += spectrum_index acquisition_params.saturation_detected = saturation_detected - - # Print data for each repetition only if there are not too many repetitions - if (verbose) and repetitions <= 10: - if saturation_detected is True: - print('Saturation detected!') + + if saturation_detected is True: + print('!!!!!!!!!! 
Saturation detected in the spectro !!!!!!!!!!') + # Print data for each repetition + if (verbose): print('Spectra acquired: {}'.format(spectrum_index)) print('Mean callback acquisition time: {} ms'.format( np.mean(time))) @@ -1307,22 +1492,9 @@ def acquire(ava: Avantes, acquisition_params.update_timings(timestamps, measurement_time) # Real time between each spectrum acquisition by the spectrometer - print('Complete acquisition results:') - print('Spectra acquired: {}'.format( - acquisition_params.acquired_spectra)) - if acquisition_params.saturation_detected is True: - print('Saturation detected!') - print('Mean callback acquisition time: {} ms'.format( - acquisition_params.mean_callback_acquisition_time_ms)) - print('Total callback acquisition time: {} s'.format( - acquisition_params.total_callback_acquisition_time_s)) - print('Mean spectrometer acquisition time: {} ms'.format( - acquisition_params.mean_spectrometer_acquisition_time_ms)) - print('Total spectrometer acquisition time: {} s'.format( - acquisition_params.total_spectrometer_acquisition_time_s)) - print(f'Acquisition matrix dimension: {spectral_data.shape}') - - print(f'Saving data to {metadata.output_directory}') + print('Complete acquisition done') + print('Spectra acquired: {}'.format(acquisition_params.acquired_spectra)) + print('Total acquisition time: {0:.2f} s'.format(acquisition_params.total_spectrometer_acquisition_time_s)) _save_acquisition(metadata, DMD_params, spectrometer_params, acquisition_params, spectral_data) @@ -1334,308 +1506,94 @@ def acquire(ava: Avantes, queue_to_recon.close() plot_process.join() queue_reconstructed.close() + + maxi = np.amax(spectral_data[0,:]) + print('------------------------------------------------') + print('maximum in the spectrum = ' + str(maxi)) + print('------------------------------------------------') + if maxi >= 65535: + print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!') + print('!!!!! 
warning, spectrum saturation !!!!!!!!') + print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!') return spectral_data -def check_ueye(func, *args, exp=0, raise_exc=True, txt=None): - ret = func(*args) - if not txt: - txt = "{}: Expected {} but ret={}!".format(str(func), exp, ret) - if ret != exp: - if raise_exc: - raise RuntimeError(txt) - else: - logging.critical(txt) +def _acquire_raw_2arms(ava: Avantes, + DMD: ALP4, + camPar: CAM, + spectrometer_params: SpectrometerParameters, + DMD_params: DMDParameters, + acquisition_params: AcquisitionParameters, + metadata, + repetition, + repetitions + ) -> NamedTuple: + """Raw data acquisition. + Sets up a callback function to receive messages from the spectrometer whenever a + measurement is ready to be read. Reads a measurement via a callback. -def stopCapt_DeallocMem(camPar): - # Stop capture and deallocate camera memory if need to change AOI - if camPar.camActivated == 1: - nRet = ueye.is_StopLiveVideo(camPar.hCam, ueye.IS_FORCE_VIDEO_STOP) - if nRet == ueye.IS_SUCCESS: - camPar.camActivated = 0 - print('video stop successful') - else: - print('problem to stop the video') + Args: + ava (Avantes): + Connected spectrometer (Avantes object). + DMD (ALP4): + Connected DMD. + camPar (CAM): + Metadata object of the IDS monochrome camera + spectrometer_params (SpectrometerParameters): + Spectrometer metadata object with spectrometer configurations. + DMD_params (DMDParameters): + DMD metadata object with DMD configurations. + acquisition_params (AcquisitionParameters): + Acquisition related metadata object. + + Returns: + NamedTuple: NamedTuple containing spectral data and measurement timings. + spectral_data (ndarray): + 2D array of `float` of size (pattern_amount x pixel_amount) + containing measurements received from the spectrometer for each + pattern of a sequence. + spectrum_index (int): + Index of the last acquired spectrum. 
+ timestamps (np.ndarray): + 1D array with `float` type elapsed time between each measurement + made by the spectrometer based on its internal clock. + Units in milliseconds. + measurement_time (np.ndarray): + 1D array with `float` type elapsed times between each callback. + Units in milliseconds. + start_measurement_time (float): + Time when acquisition started. + saturation_detected (bool): + Boolean incating if saturation was detected during acquisition. + """ + # def for spectrometer acquisition + def register_callback(measurement_time, timestamps, + spectral_data, ava): + + def measurement_callback(handle, info): # If we want to reconstruct during callback; can use it in here. Add function as parameter. + nonlocal spectrum_index + nonlocal saturation_detected + + measurement_time[spectrum_index] = perf_counter_ns() - if camPar.Memory == 1: - nRet = ueye.is_FreeImageMem(camPar.hCam, camPar.pcImageMemory, camPar.MemID) - if nRet == ueye.IS_SUCCESS: - camPar.Memory = 0 - print('deallocate memory successful') - else: - print('Problem to deallocate memory of the camera') - - return camPar + if info.contents.value >= 0: + timestamp,spectrum = ava.get_data() + spectral_data[spectrum_index,:] = ( + np.ctypeslib.as_array(spectrum[0:pixel_amount])) + + if np.any(ava.get_saturated_pixels() > 0): + saturation_detected = True -def stopCapt_DeallocMem_ExitCam(camPar): - # Stop capture and deallocate camera memory if need to change AOI - if camPar.camActivated == 1: - nRet = ueye.is_StopLiveVideo(camPar.hCam, ueye.IS_FORCE_VIDEO_STOP) - if nRet == ueye.IS_SUCCESS: - camPar.camActivated = 0 - print('video stop successful') - else: - print('problem to stop the video') + timestamps[spectrum_index] = np.ctypeslib.as_array(timestamp) + + else: # Set values to zero if an error occured + spectral_data[spectrum_index,:] = 0 + timestamps[spectrum_index] = 0 - if camPar.Memory == 1: - nRet = ueye.is_FreeImageMem(camPar.hCam, camPar.pcImageMemory, camPar.MemID) - if nRet == 
ueye.IS_SUCCESS: - camPar.Memory = 0 - print('deallocate memory successful') - else: - print('Problem to deallocate memory of the camera') - - if camPar.Exit == 2: - nRet = ueye.is_ExitCamera(camPar.hCam) - if nRet == ueye.IS_SUCCESS: - camPar.Exit = 0 - print('Camera disconnected') - else: - print('Problem to disconnect camera, need to restart spyder') - - return camPar - -class ImageBuffer: - pcImageMemory = None - MemID = None - width = None - height = None - nbitsPerPixel = None - -def imageQueue(camPar): - # Create Imagequeue --------------------------------------------------------- - # - allocate 3 ore more buffers depending on the framerate - # - initialize Imagequeue - # --------------------------------------------------------- - sleep(1) # is required (delay of 1s was not optimized!!) - buffers = [] - for y in range(10): - buffers.append(ImageBuffer()) - - for x in range(len(buffers)): - buffers[x].nbitsPerPixel = camPar.nBitsPerPixel # RAW8 - buffers[x].height = camPar.rectAOI.s32Height # sensorinfo.nMaxHeight - buffers[x].width = camPar.rectAOI.s32Width # sensorinfo.nMaxWidth - buffers[x].MemID = ueye.int(0) - buffers[x].pcImageMemory = ueye.c_mem_p() - check_ueye(ueye.is_AllocImageMem, camPar.hCam, buffers[x].width, buffers[x].height, buffers[x].nbitsPerPixel, - buffers[x].pcImageMemory, buffers[x].MemID) - check_ueye(ueye.is_AddToSequence, camPar.hCam, buffers[x].pcImageMemory, buffers[x].MemID) - - check_ueye(ueye.is_InitImageQueue, camPar.hCam, ueye.c_int(0)) - if camPar.trigger_mode == 'soft': - check_ueye(ueye.is_SetExternalTrigger, camPar.hCam, ueye.IS_SET_TRIGGER_SOFTWARE) - elif camPar.trigger_mode == 'hard': - check_ueye(ueye.is_SetExternalTrigger, camPar.hCam, ueye.IS_SET_TRIGGER_LO_HI) - -def prepareCam(camPar, metadata): - cam_path = metadata.output_directory + '\\' + metadata.experiment_name + '_video.' 
+ camPar.vidFormat - strFileName = ueye.c_char_p(cam_path.encode('utf-8')) - - if camPar.vidFormat == 'avi': - # print('Video format : AVI') - camPar.avi = ueye.int() - nRet = ueye_tools.isavi_InitAVI(camPar.avi, camPar.hCam) - # print("isavi_InitAVI") - if nRet != ueye_tools.IS_AVI_NO_ERR: - print("isavi_InitAVI ERROR") - - nRet = ueye_tools.isavi_SetImageSize(camPar.avi, camPar.m_nColorMode, camPar.rectAOI.s32Width , camPar.rectAOI.s32Height, 0, 0, 0) - nRet = ueye_tools.isavi_SetImageQuality(camPar.avi, 100) - if nRet != ueye_tools.IS_AVI_NO_ERR: - print("isavi_SetImageQuality ERROR") - - nRet = ueye_tools.isavi_OpenAVI(camPar.avi, strFileName) - if nRet != ueye_tools.IS_AVI_NO_ERR: - print("isavi_OpenAVI ERROR") - - nRet = ueye_tools.isavi_SetFrameRate(camPar.avi, camPar.fps) - if nRet != ueye_tools.IS_AVI_NO_ERR: - print("isavi_SetFrameRate ERROR") - nRet = ueye_tools.isavi_StartAVI(camPar.avi) - # print("isavi_StartAVI") - if nRet != ueye_tools.IS_AVI_NO_ERR: - print("isavi_StartAVI ERROR") - - - elif camPar.vidFormat == 'bin': - camPar.punFileID = ueye.c_uint() - nRet = ueye_tools.israw_InitFile(camPar.punFileID, ueye_tools.IS_FILE_ACCESS_MODE_WRITE) - if nRet != ueye_tools.IS_AVI_NO_ERR: - print("INIT RAW FILE ERROR") - - nRet = ueye_tools.israw_SetImageInfo(camPar.punFileID, camPar.rectAOI.s32Width, camPar.rectAOI.s32Height, camPar.nBitsPerPixel) - if nRet != ueye_tools.IS_AVI_NO_ERR: - print("SET IMAGE INFO ERROR") - - if nRet == ueye.IS_SUCCESS: - # print('initFile ok') - # print('SetImageInfo ok') - nRet = ueye_tools.israw_OpenFile(camPar.punFileID, strFileName) - # if nRet == ueye.IS_SUCCESS: - # # print('OpenFile success') - - # nShutterMode = ueye.c_uint(ueye.IS_DEVICE_FEATURE_CAP_SHUTTER_MODE_ROLLING_GLOBAL_START) - # nRet = ueye.is_DeviceFeature(camPar.hCam, ueye.IS_DEVICE_FEATURE_CMD_SET_SHUTTER_MODE, nShutterMode, - # ueye.sizeof(nShutterMode)) - # print('shutter mode = ' + str(nShutterMode.value) + ' / enable : ' + str(nRet)) - - # # Read the 
global flash params - # flashParams = ueye.IO_FLASH_PARAMS() - # nRet = ueye.is_IO(camPar.hCam, ueye.IS_IO_CMD_FLASH_GET_GLOBAL_PARAMS, flashParams, ueye.sizeof(flashParams)) - # if (nRet == ueye.IS_SUCCESS): - # nDelay = flashParams.s32Delay - # print('nDelay = ' + str(nDelay.value)) - # nDuration = flashParams.u32Duration - # print('nDuration = ' + str(nDuration.value)) - - # flashParams.s32Delay.value = 0 - # flashParams.u32Duration.value = 40 - # # Apply the global flash params and set the flash params to these values - # nRet = ueye.is_IO(camPar.hCam, ueye.IS_IO_CMD_FLASH_SET_PARAMS, flashParams, ueye.sizeof(flashParams)) - - - # nRet = ueye.is_IO(camPar.hCam, ueye.IS_IO_CMD_FLASH_GET_PARAMS, flashParams, ueye.sizeof(flashParams)) - # if (nRet == ueye.IS_SUCCESS): - # nDelay = flashParams.s32Delay - # print('nDelay = ' + str(nDelay.value)) - # nDuration = flashParams.u32Duration - # print('nDuration = ' + str(nDuration.value)) - - # --------------------------------------------------------- - # Activates the camera's live video mode (free run mode) - # --------------------------------------------------------- - nRet = ueye.is_CaptureVideo(camPar.hCam, ueye.IS_DONT_WAIT) - # nRet = ueye.is_FreezeVideo(camPar.hCam, ueye.IS_DONT_WAIT) - if nRet != ueye.IS_SUCCESS: - print("is_CaptureVideo ERROR") - else: - camPar.camActivated = 1 - - return camPar - - -def runCam_thread(camPar, start_chrono): - imageinfo = ueye.UEYEIMAGEINFO() - current_buffer = ueye.c_mem_p() - current_id = ueye.int() - # inc = 0 - entier_old = 0 - # time.sleep(0.01) - while True: - nret = ueye.is_WaitForNextImage(camPar.hCam, camPar.timeout, current_buffer, current_id) - if nret == ueye.IS_SUCCESS: - check_ueye(ueye.is_GetImageInfo, camPar.hCam, current_id, imageinfo, ueye.sizeof(imageinfo)) - start_time = time.time() - counter = start_time - start_chrono - camPar.time_array.append(counter) - if camPar.vidFormat == 'avi': - nRet = ueye_tools.isavi_AddFrame(camPar.avi, current_buffer) - elif 
camPar.vidFormat == 'bin': - nRet = ueye_tools.israw_AddFrame(camPar.punFileID, current_buffer, imageinfo.u64TimestampDevice) - - check_ueye(ueye.is_UnlockSeqBuf, camPar.hCam, current_id, current_buffer) - else: - # nRet = ueye.is_FreeImageMem (camPar.hCam, current_buffer, current_id) - # if nRet != ueye.IS_SUCCESS: - # print('ERROR to free the memory') - # print(nRet) - print('Thread finished') - break - # else: - # print('thread cam stop correctly') - # break - -def stopCam(camPar): - if camPar.vidFormat == 'avi': - ueye_tools.isavi_StopAVI(camPar.hCam) - ueye_tools.isavi_CloseAVI(camPar.hCam) - ueye_tools.isavi_ExitAVI(camPar.hCam) - elif camPar.vidFormat == 'bin': - ueye_tools.israw_CloseFile(camPar.punFileID) - ueye_tools.israw_ExitFile(camPar.punFileID) - camPar.punFileID = ueye.c_uint() - - # camPar = stopCapt_DeallocMem(camPar) - - return camPar - - -def _acquire_raw_2arms(ava: Avantes, - DMD: ALP4, - camPar, - spectrometer_params: SpectrometerParameters, - DMD_params: DMDParameters, - acquisition_params: AcquisitionParameters, - metadata, - repetition, - repetitions - ) -> NamedTuple: - """Raw data acquisition. - - Setups a callback function to receive messages from spectrometer whenever a - measurement is ready to be read. Reads a measurement via a callback. - - Args: - ava (Avantes): - Connected spectrometer (Avantes object). - DMD (ALP4): - Connected DMD. - spectrometer_params (SpectrometerParameters): - Spectrometer metadata object with spectrometer configurations. - DMD_params (DMDParameters): - DMD metadata object with DMD configurations. - acquisition_params (AcquisitionParameters): - Acquisition related metadata object. - - Returns: - NamedTuple: NamedTuple containig spectral data and measurement timings. - spectral_data (ndarray): - 2D array of `float` of size (pattern_amount x pixel_amount) - containing measurements received from the spectrometer for each - pattern of a sequence. - spectrum_index (int): - Index of the last acquired spectrum. 
- timestamps (np.ndarray): - 1D array with `float` type elapsed time between each measurement - made by the spectrometer based on its internal clock. - Units in milliseconds. - measurement_time (np.ndarray): - 1D array with `float` type elapsed times between each callback. - Units in milliseconds. - start_measurement_time (float): - Time when acquisition started. - saturation_detected (bool): - Boolean incating if saturation was detected during acquisition. - """ - # def for spectrometer acquisition - def register_callback(measurement_time, timestamps, - spectral_data, ava): - - def measurement_callback(handle, info): # If we want to reconstruct during callback; can use it in here. Add function as parameter. - nonlocal spectrum_index - nonlocal saturation_detected - - measurement_time[spectrum_index] = perf_counter_ns() - - if info.contents.value >= 0: - timestamp,spectrum = ava.get_data() - spectral_data[spectrum_index,:] = ( - np.ctypeslib.as_array(spectrum[0:pixel_amount])) - - if np.any(ava.get_saturated_pixels() > 0): - saturation_detected = True - - timestamps[spectrum_index] = np.ctypeslib.as_array(timestamp) - - else: # Set values to zero if an error occured - spectral_data[spectrum_index,:] = 0 - timestamps[spectrum_index] = 0 - - spectrum_index += 1 - - return measurement_callback + spectrum_index += 1 + + return measurement_callback # def for camera acquisition if repetition == 0: @@ -1710,7 +1668,7 @@ def measurement_callback(handle, info): # If we want to reconstruct during callb def acquire_2arms(ava: Avantes, DMD: ALP4, - camPar, + camPar: CAM, metadata: MetaData, spectrometer_params: SpectrometerParameters, DMD_params: DMDParameters, @@ -1731,6 +1689,8 @@ def acquire_2arms(ava: Avantes, Connected spectrometer (Avantes object). DMD (ALP4): Connected DMD. + camPar (CAM): + Metadata object of the IDS monochrome camera metadata (MetaData): Metadata concerning the experiment, paths, file inputs and file outputs. Must be created and filled up by the user. 
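The `register_callback`/`measurement_callback` closure added in `_acquire_raw_2arms` can be sketched in isolation. This is an illustrative sketch, not part of the patch: the helper name and the direct `(timestamp, spectrum)` signature are assumptions standing in for the Avantes handle-based callback, but the closure-with-`nonlocal`-index pattern writing into preallocated NumPy arrays is the one used above.

```python
import numpy as np

def register_callback(spectral_data, timestamps):
    """Build a measurement callback that fills preallocated arrays.

    Sketch of the pattern used in _acquire_raw_2arms: the callback keeps
    its position in a nonlocal index, so the driver only has to invoke it
    once per ready measurement.
    """
    spectrum_index = 0  # shared state captured by the closure

    def measurement_callback(timestamp, spectrum):
        nonlocal spectrum_index
        spectral_data[spectrum_index, :] = spectrum  # one row per pattern
        timestamps[spectrum_index] = timestamp
        spectrum_index += 1

    return measurement_callback

# Preallocate as the acquisition code does (pattern_amount x pixel_amount)
spectral_data = np.zeros((3, 4), dtype=np.float64)
timestamps = np.zeros(3, dtype=np.float64)
callback = register_callback(spectral_data, timestamps)
callback(10.0, np.ones(4))
callback(20.0, 2 * np.ones(4))
```

Preallocating and indexing (rather than appending) keeps the callback cheap, which matters when it runs once per displayed pattern.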
@@ -1809,7 +1769,7 @@ def acquire_2arms(ava: Avantes, (acquisition_params.pattern_amount * repetitions)) timestamps = np.zeros( ((acquisition_params.pattern_amount - 1) * repetitions), - dtype=np.float32) + dtype=np.float64) spectral_data = np.zeros( (acquisition_params.pattern_amount * repetitions,pixel_amount), dtype=np.float64) @@ -1817,102 +1777,561 @@ def acquire_2arms(ava: Avantes, acquisition_params.acquired_spectra = 0 print() - for repetition in range(repetitions): - if verbose: - print(f"Acquisition {repetition}") + for repetition in range(repetitions): + if verbose: + print(f"Acquisition {repetition}") + + AcquisitionResults = _acquire_raw_2arms(ava, DMD, camPar, spectrometer_params, + DMD_params, acquisition_params, metadata, repetition, repetitions) + + (data, spectrum_index, timestamp, time, + start_measurement_time, saturation_detected) = AcquisitionResults + + print('Acquisition number : ' + str(repetition) + ' finished') + + if reconstruct == True: + queue_to_recon.put(data.T) + print('Data sent') + + time, timestamp = _calculate_elapsed_time( + start_measurement_time, time, timestamp) + + begin = repetition * acquisition_params.pattern_amount + end = (repetition + 1) * acquisition_params.pattern_amount + spectral_data[begin:end] = data + measurement_time[begin:end] = time + + begin = repetition * (acquisition_params.pattern_amount - 1) + end = (repetition + 1) * (acquisition_params.pattern_amount - 1) + timestamps[begin:end] = timestamp + + acquisition_params.acquired_spectra += spectrum_index + + acquisition_params.saturation_detected = saturation_detected + + if saturation_detected is True: + print('!!!!!!!!!! 
Saturation detected in the spectro !!!!!!!!!!') + # Print data for each repetition + if (verbose): + print('Spectra acquired: {}'.format(spectrum_index)) + print('Mean callback acquisition time: {} ms'.format( + np.mean(time))) + print('Total callback acquisition time: {} s'.format( + np.sum(time)/1000)) + print('Mean spectrometer acquisition time: {} ms'.format( + np.mean(timestamp))) + print('Total spectrometer acquisition time: {} s'.format( + np.sum(timestamp)/1000)) + + # Print shape of acquisition matrix for one repetition + print(f'Partial acquisition matrix dimensions:' + f'{data.shape}') + print() + + acquisition_params.update_timings(timestamps, measurement_time) + # Real time between each spectrum acquisition by the spectrometer + print('Complete acquisition done') + print('Spectra acquired: {}'.format(acquisition_params.acquired_spectra)) + print('Total acquisition time: {0:.2f} s'.format(acquisition_params.total_spectrometer_acquisition_time_s)) + + # delete acquisition with black pattern (white for the camera) + if camPar.insert_patterns == 1: + black_pattern_index = np.where(acquisition_params.patterns_wp == -1) + # print('index of white patterns :') + # print(black_pattern_index[0:38]) + if acquisition_params.patterns_wp.shape == acquisition_params.patterns.shape: + acquisition_params.patterns = np.delete(acquisition_params.patterns, black_pattern_index) + spectral_data = np.delete(spectral_data, black_pattern_index, axis = 0) + acquisition_params.timestamps = np.delete(acquisition_params.timestamps, black_pattern_index[1:]) + acquisition_params.measurement_time = np.delete(acquisition_params.measurement_time, black_pattern_index) + acquisition_params.acquired_spectra = len(acquisition_params.patterns) + + _save_acquisition_2arms(metadata, DMD_params, spectrometer_params, camPar, + acquisition_params, spectral_data) + + # Joining processes and closing queues + if reconstruct == True: + queue_to_recon.put('kill') # Sends a message to stop 
reconstruction
+        recon_process.join()
+        queue_to_recon.close()
+        plot_process.join()
+        queue_reconstructed.close()
+
+    maxi = np.amax(spectral_data[0,:])
+    print('------------------------------------------------')
+    print('maximum in the spectrum = ' + str(maxi))
+    print('------------------------------------------------')
+    if maxi >= 65535:
+        print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
+        print('!!!!! warning, spectrum saturation !!!!!!!!')
+        print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
+
+    return spectral_data
+
+
+def setup_tuneSpectro(spectrometer,
+                      DMD,
+                      DMD_initial_memory,
+                      pattern_to_display,
+                      ti : float = 1,
+                      zoom : int = 1,
+                      xw_offset: int = 128,
+                      yh_offset: int = 0,
+                      mask_index : np.array = []):
+    """Set up the hardware to tune the spectrometer live. The goal is to find
+    an integration time for which the signal sits well above the noise floor
+    (around 700 counts) and below saturation (2**16 - 1 = 65535).
+
+    Args:
+        spectrometer (Avantes):
+            Connected spectrometer (Avantes object).
+        DMD (ALP4):
+            Connected DMD.
+        DMD_initial_memory (int):
+            Initial memory available in DMD after initialization.
+        metadata (MetaData):
+            Metadata concerning the experiment, paths, file inputs and file
+            outputs. Must be created and filled up by the user.
+        acquisition_params (AcquisitionParameters):
+            Acquisition related metadata object. User must partially fill up
+            with pattern_compression, pattern_dimension_x, pattern_dimension_y.
+        pattern_to_display (string):
+            Display one pattern on the DMD to tune the spectrometer. Default is
+            a white pattern.
+        ti (float):
+            The integration time of the spectrometer during one scan in
+            milliseconds. Default is 1 ms.
+        zoom (int):
+            Digital zoom on the DMD. Default is 1.
+        xw_offset (int):
+            Offset of the pattern in the DMD for zoom > 1 in the width (x) direction.
+        yh_offset (int):
+            Offset of the pattern in the DMD for zoom > 1 in the height (y) direction.
+        mask_index (Union[np.ndarray, str], optional):
+            Array of `int` type corresponding to the index of the mask vector where
+            the value is equal to 1.
+
+    Returns:
+        metadata (MetaData):
+            Metadata concerning the experiment, paths, file inputs and file
+            outputs. Must be created and filled up by the user.
+        spectrometer_params (SpectrometerParameters):
+            Spectrometer metadata object with spectrometer configurations.
+        DMD_params (DMDParameters):
+            DMD metadata object with DMD configurations.
+    """
+
+    data_folder_name = 'Tune'
+    data_name = 'test'
+    # all_path = func_path(data_folder_name, data_name)
+
+    scan_mode = 'Walsh'
+    Np = 16
+    source = ''
+    object_name = ''
+
+    metadata = MetaData(
+        output_directory = '',#all_path.subfolder_path,
+        pattern_order_source = 'C:/openspyrit/spas/stats/pattern_order_' + scan_mode + '_' + str(Np) + 'x' + str(Np) + '.npz',
+        pattern_source = 'C:/openspyrit/spas/Patterns/' + scan_mode + '_' + str(Np) + 'x' + str(Np),
+        pattern_prefix = scan_mode + '_' + str(Np) + 'x' + str(Np),
+        experiment_name = data_name,
+        light_source = source,
+        object = object_name,
+        filter = '',
+        description = '' )
+
+    acquisition_parameters = AcquisitionParameters(
+        pattern_compression = 1,
+        pattern_dimension_x = 16,
+        pattern_dimension_y = 16,
+        zoom = zoom,
+        xw_offset = xw_offset,
+        yh_offset = yh_offset,
+        mask_index = [] )
+
+    acquisition_parameters.pattern_amount = 1
+
+    spectrometer_params, DMD_params = setup(
+        spectrometer = spectrometer,
+        DMD = DMD,
+        DMD_initial_memory = DMD_initial_memory,
+        metadata = metadata,
+        acquisition_params = acquisition_parameters,
+        pattern_to_display = pattern_to_display,
+        integration_time = ti,
+        loop = True )
+
+    return metadata, spectrometer_params, DMD_params, acquisition_parameters
+
+
+def 
displaySpectro(ava: Avantes,
+                   DMD: ALP4,
+                   metadata: MetaData,
+                   spectrometer_params: SpectrometerParameters,
+                   DMD_params: DMDParameters,
+                   acquisition_params: AcquisitionParameters,
+                   reconstruction_params: ReconstructionParameters = None
+                   ):
+    """Perform a continuous acquisition on the spectrometer for optical tuning.
+
+    Send a pattern on the DMD to project light on the spectrometer. The goal is
+    to look at the amplitude of the spectrum and tune the illumination to
+    avoid saturation (>= 65535 counts) and noisy signals (<= 650 counts).
+
+    Args:
+        ava (Avantes):
+            Connected spectrometer (Avantes object).
+        DMD (ALP4):
+            Connected DMD.
+        metadata (MetaData):
+            Metadata concerning the experiment, paths, file inputs and file
+            outputs. Must be created and filled up by the user.
+        spectrometer_params (SpectrometerParameters):
+            Spectrometer metadata object with spectrometer configurations.
+        DMD_params (DMDParameters):
+            DMD metadata object with DMD configurations.
+        acquisition_params (AcquisitionParameters):
+            Acquisition related metadata object.
+        wavelengths (List[float]):
+            List of float corresponding to the wavelengths associated with
+            spectrometer's start and stop pixels.
+        reconstruction_params (ReconstructionParameters):
+            Object containing parameters of the neural network to be loaded for
+            reconstruction.
+    """
+
+    loop = True # project a single pattern continuously to tune the spectrometer
+
+    pixel_amount = (spectrometer_params.stop_pixel -
+                    spectrometer_params.start_pixel + 1)
+
+    spectral_data = np.zeros(
+        (acquisition_params.pattern_amount,pixel_amount),
+        dtype=np.float64)
+
+    acquisition_params.acquired_spectra = 0
+
+    AcquisitionResults = _acquire_raw(ava, DMD, spectrometer_params,
+                                      DMD_params, acquisition_params, loop)
+
+    (data, spectrum_index, timestamp, time,
+     start_measurement_time, saturation_detected) = AcquisitionResults
+
+    time, timestamp = _calculate_elapsed_time(
+        start_measurement_time, time, timestamp)
+
+    begin = acquisition_params.pattern_amount
+    end = 2 * acquisition_params.pattern_amount
+    spectral_data[begin:end] = data
+
+    acquisition_params.acquired_spectra += spectrum_index
+
+    acquisition_params.saturation_detected = saturation_detected
+
+
+def check_ueye(func, *args, exp=0, raise_exc=True, txt=None):
+    """Call a ueye function and check its return code.
+
+    Args:
+        func:
+            The ueye function to call.
+        *args:
+            Arguments passed to `func`.
+        exp (int, optional):
+            Expected return code. The default is 0 (success).
+        raise_exc (bool, optional):
+            If True, raise a RuntimeError when the return code differs from
+            `exp`; otherwise log a critical message. The default is True.
+        txt (str, optional):
+            Custom error message. The default is None.
+
+    Raises:
+        RuntimeError: if the return code differs from `exp` and `raise_exc`
+            is True.
+
+    Returns:
+        None.
+ """ + + ret = func(*args) + if not txt: + txt = "{}: Expected {} but ret={}!".format(str(func), exp, ret) + if ret != exp: + if raise_exc: + raise RuntimeError(txt) + else: + logging.critical(txt) + + +def stopCapt_DeallocMem(camPar): + """Stop capture and deallocate camera memory if need to change AOI + + Args: + ---------- + camPar (CAM): + Metadata object of the IDS monochrome camera + + Returns: + ------- + camPar (CAM): + Metadata object of the IDS monochrome camera + """ + + if camPar.camActivated == 1: + nRet = ueye.is_StopLiveVideo(camPar.hCam, ueye.IS_FORCE_VIDEO_STOP) + if nRet == ueye.IS_SUCCESS: + camPar.camActivated = 0 + print('video stop successful') + else: + print('problem to stop the video') + + if camPar.Memory == 1: + nRet = ueye.is_FreeImageMem(camPar.hCam, camPar.pcImageMemory, camPar.MemID) + if nRet == ueye.IS_SUCCESS: + camPar.Memory = 0 + print('deallocate memory successful') + else: + print('Problem to deallocate memory of the camera') + + return camPar + + +def stopCapt_DeallocMem_ExitCam(camPar): + """Stop capture, deallocate camera memory if need to change AOI and disconnect the camera + + Args: + ---------- + camPar (CAM): + Metadata object of the IDS monochrome camera + + Returns: + ------- + camPar (CAM): + Metadata object of the IDS monochrome camera + """ + if camPar.camActivated == 1: + nRet = ueye.is_StopLiveVideo(camPar.hCam, ueye.IS_FORCE_VIDEO_STOP) + if nRet == ueye.IS_SUCCESS: + camPar.camActivated = 0 + print('video stop successful') + else: + print('problem to stop the video') + + if camPar.Memory == 1: + nRet = ueye.is_FreeImageMem(camPar.hCam, camPar.pcImageMemory, camPar.MemID) + if nRet == ueye.IS_SUCCESS: + camPar.Memory = 0 + print('deallocate memory successful') + else: + print('Problem to deallocate memory of the camera') + + if camPar.Exit == 2: + nRet = ueye.is_ExitCamera(camPar.hCam) + if nRet == ueye.IS_SUCCESS: + camPar.Exit = 0 + print('Camera disconnected') + else: + print('Problem to disconnect camera, 
need to restart spyder')
+
+    return camPar
+
+
+class ImageBuffer:
+    """A class to allocate buffer in the camera memory
+    """
+
+    pcImageMemory = None
+    MemID = None
+    width = None
+    height = None
+    nbitsPerPixel = None
+

-        AcquisitionResults = _acquire_raw_2arms(ava, DMD, camPar, spectrometer_params,
-                                          DMD_params, acquisition_params, metadata, repetition, repetitions)
+def imageQueue(camPar):
+    """Create the image queue / allocate 3 or more buffers depending on the framerate / initialize the image queue

-        (data, spectrum_index, timestamp, time,
-        start_measurement_time, saturation_detected) = AcquisitionResults
+    Args:
+    ----------
+    camPar (CAM):
+        Metadata object of the IDS monochrome camera

-        print('Data acquired')
+    Returns:
+    -------
+    None.

-        if reconstruct == True:
-            queue_to_recon.put(data.T)
-            print('Data sent')
+    """

-        time, timestamp = _calculate_elapsed_time(
-            start_measurement_time, time, timestamp)
+    sleep(1) # is required (delay of 1s was not optimized!!)
+    buffers = []
+    for y in range(10):
+        buffers.append(ImageBuffer())

-        begin = repetition * acquisition_params.pattern_amount
-        end = (repetition + 1) * acquisition_params.pattern_amount
-        spectral_data[begin:end] = data
-        measurement_time[begin:end] = time
+    for x in range(len(buffers)):
+        buffers[x].nbitsPerPixel = camPar.nBitsPerPixel  # RAW8
+        buffers[x].height = camPar.rectAOI.s32Height  # sensorinfo.nMaxHeight
+        buffers[x].width = camPar.rectAOI.s32Width  # sensorinfo.nMaxWidth
+        buffers[x].MemID = ueye.int(0)
+        buffers[x].pcImageMemory = ueye.c_mem_p()
+        check_ueye(ueye.is_AllocImageMem, camPar.hCam, buffers[x].width, buffers[x].height, buffers[x].nbitsPerPixel,
+                   buffers[x].pcImageMemory, buffers[x].MemID)
+        check_ueye(ueye.is_AddToSequence, camPar.hCam, buffers[x].pcImageMemory, buffers[x].MemID)

-        begin = repetition * (acquisition_params.pattern_amount - 1)
-        end = (repetition + 1) * (acquisition_params.pattern_amount - 1)
-        timestamps[begin:end] = timestamp
+    check_ueye(ueye.is_InitImageQueue, 
camPar.hCam, ueye.c_int(0)) + if camPar.trigger_mode == 'soft': + check_ueye(ueye.is_SetExternalTrigger, camPar.hCam, ueye.IS_SET_TRIGGER_SOFTWARE) + elif camPar.trigger_mode == 'hard': + check_ueye(ueye.is_SetExternalTrigger, camPar.hCam, ueye.IS_SET_TRIGGER_LO_HI) - acquisition_params.acquired_spectra += spectrum_index - acquisition_params.saturation_detected = saturation_detected +def prepareCam(camPar, metadata): + """Prepare the IDS monochrome camera before acquisition - # Print data for each repetition only if there are not too many repetitions - if (verbose) and repetitions <= 10: - if saturation_detected is True: - print('Saturation detected!') - print('Spectra acquired: {}'.format(spectrum_index)) - print('Mean callback acquisition time: {} ms'.format( - np.mean(time))) - print('Total callback acquisition time: {} s'.format( - np.sum(time)/1000)) - print('Mean spectrometer acquisition time: {} ms'.format( - np.mean(timestamp))) - print('Total spectrometer acquisition time: {} s'.format( - np.sum(timestamp)/1000)) + Args: + ---------- + camPar (CAM): + Metadata object of the IDS monochrome camera + metadata (MetaData): + Metadata concerning the experiment, paths, file inputs and file + outputs. Must be created and filled up by the user. - # Print shape of acquisition matrix for one repetition - print(f'Partial acquisition matrix dimensions:' - f'{data.shape}') - print() + Returns: + ------- + camPar (CAM): + Metadata object of the IDS monochrome camera + """ + cam_path = metadata.output_directory + '\\' + metadata.experiment_name + '_video.' 
+ camPar.vidFormat
+    strFileName = ueye.c_char_p(cam_path.encode('utf-8'))
+
+    if camPar.vidFormat == 'avi':
+        # print('Video format : AVI')
+        camPar.avi = ueye.int()
+        nRet = ueye_tools.isavi_InitAVI(camPar.avi, camPar.hCam)
+        # print("isavi_InitAVI")
+        if nRet != ueye_tools.IS_AVI_NO_ERR:
+            print("isavi_InitAVI ERROR")
+
+        nRet = ueye_tools.isavi_SetImageSize(camPar.avi, camPar.m_nColorMode, camPar.rectAOI.s32Width , camPar.rectAOI.s32Height, 0, 0, 0)
+        nRet = ueye_tools.isavi_SetImageQuality(camPar.avi, 100)
+        if nRet != ueye_tools.IS_AVI_NO_ERR:
+            print("isavi_SetImageQuality ERROR")
+
+        nRet = ueye_tools.isavi_OpenAVI(camPar.avi, strFileName)
+        if nRet != ueye_tools.IS_AVI_NO_ERR:
+            print("isavi_OpenAVI ERROR")
+            print('Error code = ' + str(nRet))
+            print('This is likely a problem with the file name; avoid special characters like "µ" or try to reduce its length')
+
+        nRet = ueye_tools.isavi_SetFrameRate(camPar.avi, camPar.fps)
+        if nRet != ueye_tools.IS_AVI_NO_ERR:
+            print("isavi_SetFrameRate ERROR")
+ 
nRet = ueye_tools.isavi_StartAVI(camPar.avi) + # print("isavi_StartAVI") + if nRet != ueye_tools.IS_AVI_NO_ERR: + print("isavi_StartAVI ERROR") + + + elif camPar.vidFormat == 'bin': + camPar.punFileID = ueye.c_uint() + nRet = ueye_tools.israw_InitFile(camPar.punFileID, ueye_tools.IS_FILE_ACCESS_MODE_WRITE) + if nRet != ueye_tools.IS_AVI_NO_ERR: + print("INIT RAW FILE ERROR") + + nRet = ueye_tools.israw_SetImageInfo(camPar.punFileID, camPar.rectAOI.s32Width, camPar.rectAOI.s32Height, camPar.nBitsPerPixel) + if nRet != ueye_tools.IS_AVI_NO_ERR: + print("SET IMAGE INFO ERROR") + + if nRet == ueye.IS_SUCCESS: + # print('initFile ok') + # print('SetImageInfo ok') + nRet = ueye_tools.israw_OpenFile(camPar.punFileID, strFileName) + # if nRet == ueye.IS_SUCCESS: + # # print('OpenFile success') + + # --------------------------------------------------------- + # Activates the camera's live video mode (free run mode) + # --------------------------------------------------------- + nRet = ueye.is_CaptureVideo(camPar.hCam, ueye.IS_DONT_WAIT) + + if nRet != ueye.IS_SUCCESS: + print("is_CaptureVideo ERROR") + else: + camPar.camActivated = 1 - # delete acquisition with black pattern (white for the camera) - if camPar.insert_patterns == 1: - black_pattern_index = np.where(acquisition_params.patterns_wp == -1) - if acquisition_params.patterns_wp.shape == acquisition_params.patterns.shape: - acquisition_params.patterns = np.delete(acquisition_params.patterns, black_pattern_index) - spectral_data = np.delete(spectral_data, black_pattern_index, axis = 0) - acquisition_params.timestamps = np.delete(acquisition_params.timestamps, black_pattern_index[1:]) - acquisition_params.measurement_time = np.delete(acquisition_params.measurement_time, black_pattern_index) - acquisition_params.acquired_spectra = len(acquisition_params.patterns) + return camPar - _save_acquisition_2arms(metadata, DMD_params, spectrometer_params, camPar, - acquisition_params, spectral_data) + +def runCam_thread(camPar, 
start_chrono): + """Acquire video with the IDS monochrome camera in a thread - # Joining processes and closing queues - if reconstruct == True: - queue_to_recon.put('kill') # Sends a message to stop reconstruction - recon_process.join() - queue_to_recon.close() - plot_process.join() - queue_reconstructed.close() + Parameters: + ---------- + camPar (CAM): + Metadata object of the IDS monochrome camera + start_chrono : int + to save a delay for each acquisition frame of the video. - return spectral_data + Returns: + ------- + None. + """ + + imageinfo = ueye.UEYEIMAGEINFO() + current_buffer = ueye.c_mem_p() + current_id = ueye.int() + # inc = 0 + entier_old = 0 + # time.sleep(0.01) + while True: + nret = ueye.is_WaitForNextImage(camPar.hCam, camPar.timeout, current_buffer, current_id) + if nret == ueye.IS_SUCCESS: + check_ueye(ueye.is_GetImageInfo, camPar.hCam, current_id, imageinfo, ueye.sizeof(imageinfo)) + start_time = time.time() + counter = start_time - start_chrono + camPar.time_array.append(counter) + if camPar.vidFormat == 'avi': + nRet = ueye_tools.isavi_AddFrame(camPar.avi, current_buffer) + elif camPar.vidFormat == 'bin': + nRet = ueye_tools.israw_AddFrame(camPar.punFileID, current_buffer, imageinfo.u64TimestampDevice) + + check_ueye(ueye.is_UnlockSeqBuf, camPar.hCam, current_id, current_buffer) + else: + print('Thread finished') + break + + +def stopCam(camPar): + """To stop the acquisition of the video + + Parameters + ---------- + camPar (CAM): + Metadata object of the IDS monochrome camera + + Returns + ------- + camPar (CAM): + Metadata object of the IDS monochrome camera + """ + + if camPar.vidFormat == 'avi': + ueye_tools.isavi_StopAVI(camPar.hCam) + ueye_tools.isavi_CloseAVI(camPar.hCam) + ueye_tools.isavi_ExitAVI(camPar.hCam) + elif camPar.vidFormat == 'bin': + ueye_tools.israw_CloseFile(camPar.punFileID) + ueye_tools.israw_ExitFile(camPar.punFileID) + camPar.punFileID = ueye.c_uint() -def disconnect(ava: Optional[Avantes]=None, DMD: 
Optional[ALP4]=None): + return camPar + + +def disconnect(ava: Optional[Avantes]=None, + DMD: Optional[ALP4]=None): """Disconnect spectrometer and DMD. Disconnects equipments trying to stop a running pattern sequence (possibly @@ -1928,7 +2347,8 @@ def disconnect(ava: Optional[Avantes]=None, DMD: Optional[ALP4]=None): if ava is not None: ava.disconnect() - + print('Spectro disconnected') + if DMD is not None: # Stop the sequence display @@ -1939,10 +2359,13 @@ def disconnect(ava: Optional[Avantes]=None, DMD: Optional[ALP4]=None): DMD.FreeSeq() DMD.Free() + print('DMD disconnected') -def disconnect_2arms(ava: Optional[Avantes]=None, DMD: Optional[ALP4]=None, camPar=None): - """Disconnect spectrometer and DMD. +def disconnect_2arms(ava: Optional[Avantes]=None, + DMD: Optional[ALP4]=None, + camPar=None): + """Disconnect spectrometer, DMD and the IDS monochrome camera. Disconnects equipments trying to stop a running pattern sequence (possibly blocking correct functioning) and trying to free DMD memory to avoid errors @@ -1953,27 +2376,29 @@ def disconnect_2arms(ava: Optional[Avantes]=None, DMD: Optional[ALP4]=None, camP Connected spectrometer (Avantes object). Defaults to None. DMD (ALP4, optional): Connected DMD. Defaults to None. 
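Throughout this patch, every uEye call follows the same pattern: an `is_*` function returns a status code that is compared against `ueye.IS_SUCCESS` (0 in the uEye SDK), either by printing on failure or by delegating to the package's `check_ueye` helper. A minimal stand-alone sketch of such a status-checking wrapper (hypothetical names; the actual `check_ueye` shipped with SPAS may differ) that raises instead of printing, so a failed driver call cannot go unnoticed:

```python
IS_SUCCESS = 0  # value of ueye.IS_SUCCESS in the uEye SDK


class UeyeError(RuntimeError):
    """Raised when a uEye API call reports a non-success status code."""


def check_status(func, *args):
    # Call a uEye-style function and raise on any non-success return code.
    ret = func(*args)
    if ret != IS_SUCCESS:
        raise UeyeError(f'{func.__name__} failed with code {ret}')
    return ret


# Demonstration with stand-in functions (no camera required):
def fake_halt(handle):
    return 0   # pretend the call succeeded


def fake_free(handle):
    return -1  # pretend the call failed


check_status(fake_halt, None)      # passes silently
try:
    check_status(fake_free, None)
except UeyeError as e:
    print(e)                       # fake_free failed with code -1
```

Raising keeps `disconnect`/`disconnect_2arms` honest: a DMD or camera left in a running state surfaces immediately instead of being buried in console output.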
+ camPar (CAM): + Metadata object of the IDS monochrome camera """ if ava is not None: ava.disconnect() print('Spectro disconnected') - if DMD is not None: - + if DMD is not None: # Stop the sequence display try: DMD.Halt() + # Free the sequence from the onboard memory (if any is present) + if (DMD.Seqs): + DMD.FreeSeq() + + DMD.Free() + print('DMD disconnected') + except: print('probelm to Halt the DMD') - # Free the sequence from the onboard memory (if any is present) - if (DMD.Seqs): - DMD.FreeSeq() - - DMD.Free() - print('DMD disconnected') - + if camPar.camActivated == 1: nRet = ueye.is_StopLiveVideo(camPar.hCam, ueye.IS_FORCE_VIDEO_STOP) if nRet == ueye.IS_SUCCESS: @@ -1994,42 +2419,8 @@ def disconnect_2arms(ava: Optional[Avantes]=None, DMD: Optional[ALP4]=None, camP if nRet == ueye.IS_SUCCESS: camPar.Exit = 0 print('Camera disconnected') - print("END") else: print('Problem to disconnect camera, need to restart spyder') - -###################### IDS CAM class and definition ########################### -# @dataclass_json -# @dataclass -# class CAM(): -# """ -# Class containing IDS camera configuration -# """ - -# # hCam: ueye.c_uint -# # sInfo: ueye.SENSORINFO -# # cInfo: ueye.BOARDINFO -# # nBitsPerPixel: ueye.c_int -# # m_nColorMode: ueye.c_int -# # bytes_per_pixel: int -# # rectAOI: ueye.IS_RECT() - -# # pcImageMemory: ueye.c_mem_p() -# # MemID: ueye.int() -# # pitch: ueye.INT() - -# # fps: float -# # gain: int -# # gainBoost: str -# # gamma: float -# # exposureTime: float -# # blackLevel = int - -# class_description: str = 'IDS Camera configuration' - -# logging.basicConfig(level=logging.DEBUG, -# format='%(asctime)s.%(msecs)03d %(levelname)s: %(message)s', -# datefmt="%Y-%m-%d %H:%M:%S") def _init_CAM(): @@ -2059,7 +2450,7 @@ def _init_CAM(): pixelClock = ueye.uint(), bandwidth = float(), Memory = bool(), - Exit = bool(), + Exit = int(), vidFormat = str(), gate_period = int(), trigger_mode = str(), @@ -2074,7 +2465,6 @@ def _init_CAM(): ) # # Camera 
Initialization --- - # print("START Initialization of the IDS camera") ### Starts the driver and establishes the connection to the camera nRet = ueye.is_InitCamera(camPar.hCam, None) if nRet != ueye.IS_SUCCESS: @@ -2103,34 +2493,16 @@ def _init_CAM(): # setup the color depth to the current windows setting ueye.is_GetColorDepth(camPar.hCam, camPar.nBitsPerPixel, camPar.m_nColorMode) camPar.bytes_per_pixel = int(camPar.nBitsPerPixel / 8) - # print("IS_COLORMODE_BAYER: ", ) - # print("\tm_nColorMode: \t\t", m_nColorMode) - # print("\tnBitsPerPixel: \t\t", nBitsPerPixel) - # print("\tbytes_per_pixel: \t\t", bytes_per_pixel) - # print() - elif int.from_bytes(camPar.sInfo.nColorMode.value, byteorder='big') == ueye.IS_COLORMODE_CBYCRY: # for color camera models use RGB32 mode camPar.m_nColorMode = ueye.IS_CM_BGRA8_PACKED camPar.nBitsPerPixel = ueye.INT(32) camPar.bytes_per_pixel = int(camPar.nBitsPerPixel / 8) - # print("IS_COLORMODE_CBYCRY: ", ) - # print("\tm_nColorMode: \t\t", m_nColorMode) - # print("\tnBitsPerPixel: \t\t", nBitsPerPixel) - # print("\tbytes_per_pixel: \t\t", bytes_per_pixel) - # print() - elif int.from_bytes(camPar.sInfo.nColorMode.value, byteorder='big') == ueye.IS_COLORMODE_MONOCHROME: # for color camera models use RGB32 mode camPar.m_nColorMode = ueye.IS_CM_MONO8 camPar.nBitsPerPixel = ueye.INT(8) camPar.bytes_per_pixel = int(camPar.nBitsPerPixel / 8) - # print("IS_COLORMODE_MONOCHROME: ", ) - # print("\tm_nColorMode: \t\t", m_nColorMode) - # print("\tnBitsPerPixel: \t\t", nBitsPerPixel) - # print("\tbytes_per_pixel: \t\t", bytes_per_pixel) - # print() - else: # for monochrome camera models use Y8 mode camPar.m_nColorMode = ueye.IS_CM_MONO8 @@ -2156,26 +2528,6 @@ def _init_CAM(): # get the bandwidth (in MByte/s) camPar.bandwidth = ueye.is_GetUsedBandwidth(camPar.hCam) - # print('Bandwidth = ' + str(camPar.bandwidth) + ' MB/s') - - - # width = rectAOI.s32Width - # height = rectAOI.s32Height - - # Prints out some information about the camera and the 
sensor - # print("Camera model:\t\t", sInfo.strSensorName.decode('utf-8')) - # print("Camera serial no.:\t", cInfo.SerNo.decode('utf-8')) - # print("Maximum image width:\t", width) - # print("Maximum image height:\t", height) - # print() - - # self.hCam = hCam - # self.sInfo = sInfo - # self.cInfo = cInfo - # self.nBitsPerPixel = nBitsPerPixel - # self.m_nColorMode = m_nColorMode - # self.bytes_per_pixel = bytes_per_pixel - # self.rectAOI = rectAOI camPar.Exit = 1 @@ -2256,283 +2608,229 @@ def setup_cam(camPar, pixelClock, fps, Gain, gain_boost, nGamma, ExposureTime, b returns: CAM: a structure containing the parameters of the IDS camera """ - # It is necessary to execute twice this code to take account the parameter modification - ############################### Set Pixel Clock ############################### - ### Get range of pixel clock, result : range = [118 474] MHz (Inc = 0) - getpixelclock = ueye.UINT(0) - newpixelclock = ueye.UINT(0) - newpixelclock.value = pixelClock - PixelClockRange = (ueye.int * 3)() - - # Get pixel clock range - nRet = ueye.is_PixelClock(camPar.hCam, ueye.IS_PIXELCLOCK_CMD_GET_RANGE, PixelClockRange, ueye.sizeof(PixelClockRange)) - if nRet == ueye.IS_SUCCESS: - nPixelClockMin = PixelClockRange[0] - nPixelClockMax = PixelClockRange[1] - nPixelClockInc = PixelClockRange[2] - - # Set pixel clock - check_ueye(ueye.is_PixelClock, camPar.hCam, ueye.PIXELCLOCK_CMD.IS_PIXELCLOCK_CMD_SET, newpixelclock, - ueye.sizeof(newpixelclock)) - # Get current pixel clock - check_ueye(ueye.is_PixelClock, camPar.hCam, ueye.PIXELCLOCK_CMD.IS_PIXELCLOCK_CMD_GET, getpixelclock, - ueye.sizeof(getpixelclock)) - - camPar.pixelClock = getpixelclock.value - - print(' pixel clock = ' + str(getpixelclock) + ' MHz') - if getpixelclock == 118: - print('Pixel clcok blocked to 118 MHz, it is necessary to unplug the camera if not desired') - # get the bandwidth (in MByte/s) - camPar.bandwidth = ueye.is_GetUsedBandwidth(camPar.hCam) - - print(' Bandwidth = ' + 
str(camPar.bandwidth) + ' MB/s') - ############################### Set FrameRate ################################# - ### Read current FrameRate - dblFPS_init = ueye.c_double() - nRet = ueye.is_GetFramesPerSecond(camPar.hCam, dblFPS_init) - if nRet != ueye.IS_SUCCESS: - print("FrameRate getting ERROR") - else: - dblFPS_eff = dblFPS_init - print(' current FPS = '+str(round(dblFPS_init.value*100)/100) + ' fps') - - ### Set new FrameRate - # if fps > 17.771: # maximum value depends of the AOI size, pixel clock etc.... - # fps = 17.771 - # print('FPS exceed upper limit <= 17.771') - if fps < 1: - fps = 1 - print('FPS exceed lower limit >= 1') - - dblFPS = ueye.c_double(fps) - if (dblFPS.value < dblFPS_init.value-0.01) | (dblFPS.value > dblFPS_init.value+0.01): - newFPS = ueye.c_double() - nRet = ueye.is_SetFrameRate(camPar.hCam, dblFPS, newFPS) - time.sleep(1) - if nRet != ueye.IS_SUCCESS: - print("FrameRate setting ERROR") - else: - print(' new FPS = '+str(round(newFPS.value*100)/100) + ' fps') - ### Read again the effective FPS / depend of the image size, 17.7 fps is not possible with the entire image size (ie 2076x3088) - dblFPS_eff = ueye.c_double() - nRet = ueye.is_GetFramesPerSecond(camPar.hCam, dblFPS_eff) + # It is necessary to execute twice this code to take account the parameter modification + for i in range(2): + ############################### Set Pixel Clock ############################### + ### Get range of pixel clock, result : range = [118 474] MHz (Inc = 0) + getpixelclock = ueye.UINT(0) + newpixelclock = ueye.UINT(0) + newpixelclock.value = pixelClock + PixelClockRange = (ueye.int * 3)() + + # Get pixel clock range + nRet = ueye.is_PixelClock(camPar.hCam, ueye.IS_PIXELCLOCK_CMD_GET_RANGE, PixelClockRange, ueye.sizeof(PixelClockRange)) + if nRet == ueye.IS_SUCCESS: + nPixelClockMin = PixelClockRange[0] + nPixelClockMax = PixelClockRange[1] + nPixelClockInc = PixelClockRange[2] + + # Set pixel clock + check_ueye(ueye.is_PixelClock, camPar.hCam, 
ueye.PIXELCLOCK_CMD.IS_PIXELCLOCK_CMD_SET, newpixelclock, + ueye.sizeof(newpixelclock)) + # Get current pixel clock + check_ueye(ueye.is_PixelClock, camPar.hCam, ueye.PIXELCLOCK_CMD.IS_PIXELCLOCK_CMD_GET, getpixelclock, + ueye.sizeof(getpixelclock)) + + camPar.pixelClock = getpixelclock.value + if i == 1: + print(' pixel clock = ' + str(getpixelclock) + ' MHz') + if getpixelclock == 118: + if i == 1: + print('Pixel clock blocked to 118 MHz, it is necessary to unplug the camera if not desired') + # get the bandwidth (in MByte/s) + camPar.bandwidth = ueye.is_GetUsedBandwidth(camPar.hCam) + if i == 1: + print(' Bandwidth = ' + str(camPar.bandwidth) + ' MB/s') + ############################### Set FrameRate ################################# + ### Read current FrameRate + dblFPS_init = ueye.c_double() + nRet = ueye.is_GetFramesPerSecond(camPar.hCam, dblFPS_init) + if nRet != ueye.IS_SUCCESS: + print("FrameRate getting ERROR") + else: + dblFPS_eff = dblFPS_init + if i == 1: + print(' current FPS = '+str(round(dblFPS_init.value*100)/100) + ' fps') + if fps < 1: + fps = 1 + if i == 1: + print('FPS exceed lower limit >= 1') + + dblFPS = ueye.c_double(fps) + if (dblFPS.value < dblFPS_init.value-0.01) | (dblFPS.value > dblFPS_init.value+0.01): + newFPS = ueye.c_double() + nRet = ueye.is_SetFrameRate(camPar.hCam, dblFPS, newFPS) + time.sleep(1) if nRet != ueye.IS_SUCCESS: - print("FrameRate getting ERROR") - else: - print(' effective FPS = '+str(round(dblFPS_eff.value*100)/100) + ' fps') - ############################### Set GAIN ###################################### - #### Read maximum value of the Gain depending of the sensor type - # gain_max_c = ueye.c_int(100) - # gain_max_code = ueye.is_SetHWGainFactor(camPar.hCam, ueye.IS_INQUIRE_MASTER_GAIN_FACTOR, gain_max_c) - # if nRet == ueye.IS_SUCCESS: - # print('current GAIN = '+str(gain_max_code)) - # else: - # print('Error to get GAIN') - - #### Maximum gain is depending of the sensor.
Convertion gain code to gain to limit values from 0 to 100 - # gain_code = gain * slope + b - gain_max_code = 1450 - gain_min_code = 100 - gain_max = 100 - gain_min = 0 - slope = (gain_max_code-gain_min_code)/(gain_max-gain_min) - b = gain_min_code - #### Read gain setting - current_gain_code = ueye.c_int() - current_gain_code = ueye.is_SetHWGainFactor(camPar.hCam, ueye.IS_GET_MASTER_GAIN_FACTOR, current_gain_code) - current_gain = round((current_gain_code-b)/slope) - - print(' current GAIN = '+str(current_gain)) - gain_eff = current_gain - - ### Set new gain value - gain = ueye.c_int(Gain) - if gain.value != current_gain: - if gain.value < 0: - gain = ueye.c_int(0) - print('Gain exceed lower limit >= 0') - elif gain.value > 100: - gain = ueye.c_int(100) - print('Gain exceed upper limit <= 100') - gain_code = ueye.c_int(round(slope*gain.value+b)) + print("FrameRate setting ERROR") + else: + if i == 1: + print(' new FPS = '+str(round(newFPS.value*100)/100) + ' fps') + ### Read again the effective FPS / depend of the image size, 17.7 fps is not possible with the entire image size (ie 2076x3088) + dblFPS_eff = ueye.c_double() + nRet = ueye.is_GetFramesPerSecond(camPar.hCam, dblFPS_eff) + if nRet != ueye.IS_SUCCESS: + print("FrameRate getting ERROR") + else: + if i == 1: + print(' effective FPS = '+str(round(dblFPS_eff.value*100)/100) + ' fps') + ############################### Set GAIN ###################################### + #### Maximum gain is depending of the sensor. 
Conversion gain code to gain to limit values from 0 to 100 + # gain_code = gain * slope + b + gain_max_code = 1450 + gain_min_code = 100 + gain_max = 100 + gain_min = 0 + slope = (gain_max_code-gain_min_code)/(gain_max-gain_min) + b = gain_min_code + #### Read gain setting + current_gain_code = ueye.c_int() + current_gain_code = ueye.is_SetHWGainFactor(camPar.hCam, ueye.IS_GET_MASTER_GAIN_FACTOR, current_gain_code) + current_gain = round((current_gain_code-b)/slope) - + if i == 1: + print(' current GAIN = '+str(current_gain)) + gain_eff = current_gain - + ### Set new gain value + gain = ueye.c_int(Gain) + if gain.value != current_gain: + if gain.value < 0: + gain = ueye.c_int(0) + if i == 1: + print('Gain exceed lower limit >= 0') + elif gain.value > 100: + gain = ueye.c_int(100) + if i == 1: + print('Gain exceed upper limit <= 100') + gain_code = ueye.c_int(round(slope*gain.value+b)) - + ueye.is_SetHWGainFactor(camPar.hCam, ueye.IS_SET_MASTER_GAIN_FACTOR, gain_code) + new_gain = round((gain_code-b)/slope) + +
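The gain handling in `setup_cam` maps a user-facing gain in [0, 100] onto the sensor's hardware gain-factor code in [100, 1450] through the linear relation `gain_code = slope * gain + b`, with `slope = (1450 - 100) / (100 - 0) = 13.5` and `b = 100`. A pure-Python sketch of the conversion pair (no camera needed; the function names are illustrative, not part of the SPAS API):

```python
GAIN_MIN_CODE, GAIN_MAX_CODE = 100, 1450  # sensor-specific code bounds used above
GAIN_MIN, GAIN_MAX = 0, 100               # user-facing gain range

SLOPE = (GAIN_MAX_CODE - GAIN_MIN_CODE) / (GAIN_MAX - GAIN_MIN)  # 13.5
B = GAIN_MIN_CODE


def gain_to_code(gain):
    # Clamp to [0, 100] as setup_cam does, then apply the linear map.
    gain = max(GAIN_MIN, min(GAIN_MAX, gain))
    return round(SLOPE * gain + B)


def code_to_gain(code):
    # Inverse map, mirroring `round((current_gain_code - b) / slope)` above.
    return round((code - B) / SLOPE)


print(gain_to_code(0), gain_to_code(50), gain_to_code(100))  # 100 775 1450
print(code_to_gain(775))                                     # 50
```

The clamp-before-map order matters: it guarantees the code handed to `is_SetHWGainFactor` always stays inside the sensor's accepted range.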
if i == 1: + print(' new GAIN = '+str(new_gain)) + gain_eff = new_gain + ############################### Set GAIN Boost ################################ + ### Read current state of the gain boost + current_gain_boost_bool = ueye.is_SetGainBoost(camPar.hCam, ueye.IS_GET_GAINBOOST) if nRet != ueye.IS_SUCCESS: - print("Gain boost setting ERROR") - ############################### Set Gamma ##################################### - ### Check boundary of Gamma - if nGamma > 2.2: - nGamma = 2.2 - print('Gamma exceed upper limit <= 2.2') - elif nGamma < 1: - nGamma = 1 - print('Gamma exceed lower limit >= 1') - ### Read current Gamma - c_nGamma_init = ueye.c_void_p() - sizeOfnGamma = ueye.c_uint(4) - nRet = ueye.is_Gamma(camPar.hCam, ueye.IS_GAMMA_CMD_GET, c_nGamma_init, sizeOfnGamma) - if nRet != ueye.IS_SUCCESS: - print("Gamma getting ERROR") - else: - print(' current Gamma = ' + str(c_nGamma_init.value/100)) - ### Set Gamma - c_nGamma = ueye.c_void_p(round(nGamma*100)) # need to multiply by 100 [100 - 220] - if c_nGamma_init.value != c_nGamma.value: - nRet = ueye.is_Gamma(camPar.hCam, ueye.IS_GAMMA_CMD_SET, c_nGamma, sizeOfnGamma) - if nRet != ueye.IS_SUCCESS: - print("Gamma setting ERROR") - else: - print(' new Gamma = '+str(c_nGamma.value/100)) - ############################### Set Exposure time ############################# - ### Read current Exposure Time - getExposure = ueye.c_double() - sizeOfpParam = ueye.c_uint(8) - nRet = ueye.is_Exposure(camPar.hCam, ueye.IS_EXPOSURE_CMD_GET_EXPOSURE, getExposure, sizeOfpParam) - if nRet == ueye.IS_SUCCESS: - getExposure.value = round(getExposure.value*1000)/1000 + print("Gain boost ERROR") + if current_gain_boost_bool == 0: + current_gain_boost = 'OFF' + elif current_gain_boost_bool == 1: + current_gain_boost = 'ON' - print(' current Exposure Time = ' + str(getExposure.value) + ' ms') - ### Get minimum Exposure Time - minExposure = ueye.c_double() - nRet = ueye.is_Exposure(camPar.hCam, 
ueye.IS_EXPOSURE_CMD_GET_EXPOSURE_RANGE_MIN, minExposure, sizeOfpParam) - # if nRet == ueye.IS_SUCCESS: - # print('MIN Exposure Time = ' + str(minExposure.value)) - ### Get maximum Exposure Time - maxExposure = ueye.c_double() - nRet = ueye.is_Exposure(camPar.hCam, ueye.IS_EXPOSURE_CMD_GET_EXPOSURE_RANGE_MAX, maxExposure, sizeOfpParam) - # if nRet == ueye.IS_SUCCESS: - # print('MAX Exposure Time = ' + str(maxExposure.value)) - ### Get increment Exposure Time - incExposure = ueye.c_double() - nRet = ueye.is_Exposure(camPar.hCam, ueye.IS_EXPOSURE_CMD_GET_EXPOSURE_RANGE_INC, incExposure, sizeOfpParam) - # if nRet == ueye.IS_SUCCESS: - # print('INC Exposure Time = ' + str(incExposure.value)) - ### Set new Exposure Time - setExposure = ueye.c_double(ExposureTime) - if setExposure.value > maxExposure.value: - setExposure.value = maxExposure.value - print('Exposure Time exceed upper limit <= ' + str(maxExposure.value)) - elif setExposure.value < minExposure.value: - setExposure.value = minExposure.value - print('Exposure Time exceed lower limit >= ' + str(minExposure.value)) - - if (setExposure.value < getExposure.value-incExposure.value/2) | (setExposure.value > getExposure.value+incExposure.value/2): - nRet = ueye.is_Exposure(camPar.hCam, ueye.IS_EXPOSURE_CMD_SET_EXPOSURE, setExposure, sizeOfpParam) + if i == 1: + print('current Gain boost mode = ' + current_gain_boost) + + ### Set the state of the gain boost + if gain_boost != current_gain_boost: + if gain_boost == 'OFF': + nRet = ueye.is_SetGainBoost (camPar.hCam, ueye.IS_SET_GAINBOOST_OFF) + print(' new Gain Boost : OFF') + + elif gain_boost == 'ON': + nRet = ueye.is_SetGainBoost (camPar.hCam, ueye.IS_SET_GAINBOOST_ON) + print(' new Gain Boost : ON') + + if nRet != ueye.IS_SUCCESS: + print("Gain boost setting ERROR") + ############################### Set Gamma ##################################### + ### Check boundary of Gamma + if nGamma > 2.2: + nGamma = 2.2 + if i == 1: + print('Gamma exceed upper limit <= 2.2') + 
elif nGamma < 1: + nGamma = 1 + if i == 1: + print('Gamma exceed lower limit >= 1') + ### Read current Gamma + c_nGamma_init = ueye.c_void_p() + sizeOfnGamma = ueye.c_uint(4) + nRet = ueye.is_Gamma(camPar.hCam, ueye.IS_GAMMA_CMD_GET, c_nGamma_init, sizeOfnGamma) if nRet != ueye.IS_SUCCESS: - print("Exposure Time ERROR") + print("Gamma getting ERROR") else: - print(' new Exposure Time = ' + str(round(setExposure.value*1000)/1000) + ' ms') - ############################### Set Black Level ############################### - # nMode = ueye.IS_AUTO_BLACKLEVEL_OFF - # nRet = ueye.is_Blacklevel(camPar.hCam, ueye.IS_BLACKLEVEL_CMD_GET_MODE, nMode - - current_black_level_c = ueye.c_uint() - sizeOfBlack_level = ueye.c_uint(4) - #nRet = ueye.is_Blacklevel(camPar.hCam, ueye.IS_BLACKLEVEL_CMD_GET_OFFSET_DEFAULT, current_black_level_c, sizeOfBlack_level) - - ### Read current Black Level - nRet = ueye.is_Blacklevel(camPar.hCam, ueye.IS_BLACKLEVEL_CMD_GET_OFFSET, current_black_level_c, sizeOfBlack_level) - if nRet != ueye.IS_SUCCESS: - print("Black Level getting ERROR") - else: - print(' current Black Level = ' + str(current_black_level_c.value)) - - ### Set Black Level - if black_level > 255: - black_level = 255 - print('Black Level exceed upper limit <= 255') - if black_level < 0: - black_level = 0 - print('Black Level exceed lower limit >= 0') - - black_level_c = ueye.c_uint(black_level) - if black_level != current_black_level_c.value : - nRet = ueye.is_Blacklevel(camPar.hCam, ueye.IS_BLACKLEVEL_CMD_SET_OFFSET, black_level_c, sizeOfBlack_level) + if i == 1: + print(' current Gamma = ' + str(c_nGamma_init.value/100)) + ### Set Gamma + c_nGamma = ueye.c_void_p(round(nGamma*100)) # need to multiply by 100 [100 - 220] + if c_nGamma_init.value != c_nGamma.value: + nRet = ueye.is_Gamma(camPar.hCam, ueye.IS_GAMMA_CMD_SET, c_nGamma, sizeOfnGamma) + if nRet != ueye.IS_SUCCESS: + print("Gamma setting ERROR") + else: + if i == 1: + print(' new Gamma = '+str(c_nGamma.value/100)) + 
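The gamma block above works with `is_Gamma` codes equal to the gamma value multiplied by 100, so the accepted range [1.0, 2.2] corresponds to integer codes [100, 220]. A minimal sketch of that clamp-and-encode step (pure Python; the function name is illustrative, not part of the SPAS API):

```python
def gamma_to_code(n_gamma):
    # Clamp to the driver's accepted range [1.0, 2.2], then scale by 100 so
    # that e.g. 1.8 becomes the integer code 180, as in c_nGamma above.
    n_gamma = max(1.0, min(2.2, n_gamma))
    return round(n_gamma * 100)


print(gamma_to_code(1.8), gamma_to_code(0.5), gamma_to_code(3.0))  # 180 100 220
```

Decoding is the mirror image, `code / 100`, which is exactly what the surrounding prints do when reporting the current and new gamma.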
############################### Set Exposure time ############################# + ### Read current Exposure Time + getExposure = ueye.c_double() + sizeOfpParam = ueye.c_uint(8) + nRet = ueye.is_Exposure(camPar.hCam, ueye.IS_EXPOSURE_CMD_GET_EXPOSURE, getExposure, sizeOfpParam) + if nRet == ueye.IS_SUCCESS: + getExposure.value = round(getExposure.value*1000)/1000 + + if i == 1: + print(' current Exposure Time = ' + str(getExposure.value) + ' ms') + ### Get minimum Exposure Time + minExposure = ueye.c_double() + nRet = ueye.is_Exposure(camPar.hCam, ueye.IS_EXPOSURE_CMD_GET_EXPOSURE_RANGE_MIN, minExposure, sizeOfpParam) + ### Get maximum Exposure Time + maxExposure = ueye.c_double() + nRet = ueye.is_Exposure(camPar.hCam, ueye.IS_EXPOSURE_CMD_GET_EXPOSURE_RANGE_MAX, maxExposure, sizeOfpParam) + ### Get increment Exposure Time + incExposure = ueye.c_double() + nRet = ueye.is_Exposure(camPar.hCam, ueye.IS_EXPOSURE_CMD_GET_EXPOSURE_RANGE_INC, incExposure, sizeOfpParam) + ### Set new Exposure Time + setExposure = ueye.c_double(ExposureTime) + if setExposure.value > maxExposure.value: + setExposure.value = maxExposure.value + if i == 1: + print('Exposure Time exceed upper limit <= ' + str(maxExposure.value)) + elif setExposure.value < minExposure.value: + setExposure.value = minExposure.value + if i == 1: + print('Exposure Time exceed lower limit >= ' + str(minExposure.value)) + + if (setExposure.value < getExposure.value-incExposure.value/2) | (setExposure.value > getExposure.value+incExposure.value/2): + nRet = ueye.is_Exposure(camPar.hCam, ueye.IS_EXPOSURE_CMD_SET_EXPOSURE, setExposure, sizeOfpParam) + if nRet != ueye.IS_SUCCESS: + print("Exposure Time ERROR") + else: + if i == 1: + print(' new Exposure Time = ' + str(round(setExposure.value*1000)/1000) + ' ms') + ############################### Set Black Level ############################### + current_black_level_c = ueye.c_uint() + sizeOfBlack_level = ueye.c_uint(4) + ### Read current Black Level + nRet = 
ueye.is_Blacklevel(camPar.hCam, ueye.IS_BLACKLEVEL_CMD_GET_OFFSET, current_black_level_c, sizeOfBlack_level) if nRet != ueye.IS_SUCCESS: - print("Black Level setting ERROR") + print("Black Level getting ERROR") else: - print(' new Black Level = ' + str(black_level_c.value)) + if i == 1: + print(' current Black Level = ' + str(current_black_level_c.value)) + + ### Set Black Level + if black_level > 255: + black_level = 255 + if i == 1: + print('Black Level exceed upper limit <= 255') + if black_level < 0: + black_level = 0 + if i == 1: + print('Black Level exceed lower limit >= 0') + + black_level_c = ueye.c_uint(black_level) + if black_level != current_black_level_c.value : + nRet = ueye.is_Blacklevel(camPar.hCam, ueye.IS_BLACKLEVEL_CMD_SET_OFFSET, black_level_c, sizeOfBlack_level) + if nRet != ueye.IS_SUCCESS: + print("Black Level setting ERROR") + else: + if i == 1: + print(' new Black Level = ' + str(black_level_c.value)) - - # pParam = ueye.c_void_p() - # nRet = ueye.is_DeviceFeature(camPar.hCam, ueye.IS_DEVICE_FEATURE_CMD_GET_SHUTTER_MODE, pParam, ueye.sizeof(pParam)) - # print('nRet = ' + str(nRet)) - # print('shutter mode : ' + str(pParam.value)) - - # nShutterMode = ueye.c_uint(ueye.IS_DEVICE_FEATURE_CAP_SHUTTER_MODE_ROLLING) - # nRet = ueye.is_DeviceFeature(camPar.hCam, ueye.IS_DEVICE_FEATURE_CMD_SET_SHUTTER_MODE, nShutterMode, - # ueye.sizeof(nShutterMode)) - # print('shutter mode = ' + str(nShutterMode.value) + ' / enable : ' + str(nRet)) - - # nShutterMode = ueye.c_uint(ueye.IS_DEVICE_FEATURE_CAP_SHUTTER_MODE_ROLLING_GLOBAL_START) - # nRet = ueye.is_DeviceFeature(camPar.hCam, ueye.IS_DEVICE_FEATURE_CMD_SET_SHUTTER_MODE, nShutterMode, - # ueye.sizeof(nShutterMode)) - # print('shutter mode = ' + str(nShutterMode.value) + ' / enable : ' + str(nRet)) - - # # Read the global flash params - # flashParams = ueye.IO_FLASH_PARAMS() - # nRet = ueye.is_IO(camPar.hCam, ueye.IS_IO_CMD_FLASH_GET_PARAMS, flashParams, ueye.sizeof(flashParams)) - # if (nRet == 
ueye.IS_SUCCESS): - # nDelay = flashParams.s32Delay - # print('nDelay = ' + str(nDelay.value)) - # nDuration = flashParams.u32Duration - # print('nDuration = ' + str(nDuration.value)) - - # nRet = ueye.is_IO(camPar.hCam, ueye.IS_IO_CMD_FLASH_GET_PARAMS_MIN, flashParams, ueye.sizeof(flashParams)) - # print('nDelay_min = ' + str(flashParams.s32Delay.value)) - # print('nDuration_min = ' + str(flashParams.u32Duration.value)) - - # flashParams.s32Delay.value = 0 - # flashParams.u32Duration.value = 40 - # # Apply the global flash params and set the flash params to these values - # nRet = ueye.is_IO(camPar.hCam, ueye.IS_IO_CMD_FLASH_SET_PARAMS, flashParams, ueye.sizeof(flashParams)) - # print('nDelay = ' + str(flashParams.s32Delay.value)) - # print('nDuration = ' + str(flashParams.u32Duration.value)) - - # nRet = ueye.is_IO(camPar.hCam, ueye.IS_IO_CMD_FLASH_GET_PARAMS, - # flashParams, ueye.sizeof(flashParams)) - # if (nRet == ueye.IS_SUCCESS): - # nDelay = flashParams.s32Delay - # print('nDelay = ' + str(nDelay.value)) - # nDuration = flashParams.u32Duration - # print('nDuration = ' + str(nDuration.value)) - - # nShutterMode = ueye.c_uint(ueye.IS_DEVICE_FEATURE_CAP_SHUTTER_MODE_GLOBAL) - # nRet = ueye.is_DeviceFeature(camPar.hCam, ueye.IS_DEVICE_FEATURE_CMD_SET_SHUTTER_MODE, nShutterMode, - # ueye.sizeof(nShutterMode)) - # print('shutter mode = ' + str(nShutterMode.value) + ' / enable : ' + str(nRet)) - - # nShutterMode = ueye.c_uint(ueye.IS_DEVICE_FEATURE_CAP_SHUTTER_MODE_GLOBAL_ALTERNATIVE_TIMING) - # nRet = ueye.is_DeviceFeature(camPar.hCam, ueye.IS_DEVICE_FEATURE_CMD_SET_SHUTTER_MODE, nShutterMode, - # ueye.sizeof(nShutterMode)) - # print('shutter mode = ' + str(nShutterMode.value) + ' / enable : ' + str(nRet)) camPar.fps = round(dblFPS_eff.value*100)/100 camPar.gain = gain_eff @@ -2556,10 +2854,13 @@ def snapshot(camPar, pathIDSsnapshot, pathIDSsnapshot_overview): # ...reshape it in an numpy array... 
frame = np.reshape(array,(camPar.rectAOI.s32Height.value, camPar.rectAOI.s32Width.value))#, camPar.bytes_per_pixel)) - with pathIDSsnapshot.open('ab') as f: #(pathname, mode='w', encoding='utf-8') as f: #('ab') as f: + with pathIDSsnapshot.open('wb') as f: np.save(f,frame) - im = Image.fromarray(frame) + maxi = np.amax(frame) + if maxi == 0: + maxi = 1 + im = Image.fromarray(frame*math.floor(255/maxi)) im.save(pathIDSsnapshot_overview) maxi = np.amax(frame) diff --git a/spas/acquisition_SPIM1D.py b/spas/acquisition_SPIM1D.py new file mode 100644 index 0000000..de71f26 --- /dev/null +++ b/spas/acquisition_SPIM1D.py @@ -0,0 +1,2416 @@ +# -*- coding: utf-8 -*- +__author__ = 'Guilherme Beneti Martins' + +"""Acquisition utility functions. + + Acquisition module is a generic module that calls functions for the different setups (SPC2D_1arm, SPC2D_2arms, SCP1D and SPIM) + +""" + +import warnings +from time import sleep, perf_counter_ns +from typing import NamedTuple, Tuple, List, Optional +from collections import namedtuple +from pathlib import Path +from multiprocessing import Process, Queue +import shutil +import math + +import numpy as np +from PIL import Image +##### DLL for the DMD +try: + from ALP4 import ALP4, ALP_FIRSTFRAME, ALP_LASTFRAME + from ALP4 import ALP_AVAIL_MEMORY, ALP_DEV_DYN_SYNCH_OUT1_GATE, tAlpDynSynchOutGate + # print('ALP4 is ok in Acquisition file') +except: + class ALP4: + pass +##### DLL for the spectrometer Avantes +try: + from msl.equipment import EquipmentRecord, ConnectionRecord, Backend + from msl.equipment.resources.avantes import MeasureCallback, Avantes +except: + pass + +from tqdm import tqdm +from spas.metadata_SPC2D import DMDParameters, MetaData, AcquisitionParameters +from spas.metadata_SPC2D import SpectrometerParameters, save_metadata, CAM, save_metadata_2arms +from spas.reconstruction_nn import reconstruct_process, plot_recon, ReconstructionParameters + +# DLL for
the IDS CAMERA +try: + from pyueye import ueye, ueye_tools +except: + print('ueye DLL not installed') + +from matplotlib import pyplot as plt +from IPython import get_ipython +import ctypes as ct +import logging +import time +import threading + + +from spas.DMD_module import init_DMD, calculate_timings, setup_DMD, setup_patterns, setup_timings, _sequence_limits, _update_sequence, disconnect_DMD + + +def _init_spectrometer() -> Avantes: + """Initialize and connect to an Avantes Spectrometer. + + Returns: + Avantes: Avantes spectrometer. + """ + + dll_path = Path(__file__).parent.parent.joinpath( + 'lib/avaspec3/avaspecx64.dll') + + record = EquipmentRecord( + manufacturer='Avantes', + model='AvaSpec-UCLS2048BCL-EVO-RS', # update for your device + serial='2011126U1', # update for your device + connection=ConnectionRecord( + address=f'SDK::{dll_path}', + backend=Backend.MSL)) + + # Initialize Avantes SDK and establish the connection to the spectrometer + ava = record.connect() + print('Spectrometer connected') + + return ava + + +def _init_CAM(): + """ + Initialize and connect to the IDS camera. 
+ + Returns: + CAM: a structure containing the parameters of the IDS camera + """ + camPar = CAM(hCam = ueye.HIDS(0), + sInfo = ueye.SENSORINFO(), + cInfo = ueye.CAMINFO(), + nBitsPerPixel = ueye.INT(8), + m_nColorMode = ueye.INT(), + bytes_per_pixel = int( ueye.INT(8)/ 8), + rectAOI = ueye.IS_RECT(), + pcImageMemory = ueye.c_mem_p(), + MemID = ueye.int(), + pitch = ueye.INT(), + fps = float(), + gain = int(), + gainBoost = str(), + gamma = float(), + exposureTime = float(), + blackLevel = int(), + camActivated = bool(), + pixelClock = ueye.uint(), + bandwidth = float(), + Memory = bool(), + Exit = int(), + vidFormat = str(), + gate_period = int(), + trigger_mode = str(), + avi = ueye.int(), + punFileID = ueye.c_uint(), + timeout = int(), + time_array = [], + int_time_spect = float(), + black_pattern_num = int(), + insert_patterns = bool(), + acq_mode = str(), + ) + + # # Camera Initialization --- + ### Starts the driver and establishes the connection to the camera + nRet = ueye.is_InitCamera(camPar.hCam, None) + if nRet != ueye.IS_SUCCESS: + print("is_InitCamera ERROR") + + ### Reads out the data hard-coded in the non-volatile camera memory and writes it to the data structure that cInfo points to + nRet = ueye.is_GetCameraInfo(camPar.hCam, camPar.cInfo) + if nRet != ueye.IS_SUCCESS: + print("is_GetCameraInfo ERROR") + + ### You can query additional information about the sensor type used in the camera + nRet = ueye.is_GetSensorInfo(camPar.hCam, camPar.sInfo) + if nRet != ueye.IS_SUCCESS: + print("is_GetSensorInfo ERROR") + + ### set camera parameters to default values + nRet = ueye.is_ResetToDefault(camPar.hCam) + if nRet != ueye.IS_SUCCESS: + print("is_ResetToDefault ERROR") + + ### Set display mode to DIB + nRet = ueye.is_SetDisplayMode(camPar.hCam, ueye.IS_SET_DM_DIB) + + ### Set the right color mode + if int.from_bytes(camPar.sInfo.nColorMode.value, byteorder='big') == ueye.IS_COLORMODE_BAYER: + # setup the color depth to the current windows setting + 
ueye.is_GetColorDepth(camPar.hCam, camPar.nBitsPerPixel, camPar.m_nColorMode) + camPar.bytes_per_pixel = int(camPar.nBitsPerPixel / 8) + elif int.from_bytes(camPar.sInfo.nColorMode.value, byteorder='big') == ueye.IS_COLORMODE_CBYCRY: + # for color camera models use RGB32 mode + camPar.m_nColorMode = ueye.IS_CM_BGRA8_PACKED + camPar.nBitsPerPixel = ueye.INT(32) + camPar.bytes_per_pixel = int(camPar.nBitsPerPixel / 8) + elif int.from_bytes(camPar.sInfo.nColorMode.value, byteorder='big') == ueye.IS_COLORMODE_MONOCHROME: + # for color camera models use RGB32 mode + camPar.m_nColorMode = ueye.IS_CM_MONO8 + camPar.nBitsPerPixel = ueye.INT(8) + camPar.bytes_per_pixel = int(camPar.nBitsPerPixel / 8) + else: + # for monochrome camera models use Y8 mode + camPar.m_nColorMode = ueye.IS_CM_MONO8 + camPar.nBitsPerPixel = ueye.INT(8) + camPar.bytes_per_pixel = int(camPar.nBitsPerPixel / 8) + # print("else") + + ### Get the AOI (Area Of Interest) + sizeofrectAOI = ueye.c_uint(4*4) + nRet = ueye.is_AOI(camPar.hCam, ueye.IS_AOI_IMAGE_GET_AOI, camPar.rectAOI, sizeofrectAOI) + if nRet != ueye.IS_SUCCESS: + print("AOI getting ERROR") + + camPar.camActivated = 0 + + # Get current pixel clock + getpixelclock = ueye.UINT(0) + check_ueye(ueye.is_PixelClock, camPar.hCam, ueye.PIXELCLOCK_CMD.IS_PIXELCLOCK_CMD_GET, getpixelclock, + ueye.sizeof(getpixelclock)) + + camPar.pixelClock = getpixelclock + # print('pixel clock = ' + str(getpixelclock) + ' MHz') + + # get the bandwidth (in MByte/s) + camPar.bandwidth = ueye.is_GetUsedBandwidth(camPar.hCam) + + camPar.Exit = 1 + + print('IDS camera connected') + + return camPar + +def init(dmd_lib_version: str = '4.2') -> Tuple[Avantes, ALP4, int]: + """Call functions to initialize spectrometer and DMD. + + Args: + dmd_lib_version [str]: the version of the DMD library + + Returns: + Tuple[Avantes, ALP4, int]: Tuple containing equipments and DMD initial + available memory: + Avantes: + Connected spectrometer object. + ALP4: + Connected DMD object. 
+            DMD_initial_memory (int):
+                Initial memory available in DMD after initialization.
+            camPar (CAM):
+                Metadata object of the IDS monochrome camera.
+    """
+
+    DMD, DMD_initial_memory = init_DMD(dmd_lib_version)
+    camPar = _init_CAM()
+    return DMD, DMD_initial_memory, camPar
+
+
+def _setup_spectrometer(ava: Avantes,
+                        integration_time: float,
+                        integration_delay: int,
+                        start_pixel: int,
+                        stop_pixel: int,
+                        ) -> Tuple[SpectrometerParameters, List[float]]:
+    """Sets configurations in the spectrometer.
+
+    Sets all necessary configurations in the spectrometer to prepare it for a
+    measurement. Creates a SpectrometerParameters object containing its
+    metadata. Gets the wavelengths corresponding to the selected pixels.
+
+    Args:
+        ava (Avantes):
+            Avantes spectrometer.
+        integration_time (float):
+            Spectrometer exposure time during one scan in milliseconds.
+        integration_delay (int):
+            Parameter used to start the integration time not immediately after
+            the measurement request (or on an external hardware trigger), but
+            after a specified delay. Unit is based on internal FPGA clock cycle.
+        start_pixel (int):
+            Initial pixel data received from spectrometer.
+        stop_pixel (int, optional):
+            Last pixel data received from spectrometer. If None, its value is
+            determined from the amount of available pixels in the spectrometer.
+    Returns:
+        Tuple[SpectrometerParameters, List[float]]: Metadata and wavelengths.
+            spectrometer_params (SpectrometerParameters):
+                Spectrometer metadata.
+            wavelengths (List):
+                List of floats corresponding to the wavelengths associated with
+                the spectrometer's start and stop pixels.
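Example: the start and stop pixels select an inclusive slice of the per-pixel wavelength table. A minimal sketch of that slicing rule (illustrative values only; a real table comes from ``ava.get_lambda()``):

```python
import numpy as np

# Hypothetical wavelength table; an AvaSpec-ULS2048CL-EVO exposes 2048 pixels
all_wavelengths = np.linspace(200.0, 1100.0, 2048)

start_pixel, stop_pixel = 100, 1099
# Both start_pixel and stop_pixel are included in the slice, hence the +1
wavelengths = np.asarray(all_wavelengths[start_pixel:stop_pixel + 1])
assert len(wavelengths) == stop_pixel - start_pixel + 1
```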
+    """
+
+    spectrometer_detector = ava.SensType(
+        ava.get_parameter().m_Detector.m_SensorType).name
+
+    # Get the number of pixels that the spectrometer has
+    initial_available_pixels = ava.get_num_pixels()
+
+    # print(f'\nThe spectrometer has {initial_available_pixels} pixels')
+
+    # Enable the 16-bit AD converter for high resolution
+    ava.use_high_res_adc(True)
+
+    # Create the measurement configuration block
+    measconfig = ava.MeasConfigType()
+
+    measconfig.m_StartPixel = start_pixel
+
+    if stop_pixel is None:
+        measconfig.m_StopPixel = initial_available_pixels - 1
+    else:
+        measconfig.m_StopPixel = stop_pixel
+
+    measconfig.m_IntegrationTime = integration_time
+    measconfig.m_IntegrationDelay = integration_delay
+    measconfig.m_NrAverages = 1
+
+    dark_correction = ava.DarkCorrectionType()
+    dark_correction.m_Enable = 0
+    dark_correction.m_ForgetPercentage = 100
+    measconfig.m_CorDynDark = dark_correction
+
+    smoothing = ava.SmoothingType()
+    smoothing.m_SmoothPix = 0
+    smoothing.m_SmoothModel = 0
+    measconfig.m_Smoothing = smoothing
+
+    measconfig.m_SaturationDetection = 1
+
+    trigger = ava.TriggerType()
+    trigger.m_Mode = 2
+    trigger.m_Source = 0
+    trigger.m_SourceType = 0
+    measconfig.m_Trigger = trigger
+
+    control_settings = ava.ControlSettingsType()
+    control_settings.m_StrobeControl = 0
+    control_settings.m_LaserDelay = 0
+    control_settings.m_LaserWidth = 0
+    control_settings.m_LaserWaveLength = 0.00
+    control_settings.m_StoreToRam = 0
+    measconfig.m_Control = control_settings
+
+    ava.prepare_measure(measconfig)
+
+    spectrometer_params = SpectrometerParameters(
+        high_resolution=True,
+        initial_available_pixels=initial_available_pixels,
+        detector=spectrometer_detector,
+        configs=measconfig,
+        version_info=ava.get_version_info())
+
+    # Get the wavelength corresponding to each pixel
+    wavelengths = ava.get_lambda()[
+        spectrometer_params.start_pixel:spectrometer_params.stop_pixel+1]
+
+    return spectrometer_params, np.asarray(wavelengths)
+
+
+def _setup_patterns_2arms(DMD: 
ALP4,
+                          metadata: MetaData,
+                          DMD_params: DMDParameters,
+                          acquisition_params: AcquisitionParameters,
+                          camPar: CAM,
+                          cov_path: str = None) -> None:
+    """Read and send patterns to DMD.
+
+    Reads patterns from a file and sends a percentage of them to the DMD,
+    considering positive and negative Hadamard patterns, which should be even
+    in number.
+    Prints the time taken to read all patterns and to send the requested ones
+    to the DMD.
+    Updates available memory in the DMD metadata object (DMD_params).
+
+    Args:
+        DMD (ALP4):
+            Connected DMD object.
+        metadata (MetaData):
+            Metadata concerning the experiment, paths, file inputs and file
+            outputs. Must be created and filled up by the user.
+        DMD_params (DMDParameters):
+            DMD metadata object to be updated with pattern related data and with
+            memory available after patterns are sent to DMD.
+        acquisition_params (AcquisitionParameters):
+            Acquisition related metadata object. User must partially fill up
+            with pattern_compression, pattern_dimension_x, pattern_dimension_y,
+            zoom, x and y offset of patterns displayed on the DMD.
+        camPar (CAM):
+            Metadata object of the IDS monochrome camera.
+        cov_path (str):
+            Path to the covariance matrix used for reconstruction.
+            It must be a .npy (numpy) or .pt (pytorch) file. It is converted to
+            a torch tensor for reconstruction.
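Example: the white-pattern interleaving performed by this function can be sketched as follows. This is a simplified illustration, not the actual implementation: `interleave_white` is a hypothetical helper, and the real code also inserts extra white patterns when the spectrometer integration time is short.

```python
import numpy as np

def interleave_white(pattern_order, gate_period):
    """Insert a white pattern (index -1) at the start of every gate period,
    then pad the sequence so its length is even (simplified sketch)."""
    out = list(pattern_order)
    i = 0
    while i < len(out):
        if i % gate_period == 0:
            out.insert(i, -1)
        i += 1
    if len(out) % 2 != 0:
        out.append(-1)
    return np.asarray(out)

seq = interleave_white([3, 1, 2, 0, 5, 4], gate_period=4)
# One white pattern opens each gate period of 4 displayed frames
assert list(seq) == [-1, 3, 1, 2, -1, 0, 5, 4]
```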
+
+    """
+
+    file = np.load(Path(metadata.pattern_order_source))
+    pattern_order = file['pattern_order']
+    pattern_order = pattern_order.astype('int32')
+
+    # copy the black pattern to pattern number -1
+    black_pattern_dest_path = Path( metadata.pattern_source + '/' + metadata.pattern_prefix + '_' + '-1.npy' )
+
+    if not black_pattern_dest_path.is_file():
+        black_pattern_orig_path = Path( metadata.pattern_source + '/' + metadata.pattern_prefix + '_' +
+                                       str(camPar.black_pattern_num) + '.npy' )
+        shutil.copyfile(black_pattern_orig_path, black_pattern_dest_path)
+
+
+    # add white patterns for the camera
+    if camPar.insert_patterns == 1:
+        inc = 0
+        while True:
+            try:
+                pattern_order[inc]  # raises IndexError at the end of the array, which stops the loop
+                if (inc % camPar.gate_period) == 0:
+                    pattern_order = np.insert(pattern_order, inc, -1)
+                    # extra white patterns are required if the integration time is shorter than 3.85 ms
+                    if camPar.int_time_spect < 3.85:
+                        pattern_order = np.insert(pattern_order, inc+1, -1)
+                        if camPar.int_time_spect < 1.65:
+                            pattern_order = np.insert(pattern_order, inc+2, -1)
+                            if camPar.int_time_spect < 1:
+                                pattern_order = np.insert(pattern_order, inc+1, -1)
+                                if camPar.int_time_spect <= 0.6:
+                                    pattern_order = np.insert(pattern_order, inc+1, -1)
+                inc = inc + 1
+            except IndexError:
+                break
+
+    if (len(pattern_order) % 2) != 0:  # Add a black pattern at the end of the sequence if the 
pattern number is odd
+        pattern_order = np.insert(pattern_order, len(pattern_order), -1)
+        print('Pattern order is odd => a black pattern was automatically inserted at the end; delete it when tuning the spectrometer.')
+
+    pos_neg = file['pos_neg']
+
+    bitplanes = 1
+    DMD_params.bitplanes = bitplanes
+
+    if (DMD_params.initial_memory - DMD.DevInquire(ALP_AVAIL_MEMORY) ==
+            len(pattern_order)):
+        print('Reusing patterns from previous acquisition')
+        acquisition_params.pattern_amount = _sequence_limits(
+            DMD,
+            acquisition_params.pattern_compression,
+            len(pattern_order),
+            pos_neg=pos_neg)
+
+    else:
+        if (DMD.Seqs):
+            DMD.FreeSeq()
+
+        _update_sequence(DMD, DMD_params, acquisition_params, metadata.pattern_source, metadata.pattern_prefix,
+                         pattern_order, bitplanes)
+        print(f'DMD available memory after sequence allocation: '
+              f'{DMD.DevInquire(ALP_AVAIL_MEMORY)}')
+        acquisition_params.pattern_amount = _sequence_limits(
+            DMD,
+            acquisition_params.pattern_compression,
+            len(pattern_order),
+            pos_neg=pos_neg)
+
+    acquisition_params.patterns = (
+        pattern_order[0:acquisition_params.pattern_amount])
+
+    acquisition_params.patterns_wp = acquisition_params.patterns
+
+    # Confirm memory allocated in DMD
+    DMD_params.update_memory(DMD.DevInquire(ALP_AVAIL_MEMORY))
+
+
+def setup(spectrometer: Avantes,
+          DMD: ALP4,
+          DMD_initial_memory: int,
+          metadata: MetaData,
+          acquisition_params: AcquisitionParameters,
+          start_pixel: int = 0,
+          stop_pixel: Optional[int] = None,
+          integration_time: float = 1,
+          integration_delay: int = 0,
+          DMD_output_synch_pulse_delay: int = 0,
+          add_illumination_time: int = 356,
+          dark_phase_time: int = 44,
+          DMD_trigger_in_delay: int = 0,
+          pattern_to_display: str = 'white',
+          loop: bool = False
+          ) -> Tuple[SpectrometerParameters, DMDParameters]:
+    """Setup everything needed to start an acquisition.
+
+    Sets all parameters for DMD, spectrometer, DMD patterns and DMD timings.
+    Must be called before every acquisition.
+
+    Args:
+        spectrometer (Avantes):
+            Connected spectrometer (Avantes object).
+        DMD (ALP4):
+            Connected DMD.
+        DMD_initial_memory (int):
+            Initial memory available in DMD after initialization.
+        metadata (MetaData):
+            Metadata concerning the experiment, paths, file inputs and file
+            outputs. Must be created and filled up by the user.
+        acquisition_params (AcquisitionParameters):
+            Acquisition related metadata object. User must partially fill up
+            with pattern_compression, pattern_dimension_x, pattern_dimension_y.
+        start_pixel (int):
+            Initial pixel data received from spectrometer. Default is 0.
+        stop_pixel (int, optional):
+            Last pixel data received from spectrometer. Default is None, in
+            which case it is determined from the amount of available pixels in
+            the spectrometer.
+        integration_time (float):
+            Spectrometer exposure time during one scan in milliseconds. Default
+            is 1 ms.
+        integration_delay (int):
+            Parameter used to start the integration time not immediately after
+            the measurement request (or on an external hardware trigger), but
+            after a specified delay. Unit is based on internal FPGA clock cycle.
+            Default is 0 us.
+        DMD_output_synch_pulse_delay (int):
+            Time in microseconds between start of the frame synch output pulse
+            and the start of the pattern display (in master mode). Default is
+            0 us.
+        add_illumination_time (int):
+            Extra time in microseconds to account for the spectrometer's
+            "dead time". Default is 356 us.
+        dark_phase_time (int):
+            Time in microseconds taken by the DMD mirrors to completely tilt.
+            Minimum time for XGA type DMD is 44 us. Default is 44 us.
+        DMD_trigger_in_delay (int):
+            Time in microseconds between the incoming trigger edge and the start
+            of the pattern display on DMD (slave mode). Default is 0 us.
+        pattern_to_display (str):
+            Pattern displayed on the DMD to tune the spectrometer. Default is the white
+            
Default is white + pattern + loop (bool): + is to projet in loop, one or few patterns continuously (see AlpProjStartCont + in the doc for more detail). Default is False + Raises: + ValueError: Sum of dark phase and additional illumination time is lower + than 400 us. + + Returns: + Tuple[SpectrometerParameters, DMDParameters, List]: Tuple containing DMD + and spectrometer relate metadata, as well as wavelengths. + spectrometer_params (SpectrometerParameters): + Spectrometer metadata object with spectrometer configurations. + DMD_params (DMDParameters): + DMD metadata object with DMD configurations. + """ + + if loop == False: + path = Path(metadata.output_directory) + if not path.exists(): + path.mkdir() + + if dark_phase_time + add_illumination_time < 350: + raise ValueError(f'Sum of dark phase and additional illumination time ' + f'is {dark_phase_time + add_illumination_time}.' + f' Must be greater than 350 µs.') + + elif dark_phase_time + add_illumination_time < 400: + warnings.warn(f'Sum of dark phase and additional illumination time ' + f'is {dark_phase_time + add_illumination_time}.' 
+                      f' It is recommended to choose at least 400 µs.')
+
+    synch_pulse_width, illumination_time, picture_time = calculate_timings(
+        integration_time,
+        integration_delay,
+        add_illumination_time,
+        DMD_output_synch_pulse_delay,
+        dark_phase_time)
+
+    spectrometer_params, wavelengths = _setup_spectrometer(
+        spectrometer,
+        integration_time,
+        integration_delay,
+        start_pixel,
+        stop_pixel)
+
+    acquisition_params.wavelengths = np.asarray(wavelengths, dtype=np.float64)
+
+    DMD_params = setup_DMD(DMD, add_illumination_time, DMD_initial_memory)
+
+    setup_patterns(DMD=DMD, metadata=metadata, DMD_params=DMD_params,
+                   acquisition_params=acquisition_params, loop=loop,
+                   pattern_to_display=pattern_to_display)
+
+    setup_timings(DMD, DMD_params, picture_time, illumination_time,
+                  DMD_output_synch_pulse_delay, synch_pulse_width,
+                  DMD_trigger_in_delay, add_illumination_time)
+
+    return spectrometer_params, DMD_params
+
+
+def setup_2arms(spectrometer: Avantes,
+                DMD: ALP4,
+                camPar: CAM,
+                DMD_initial_memory: int,
+                metadata: MetaData,
+                acquisition_params: AcquisitionParameters,
+                start_pixel: int = 0,
+                stop_pixel: Optional[int] = None,
+                integration_time: float = 1,
+                integration_delay: int = 0,
+                DMD_output_synch_pulse_delay: int = 0,
+                add_illumination_time: int = 356,
+                dark_phase_time: int = 44,
+                DMD_trigger_in_delay: int = 0
+                ) -> Tuple[SpectrometerParameters, DMDParameters, CAM]:
+    """Setup everything needed to start an acquisition.
+
+    Sets all parameters for DMD, spectrometer, DMD patterns and DMD timings.
+    Must be called before every acquisition.
+
+    Args:
+        spectrometer (Avantes):
+            Connected spectrometer (Avantes object).
+        DMD (ALP4):
+            Connected DMD.
+        camPar (CAM):
+            Metadata object of the IDS monochrome camera.
+        DMD_initial_memory (int):
+            Initial memory available in DMD after initialization.
+        metadata (MetaData):
+            Metadata concerning the experiment, paths, file inputs and file
+            outputs. Must be created and filled up by the user.
+        acquisition_params (AcquisitionParameters):
+            Acquisition related metadata object. User must partially fill up
+            with pattern_compression, pattern_dimension_x, pattern_dimension_y,
+            zoom, x and y offset of patterns displayed on the DMD.
+        start_pixel (int):
+            Initial pixel data received from spectrometer. Default is 0.
+        stop_pixel (int, optional):
+            Last pixel data received from spectrometer. Default is None, in
+            which case it is determined from the amount of available pixels in
+            the spectrometer.
+        integration_time (float):
+            Spectrometer exposure time during one scan in milliseconds. Default
+            is 1 ms.
+        integration_delay (int):
+            Parameter used to start the integration time not immediately after
+            the measurement request (or on an external hardware trigger), but
+            after a specified delay. Unit is based on internal FPGA clock cycle.
+            Default is 0 us.
+        DMD_output_synch_pulse_delay (int):
+            Time in microseconds between start of the frame synch output pulse
+            and the start of the pattern display (in master mode). Default is
+            0 us.
+        add_illumination_time (int):
+            Extra time in microseconds to account for the spectrometer's
+            "dead time". Default is 356 us.
+        dark_phase_time (int):
+            Time in microseconds taken by the DMD mirrors to completely tilt.
+            Minimum time for XGA type DMD is 44 us. Default is 44 us.
+        DMD_trigger_in_delay (int):
+            Time in microseconds between the incoming trigger edge and the start
+            of the pattern display on DMD (slave mode). Default is 0 us.
+
+    Raises:
+        ValueError: Sum of dark phase and additional illumination time is
+            lower than 350 us.
+
+    Returns:
+        Tuple[SpectrometerParameters, DMDParameters, CAM]: Tuple containing
+        DMD, spectrometer and camera related metadata.
+            spectrometer_params (SpectrometerParameters):
+                Spectrometer metadata object with spectrometer configurations.
+            DMD_params (DMDParameters):
+                DMD metadata object with DMD configurations.
+            camPar (CAM):
+                Updated camera metadata object.
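Example: the gate-period handling in the function body amounts to clamping the camera value to the DMD's valid synch-out range. A minimal sketch (`clamp_gate_period` is a hypothetical helper, not part of the package):

```python
def clamp_gate_period(gate_period, low=1, high=16):
    """Clamp the camera gate period to the DMD synch-out range [1, 16]."""
    return max(low, min(high, gate_period))

assert clamp_gate_period(20) == 16  # too many camera frames per trigger period
assert clamp_gate_period(0) == 1
assert clamp_gate_period(7) == 7
```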
+    """
+
+    path = Path(metadata.output_directory)
+    if not path.exists():
+        path.mkdir()
+
+    if dark_phase_time + add_illumination_time < 350:
+        raise ValueError(f'Sum of dark phase and additional illumination time '
+                         f'is {dark_phase_time + add_illumination_time}.'
+                         f' Must be greater than 350 µs.')
+
+    elif dark_phase_time + add_illumination_time < 400:
+        warnings.warn(f'Sum of dark phase and additional illumination time '
+                      f'is {dark_phase_time + add_illumination_time}.'
+                      f' It is recommended to choose at least 400 µs.')
+
+    synch_pulse_width, illumination_time, picture_time = calculate_timings(
+        integration_time,
+        integration_delay,
+        add_illumination_time,
+        DMD_output_synch_pulse_delay,
+        dark_phase_time)
+
+    spectrometer_params, wavelenghts = _setup_spectrometer(
+        spectrometer,
+        integration_time,
+        integration_delay,
+        start_pixel,
+        stop_pixel)
+
+    if camPar.gate_period > 16:
+        gate_period = 16
+        print('Warning: gate period is ' + str(camPar.gate_period) + ', above the maximum of 16.')
+        print('Try to increase the FPS of the camera or the integration time of the spectrometer.')
+        print('Check that the pixel clock is set to 474 MHz.')
+        print('Otherwise some frames will be lost.')
+    elif camPar.gate_period < 1:
+        print('Warning: gate period is ' + str(camPar.gate_period) + ', below the minimum of 1.')
+        gate_period = 1
+    else:
+        gate_period = camPar.gate_period
+
+    camPar.gate_period = gate_period
+    Gate = tAlpDynSynchOutGate()
+    Gate.byref[0] = ct.c_ubyte(gate_period)  # Period [1 to 16] (a multiple of the trigger period sent to the spectrometer)
+    Gate.byref[1] = ct.c_ubyte(1)  # Polarity => 0: active pulse is low, 1: high
+    Gate.byref[2] = ct.c_ubyte(1)  # Gate1 sends a TTL pulse
+    Gate.byref[3] = ct.c_ubyte(0)  # Gate2 does not send a TTL pulse
+    Gate.byref[4] = ct.c_ubyte(0)  # Gate3 does not send a TTL pulse
+    DMD.DevControlEx(ALP_DEV_DYN_SYNCH_OUT1_GATE, Gate)
+    camPar.int_time_spect = integration_time
+
+    acquisition_params.wavelengths = 
np.asarray(wavelenghts, dtype=np.float64)
+
+    DMD_params = setup_DMD(DMD, add_illumination_time, DMD_initial_memory)
+
+    setup_patterns(DMD=DMD, metadata=metadata, DMD_params=DMD_params,
+                   acquisition_params=acquisition_params)
+
+    setup_timings(DMD, DMD_params, picture_time, illumination_time,
+                  DMD_output_synch_pulse_delay, synch_pulse_width,
+                  DMD_trigger_in_delay, add_illumination_time)
+
+    return spectrometer_params, DMD_params, camPar
+
+
+def _calculate_elapsed_time(start_measurement_time: int,
+                            measurement_time: np.ndarray,
+                            timestamps: List[int],
+                            ) -> Tuple[np.ndarray, np.ndarray]:
+    """Calculate acquisition timings.
+
+    Calculates the elapsed time between each callback measurement, taking into
+    account the moment when the DMD started running a sequence.
+    Calculates the elapsed time between each spectrum acquired by the
+    spectrometer based on the spectrometer's internal clock.
+
+    Args:
+        start_measurement_time (int):
+            Time in nanoseconds when the DMD is set to start running a sequence.
+        measurement_time (np.ndarray):
+            1D array with `int` type timings in nanoseconds when each callback
+            starts.
+        timestamps (List[int]):
+            1D array with measurement timestamps from the spectrometer. Each
+            timestamp counts the ticks at which the last pixel of the spectrum
+            was received by the spectrometer microcontroller. Ticks are in
+            10 microsecond units since the spectrometer started.
+
+    Returns:
+        Tuple[np.ndarray, np.ndarray]: Tuple with measurement timings.
+            measurement_time (np.ndarray):
+                1D array with `float` type elapsed times between each callback.
+                Units in milliseconds.
+            timestamps (np.ndarray):
+                1D array with `float` type elapsed time between each measurement
+                made by the spectrometer based on its internal clock.
+                Units in milliseconds.
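Example: a worked sketch of the two unit conversions (nanosecond callback clock vs. 10 µs spectrometer ticks), with made-up values:

```python
import numpy as np

start_measurement_time = 0                             # ns
measurement_time = np.array([2_000_000, 4_500_000])    # ns, one entry per callback
timestamps = [100, 485, 870]                           # 10 µs ticks from the spectrometer

# Prepend the start time, then convert the differences from ns to ms
elapsed_cb = np.diff(np.concatenate(
    (start_measurement_time, measurement_time), axis=None)) / 1e+6
# Convert tick differences from 10 µs units to ms
elapsed_spect = np.diff(timestamps) / 100

assert list(elapsed_cb) == [2.0, 2.5]
assert list(elapsed_spect) == [3.85, 3.85]
```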
+ """ + + measurement_time = np.concatenate( + (start_measurement_time,measurement_time),axis=None) + + measurement_time = np.diff(measurement_time)/1e+6 # In ms + timestamps = np.diff(timestamps)/100 # In ms + + return measurement_time, timestamps + + +def _save_acquisition(metadata: MetaData, + DMD_params: DMDParameters, + spectrometer_params: SpectrometerParameters, + acquisition_parameters: AcquisitionParameters, + spectral_data: np.ndarray) -> None: + """Save all acquisition data and metadata. + + Args: + metadata (MetaData): + Metadata concerning the experiment, paths, file inputs and file + outputs. + DMD_params (DMDParameters): + DMD metadata object with DMD configurations. + spectrometer_params (SpectrometerParameters): + Spectrometer metadata object with spectrometer configurations. + acquisition_parameters (AcquisitionParameters): + Acquisition related metadata object. + spectral_data (ndarray): + 1D array with `float` type spectrometer measurements. Array size + depends on start and stop pixels previously set to the spectrometer. + """ + + # Saving collected data and timings + path = Path(metadata.output_directory) + path = path / f'{metadata.experiment_name}_spectraldata.npz' + np.savez_compressed(path, spectral_data=spectral_data) + + # 'save_metadata' function is commented because the 'save_metadata_2arms' function is executed after the 'acquire' function in the "main_seq_2arms.py" prog + # # Saving metadata + # save_metadata(metadata, + # DMD_params, + # spectrometer_params, + # acquisition_parameters) + +def _save_acquisition_2arms(metadata: MetaData, + DMD_params: DMDParameters, + spectrometer_params: SpectrometerParameters, + camPar: CAM, + acquisition_parameters: AcquisitionParameters, + spectral_data: np.ndarray) -> None: + """Save all acquisition data and metadata. + + Args: + metadata (MetaData): + Metadata concerning the experiment, paths, file inputs and file + outputs. 
+        DMD_params (DMDParameters):
+            DMD metadata object with DMD configurations.
+        spectrometer_params (SpectrometerParameters):
+            Spectrometer metadata object with spectrometer configurations.
+        camPar (CAM):
+            Metadata object of the IDS monochrome camera.
+        acquisition_parameters (AcquisitionParameters):
+            Acquisition related metadata object.
+        spectral_data (ndarray):
+            1D array with `float` type spectrometer measurements. Array size
+            depends on start and stop pixels previously set to the spectrometer.
+    """
+
+    # Saving collected data and timings
+    path = Path(metadata.output_directory)
+    path = path / f'{metadata.experiment_name}_spectraldata.npz'
+    np.savez_compressed(path, spectral_data=spectral_data)
+
+    # Saving metadata
+    save_metadata_2arms(metadata,
+                        DMD_params,
+                        spectrometer_params,
+                        camPar,
+                        acquisition_parameters)
+
+
+def _acquire_raw(ava: Avantes,
+                 DMD: ALP4,
+                 spectrometer_params: SpectrometerParameters,
+                 DMD_params: DMDParameters,
+                 acquisition_params: AcquisitionParameters,
+                 loop: bool = False
+                 ) -> NamedTuple:
+    """Raw data acquisition.
+
+    Sets up a callback function to receive messages from the spectrometer
+    whenever a measurement is ready to be read, then reads each measurement
+    via that callback.
+
+    Args:
+        ava (Avantes):
+            Connected spectrometer (Avantes object).
+        DMD (ALP4):
+            Connected DMD.
+        spectrometer_params (SpectrometerParameters):
+            Spectrometer metadata object with spectrometer configurations.
+        DMD_params (DMDParameters):
+            DMD metadata object with DMD configurations.
+        acquisition_params (AcquisitionParameters):
+            Acquisition related metadata object.
+        loop (bool):
+            If True, project the pattern continuously (see the AlpProjStartCont
+            function). If False (default), project the pattern sequence once
+            (see the AlpProjStart function).
+
+    Returns:
+        NamedTuple: NamedTuple containing spectral data and measurement timings.
+            spectral_data (ndarray):
+                2D array of `float` of size (pattern_amount x pixel_amount)
+                containing measurements received from the spectrometer for each
+                pattern of a sequence.
+            spectrum_index (int):
+                Index of the last acquired spectrum.
+            timestamps (np.ndarray):
+                1D array with `float` type elapsed time between each measurement
+                made by the spectrometer based on its internal clock.
+                Units in milliseconds.
+            measurement_time (np.ndarray):
+                1D array with `float` type elapsed times between each callback.
+                Units in milliseconds.
+            start_measurement_time (float):
+                Time when acquisition started.
+            saturation_detected (bool):
+                Boolean indicating if saturation was detected during
+                acquisition.
+    """
+    def register_callback(measurement_time, timestamps,
+                          spectral_data, ava):
+
+        def measurement_callback(handle, info):  # To reconstruct during the callback, pass a function as a parameter and call it here.
+            nonlocal spectrum_index
+            nonlocal saturation_detected
+
+            measurement_time[spectrum_index] = perf_counter_ns()
+
+            if info.contents.value >= 0:
+                timestamp, spectrum = ava.get_data()
+                spectral_data[spectrum_index, :] = (
+                    np.ctypeslib.as_array(spectrum[0:pixel_amount]))
+
+                if np.any(ava.get_saturated_pixels() > 0):
+                    saturation_detected = True
+
+                timestamps[spectrum_index] = np.ctypeslib.as_array(timestamp)
+
+            else:  # Set values to zero if an error occurred
+                spectral_data[spectrum_index, :] = 0
+                timestamps[spectrum_index] = 0
+
+            spectrum_index += 1
+
+        return measurement_callback
+
+
+    pixel_amount = (spectrometer_params.stop_pixel -
+                    spectrometer_params.start_pixel + 1)
+
+    measurement_time = np.zeros((acquisition_params.pattern_amount))
+    timestamps = np.zeros((acquisition_params.pattern_amount), dtype=np.uint32)
+    spectral_data = np.zeros(
+        (acquisition_params.pattern_amount, pixel_amount), dtype=np.float64)
+
+    # Boolean to indicate if saturation was detected during acquisition
+    saturation_detected = False
+
+    spectrum_index = 0  # Accessed as 
nonlocal variable inside the callback + + if loop == False: + #spectro.register_callback(-2,acquisition_params.pattern_amount,pixel_amount) + callback = register_callback(measurement_time, timestamps, + spectral_data, ava) + measurement_callback = MeasureCallback(callback) + ava.measure_callback(-2, measurement_callback) + else: + ava.measure(-1) + + + DMD.Run(loop=loop) # if loop=False : Run the whole sequence only once, if loop=True : Run continuously one pattern + start_measurement_time = perf_counter_ns() + + if loop == False: + while(True): + if(spectrum_index >= acquisition_params.pattern_amount) and loop == False: + break + elif((perf_counter_ns() - start_measurement_time) / 1e+6 > + (2 * acquisition_params.pattern_amount * + DMD_params.picture_time_us / 1e+3)) and loop == False: + print('Stopping measurement. One of the equipments may be blocked ' + 'or disconnected.') + break + else: + sleep(acquisition_params.pattern_amount * + DMD_params.picture_time_us / 1e+6 / 10) + DMD.Halt() + else: + sleep(0.1) + + timestamp, spectrum = ava.get_data() + spectral_data_1 = (np.ctypeslib.as_array(spectrum[0:pixel_amount])) + + get_ipython().run_line_magic('matplotlib', 'qt') + plt.ion() # create GUI + figure, ax = plt.subplots(figsize=(10, 8)) + line1, = ax.plot(acquisition_params.wavelengths, spectral_data_1) + + plt.title("Tune the Spectrometer", fontsize=20) + plt.xlabel("Lambda (nm)") + plt.ylabel("counts") + plt.xticks(fontsize=14) + plt.yticks(fontsize=14) + plt.grid() + printed = False + while(True): + try: + timestamp, spectrum = ava.get_data() + spectral_data_1 = (np.ctypeslib.as_array(spectrum[0:pixel_amount])) + + line1.set_xdata(acquisition_params.wavelengths) + line1.set_ydata(spectral_data_1) # updating data values + + figure.canvas.draw() # drawing updated values + figure.canvas.flush_events() # flush prior plot + + if not printed: + print('Press "Ctrl + c" to exit') + if np.amax(spectral_data_1) >= 65535: + print('!!!!!!!!!! 
Saturation detected in the spectro !!!!!!!!!!')
+                    printed = True
+
+            except KeyboardInterrupt:
+                if (DMD.Seqs):
+                    DMD.Halt()
+                    DMD.FreeSeq()
+                plt.close()
+                get_ipython().run_line_magic('matplotlib', 'inline')
+                break
+
+    ava.stop_measure()
+
+    AcquisitionResult = namedtuple('AcquisitionResult', [
+        'spectral_data',
+        'spectrum_index',
+        'timestamps',
+        'measurement_time',
+        'start_measurement_time',
+        'saturation_detected'])
+
+    return AcquisitionResult(spectral_data,
+                             spectrum_index,
+                             timestamps,
+                             measurement_time,
+                             start_measurement_time,
+                             saturation_detected)
+
+
+def acquire(ava: Avantes,
+            DMD: ALP4,
+            metadata: MetaData,
+            spectrometer_params: SpectrometerParameters,
+            DMD_params: DMDParameters,
+            acquisition_params: AcquisitionParameters,
+            repetitions: int = 1,
+            verbose: bool = False,
+            reconstruct: bool = False,
+            reconstruction_params: ReconstructionParameters = None
+            ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
+    """Perform a complete acquisition.
+
+    Performs single or multiple acquisitions using the same setup
+    configurations previously chosen, then saves all acquisition related data
+    and metadata.
+
+    Args:
+        ava (Avantes):
+            Connected spectrometer (Avantes object).
+        DMD (ALP4):
+            Connected DMD.
+        metadata (MetaData):
+            Metadata concerning the experiment, paths, file inputs and file
+            outputs. Must be created and filled up by the user.
+        spectrometer_params (SpectrometerParameters):
+            Spectrometer metadata object with spectrometer configurations.
+        DMD_params (DMDParameters):
+            DMD metadata object with DMD configurations.
+        acquisition_params (AcquisitionParameters):
+            Acquisition related metadata object.
+        repetitions (int):
+            Number of times the acquisition will be repeated with the same
+            configurations. Default is 1, a single acquisition.
+        verbose (bool):
+            Chooses if data concerning each acquisition should be printed to
+            the user. If False, only overall data regarding all repetitions is
+            printed. Default is False.
+        reconstruct (bool):
+            If True, will perform reconstruction alongside acquisition using
+            multiprocessing.
+        reconstruction_params (ReconstructionParameters):
+            Object containing parameters of the neural network to be loaded for
+            reconstruction.
+
+    Returns:
+        Tuple[ndarray, ndarray, ndarray]: Tuple containing spectral data and
+        measurement timings.
+            spectral_data (ndarray):
+                2D array of `float` of size (pattern_amount x pixel_amount)
+                containing measurements received from the spectrometer for each
+                pattern of a sequence.
+            timestamps (np.ndarray):
+                1D array with `float` type elapsed time between each measurement
+                made by the spectrometer based on its internal clock.
+                Units in milliseconds.
+            measurement_time (np.ndarray):
+                1D array with `float` type elapsed times between each callback.
+                Units in milliseconds.
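Example: how the per-repetition blocks are laid out in the output arrays. Spectral data keeps `pattern_amount` rows per repetition, while the differenced timestamps keep one fewer entry (illustrative values):

```python
pattern_amount, repetitions = 4, 3

# Row ranges written into spectral_data for each repetition
data_blocks = [(r * pattern_amount, (r + 1) * pattern_amount)
               for r in range(repetitions)]
assert data_blocks == [(0, 4), (4, 8), (8, 12)]

# np.diff shortens each timestamp series by one entry per repetition
ts_blocks = [(r * (pattern_amount - 1), (r + 1) * (pattern_amount - 1))
             for r in range(repetitions)]
assert ts_blocks == [(0, 3), (3, 6), (6, 9)]
```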
+    """
+
+    loop = False  # if True, project a single pattern continuously to tune the spectrometer
+
+    if reconstruct:
+        print('Creating reconstruction processes')
+
+        # Creating a Queue for sending spectral data to the reconstruction process
+        queue_to_recon = Queue()
+
+        # Creating a Queue for sending reconstructed images to plot
+        queue_reconstructed = Queue()
+
+        sleep_time = (acquisition_params.pattern_amount *
+                      DMD_params.picture_time_us/1e+6)
+
+        # Creating reconstruction process
+        recon_process = Process(target=reconstruct_process,
+                                args=(reconstruction_params.model,
+                                      reconstruction_params.device,
+                                      queue_to_recon,
+                                      queue_reconstructed,
+                                      reconstruction_params.batches,
+                                      reconstruction_params.noise,
+                                      sleep_time))
+
+        # Creating plot process
+        plot_process = Process(target=plot_recon,
+                               args=(queue_reconstructed, sleep_time))
+
+        # Starting processes
+        recon_process.start()
+        plot_process.start()
+
+    pixel_amount = (spectrometer_params.stop_pixel -
+                    spectrometer_params.start_pixel + 1)
+    measurement_time = np.zeros(
+        (acquisition_params.pattern_amount * repetitions))
+    timestamps = np.zeros(
+        ((acquisition_params.pattern_amount - 1) * repetitions),
+        dtype=np.float64)
+    spectral_data = np.zeros(
+        (acquisition_params.pattern_amount * repetitions, pixel_amount),
+        dtype=np.float64)
+
+    acquisition_params.acquired_spectra = 0
+    print()
+
+    for repetition in range(repetitions):
+        if verbose:
+            print(f"Acquisition {repetition}")
+
+        AcquisitionResults = _acquire_raw(ava, DMD, spectrometer_params,
+                                          DMD_params, acquisition_params, loop)
+
+        (data, spectrum_index, timestamp, time,
+         start_measurement_time, saturation_detected) = AcquisitionResults
+
+        print('Acquisition number : ' + str(repetition) + ' finished')
+
+        if reconstruct:
+            queue_to_recon.put(data.T)
+            print('Data sent')
+
+        time, timestamp = _calculate_elapsed_time(
+            start_measurement_time, time, timestamp)
+
+        begin = repetition * acquisition_params.pattern_amount
+        end = 
(repetition + 1) * acquisition_params.pattern_amount + spectral_data[begin:end] = data + measurement_time[begin:end] = time + + begin = repetition * (acquisition_params.pattern_amount - 1) + end = (repetition + 1) * (acquisition_params.pattern_amount - 1) + timestamps[begin:end] = timestamp + + acquisition_params.acquired_spectra += spectrum_index + + acquisition_params.saturation_detected = saturation_detected + + if saturation_detected is True: + print('!!!!!!!!!! Saturation detected in the spectro !!!!!!!!!!') + # Print data for each repetition + if (verbose): + print('Spectra acquired: {}'.format(spectrum_index)) + print('Mean callback acquisition time: {} ms'.format( + np.mean(time))) + print('Total callback acquisition time: {} s'.format( + np.sum(time)/1000)) + print('Mean spectrometer acquisition time: {} ms'.format( + np.mean(timestamp))) + print('Total spectrometer acquisition time: {} s'.format( + np.sum(timestamp)/1000)) + + # Print shape of acquisition matrix for one repetition + print(f'Partial acquisition matrix dimensions:' + f'{data.shape}') + print() + + acquisition_params.update_timings(timestamps, measurement_time) + # Real time between each spectrum acquisition by the spectrometer + print('Complete acquisition done') + print('Spectra acquired: {}'.format(acquisition_params.acquired_spectra)) + print('Total acquisition time: {0:.2f} s'.format(acquisition_params.total_spectrometer_acquisition_time_s)) + + _save_acquisition(metadata, DMD_params, spectrometer_params, + acquisition_params, spectral_data) + + # Joining processes and closing queues + if reconstruct == True: + queue_to_recon.put('kill') # Sends a message to stop reconstruction + recon_process.join() + queue_to_recon.close() + plot_process.join() + queue_reconstructed.close() + + maxi = np.amax(spectral_data[0,:]) + print('------------------------------------------------') + print('maximum in the spectrum = ' + str(maxi)) + print('------------------------------------------------') + if 
maxi >= 65535: + print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!') + print('!!!!! warning, spectrum saturation !!!!!!!!') + print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!') + + return spectral_data + +def _acquire_raw_2arms(ava: Avantes, + DMD: ALP4, + camPar: CAM, + spectrometer_params: SpectrometerParameters, + DMD_params: DMDParameters, + acquisition_params: AcquisitionParameters, + metadata, + repetition, + repetitions + ) -> NamedTuple: + """Raw data acquisition. + + Sets up a callback function to receive messages from the spectrometer + whenever a measurement is ready to be read. Reads a measurement via a + callback. + + Args: + ava (Avantes): + Connected spectrometer (Avantes object). + DMD (ALP4): + Connected DMD. + camPar (CAM): + Metadata object of the IDS monochrome camera. + spectrometer_params (SpectrometerParameters): + Spectrometer metadata object with spectrometer configurations. + DMD_params (DMDParameters): + DMD metadata object with DMD configurations. + acquisition_params (AcquisitionParameters): + Acquisition related metadata object. + + Returns: + NamedTuple: NamedTuple containing spectral data and measurement timings. + spectral_data (ndarray): + 2D array of `float` of size (pattern_amount x pixel_amount) + containing measurements received from the spectrometer for each + pattern of a sequence. + spectrum_index (int): + Index of the last acquired spectrum. + timestamps (np.ndarray): + 1D array of `float` with the elapsed time between each measurement + made by the spectrometer, based on its internal clock. + Units in milliseconds. + measurement_time (np.ndarray): + 1D array of `float` with the elapsed time between each callback. + Units in milliseconds. + start_measurement_time (float): + Time when acquisition started. + saturation_detected (bool): + Boolean indicating if saturation was detected during acquisition. 
+ """ + # def for spectrometer acquisition + def register_callback(measurement_time, timestamps, + spectral_data, ava): + + def measurement_callback(handle, info): # If we want to reconstruct during callback; can use it in here. Add function as parameter. + nonlocal spectrum_index + nonlocal saturation_detected + + measurement_time[spectrum_index] = perf_counter_ns() + + if info.contents.value >= 0: + timestamp,spectrum = ava.get_data() + spectral_data[spectrum_index,:] = ( + np.ctypeslib.as_array(spectrum[0:pixel_amount])) + + if np.any(ava.get_saturated_pixels() > 0): + saturation_detected = True + + timestamps[spectrum_index] = np.ctypeslib.as_array(timestamp) + + else: # Set values to zero if an error occured + spectral_data[spectrum_index,:] = 0 + timestamps[spectrum_index] = 0 + + spectrum_index += 1 + + return measurement_callback + + # def for camera acquisition + if repetition == 0: + camPar = stopCapt_DeallocMem(camPar) + camPar.trigger_mode = 'hard'#'soft'# + imageQueue(camPar) + camPar = prepareCam(camPar, metadata) + camPar.timeout = 1000 # time out in ms for the "is_WaitForNextImage" function + start_chrono = time.time() + x = threading.Thread(target = runCam_thread, args=(camPar, start_chrono)) + x.start() + + pixel_amount = (spectrometer_params.stop_pixel - + spectrometer_params.start_pixel + 1) + + measurement_time = np.zeros((acquisition_params.pattern_amount)) + timestamps = np.zeros((acquisition_params.pattern_amount),dtype=np.uint32) + spectral_data = np.zeros( + (acquisition_params.pattern_amount,pixel_amount),dtype=np.float64) + + # Boolean to indicate if saturation was detected during acquisition + saturation_detected = False + + spectrum_index = 0 # Accessed as nonlocal variable inside the callback + + #spectro.register_callback(-2,acquisition_params.pattern_amount,pixel_amount) + callback = register_callback(measurement_time, timestamps, + spectral_data, ava) + measurement_callback = MeasureCallback(callback) + ava.measure_callback(-2, 
measurement_callback) + + # time.sleep(0.5) + # Run the whole sequence only once + DMD.Run(loop=False) + start_measurement_time = perf_counter_ns() + #sleep(13) + + while True: + if spectrum_index >= acquisition_params.pattern_amount: + break + elif ((perf_counter_ns() - start_measurement_time) / 1e+6 > + (2 * acquisition_params.pattern_amount * + DMD_params.picture_time_us / 1e+3)): + print('Stopping measurement. One of the instruments may be blocked ' + 'or disconnected.') + break + else: + time.sleep(acquisition_params.pattern_amount * + DMD_params.picture_time_us / 1e+6 / 10) + + ava.stop_measure() + DMD.Halt() + camPar.Exit = 2 + if repetition == repetitions-1: + camPar = stopCam(camPar) + AcquisitionResult = namedtuple('AcquisitionResult', [ + 'spectral_data', + 'spectrum_index', + 'timestamps', + 'measurement_time', + 'start_measurement_time', + 'saturation_detected']) + + return AcquisitionResult(spectral_data, + spectrum_index, + timestamps, + measurement_time, + start_measurement_time, + saturation_detected) + + +def acquire_2arms(ava: Avantes, + DMD: ALP4, + camPar: CAM, + metadata: MetaData, + spectrometer_params: SpectrometerParameters, + DMD_params: DMDParameters, + acquisition_params: AcquisitionParameters, + repetitions: int = 1, + verbose: bool = False, + reconstruct: bool = False, + reconstruction_params: ReconstructionParameters = None + ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]: + """Perform a complete acquisition. + + Performs single or multiple acquisitions using the same setup configurations + previously chosen. + Finally, saves all acquisition-related data and metadata. + + Args: + ava (Avantes): + Connected spectrometer (Avantes object). + DMD (ALP4): + Connected DMD. + camPar (CAM): + Metadata object of the IDS monochrome camera. + metadata (MetaData): + Metadata concerning the experiment, paths, file inputs and file + outputs. 
Must be created and filled up by the user. + spectrometer_params (SpectrometerParameters): + Spectrometer metadata object with spectrometer configurations. + DMD_params (DMDParameters): + DMD metadata object with DMD configurations. + acquisition_params (AcquisitionParameters): + Acquisition related metadata object. + repetitions (int): + Number of times the acquisition will be repeated with the same + configurations. Default is 1, a single acquisition. + verbose (bool): + Chooses if data concerning each acquisition should be printed to + the user. If False, only overall data regarding all repetitions is + printed. Default is False. + reconstruct (bool): + If True, will perform reconstruction alongside acquisition using + multiprocessing. + reconstruction_params (ReconstructionParameters): + Object containing parameters of the neural network to be loaded for + reconstruction. + + Returns: + Tuple[ndarray, ndarray, ndarray]: Tuple containing spectral data and + measurement timings. + spectral_data (ndarray): + 2D array of `float` of size (pattern_amount x pixel_amount) + containing measurements received from the spectrometer for each + pattern of a sequence. + timestamps (np.ndarray): + 1D array of `float` with the elapsed time between each measurement + made by the spectrometer, based on its internal clock. + Units in milliseconds. + measurement_time (np.ndarray): + 1D array of `float` with the elapsed time between each callback. + Units in milliseconds. 
+ """ + + if reconstruct == True: + print('Creating reconstruction processes') + + # Creating a Queue for sending spectral data to reconstruction process + queue_to_recon = Queue() + + # Creating a Queue for sending reconstructed images to plot + queue_reconstructed = Queue() + + sleep_time = (acquisition_params.pattern_amount * + DMD_params.picture_time_us/1e+6) + + # Creating reconstruction process + recon_process = Process(target=reconstruct_process, + args=(reconstruction_params.model, + reconstruction_params.device, + queue_to_recon, + queue_reconstructed, + reconstruction_params.batches, + reconstruction_params.noise, + sleep_time)) + + # Creating plot process + plot_process = Process(target=plot_recon, + args=(queue_reconstructed, sleep_time)) + + # Starting processes + recon_process.start() + plot_process.start() + + pixel_amount = (spectrometer_params.stop_pixel - + spectrometer_params.start_pixel + 1) + measurement_time = np.zeros( + (acquisition_params.pattern_amount * repetitions)) + timestamps = np.zeros( + ((acquisition_params.pattern_amount - 1) * repetitions), + dtype=np.float64) + spectral_data = np.zeros( + (acquisition_params.pattern_amount * repetitions,pixel_amount), + dtype=np.float64) + + acquisition_params.acquired_spectra = 0 + print() + + for repetition in range(repetitions): + if verbose: + print(f"Acquisition {repetition}") + + AcquisitionResults = _acquire_raw_2arms(ava, DMD, camPar, spectrometer_params, + DMD_params, acquisition_params, metadata, repetition, repetitions) + + (data, spectrum_index, timestamp, time, + start_measurement_time, saturation_detected) = AcquisitionResults + + print('Acquisition number : ' + str(repetition) + ' finished') + + if reconstruct == True: + queue_to_recon.put(data.T) + print('Data sent') + + time, timestamp = _calculate_elapsed_time( + start_measurement_time, time, timestamp) + + begin = repetition * acquisition_params.pattern_amount + end = (repetition + 1) * acquisition_params.pattern_amount + 
spectral_data[begin:end] = data + measurement_time[begin:end] = time + + begin = repetition * (acquisition_params.pattern_amount - 1) + end = (repetition + 1) * (acquisition_params.pattern_amount - 1) + timestamps[begin:end] = timestamp + + acquisition_params.acquired_spectra += spectrum_index + + acquisition_params.saturation_detected = saturation_detected + + if saturation_detected is True: + print('!!!!!!!!!! Saturation detected in the spectro !!!!!!!!!!') + # Print data for each repetition + if (verbose): + print('Spectra acquired: {}'.format(spectrum_index)) + print('Mean callback acquisition time: {} ms'.format( + np.mean(time))) + print('Total callback acquisition time: {} s'.format( + np.sum(time)/1000)) + print('Mean spectrometer acquisition time: {} ms'.format( + np.mean(timestamp))) + print('Total spectrometer acquisition time: {} s'.format( + np.sum(timestamp)/1000)) + + # Print shape of acquisition matrix for one repetition + print(f'Partial acquisition matrix dimensions:' + f'{data.shape}') + print() + + acquisition_params.update_timings(timestamps, measurement_time) + # Real time between each spectrum acquisition by the spectrometer + print('Complete acquisition done') + print('Spectra acquired: {}'.format(acquisition_params.acquired_spectra)) + print('Total acquisition time: {0:.2f} s'.format(acquisition_params.total_spectrometer_acquisition_time_s)) + + # delete acquisition with black pattern (white for the camera) + if camPar.insert_patterns == 1: + black_pattern_index = np.where(acquisition_params.patterns_wp == -1) + # print('index of white patterns :') + # print(black_pattern_index[0:38]) + if acquisition_params.patterns_wp.shape == acquisition_params.patterns.shape: + acquisition_params.patterns = np.delete(acquisition_params.patterns, black_pattern_index) + spectral_data = np.delete(spectral_data, black_pattern_index, axis = 0) + acquisition_params.timestamps = np.delete(acquisition_params.timestamps, black_pattern_index[1:]) + 
acquisition_params.measurement_time = np.delete(acquisition_params.measurement_time, black_pattern_index) + acquisition_params.acquired_spectra = len(acquisition_params.patterns) + + _save_acquisition_2arms(metadata, DMD_params, spectrometer_params, camPar, + acquisition_params, spectral_data) + + # Joining processes and closing queues + if reconstruct: + queue_to_recon.put('kill') # Sends a message to stop reconstruction + recon_process.join() + queue_to_recon.close() + plot_process.join() + queue_reconstructed.close() + + maxi = np.amax(spectral_data[0,:]) + print('------------------------------------------------') + print('maximum in the spectrum = ' + str(maxi)) + print('------------------------------------------------') + if maxi >= 65535: + print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!') + print('!!!!! warning, spectrum saturation !!!!!!!!') + print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!') + + return spectral_data + + +def setup_tuneSpectro(spectrometer, + DMD, + DMD_initial_memory, + pattern_to_display, + ti : float = 1, + zoom : int = 1, + xw_offset: int = 128, + yh_offset: int = 0, + mask_index : np.array = []): + """ Set up the hardware to tune the spectrometer live. The goal is to find + the right integration time of the spectrometer: the noise floor is around + 700 counts and saturation occurs at 2**16 - 1 = 65535. + + Args: + spectrometer (Avantes): + Connected spectrometer (Avantes object). + DMD (ALP4): + Connected DMD. + DMD_initial_memory (int): + Initial memory available in DMD after initialization. + pattern_to_display (string): + Pattern displayed on the DMD to tune the spectrometer. 
Default is + the white pattern. + ti (float): + The integration time of the spectrometer during one scan in milliseconds. + Default is 1 ms. + zoom (int): + Digital zoom on the DMD. Default is 1. + xw_offset (int): + Offset of the pattern on the DMD, for zoom > 1, in the width (x) direction. + yh_offset (int): + Offset of the pattern on the DMD, for zoom > 1, in the height (y) direction. + mask_index (Union[np.ndarray, str], optional): + Array of `int` type corresponding to the indices of the mask vector where + the value is equal to 1. + + Returns: + metadata (MetaData): + Metadata concerning the experiment, paths, file inputs and file + outputs. Must be created and filled up by the user. + spectrometer_params (SpectrometerParameters): + Spectrometer metadata object with spectrometer configurations. + DMD_params (DMDParameters): + DMD metadata object with DMD configurations. + acquisition_parameters (AcquisitionParameters): + Acquisition related metadata object. + """ + + data_folder_name = 'Tune' + data_name = 'test' + # all_path = func_path(data_folder_name, data_name) + + scan_mode = 'Walsh' + Np = 16 + source = '' + object_name = '' + + metadata = MetaData( + output_directory = '',#all_path.subfolder_path, + pattern_order_source = 'C:/openspyrit/spas/stats/pattern_order_' + scan_mode + '_' + str(Np) + 'x' + str(Np) + '.npz', + pattern_source = 'C:/openspyrit/spas/Patterns/' + scan_mode + '_' + str(Np) + 'x' + str(Np), + pattern_prefix = scan_mode + '_' + str(Np) + 'x' + str(Np), + experiment_name = data_name, + light_source = source, + object = object_name, + filter = '', + description = '' + ) + + acquisition_parameters = AcquisitionParameters( + pattern_compression = 1, + pattern_dimension_x = 16, + pattern_dimension_y = 16, + zoom = zoom, + xw_offset = xw_offset, + yh_offset = yh_offset, + mask_index = [] ) + + acquisition_parameters.pattern_amount = 1 + + spectrometer_params, DMD_params = setup( + spectrometer = spectrometer, + DMD = DMD, + DMD_initial_memory = DMD_initial_memory, + metadata = metadata, + acquisition_params = acquisition_parameters, + 
pattern_to_display = pattern_to_display, + integration_time = ti, + loop = True ) + + return metadata, spectrometer_params, DMD_params, acquisition_parameters + + +def displaySpectro(ava: Avantes, + DMD: ALP4, + metadata: MetaData, + spectrometer_params: SpectrometerParameters, + DMD_params: DMDParameters, + acquisition_params: AcquisitionParameters, + reconstruction_params: ReconstructionParameters = None + ): + """Perform a continuous acquisition on the spectrometer for optical tuning. + + Send a pattern to the DMD to project light onto the spectrometer. The goal is + to monitor the amplitude of the spectrum so that the illumination can be + tuned to avoid saturation (amplitude >= 65535) and noisy signals + (amplitude <= 650). + + Args: + ava (Avantes): + Connected spectrometer (Avantes object). + DMD (ALP4): + Connected DMD. + metadata (MetaData): + Metadata concerning the experiment, paths, file inputs and file + outputs. Must be created and filled up by the user. + spectrometer_params (SpectrometerParameters): + Spectrometer metadata object with spectrometer configurations. + DMD_params (DMDParameters): + DMD metadata object with DMD configurations. + acquisition_params (AcquisitionParameters): + Acquisition related metadata object. + reconstruction_params (ReconstructionParameters): + Object containing parameters of the neural network to be loaded for + reconstruction. 
+ """ + + loop = True # is to project continuously a unique pattern to tune the spectrometer + + pixel_amount = (spectrometer_params.stop_pixel - + spectrometer_params.start_pixel + 1) + + spectral_data = np.zeros( + (acquisition_params.pattern_amount,pixel_amount), + dtype=np.float64) + + acquisition_params.acquired_spectra = 0 + + AcquisitionResults = _acquire_raw(ava, DMD, spectrometer_params, + DMD_params, acquisition_params, loop) + + (data, spectrum_index, timestamp, time, + start_measurement_time, saturation_detected) = AcquisitionResults + + time, timestamp = _calculate_elapsed_time( + start_measurement_time, time, timestamp) + + begin = acquisition_params.pattern_amount + end = 2 * acquisition_params.pattern_amount + spectral_data[begin:end] = data + + acquisition_params.acquired_spectra += spectrum_index + + acquisition_params.saturation_detected = saturation_detected + + +def check_ueye(func, *args, exp=0, raise_exc=True, txt=None): + """Check for bad input value + + Args: + ---------- + func : TYPE + the ueye function. + *args : TYPE + the input value. + exp : TYPE, optional + DESCRIPTION. The default is 0. + raise_exc : TYPE, optional + DESCRIPTION. The default is True. + txt : TYPE, optional + DESCRIPTION. The default is None. + + Raises + ------ + RuntimeError + DESCRIPTION. + + Returns + ------- + None. 
+ """ + + ret = func(*args) + if not txt: + txt = "{}: Expected {} but ret={}!".format(str(func), exp, ret) + if ret != exp: + if raise_exc: + raise RuntimeError(txt) + else: + logging.critical(txt) + + +def stopCapt_DeallocMem(camPar): + """Stop capture and deallocate camera memory if need to change AOI + + Args: + ---------- + camPar (CAM): + Metadata object of the IDS monochrome camera + + Returns: + ------- + camPar (CAM): + Metadata object of the IDS monochrome camera + """ + + if camPar.camActivated == 1: + nRet = ueye.is_StopLiveVideo(camPar.hCam, ueye.IS_FORCE_VIDEO_STOP) + if nRet == ueye.IS_SUCCESS: + camPar.camActivated = 0 + print('video stop successful') + else: + print('problem to stop the video') + + if camPar.Memory == 1: + nRet = ueye.is_FreeImageMem(camPar.hCam, camPar.pcImageMemory, camPar.MemID) + if nRet == ueye.IS_SUCCESS: + camPar.Memory = 0 + print('deallocate memory successful') + else: + print('Problem to deallocate memory of the camera') + + return camPar + + +def stopCapt_DeallocMem_ExitCam(camPar): + """Stop capture, deallocate camera memory if need to change AOI and disconnect the camera + + Args: + ---------- + camPar (CAM): + Metadata object of the IDS monochrome camera + + Returns: + ------- + camPar (CAM): + Metadata object of the IDS monochrome camera + """ + if camPar.camActivated == 1: + nRet = ueye.is_StopLiveVideo(camPar.hCam, ueye.IS_FORCE_VIDEO_STOP) + if nRet == ueye.IS_SUCCESS: + camPar.camActivated = 0 + print('video stop successful') + else: + print('problem to stop the video') + + if camPar.Memory == 1: + nRet = ueye.is_FreeImageMem(camPar.hCam, camPar.pcImageMemory, camPar.MemID) + if nRet == ueye.IS_SUCCESS: + camPar.Memory = 0 + print('deallocate memory successful') + else: + print('Problem to deallocate memory of the camera') + + if camPar.Exit == 2: + nRet = ueye.is_ExitCamera(camPar.hCam) + if nRet == ueye.IS_SUCCESS: + camPar.Exit = 0 + print('Camera disconnected') + else: + print('Problem to disconnect camera, 
need to restart spyder') + + return camPar + + +class ImageBuffer: + """A class to allocate buffers in the camera memory + """ + + pcImageMemory = None + MemID = None + width = None + height = None + nbitsPerPixel = None + + +def imageQueue(camPar): + """Create the image queue: allocate 3 or more buffers depending on the framerate and initialize the queue + + Args: + ---------- + camPar (CAM): + Metadata object of the IDS monochrome camera + + Returns: + ------- + None. + + """ + + sleep(1) # required (the 1 s delay has not been optimized) + buffers = [] + for y in range(10): + buffers.append(ImageBuffer()) + + for x in range(len(buffers)): + buffers[x].nbitsPerPixel = camPar.nBitsPerPixel # RAW8 + buffers[x].height = camPar.rectAOI.s32Height # sensorinfo.nMaxHeight + buffers[x].width = camPar.rectAOI.s32Width # sensorinfo.nMaxWidth + buffers[x].MemID = ueye.int(0) + buffers[x].pcImageMemory = ueye.c_mem_p() + check_ueye(ueye.is_AllocImageMem, camPar.hCam, buffers[x].width, buffers[x].height, buffers[x].nbitsPerPixel, + buffers[x].pcImageMemory, buffers[x].MemID) + check_ueye(ueye.is_AddToSequence, camPar.hCam, buffers[x].pcImageMemory, buffers[x].MemID) + + check_ueye(ueye.is_InitImageQueue, camPar.hCam, ueye.c_int(0)) + if camPar.trigger_mode == 'soft': + check_ueye(ueye.is_SetExternalTrigger, camPar.hCam, ueye.IS_SET_TRIGGER_SOFTWARE) + elif camPar.trigger_mode == 'hard': + check_ueye(ueye.is_SetExternalTrigger, camPar.hCam, ueye.IS_SET_TRIGGER_LO_HI) + + +def prepareCam(camPar, metadata): + """Prepare the IDS monochrome camera before acquisition + + Args: + ---------- + camPar (CAM): + Metadata object of the IDS monochrome camera + metadata (MetaData): + Metadata concerning the experiment, paths, file inputs and file + outputs. Must be created and filled up by the user. + + Returns: + ------- + camPar (CAM): + Metadata object of the IDS monochrome camera + + """ + cam_path = metadata.output_directory + '\\' + metadata.experiment_name + '_video.' 
+ camPar.vidFormat + strFileName = ueye.c_char_p(cam_path.encode('utf-8')) + + if camPar.vidFormat == 'avi': + # print('Video format : AVI') + camPar.avi = ueye.int() + nRet = ueye_tools.isavi_InitAVI(camPar.avi, camPar.hCam) + # print("isavi_InitAVI") + if nRet != ueye_tools.IS_AVI_NO_ERR: + print("isavi_InitAVI ERROR") + + nRet = ueye_tools.isavi_SetImageSize(camPar.avi, camPar.m_nColorMode, camPar.rectAOI.s32Width , camPar.rectAOI.s32Height, 0, 0, 0) + nRet = ueye_tools.isavi_SetImageQuality(camPar.avi, 100) + if nRet != ueye_tools.IS_AVI_NO_ERR: + print("isavi_SetImageQuality ERROR") + + nRet = ueye_tools.isavi_OpenAVI(camPar.avi, strFileName) + if nRet != ueye_tools.IS_AVI_NO_ERR: + print("isavi_OpenAVI ERROR") + print('Error code = ' + str(nRet)) + print('This is most likely a problem with the file name; avoid special characters like "µ" or try to reduce its length') + + nRet = ueye_tools.isavi_SetFrameRate(camPar.avi, camPar.fps) + if nRet != ueye_tools.IS_AVI_NO_ERR: + print("isavi_SetFrameRate ERROR") + nRet = ueye_tools.isavi_StartAVI(camPar.avi) + # print("isavi_StartAVI") + if nRet != ueye_tools.IS_AVI_NO_ERR: + print("isavi_StartAVI ERROR") + + + elif camPar.vidFormat == 'bin': + camPar.punFileID = ueye.c_uint() + nRet = ueye_tools.israw_InitFile(camPar.punFileID, ueye_tools.IS_FILE_ACCESS_MODE_WRITE) + if nRet != ueye_tools.IS_AVI_NO_ERR: + print("INIT RAW FILE ERROR") + + nRet = ueye_tools.israw_SetImageInfo(camPar.punFileID, camPar.rectAOI.s32Width, camPar.rectAOI.s32Height, camPar.nBitsPerPixel) + if nRet != ueye_tools.IS_AVI_NO_ERR: + print("SET IMAGE INFO ERROR") + + if nRet == ueye.IS_SUCCESS: + # print('initFile ok') + # print('SetImageInfo ok') + nRet = ueye_tools.israw_OpenFile(camPar.punFileID, strFileName) + # if nRet == ueye.IS_SUCCESS: + # # print('OpenFile success') + + # --------------------------------------------------------- + # Activates the camera's live video mode (free run mode) + # 
--------------------------------------------------------- + nRet = ueye.is_CaptureVideo(camPar.hCam, ueye.IS_DONT_WAIT) + + if nRet != ueye.IS_SUCCESS: + print("is_CaptureVideo ERROR") + else: + camPar.camActivated = 1 + + return camPar + + +def runCam_thread(camPar, start_chrono): + """Acquire video with the IDS monochrome camera in a thread + + Parameters: + ---------- + camPar (CAM): + Metadata object of the IDS monochrome camera + start_chrono : int + to save a delay for each acquisition frame of the video. + + Returns: + ------- + None. + """ + + imageinfo = ueye.UEYEIMAGEINFO() + current_buffer = ueye.c_mem_p() + current_id = ueye.int() + # inc = 0 + entier_old = 0 + # time.sleep(0.01) + while True: + nret = ueye.is_WaitForNextImage(camPar.hCam, camPar.timeout, current_buffer, current_id) + if nret == ueye.IS_SUCCESS: + check_ueye(ueye.is_GetImageInfo, camPar.hCam, current_id, imageinfo, ueye.sizeof(imageinfo)) + start_time = time.time() + counter = start_time - start_chrono + camPar.time_array.append(counter) + if camPar.vidFormat == 'avi': + nRet = ueye_tools.isavi_AddFrame(camPar.avi, current_buffer) + elif camPar.vidFormat == 'bin': + nRet = ueye_tools.israw_AddFrame(camPar.punFileID, current_buffer, imageinfo.u64TimestampDevice) + + check_ueye(ueye.is_UnlockSeqBuf, camPar.hCam, current_id, current_buffer) + else: + print('Thread finished') + break + + +def stopCam(camPar): + """To stop the acquisition of the video + + Parameters + ---------- + camPar (CAM): + Metadata object of the IDS monochrome camera + + Returns + ------- + camPar (CAM): + Metadata object of the IDS monochrome camera + """ + + if camPar.vidFormat == 'avi': + ueye_tools.isavi_StopAVI(camPar.hCam) + ueye_tools.isavi_CloseAVI(camPar.hCam) + ueye_tools.isavi_ExitAVI(camPar.hCam) + elif camPar.vidFormat == 'bin': + ueye_tools.israw_CloseFile(camPar.punFileID) + ueye_tools.israw_ExitFile(camPar.punFileID) + camPar.punFileID = ueye.c_uint() + + return camPar + + +def disconnect(ava: 
Optional[Avantes]=None, + DMD: Optional[ALP4]=None): + """Disconnect spectrometer and DMD. + + Disconnects the equipment, trying to stop a running pattern sequence (which + could block correct functioning) and to free the DMD memory to avoid errors + in later acquisitions. + + Args: + ava (Avantes, optional): + Connected spectrometer (Avantes object). Defaults to None. + DMD (ALP4, optional): + Connected DMD. Defaults to None. + """ + + if ava is not None: + ava.disconnect() + print('Spectro disconnected') + + if DMD is not None: + + # Stop the sequence display + DMD.Halt() + + # Free the sequence from the onboard memory (if any is present) + if DMD.Seqs: + DMD.FreeSeq() + + DMD.Free() + print('DMD disconnected') + + +def disconnect_2arms(ava: Optional[Avantes]=None, + DMD: Optional[ALP4]=None, + camPar=None): + """Disconnect spectrometer, DMD and the IDS monochrome camera. + + Disconnects the equipment, trying to stop a running pattern sequence (which + could block correct functioning) and to free the DMD memory to avoid errors + in later acquisitions. + + Args: + ava (Avantes, optional): + Connected spectrometer (Avantes object). Defaults to None. + DMD (ALP4, optional): + Connected DMD. Defaults to None. 
+ camPar (CAM): + Metadata object of the IDS monochrome camera + """ + + if ava is not None: + ava.disconnect() + print('Spectro disconnected') + + disconnect_DMD(DMD) + + if camPar.camActivated == 1: + nRet = ueye.is_StopLiveVideo(camPar.hCam, ueye.IS_FORCE_VIDEO_STOP) + if nRet == ueye.IS_SUCCESS: + camPar.camActivated = 0 + else: + print('Problem to stop video, need to restart spyder') + + if camPar.Memory == 1: + nRet = ueye.is_FreeImageMem(camPar.hCam, camPar.pcImageMemory, camPar.MemID) + if nRet == ueye.IS_SUCCESS: + camPar.Memory = 0 + else: + print('Problem to deallocate camera memory, need to restart spyder') + + + if camPar.Exit == 1 or camPar.Exit == 2: + nRet = ueye.is_ExitCamera(camPar.hCam) + if nRet == ueye.IS_SUCCESS: + camPar.Exit = 0 + print('Camera disconnected') + else: + print('Problem to disconnect camera, need to restart spyder') + + +def captureVid(camPar): + """ + Allocate memory and begin video capture of the IDS camera + + Args: + camPar : a structure containing the parameters of the IDS camera. + + Returns: + camPar : a structure containing the parameters of the IDS camera. 
+ """ + camPar = stopCapt_DeallocMem_ExitCam(camPar) + + if camPar.Exit == 0: + camPar = _init_CAM() + camPar.Exit = 1 + + + ### Set the AOI + sizeofrectAOI = ueye.c_uint(4*4) + nRet = ueye.is_AOI(camPar.hCam, ueye.IS_AOI_IMAGE_SET_AOI, camPar.rectAOI, sizeofrectAOI) + if nRet != ueye.IS_SUCCESS: + print("AOI setting ERROR") + + width = camPar.rectAOI.s32Width + height = camPar.rectAOI.s32Height + + ### Allocates an image memory for an image having its dimensions defined by width and height and its color depth defined by nBitsPerPixel + nRet = ueye.is_AllocImageMem(camPar.hCam, width, height, camPar.nBitsPerPixel, camPar.pcImageMemory, camPar.MemID) + if nRet != ueye.IS_SUCCESS: + print("is_AllocImageMem ERROR") + else: + # Makes the specified image memory the active memory + camPar.Memory = 1 + nRet = ueye.is_SetImageMem(camPar.hCam, camPar.pcImageMemory, camPar.MemID) + if nRet != ueye.IS_SUCCESS: + print("is_SetImageMem ERROR") + else: + # Set the desired color mode + nRet = ueye.is_SetColorMode(camPar.hCam, camPar.m_nColorMode) + + + ### Activates the camera's live video mode (free run mode) + nRet = ueye.is_CaptureVideo(camPar.hCam, ueye.IS_DONT_WAIT) + if nRet != ueye.IS_SUCCESS: + print("is_CaptureVideo ERROR") + + + ### Enables the queue mode for existing image memory sequences + nRet = ueye.is_InquireImageMem(camPar.hCam, camPar.pcImageMemory, camPar.MemID, width, height, camPar.nBitsPerPixel, camPar.pitch) + if nRet != ueye.IS_SUCCESS: + print("is_InquireImageMem ERROR") + + camPar.camActivated = 1 + + return camPar + +def setup_cam(camPar, pixelClock, fps, Gain, gain_boost, nGamma, ExposureTime, black_level): + """ + Set and read the camera parameters + + Args: + pixelClock = [118, 237 or 474] (MHz) + fps: fps boundary => [1 - No Value] sup limit depend of image size (216 fps for 768x544 pixels for example) + Gain: Gain boundary => [0 100] + gain_boost: 'ON' set "ON" to activate gain boost, "OFF" to deactivate + nGamma: Gamma boundary => [1 - 2.2] + 
ExposureTime: Exposure time (ms) boundary => [0.032 - 56.221] + black_level: Black Level boundary => [0 255] + + Returns: + CAM: a structure containing the parameters of the IDS camera + """ + # This code must be executed twice for the parameter changes to take effect + for i in range(2): + ############################### Set Pixel Clock ############################### + ### Get range of pixel clock, result : range = [118 474] MHz (Inc = 0) + getpixelclock = ueye.UINT(0) + newpixelclock = ueye.UINT(0) + newpixelclock.value = pixelClock + PixelClockRange = (ueye.int * 3)() + + # Get pixel clock range + nRet = ueye.is_PixelClock(camPar.hCam, ueye.IS_PIXELCLOCK_CMD_GET_RANGE, PixelClockRange, ueye.sizeof(PixelClockRange)) + if nRet == ueye.IS_SUCCESS: + nPixelClockMin = PixelClockRange[0] + nPixelClockMax = PixelClockRange[1] + nPixelClockInc = PixelClockRange[2] + + # Set pixel clock + check_ueye(ueye.is_PixelClock, camPar.hCam, ueye.PIXELCLOCK_CMD.IS_PIXELCLOCK_CMD_SET, newpixelclock, + ueye.sizeof(newpixelclock)) + # Get current pixel clock + check_ueye(ueye.is_PixelClock, camPar.hCam, ueye.PIXELCLOCK_CMD.IS_PIXELCLOCK_CMD_GET, getpixelclock, + ueye.sizeof(getpixelclock)) + + camPar.pixelClock = getpixelclock.value + if i == 1: + print(' pixel clock = ' + str(getpixelclock) + ' MHz') + if getpixelclock == 118: + if i == 1: + print('Pixel clock locked at 118 MHz; unplug the camera if this is not desired') + # get the bandwidth (in MByte/s) + camPar.bandwidth = ueye.is_GetUsedBandwidth(camPar.hCam) + if i == 1: + print(' Bandwidth = ' + str(camPar.bandwidth) + ' MB/s') + ############################### Set FrameRate ################################# + ### Read current FrameRate + dblFPS_init = ueye.c_double() + nRet = ueye.is_GetFramesPerSecond(camPar.hCam, dblFPS_init) + if nRet != ueye.IS_SUCCESS: + print("FrameRate getting ERROR") + else: + dblFPS_eff = dblFPS_init + if i == 1: + print(' current FPS = 
'+str(round(dblFPS_init.value*100)/100) + ' fps') + if fps < 1: + fps = 1 + if i == 1: + print('FPS exceed lower limit >= 1') + + dblFPS = ueye.c_double(fps) + if (dblFPS.value < dblFPS_init.value-0.01) | (dblFPS.value > dblFPS_init.value+0.01): + newFPS = ueye.c_double() + nRet = ueye.is_SetFrameRate(camPar.hCam, dblFPS, newFPS) + time.sleep(1) + if nRet != ueye.IS_SUCCESS: + print("FrameRate setting ERROR") + else: + if i == 1: + print(' new FPS = '+str(round(newFPS.value*100)/100) + ' fps') + ### Read the effective FPS again; it depends on the image size (e.g. 17.7 fps is not achievable with the entire image size, i.e. 2076x3088) + dblFPS_eff = ueye.c_double() + nRet = ueye.is_GetFramesPerSecond(camPar.hCam, dblFPS_eff) + if nRet != ueye.IS_SUCCESS: + print("FrameRate getting ERROR") + else: + if i == 1: + print(' effective FPS = '+str(round(dblFPS_eff.value*100)/100) + ' fps') + ############################### Set GAIN ###################################### + #### The maximum gain depends on the sensor. 
Conversion from gain code to gain to limit values from 0 to 100 + # gain_code = gain * slope + b + gain_max_code = 1450 + gain_min_code = 100 + gain_max = 100 + gain_min = 0 + slope = (gain_max_code-gain_min_code)/(gain_max-gain_min) + b = gain_min_code + #### Read gain setting + current_gain_code = ueye.c_int() + current_gain_code = ueye.is_SetHWGainFactor(camPar.hCam, ueye.IS_GET_MASTER_GAIN_FACTOR, current_gain_code) + current_gain = round((current_gain_code-b)/slope) + + if i == 1: + print(' current GAIN = '+str(current_gain)) + gain_eff = current_gain + + ### Set new gain value + gain = ueye.c_int(Gain) + if gain.value != current_gain: + if gain.value < 0: + gain = ueye.c_int(0) + if i == 1: + print('Gain exceed lower limit >= 0') + elif gain.value > 100: + gain = ueye.c_int(100) + if i == 1: + print('Gain exceed upper limit <= 100') + gain_code = ueye.c_int(round(slope*gain.value+b)) + + ueye.is_SetHWGainFactor(camPar.hCam, ueye.IS_SET_MASTER_GAIN_FACTOR, gain_code) + new_gain = round((gain_code-b)/slope) + + if i == 1: + print(' new GAIN = '+str(new_gain)) + gain_eff = new_gain + ############################### Set GAIN Boost ################################ + ### Read current state of the gain boost + current_gain_boost_bool = ueye.is_SetGainBoost(camPar.hCam, ueye.IS_GET_GAINBOOST) + if nRet != ueye.IS_SUCCESS: + print("Gain boost ERROR") + if current_gain_boost_bool == 0: + current_gain_boost = 'OFF' + elif current_gain_boost_bool == 1: + current_gain_boost = 'ON' + + if i == 1: + print('current Gain boost mode = ' + current_gain_boost) + + ### Set the state of the gain boost + if gain_boost != current_gain_boost: + if gain_boost == 'OFF': + nRet = ueye.is_SetGainBoost (camPar.hCam, ueye.IS_SET_GAINBOOST_OFF) + print(' new Gain Boost : OFF') + + elif gain_boost == 'ON': + nRet = ueye.is_SetGainBoost (camPar.hCam, ueye.IS_SET_GAINBOOST_ON) + print(' new Gain Boost : ON') + + if nRet != ueye.IS_SUCCESS: + print("Gain boost setting ERROR") + 
############################### Set Gamma ##################################### + ### Check boundary of Gamma + if nGamma > 2.2: + nGamma = 2.2 + if i == 1: + print('Gamma exceed upper limit <= 2.2') + elif nGamma < 1: + nGamma = 1 + if i == 1: + print('Gamma exceed lower limit >= 1') + ### Read current Gamma + c_nGamma_init = ueye.c_void_p() + sizeOfnGamma = ueye.c_uint(4) + nRet = ueye.is_Gamma(camPar.hCam, ueye.IS_GAMMA_CMD_GET, c_nGamma_init, sizeOfnGamma) + if nRet != ueye.IS_SUCCESS: + print("Gamma getting ERROR") + else: + if i == 1: + print(' current Gamma = ' + str(c_nGamma_init.value/100)) + ### Set Gamma + c_nGamma = ueye.c_void_p(round(nGamma*100)) # need to multiply by 100 [100 - 220] + if c_nGamma_init.value != c_nGamma.value: + nRet = ueye.is_Gamma(camPar.hCam, ueye.IS_GAMMA_CMD_SET, c_nGamma, sizeOfnGamma) + if nRet != ueye.IS_SUCCESS: + print("Gamma setting ERROR") + else: + if i == 1: + print(' new Gamma = '+str(c_nGamma.value/100)) + ############################### Set Exposure time ############################# + ### Read current Exposure Time + getExposure = ueye.c_double() + sizeOfpParam = ueye.c_uint(8) + nRet = ueye.is_Exposure(camPar.hCam, ueye.IS_EXPOSURE_CMD_GET_EXPOSURE, getExposure, sizeOfpParam) + if nRet == ueye.IS_SUCCESS: + getExposure.value = round(getExposure.value*1000)/1000 + + if i == 1: + print(' current Exposure Time = ' + str(getExposure.value) + ' ms') + ### Get minimum Exposure Time + minExposure = ueye.c_double() + nRet = ueye.is_Exposure(camPar.hCam, ueye.IS_EXPOSURE_CMD_GET_EXPOSURE_RANGE_MIN, minExposure, sizeOfpParam) + ### Get maximum Exposure Time + maxExposure = ueye.c_double() + nRet = ueye.is_Exposure(camPar.hCam, ueye.IS_EXPOSURE_CMD_GET_EXPOSURE_RANGE_MAX, maxExposure, sizeOfpParam) + ### Get increment Exposure Time + incExposure = ueye.c_double() + nRet = ueye.is_Exposure(camPar.hCam, ueye.IS_EXPOSURE_CMD_GET_EXPOSURE_RANGE_INC, incExposure, sizeOfpParam) + ### Set new Exposure Time + setExposure = 
ueye.c_double(ExposureTime) + if setExposure.value > maxExposure.value: + setExposure.value = maxExposure.value + if i == 1: + print('Exposure Time exceed upper limit <= ' + str(maxExposure.value)) + elif setExposure.value < minExposure.value: + setExposure.value = minExposure.value + if i == 1: + print('Exposure Time exceed lower limit >= ' + str(minExposure.value)) + + if (setExposure.value < getExposure.value-incExposure.value/2) | (setExposure.value > getExposure.value+incExposure.value/2): + nRet = ueye.is_Exposure(camPar.hCam, ueye.IS_EXPOSURE_CMD_SET_EXPOSURE, setExposure, sizeOfpParam) + if nRet != ueye.IS_SUCCESS: + print("Exposure Time ERROR") + else: + if i == 1: + print(' new Exposure Time = ' + str(round(setExposure.value*1000)/1000) + ' ms') + ############################### Set Black Level ############################### + current_black_level_c = ueye.c_uint() + sizeOfBlack_level = ueye.c_uint(4) + ### Read current Black Level + nRet = ueye.is_Blacklevel(camPar.hCam, ueye.IS_BLACKLEVEL_CMD_GET_OFFSET, current_black_level_c, sizeOfBlack_level) + if nRet != ueye.IS_SUCCESS: + print("Black Level getting ERROR") + else: + if i == 1: + print(' current Black Level = ' + str(current_black_level_c.value)) + + ### Set Black Level + if black_level > 255: + black_level = 255 + if i == 1: + print('Black Level exceed upper limit <= 255') + if black_level < 0: + black_level = 0 + if i == 1: + print('Black Level exceed lower limit >= 0') + + black_level_c = ueye.c_uint(black_level) + if black_level != current_black_level_c.value : + nRet = ueye.is_Blacklevel(camPar.hCam, ueye.IS_BLACKLEVEL_CMD_SET_OFFSET, black_level_c, sizeOfBlack_level) + if nRet != ueye.IS_SUCCESS: + print("Black Level setting ERROR") + else: + if i == 1: + print(' new Black Level = ' + str(black_level_c.value)) + + + camPar.fps = round(dblFPS_eff.value*100)/100 + camPar.gain = gain_eff + camPar.gainBoost = gain_boost + camPar.gamma = c_nGamma.value/100 + camPar.exposureTime = 
round(setExposure.value*1000)/1000 + camPar.blackLevel = black_level_c.value + + return camPar + + +def snapshot(camPar, pathIDSsnapshot, pathIDSsnapshot_overview): + """ + Take a snapshot with the IDS camera + + Args: + camPar: a structure containing the parameters of the IDS camera + """ + array = ueye.get_data(camPar.pcImageMemory, camPar.rectAOI.s32Width, camPar.rectAOI.s32Height, camPar.nBitsPerPixel, camPar.pitch, copy=False) + + # ...reshape it into a numpy array... + frame = np.reshape(array,(camPar.rectAOI.s32Height.value, camPar.rectAOI.s32Width.value))#, camPar.bytes_per_pixel)) + + with pathIDSsnapshot.open('wb') as f: + np.save(f,frame) + + maxi = np.amax(frame) + if maxi == 0: + maxi = 1 + im = Image.fromarray(frame*math.floor(255/maxi)) + im.save(pathIDSsnapshot_overview) + + maxi = np.amax(frame) + # print() + # print('frame max = ' + str(maxi)) + # print('frame min = ' + str(np.amin(frame))) + if maxi >= 255: + print('Saturation detected') + + plt.figure() + plt.imshow(frame)#, cmap='gray', vmin=mini, vmax=maxi) + plt.colorbar() + plt.show() + + \ No newline at end of file diff --git a/spas/command_spectrograph.py b/spas/command_spectrograph.py new file mode 100644 index 0000000..ed2901a --- /dev/null +++ b/spas/command_spectrograph.py @@ -0,0 +1,36 @@ +# -*- coding: utf-8 -*- +""" +Created on Tue Oct 8 16:52:13 2024 + +@author: mahieu +""" + +# import sys +# # Set the path where the "CM110.py" module is installed in your computer. +# sys.path.append('../../spectrograph') + +from spas.spectro_SP_lib import open_serial, close_serial, grating +from spas.spectro_SP_lib import query_echo, query_unit, query_position, query_grating, query_speed, query_size +from spas.spectro_SP_lib import cmd_unit, cmd_size, cmd_speed, cmd_step, cmd_goto, cmd_selectGrating, cmd_reset, cmd_scan + +#%% Open serial port. 
+serial_port = open_serial(comm_port = 'COM3') +#%% Query functions +query_echo(serial_port) +unit = query_unit(serial_port, print_unit = True) +position = query_position(serial_port, print_position = True) +grating = query_grating(serial_port, grating, print_grating_info = True) +speed = query_speed(serial_port, print_speed = True) +size = query_size(serial_port, print_size = True) +#%% Command functions +# Below are examples of the implemented commands. Please execute one line at a time. +cmd_unit(serial_port, unit = 'nm', print_unit = True) +cmd_size(serial_port, size = 10, print_size = True) +cmd_speed(serial_port, speed = 3000, print_speed = True) +cmd_step(serial_port, print_position = True) +cmd_selectGrating(serial_port, grating_nbr = 2, print_select = True) +cmd_goto(serial_port, position = 600, unit = 'nm', print_position = True) +cmd_scan(serial_port, start_position = 400, end_position = 800, unit = 'nm') +cmd_reset(serial_port) +#%% Close serial port. +close_serial(serial_port) diff --git a/spas/convert_spec_to_rgb.py b/spas/convert_spec_to_rgb.py index 524b5f9..d3fa4b2 100644 --- a/spas/convert_spec_to_rgb.py +++ b/spas/convert_spec_to_rgb.py @@ -14,6 +14,12 @@ def xyz_from_xy(x, y): """Return the vector (x, y, 1-x-y).""" return np.array((x, y, 1-x-y)) +def linear_srgb_to_rgb(rgb): + nonlinearity = np.vectorize(lambda x: 12.92*x if x < 0.0031308 else 1.055*(x**(1.0/2.4))-0.055) + + return nonlinearity(rgb) + + class ColourSystem: """A class representing a colour system. @@ -26,6 +32,9 @@ class ColourSystem: # The CIE colour matching function for 380 - 780 nm in 5 nm intervals cmf = np.loadtxt(Path(__file__).parent.joinpath('cie-cmf.txt'), usecols=(1,2,3)) + # cmf_path = 'C:/openspyrit/spas/spas/cie-cmf.txt' + # cmf_all = np.loadtxt(cmf_path) + # cmf = cmf_all[:, 1:] def __init__(self, red, green, blue, white): """Initialise the ColourSystem object. 
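The `linear_srgb_to_rgb` helper added in this hunk applies the standard sRGB transfer function. A self-contained sketch of the same piecewise curve, vectorized with `np.where` instead of `np.vectorize` (input values assumed in [0, 1]; `srgb_encode` is a hypothetical name, not part of SPAS):

```python
import numpy as np

def srgb_encode(rgb):
    """Apply the sRGB transfer function to linear RGB values in [0, 1]."""
    rgb = np.asarray(rgb, dtype=float)
    # Linear segment below 0.0031308, gamma segment above; the two
    # branches meet (approximately) at the threshold.
    return np.where(rgb < 0.0031308,
                    12.92 * rgb,
                    1.055 * rgb ** (1.0 / 2.4) - 0.055)

print(srgb_encode([0.0, 0.18, 1.0]))
```

The `np.where` form avoids the per-element Python call overhead of `np.vectorize` while computing the same values.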
@@ -73,6 +82,15 @@ def xyz_to_rgb(self, xyz, out_fmt=None): return self.rgb_to_hex(rgb) return rgb + def xyz_to_rgb2(xyz): + M = np.array([[0.418456, -0.158657, -0.082833], [-0.091167, 0.252426, 0.015707], [0.000921, -0.002550, 0.178595]]) + + rgb = np.dot(M, xyz) + if not np.all(rgb==0): + # Normalize the rgb vector + rgb /= np.max(rgb) + + return rgb def rgb_to_hex(self, rgb): """Convert from fractional rgb values to HTML-style hex string.""" @@ -100,7 +118,11 @@ def spec_to_rgb(self, spec, out_fmt=None): """Convert a spectrum to an rgb value.""" xyz = self.spec_to_xyz(spec) - return self.xyz_to_rgb(xyz, out_fmt) + rgb = self.xyz_to_rgb(xyz, out_fmt) + srgb = linear_srgb_to_rgb(rgb) + return srgb + + # return ColourSystem.xyz_to_rgb2(xyz) illuminant_D65 = xyz_from_xy(0.3127, 0.3291) diff --git a/spas/generate.py b/spas/generate.py index 4529b24..2efa683 100644 --- a/spas/generate.py +++ b/spas/generate.py @@ -10,7 +10,7 @@ import numpy as np from PIL import Image from spyrit.misc.statistics import Cov2Var -from spyrit.learning.model_Had_DCAN import Permutation_Matrix +from spyrit.misc.sampling import Permutation_Matrix import spyrit.misc.walsh_hadamard as wh diff --git a/spas/metadata.py b/spas/metadata2.py similarity index 100% rename from spas/metadata.py rename to spas/metadata2.py diff --git a/spas/metadata_SPC2D.py b/spas/metadata_SPC2D.py new file mode 100644 index 0000000..344b1dd --- /dev/null +++ b/spas/metadata_SPC2D.py @@ -0,0 +1,1118 @@ +# -*- coding: utf-8 -*- +__author__ = 'Guilherme Beneti Martins' + +"""Metadata classes and utilities. + +Metadata classes to keep and save all relevant data during an acquisition. +Utility functions to recreate objects from JSON files, save them to JSON and to +improve readability. 
+""" + +import json +from datetime import datetime +from enum import IntEnum +from dataclasses import dataclass, InitVar, field +from typing import Optional, Union, List, Tuple, Optional +from pathlib import Path +import os +from dataclasses_json import dataclass_json +import numpy as np +import ctypes as ct +import pickle +##### DLL for the DMD +try: + import ALP4 +except: # in the cas the DLL of the DMD is not installed + class ALP4: + pass + setattr(ALP4, 'ALP4', None) + print('DLL of the DMD not installed') +##### DLL for the spectrometer Avantes +try: + from msl.equipment.resources.avantes import MeasConfigType +except: # in the cas the DLL of the spectrometer is not installed + class MeasConfigType: + pass + MeasConfigType = None + print('DLL of the spectrometer not installed !!!') + +##### DLL for the camera +try: + from pyueye import ueye + dll_pyueye_installed = 1 +except: + dll_pyueye_installed = 0 + print('DLL of the cam not installed !!') + +class DMDTypes(IntEnum): + """Enumeration of DMD types and respective codes.""" + ALP_DMDTYPE_XGA = 1 + ALP_DMDTYPE_SXGA_PLUS = 2 + ALP_DMDTYPE_1080P_095A = 3 + ALP_DMDTYPE_XGA_07A = 4 + ALP_DMDTYPE_XGA_055A = 5 + ALP_DMDTYPE_XGA_055X = 6 + ALP_DMDTYPE_WUXGA_096A = 7 + ALP_DMDTYPE_WQXGA_400MHZ_090A = 8 + ALP_DMDTYPE_WQXGA_480MHZ_090A = 9 + ALP_DMDTYPE_WXGA_S450 = 12 + ALP_DMDTYPE_DISCONNECT = 255 + + +@dataclass_json +@dataclass +class MetaData: + """ Class containing overall acquisition parameters and description. + + Metadata concerning the experiment, paths, file inputs and file outputs. + This class is adapted to be reconstructed from a JSON file. + + Attributes: + output_directory (Union[str, Path], optional): + Directory where multiple related acquisitions will be stored. + pattern_order_source (Union[str, Path], optional): + File where the order of patterns to be sent to DMD is specified. 
It + can be a text file containing a list of pattern indices or a numpy + file containing a covariance matrix from which the pattern order is + calculated. + pattern_source (Union[str, Path], optional): + Pattern source folder. + pattern_prefix (str): + Prefix used in pattern naming. + experiment_name (str): + Prefix of all files related to a single acquisition. Files will + appear with the following string pattern: + experiment_name + '_' + filename. + light_source (str): + Light source used to illuminate an object during acquisition. + object (str): + Object imaged during acquisition. + filter (str): + Light filter used. + description (str): + Acquisition experiment description. + date (str, optional): + Acquisition date. Automatically set when object is created. Default + is None. + time (str, optional): + Time when metadata object is created. Set automatically by + __post_init__(). Default is None. + class_description (str): + Class description used to improve readability when dumped to JSON + file. Default is 'Metadata'. + """ + + pattern_prefix: str + experiment_name: str + + light_source: str + object: str + filter: str + description: str + + output_directory: Union[str, Path] + pattern_order_source: Union[str, Path] + pattern_source: Union[str, Path] + + date: Optional[str] = None + time: Optional[str] = None + + class_description: str = 'Metadata' + + def __post_init__(self): + """Sets time and date of object creation and deals with paths""" + + today = datetime.today() + self.date = '--/--/----' #today.strftime('%d/%m/%Y') + self.time = today.strftime('%I:%M:%S %p') + + # If parameter is str, turn it into Path + if isinstance(self.output_directory, str): + self.output_directory = Path(self.output_directory) + + # If parameter is Path or was turned into a Path, resolve it and get the + # str format. 
+ if issubclass(self.output_directory.__class__, Path): + self.output_directory = str(self.output_directory.resolve()) + + + if isinstance(self.pattern_order_source, str): + self.pattern_order_source = Path(self.pattern_order_source) + + if issubclass(self.pattern_order_source.__class__, Path): + self.pattern_order_source = str( + self.pattern_order_source.resolve()) + + + if isinstance(self.pattern_source, str): + self.pattern_source = Path(self.pattern_source) + + if issubclass(self.pattern_source.__class__, Path): + self.pattern_source = str(self.pattern_source.resolve()) + + +@dataclass_json +@dataclass +class CAM: + """Class containing IDS camera configurations. + + Further information: https://en.ids-imaging.com/manuals/ids-software-suite/ueye-manual/4.95/en/c_programmierung.html. + + Attributes: + hCam (ueye.c_uint): Handle of the camera. + sInfo (ueye.SENSORINFO):sensor information : [SensorID [c_ushort] = 566; + strSensorName [c_char_Array_32] = b'UI388xCP-M'; + nColorMode [c_char] = b'\x01'; + nMaxWidth [c_uint] = 3088; + nMaxHeight [c_uint] = 2076; + bMasterGain [c_int] = 1; + bRGain [c_int] = 0; + bGGain [c_int] = 0; + bBGain [c_int] = 0; + bGlobShutter [c_int] = 0; + wPixelSize [c_ushort] = 240; + nUpperLeftBayerPixel [c_char] = b'\x00'; + Reserved]. + cInfo (ueye.BOARDINFO):Camera information: [SerNo [c_char_Array_12] = b'4103219888'; + ID [c_char_Array_20] = b'IDS GmbH'; + Version [c_char_Array_10] = b''; + Date [c_char_Array_12] = b'30.11.2017'; + Select [c_ubyte] = 1; + Type [c_ubyte] = 100; + Reserved [c_char_Array_8] = b'';] + nBitsPerPixel (ueye.c_int): number of bits per pixel (8 for monochrome, 24 for color). + m_nColorMode (ueye.c_int): color mode : Y8/RGB16/RGB24/REG32. + bytes_per_pixel (int): bytes_per_pixel = int(nBitsPerPixel / 8). 
rectAOI (ueye.IS_RECT()): rectangle of the Area Of Interest (AOI): s32X [c_int] = 0; + s32Y [c_int] = 0; + s32Width [c_int] = 3088; + s32Height [c_int] = 2076; + pcImageMemory (ueye.c_mem_p()): memory allocation. + MemID (ueye.int()): memory identifier. + pitch (ueye.INT()): line pitch of the image memory, in bytes. + fps (float): set the frame rate in frames per second. + gain (int): Set gain between [0 - 100]. + gainBoost (str): Activate gain boosting ("ON") or deactivate ("OFF"). + gamma (float): Set Gamma between [1 - 2.5] to change the image contrast + exposureTime (float): Set the exposure time between [0.032 - 56.221] + blackLevel (int): Set the black level between [0 - 255] to set an offset in the image. A value of 5 is advised for noise measurements + camActivated (bool) : indicates whether the camera is ready to acquire (1: yes, 0: no) + pixelClock (int) : the pixel clock, three possible values : [118, 237, 474] (MHz) + bandwidth (float) : the bandwidth (in MByte/s) is an approximate value which is calculated based on the pixel clock + Memory (bool) : a boolean to know if the memory inside the camera is busy [1] or free [0] + Exit (int) : if Exit = 2 => execute the is_ExitCamera function (disables the hCam camera handle and releases the memory) | if Exit = 0 => allow to init cam, after that, Exit = 1 + vidFormat (str) : save video in the format avi or bin (for binary) + gate_period (int) : a second TTL is sent by the DMD to trigger the camera, based on the first TTL that triggers the spectrometer. camera trigger period = gate_period*(spectrometer trigger period) + trigger_mode (str) : hard or soft + avi (ueye.int) : A pointer that returns the instance ID which is needed for calling the other uEye AVI functions + punFileID (ueye.c_int) : a pointer in which the instance ID is returned. This ID is needed for calling other functions. 
timeout (int) : time after which the camera stops waiting for a TTL trigger + time_array (List[float]) : the time array saved after each frame received on the camera + int_time_spect (float) : equal to the integration time of the spectrometer; this value is needed because of the rolling shutter of the monochrome IDS camera + black_pattern_num (int) : the number inside the image name of the black pattern (for the hyperspectral arm, or white pattern for the camera arm) to be inserted between the Hadamard patterns + insert_patterns (int) : 0 => no insertion / 1=> insert white patterns for the camera + acq_mode (str) : mode of the acquisition => 'video' or 'snapshot' mode + """ + if dll_pyueye_installed: + hCam: Optional[ueye.c_uint] = None + sInfo: Optional[ueye.SENSORINFO] = None + cInfo: Optional[ueye.BOARDINFO] = None + nBitsPerPixel: Optional[ueye.c_int] = None + m_nColorMode: Optional[ueye.c_int] = None + bytes_per_pixel: Optional[int] = None + rectAOI: Optional[ueye.IS_RECT] = None + pcImageMemory: Optional[ueye.c_mem_p] = None + MemID: Optional[ueye.c_int] = None + pitch: Optional[ueye.c_int] = None + fps: Optional[float] = None + gain: Optional[int] = None + gainBoost: Optional[str] = None + gamma: Optional[float] = None + exposureTime: Optional[float] = None + blackLevel: Optional[int] = None + camActivated : Optional[bool] = None + pixelClock : Optional[int] = None + bandwidth : Optional[float] = None + Memory : Optional[bool] = None + Exit : Optional[int] = None + vidFormat : Optional[str] = None + gate_period : Optional[int] = None + trigger_mode : Optional[str] = None + avi : Optional[ueye.int] = None + punFileID : Optional[ueye.c_int] = None + timeout : Optional[int] = None + time_array : Optional[Union[List[float], str]] = field(default=None, repr=False) + int_time_spect : Optional[float] = None + black_pattern_num : Optional[int] = None + insert_patterns : Optional[int] = None + acq_mode : Optional[str] = None + + class_description: str = 'IDS 
camera configuration' + + def undo_readable_class_CAM(self) -> None: + """Changes the time_array attribute from `str` to `List` of `int`.""" + + def to_float(str_arr): + arr = [] + for s in str_arr: + try: + num = float(s) + arr.append(num) + except ValueError: + pass + return arr + + if self.time_array: + self.time_array = ( + self.time_array.strip('[').strip(']').split(', ')) + self.time_array = to_float(self.time_array) + self.time_array = np.asarray(self.time_array) + + @staticmethod + def readable_class_CAM(cam_params_dict: dict) -> dict: + # pass + """Turns list of time_array into a string. + convert the c_type structure (sInfo, cInfo and rectAOI) into a nested dict + change the bytes type item into str + change the c_types item into their value + """ + + readable_cam_dict = {} + readable_cam_dict_temp = cam_params_dict#camPar.to_dict()# + inc = 0 + for item in readable_cam_dict_temp: + stri = str(type(readable_cam_dict_temp[item])) + # print('----- item : ' + item) + if item == 'sInfo' or item == 'cInfo' or item == 'rectAOI': + readable_cam_dict[item] = dict() + try: + for sub_item in readable_cam_dict_temp[item]._fields_: + new_item = item + '-' + sub_item[0] + try: + att = getattr(readable_cam_dict_temp[item], sub_item[0]).value + except: + att = getattr(readable_cam_dict_temp[item], sub_item[0]) + + if type(att) == bytes: + att = str(att) + + readable_cam_dict[item][sub_item[0]] = att + except: + try: + for sub_item in readable_cam_dict_temp[item]: + # print('----- sub_item : ' + sub_item) + new_item = item + '-' + sub_item + att = readable_cam_dict_temp[item][sub_item] + + if type(att) == bytes: + att = str(att) + + readable_cam_dict[item][sub_item] = att + except: + print('warning, impossible to read the subitem of readable_cam_dict_temp[item]') + + elif stri.find('pyueye') >=0: + try: + readable_cam_dict[item] = readable_cam_dict_temp[item].value + except: + readable_cam_dict[item] = readable_cam_dict_temp[item] + elif item == 'time_array': + 
readable_cam_dict[item] = str(readable_cam_dict_temp[item]) + else: + readable_cam_dict[item] = readable_cam_dict_temp[item] + + return readable_cam_dict + + +@dataclass_json +@dataclass +class AcquisitionParameters: + """Class containing acquisition specifications and timing results. + + This class is adapted to be reconstructed from a JSON file. + + Attributes: + pattern_compression (float): + Percentage of total available patterns to be present in an + acquisition sequence. + pattern_dimension_x (int): + Length of reconstructed image that defines pattern length. + pattern_dimension_y (int): + Width of reconstructed image that defines pattern width. + zoom (int): + numerical zoom of the patterns + xw_offset (int): + offset of the pattern in the DMD for zoom > 1 in the width (x) direction + yh_offset (int): + offset of the pattern in the DMD for zoom > 1 in the height (y) direction + mask_index (Union[np.ndarray, str], optional): + Array of `int` type corresponding to the index of the mask vector where + the value is equal to 1 + x_mask_coord (Union[np.ndarray, str], optional): + coordinates of the mask in the x direction x[0] and x[1] are the first + and last points respectively + y_mask_coord (Union[np.ndarray, str], optional): + coordinates of the mask in the y direction y[0] and y[1] are the first + and last points respectively + pattern_amount (int, optional): + Quantity of patterns sent to DMD for an acquisition. This value is + calculated by an external function. Default is None. + acquired_spectra (int, optional): + Amount of spectra actually read from the spectrometer. This value is + calculated by an external function. Default is None. + mean_callback_acquisition_time_ms (float, optional): + Mean time between 2 callback executions during an acquisition. This + value is calculated by an external function. Default is None. + total_callback_acquisition_time_s (float, optional): + Total time of callback executions during an acquisition. 
This value + is calculated by an external function. Default is None. + mean_spectrometer_acquisition_time_ms (float, optional): + Mean time between 2 spectrometer measurements during an acquisition + based on its own internal clock. This value is calculated by an + external function. Default is None. + total_spectrometer_acquisition_time_s (float, optional): + Total time of spectrometer measurements during an acquisition + based on its own internal clock. This value is calculated by an + external function. Default is None. + saturation_detected (bool, optional): + Boolean indicating if saturation was detected during acquisition. + Default is None. + patterns (Union[List[int], str], optional): + List of `int` or `str` containing all patterns sent to the DMD for an + acquisition sequence. This value is set by an external function and + its type can be modified by multiple functions during object + creation, manipulation, when dumping to a JSON file or + when reconstructing an AcquisitionParameters object from a JSON + file. It is intended to be of type List[int] most of the execution + time. Default is None. + wavelengths (Union[np.ndarray, str], optional): + Array of `float` type corresponding to the wavelengths associated + with spectrometer's start and stop pixels. + timestamps (Union[List[float], str], optional): + List of `float` type elapsed time between each measurement + made by the spectrometer based on its internal clock. Units in + milliseconds. Default is None. + measurement_time (Union[List[float], str], optional): + List of `float` type elapsed times between each callback. Units in + milliseconds. Default is None. + class_description (str): + Class description used to improve readability when dumped to JSON + file. Default is 'Acquisition parameters'. 
+ """ + + pattern_compression: float + pattern_dimension_x: int + pattern_dimension_y: int + zoom: Optional[int] = field(default=None) + xw_offset: Optional[int] = field(default=None) + yh_offset: Optional[int] = field(default=None) + mask_index: Optional[Union[np.ndarray, str]] = field(default=None, + repr=False) + x_mask_coord: Optional[Union[np.ndarray, str]] = field(default=None, + repr=False) + y_mask_coord: Optional[Union[np.ndarray, str]] = field(default=None, + repr=False) + + pattern_amount: Optional[int] = None + acquired_spectra: Optional[int] = None + + mean_callback_acquisition_time_ms: Optional[float] = None + total_callback_acquisition_time_s: Optional[float] = None + mean_spectrometer_acquisition_time_ms: Optional[float] = None + total_spectrometer_acquisition_time_s: Optional[float] = None + + saturation_detected: Optional[bool] = None + + patterns: Optional[Union[List[int], str]] = field(default=None, repr=False) + patterns_wp: Optional[Union[List[int], str]] = field(default=None, repr=False) + wavelengths: Optional[Union[np.ndarray, str]] = field(default=None, + repr=False) + timestamps: Optional[Union[List[float], str]] = field(default=None, + repr=False) + measurement_time: Optional[Union[List[float], str]] = field(default=None, + repr=False) + + class_description: str = 'Acquisition parameters' + + + def undo_readable_pattern_order(self) -> None: + """Changes the patterns attribute from `str` to `List` of `int`. + + When reconstructing an AcquisitionParameters object from a JSON file, + this method turns the patterns, wavelengths, timestamps and + measurement_time attributes from a string to a list of integers + containing the pattern indices used in that acquisition. 
+ """ + + def to_float(str_arr): + arr = [] + for s in str_arr: + try: + num = float(s) + arr.append(num) + except ValueError: + pass + return arr + + self.patterns = self.patterns.strip('[').strip(']').split(', ') + self.patterns = [int(s) for s in self.patterns if s.isdigit()] + try: + self.patterns_wp = self.patterns_wp.text.strip('[').strip(']').split(', ') + self.patterns_wp = [int(s) for s in self.patterns_wp if s.isdigit()] + except: + print('patterns_wp has no attribute ''strip''') + + if self.wavelengths: + self.wavelengths = ( + self.wavelengths.strip('[').strip(']').split(', ')) + self.wavelengths = to_float(self.wavelengths) + self.wavelengths = np.asarray(self.wavelengths) + else: + print('wavelenghts not present in metadata.' + ' Reading data in legacy mode.') + + if self.timestamps: + self.timestamps = self.timestamps.strip('[').strip(']').split(', ') + self.timestamps = to_float(self.timestamps) + else: + print('timestamps not present in metadata.' + ' Reading data in legacy mode.') + + if self.measurement_time: + self.measurement_time = ( + self.measurement_time.strip('[').strip(']').split(', ')) + self.measurement_time = to_float(self.measurement_time) + else: + print('measurement_time not present in metadata.' + ' Reading data in legacy mode.') + + if self.mask_index: + self.mask_index = ( + self.mask_index.strip('[').strip(']').split(', ')) + self.mask_index = to_float(self.mask_index) + self.mask_index = np.asarray(self.mask_index) + else: + print('mask_index not present in metadata.' + ' Reading data in legacy mode.') + + if self.x_mask_coord: + self.x_mask_coord = ( + self.x_mask_coord.strip('[').strip(']').split(', ')) + self.x_mask_coord = to_float(self.x_mask_coord) + self.x_mask_coord = np.asarray(self.x_mask_coord) + else: + print('x_mask_coord not present in metadata.' 
+ ' Reading data in legacy mode.') + + if self.y_mask_coord: + self.y_mask_coord = ( + self.y_mask_coord.strip('[').strip(']').split(', ')) + self.y_mask_coord = to_float(self.y_mask_coord) + self.y_mask_coord = np.asarray(self.y_mask_coord) + else: + print('y_mask_coord not present in metadata.' + ' Reading data in legacy mode.') + + @staticmethod + def readable_pattern_order(acquisition_params_dict: dict) -> dict: + """Turns list of patterns into a string. + + Turns the list of pattern attributes from an AcquisitionParameters + object (turned into a dictionary) into a string that will improve + readability once all metadata is dumped into a JSON file. + This function must be called before dumping. + + Args: + acquisition_params_dict (dict): Dictionary obtained from converting + an AcquisitionParameters object. + + Returns: + [dict]: Modified dictionary with acquisition parameters metadata. + """ + + def _hard_coded_conversion(data): + s = '[' + for value in data: + s += f'{value:.4f}, ' + s = s[:-2] + s += ']' + + return s + + readable_dict = acquisition_params_dict + readable_dict['patterns'] = str(readable_dict['patterns']) + readable_dict['patterns_wp'] = str(readable_dict['patterns_wp']) + + readable_dict['wavelengths'] = _hard_coded_conversion( + readable_dict['wavelengths']) + + readable_dict['timestamps'] = _hard_coded_conversion( + readable_dict['timestamps']) + + readable_dict['measurement_time'] = _hard_coded_conversion( + readable_dict['measurement_time']) + + readable_dict['mask_index'] = _hard_coded_conversion( + readable_dict['mask_index']) + + readable_dict['x_mask_coord'] = _hard_coded_conversion( + readable_dict['x_mask_coord']) + + readable_dict['y_mask_coord'] = _hard_coded_conversion( + readable_dict['y_mask_coord']) + + return readable_dict + + + def update_timings(self, timestamps: np.ndarray, + measurement_time: np.ndarray): + """Updates acquisition timings. 
+ + Args: + timestamps (ndarray): + Array of `float` type elapsed time between each measurement made + by the spectrometer based on its internal clock. Units in + milliseconds. + measurement_time (ndarray): + Array of `float` type elapsed times between each callback. Units + in milliseconds. + """ + self.mean_callback_acquisition_time_ms = np.mean(measurement_time) + self.total_callback_acquisition_time_s = np.sum(measurement_time) / 1000 + self.mean_spectrometer_acquisition_time_ms = np.mean( + timestamps, dtype=float) + self.total_spectrometer_acquisition_time_s = np.sum(timestamps) / 1000 + + self.timestamps = timestamps + self.measurement_time = measurement_time + + + +@dataclass_json +@dataclass +class SpectrometerParameters: + """Class containing spectrometer configurations. + + Further information: AvaSpec Library Manual (Version 9.10.2.0). + + Attributes: + high_resolution (bool): + True if 16-bit AD Converter is used. False if 14-bit ADC is used. + initial_available_pixels (int): + Number of pixels available in spectrometer. + detector (str): + Name of the light detector. + firmware_version (str, optional): + Spectrometer firmware version. + dll_version (str, optional): + Spectrometer dll version. + fpga_version (str, optional): + Internal FPGA version. + integration_delay_ns (int, optional): + Parameter used to start the integration time not immediately after + the measurement request (or on an external hardware trigger), but + after a specified delay. Unit is based on internal FPGA clock cycle. + integration_time_ms (float, optional): + Spectrometer exposure time during one scan in miliseconds. + start_pixel (int, optional): + Initial pixel data received from spectrometer. + stop_pixel (int, optional): + Last pixel data received from spectrometer. + averages (int, optional): + Number of averages in a single measurement. + dark_correction_enable (bool, optional): + Enable dynamic dark current correction. 
+        dark_correction_forget_percentage (int, optional):
+            Percentage of the new dark value pixels that has to be used. E.g.,
+            a percentage of 100 means only new dark values are used. A
+            percentage of 10 means that 10 percent of the new dark values is
+            used and 90 percent of the old values is used for drift correction.
+        smooth_pixels (int, optional):
+            Number of neighbor pixels used for smoothing. The maximum has to be
+            smaller than half the selected pixel range because pixels on both
+            the left and the right are used.
+        smooth_model (int, optional):
+            Smoothing model. Currently a single model is supported, in which
+            the spectral data is averaged over a number of pixels on the
+            detector array. For example, if the smoothpix parameter is set to
+            2, the spectral data for all pixels x(n) on the detector array
+            will be averaged with their neighbor pixels x(n-2), x(n-1), x(n+1)
+            and x(n+2).
+        saturation_detection (bool, optional):
+            Enable detection of saturation/overexposure in pixels.
+        trigger_mode (int, optional):
+            Trigger mode (0 = Software, 1 = Hardware, 2 = Single Scan).
+        trigger_source (int, optional):
+            Trigger source (0 = external trigger, 1 = sync input).
+        trigger_source_type (int, optional):
+            Trigger source type (0 = edge trigger, 1 = level trigger).
+        store_to_ram (int, optional):
+            Define how many scans can be stored in RAM. In DynamicRAM mode, can
+            be set to 0 to indicate infinite measurements.
+        configs (InitVar[MeasConfigType]):
+            Initialization object containing data to create SpectrometerData
+            object. Unnecessary if reconstructing the object from a JSON file.
+            Default is None.
+        version_info (InitVar[Tuple[str]]):
+            Initialization variable used for receiving firmware, dll and FPGA
+            version data. Unnecessary if reconstructing the object from a JSON
+            file.
+        class_description (str):
+            Class description used to improve readability when dumped to a
+            JSON file. Default is 'Spectrometer parameters'.
+ """ + + high_resolution: bool + initial_available_pixels: int + detector: str + firmware_version: Optional[str] = None + dll_version: Optional[str] = None + fpga_version: Optional[str] = None + + integration_delay_ns: Optional[int] = None + integration_time_ms: Optional[float] = None + + start_pixel: Optional[int] = None + stop_pixel: Optional[int] = None + averages: Optional[int] = None + + dark_correction_enable: Optional[bool] = None + dark_correction_forget_percentage: Optional[int] = None + + smooth_pixels: Optional[int] = None + smooth_model: Optional[int] = None + + saturation_detection: Optional[bool] = None + + trigger_mode: Optional[int] = None + trigger_source: Optional[int] = None + trigger_source_type: Optional[int] = None + + store_to_ram: Optional[int] = None + + configs: InitVar[MeasConfigType] = None + version_info: InitVar[Tuple[str]] = None + + class_description: str = 'Spectrometer parameters' + + + def __post_init__(self, configs: Optional[MeasConfigType] = None, + version_info: Optional[Tuple[str, str, str]] = None): + """Post initialization of attributes. + + Receives the data sent to spectrometer and some version data and unwraps + everything to set the majority of SpectrometerParameters's attributes. + During reconstruction from JSON, arguments of type InitVar (configs and + version_info) are set to None and the function does nothing, letting + initialization for the standard __init__ function. + + Args: + configs (MeasConfigType, optional): + Object containing configurations sent to spectrometer. + Defaults to None. + version_info (Tuple[str, str, str], optional): + Tuple containing firmware, dll and FPGA version data. Defaults + to None. 
+ """ + if configs is None or version_info is None: + pass + + else: + self.fpga_version, self.firmware_version, self.dll_version = ( + version_info) + + self.integration_delay_ns = configs.m_IntegrationDelay + self.integration_time_ms = configs.m_IntegrationTime + + self.start_pixel = configs.m_StartPixel + self.stop_pixel = configs.m_StopPixel + self.averages = configs.m_NrAverages + + self.dark_correction_enable = configs.m_CorDynDark.m_Enable + self.dark_correction_forget_percentage = ( + configs.m_CorDynDark.m_ForgetPercentage) + + self.smooth_pixels = configs.m_Smoothing.m_SmoothPix + self.smooth_model = configs.m_Smoothing.m_SmoothModel + + self.saturation_detection = configs.m_SaturationDetection + + self.trigger_mode = configs.m_Trigger.m_Mode + self.trigger_source = configs.m_Trigger.m_Source + self.trigger_source_type = configs.m_Trigger.m_SourceType + + self.store_to_ram = configs.m_Control.m_StoreToRam + + +@dataclass_json +@dataclass +class DMDParameters: + """Class containing DMD configurations and status. + + Further information: ALP-4.2 API Description (14/04/2020). + + Attributes: + add_illumination_time_us (int): + Extra time in microseconds to account for the spectrometer's + "dead time". + initial_memory (int): + Initial memory available before sending patterns to DMD. + dark_phase_time_us (int, optional): + Time in microseconds taken by the DMD mirrors to completely tilt. + Minimum time for XGA type DMD is 44 us. + illumination_time_us (int, optional): + Duration of the display of one pattern in a DMD sequence. Units in + microseconds. + picture_time_us (int, optional): + Time between the start of two consecutive pictures (i.e. this + parameter defines the image display rate). Units in microseconds. + synch_pulse_width_us (int, optional): + Duration of DMD's frame synch output pulse. Units in microseconds. 
+        synch_pulse_delay (int, optional):
+            Time in microseconds between the start of the frame synch output
+            pulse and the start of the pattern display (in master mode).
+        device_number (int, optional):
+            Serial number of the ALP device.
+        ALP_version (int, optional):
+            Version number of the ALP device.
+        id (int, optional):
+            ALP device identifier for a DMD provided by the API.
+        synch_polarity (str, optional):
+            Frame synch output signal polarity: 'High' or 'Low'.
+        trigger_edge (str, optional):
+            Trigger input signal slope. Can be a 'Falling' or 'Rising' edge.
+        type (str, optional):
+            Digital light processing (DLP) chip present in DMD.
+        usb_connection (bool, optional):
+            True if USB connection is ok.
+        ddc_fpga_temperature (float, optional):
+            Temperature of the DDC FPGA (IC4) at DMD connection. Units in °C.
+        apps_fpga_temperature (float, optional):
+            Temperature of the Applications FPGA (IC3) at DMD connection.
+            Units in °C.
+        pcb_temperature (float, optional):
+            Internal temperature of the temperature sensor IC (IC2) at DMD
+            connection. Units in °C.
+        display_height (int, optional):
+            DMD display height in pixels.
+        display_width (int, optional):
+            DMD display width in pixels.
+        patterns (int, optional):
+            Number of patterns uploaded to DMD.
+        unused_memory (int, optional):
+            Memory available after sending patterns to DMD.
+        bitplanes (int, optional):
+            Bit depth of the patterns to be displayed. Values supported from 1
+            to 8.
+        DMD (InitVar[ALP4.ALP4], optional):
+            Initialization DMD object. Can be used to automatically fill most
+            of the DMDParameters' attributes. Unnecessary if reconstructing
+            the object from a JSON file. Default is None.
+        class_description (str):
+            Class description used to improve readability when dumped to a
+            JSON file. Default is 'DMD parameters'.
+ """ + + add_illumination_time_us: int + initial_memory: int + + dark_phase_time_us: Optional[int] = None + illumination_time_us: Optional[int] = None + picture_time_us: Optional[int] = None + synch_pulse_width_us: Optional[int] = None + synch_pulse_delay: Optional[int] = None + + device_number: Optional[int] = None + ALP_version: Optional[int] = None + id: Optional[int] = None + + synch_polarity: Optional[str] = None + trigger_edge: Optional[str] = None + + # synch_polarity_OUT1: Optional[str] = None + # synch_period_OUT1: Optional[str] = None + # synch_gate_OUT1: Optional[str] = None + + type: Optional[str] = None + usb_connection: Optional[bool] = None + + ddc_fpga_temperature: Optional[float] = None + apps_fpga_temperature: Optional[float] = None + pcb_temperature: Optional[float] = None + + display_height: Optional[int] = None + display_width: Optional[int] = None + + patterns: Optional[int] = None + patterns_wp: Optional[int] = None + unused_memory: Optional[int] = None + bitplanes: Optional[int] = None + + DMD: InitVar[ALP4.ALP4] = None + + class_description: str = 'DMD parameters' + + + def __post_init__(self, DMD: Optional[ALP4.ALP4] = None): + """ Post initialization of attributes. + + Receives a DMD object and directly asks it for its configurations and + status, then sets the majority of SpectrometerParameters's attributes. + During reconstruction from JSON, DMD is set to None and the function + does nothing, letting initialization for the standard __init__ function. + + Args: + DMD (ALP4.ALP4, optional): + Connected DMD. Defaults to None. 
+ """ + if DMD == None: + pass + + else: + self.device_number = DMD.DevInquire(ALP4.ALP_DEVICE_NUMBER) + self.ALP_version = DMD.DevInquire(ALP4.ALP_VERSION) + self.id = DMD.ALP_ID.value + + polarity = DMD.DevInquire(ALP4.ALP_SYNCH_POLARITY) + if polarity == 2006: + self.synch_polarity = 'High' + elif polarity == 2007: + self.synch_polarity = 'Low' + + edge = DMD.DevInquire(ALP4.ALP_TRIGGER_EDGE) + if edge == 2008: + self.trigger_edge = 'Falling' + elif edge == 2009: + self.trigger_edge = 'Rising' + + # synch_polarity_OUT1 = + + self.type = DMDTypes(DMD.DevInquire(ALP4.ALP_DEV_DMDTYPE)) + + if DMD.DevInquire(ALP4.ALP_USB_CONNECTION) == 0: + self.usb_connection = True + else: + self.usb_connection = False + + # Temperatures converted to °C + self.ddc_fpga_temperature = DMD.DevInquire( + ALP4.ALP_DDC_FPGA_TEMPERATURE)/256 + self.apps_fpga_temperature = DMD.DevInquire( + ALP4.ALP_APPS_FPGA_TEMPERATURE)/256 + self.pcb_temperature = DMD.DevInquire( + ALP4.ALP_PCB_TEMPERATURE)/256 + + self.display_width = DMD.nSizeX + self.display_height = DMD.nSizeY + + + def update_memory(self, unused_memory: int): + + self.unused_memory = unused_memory + self.patterns = self.initial_memory - unused_memory + + + def update_sequence_parameters(self, add_illumination_time, + DMD: Optional[ALP4.ALP4] = None): + + self.bitplanes = DMD.SeqInquire(ALP4.ALP_BITPLANES) + self.illumination_time_us = DMD.SeqInquire(ALP4.ALP_ILLUMINATE_TIME) + self.picture_time_us = DMD.SeqInquire(ALP4.ALP_PICTURE_TIME) + self.dark_phase_time_us = self.picture_time_us - self.illumination_time_us + self.synch_pulse_width_us = DMD.SeqInquire(ALP4.ALP_SYNCH_PULSEWIDTH) + self.synch_pulse_delay = DMD.SeqInquire(ALP4.ALP_SYNCH_DELAY) + self.add_illumination_time_us = add_illumination_time + + + + + +def read_metadata(file_path: str) -> Tuple[MetaData, + AcquisitionParameters, + SpectrometerParameters, + DMDParameters]: + """Reads metadata of a previous acquisition from JSON file. 
+
+    Args:
+        file_path (str):
+            Name of JSON file containing all metadata.
+
+    Returns:
+        Tuple[MetaData, AcquisitionParameters, SpectrometerParameters,
+        DMDParameters]:
+            saved_metadata (MetaData):
+                Metadata object read from JSON.
+            saved_acquisition_params (AcquisitionParameters):
+                AcquisitionParameters object read from JSON.
+            saved_spectrometer_params (SpectrometerParameters):
+                SpectrometerParameters object read from JSON.
+            saved_dmd_params (DMDParameters):
+                DMDParameters object read from JSON.
+    """
+
+    with open(file_path, 'r') as file:
+        data = json.load(file)
+
+    for obj in data:
+        if obj['class_description'] == 'Metadata':
+            saved_metadata = MetaData.from_dict(obj)
+        elif obj['class_description'] == 'Acquisition parameters':
+            saved_acquisition_params = AcquisitionParameters.from_dict(obj)
+            saved_acquisition_params.undo_readable_pattern_order()
+        elif obj['class_description'] == 'Spectrometer parameters':
+            saved_spectrometer_params = SpectrometerParameters.from_dict(obj)
+        elif obj['class_description'] == 'DMD parameters':
+            saved_dmd_params = DMDParameters.from_dict(obj)
+
+    return (saved_metadata, saved_acquisition_params,
+            saved_spectrometer_params, saved_dmd_params)
+
+def read_metadata_2arms(file_path: str) -> Tuple[MetaData,
+                                                 AcquisitionParameters,
+                                                 SpectrometerParameters,
+                                                 DMDParameters,
+                                                 CAM]:
+    """Reads metadata of a previous acquisition from JSON file.
+
+    Args:
+        file_path (str):
+            Name of JSON file containing all metadata.
+
+    Returns:
+        Tuple[MetaData, AcquisitionParameters, SpectrometerParameters,
+        DMDParameters, CAM]:
+            saved_metadata (MetaData):
+                Metadata object read from JSON.
+            saved_acquisition_params (AcquisitionParameters):
+                AcquisitionParameters object read from JSON.
+            saved_spectrometer_params (SpectrometerParameters):
+                SpectrometerParameters object read from JSON.
+            saved_dmd_params (DMDParameters):
+                DMDParameters object read from JSON.
+            saved_cam_params (CAM):
+                CAM object read from JSON.
+    """
+
+    with open(file_path, 'r') as file:
+        data = json.load(file)
+
+    for obj in data:
+        if obj['class_description'] == 'Metadata':
+            saved_metadata = MetaData.from_dict(obj)
+        elif obj['class_description'] == 'Acquisition parameters':
+            saved_acquisition_params = AcquisitionParameters.from_dict(obj)
+            saved_acquisition_params.undo_readable_pattern_order()
+        elif obj['class_description'] == 'Spectrometer parameters':
+            saved_spectrometer_params = SpectrometerParameters.from_dict(obj)
+        elif obj['class_description'] == 'DMD parameters':
+            saved_dmd_params = DMDParameters.from_dict(obj)
+        elif obj['class_description'] == 'IDS camera configuration':
+            saved_cam_params = CAM.from_dict(obj)
+            saved_cam_params.undo_readable_class_CAM()
+
+    return (saved_metadata, saved_acquisition_params,
+            saved_spectrometer_params, saved_dmd_params, saved_cam_params)
+
+def save_metadata(metadata: MetaData,
+                  DMD_params: DMDParameters,
+                  spectrometer_params: SpectrometerParameters,
+                  acquisition_parameters: AcquisitionParameters) -> None:
+    """Saves metadata to JSON file.
+
+    Args:
+        metadata (MetaData):
+            Metadata concerning the experiment, paths, file inputs and file
+            outputs.
+        DMD_params (DMDParameters):
+            Class containing DMD configurations and status.
+        spectrometer_params (SpectrometerParameters):
+            Object containing spectrometer configurations.
+        acquisition_parameters (AcquisitionParameters):
+            Object containing acquisition specifications and timing results.
+ """ + + path = Path(metadata.output_directory) + with open( + path / f'{metadata.experiment_name}_metadata.json', + 'w', encoding='utf8') as output: + + output_params = [ + metadata.to_dict(), + DMD_params.to_dict(), + spectrometer_params.to_dict(), + AcquisitionParameters.readable_pattern_order( + acquisition_parameters.to_dict())] + + json.dump(output_params,output,ensure_ascii=False,indent=4) + +def save_metadata_2arms(metadata: MetaData, + DMD_params: DMDParameters, + spectrometer_params: SpectrometerParameters, + camPar : CAM, + acquisition_parameters: AcquisitionParameters) -> None: + """Saves metadata to JSON file. + + Args: + metadata (MetaData): + Metadata concerning the experiment, paths, file inputs and file + outputs. + DMD_params (DMDParameters): + Class containing DMD configurations and status. + spectrometer_params (SpectrometerParameters): + Object containing spectrometer configurations. + acquisition_parameters (AcquisitionParameters): + Object containing acquisition specifications and timing results. 
+ """ + + path = Path(metadata.output_directory) + with open( + path / f'{metadata.experiment_name}_metadata.json', + 'w', encoding='utf8') as output: + + output_params = [ + metadata.to_dict(), + DMD_params.to_dict(), + spectrometer_params.to_dict(), + CAM.readable_class_CAM(camPar.to_dict()), + AcquisitionParameters.readable_pattern_order(acquisition_parameters.to_dict())] + + json.dump(output_params, output, ensure_ascii=False, indent=4)#, default=convert) + + # with open(path / f'{metadata.experiment_name}_metadata_cam.pkl', 'wb') as f: + # pickle.dump(camPar.__dict__, f) + +@dataclass_json +@dataclass +class func_path: + def __init__(self, data_folder_name, data_name, ask_overwrite=False): + if not os.path.exists('../data/' + data_folder_name): + os.makedirs('../data/' + data_folder_name) + + if not os.path.exists('../data/' + data_folder_name + '/' + data_name): + os.makedirs('../data/' + data_folder_name + '/' + data_name) + aborted = False + elif ask_overwrite == True: + res = input('Acquisition already exists, overwrite it ?[y/n]') + if res == 'n': + aborted = True + else: + aborted = False + else: + aborted = True + + self.aborted = aborted + self.subfolder_path = '../data/' + data_folder_name + '/' + data_name + self.overview_path = self.subfolder_path + '/overview' + if not os.path.exists(self.overview_path): + os.makedirs(self.overview_path) + + self.data_name = data_name + self.data_path = self.subfolder_path + '/' + data_name + self.had_reco_path = self.data_path + '_had_reco.npz' + self.fig_had_reco_path = self.overview_path + '/' + data_name + self.pathIDSsnapshot = Path(self.data_path + '_IDScam_before_acq.npy') + self.pathIDSsnapshot_overview = self.overview_path + '/' + data_name + '_IDScam_before_acq.png' + self.nn_reco_path = self.data_path + '_nn_reco.npz' + self.fig_nn_reco_path = self.overview_path + '/' + data_name + + diff --git a/spas/metadata_SPIM1D.py b/spas/metadata_SPIM1D.py new file mode 100644 index 0000000..2e8b966 --- /dev/null 
+++ b/spas/metadata_SPIM1D.py
@@ -0,0 +1,921 @@
+# -*- coding: utf-8 -*-
+"""Metadata classes and utilities.
+
+Metadata classes to keep and save all relevant data during an acquisition.
+Utility functions to recreate objects from JSON files, save them to JSON and to
+improve readability.
+"""
+
+__author__ = 'Guilherme Beneti Martins'
+
+import json
+from datetime import datetime
+from enum import IntEnum
+from dataclasses import dataclass, InitVar, field
+from typing import Optional, Union, List, Tuple
+from pathlib import Path
+import os
+from dataclasses_json import dataclass_json
+import numpy as np
+import ctypes as ct
+import pickle
+##### DLL for the DMD
+try:
+    import ALP4
+except ImportError:  # in case the DLL of the DMD is not installed
+    class ALP4:
+        pass
+    setattr(ALP4, 'ALP4', None)
+    print('DLL of the DMD not installed')
+##### DLL for the spectrometer Avantes
+try:
+    from msl.equipment.resources.avantes import MeasConfigType
+except ImportError:  # in case the DLL of the spectrometer is not installed
+    class MeasConfigType:
+        pass
+    MeasConfigType = None
+    print('DLL of the spectrometer not installed !!!')
+
+##### DLL for the camera
+try:
+    from pyueye import ueye
+    dll_pyueye_installed = 1
+except ImportError:
+    dll_pyueye_installed = 0
+    print('DLL of the cam not installed !!')
+
+
+@dataclass_json
+@dataclass
+class MetaData:
+    """Class containing overall acquisition parameters and description.
+
+    Metadata concerning the experiment, paths, file inputs and file outputs.
+    This class is adapted to be reconstructed from a JSON file.
+
+    Attributes:
+        output_directory (Union[str, Path], optional):
+            Directory where multiple related acquisitions will be stored.
+        pattern_order_source (Union[str, Path], optional):
+            File where the order of patterns to be sent to DMD is specified. It
+            can be a text file containing a list of pattern indices or a numpy
+            file containing a covariance matrix from which the pattern order is
+            calculated.
+        pattern_source (Union[str, Path], optional):
+            Pattern source folder.
+        pattern_prefix (str):
+            Prefix used in pattern naming.
+        experiment_name (str):
+            Prefix of all files related to a single acquisition. Files will
+            appear with the following string pattern:
+            experiment_name + '_' + filename.
+        light_source (str):
+            Light source used to illuminate an object during acquisition.
+        object (str):
+            Object imaged during acquisition.
+        filter (str):
+            Light filter used.
+        description (str):
+            Acquisition experiment description.
+        date (str, optional):
+            Acquisition date. Automatically set when object is created. Default
+            is None.
+        time (str, optional):
+            Time when metadata object is created. Set automatically by
+            __post_init__(). Default is None.
+        class_description (str):
+            Class description used to improve readability when dumped to a
+            JSON file. Default is 'Metadata'.
+    """
+
+    pattern_prefix: str
+    experiment_name: str
+
+    light_source: str
+    object: str
+    filter: str
+    description: str
+
+    output_directory: Union[str, Path]
+    pattern_order_source: Union[str, Path]
+    pattern_source: Union[str, Path]
+
+    date: Optional[str] = None
+    time: Optional[str] = None
+
+    class_description: str = 'Metadata'
+
+    def __post_init__(self):
+        """Sets time and date of object creation and deals with paths."""
+
+        today = datetime.today()
+        self.date = '--/--/----' #today.strftime('%d/%m/%Y')
+        self.time = today.strftime('%I:%M:%S %p')
+
+        # If parameter is str, turn it into Path
+        if isinstance(self.output_directory, str):
+            self.output_directory = Path(self.output_directory)
+
+        # If parameter is Path or was turned into a Path, resolve it and get the
+        # str format.
+ if issubclass(self.output_directory.__class__, Path): + self.output_directory = str(self.output_directory.resolve()) + + + if isinstance(self.pattern_order_source, str): + self.pattern_order_source = Path(self.pattern_order_source) + + if issubclass(self.pattern_order_source.__class__, Path): + self.pattern_order_source = str( + self.pattern_order_source.resolve()) + + + if isinstance(self.pattern_source, str): + self.pattern_source = Path(self.pattern_source) + + if issubclass(self.pattern_source.__class__, Path): + self.pattern_source = str(self.pattern_source.resolve()) + + +@dataclass_json +@dataclass +class CAM: + """Class containing IDS camera configurations. + + Further information: https://en.ids-imaging.com/manuals/ids-software-suite/ueye-manual/4.95/en/c_programmierung.html. + + Attributes: + hCam (ueye.c_uint): Handle of the camera. + sInfo (ueye.SENSORINFO):sensor information : [SensorID [c_ushort] = 566; + strSensorName [c_char_Array_32] = b'UI388xCP-M'; + nColorMode [c_char] = b'\x01'; + nMaxWidth [c_uint] = 3088; + nMaxHeight [c_uint] = 2076; + bMasterGain [c_int] = 1; + bRGain [c_int] = 0; + bGGain [c_int] = 0; + bBGain [c_int] = 0; + bGlobShutter [c_int] = 0; + wPixelSize [c_ushort] = 240; + nUpperLeftBayerPixel [c_char] = b'\x00'; + Reserved]. + cInfo (ueye.BOARDINFO):Camera information: [SerNo [c_char_Array_12] = b'4103219888'; + ID [c_char_Array_20] = b'IDS GmbH'; + Version [c_char_Array_10] = b''; + Date [c_char_Array_12] = b'30.11.2017'; + Select [c_ubyte] = 1; + Type [c_ubyte] = 100; + Reserved [c_char_Array_8] = b'';] + nBitsPerPixel (ueye.c_int): number of bits per pixel (8 for monochrome, 24 for color). + m_nColorMode (ueye.c_int): color mode : Y8/RGB16/RGB24/REG32. + bytes_per_pixel (int): bytes_per_pixel = int(nBitsPerPixel / 8). 
+        rectAOI (ueye.IS_RECT()): rectangle of the Area Of Interest (AOI): s32X [c_int] = 0;
+            s32Y [c_int] = 0;
+            s32Width [c_int] = 3088;
+            s32Height [c_int] = 2076;
+        pcImageMemory (ueye.c_mem_p()): memory allocation.
+        MemID (ueye.int()): memory identifier.
+        pitch (ueye.INT()): ???.
+        fps (float): set frames per second.
+        gain (int): Set gain between [0 - 100].
+        gainBoost (str): Activate gain boosting ("ON") or deactivate ("OFF").
+        gamma (float): Set gamma between [1 - 2.5] to change the image contrast.
+        exposureTime (float): Set the exposure time between [0.032 - 56.221].
+        blackLevel (int): Set the black level between [0 - 255] to set an offset in the image. It is advised to put 5 for noise measurement.
+        camActivated (bool): indicates if the camera is ready to acquire (1: yes, 0: no).
+        pixelClock (int): the pixel clock; three values possible: [118, 237, 474] (MHz).
+        bandwidth (float): the bandwidth (in MByte/s) is an approximate value which is calculated based on the pixel clock.
+        Memory (bool): a boolean to know if the memory inside the camera is busy [1] or free [0].
+        Exit (int): if Exit = 2 => execute the is_ExitCamera function (disables the hCam camera handle and releases the memory) | if Exit = 0 => allow to init cam, after that, Exit = 1.
+        vidFormat (str): save video in the format avi or bin (for binary).
+        gate_period (int): a second TTL is sent by the DMD to trigger the camera, based on the first TTL that triggers the spectrometer. Camera trigger period = gate_period*(spectrometer trigger period).
+        trigger_mode (str): hard or soft.
+        avi (ueye.int): A pointer that returns the instance ID which is needed for calling the other uEye AVI functions.
+        punFileID (ueye.c_int): a pointer in which the instance ID is returned. This ID is needed for calling other functions.
+        timeout (int): a time after which the camera stops waiting for a TTL.
+        time_array (List[float]): the time array saved after each frame received on the camera.
+        int_time_spect (float): is equal to the integration time of the spectrometer; this value is needed because of the rolling shutter of the monochrome IDS camera.
+        black_pattern_num (int): is the number inside the image name of the black pattern (for the hyperspectral arm, or white pattern for the camera arm) to be inserted between the Hadamard patterns.
+        insert_patterns (int): 0 => no insertion / 1 => insert white patterns for the camera.
+        acq_mode (str): mode of the acquisition => 'video' or 'snapshot' mode.
+    """
+    if dll_pyueye_installed:
+        hCam: Optional[ueye.c_uint] = None
+        sInfo: Optional[ueye.SENSORINFO] = None
+        cInfo: Optional[ueye.BOARDINFO] = None
+        nBitsPerPixel: Optional[ueye.c_int] = None
+        m_nColorMode: Optional[ueye.c_int] = None
+        bytes_per_pixel: Optional[int] = None
+        rectAOI: Optional[ueye.IS_RECT] = None
+        pcImageMemory: Optional[ueye.c_mem_p] = None
+        MemID: Optional[ueye.c_int] = None
+        pitch: Optional[ueye.c_int] = None
+        fps: Optional[float] = None
+        gain: Optional[int] = None
+        gainBoost: Optional[str] = None
+        gamma: Optional[float] = None
+        exposureTime: Optional[float] = None
+        blackLevel: Optional[int] = None
+        camActivated: Optional[bool] = None
+        pixelClock: Optional[int] = None
+        bandwidth: Optional[float] = None
+        Memory: Optional[bool] = None
+        Exit: Optional[int] = None
+        vidFormat: Optional[str] = None
+        gate_period: Optional[int] = None
+        trigger_mode: Optional[str] = None
+        avi: Optional[ueye.int] = None
+        punFileID: Optional[ueye.c_int] = None
+        timeout: Optional[int] = None
+        time_array: Optional[Union[List[float], str]] = field(default=None,
+                                                              repr=False)
+        int_time_spect: Optional[float] = None
+        black_pattern_num: Optional[int] = None
+        insert_patterns: Optional[int] = None
+        acq_mode: Optional[str] = None
+
+    class_description: str = 'IDS camera configuration'
+
+    def undo_readable_class_CAM(self) -> None:
+        """Changes the time_array attribute from `str` to an array of `float`."""
+
+        def to_float(str_arr):
+            arr = []
+            for s in str_arr:
+                try:
+                    num = float(s)
+                    arr.append(num)
+                except ValueError:
+                    pass
+            return arr
+
+        if self.time_array:
+            self.time_array = (
+                self.time_array.strip('[').strip(']').split(', '))
+            self.time_array = to_float(self.time_array)
+            self.time_array = np.asarray(self.time_array)
+
+    @staticmethod
+    def readable_class_CAM(cam_params_dict: dict) -> dict:
+        """Makes the CAM dictionary JSON-serializable.
+
+        Converts the c_type structures (sInfo, cInfo and rectAOI) into nested
+        dicts, changes `bytes` items into `str`, replaces c_types items by
+        their value and turns the time_array list into a string.
+        """
+
+        readable_cam_dict = {}
+        for item, value in cam_params_dict.items():
+            if item in ('sInfo', 'cInfo', 'rectAOI'):
+                readable_cam_dict[item] = dict()
+                try:
+                    for sub_item in value._fields_:
+                        try:
+                            att = getattr(value, sub_item[0]).value
+                        except AttributeError:
+                            att = getattr(value, sub_item[0])
+
+                        if isinstance(att, bytes):
+                            att = str(att)
+
+                        readable_cam_dict[item][sub_item[0]] = att
+                except AttributeError:
+                    try:
+                        for sub_item in value:
+                            att = value[sub_item]
+
+                            if isinstance(att, bytes):
+                                att = str(att)
+
+                            readable_cam_dict[item][sub_item] = att
+                    except Exception:
+                        print('warning, impossible to read the subitems of '
+                              'cam_params_dict[' + item + ']')
+
+            elif 'pyueye' in str(type(value)):
+                try:
+                    readable_cam_dict[item] = value.value
+                except AttributeError:
+                    readable_cam_dict[item] = value
+            elif item == 'time_array':
+                readable_cam_dict[item] = str(value)
+            else:
+                readable_cam_dict[item] = value
+
+        return readable_cam_dict
+
+
+@dataclass_json
+@dataclass
+class AcquisitionParameters:
+    """Class containing acquisition specifications and timing results.
+
+    This class is adapted to be reconstructed from a JSON file.
+
+    Attributes:
+        pattern_compression (float):
+            Percentage of total available patterns to be present in an
+            acquisition sequence.
+        pattern_dimension_x (int):
+            Length of reconstructed image that defines pattern length.
+        pattern_dimension_y (int):
+            Width of reconstructed image that defines pattern width.
+        zoom (int):
+            Numerical zoom of the patterns.
+        xw_offset (int):
+            Offset of the pattern in the DMD for zoom > 1 in the width (x)
+            direction.
+        yh_offset (int):
+            Offset of the pattern in the DMD for zoom > 1 in the height (y)
+            direction.
+        mask_index (Union[np.ndarray, str], optional):
+            Array of `int` type corresponding to the indices of the mask
+            vector where the value is equal to 1.
+        x_mask_coord (Union[np.ndarray, str], optional):
+            Coordinates of the mask in the x direction; x[0] and x[1] are the
+            first and last points respectively.
+        y_mask_coord (Union[np.ndarray, str], optional):
+            Coordinates of the mask in the y direction; y[0] and y[1] are the
+            first and last points respectively.
+        pattern_amount (int, optional):
+            Quantity of patterns sent to DMD for an acquisition. This value is
+            calculated by an external function. Default is None.
+        acquired_spectra (int, optional):
+            Amount of spectra actually read from the spectrometer. This value
+            is calculated by an external function. Default is None.
+        mean_callback_acquisition_time_ms (float, optional):
+            Mean time between 2 callback executions during an acquisition.
+            This value is calculated by an external function. Default is None.
+        total_callback_acquisition_time_s (float, optional):
+            Total time of callback executions during an acquisition. This
+            value is calculated by an external function. Default is None.
+        mean_spectrometer_acquisition_time_ms (float, optional):
+            Mean time between 2 spectrometer measurements during an
+            acquisition based on its own internal clock. This value is
+            calculated by an external function. Default is None.
+        total_spectrometer_acquisition_time_s (float, optional):
+            Total time of spectrometer measurements during an acquisition
+            based on its own internal clock. This value is calculated by an
+            external function. Default is None.
+        saturation_detected (bool, optional):
+            Boolean indicating if saturation was detected during acquisition.
+            Default is None.
+        patterns (Union[List[int], str], optional):
+            List of `int` or `str` containing all patterns sent to the DMD for
+            an acquisition sequence. This value is set by an external function
+            and its type can be modified by multiple functions during object
+            creation, manipulation, when dumping to a JSON file or when
+            reconstructing an AcquisitionParameters object from a JSON file.
+            It is intended to be of type List[int] most of the execution time.
+            Default is None.
+        wavelengths (Union[np.ndarray, str], optional):
+            Array of `float` type corresponding to the wavelengths associated
+            with spectrometer's start and stop pixels.
+        timestamps (Union[List[float], str], optional):
+            List of `float` type elapsed time between each measurement made by
+            the spectrometer based on its internal clock. Units in
+            milliseconds. Default is None.
+        measurement_time (Union[List[float], str], optional):
+            List of `float` type elapsed times between each callback. Units in
+            milliseconds. Default is None.
+        class_description (str):
+            Class description used to improve readability when dumped to JSON
+            file. Default is 'Acquisition parameters'.
+ """ + + pattern_compression: float + pattern_dimension_x: int + pattern_dimension_y: int + zoom: Optional[int] = field(default=None) + xw_offset: Optional[int] = field(default=None) + yh_offset: Optional[int] = field(default=None) + mask_index: Optional[Union[np.ndarray, str]] = field(default=None, + repr=False) + x_mask_coord: Optional[Union[np.ndarray, str]] = field(default=None, + repr=False) + y_mask_coord: Optional[Union[np.ndarray, str]] = field(default=None, + repr=False) + + pattern_amount: Optional[int] = None + acquired_spectra: Optional[int] = None + + mean_callback_acquisition_time_ms: Optional[float] = None + total_callback_acquisition_time_s: Optional[float] = None + mean_spectrometer_acquisition_time_ms: Optional[float] = None + total_spectrometer_acquisition_time_s: Optional[float] = None + + saturation_detected: Optional[bool] = None + + patterns: Optional[Union[List[int], str]] = field(default=None, repr=False) + patterns_wp: Optional[Union[List[int], str]] = field(default=None, repr=False) + wavelengths: Optional[Union[np.ndarray, str]] = field(default=None, + repr=False) + timestamps: Optional[Union[List[float], str]] = field(default=None, + repr=False) + measurement_time: Optional[Union[List[float], str]] = field(default=None, + repr=False) + + class_description: str = 'Acquisition parameters' + + + def undo_readable_pattern_order(self) -> None: + """Changes the patterns attribute from `str` to `List` of `int`. + + When reconstructing an AcquisitionParameters object from a JSON file, + this method turns the patterns, wavelengths, timestamps and + measurement_time attributes from a string to a list of integers + containing the pattern indices used in that acquisition. 
+        """
+
+        def to_float(str_arr):
+            arr = []
+            for s in str_arr:
+                try:
+                    arr.append(float(s))
+                except ValueError:
+                    pass
+            return arr
+
+        self.patterns = self.patterns.strip('[').strip(']').split(', ')
+        self.patterns = [int(s) for s in self.patterns if s.isdigit()]
+        try:
+            self.patterns_wp = (
+                self.patterns_wp.strip('[').strip(']').split(', '))
+            self.patterns_wp = [int(s) for s in self.patterns_wp if s.isdigit()]
+        except AttributeError:
+            print("patterns_wp has no attribute 'strip'")
+
+        if self.wavelengths:
+            self.wavelengths = (
+                self.wavelengths.strip('[').strip(']').split(', '))
+            self.wavelengths = to_float(self.wavelengths)
+            self.wavelengths = np.asarray(self.wavelengths)
+        else:
+            print('wavelengths not present in metadata.'
+                  ' Reading data in legacy mode.')
+
+        if self.timestamps:
+            self.timestamps = self.timestamps.strip('[').strip(']').split(', ')
+            self.timestamps = to_float(self.timestamps)
+        else:
+            print('timestamps not present in metadata.'
+                  ' Reading data in legacy mode.')
+
+        if self.measurement_time:
+            self.measurement_time = (
+                self.measurement_time.strip('[').strip(']').split(', '))
+            self.measurement_time = to_float(self.measurement_time)
+        else:
+            print('measurement_time not present in metadata.'
+                  ' Reading data in legacy mode.')
+
+        if self.mask_index:
+            self.mask_index = (
+                self.mask_index.strip('[').strip(']').split(', '))
+            self.mask_index = to_float(self.mask_index)
+            self.mask_index = np.asarray(self.mask_index)
+        else:
+            print('mask_index not present in metadata.'
+                  ' Reading data in legacy mode.')
+
+        if self.x_mask_coord:
+            self.x_mask_coord = (
+                self.x_mask_coord.strip('[').strip(']').split(', '))
+            self.x_mask_coord = to_float(self.x_mask_coord)
+            self.x_mask_coord = np.asarray(self.x_mask_coord)
+        else:
+            print('x_mask_coord not present in metadata.'
+                  ' Reading data in legacy mode.')
+
+        if self.y_mask_coord:
+            self.y_mask_coord = (
+                self.y_mask_coord.strip('[').strip(']').split(', '))
+            self.y_mask_coord = to_float(self.y_mask_coord)
+            self.y_mask_coord = np.asarray(self.y_mask_coord)
+        else:
+            print('y_mask_coord not present in metadata.'
+                  ' Reading data in legacy mode.')
+
+    @staticmethod
+    def readable_pattern_order(acquisition_params_dict: dict) -> dict:
+        """Turns lists of patterns into strings.
+
+        Turns the list-valued attributes of an AcquisitionParameters object
+        (converted into a dictionary) into strings that improve readability
+        once all metadata is dumped into a JSON file. This function must be
+        called before dumping.
+
+        Args:
+            acquisition_params_dict (dict): Dictionary obtained from
+                converting an AcquisitionParameters object.
+
+        Returns:
+            [dict]: Modified dictionary with acquisition parameters metadata.
+        """
+
+        def _hard_coded_conversion(data):
+            s = '['
+            for value in data:
+                s += f'{value:.4f}, '
+            s = s[:-2]
+            s += ']'
+
+            return s
+
+        readable_dict = acquisition_params_dict
+        readable_dict['patterns'] = str(readable_dict['patterns'])
+        readable_dict['patterns_wp'] = str(readable_dict['patterns_wp'])
+
+        readable_dict['wavelengths'] = _hard_coded_conversion(
+            readable_dict['wavelengths'])
+
+        readable_dict['timestamps'] = _hard_coded_conversion(
+            readable_dict['timestamps'])
+
+        readable_dict['measurement_time'] = _hard_coded_conversion(
+            readable_dict['measurement_time'])
+
+        readable_dict['mask_index'] = _hard_coded_conversion(
+            readable_dict['mask_index'])
+
+        readable_dict['x_mask_coord'] = _hard_coded_conversion(
+            readable_dict['x_mask_coord'])
+
+        readable_dict['y_mask_coord'] = _hard_coded_conversion(
+            readable_dict['y_mask_coord'])
+
+        return readable_dict
+
+
+    def update_timings(self, timestamps: np.ndarray,
+                       measurement_time: np.ndarray):
+        """Updates acquisition timings.
+
+        Args:
+            timestamps (ndarray):
+                Array of `float` type elapsed time between each measurement
+                made by the spectrometer based on its internal clock. Units
+                in milliseconds.
+            measurement_time (ndarray):
+                Array of `float` type elapsed times between each callback.
+                Units in milliseconds.
+        """
+        self.mean_callback_acquisition_time_ms = np.mean(measurement_time)
+        self.total_callback_acquisition_time_s = np.sum(measurement_time) / 1000
+        self.mean_spectrometer_acquisition_time_ms = np.mean(
+            timestamps, dtype=float)
+        self.total_spectrometer_acquisition_time_s = np.sum(timestamps) / 1000
+
+        self.timestamps = timestamps
+        self.measurement_time = measurement_time
+
+
+@dataclass_json
+@dataclass
+class SpectrometerParameters:
+    """Class containing spectrometer configurations.
+
+    Further information: AvaSpec Library Manual (Version 9.10.2.0).
+
+    Attributes:
+        high_resolution (bool):
+            True if 16-bit AD Converter is used. False if 14-bit ADC is used.
+        initial_available_pixels (int):
+            Number of pixels available in spectrometer.
+        detector (str):
+            Name of the light detector.
+        firmware_version (str, optional):
+            Spectrometer firmware version.
+        dll_version (str, optional):
+            Spectrometer dll version.
+        fpga_version (str, optional):
+            Internal FPGA version.
+        integration_delay_ns (int, optional):
+            Parameter used to start the integration time not immediately
+            after the measurement request (or on an external hardware
+            trigger), but after a specified delay. Unit is based on internal
+            FPGA clock cycle.
+        integration_time_ms (float, optional):
+            Spectrometer exposure time during one scan in milliseconds.
+        start_pixel (int, optional):
+            Initial pixel data received from spectrometer.
+        stop_pixel (int, optional):
+            Last pixel data received from spectrometer.
+        averages (int, optional):
+            Number of averages in a single measurement.
+        dark_correction_enable (bool, optional):
+            Enable dynamic dark current correction.
+        dark_correction_forget_percentage (int, optional):
+            Percentage of the new dark value pixels that has to be used.
+            E.g., a percentage of 100 means only new dark values are used.
+            A percentage of 10 means that 10 percent of the new dark values
+            is used and 90 percent of the old values is used for drift
+            correction.
+        smooth_pixels (int, optional):
+            Number of neighbor pixels used for smoothing; the maximum has to
+            be smaller than half the selected pixel range because the pixels
+            on both the left and the right are used.
+        smooth_model (int, optional):
+            Smoothing model. Currently a single model is supported in which
+            the spectral data is averaged over a number of pixels on the
+            detector array. For example, if the smoothpix parameter is set
+            to 2, the spectral data for all pixels x(n) on the detector
+            array will be averaged with their neighbor pixels x(n-2),
+            x(n-1), x(n+1) and x(n+2).
+        saturation_detection (bool, optional):
+            Enable detection of saturation/overexposure in pixels.
+        trigger_mode (int, optional):
+            Trigger mode (0 = Software, 1 = Hardware, 2 = Single Scan).
+        trigger_source (int, optional):
+            Trigger source (0 = external trigger, 1 = sync input).
+        trigger_source_type (int, optional):
+            Trigger source type (0 = edge trigger, 1 = level trigger).
+        store_to_ram (int, optional):
+            Define how many scans can be stored in RAM. In DynamicRAM mode,
+            can be set to 0 to indicate infinite measurements.
+        configs (InitVar[MeasConfigType]):
+            Initialization object containing the data sent to the
+            spectrometer. Unnecessary if reconstructing the object from a
+            JSON file. Default is None.
+        version_info (InitVar[Tuple[str]]):
+            Initialization variable used for receiving firmware, dll and
+            FPGA version data. Unnecessary if reconstructing the object from
+            a JSON file.
+        class_description (str):
+            Class description used to improve readability when dumped to a
+            JSON file. Default is 'Spectrometer parameters'.
+    """
+
+    high_resolution: bool
+    initial_available_pixels: int
+    detector: str
+    firmware_version: Optional[str] = None
+    dll_version: Optional[str] = None
+    fpga_version: Optional[str] = None
+
+    integration_delay_ns: Optional[int] = None
+    integration_time_ms: Optional[float] = None
+
+    start_pixel: Optional[int] = None
+    stop_pixel: Optional[int] = None
+    averages: Optional[int] = None
+
+    dark_correction_enable: Optional[bool] = None
+    dark_correction_forget_percentage: Optional[int] = None
+
+    smooth_pixels: Optional[int] = None
+    smooth_model: Optional[int] = None
+
+    saturation_detection: Optional[bool] = None
+
+    trigger_mode: Optional[int] = None
+    trigger_source: Optional[int] = None
+    trigger_source_type: Optional[int] = None
+
+    store_to_ram: Optional[int] = None
+
+    configs: InitVar[MeasConfigType] = None
+    version_info: InitVar[Tuple[str]] = None
+
+    class_description: str = 'Spectrometer parameters'
+
+
+    def __post_init__(self, configs: Optional[MeasConfigType] = None,
+                      version_info: Optional[Tuple[str, str, str]] = None):
+        """Post-initialization of attributes.
+
+        Receives the data sent to the spectrometer and some version data,
+        and unwraps everything to set the majority of the
+        SpectrometerParameters attributes. During reconstruction from JSON,
+        arguments of type InitVar (configs and version_info) are set to None
+        and the function does nothing, leaving initialization to the
+        standard __init__ function.
+
+        Args:
+            configs (MeasConfigType, optional):
+                Object containing configurations sent to spectrometer.
+                Defaults to None.
+            version_info (Tuple[str, str, str], optional):
+                Tuple containing firmware, dll and FPGA version data.
+                Defaults to None.
+        """
+        if configs is not None and version_info is not None:
+            self.fpga_version, self.firmware_version, self.dll_version = (
+                version_info)
+
+            self.integration_delay_ns = configs.m_IntegrationDelay
+            self.integration_time_ms = configs.m_IntegrationTime
+
+            self.start_pixel = configs.m_StartPixel
+            self.stop_pixel = configs.m_StopPixel
+            self.averages = configs.m_NrAverages
+
+            self.dark_correction_enable = configs.m_CorDynDark.m_Enable
+            self.dark_correction_forget_percentage = (
+                configs.m_CorDynDark.m_ForgetPercentage)
+
+            self.smooth_pixels = configs.m_Smoothing.m_SmoothPix
+            self.smooth_model = configs.m_Smoothing.m_SmoothModel
+
+            self.saturation_detection = configs.m_SaturationDetection
+
+            self.trigger_mode = configs.m_Trigger.m_Mode
+            self.trigger_source = configs.m_Trigger.m_Source
+            self.trigger_source_type = configs.m_Trigger.m_SourceType
+
+            self.store_to_ram = configs.m_Control.m_StoreToRam
+
+
+def read_metadata(file_path: str) -> Tuple[MetaData,
+                                           AcquisitionParameters,
+                                           SpectrometerParameters,
+                                           DMDParameters]:
+    """Reads metadata of a previous acquisition from a JSON file.
+
+    Args:
+        file_path (str):
+            Name of the JSON file containing all metadata.
+
+    Returns:
+        Tuple[MetaData, AcquisitionParameters, SpectrometerParameters,
+        DMDParameters]:
+            saved_metadata (MetaData):
+                Metadata object read from JSON.
+            saved_acquisition_params (AcquisitionParameters):
+                AcquisitionParameters object read from JSON.
+            saved_spectrometer_params (SpectrometerParameters):
+                SpectrometerParameters object read from JSON.
+            saved_dmd_params (DMDParameters):
+                DMDParameters object read from JSON.
+    """
+
+    with open(file_path, 'r') as file:
+        data = json.load(file)
+
+    for obj in data:
+        if obj['class_description'] == 'Metadata':
+            saved_metadata = MetaData.from_dict(obj)
+        if obj['class_description'] == 'Acquisition parameters':
+            saved_acquisition_params = AcquisitionParameters.from_dict(obj)
+            saved_acquisition_params.undo_readable_pattern_order()
+        if obj['class_description'] == 'Spectrometer parameters':
+            saved_spectrometer_params = SpectrometerParameters.from_dict(obj)
+        if obj['class_description'] == 'DMD parameters':
+            saved_dmd_params = DMDParameters.from_dict(obj)
+
+    return (saved_metadata, saved_acquisition_params,
+            saved_spectrometer_params, saved_dmd_params)
+
+def read_metadata_2arms(file_path: str) -> Tuple[MetaData,
+                                                 AcquisitionParameters,
+                                                 SpectrometerParameters,
+                                                 DMDParameters,
+                                                 CAM]:
+    """Reads metadata of a previous two-arm acquisition from a JSON file.
+
+    Args:
+        file_path (str):
+            Name of the JSON file containing all metadata.
+
+    Returns:
+        Tuple[MetaData, AcquisitionParameters, SpectrometerParameters,
+        DMDParameters, CAM]:
+            saved_metadata (MetaData):
+                Metadata object read from JSON.
+            saved_acquisition_params (AcquisitionParameters):
+                AcquisitionParameters object read from JSON.
+            saved_spectrometer_params (SpectrometerParameters):
+                SpectrometerParameters object read from JSON.
+            saved_dmd_params (DMDParameters):
+                DMDParameters object read from JSON.
+            saved_cam_params (CAM):
+                CAM object read from JSON.
+    """
+
+    with open(file_path, 'r') as file:
+        data = json.load(file)
+
+    for obj in data:
+        if obj['class_description'] == 'Metadata':
+            saved_metadata = MetaData.from_dict(obj)
+        if obj['class_description'] == 'Acquisition parameters':
+            saved_acquisition_params = AcquisitionParameters.from_dict(obj)
+            saved_acquisition_params.undo_readable_pattern_order()
+        if obj['class_description'] == 'Spectrometer parameters':
+            saved_spectrometer_params = SpectrometerParameters.from_dict(obj)
+        if obj['class_description'] == 'DMD parameters':
+            saved_dmd_params = DMDParameters.from_dict(obj)
+        if obj['class_description'] == 'IDS camera configuration':
+            saved_cam_params = CAM.from_dict(obj)
+            saved_cam_params.undo_readable_class_CAM()
+
+    return (saved_metadata, saved_acquisition_params,
+            saved_spectrometer_params, saved_dmd_params, saved_cam_params)
+
+def save_metadata(metadata: MetaData,
+                  DMD_params: DMDParameters,
+                  spectrometer_params: SpectrometerParameters,
+                  acquisition_parameters: AcquisitionParameters) -> None:
+    """Saves metadata to a JSON file.
+
+    Args:
+        metadata (MetaData):
+            Metadata concerning the experiment, paths, file inputs and file
+            outputs.
+        DMD_params (DMDParameters):
+            Class containing DMD configurations and status.
+        spectrometer_params (SpectrometerParameters):
+            Object containing spectrometer configurations.
+        acquisition_parameters (AcquisitionParameters):
+            Object containing acquisition specifications and timing results.
+    """
+
+    path = Path(metadata.output_directory)
+    with open(
+        path / f'{metadata.experiment_name}_metadata.json',
+        'w', encoding='utf8') as output:
+
+        output_params = [
+            metadata.to_dict(),
+            DMD_params.to_dict(),
+            spectrometer_params.to_dict(),
+            AcquisitionParameters.readable_pattern_order(
+                acquisition_parameters.to_dict())]
+
+        json.dump(output_params, output, ensure_ascii=False, indent=4)
+
+def save_metadata_2arms(metadata: MetaData,
+                        DMD_params: DMDParameters,
+                        spectrometer_params: SpectrometerParameters,
+                        camPar: CAM,
+                        acquisition_parameters: AcquisitionParameters) -> None:
+    """Saves two-arm acquisition metadata to a JSON file.
+
+    Args:
+        metadata (MetaData):
+            Metadata concerning the experiment, paths, file inputs and file
+            outputs.
+        DMD_params (DMDParameters):
+            Class containing DMD configurations and status.
+        spectrometer_params (SpectrometerParameters):
+            Object containing spectrometer configurations.
+        camPar (CAM):
+            Object containing the IDS camera configuration.
+        acquisition_parameters (AcquisitionParameters):
+            Object containing acquisition specifications and timing results.
+    """
+
+    path = Path(metadata.output_directory)
+    with open(
+        path / f'{metadata.experiment_name}_metadata.json',
+        'w', encoding='utf8') as output:
+
+        output_params = [
+            metadata.to_dict(),
+            DMD_params.to_dict(),
+            spectrometer_params.to_dict(),
+            CAM.readable_class_CAM(camPar.to_dict()),
+            AcquisitionParameters.readable_pattern_order(
+                acquisition_parameters.to_dict())]
+
+        json.dump(output_params, output, ensure_ascii=False, indent=4)
+
+@dataclass_json
+@dataclass
+class func_path:
+    def __init__(self, data_folder_name, data_name, ask_overwrite=False):
+        if not os.path.exists('../data/' + data_folder_name):
+            os.makedirs('../data/' + data_folder_name)
+
+        if not os.path.exists('../data/' + data_folder_name + '/' + data_name):
+            os.makedirs('../data/' + data_folder_name + '/' + data_name)
+            aborted = False
+        elif ask_overwrite:
+            res = input('Acquisition already exists, overwrite it? [y/n]')
+            aborted = (res == 'n')
+        else:
+            aborted = True
+
+        self.aborted = aborted
+        self.subfolder_path = '../data/' + data_folder_name + '/' + data_name
+        self.overview_path = self.subfolder_path + '/overview'
+        if not os.path.exists(self.overview_path):
+            os.makedirs(self.overview_path)
+
+        self.data_name = data_name
+        self.data_path = self.subfolder_path + '/' + data_name
+        self.had_reco_path = self.data_path + '_had_reco.npz'
+        self.fig_had_reco_path = self.overview_path + '/' + data_name
+        self.pathIDSsnapshot = Path(self.data_path + '_IDScam_before_acq.npy')
+        self.pathIDSsnapshot_overview = self.overview_path + '/' + data_name + '_IDScam_before_acq.png'
+        self.nn_reco_path = self.data_path + '_nn_reco.npz'
+        self.fig_nn_reco_path = self.overview_path + '/' + data_name
diff --git a/spas/plot_spec_to_rgb_image.py b/spas/plot_spec_to_rgb_image.py
index bbd1e66..74ea479 100644
--- a/spas/plot_spec_to_rgb_image.py
+++ b/spas/plot_spec_to_rgb_image.py
@@ -30,14 +30,18 @@
 from spas import convert_spec_to_rgb
 from spas.convert_spec_to_rgb import ColourSystem
+from scipy.ndimage import median_filter as medfilt
+from matplotlib import pyplot as plt
 
 def plot_spec_to_rgb_image(GT, wavelengths):
 
-    cs_srgb = convert_spec_to_rgb.cs_srgb
+    cs_hdtv = convert_spec_to_rgb.cs_hdtv
 
     #################### prepare interpolation ################################
     lambda_begin_cie = 380
     lambda_end_cie = 780
+    lambda_cie = np.linspace(lambda_begin_cie, lambda_end_cie, int((lambda_end_cie-lambda_begin_cie)/5 + 1))
     lambda_begin = math.ceil(wavelengths[0]/5)*5
     lambda_end = math.floor(wavelengths[len(wavelengths)-1]/5)*5
     new_wavelengths = np.arange(lambda_begin, lambda_end+1, 5)
@@ -49,26 +53,44 @@
     maxi = np.amax(GT)
     image_arr = np.zeros([np.size(GT, axis=0), np.size(GT, axis=1), 3], dtype=np.uint8)
     for j in range(np.size(GT, axis=1)):
         for i in range(np.size(GT, axis=0)):
+            # extract the spectrum of the current pixel
             pix_spect = GT[i, j, :]
-            med_spect = pix_spect #medfilt(pix_spect, 99)
+            max_raw_spec = np.amax(pix_spect)
+
+            # apply a median filter to each spectrum
+            med_spect = medfilt(pix_spect, 99)
+
+            # rescale the spectrum by its original maximum value
+            max_med_spec = np.amax(med_spect)
+            if max_med_spec != 0:
+                med_spect = med_spect * max_raw_spec / max_med_spec
+            else:
+                med_spect = np.zeros(len(med_spect))
+            # delete all negative values
             med_spect[np.where(med_spect < 0)] = 0
-            gamma = np.max(med_spect)/maxi
+            # interpolate to combine with the CIE colour-matching functions.
+            # N.B.: interpolation degrades the spectral resolution; interpolating
+            # the (high-resolution) CIE spectra would be better.
             f = interpolate.interp1d(wavelengths, med_spect)
             GT_interpol = f(new_wavelengths)
-            GT_interpol2 = np.insert(GT_interpol, 0, zeros_before_vec)
-            GT_interpol3 = np.append(GT_interpol2, zeros_after_vec)
+            # pad the spectrum with zeros so that it can be multiplied with the CIE spectra
+            GT_interpol = np.insert(GT_interpol, 0, zeros_before_vec)
+            GT_interpol = np.append(GT_interpol, zeros_after_vec)
+            max_GT_interpol = np.amax(GT_interpol)
+            if max_GT_interpol != 0:
+                GT_interpol = GT_interpol * max_raw_spec / max_GT_interpol
+            # coefficient used to rescale the rgb vector into the 0-255 range
+            coeff = np.amax(GT_interpol)/maxi
 
-            rgb = ColourSystem.spec_to_rgb(cs_srgb, GT_interpol3)
-
-            image_arr[i, j] = rgb*255*gamma
-
-            # new_wavelengths2 = np.arange(lambda_begin_cie, lambda_end+1, 5)
-            # new_wavelengths3 = np.arange(lambda_begin_cie, lambda_end_cie+1, 5)
+            rgb = ColourSystem.spec_to_rgb(cs_hdtv, GT_interpol)
+
+            px = rgb*255*coeff
+            image_arr[i, j] = px
 
             if i == -1:
-                print('[i='+str(i)+',j='+str(j)+'] ==> rgb = '+str(image_arr[i, j])+' gamma = '+str(gamma))
+                print('[i='+str(i)+',j='+str(j)+'] ==> rgb = '+str(image_arr[i, j])+' coeff = '+str(coeff))
 
     return image_arr
diff --git a/spas/reconstruction.py b/spas/reconstruction.py
index a249357..291649e 100644
--- a/spas/reconstruction.py
+++ b/spas/reconstruction.py
@@ -2,8 +2,9 @@
 __author__ = 'Guilherme Beneti Martins'
 
 import numpy as np
+from spas.metadata import AcquisitionParameters
 
-def reconstruction_hadamard(patterns: np.ndarray,
+def reconstruction_hadamard(acquisition_parameters: AcquisitionParameters,
                             mode: str,
                             Q: np.ndarray,
                             M: np.ndarray,
@@ -11,9 +12,8 @@
     """Reconstruct an image acquired with Hadamard patterns.
 
     Args:
-        patterns (np.ndarray):
-            Array containing an ordered list of the patterns used for
-            acquisition.
+        acquisition_parameters (AcquisitionParameters):
+            Object containing acquisition specifications.
         mode (str):
             Select if reconstruction is based on MATLAB, fht or Walsh
             generated patterns.
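The inverse step used by `reconstruction_hadamard` is `f = np.matmul(Q, M_Had)` followed by division by N*N, which works because a symmetric Hadamard matrix satisfies Q @ Q == (N*N) * I. A toy sketch of that round trip (the 4x4 image and shapes are hypothetical, not SPAS data):

```python
import numpy as np
from scipy.linalg import hadamard

# Differential Hadamard acquisition on a toy flattened image, then
# inversion with Q itself (Q is symmetric, so Q.T == Q).
N = 4
Q = hadamard(N * N)                 # (N*N) x (N*N) Hadamard matrix
img = np.random.rand(N * N)         # flattened ground-truth image
meas_pos = Q.clip(min=0) @ img      # measurements with positive patterns
meas_neg = (-Q).clip(min=0) @ img   # measurements with negative patterns
M = meas_pos - meas_neg             # differential measurement, equals Q @ img
recon = (Q @ M) / (N * N)           # inverse transform, recovers img
assert np.allclose(recon, img)
```

The same identity explains the `frames /= N*N` normalization in the hunk above.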
@@ -28,7 +28,9 @@ def reconstruction_hadamard(patterns: np.ndarray, [np.ndarray]: Reconstructed matrix of size NxN pixels. """ - + + patterns = acquisition_parameters.patterns + if mode == 'matlab': ind_opt = patterns[1::2] if mode == 'fht' or mode == 'walsh': @@ -47,6 +49,19 @@ def reconstruction_hadamard(patterns: np.ndarray, f = np.matmul(Q,M_Had) # Q.T = Q frames = np.reshape(f,(N,N,M.shape[1])) frames /= N*N + + mask_index = acquisition_parameters.mask_index + if len(mask_index) > 0: + x_mask_coord = acquisition_parameters.x_mask_coord + y_mask_coord = acquisition_parameters.y_mask_coord + x_mask_length = x_mask_coord[1] - x_mask_coord[0] + y_mask_length = y_mask_coord[1] - y_mask_coord[0] + + GTnew_vec = np.zeros((x_mask_length*y_mask_length, frames.shape[2])) + GT_vec = frames.reshape(-1, frames.shape[-1]) + + GTnew_vec[mask_index,:] = GT_vec[:len(mask_index),:] + frames = np.reshape(GTnew_vec, (y_mask_length, x_mask_length, frames.shape[2])) return frames diff --git a/spas/reconstruction_nn.py b/spas/reconstruction_nn.py index 3863689..9d057a9 100644 --- a/spas/reconstruction_nn.py +++ b/spas/reconstruction_nn.py @@ -16,17 +16,17 @@ import torch import numpy as np from matplotlib import pyplot as plt +import pathlib -from spyrit.core.recon import PinvNet, DCNet -from spyrit.core.train import load_net -from spyrit.misc.statistics import Cov2Var -from spyrit.misc.walsh_hadamard import walsh2_matrix -from spyrit.core.noise import Poisson +from spyrit.core.train import load_net +from spyrit.core.noise import Poisson from spyrit.core.meas import HadamSplit -from spyrit.core.prep import SplitPoisson -from spyrit.core.recon import TikhonovMeasurementPriorDiag -from spyrit.core.nnet import Unet +from spyrit.core.prep import SplitPoisson +from spyrit.core.recon import PinvNet, DCNet, TikhonovMeasurementPriorDiag +from spyrit.core.nnet import Unet from spyrit.misc.sampling import Permutation_Matrix, reorder +from spyrit.misc.statistics import Cov2Var +from 
spyrit.misc.walsh_hadamard import walsh2_matrix
 
 from spas.noise import noiseClass
 from spas.metadata import AcquisitionParameters
@@ -69,12 +69,14 @@
     size 128).
 
     Args:
-        cov_path (str):
-            Path to the covariance matrix used for reconstruction.
-        model_folder (str):
-            Folder containing trained models for reconstruction.
-        network_params (ReconstructionParameters):
-            Parameters used to load the model.
+        cov_path (str): Path to the covariance matrix used for
+            reconstruction. It must be a .npy (NumPy) or .pt (PyTorch) file.
+            It is converted to a torch tensor for reconstruction.
+
+        model_folder (str): Folder containing trained models for
+            reconstruction. It is unused by the current implementation.
+
+        network_params (ReconstructionParameters): Parameters used to load
+            the model.
 
     Returns:
         Tuple[Union[Pinv_Net, DC2_Net], str]:
@@ -87,92 +89,65 @@
     device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
     print(f'Device: {device}')
 
-    Cov_rec = np.load(cov_path)
+    if pathlib.Path(cov_path).suffix == '.npy':
+        Cov_rec = torch.from_numpy(np.load(cov_path))
+    elif pathlib.Path(cov_path).suffix == '.pt':
+        Cov_rec = torch.load(cov_path)
+    else:
+        raise RuntimeError('Covariance matrix must be a .npy or .pt file')
+
     H = walsh2_matrix(network_params.img_size)
 
     # Rectangular sampling
    # N.B.: Only for measurements from patterns of size 2**K reconstructed at
    # size 2**L, with L > K (e.g., measurements are at size 64, reconstructions
    # at size 128).
- Ord = np.ones((network_params.img_size, network_params.img_size)) - n_sub = math.ceil(network_params.M**0.5) - Ord[:,n_sub:] = 0 - Ord[n_sub:,:] = 0 - - # Init network - #Perm_rec = Permutation_Matrix(Ord) - #Hperm = Perm_rec @ H - #Pmat = Hperm[:network_params.M,:] - - # init - # Forward = Forward_operator_Split_ft_had(Pmat, Perm_rec, - # network_params.img_size, - # network_params.img_size) - - Forward = HadamSplit(network_params.M, - network_params.img_size, - Ord)# modified by LMW 30/03/2023 - - # Noise = Acquisition_Poisson_approx_Gauss(network_params.N0, Forward) + Ord = torch.zeros(network_params.img_size, network_params.img_size) + M_xy = math.ceil(network_params.M**0.5) + Ord[:M_xy, :M_xy] = 1 + # Init network + Forward = HadamSplit(network_params.M, network_params.img_size, Ord) Noise = Poisson(Forward, network_params.N0) - - # Prep = Preprocess_Split_diag_poisson(network_params.N0, - # network_params.M, - # network_params.img_size**2) - - Prep = SplitPoisson(network_params.N0, - network_params.M, - network_params.img_size**2) - - Denoi = Unet() - # Cov_perm = Perm_rec @ Cov_rec @ Perm_rec.T - # DC = Generalized_Orthogonal_Tikhonov(sigma_prior = Cov_perm, - # M = network_params.M, - # N = network_params.img_size**2) - - - - # model = DC2_Net(Noise, Prep, DC, Denoi) - model = DCNet(Noise, Prep, Cov_rec, Denoi) + Prep = SplitPoisson(network_params.N0, Forward) - # # load - # net_folder = '{}_{}_{}'.format( - # network_params.arch, network_params.denoi, - # network_params.data) - - # suffix = '_{}_N0_{}_N_{}_M_{}_epo_{}_lr_{}_sss_{}_sdr_{}_bs_{}_reg_{}'.format( - # network_params.subs, network_params.N0, - # network_params.img_size, network_params.M, - # network_params.num_epochs, network_params.learning_rate, - # network_params.step_size, network_params.gamma, - # network_params.batch_size, network_params.regularization) - - # torch.cuda.empty_cache() # need to keep this here? 
-    # title = Path(model_folder) / net_folder / (net_folder + suffix)
-    # load_net(title, model, device)
-    # model.eval() # Mandantory when batchNorm is used
-    # model = model.to(device)
+    if network_params.denoi is None:
+        Denoi = torch.nn.Identity()
+    else:
+        Denoi = Unet()
+
+    model = DCNet(Noise, Prep, Cov_rec, Denoi)
 
     # Load trained DC-Net
     net_arch = network_params.arch
     net_denoi = network_params.denoi
     net_data = network_params.data
-    if (network_params.img_size == 128) and (network_params.M == 4096):
-        net_order = 'rect'
-    else:
-        net_order = 'var'
+    net_order = network_params.subs
 
-    bs = 256
+    if net_data == 'stl10':
+        bs = 1024
+    elif net_data == 'imagenet':
+        bs = 256
+
+    net_suffix = f'N0_{network_params.N0}_N_{network_params.img_size}_M_{network_params.M}_epo_30_lr_0.001_sss_10_sdr_0.5_bs_{bs}_reg_1e-07_light'
 
     net_folder = f'{net_arch}_{net_denoi}_{net_data}/'
     net_title = f'{net_arch}_{net_denoi}_{net_data}_{net_order}_{net_suffix}'
-    title = './model_v2/' + net_folder + net_title
-    load_net(title, model, device, False)
+    title = 'C:/openspyrit/models/' + net_folder + net_title + '.pth'
+
+    if network_params.denoi is not None:
+        load_net(title, model, device, False)
+    model.eval()  # Mandatory when batchNorm is used
 
     return model, device
@@ -201,7 +176,8 @@
     Acquisitions can be subsampled a posteriori, leading to M_rec < M_acq
     """
     # Dimensions (N.B.: images are assumed to be square)
     N_acq = acqui_param.pattern_dimension_x
     N_rec = recon_param.img_size
     N_wav = meas.shape[0]
@@ -210,7 +186,6 @@
     Ord_acq = np.reshape(Ord_acq, (N_acq,N_acq))  # sampling map
 
     Perm_acq = Permutation_Matrix(Ord_acq).T
-
     # Order used for reconstruction
     if recon_param.subs == 'rect':
         Ord_rec = np.ones((N_rec, N_rec))
@@ -224,61 +199,10 @@
 
     Perm_rec = Permutation_Matrix(Ord_rec)
 
-    #
+    # reorder
     meas = meas.T
-
-    # Subsample acquisition permutation matrix (fill with zeros if necessary)
-    if N_rec > N_acq:
-
-        # Square subsampling in the "natural" order
-        Ord_sub = np.zeros((N_rec,N_rec))
-        Ord_sub[:N_acq,:N_acq] = -np.arange(-N_acq**2,0).reshape(N_acq,N_acq)
-        Perm_sub = Permutation_Matrix(Ord_sub)
-
-        # Natural order measurements (N_acq resolution)
-        Perm_raw = np.zeros((2*N_acq**2,2*N_acq**2))
-        Perm_raw[::2,::2] = Perm_acq.T
-        Perm_raw[1::2,1::2] = Perm_acq.T
-        meas = Perm_raw @ meas
-
-        # Zero filling (needed only when reconstruction resolution is higher
-        # than acquisition res)
-        zero_filled = np.zeros((2*N_rec**2, N_wav))
-        zero_filled[:2*N_acq**2,:] = meas
-
-        meas = zero_filled
-
-        Perm_raw = np.zeros((2*N_rec**2,2*N_rec**2))
-        Perm_raw[::2,::2] = Perm_sub.T
-        Perm_raw[1::2,1::2] = Perm_sub.T
-
-        meas = Perm_raw @ meas
-
-    elif N_rec == N_acq:
-        Perm_sub = Perm_acq[:N_rec**2,:].T
-
-    elif N_rec < N_acq:
-        # Square subsampling in the "natural" order
-        Ord_sub = np.zeros((N_acq,N_acq))
-        Ord_sub[:N_rec,:N_rec] = -np.arange(-N_rec**2,0).reshape(N_rec,N_rec)
-        Perm_sub = Permutation_Matrix(Ord_sub)
-        Perm_sub = Perm_sub[:N_rec**2,:]
-        Perm_sub = Perm_sub @ Perm_acq.T
-
-    # Reorder measurements when reconstruction order is not "natural"
-    if N_rec <= N_acq:
-        # Get both positive and negative coefficients permutated
-        Perm = Perm_rec @ Perm_sub
-        Perm_raw = np.zeros((2*N_rec**2,2*N_acq**2))
-
-    elif N_rec > N_acq:
-        Perm = Perm_rec
-        Perm_raw = np.zeros((2*N_rec**2,2*N_rec**2))
-
-    Perm_raw[::2,::2] = Perm
-    Perm_raw[1::2,1::2] = Perm
-    meas = Perm_raw @ meas
-
+    meas = reorder(meas, Perm_acq, Perm_rec)
+
     return meas[:2*recon_param.M,:].T
@@ -323,17 +247,13 @@
     # method. Defaults to None.
 
     proportion = spectral_data.shape[0]//batches  # Amount of wavelengths per batch
-
-    # img_size = model.Acq.FO.h # image assumed to be square
     img_size = model.Acq.meas_op.h  # image assumed to be square
+
     recon = np.zeros((spectral_data.shape[0], img_size, img_size))
+
     start = perf_counter_ns()
 
-    # model.PreP.set_expe()
-    model.prep.set_expe() # Modified by LMW 30/03/2023
-    model.to(device) # Modified by LMW 30/03/2023
+    model.prep.set_expe()
+    model.to(device)
 
     with torch.no_grad():
         for batch in range(batches):
@@ -487,4 +407,4 @@
         while not queue.empty():
             queue.get_nowait()
 
-    print('#Plot process: Ended plot')
\ No newline at end of file
+    print('#Plot process: Ended plot')
diff --git a/spas/spectro_SP_lib.py b/spas/spectro_SP_lib.py
new file mode 100644
index 0000000..34429f7
--- /dev/null
+++ b/spas/spectro_SP_lib.py
@@ -0,0 +1,754 @@
+# -*- coding: utf-8 -*-
+"""
+Created on Fri Sep 20 11:55:45 2024
+
+@author: mahieu
+"""
+
+import serial
+import numpy as np
+import time
+
+#%% functions
+class grating:
+    """A class containing the information about the grating.
+ + Attributes: + grooves: + the number of grooves per mm + blaze: + the blaze wavelength (nm) + current_grating: + the grating currently used in the spectrograph + number_of_grating: + the number of available gratings in the spectrograph + """ + + def __init__(self): + self.grooves = 0 + self.blaze = 0 + self.current_grating_nbr = 0 + self.number_of_grating = 0 + + +class Spectro_SP: + """ + This class controls the spectrograph CM110 from Spectral Products. + """ + def query_echo(serial_port: object): + """The ECHO command is used to verify communications with the CM110. + + Args: + serial_port (obj): + An object to communicate with the spectrograph by the RS232 serial port. + + Returns: + Nothing, just prints whether communication is established or failed + """ + + send_cmd = bytes([27]) + try: + serial_port.write(send_cmd) + receive_serial = serial_port.readline() + HiByte = receive_serial[0] + + if HiByte == 27: + print("RS232 communication with the spectrograph established") + except: + print("Error: Attempting to use a port that is not open or is used by another application !!") + + + def open_serial(comm_port: str = 'COM3') -> object: + """Open the serial port for RS232 communication with the spectrograph. + + Args: + comm_port (str): + the communication port that your computer assigns (valid values: COM1-6), as listed in the Device Manager / Ports (COM & LPT) + + Returns: + serial_port (obj): + An object containing the serial port information + """ + + try: + serial_port = serial.Serial(port = comm_port, baudrate = 9600, bytesize = 8, parity = 'N', stopbits = 1, timeout = 0.1, rtscts = True, dsrdtr = False, xonxoff = False) + # query_echo(serial_port) + + return serial_port + except: + print('Error: Unable to open the port: ' + comm_port + '.
Try the following possibilities:') + print(' - Turn on the alimentation of the spectrograph') + print(' - Connect the USB cable of the spectrograph') + print(' - Check the COM port number by opening the Device Manager / Ports(COM & LPT)') + + + def query_unit(serial_port: object, + print_unit: bool = False) -> str: + """Read the unit used in the GOTO, SCAN, SIZE, and CALIBRATE commands of the current grating. + + Args: + serial_port (obj): + A object to communicate with the spectrograph by the RS232 serial port. + print_unit (bool): + a boolean to print or not (default) the result + + Returns: + unit (str): + the unit used in the GOTO, SCAN, SIZE, and CALIBRATE commands. + (µm: micrometer, nm: nanometer, Å: Angström) + """ + + inc = 0 + while True: + stop = False + inc = inc + 1 + cmd = bytes([56]) + HiByte = bytes([14]) + send_cmd = cmd + HiByte + serial_port.readline() # used to flush the buffer + serial_port.write(send_cmd) + + receive_serial = serial_port.readline() + HiByte = receive_serial[0] + LoByte = receive_serial[1] + unit_nbr = HiByte * 256 + LoByte + if unit_nbr == 0: + unit = 'µm' + elif unit_nbr == 1: + unit = 'nm' + elif unit_nbr == 2: + unit = 'A' + else: + print('problem to read the unit, value out of range') + stop = True + if inc == 1: + print('try a second time') + else: + print('Error: unit reading failed !!') + + if stop == False or inc >= 2: + break + + if stop == False: + if print_unit == True: + if unit == 'A': + unit_to_print = 'Å' + else: + unit_to_print = unit + print("Unit : " + unit_to_print) + + return unit + + + def query_position(serial_port: object, + print_position: bool = False) -> int: + """Read the position (in wavelength) of the grating inside the spectrograph. + + Args: + serial_port (obj): + A object to communicate with the spectrograph by the RS232 serial port. + print_position (bool): + a boolean to print or not (default) the result + + Returns: + position (int): + the position of the grating depending of the unit. 
+ (µm: micrometer, nm: nanometer, Å: Angström) + """ + + cmd = bytes([56]) + HiByte = bytes([0]) + send_cmd = cmd + HiByte + serial_port.write(send_cmd) + + receive_serial = serial_port.readline() + HiByte = receive_serial[0] + LoByte = receive_serial[1] + position = HiByte * 256 + LoByte + + if print_position == True: + unit = query_unit(serial_port) + if unit == 'A': + unit_to_print = 'Å' + else: + unit_to_print = unit + print("position = " + str(position) + ' ' + unit_to_print) + + return position + + + def query_grating(serial_port: object, + grating: object, + print_grating_info: bool = False) -> grating: + """Read informations about the grating. + + Args: + serial_port (obj): + A object to communicate with the spectrograph by the RS232 serial port. + grating (class): + the class containing the information of the grating -> + (Grooves/mm, Blaze wavelength, current grating number, number of grating) + print_grating_info (bool): + a boolean to print or not (default) the grating informations + + Returns: + grating (class): + the characteristic of the current grating + """ + + # Query on the grooves number + cmd = bytes([56]) + HiByte = bytes([2]) + send_cmd = cmd + HiByte + serial_port.write(send_cmd) + + receive_serial = serial_port.readline() + HiByte = receive_serial[0] + LoByte = receive_serial[1] + grating.grooves = HiByte * 256 + LoByte + + # Query on the blaze wavelength + cmd = bytes([56]) + HiByte = bytes([3]) + send_cmd = cmd + HiByte + serial_port.write(send_cmd) + + receive_serial = serial_port.readline() + HiByte = receive_serial[0] + LoByte = receive_serial[1] + grating.blaze = HiByte * 256 + LoByte + + # Query on the current grating number + cmd = bytes([56]) + HiByte = bytes([4]) + send_cmd = cmd + HiByte + serial_port.write(send_cmd) + + receive_serial = serial_port.readline() + HiByte = receive_serial[0] + LoByte = receive_serial[1] + grating.current_grating_nbr = HiByte * 256 + LoByte + + # Query on the total number of grating + cmd = 
bytes([56]) + HiByte = bytes([13]) + send_cmd = cmd + HiByte + serial_port.write(send_cmd) + + receive_serial = serial_port.readline() + HiByte = receive_serial[0] + LoByte = receive_serial[1] + grating.number_of_grating = HiByte * 256 + LoByte + + if print_grating_info == True: + print("grooves/mm = " + str(grating.grooves)) + print("blaze wavelength = " + str(grating.blaze) + ' nm') + print("current grating number = " + str(grating.current_grating_nbr)) + print("number of grating = " + str(grating.number_of_grating)) + + return grating + + + def query_speed(serial_port: object, + print_speed: bool = False) -> int: + """Read the speed at which the monochromator may scan. + + Args: + serial_port (obj): + A object to communicate with the spectrograph by the RS232 serial port. + print_speed (bool): + a boolean to print or not (default) the result + + Returns: + speed (int): + the speed at which the monochromator may scan (Å/sec). + """ + + cmd = bytes([56]) + HiByte = bytes([5]) + send_cmd = cmd + HiByte + serial_port.write(send_cmd) + + receive_serial = serial_port.readline() + HiByte = receive_serial[0] + LoByte = receive_serial[1] + speed = HiByte * 256 + LoByte + + if print_speed == True: + print("speed = " + str(speed) + ' Å/sec') + + return speed + + + def query_size(serial_port: object, + print_size: bool = False) -> int: + """Read the step size and the direction of the grating moving. + If size is positive : rotation of the grating will increase the position in wavelength + If size is negative : rotation of the grating will decrease the position in wavelength + + Args: + serial_port (obj): + A object to communicate with the spectrograph by the RS232 serial port. 
+ print_size (bool): + a boolean to print or not (default) the result + + Returns: + size (int): + the size of the step + """ + + cmd = bytes([56]) + HiByte = bytes([6]) + send_cmd = cmd + HiByte + serial_port.write(send_cmd) + + receive_serial = serial_port.readline() + HiByte = receive_serial[0] + LoByte = receive_serial[1] + size = HiByte * 256 + LoByte + + if size >= 128: + size = 128 - size + + if print_size == True: + unit = query_unit(serial_port) + if unit == 'A': + unit_to_print = 'Å' + else: + unit_to_print = unit + print("step size = " + str(size) + ' ' + unit_to_print) + + return size + + + def cmd_unit(serial_port: object, + unit: str = 'nm', + print_unit: bool = False): + """This command allows the selection of units in the GOTO, SCAN, SIZE, and CALIBRATE commands of the current grating. + + Args: + serial_port (obj): + A object to communicate with the spectrograph by the RS232 serial port. + unit (str): "µm", "nm" (default) or "A". + the unit used in the GOTO, SCAN, SIZE, and CALIBRATE commands. 
+ print_unit (bool): + a boolean to print or not (default) the result + + Returns: + Nothing, just display a message if the command is not accepted or the new value if accepted + """ + + stop = False + if unit == 'µm': + unit_nbr = 0 + elif unit == 'nm': + unit_nbr = 1 + elif unit == 'A': + unit_nbr = 2 + else: + print('Wrong input unit, please set to : µm, nm or A') + print('command aborted') + stop = True + + if stop == False: + cmd = bytes([50]) + HiByte = bytes([unit_nbr]) + send_cmd = cmd + HiByte + serial_port.write(send_cmd) + + ret = serial_port.readline() + if len(ret) > 0: + if ret[0] >= 128: + print('Command not accepted') + elif print_unit == True: + unit = query_unit(serial_port) + if unit == 'A': + unit_to_print = 'Å' + else: + unit_to_print = unit + print("Unit set to : " + unit_to_print) + + + def cmd_size(serial_port: object, + size: int = 1, + print_size: bool = False): + """This command determines the change in magnitude and the direction of the grating position after a STEP command. + + Args: + serial_port (obj): + A object to communicate with the spectrograph by the RS232 serial port. + size (int = 1 default): + the step size (in the preset unit) and the direction of the grating moving. + To increase the position, set a value in the range[0:127]. + To decrease the position, set a value in the range[0:-127]. + print_unit (bool): + a boolean to print or not (default) the result + + Returns: + Nothing, just display a message if the command is not accepted or the new value if accepted + """ + + stop = False + if abs(size) >= 128: + print('value out of range. 
Valid range : [-127 ; 127]') + print('command aborted') + stop = True + elif size < 0: + new_size = abs(size) + 128 + else: + new_size = size + + if stop == False: + cmd = bytes([55]) + HiByte = bytes([int(new_size)]) + send_cmd = cmd + HiByte + serial_port.write(send_cmd) + + ret = serial_port.readline() + if len(ret) > 0: + if ret[0] >= 128: + print('Command not accepted') + elif print_size == True: + unit = query_unit(serial_port) + if unit == 'A': + unit_to_print = 'Å' + else: + unit_to_print = unit + + current_size = query_size(serial_port) + if current_size >= 128: + new_current_size = 128 - current_size + else: + new_current_size = current_size + print("step size set to : " + str(new_current_size) + ' ' + unit_to_print) + + + def cmd_speed(serial_port: object, + speed: int = 1000, + print_speed: bool = False): + """Set the speed at which the grating may scan. + + Args: + serial_port (obj): + A object to communicate with the spectrograph by the RS232 serial port. + speed (int = 1000 Å/s default): + Values of speed are grating dependent. The function will find the nearest valid value depending of the grating. 
+ print_speed (bool): + A boolean to print or not (default) the result + + Returns: + Nothing, just display a message if the command is not accepted or the new value if accepted + """ + + stop = False + + # possible valid values of the speed + serial_port.readline()# used to flush the buffer + current_grating = query_grating(serial_port, grating) + if current_grating.grooves == 3600: + possible_speed = [333, 166, 83, 41, 20, 10, 5, 2, 1] + elif current_grating.grooves == 2400: + possible_speed = [500, 250, 125, 62, 31, 15, 7, 3, 1] + elif current_grating.grooves == 1800: + possible_speed = [666, 332, 166, 82, 40, 20, 10, 4, 2] + elif current_grating.grooves == 1200: + possible_speed = [1000, 500, 250, 125, 62, 31, 15, 7, 3, 1] + elif current_grating.grooves == 600: + possible_speed = [2000, 1000, 500, 250, 124, 62, 30, 14, 6, 2] + elif current_grating.grooves == 300: + possible_speed = [4000, 2000, 1000, 500, 248, 124, 60, 28, 12, 4] + elif current_grating.grooves == 150: + possible_speed = [8000, 4000, 2000, 1000, 496, 248, 120, 56, 24, 8] + elif current_grating.grooves == 75: + possible_speed = [16000, 8000, 4000, 2000, 992, 496, 240, 112, 48, 16] + else: + print('grating not referenced in the function "cmd_speed".') + stop = True + + if stop == False: + try: + possible_speed.index(speed) + except: # find the closest valid speed if value is not valid + print('desired speed does not match the possible values for the grating : ' + str(current_grating.grooves) + ' grooves/mm.') + nearest_indx = np.argmin(np.abs(np.array(possible_speed) - speed)) + speed = possible_speed[nearest_indx] + print('possible valid values : ' + str(possible_speed) + ' Å/s') + print('The closest valid speed found is ' + str(speed) + ' Å/s') + + cmd = bytes([13]) + HiByte = bytes([int(np.floor(speed/256))]) + LoByte = bytes([int(speed%256)]) + send_cmd = cmd + HiByte + LoByte + serial_port.write(send_cmd) + + ret = serial_port.readline() + if len(ret) > 0: + if ret[0] >= 128: + print('Command 
not accepted') + elif print_speed == True: + current_speed = query_speed(serial_port) + print("speed set to : " + str(current_speed) + ' Å/s') + + + def cmd_step(serial_port: object, + print_position: bool = False): + """Move the grating by a preset amount defined by the SIZE command. + + Args: + serial_port (obj): + An object to communicate with the spectrograph by the RS232 serial port. + print_position (bool): + a boolean to print or not (default) the result + + Returns: + Nothing, just display a message if the command is not accepted or the new position after the move if accepted + """ + + cmd = bytes([54]) + send_cmd = cmd + serial_port.write(send_cmd) + + ret = serial_port.readline() + if len(ret) > 0: + if ret[0] >= 128: + print('Command not accepted') + elif print_position == True: + position = query_position(serial_port) + unit = query_unit(serial_port) + if unit == 'A': + unit_to_print = 'Å' + else: + unit_to_print = unit + print("wavelength position = " + str(position) + ' ' + unit_to_print) + + + def cmd_selectGrating(serial_port: object, + grating_nbr: int = 1, + print_select: bool = False): + """Select the grating that will be used. + + Args: + serial_port (obj): + An object to communicate with the spectrograph by the RS232 serial port. + grating_nbr (int = 1 default): + To select the grating number. Valid values : 1 or 2 + print_select (bool): + a boolean to print or not (default) the grating selected + + Returns: + Nothing, just display a message if the command is not accepted or the new value if accepted + """ + + serial_port.readline()# used to flush the buffer + current_grating = query_grating(serial_port, grating) + if current_grating.current_grating_nbr == grating_nbr: + print('grating already selected.
Nothing to do') + else: + cmd = bytes([26]) + HiByte = bytes([grating_nbr]) + send_cmd = cmd + HiByte + serial_port.write(send_cmd) + print('grating change, please wait...') + time.sleep(15) + ret = serial_port.readline() + if len(ret) > 0: + if ret[0] >= 128: + print('Command not accepted') + elif print_select == True: + print('selected grating:') + query_grating(serial_port, grating, print_grating_info = True) + + + def cmd_goto(serial_port: object, + position: int = 0, + unit: str = 'nm', + print_position: bool = False): + """This command moves the grating to a selected position. + + Args: + serial_port (obj): + A object to communicate with the spectrograph by the RS232 serial port. + position (int = 0 default): + the position (in wavelength) + unit (str): "µm", "nm" (default) or "A". + the unit used in the GOTO, SCAN, SIZE, and CALIBRATE commands. + print_position (bool): + a boolean to print or not (default) the position of the grating in wavelength + + Returns: + Nothing, just display a message if the command is not accepted or the new position after the moving if accepted + """ + + stop = False + + serial_port.readline()# used to flush the buffer + current_unit = query_unit(serial_port) + if current_unit != unit: + cmd_unit(serial_port, unit = unit) + + # find the factor to calculate the delay to move the grating because the unit of the speed is Å/s + if current_unit == "µm": + fact_speed = 1/10000 + elif current_unit == "nm": + fact_speed = 1/10 + elif current_unit == "A": + fact_speed = 1 + + current_position = query_position(serial_port) + + if stop == False: + # calculate the delay to move the grating + speed = query_speed(serial_port) + dist = abs(position - current_position) + delay = np.ceil(dist *fact_speed / speed * 1000) / 1000 + if print_position == True: + print('delay to move the grating is = ' + str(delay) + ' s') + + cmd = bytes([16]) + HiByte = bytes([int(np.floor(position/256))]) + LoByte = bytes([int(position%256)]) + send_cmd = cmd + 
HiByte + LoByte + serial_port.write(send_cmd) + + # waiting for the grating displacement + time.sleep(delay) + + ret = serial_port.readline() + if len(ret) > 0: + if ret[0] >= 128: + print('Command not accepted') + elif print_position == True: + unit = query_unit(serial_port) + if unit == 'A': + unit_to_print = 'Å' + else: + unit_to_print = unit + pos = query_position(serial_port) + print("wavelength position set to : " + str(pos) + ' ' + unit_to_print) + + + def cmd_scan(serial_port: object, + start_position: int = 400, + end_position: int = 800, + unit: str = 'nm'): + """This command moves the grating between a START position and an END position + + Args: + serial_port (obj): + A object to communicate with the spectrograph by the RS232 serial port. + start_position (int=400 default): + the start position of the scan + end_position (int=800 default): + the end position of the scan + unit (str): "µm", "nm" (default) or "A". + the unit used in the GOTO, SCAN, SIZE, and CALIBRATE commands. + + Returns: + Nothing, just display a message if the command is not accepted + """ + + serial_port.readline()# used to flush the buffer + current_unit = query_unit(serial_port) + if current_unit != unit: + cmd_unit(serial_port, unit = unit) + + cmd = bytes([12]) + start_HiByte = bytes([int(np.floor(start_position/256))]) + start_LoByte = bytes([int(start_position%256)]) + end_HiByte = bytes([int(np.floor(end_position/256))]) + end_LoByte = bytes([int(end_position%256)]) + send_cmd = cmd + start_HiByte + start_LoByte + end_HiByte + end_LoByte + serial_port.write(send_cmd) + + ret = serial_port.readline() + if len(ret) > 0: + if ret[0] >= 128: + print('Command not accepted') + + + def cmd_reset(serial_port: object): + """This command returns the grating to the home position. + + Args: + serial_port (obj): + A object to communicate with the spectrograph by the RS232 serial port. 
+ + Returns: + Nothing, just display a message if the command is accepted or not + """ + + cmd = bytes([255]) + HiByte = bytes([255]) + LoByte = bytes([255]) + send_cmd = cmd + HiByte + LoByte + serial_port.write(send_cmd) + + ret = serial_port.readline() + if len(ret) > 0: + if ret[0] >= 128: + print('Command not accepted') + else: + print('grating goes back to home') + + + def close_serial(serial_port): + """Close the serial port + + Args: + serial_port (obj): + A object to communicate with the spectrograph by the RS232 serial port. + + Returns: + Nothing, just display a message if the command is accepted or not + """ + + serial_port.close() + print("serial port closed") + +#%% Example of how to use the functions +# serial_port = open_serial(comm_port = 'COM3') +# query_echo(serial_port) +# unit = query_unit(serial_port, print_unit = True) +# position = query_position(serial_port, print_position = True) +# grating = query_grating(serial_port, grating, print_grating_info = True) +# speed = query_speed(serial_port, print_speed = True) +# size = query_size(serial_port, print_size = True) + +# cmd_unit(serial_port, unit = 'nm', print_unit = True) +# cmd_size(serial_port, size = 10, print_size = True) +# cmd_speed(serial_port, speed = 4000, print_speed = True) +# cmd_step(serial_port, print_position = True) +# cmd_selectGrating(serial_port, grating_nbr = 2, print_select = True) +# cmd_goto(serial_port, position = 550, unit = 'nm', print_position = True) +# cmd_scan(serial_port, start_position = 400, end_position = 800, unit = 'nm') +# cmd_reset(serial_port) +# close_serial(serial_port) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/spas/spectro_SP_module.py b/spas/spectro_SP_module.py new file mode 100644 index 0000000..8f2ce1b --- /dev/null +++ b/spas/spectro_SP_module.py @@ -0,0 +1,145 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- +""" +Created on Tue Mar 4 09:14:44 2025 + +@author: mahieu +""" +# from spas import spectro_SP_lib + +# 
from spas.spectro_SP_lib import open_serial, close_serial, grating +# from spas.spectro_SP_lib import query_echo, query_unit, query_position, query_grating, query_speed, query_size +# from spas.spectro_SP_lib import cmd_unit, cmd_size, cmd_speed, cmd_step, cmd_goto, cmd_selectGrating, cmd_reset, cmd_scan + +from spas.spectro_SP_lib import Spectro_SP, grating + +from dataclasses_json import dataclass_json +from dataclasses import dataclass, InitVar +from typing import Optional + + +def init_spectro_SP(): + spectro_SP = Spectro_SP() + spectro_SP.serial_port = 10#Spectro_SP.open_serial(comm_port = 'COM3') + print('Spectrograph SP connected') + + return spectro_SP + + +@dataclass_json +@dataclass +class Spectro_SP_parameters(): + """Class containing the spectrograph Spectral Products configurations and status. + + Further information: spectro_SP_lib.py. + + Attributes: + add_illumination_time_us (int): + """ + + def __init__(self, serial_port): + self.serial_port = serial_port + print('__init__') + + serial_port: object + unit: Optional[str] = None + position: Optional[int] = None + grating: Optional[grating] = None + speed: Optional[int] = None + size: Optional[int] = None + print('set to None') + + # spectro_SP: InitVar[Spectro_SP] = None + + class_description: str = 'spectrograph SP parameters' + + + def __post_init__(self): + """ Post initialization of attributes. + + Receives a DMD object and directly asks it for its configurations and + status, then sets the majority of SpectrometerParameters's attributes. + During reconstruction from JSON, DMD is set to None and the function + does nothing, leaving initialization to the standard __init__ function. + + Args: + DMD (ALP4.ALP4, optional): + Connected DMD. Defaults to None.
+ """ + # if spectro_SP == None: + # print('la') + # pass + # else: + # pass + + # self.serial_port = object#Spectro_SP.query_unit(serial_port, print_unit = True) + # serial_port = Spectro_SP.open_serial() + print('sp= ' + str(self.serial_port)) + + self.unit = 'nm' # Spectro_SP.query_unit(Spectro_SP.serial_port, print_unit = True) + self.position = 0 # Spectro_SP.query_position(Spectro_SP.serial_port, print_position = True) + self.grating = 1 # Spectro_SP.query_grating(Spectro_SP.serial_port, grating, print_grating_info = True) + self.speed = 3000 # Spectro_SP.query_speed(Spectro_SP.serial_port, print_speed = True) + self.size = 10 # Spectro_SP.query_size(Spectro_SP.serial_port, print_size = True) + print('ici') + + +def setup_spectro_SP(Spectro_SP_params : Spectro_SP_parameters, + spectro_SP: object, + position: int = 600, + print_position: bool = True, + unit: str = 'nm', + print_unit: bool = True, + grating_nbr: int = 1, + print_select: bool = True, + speed: int = 3000, + print_speed: bool = True, + size: int = 10, + print_size: bool = True): + + Spectro_SP.cmd_goto(spectro_SP.serial_port, position = position, unit = 'nm', print_position = print_position) + Spectro_SP.cmd_unit(spectro_SP.serial_port, unit = unit, print_unit = print_unit) + Spectro_SP.cmd_size(spectro_SP.serial_port, size = size, print_size = print_size) + Spectro_SP.cmd_speed(spectro_SP.serial_port, speed = speed, print_speed = print_speed) + Spectro_SP.cmd_selectGrating(spectro_SP.serial_port, grating_nbr = grating_nbr, print_select = print_select) + + + + + +# #%% Query functions +# def setup_spectro_SP(): +# query_echo(serial_port) +# unit = query_unit(serial_port, print_unit = True) +# position = query_position(serial_port, print_position = True) +# grating = query_grating(serial_port, grating, print_grating_info = True) +# speed = query_speed(serial_port, print_speed = True) +# size = query_size(serial_port, print_size = True) +# #%% command functions +# # Below are examples of the command 
implemented. Please execute one line at a time +# cmd_unit(serial_port, unit = 'nm', print_unit = True) +# cmd_size(serial_port, size = 10, print_size = True) +# cmd_speed(serial_port, speed = 3000, print_speed = True) +# cmd_step(serial_port, print_position = True) +# cmd_selectGrating(serial_port, grating_nbr = 2, print_select = True) +# cmd_goto(serial_port, position = 600, unit = 'nm', print_position = True) +# cmd_scan(serial_port, start_position = 400, end_position = 800, unit = 'nm') +# cmd_reset(serial_port) +# #%% Close serial port. +# close_serial(serial_port) + + + + + + + + + + + + + + + + diff --git a/spas/test_spectro_SP.py new file mode 100644 index 0000000..5f1b7bc --- /dev/null +++ b/spas/test_spectro_SP.py @@ -0,0 +1,31 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- +""" +Created on Tue Mar 4 13:27:41 2025 + +@author: mahieu +""" + +import sys +sys.path.append('/home/mahieu/openspyrit/spas') + +from spas.spectro_SP_module import Spectro_SP_parameters, init_spectro_SP, setup_spectro_SP + + +spectro_SP = init_spectro_SP() + + +Spectro_SP_params = Spectro_SP_parameters(spectro_SP.serial_port) + +Spectro_SP_params = setup_spectro_SP(Spectro_SP_params = Spectro_SP_params, + spectro_SP = spectro_SP, + position = 600, + print_position = True, + unit = 'nm', + print_unit = True, + grating_nbr = 1, + print_select = True, + speed = 3000, + print_speed = True, + size = 10, + print_size = True) \ No newline at end of file diff --git a/spas/transfer_data_to_girder.py index 3e9f53d..eb863d5 100644 --- a/spas/transfer_data_to_girder.py +++ b/spas/transfer_data_to_girder.py @@ -136,7 +136,7 @@ def transfer_data(metadata, acquisition_parameters, spectrometer_params, DMD_par def transfer_data_2arms(metadata, acquisition_parameters, spectrometer_params, DMD_params, camPar, - setup_version, data_folder_name, data_name,
collection_access, upload_metadata): #unwrap structure into camPar try: @@ -145,19 +145,31 @@ def transfer_data_2arms(metadata, acquisition_parameters, spectrometer_params, D camPar.AOI_Width = camPar.rectAOI.s32Width.value camPar.AOI_Height = camPar.rectAOI.s32Height.value except: - camPar['AOI_X'] = camPar['rectAOI'].s32X.value - camPar['AOI_Y'] = camPar['rectAOI'].s32Y.value - camPar['AOI_Width'] = camPar['rectAOI'].s32Width.value - camPar['AOI_Height'] = camPar['rectAOI'].s32Height.value + try: + camPar['AOI_X'] = camPar['rectAOI'].s32X.value + camPar['AOI_Y'] = camPar['rectAOI'].s32Y.value + camPar['AOI_Width'] = camPar['rectAOI'].s32Width.value + camPar['AOI_Height'] = camPar['rectAOI'].s32Height.value + except: + camPar.AOI_X = camPar.rectAOI['s32X'] + camPar.AOI_Y = camPar.rectAOI['s32Y'] + camPar.AOI_Width = camPar.rectAOI['s32Width'] + camPar.AOI_Height = camPar.rectAOI['s32Height'] #%%########################## Girder info ################################# url = 'https://pilot-warehouse.creatis.insa-lyon.fr/api/v1' - collectionId = '6140ba6929e3fc10d47dbe3e'# collection_name = 'spc' + if collection_access == 'private': + parent_data_folder = 'private_data' + else: + parent_data_folder = 'data' + + collectionId = '6140ba6929e3fc10d47dbe3e' txt_file = open('C:/private/no_name.txt', 'r', encoding='utf8') apiKey = txt_file.read() txt_file.close() #%%############################## path #################################### data_path = '../data/' + data_folder_name + '/' + data_name # here, data_name is the subfolder - temp_path = '../temp/data/' + setup_version + '/' + data_folder_name + '/' + data_name + # data_path = '../' + parent_data_folder + '/' + data_folder_name + '/' + data_name + temp_path = '../temp/' + parent_data_folder + '/' + setup_version + '/' + data_folder_name + '/' + data_name #%%######################## erase temp folder ############################# if len(os.listdir('../temp')) != 0: list_TempFolder = os.listdir('../temp') @@ -169,9 
+181,13 @@ def transfer_data_2arms(metadata, acquisition_parameters, spectrometer_params, D
     gc = girder_client.GirderClient(apiUrl=url) # Generate the warehouse client
     gc.authenticate(apiKey=apiKey) # Authentication to the warehouse
     #%%##################### begin data transfer ##############################
-    gc.upload('../temp/data/', collectionId, 'collection', reuseExisting=True)
+    gc.upload('../temp/' + parent_data_folder + '/', collectionId, 'collection', reuseExisting=True)
     #%%############## find data folder id to upload metadata ##################
-    girder_data_folder_id = '6149c3ce29e3fc10d47dbffb'
+    if collection_access == 'private':
+        girder_data_folder_id = '6509b645f5c5008d1c980f6c'
+    else:
+        girder_data_folder_id = '6149c3ce29e3fc10d47dbffb'
+
     version_list = gc.listFolder(girder_data_folder_id, 'folder')
     for version_folder in version_list:
         if version_folder['name'] == setup_version:
@@ -236,6 +252,8 @@ def transfer_data_2arms(metadata, acquisition_parameters, spectrometer_params, D
     dict.update(DMD_params_dict2)
     dict.update(CAM_params_dict2)
 
+    del dict['a)_EXP_date']
+    del dict['a)_EXP_time']
     del dict['a)_EXP_output_directory']
     del dict['a)_EXP_pattern_order_source']
     del dict['a)_EXP_pattern_source']
@@ -246,6 +264,9 @@ def transfer_data_2arms(metadata, acquisition_parameters, spectrometer_params, D
     del dict['b)_ACQ_measurement_time']
     del dict['b)_ACQ_timestamps']
     del dict['b)_ACQ_wavelengths']
+    del dict['b)_ACQ_mask_index']
+    del dict['b)_ACQ_x_mask_coord']
+    del dict['b)_ACQ_y_mask_coord']
     del dict['c)_SPECTRO_initial_available_pixels']
     del dict['c)_SPECTRO_store_to_ram']
     del dict['c)_SPECTRO_class_description']
diff --git a/spas/visualization.py b/spas/visualization.py
index beebe3b..45b9c45 100644
--- a/spas/visualization.py
+++ b/spas/visualization.py
@@ -14,16 +14,13 @@ from spas.plot_spec_to_rgb_image import plot_spec_to_rgb_image
 from spas.noise import noiseClass
 from spas.reconstruction_nn import reorder_subsample, reconstruct
-
+from 
spas.metadata import DMDParameters, read_metadata
+import time
 
 # Libraries for the IDS CAMERA
 try:
     from pyueye import ueye
 except:
     print('ueye DLL not installed')
-# import pyueye as ueye
-# from pyueye import ueye
-import cv2
-import time
 
 def spectral_binning(F: np.ndarray, wavelengths: np.ndarray, lambda_min: int,
                      lambda_max: int, n_bin: int, noise: noiseClass=None
@@ -396,7 +393,7 @@ def plot_color(F: np.ndarray, wavelengths: np.ndarray, filename: str = None,
         cax = divider.append_axes('right', size='5%', pad=0.05)
 
         im = ax.imshow(F[bin_,:,:], cmap=colormap)
-        ax.set_title('$\lambda=$'f'{wavelengths[bin_]:.2f}',
+        ax.set_title('$\\lambda=$'f'{wavelengths[bin_]:.2f}',
                      fontsize=fontsize)
 
         cbar = fig.colorbar(im, cax=cax, orientation='vertical')
@@ -443,47 +440,63 @@ def displayVid(camPar):
 
     Args:
         camPar: a structure containing the parameters of the IDS camera
     """
-    ii = 0
-    start_time = time.time()
+
+    import cv2
+    # Creating a cv2 window
+    window_name = "Camera of the Spatial Arm"
+    cv2.namedWindow(window_name)
+
+    # Create a function 'nothing' for creating trackbar
+    def nothing(x):
+        pass
+
+    # waiting time between two refreshes of the display window
     t1 = camPar.exposureTime/1000
     t2 = 1/camPar.fps
     t_wait = max(t1, t2)
-    print('Press "q" on the new window to exit')
-    window_name = "window_live_openCV"
+
+    first_passage = True
     while 1:
         time.sleep(t_wait) # wait t_wait seconds between two frames
-        ii = ii + 1
-
+
         # extract the data of the image memory
         array = ueye.get_data(camPar.pcImageMemory, camPar.rectAOI.s32Width, camPar.rectAOI.s32Height, camPar.nBitsPerPixel, camPar.pitch, copy=False)
-
-        # ...reshape it in an numpy array... 
+
+        # reshape it into a numpy array
         frame = np.reshape(array,(camPar.rectAOI.s32Height.value, camPar.rectAOI.s32Width.value, camPar.bytes_per_pixel))
-
-        if ii%100 == 0:
-            print('frame max = ' + str(np.amax(frame)))
-            print('frame min = ' + str(np.amin(frame)))
-            print("--- enlapse time :" + str(round((time.time() - start_time)*1000)/100) + 'ms')
-            start_time = time.time()
-        #...and finally display it
-        cv2.imshow(window_name, frame)
-
-        # Press q if you want to end the loop
+        if first_passage:
+            maxi = np.max(frame)
+            print('maxi = ' + str(maxi))
+            print('press "q" to exit')
+            # Creating trackbars for color change
+            cv2.createTrackbar('brightness', window_name, maxi, 510, nothing)
+            first_passage = False
+
+        # Get the current position of the trackbar
+        brightness = cv2.getTrackbarPos('brightness', window_name)
+
+        frame = frame.astype(np.float64)
+        frame2 = frame*brightness/maxi # scale pixel values by the trackbar brightness
+        frame3 = frame2.astype(np.uint8)
+
+        cv2.imshow(window_name, frame3)
+
         if cv2.waitKey(1) & 0xFF == ord('q'):
             cv2.destroyWindow(window_name)
             break
 
 
-def plot_reco_without_NN(acquisition_parameters, GT, Q, all_path):
+def plot_reco_without_NN(acquisition_parameters, GT, all_path):
 
     had_reco_path = all_path.had_reco_path
     fig_had_reco_path = all_path.fig_had_reco_path
+    GT = np.rot90(GT, 2)
+
     if not os.path.exists(had_reco_path):
         np.savez_compressed(had_reco_path, GT)
-
-    GT = np.rot90(GT, 2)
+
     size_x = GT.shape[0]
     size_y = GT.shape[1]
@@ -494,19 +507,19 @@ def plot_reco_without_NN(acquisition_parameters, GT, Q, all_path):
     F_bin_1px_rot = np.rot90(F_bin_1px, axes=(1,2))
     F_bin_1px_flip = F_bin_1px_rot[:,::-1,:]
     ############### spatial view, wavelength bin #############
-    #plt.figure()
+    # plt.figure()
     plot_color(F_bin_flip, wavelengths_bin)
     plt.savefig(fig_had_reco_path + '_BIN_IMAGE_had_reco.png')
     plt.show()
 
     ############### spatial view, one wavelength #############
-    #plt.figure()
+    # plt.figure()
     plot_color(F_bin_1px_flip, wavelengths_bin)
     plt.savefig(fig_had_reco_path + 
'_SLICE_IMAGE_had_reco.png') plt.show() ############### spatial view, wavelength sum ############# - #plt.figure() + # plt.figure() plt.imshow(np.mean(GT[:,:,100:-100], axis=2))#[:,:,193:877] #(540-625 nm) plt.title('Sum of all wavelengths') plt.savefig(fig_had_reco_path + '_GRAY_IMAGE_had_reco.png') @@ -537,9 +550,9 @@ def plot_reco_without_NN(acquisition_parameters, GT, Q, all_path): plt.show() -def plot_reco_with_NN(acquisition_parameters, spectral_data, model, device, network_param, all_path): +def plot_reco_with_NN(acquisition_parameters, spectral_data, model, device, network_param, all_path, cov_path): - reorder_spectral_data = reorder_subsample(spectral_data.T, acquisition_parameters, network_param) + reorder_spectral_data = reorder_subsample(spectral_data.T, acquisition_parameters, network_param, cov_path) reco = reconstruct(model, device, reorder_spectral_data) # Reconstruction reco = reco.T reco = np.rot90(reco, 3, axes=(0,1)) @@ -550,33 +563,33 @@ def plot_reco_with_NN(acquisition_parameters, spectral_data, model, device, netw if not os.path.exists(nn_reco_path): np.savez_compressed(nn_reco_path, reco) - ############### spatial view, one wavelength ############# - meas_bin_1w, wavelengths_bin, _ = spectral_slicing(reorder_spectral_data, acquisition_parameters.wavelengths, 530, 730, 8) - rec = reconstruct(model, device, meas_bin_1w) # Reconstruction - rec = np.rot90(rec, 2, axes=(1,2)) - - #plt.figure() - plot_color(rec, wavelengths_bin) - plt.savefig(fig_nn_reco_path + '_SLICE_IMAGE_nn_reco.png') - plt.show() - ############### spatial view, wavelength bin ############# meas_bin, wavelengths_bin, _ = spectral_binning(reorder_spectral_data, acquisition_parameters.wavelengths, 530, 730, 8) rec = reconstruct(model, device, meas_bin) rec = np.rot90(rec, 2, axes=(1,2)) - #plt.figure() + # plt.figure() plot_color(rec, wavelengths_bin) plt.savefig(fig_nn_reco_path + '_BIN_IMAGE_nn_reco.png') - plt.show() + plt.show() + ############### spatial view, one 
wavelength #############
+    meas_bin_1w, wavelengths_bin, _ = spectral_slicing(reorder_spectral_data, acquisition_parameters.wavelengths, 530, 730, 8)
+    rec = reconstruct(model, device, meas_bin_1w) # Reconstruction
+    rec = np.rot90(rec, 2, axes=(1,2))
+
+    # plt.figure()
+    plot_color(rec, wavelengths_bin)
+    plt.savefig(fig_nn_reco_path + '_SLICE_IMAGE_nn_reco.png')
+    plt.show()
+
     ############### spatial view, wavelength sum #############
     sum_wave = np.zeros((1, reorder_spectral_data.shape[1]))
     moy = np.sum(reorder_spectral_data, axis=0)
     sum_wave[0, :] = moy
     rec_sum = reconstruct(model, device, sum_wave)
     rec_sum = rec_sum[0, :, :]
-    rec_sum = np.rot90(rec_sum, 2)
+    rec_sum = np.rot90(rec_sum, 2) 
 
     # plt.figure()
     plt.imshow(rec_sum)#[:,:,193:877] #(540-625 nm)
@@ -610,3 +623,422 @@ def plot_reco_with_NN(acquisition_parameters, spectral_data, model, device, netw
     plt.savefig(fig_nn_reco_path + '_SPECTRA_PLOT_nn_reco.png')
     plt.show()
 
+def extract_ROI_coord(DMD_params, acquisition_parameters, all_path, data_folder_name: str,
+                      data_name: str, GT: np.ndarray, ti: float, Np: int) -> Tuple[np.ndarray, np.array, np.array]:
+
+    """Extract the coordinates of the ROI drawn in the Hadamard reconstruction matrix.
+
+    Display the sum of the hypercube between the initial and final wavelengths. Draw a ROI
+    to evaluate its coordinates.
+
+    Args:
+        DMD_params (DMDParameters):
+            DMD metadata object to be updated with pattern related data and with
+            memory available after patterns are sent to DMD.
+        acquisition_parameters :
+            Class containing acquisition specifications and timing results.
+        all_path :
+            object that stores all the paths
+        data_folder_name (str):
+            the general folder name of the data to be loaded if the acquisition is not the last one.
+        data_name (str):
+            the folder name of the data to be loaded if the acquisition is not the last one. 
+        GT (np.ndarray):
+            the hyperspectral cube reconstructed by the Hadamard transformation
+        ti (float):
+            integration time of the spectrometer
+        Np (int):
+            Number of pixels of the desired initial image (in one dimension) defined in the main program
+
+    Returns (only for the freehand ROI):
+        mask_index (np.array):
+            A 1D array of the indices of the mask
+        x_mask_coord (np.array):
+            the x coordinates (first and last points) of the rectangle that most closely matches the freehand ROI
+        y_mask_coord (np.array):
+            the y coordinates (first and last points) of the rectangle that most closely matches the freehand ROI
+    """
+
+    import cv2
+
+    if data_name != all_path.data_name and data_name != '':
+        print('Warning, you are reading an old acquisition')
+        print('')
+        # read GT from the old acquisition
+        old_data_path = '../data/' + data_folder_name + '/' + data_name + '/' + data_name + '_had_reco.npz'
+        file_had_reco = np.load(old_data_path)
+        GT = file_had_reco['arr_0']
+
+        # read metadata
+        old_metadata_path = '../data/' + data_folder_name + '/' + data_name + '/' + data_name + '_metadata.json'
+        metadata, acquisition_parameters, spectrometer_parameters, DMD_params = read_metadata(old_metadata_path)
+        ti = spectrometer_parameters.integration_time_ms
+    else:
+        GT = np.rot90(GT, 2)
+
+    # Find the indices that fit the spectral range
+    wavelengths = acquisition_parameters.wavelengths
+    init_lambda = 550
+    final_lambda = 600
+    init_lambda_index = min(range(len(wavelengths)), key=lambda i: abs(wavelengths[i]-init_lambda))
+    final_lambda_index = min(range(len(wavelengths)), key=lambda i: abs(wavelengths[i]-final_lambda))
+
+    GT_sum = np.sum(GT[:,:,init_lambda_index:final_lambda_index], axis=2)
+
+    mask_index = acquisition_parameters.mask_index
+
+    # choose between a freehand drawn ROI or a geometrical drawn ROI
+    ret = input('Draw a freehand[f] or geometrical[g] ROI ? 
[f/g]')
+
+    # freehand drawn ROI
+    if ret == 'f':
+        # current zoom of the displayed image
+        zoom_cu = acquisition_parameters.zoom
+        xw_offset_cu = acquisition_parameters.xw_offset
+        yh_offset_cu = acquisition_parameters.yh_offset
+        Npxx = acquisition_parameters.pattern_dimension_x
+        Npyy = acquisition_parameters.pattern_dimension_y
+        HD = DMD_params.display_height
+        WD = DMD_params.display_width
+
+        if acquisition_parameters.x_mask_coord != [] and acquisition_parameters.x_mask_coord is not None:
+            x_mask_coord_cu = acquisition_parameters.x_mask_coord[0]
+            y_mask_coord_cu = acquisition_parameters.y_mask_coord[0]
+        else:
+            x_mask_coord_cu = 0
+            y_mask_coord_cu = 0
+
+        # GT_sum = np.mean(GT2, axis=2)
+        GT_sum = GT_sum - GT_sum.min()
+        GT_sum = GT_sum * 255 / GT_sum.max()
+        GT_sum = GT_sum.astype(np.uint8)
+
+        Npx = GT_sum.shape[1]
+        Npy = GT_sum.shape[0]
+
+        fac_resize = int(768/np.amax(GT_sum.shape))
+
+        GT_sum_resized = cv2.resize(GT_sum, (Npx*fac_resize, Npy*fac_resize)) # Warning, the "cv2.resize" function inverts the axes
+
+        # transform im into an RGB matrix to use the freehand_ROI function
+        im = np.zeros([GT_sum_resized.shape[0], GT_sum_resized.shape[1], 3], dtype=np.uint8)
+        im[:,:,0] = GT_sum_resized
+        im[:,:,1] = GT_sum_resized
+        im[:,:,2] = GT_sum_resized
+
+        # plt.figure()
+        # plt.imshow(im)
+        # plt.colorbar()
+
+        print('Draw one or more ROIs by holding down the mouse button')
+        print(' ----')
+        print('Press "r" after drawing a ROI to save it')
+        print(' ----')
+        print('Press "e" to erase the previously saved ROI')
+        print(' ----')
+        print('Press "Esc" to exit when done')
+        print(' ----')
+
+        # Draw ROIs to create a mask
+        global drawing, mode, tab_roi
+        drawing = False # true if mouse is pressed
+        mode = True # if True, draw rectangle. 
Press 'm' to toggle to curve
+
+        tab_roi = [] # list of the points of a single ROI
+        tab_all_roi = [] # list of all the ROIs
+
+        # mouse callback function
+        def freehand_ROI(event,former_x,former_y,flags,param):
+            global current_former_x, current_former_y, drawing, mode, tab_roi
+
+            if event==cv2.EVENT_LBUTTONDOWN:
+                drawing=True
+                current_former_x,current_former_y=former_x,former_y
+
+            elif event==cv2.EVENT_MOUSEMOVE:
+                if drawing==True:
+                    if mode==True:
+                        cv2.line(im,(current_former_x,current_former_y),(former_x,former_y),(0,0,255),5)
+                        current_former_x = former_x
+                        current_former_y = former_y
+                        # print('(x,y) = (' + str(former_x) + ',' + str(former_y) + ')')
+                        tab_roi.append([former_x, former_y])
+            elif event==cv2.EVENT_LBUTTONUP:
+                drawing=False
+                if mode==True:
+                    cv2.line(im,(current_former_x,current_former_y),(former_x,former_y),(0,0,255),5)
+                    current_former_x = former_x
+                    current_former_y = former_y
+
+        # Draw ROIs, return a tab with their coordinates
+        cv2.namedWindow("HyperSpectral Cube")
+        cv2.setMouseCallback('HyperSpectral Cube', freehand_ROI)
+        incROI = 0
+        while(1):
+            cv2.imshow('HyperSpectral Cube',im)
+            k=cv2.waitKey(1)&0xFF
+            if k==27: # Press "Esc" to exit
+                break
+            elif k == 114: # Press the 'r' letter to save the drawn ROI
+                incROI = incROI + 1
+                tab_all_roi.append(np.array(tab_roi))
+                tab_roi = []
+                print('ROI n° ' + str(incROI) + ' saved')
+            elif k == 101: # Press the 'e' letter to erase the previous ROI
+                del tab_all_roi[incROI-1]
+                tab_roi = []
+                print('ROI n° ' + str(incROI) + ' deleted')
+                incROI = incROI - 1
+
+            time.sleep(0.1)
+
+        cv2.destroyAllWindows()
+
+        # Draw the mask by filling in the contours of the ROIs
+        mask = np.zeros(im.shape[:2], dtype=np.uint8)
+        for i in range(len(tab_all_roi)):
+            cv2.drawContours(mask, [tab_all_roi[i]], -1, 1, thickness=cv2.FILLED)
+
+        # plt.figure()
+        # plt.imshow(mask)
+        # plt.title('mask')
+
+        # Rotate the mask as the image is rotated
+        mask_rot = np.rot90(mask, 2)
+
+        # plt.figure()
+        # plt.imshow(mask_rot)
+        # 
plt.title('mask_rot')
+
+        # resize the image with respect to the DMD size of the previous acquisition
+        # mask_rot_re = cv2.resize(mask_rot, (int(Npx*HD/(Np*zoom_cu)), int(Npy*HD/(Np*zoom_cu))))
+
+        # plt.figure()
+        # plt.imshow(mask_rot_re)
+        # plt.title('mask_rot_re')
+
+        mask_rot_re = cv2.resize(mask_rot, (HD, HD))
+
+        # plt.figure()
+        # plt.imshow(mask_rot_re)
+        # plt.title('mask_rot_re')
+
+        # define the x, y offset in the full HD of the DMD (offset in micromirror units)
+        lim_mask_rot = np.stack(np.nonzero(mask_rot_re), axis=-1)
+        y_mask_coord_HD = np.array([lim_mask_rot[:,0].min(), lim_mask_rot[:,0].max()], dtype=np.uint64)
+        x_mask_coord_HD = np.array([lim_mask_rot[:,1].min(), lim_mask_rot[:,1].max()], dtype=np.uint64)
+        # print('x_mask_HD = ' + str(x_mask_coord_HD))
+        # print('y_mask_HD = ' + str(y_mask_coord_HD))
+        # for information, to be commented out later
+        x_mask_HD_length = x_mask_coord_HD[1] - x_mask_coord_HD[0]
+        y_mask_HD_length = y_mask_coord_HD[1] - y_mask_coord_HD[0]
+        # print('x_mask_HD_length = ' + str(x_mask_HD_length))
+        # print('y_mask_HD_length = ' + str(y_mask_HD_length))
+
+        yh_offset = int(y_mask_coord_HD[0] + yh_offset_cu)
+        xw_offset = int(x_mask_coord_HD[0] + xw_offset_cu)
+
+        # find the best zoom factor
+        zoom_tab = [1, 2, 3, 4, 6, 12, 24, 48, 96, 192, 384, 768]
+        indx_1 = np.where(mask.ravel() > 0)[0]
+        zoom_ratio = np.sqrt(HD**2/len(indx_1)) * zoom_cu
+        for zoom_c in zoom_tab:
+            if zoom_ratio <= zoom_c:
+                break
+        zoom_i = zoom_c
+
+        print('Suggested zoom = x' + str(zoom_i))
+        print('With image size = (' + str(Npx) + ',' + str(Npy) + ')')
+        val = input("Are you OK with this zoom factor? [y/n] ")
+        if val == 'n':
+            zoom_input = input('please enter the zoom factor you want: ')
+            zoom_i = int(zoom_input)
+            print('Selected zoom = x' + str(zoom_i))
+            print('With image size = (' + str(Npx) + ',' + str(Npy) + ')')
+
+        val = input("Are you OK with this image size? [y/n] ")
+        if val == 'n':
+            Np_new = input('please enter the image side size you want (Np): ') 
+            Npx = int(Np_new)
+            Npy = int(Np_new)
+            print('Selected zoom = x' + str(zoom_i))
+            print('With image size = (' + str(Npx) + ',' + str(Npy) + ')')
+
+        mask_re = cv2.resize(mask_rot, (int(Npx*zoom_i/zoom_cu), int(Npy*zoom_i/zoom_cu)))
+
+        # plt.figure()
+        # plt.imshow(mask_re)
+        # plt.title('mask resized')
+
+        # find the coordinates and lengths of the rectangular ROI that most closely matches the freehand ROI
+        lim_mask_re = np.stack(np.nonzero(mask_re), axis=-1)
+        y_mask_coord = np.array([lim_mask_re[:,0].min(), lim_mask_re[:,0].max()], dtype=np.uint64)
+        x_mask_coord = np.array([lim_mask_re[:,1].min(), lim_mask_re[:,1].max()], dtype=np.uint64)
+        # print('x_mask = ' + str(x_mask_coord))
+        # print('y_mask = ' + str(y_mask_coord))
+        x_mask_length = x_mask_coord[1] - x_mask_coord[0]
+        y_mask_length = y_mask_coord[1] - y_mask_coord[0]
+        # print('x_mask_length = ' + str(x_mask_length))
+        # print('y_mask_length = ' + str(y_mask_length))
+
+        # Crop the mask to the rectangle that most closely matches the freehand ROI
+        mask_re_crop = mask_re[y_mask_coord[0]:y_mask_coord[1], x_mask_coord[0]:x_mask_coord[1]]
+
+        # plt.figure()
+        # plt.imshow(mask_re_crop)
+        # plt.title('mask_re_crop')
+
+
+        # find the indices in the resized mask
+        mask_index = np.where(mask_re_crop.ravel() > 0)[0]
+        mask_element_nbr = len(mask_index)
+        diff = (mask_element_nbr - Np**2) / (Np**2) * 100
+
+        if diff > 0:
+            print('!!! 
Warning, the ROI is ' + str(int(diff)) + '% larger than the pattern, this will lead to an error !!!')
+            print(' => please change the size of the ROI !!!')
+        else:
+            print('loss of ' + str(-int(diff)) + ' % of the pattern size')
+            print(' => ok, the ROI is smaller than the pattern')
+
+        print('---------------')
+        print('Set the following offsets in the "Setup acquisition" cell:')
+        print('xw_offset = ' + str(int(xw_offset)))
+        print('yh_offset = ' + str(int(yh_offset)))
+
+        # Display the masked image
+        im_re = cv2.resize(im, (int(Npx*zoom_i/zoom_cu), int(Npy*zoom_i/zoom_cu)))
+        mask_re_rot = np.rot90(mask_re, 2)
+
+        im2 = np.mean(im_re, axis=2)
+        im_mask = im2*mask_re_rot
+
+        plt.figure()
+        plt.imshow(im_mask)
+        plt.title('masked image')
+
+    # geometrical drawn ROI
+    elif ret == 'g':
+        # Convert the hypercube into an RGB image to be read by the "selectROI" function
+        GT_sum_pos = GT_sum - np.min(GT_sum)
+        GT_sum_8bit = np.array(GT_sum_pos/np.amax(GT_sum_pos)*255, dtype=np.uint8)
+        colored_img = np.stack((GT_sum_8bit,)*3, axis=-1)
+
+        Np = GT.shape[0]
+        # resize the image for the "selectROI" function
+        HD = DMD_params.display_height # 768 => the DMD height
+        WD = DMD_params.display_width # 1024 => the DMD width
+
+        # current zoom and offset of the displayed image
+        zoom_cu = acquisition_parameters.zoom
+        xw_offset_cu = acquisition_parameters.xw_offset
+        yh_offset_cu = acquisition_parameters.yh_offset
+        # if zoom_cu == 1: # warning, to be changed if zoom = x1 and the x offset is different from 128 !!! 
+        #     xw_offset_cu = (WD - HD)/2
+        #     yh_offset_cu = 0
+
+        # Draw the ROI
+        print('Draw a ROI in the image by holding the left mouse button')
+        print('Press "ENTER" when done')
+        x, y, w, h = cv2.selectROI(cv2.resize(colored_img, (HD, HD)))
+        cv2.destroyAllWindows()
+
+        # rescale the drawn ROI to take the current zoom into account
+
+        # calculate the size factor between the drawn ROI and the future ROI
+        fac = np.sqrt(HD**2 / (w*h))
+        fac_round = np.round(fac)
+        dif_fac = fac_round - fac
+
+        # Available digital zoom
+        zoom_tab = [1, 2, 3, 4, 6, 12, 24, 48, 96, 192, 384, 768]
+        x = x / zoom_cu
+        y = y / zoom_cu
+        w = w / zoom_cu
+        h = h / zoom_cu
+
+        # calculate the center of the ROI
+        Cx = x + w / 2
+        Cy = y + h / 2
+
+        # calculate the center of the ROI in the rotated image
+        Crx = HD / zoom_cu - Cx
+        Cry = HD / zoom_cu - Cy
+
+
+
+        # find the index in zoom_tab of the current zoom
+        inc_cu = zoom_tab.index(zoom_cu)
+
+        # Define the two nearest zooms of the drawn ROI
+        inc = inc_cu
+
+        for zoom in zoom_tab:
+            if fac <= zoom:
+                break
+            inc = inc + 1
+
+        if dif_fac <= 0:
+            zoom_range = np.array([zoom_tab[inc], zoom_tab[inc+1]], dtype=float)
+        else:
+            zoom_range = np.array([zoom_tab[inc-1], zoom_tab[inc]], dtype=float)
+
+        w_roi = HD/zoom_range
+
+        # calculate the difference between the drawn ROI and the final ROI (due to the available zooms)
+        diff_size = np.round((w_roi**2 - w*h)/(w*h) * 10000) / 100
+
+        # calculate x, y at the top left of the ROI
+        x_roi = Crx - w_roi/2
+        y_roi = Cry - w_roi/2
+
+        # because the DMD is a rectangle and the pattern is square
+        xw_offset = x_roi + xw_offset_cu
+        yh_offset = y_roi + yh_offset_cu
+
+        # calculate the new integration time and acquisition time
+        new_ti = ti*zoom_range**2
+
+        # display the result
+        for inc in range(len(zoom_range)):
+            print('------------------------------------')
+            print('Zoom = x' + str(int(zoom_range[inc])))
+            print('This leads to a change of the drawn ROI by ' + str(diff_size[inc]) + ' %')
+
+            print('xw_offset = ' + 
str(int(xw_offset[inc]))) + print('yh_offset = ' + str(int(yh_offset[inc]))) + print('Suggested new ti = ' + str(new_ti[inc]) + (' ms')) + print('Leading to a total acq time : ' + str(int(acquisition_parameters.pattern_amount*(new_ti[inc]+0.356)/1000 // 60)) + ' min ' + + str(round(acquisition_parameters.pattern_amount*new_ti[inc]/1000 % 60)) + ' s') + print('------------------------------------') + + mask_index = [] + x_mask_coord = [] + y_mask_coord = [] + else: + print('Bad entry, aborted !!!') + x_mask_coord = acquisition_parameters.x_mask_coord + y_mask_coord = acquisition_parameters.y_mask_coord + mask_index = acquisition_parameters.mask_index + + return mask_index, x_mask_coord, y_mask_coord + + + + + + + + + + + + + + + + + + + +
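For reference, the zoom-selection rule used in the freehand branch of `extract_ROI_coord` above (pick the smallest available digital zoom whose field of view still covers the drawn mask) can be sketched standalone. This is a simplified extract, not part of the patch: the `zoom_tab` values and the 768-pixel DMD height come from the code above, while the helper name `suggest_zoom` and its signature are ours.

```python
import math

# Digital zoom factors available on the DMD (divisors of its 768-pixel height),
# as listed in zoom_tab in the patch above
ZOOM_TAB = [1, 2, 3, 4, 6, 12, 24, 48, 96, 192, 384, 768]

def suggest_zoom(mask_pixel_count: int, zoom_cu: int = 1, HD: int = 768) -> int:
    """Smallest available zoom whose field of view still covers the drawn mask."""
    # ratio of the full display area to the mask area, expressed in absolute
    # zoom units by multiplying with the current zoom
    zoom_ratio = math.sqrt(HD**2 / mask_pixel_count) * zoom_cu
    for zoom_c in ZOOM_TAB:
        if zoom_ratio <= zoom_c:
            break
    return zoom_c

# a mask covering a quarter of the display area gives a ratio of 2
print(suggest_zoom(768 * 768 // 4))  # prints 2
```

As in the original loop, a mask covering the whole display maps to zoom x1, and since a mask contains at least one pixel the ratio never exceeds x768, so the loop always terminates on a valid entry.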