Merging autotracker #156
base: main
Conversation
…into autotracker
…umes that there are shifts computed and returned.
@ieivanov so far this works. When you review this, I would like feedback on:

As a side note, I am thinking a bit ahead and making sure this can also be used with only one arm, either `lf` or `ls`.
```python
shifts_zyx_um = np.array(shifts_zyx_pix) * self.scale

# Limit the shifts_zyx, preserving the sign of the shift
self.shifts_zyx = np.sign(shifts_zyx_um) * np.minimum(np.abs(shifts_zyx_um), self.shift_limit)
```
Are we going for an all-or-nothing strategy, or is it okay if the shift is off in one dimension?
`len(self.shift_limit) == 3`; here we are capping each dimension to its max allowed value.
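The per-dimension cap can be illustrated in isolation. This is a standalone sketch; the `shift_limit` values below are made-up placeholders, not the actual config values:

```python
import numpy as np

# Hypothetical per-axis limits in um for (Z, Y, X); real values come from the config
shift_limit = np.array([5.0, 50.0, 50.0])

# Example computed shifts, exceeding the limit in Z (negative) and X (positive)
shifts_zyx_um = np.array([-8.0, 30.0, 120.0])

# Cap the magnitude per axis while preserving the sign of each shift
capped = np.sign(shifts_zyx_um) * np.minimum(np.abs(shifts_zyx_um), shift_limit)
print(capped)  # [-5. 30. 50.]
```

Each axis is clamped independently, so a shift that is within bounds in Y passes through unchanged even when Z and X are capped.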
```diff
 # shifts_zyx in px to shifts_zyx in um
-self.shifts_zyx = np.array(shifts_zyx) * self.scale
+shifts_zyx_um = np.array(shifts_zyx_pix) * self.scale
```
TODO: make sure `self.scale` is of length 3, and that the Z, Y, X scales are correct. The Z scale changes per experiment and should be fetched or passed from the metadata.
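A minimal sketch of that validation and conversion, assuming `scale` holds (Z, Y, X) pixel sizes in um/px (the numbers here are placeholders, not real experiment metadata):

```python
import numpy as np

# Placeholder (Z, Y, X) scale in um/px; the Z value should come from experiment metadata
scale = np.asarray([2.0, 0.65, 0.65])
assert scale.shape == (3,), "scale must hold exactly the Z, Y, X pixel sizes"

# Convert a pixel-space shift to physical units, axis by axis
shifts_zyx_pix = np.array([3, -10, 8])
shifts_zyx_um = shifts_zyx_pix * scale
print(shifts_zyx_um)  # [ 6.  -6.5  5.2]
```

Validating the length up front catches the case where a 2D (Y, X) scale is passed and the Z shift would silently be scaled by the wrong value.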
@tayllatheodoro yesterday we found that the calculated shifts are not being applied in the acquisition. Here are next steps I suggest:
```python
def update_position_autotracker(self):
    for p_idx in range(len(self.position_settings.xyz_positions)):
        # read the new position from the CSV file; there is a separate file for each position
        pos_label = self.position_settings.position_labels[p_idx]
        csv_filepath = output_shift_path / f'{pos_label}.csv'
        updated_position = ...  # read the CSV file and grab the XYZ position at the last row
        self.position_settings.xyz_positions[p_idx] = updated_position
```
I think a lot of this code can be written offline and tested at the end on the microscope. I'll also set up the automaton for you to do testing there.
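The elided CSV read above could look roughly like this. The column names (`x`, `y`, `z`) are assumptions, since the actual shift-file format isn't shown here:

```python
import csv
from pathlib import Path


def read_last_xyz(csv_filepath: Path) -> tuple[float, float, float]:
    """Return the (X, Y, Z) position stored in the last row of a per-position CSV."""
    with open(csv_filepath, newline='') as f:
        rows = list(csv.DictReader(f))
    last = rows[-1]
    # Assumed column names; adjust to match the autotracker's actual output files
    return float(last['x']), float(last['y']), float(last['z'])
```

This is easy to test offline by writing a small CSV fixture and checking that the last row is returned, before ever touching the microscope.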
This is the code I used to generate a toy dataset: I took an image, then shifted and center-cropped it at each timepoint.

```python
# %%
from pathlib import Path

import napari
import numpy as np
from iohub import open_ome_zarr

# %%
viewer = napari.Viewer()

# %%
input_data_path = Path(
    '/hpc/projects/tlg2/virtual_staining/20240801_2dpf_she_h2b_gfp_cldnb_lyn_mscarlet_VS/0-zarr/55hpf_1/55hpf_1.zarr'
)
key = '/0/2/0'
dataset = open_ome_zarr(input_data_path)
T, C, Z, Y, X = dataset[key].data.shape

# %%
crop = 900
t_slice = slice(0, 5)
x_slice = slice(X // 2 - crop // 2, X // 2 + crop // 2)
y_slice = slice(Y // 2 - crop // 2, Y // 2 + crop // 2)
z_slice = slice(35, 66)
channel_idx = [0, 1, 3]
data = dataset[key][0].oindex[t_slice, channel_idx, z_slice, y_slice, x_slice]

# %%
viewer.add_image(data)

# %%
from scipy.ndimage import shift
from tqdm import tqdm

# Example translations for each timepoint (Z, Y, X)
translations = [
    (0, 0, 0),      # Shift for timepoint 0
    (5, -80, 80),   # Shift for timepoint 1
    (9, -50, -50),  # Shift for timepoint 2
    (-5, 30, -60),  # Shift for timepoint 3
    (0, 30, -80),   # Shift for timepoint 4
]

# Calculate crop boundaries
T, C, Z, Y, X = data.shape
crop = 600
z_slice = slice(10, 26)
y_slice = slice(Y // 2 - crop // 2, Y // 2 + crop // 2)
x_slice = slice(X // 2 - crop // 2, X // 2 + crop // 2)

# Applying the translations
shifted_data = np.zeros(
    (
        T,
        C,
        z_slice.stop - z_slice.start,
        y_slice.stop - y_slice.start,
        x_slice.stop - x_slice.start,
    )
)
for t in tqdm(range(T)):
    shifted = np.zeros_like(data[t])
    for c in tqdm(range(len(channel_idx)), leave=False):
        # Apply shifts per channel
        shifted[c] = shift(data[t, c], shift=translations[t])
    # Crop the shifted volume so the empty borders are discarded
    shifted_data[t] = shifted[:, z_slice, y_slice, x_slice]

# %%
viewer.add_image(shifted_data)

# %%
output_path = './toy_translate.zarr'
with open_ome_zarr(
    output_path, mode='w', channel_names=['BF', 'nuc', 'mem'], layout='hcs'
) as output_dataset:
    position = output_dataset.create_position('0', '0', '0')
    position.create_image('0', data=shifted_data)
# %%
```
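As a sanity check on datasets like this, a known translation can be recovered with FFT-based phase correlation. This is a standalone numpy sketch (not the autotracker's actual registration code), using a synthetic image and a known 2D circular shift:

```python
import numpy as np


def phase_correlation_shift(ref, moving):
    """Estimate the integer (Y, X) shift d such that np.roll(moving, d, axis=(0, 1)) == ref."""
    cross_power = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    cross_power /= np.abs(cross_power) + 1e-12  # normalize to keep only the phase
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around)
    return tuple(int(p) - s if p > s // 2 else int(p) for p, s in zip(peak, corr.shape))


rng = np.random.default_rng(0)
ref = rng.random((128, 128))
moving = np.roll(ref, shift=(-7, 12), axis=(0, 1))  # apply a known circular shift
print(phase_correlation_shift(ref, moving))  # (7, -12): the shift that undoes it
```

Running the same kind of check per timepoint against timepoint 0 of the toy dataset should recover (the Y, X part of) the `translations` list above, up to the cropping.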
This PR adds an autotracker (smart microscopy) tool to shrimPy.
For this PR, I hope to accomplish the following:
Part 1:
Part 2:
- `microscope_settings` in the config. This gives the flexibility to be used for `lf` or `ls`.
- `Mantis_acqusition.go_to_position()`
Part 3: