
Merging autotracker #156

Open · wants to merge 15 commits into main

Conversation

@edyoshikun (Contributor) commented Aug 9, 2024

This PR adds the autotracker (smart microscopy) tool to shrimPy.

For this PR, I hope to accomplish the following:

Part 1:

  • Check out the old code and test the different algorithms
  • Ensure the tracking algorithms work for our imaging modalities
  • Write tests for the tracking algorithms

Part 2:

  • Integrate the autotracker as part of microscope_settings in the config. This gives the flexibility to use it with either the label-free (lf) or light-sheet (ls) arm.
  • Add separate autotracking settings that configure the method, ROI, and tracking frequency
  • Add a function that relies on the current Mantis_acquisition.go_to_position()
  • Test the autotracker with the Demo mode.
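The autotracking settings described in Part 2 could be sketched as a config model. This is only an illustration: the field names (`method`, `roi`, `tracking_interval`, `arm`, `shift_limit_um`) are assumptions, not the PR's final schema.

```python
from dataclasses import dataclass
from typing import Literal, Optional, Tuple


@dataclass
class AutotrackerSettings:
    """Hypothetical sketch of per-arm autotracking settings.

    Field names are illustrative assumptions, not the PR's actual schema.
    """

    method: Literal['phase_cross_corr', 'template_matching'] = 'phase_cross_corr'
    roi: Optional[Tuple[int, int, int, int]] = None  # (y0, x0, height, width); None = full frame
    tracking_interval: int = 1  # run the tracker every N timepoints
    arm: Literal['lf', 'ls'] = 'lf'  # label-free or light-sheet
    shift_limit_um: Tuple[float, float, float] = (5.0, 50.0, 50.0)  # max Z, Y, X correction


settings = AutotrackerSettings(roi=(0, 0, 256, 256), tracking_interval=2)
```

Keeping the settings in one object like this makes it easy to attach a separate instance to each arm's microscope_settings block.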

Part 3:

  • Test this with a test target or live sample

@edyoshikun edyoshikun requested a review from ieivanov August 16, 2024 00:44
@edyoshikun (Contributor, Author) commented Aug 16, 2024

@ieivanov so far this works using the demo.cfg in automaton. I am providing some fake calculated shifts and added a sleep() to resemble the actual tracking methods calculating the shifts.

When you review this, I would like feedback on:

  • Do you foresee any issues with the current implementation? Does the current object autotracker_hook_fn() take the proper parameters? Pay attention to how I am modifying position_settings.xyz_positions.
  • Currently, in demo mode the autotracker() only applies XY shifts, given that Z is not part of the demo mode. When I read the 'Z' from the stage positions, it returns 'None'. Is that a bug or a feature?
  • When I test this on mantis using the mantis config, I think the ZYX shifts should be fed to the ASI ZYX stage and not the LF remote refocus piezo. Do you agree?

As a side note, I am thinking a bit ahead and making sure this can be used with only one arm, either label-free (lf) or light-sheet (ls).
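The demo-mode stand-in described above (canned shifts plus a sleep() to mimic computation time) can be sketched as follows. The function name, delay, and return values here are assumptions for illustration, not the PR's actual hook:

```python
import time

import numpy as np


def fake_autotracker_hook_fn(compute_delay_s: float = 0.1) -> np.ndarray:
    """Return a canned ZYX shift after a delay, mimicking a real tracker.

    Hypothetical stand-in for testing the acquisition plumbing in demo mode.
    """
    time.sleep(compute_delay_s)  # stand-in for the actual shift computation
    # demo mode has no Z stage, so the Z component stays 0
    return np.array([0.0, 5.0, -5.0])


shift_zyx = fake_autotracker_hook_fn(compute_delay_s=0.01)
```

A stand-in like this lets the acquisition loop and position-update plumbing be exercised end to end before any real registration algorithm is plugged in.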

@edyoshikun edyoshikun marked this pull request as ready for review September 18, 2024 23:14
shifts_zyx_um = np.array(shifts_zyx_pix) * self.scale

# Limit the shifts_zyx, preserving the sign of the shift
self.shifts_zyx = np.sign(shifts_zyx_um) * np.minimum(np.abs(shifts_zyx_um), self.shift_limit)
Contributor Author:
are we going for an all-or-nothing strategy, or is it OK if the shift is off in one dimension?

Collaborator:
len(self.shift_limit) == 3; here we are capping each dimension to its max allowed value.
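The per-dimension capping can be checked quickly with toy numbers (the shift and limit values below are made up):

```python
import numpy as np

# made-up shifts (um) and per-axis limits, in Z, Y, X order
shifts_zyx_um = np.array([12.0, -80.0, 3.0])
shift_limit = np.array([5.0, 50.0, 20.0])

# cap each axis at its own limit while preserving the sign of the shift
capped = np.sign(shifts_zyx_um) * np.minimum(np.abs(shifts_zyx_um), shift_limit)
# capped is [5.0, -50.0, 3.0]: Z and Y are clipped, X passes through unchanged
```

So each dimension is limited independently rather than all-or-nothing: an out-of-range Y shift does not prevent a valid X correction from being applied.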


  # shifts_zyx in px to shifts_zyx in um
- self.shifts_zyx = np.array(shifts_zyx) * self.scale
+ shifts_zyx_um = np.array(shifts_zyx_pix) * self.scale
Collaborator:

TODO: make sure self.scale is of length 3, and that the Z, Y, X scales are correct. The Z scale changes per experiment and should be fetched from, or passed in via, the metadata.

@ieivanov (Collaborator)

@tayllatheodoro yesterday we found that the calculated shifts are not being applied in the acquisition. Here are the next steps I suggest:

  • I'm not 100% sure that the phase_cross_corr function correctly computes the shift. Yesterday we acquired datasets where we manually shifted the image using the XY stage. You can take these data, run them through the phase_cross_corr function and confirm that the computed shift makes sense. Update parameters of the phase_cross_corr function as needed. This can happen offline, i.e. not at the microscope. You can read the acquired data using the iohub read_images function: https://github.com/czbiohub-sf/iohub/blob/main/iohub/reader.py#L126
  • Refactor autotracker_hook_fn such that it computes a shift, calculates the new target position for that field of view, and saves that target position in a file. It should not be updating the position list.
  • Refactor update_position_autotracker such that it reads the file saved by autotracker_hook_fn and updates the position with the latest entry. This can be the same as the current position, or an updated position given the computed shift. Note that update_position_autotracker is called for every position and timepoint; autotracker_hook_fn runs on a separate thread with a delay because the computation takes time, so a shift may not always be available.
  • I don't see a need for the variable position_settings.xyz_positions_shift; I think you can remove it. Here is pseudocode for what update_position_autotracker could look like:
def update_position_autotracker(self):
    for p_idx in range(len(self.position_settings.xyz_positions)):
        # read the new position from the CSV file; there is a separate file for each position
        pos_label = self.position_settings.position_labels[p_idx]
        csv_filepath = output_shift_path / f'{pos_label}.csv'
        updated_position = ...  # read the CSV file and grab the XYZ position at the last row
        self.position_settings.xyz_positions[p_idx] = updated_position
  • The CSV files should be saved under /log rather than in the acquisition directory.
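The file handoff between autotracker_hook_fn (writer) and update_position_autotracker (reader) could look like the sketch below. The column layout (a time index plus XYZ) and the position label are assumptions for illustration:

```python
import csv
import tempfile
from pathlib import Path

log_dir = Path(tempfile.mkdtemp())  # stand-in for the acquisition's /log directory
csv_filepath = log_dir / 'Pos000.csv'  # hypothetical position label


def append_target_position(path, t_idx, xyz):
    # autotracker_hook_fn side: append the newly computed target position
    new_file = not path.exists()
    with open(path, 'a', newline='') as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(['t_idx', 'x', 'y', 'z'])
        writer.writerow([t_idx, *xyz])


def read_latest_position(path):
    # update_position_autotracker side: grab the XYZ position at the last row
    with open(path, newline='') as f:
        rows = list(csv.reader(f))
    if len(rows) < 2:  # header only or missing: no shift available yet
        return None
    return [float(v) for v in rows[-1][1:]]


append_target_position(csv_filepath, 0, (100.0, 200.0, 30.0))
append_target_position(csv_filepath, 1, (102.5, 198.0, 30.5))
latest = read_latest_position(csv_filepath)
# latest is [102.5, 198.0, 30.5]
```

Because the reader only ever takes the last row, it naturally handles the case where the hook thread has not produced a new shift yet: the stage simply returns to the most recent known target.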

I think a lot of this code can be written offline and tested at the end on the microscope. I'll also set up the automaton for you to do testing there.
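The first step above, checking that the shift computation makes sense on a known displacement, can be prototyped offline with a minimal FFT-based phase correlation. This is a generic sketch, not the PR's phase_cross_corr implementation:

```python
import numpy as np


def phase_corr_shift(ref, moving):
    """Estimate the integer shift d such that moving == ref rolled by d."""
    # normalized cross-power spectrum; its inverse FFT peaks at the shift
    cross = np.fft.fftn(moving) * np.conj(np.fft.fftn(ref))
    corr = np.fft.ifftn(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts past the halfway point back to negative values
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))


rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64))
moving = np.roll(ref, shift=(5, -8), axis=(0, 1))  # known displacement
recovered = phase_corr_shift(ref, moving)
# recovered == (5, -8)
```

Note that np.roll is circular, so recovery here is exact; on real stage-shifted acquisitions the field of view is not periodic, so some error is expected at the edges, and cropping or windowing before the FFT helps.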

@edyoshikun (Contributor, Author) commented Feb 21, 2025

This is the code I used to generate a toy dataset. I took an image, shifted it, and center-cropped it to make toy_dataset.zarr. Then I used the code already committed here to test. The algorithms should work for both label-free and fluorescence channels.

# %%
from pathlib import Path

import napari
import numpy as np
from iohub import open_ome_zarr

# %%
viewer = napari.Viewer()
# %%
input_data_path = Path(
    '/hpc/projects/tlg2/virtual_staining/20240801_2dpf_she_h2b_gfp_cldnb_lyn_mscarlet_VS/0-zarr/55hpf_1/55hpf_1.zarr'
)
key = '/0/2/0'
dataset = open_ome_zarr(input_data_path)
T, C, Z, Y, X = dataset[key].data.shape

# %%
crop = 900
t_slice = slice(0, 5)
x_slice = slice(X // 2 - crop // 2, X // 2 + crop // 2)
y_slice = slice(Y // 2 - crop // 2, Y // 2 + crop // 2)
z_slice = slice(35, 66)
channel_idx = [0, 1, 3]
data = dataset[key][0].oindex[t_slice, channel_idx, z_slice, y_slice, x_slice]

# %%
viewer.add_image(data)
# %%
from scipy.ndimage import shift
from tqdm import tqdm

# Example translations for each timepoint (Z, Y, X)
translations = [
    (0, 0, 0),  # Shift for timepoint 0
    (5, -80, 80),  # Shift for timepoint 1
    (9, -50, -50),  # Shift for timepoint 2
    (-5, 30, -60),  # Shift for timepoint 3
    (0, 30, -80),  # Shift for timepoint 4
]

# Calculate crop boundaries
T, C, Z, Y, X = data.shape
crop = 600
z_slice = slice(10, 26)
y_slice = slice(Y // 2 - crop // 2, Y // 2 + crop // 2)
x_slice = slice(X // 2 - crop // 2, X // 2 + crop // 2)

# Applying the translations
shifted_data = np.zeros(
    (
        T,
        C,
        z_slice.stop - z_slice.start,
        y_slice.stop - y_slice.start,
        x_slice.stop - x_slice.start,
    )
)
for t in tqdm(range(T)):
    shifted = np.zeros_like(data[t])
    for c in tqdm(range(len(channel_idx)), leave=False):
        # Apply shifts and crop
        shifted[c] = shift(data[t, c], shift=translations[t])
    # Crop the shifted volume
    shifted_data[t] = shifted[:, z_slice, y_slice, x_slice]
# %%
viewer.add_image(shifted_data)
# %%
output_path = './toy_translate.zarr'
with open_ome_zarr(
    output_path, mode='w', channel_names=['BF', 'nuc', 'mem'], layout='hcs'
) as output_dataset:
    position = output_dataset.create_position('0', '0', '0')
    position.create_image('0', data=shifted_data)

# %%
