
Scaling fitted transform #40


Open

racng opened this issue Jan 15, 2025 · 2 comments


racng commented Jan 15, 2025

Thank you for creating this useful tool and writing great tutorials.
I have tried using your package to align an H&E image with an immunofluorescence (IF) image. For quick troubleshooting, I downsized the two images by factors of 100 and 50, respectively. I was able to fit a functional model. However, I am having trouble figuring out whether it is possible to use the results to transform the original high-resolution images.

The shapes of these scaled-down images are (3, 207, 203) and (3, 206, 204).

import numpy as np
import torch
from PIL import Image
from STalign import STalign

# Downscale images; note PIL's Image.size is (width, height)
width, height = img_he.size # (20704, 20325)
new_width = int(width / 100)
new_height = int(height / 100)
img_he_lores = img_he.resize((new_width, new_height),
    Image.LANCZOS)

width, height = img_if.size # (10318, 10206)
new_width = int(width / 50)
new_height = int(height / 50)
img_if_lores = img_if.resize((new_width, new_height),
    Image.LANCZOS)

# Convert to arrays and normalize values to range from 0 to 1
Inorm = STalign.normalize(np.asarray(img_he_lores))
print(Inorm.min())
print(Inorm.max())
# Remove scale bar
Inorm[0:20, 175:, :] = 1

Jnorm = STalign.normalize(np.asarray(img_if_lores))
print(Jnorm.min())
print(Jnorm.max())
# Transpose normalized matrix to be a 3xNxM matrix
I = Inorm.transpose(2,0,1)
print(I.shape)
# Pixel coordinates of the downsampled grids, expressed in original
# full-resolution pixel units (spacing = downsampling factor)
YI = np.array(range(I.shape[1]))*100.
XI = np.array(range(I.shape[2]))*100.
extentI = STalign.extent_from_x((YI,XI))

J = Jnorm.transpose(2,0,1)
YJ = np.array(range(J.shape[1]))*50.
XJ = np.array(range(J.shape[2]))*50.
extentJ = STalign.extent_from_x((YJ,XJ))
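As a sanity check, both grids should span roughly the original pixel ranges, so the two images live in a shared full-resolution coordinate system:

print(YI[-1], XI[-1]) # ~ original H&E height and width in pixels
print(YJ[-1], XJ[-1]) # ~ original IF height and width in pixels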

# Compute initial affine transformation from points
pointsI = ...
pointsJ = ...
L,T = STalign.L_T_from_points(pointsI, pointsJ)
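For illustration only (the actual landmarks are elided above), the corresponding points would look something like the following, assuming (y, x) order to match the [Y, X] grid convention above; these values are hypothetical:

# Hypothetical landmark values, (y, x) order, full-resolution pixel units
pointsI = np.array([[1000., 1200.], [5000., 4800.], [9000., 2500.]])
pointsJ = np.array([[ 500.,  600.], [2500., 2400.], [4500., 1200.]])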

I fitted the model using the following parameters, and it took only 3 minutes:

params = {'L':L,'T':T,
          'niter':2000,
          'pointsI':pointsI,
          'pointsJ':pointsJ,
          'device':device,
          'sigmaM':0.15,
          'sigmaB':0.05,
          'sigmaA':0.05,
          'epV': 10,
          'a': 7500,
          'muB': torch.tensor([0,0,0]), # black is background in target
          'muA': torch.tensor([1,1,1]) # use white as artifact
          }

out = STalign.LDDMM([YI,XI],I,[YJ,XJ],J,**params)
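As a quick check of the fit at the fitting resolution (a sketch using only the variables above), the low-res IF target can be resampled onto the low-res H&E grid:

# Resample the low-res IF target onto the low-res H&E grid
Jt = STalign.transform_image_target_to_source(out['xv'], out['v'], out['A'], [YJ,XJ], J, [YI,XI])
print(Jt.cpu().shape)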

I then tried to transform the high-res image (3, 10206, 10318), but it produces a much lower-resolution image (3, 207, 203), the same size as the reference I used for fitting.

# get necessary output variables
A = out['A']
v = out['v']
xv = out['xv']

Knorm = STalign.normalize(np.asarray(img_if))
print(Knorm.min())
print(Knorm.max())

K = Knorm.transpose(2,0,1)
print(K.shape) # (3, 10206, 10318)
# Full-resolution pixel coordinates (spacing 1)
YK = np.array(range(K.shape[1]))*1.
XK = np.array(range(K.shape[2]))*1.
extentK = STalign.extent_from_x((YK,XK))
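Note that this full-resolution grid covers the same physical extent as the downsampled grid [YJ, XJ] used for fitting; only the sampling density differs, which is why the fitted transform should still apply:

print(YJ[-1], YK[-1]) # comparable row extents
print(XJ[-1], XK[-1]) # comparable column extents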

# Transform high-res image
newK = STalign.transform_image_target_to_source(xv,v,A,[YK,XK],K,[YI,XI])
newK = newK.cpu()
print(newK.shape) # (3, 207, 203) -- matches the output grid [YI,XI]

Does this mean I need to train with a high-resolution reference? Would that increase the training time a lot? Or is the training time mostly determined by the size of the target image?


racng commented Jan 15, 2025

I think I figured it out by defining a new output grid (the last argument) at higher, though not full, resolution (due to the GPU memory limit):

# Every 5th pixel of the full-resolution H&E (source) grid;
# PIL's Image.size is (width, height), so [1] is rows and [0] is columns
YH = np.arange(0, img_he.size[1], 5) *1.
XH = np.arange(0, img_he.size[0], 5) *1.
newK = STalign.transform_image_target_to_source(xv,v,A,[YK,XK], K, [YH, XH])
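If even this grid exceeds GPU memory, one option is to process the output grid in row bands and reassemble, since each output pixel is resampled independently (a sketch under that assumption; the band count is arbitrary):

# Sketch: split the output rows into bands to bound GPU memory use
chunks = []
for ys in np.array_split(YH, 8):  # 8 bands; tune to your GPU
    band = STalign.transform_image_target_to_source(xv, v, A, [YK, XK], K, [ys, XH])
    chunks.append(band.cpu())
newK = torch.cat(chunks, dim=1)  # stitch bands back together along rows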

mmganant (Collaborator) commented

Hello, yes, that's right! When you find the transforms (v and A) using the downsampled image, you can apply them to the original image using STalign.transform_image_target_to_source(xv,v,A,[YK,XK], K, [YH, XH]), where K is the original image and YH, XH are points that correspond to the resolution of your image.
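In other words, the output resolution is set entirely by the last argument (the output sampling grid), so the same fitted xv, v, A can be reused at any resolution:

# Same fitted transforms, different output grids
newK_lo = STalign.transform_image_target_to_source(xv, v, A, [YK,XK], K, [YI,XI]) # low-res output
newK_hi = STalign.transform_image_target_to_source(xv, v, A, [YK,XK], K, [YH,XH]) # denser output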
