Thank you for creating this useful tool and writing great tutorials.
I have tried using your package to align an H&E image with an immunofluorescence (IF) image. For quick troubleshooting, I downsized the two images by factors of 50 and 100, respectively. I was able to fit a functional model. However, I have trouble figuring out whether it's possible to use the results to transform the original high-resolution images.
The shapes of these scaled-down images are (3, 207, 203) and (3, 206, 204).
I fitted the model using the following parameters, and training took only 3 minutes:
params = {'L': L, 'T': T,
          'niter': 2000,
          'pointsI': pointsI,
          'pointsJ': pointsJ,
          'device': device,
          'sigmaM': 0.15,
          'sigmaB': 0.05,
          'sigmaA': 0.05,
          'epV': 10,
          'a': 7500,
          'muB': torch.tensor([0, 0, 0]),  # black is background in target
          'muA': torch.tensor([1, 1, 1])   # use white as artifact
}
out=STalign.LDDMM([YI,XI],I,[YJ,XJ],J,**params)
I tried to transform the high-resolution image (3, 10206, 10318), but it produces a low-resolution image (3, 207, 203), the same size as the reference I used for training.
# get necessary output variables
A = out['A']
v=out['v']
xv=out['xv']
Knorm=STalign.normalize(img_if)
print(Knorm.min())
print(Knorm.max())
# %%
K = Knorm.transpose(2, 0, 1)
print(K.shape)  # (3, 10206, 10318)
YK = np.array(range(K.shape[1])) * 1.
XK = np.array(range(K.shape[2])) * 1.
extentK = STalign.extent_from_x((YK, XK))
# Transform high res image
newK = STalign.transform_image_target_to_source(xv, v, A, [YK, XK], K, [YI, XI])
newK=newK.cpu()
print(newK.shape) # (3, 207, 203)
Does this mean I need to train with a high-resolution reference? Would that increase the training time a lot? Or is the training time mostly determined by the size of the target image?
Hello, yes, that's right! When you find the transforms (v and A) using the downsampled image, you can apply these transforms to the original image using STalign.transform_image_target_to_source(xv, v, A, [YK, XK], K, [YH, XH]), where K is the original image and YH, XH are points that correspond to the resolution of your image.
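A minimal sketch of what the grid construction might look like (the shapes and downsampling relationship here are assumptions taken from the numbers in this thread, not verified against STalign's API; the actual transform call is left as a comment since only the grid setup is new):

```python
import numpy as np

# The transforms (v, A) were learned in the coordinate system of the
# downsampled images, so the idea is to keep the same spatial extent as the
# low-resolution source grid (YI, XI) but sample it at many more points.

# Low-resolution source grid used during fitting (207 x 203 in this thread).
YI = np.arange(207, dtype=float)
XI = np.arange(203, dtype=float)

# Full-resolution output grid: same extent as YI/XI, sampled at the
# original image size (assumed here to be 10206 x 10318).
H, W = 10206, 10318
YH = np.linspace(YI[0], YI[-1], H)
XH = np.linspace(XI[0], XI[-1], W)

print(YH.shape, XH.shape)  # (10206,) (10318,)

# Then, per the reply above (not executed here):
# newK = STalign.transform_image_target_to_source(xv, v, A, [YK, XK], K, [YH, XH])
# newK should come back with shape (3, len(YH), len(XH)).
```

The key point is that the last argument controls the sampling grid of the output: passing the low-resolution [YI, XI] is what produced the (3, 207, 203) result.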