
How did you generate the data files isbi_train_volume.h5 and isbi_test_volume.h5? #3

Ltieregenius opened this issue Jul 10, 2019 · 3 comments


@Ltieregenius

Thanks for sharing.
I opened isbi_train_volume.h5 and it contains ['affinities', 'labels', 'raw']; f['labels'] contains 'gt_segmentation' and 'membranes'. Are those your pre-trained results? Do the results seem that only the mutex-watershed cannot get good segmentation results?

@constantinpape
Contributor

> Are those your pre-trained results?

`labels/membranes` is the membrane labeling provided by the challenge and `labels/gt_segmentation` is a 3d segmentation derived from `labels/membranes`. This is a necessary pre-processing step in order to train the affinity network. `affinities` is the prediction of the CNN.
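In case it helps, here is a minimal sketch for inspecting that layout with h5py; the file name and keys are the ones listed above, the rest is just standard h5py usage:

```python
import h5py

# Inspect the datasets in the training volume.
with h5py.File('isbi_train_volume.h5', 'r') as f:
    raw = f['raw'][:]                     # the ISBI EM image stack
    membranes = f['labels/membranes'][:]  # membrane labeling from the challenge
    gt = f['labels/gt_segmentation'][:]   # 3d segmentation derived from the membranes
    affinities = f['affinities'][:]       # the CNN affinity predictions
    print(raw.shape, gt.shape, affinities.shape)
```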

> Do the results seem that only the mutex-watershed cannot get good segmentation results?

Sorry, I don't understand what you mean by this question.

@Ltieregenius
Author

Sorry to bother you again.
Three days ago I followed your steps:

[screenshot: steps from the README]

I chose the 'mws' and 'thresh' algorithms to run, and the results look like this:

[screenshot: segmentation result]

The cell membrane boundaries are clearly visible, so it seems to be a good segmentation. However, when I compared these results to the ground truth:

[screenshot: ground truth labels]

there are still obvious differences between my output and the labels. I then used Fiji to calculate the V-Rand and V-Info scores:

[screenshot: Fiji V-Rand and V-Info scores]

Given these evaluation results, my questions are:
(1) Do I need to do a dilation to widen the boundaries, or some other pre-processing?
(2) Could you please point out my mistakes in running your code, e.g. parameters I could fix to improve my results?

@constantinpape
Contributor

> There are still obvious differences between my output and the labels.

Of course, the algorithm will not reproduce the ground truth 100%; that cannot be expected. Please also note that there are a lot of ambiguous places in these segmentations.

> Given these evaluation results, my questions are:
> (1) Do I need to do a dilation to widen the boundaries, or some other pre-processing?
> (2) Could you please point out my mistakes in running your code, e.g. parameters I could fix to improve my results?

Yes, dilation might improve the results a bit, but there seems to be something more fundamentally wrong with the evaluation you are running: the segmentation looks much better than the numbers you report.
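If you want to try the dilation, a minimal sketch with standard scipy/skimage calls could look like this (`seg` is assumed to be your 2d label image; how much this helps depends on the evaluation script you use):

```python
import numpy as np
from scipy.ndimage import binary_dilation
from skimage.segmentation import find_boundaries

def dilated_boundary_map(seg, iterations=1):
    # mark the pixels on boundaries between segments ...
    boundaries = find_boundaries(seg, mode='thick')
    # ... and widen them by `iterations` pixels before evaluation
    return binary_dilation(boundaries, iterations=iterations).astype(np.uint8)
```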

As a sanity check, I would evaluate the ground truth against itself and make sure that this yields a perfect score. Alternatively, use some other evaluation code, e.g.
https://github.com/cremi/cremi_python/tree/master/cremi/evaluation.
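For the GT-vs-GT sanity check, something like this should work (a sketch using the scikit-image metrics rather than Fiji; evaluating `labels/gt_segmentation` against itself should give an adapted Rand error of 0 and a variation of information of 0):

```python
import h5py
from skimage.metrics import adapted_rand_error, variation_of_information

with h5py.File('isbi_train_volume.h5', 'r') as f:
    gt = f['labels/gt_segmentation'][:]

# identical inputs -> perfect scores
error, precision, recall = adapted_rand_error(gt, gt)
splits, merges = variation_of_information(gt, gt)
print(error, splits + merges)  # both should be 0.0
```

If Fiji reports bad scores for this comparison as well, the problem is in how the volumes are exported or matched up, not in the segmentation itself.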
