Details about the training #49
I am also trying to compare the implementations of Unbiased Teacher and SoftTeacher to see where the improvement comes from. I guess that for a fair comparison (same batch size, same data augmentation, and the same minor implementation details), you could simply change the background confidence loss to Focal loss and remove the unsupervised regression loss in the SoftTeacher codebase. I didn't see a comparison between Focal loss and the background confidence loss in your paper. Do you know how much improvement it contributes?
Thanks for your reply. I have tried the vanilla Focal loss, but the results were quite strange. I will try replacing the RoI head with yours directly.
Hi @MendelXu, I'm tracing your SoftTeacher code and trying to understand the background confidence loss. Could I interpret it as applying a Focal loss to the student's predicted background samples, where the confidence comes from the teacher rather than the student?
I think it is just a weighting mechanism that works in the opposite direction of Focal loss (it intends to ignore some hard samples rather than emphasize them). The confidence is evaluated on weakly augmented samples, which are easier to recognize, so it may be more accurate.
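To make the contrast concrete, here is a minimal sketch of the two weighting directions, assuming the common Focal loss form `(1 - p)^gamma` and a simplified version of the background weighting in which each student background sample is weighted by the teacher's (normalized) background score. The function names are hypothetical, not the actual SoftTeacher API.

```python
def focal_weight(p_correct, gamma=2.0):
    # Focal loss up-weights hard samples: low confidence in the
    # correct class -> large weight, so hard examples dominate.
    return (1.0 - p_correct) ** gamma

def background_confidence_weights(teacher_bg_scores):
    # Simplified sketch of the opposite mechanism: weight each
    # student background proposal by the teacher's background
    # confidence (normalized). Samples the teacher thinks are
    # likely foreground (low bg score) are *down*-weighted,
    # i.e. hard/ambiguous samples are suppressed, not emphasized.
    total = sum(teacher_bg_scores)
    return [s / total for s in teacher_bg_scores]

# An easy sample (p = 0.9) gets a tiny focal weight, a hard one
# (p = 0.1) a large one; the background weighting does the reverse.
print(focal_weight(0.9), focal_weight(0.1))
print(background_confidence_weights([0.9, 0.1]))
```

Because the teacher scores come from weakly augmented inputs, this down-weighting of low-confidence background proposals is what filters out pseudo-label noise, whereas Focal loss would amplify exactly those samples.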
Got it. Did you try to apply the teacher's predicted weight to foreground samples before? |
Yes. We tried applying the weight to all samples, but the improvement was marginal compared to applying it only to the background part.
Nice job. I am trying to reproduce your work with mmdetection; before that, could you help me confirm some details? unbiased-teacher/configs/Base-RCNN-FPN.yaml, Line 41 in 6977c6f