Hi! I noticed that some recent backbones (e.g. ConvNeXt) have an extra norm layer on the multi-scale feature outputs, typically before they go into an FPN (see the ConvNeXt object detection code). Am I right that the current feature extraction interface doesn't apply this norm? If this is indeed not supported at the moment, I can open a feature issue regarding this.
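For concreteness, here's a minimal sketch of the pattern I mean, assuming PyTorch (`LayerNorm2d` and `NormedFeatures` are just illustrative names, not timm or ConvNeXt API): each multi-scale feature tap gets its own channels-first LayerNorm before the FPN consumes it.

```python
import torch
import torch.nn as nn


class LayerNorm2d(nn.LayerNorm):
    """LayerNorm over the channel dim of an NCHW tensor (channels-first)."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.permute(0, 2, 3, 1)      # NCHW -> NHWC
        x = super().forward(x)
        return x.permute(0, 3, 1, 2)   # NHWC -> NCHW


class NormedFeatures(nn.Module):
    """Wrap a multi-scale backbone, applying one norm per feature tap."""
    def __init__(self, backbone: nn.Module, channels):
        super().__init__()
        self.backbone = backbone
        self.norms = nn.ModuleList([LayerNorm2d(c) for c in channels])

    def forward(self, x):
        feats = self.backbone(x)      # list of NCHW feature maps
        return [norm(f) for norm, f in zip(self.norms, feats)]
```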
@DianCh yes, that is correct ... I've actually thought about this one and don't currently have an answer. With the current impl, which generically re-uses the backbone model from classification, the downstream model (ie your obj detection model) would need to apply that extra norm. This is similar to how it already works for activations: some models return non-activated outputs (ie efficientnets, regnetz, resnetv2) and benefit from having an extra act layer applied.

I've wanted to provide a mechanism for allowing the feature_info spec to have flags that specify whether an extra norm or act (or some straightforward nn.Sequential) should be applied to all or a subset of the feature taps.

TLDR: you can create a feature issue, but it's not a trivial undertaking to do 'right', so it'll be behind some other work already in my queue.
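For anyone needing the interim workaround described above, here's a rough sketch assuming a recent timm version (`features_only` and `feature_info.channels()` are existing timm API; `LayerNorm2d` is just an illustrative helper, and the model and input shapes are arbitrary):

```python
import timm
import torch
import torch.nn as nn


class LayerNorm2d(nn.LayerNorm):
    """LayerNorm over the channel dim of an NCHW tensor."""
    def forward(self, x):
        return super().forward(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)


# Backbone generically re-used from classification; taps come back un-normed.
backbone = timm.create_model('convnext_tiny', features_only=True)
channels = backbone.feature_info.channels()  # e.g. [96, 192, 384, 768]

# The downstream (detection) model owns the extra per-tap norm. The same
# pattern would cover an extra act layer for models that return
# non-activated taps (efficientnets, regnetz, resnetv2).
norms = nn.ModuleList([LayerNorm2d(c) for c in channels])

x = torch.randn(1, 3, 224, 224)
feats = [norm(f) for norm, f in zip(norms, backbone(x))]
```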