Extra norm layers on feature output #1145

Answered by rwightman
DianCh asked this question in Ideas
@DianCh yes, that is correct ... I actually thought about this one and don't currently have an answer. With the current impl, to generically re-use the backbone model from classification, the downstream model (ie your obj detection model) would need to apply that extra norm itself. This is similar to how it already works for activations: some models (ie efficientnets, regnetz, resnetv2) return non-activated outputs and benefit from having an extra act layer applied.

I've wanted to provide a mechanism for allowing the feature_info spec to have flags that specify whether an extra norm or act (or some straightforward nn.Sequential) should be applied to all or a subset of the feature taps.
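As a rough illustration of "the downstream model applies the extra norm": one way is to wrap the backbone and run each feature tap through its own norm + act before handing the maps to the detection head. This is only a sketch under stated assumptions, not timm's API — `TinyBackbone` is a hypothetical stand-in for a `timm.create_model(..., features_only=True)` model, whose per-tap channel counts you would normally read from `backbone.feature_info.channels()`.

```python
# Hedged sketch: wrap a features_only-style backbone so every feature tap
# gets an extra norm + act applied downstream. TinyBackbone is a made-up
# stand-in; with timm you would pass the real backbone and use
# backbone.feature_info.channels() for the `channels` list.
import torch
import torch.nn as nn


class TinyBackbone(nn.Module):
    """Stand-in backbone returning a list of raw (un-normalized) taps."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 8, 3, stride=2, padding=1)
        self.stage = nn.Conv2d(8, 16, 3, stride=2, padding=1)

    def forward(self, x):
        f1 = self.stem(x)
        f2 = self.stage(f1)
        return [f1, f2]  # no norm/act applied to the taps


class NormedFeatures(nn.Module):
    """Apply an extra norm + act to each tap, as the downstream
    (e.g. obj detection) model would need to do."""
    def __init__(self, backbone, channels):
        super().__init__()
        self.backbone = backbone
        self.extra = nn.ModuleList(
            nn.Sequential(nn.BatchNorm2d(c), nn.ReLU(inplace=True))
            for c in channels
        )

    def forward(self, x):
        return [extra(f) for f, extra in zip(self.backbone(x), self.extra)]


model = NormedFeatures(TinyBackbone(), channels=[8, 16])
feats = model(torch.randn(1, 3, 32, 32))
print([tuple(f.shape) for f in feats])  # [(1, 8, 16, 16), (1, 16, 8, 8)]
```

The same wrapper shape would also cover the proposed feature_info flags: a per-tap boolean could simply swap `nn.Sequential(norm, act)` for `nn.Identity()` on taps that are already normalized.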

TLDR you c…

Answer selected by DianCh