
Issue trying to use MoBY SSL pretrained model with Swin-T backbone #181

Open
@Giles-Billenness

Description


When trying to load the checkpoint produced by SSL pretraining with MoBY (1 epoch, just to test), I get the error below after passing the --pretrained flag pointing to the checkpoint (I've tried both ckpt_epoch_0.pth and checkpoint.pth). I am trying to transfer the self-supervised weights, with the same backbone, to this architecture for fine-tuning.

[2022-03-11 20:31:06 swin_tiny_patch4_window7_224](utils.py 47): INFO ==============> Loading weight **********/MOBY SSL SWIN/moby__swin_tiny__patch4_window7_224__odpr02_tdpr0_cm099_ct02_queue4096_proj2_pred2/default/ckpt_epoch_0.pth for fine-tuning......
Traceback (most recent call last):
  File "/content/Swin-Transformer/main.py", line 357, in <module>
    main(config)
  File "/content/Swin-Transformer/main.py", line 131, in main
    load_pretrained(config, model_without_ddp, logger)
  File "/content/Swin-Transformer/utils.py", line 70, in load_pretrained
    relative_position_bias_table_current = model.state_dict()[k]
KeyError: 'encoder.layers.0.blocks.0.attn.relative_position_bias_table'
Killing subprocess 3303
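
For context, the KeyError suggests that load_pretrained() in utils.py is iterating over checkpoint keys that still carry MoBY's 'encoder.' prefix, while the plain SwinTransformer classification model expects unprefixed keys. Below is a minimal sketch (not code from this repo) of how the backbone weights could be re-keyed before they are handed to --pretrained; the 'encoder.' prefix and the nesting of weights under 'model' are assumptions based on the traceback above, so check them against your own checkpoint's keys first.

```python
# Sketch: convert a MoBY checkpoint into a plain Swin backbone checkpoint.
# ASSUMPTIONS: the backbone weights live under ckpt['model'] with an
# 'encoder.' prefix (as the KeyError above suggests); verify before use.
import torch

def extract_swin_backbone(moby_ckpt_path, out_path):
    ckpt = torch.load(moby_ckpt_path, map_location='cpu')
    state_dict = ckpt.get('model', ckpt)  # weights may be nested under 'model'

    backbone = {}
    for k, v in state_dict.items():
        # Keep only the online-encoder weights and strip the prefix, e.g.
        # 'encoder.layers.0.blocks.0.attn.relative_position_bias_table'
        # -> 'layers.0.blocks.0.attn.relative_position_bias_table'
        if k.startswith('encoder.'):
            backbone[k[len('encoder.'):]] = v

    torch.save({'model': backbone}, out_path)

if __name__ == '__main__':
    extract_swin_backbone('ckpt_epoch_0.pth', 'swin_tiny_moby_backbone.pth')
```

If the assumptions hold, the converted file should then be loadable by main.py via the same --pretrained flag in place of the raw MoBY checkpoint.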
