Hi, I am trying to evaluate Qilin-Med-VL on medical images. I downloaded the pre-trained CLIP-ViT weights from https://huggingface.co/openai/clip-vit-large-patch14 and set "mm_vision_tower": "Qilin-Med-VL/clip-vit-large-patch14" in config.json. However, when I run the model with python -m llava.serve.cli --model-path Qilin-Med-VL --image-file "*.jpg", I get an error like:
"some weights of the model checkpoint at Qilin-Med-VL were not used when initializing LlavaLlamaForCausalLM: ['model.vision_tower.vision_tower.vision_model.encoder.layers.11.mlp.fc1.weight', 'model.vision_tower.vision_tower.vision_model.encoder.layers.21.mlp.fc1.weight', 'model.vision_tower.vision_tower.vision_model.encoder.layers.9.layer_norm2.weight', 'model.vision_tower.vision_tower.vision_model.encoder.layers.21.layer_norm2.weight', 'model.vision_tower.vision_tower.vision_model.encoder.layers.2.layer_norm2.bias', ......... -
This IS expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).".
It seems that the model does not load the correct pre-trained weights of clip-vit-large-patch14. How can I solve this problem?
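For reference, here is a minimal sanity-check sketch (my own, not part of the Qilin-Med-VL code) that loads the vision tower standalone with plain transformers classes, assuming the downloaded weights live at the local path I set in "mm_vision_tower". If this fails, the checkpoint copy itself is incomplete rather than the LLaVA loading logic:

```python
# Sanity check: load the CLIP vision tower directly from the local path
# referenced by "mm_vision_tower" in config.json (path is an assumption
# based on my setup).
from transformers import CLIPImageProcessor, CLIPVisionModel

vision_tower_path = "Qilin-Med-VL/clip-vit-large-patch14"

processor = CLIPImageProcessor.from_pretrained(vision_tower_path)
vision_tower = CLIPVisionModel.from_pretrained(vision_tower_path)

# clip-vit-large-patch14 should report a vision hidden size of 1024.
print(vision_tower.config.hidden_size)
print(sum(p.numel() for p in vision_tower.parameters()))
```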