1 file changed: 25 additions and 0 deletions.
    - filename: llava-llama-3-8b-v1_1-mmproj-f16.gguf
      sha256: eb569aba7d65cf3da1d0369610eb6869f4a53ee369992a804d5810a80e9fa035
      uri: huggingface://xtuner/llava-llama-3-8b-v1_1-gguf/llava-llama-3-8b-v1_1-mmproj-f16.gguf
+ - !!merge <<: *llama3
+   name: "minicpm-llama3-v-2_5"
+   urls:
+     - https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf
+     - https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5
+   description: |
+     MiniCPM-Llama3-V 2.5 is the latest model in the MiniCPM-V series. The model is built on SigLip-400M and Llama3-8B-Instruct with a total of 8B parameters
+   tags:
+     - llm
+     - multimodal
+     - gguf
+     - gpu
+     - llama3
+     - cpu
+   overrides:
+     mmproj: minicpm-llama3-mmproj-f16.gguf
+     parameters:
+       model: minicpm-llama3-Q4_K_M.gguf
+   files:
+     - filename: minicpm-llama3-Q4_K_M.gguf
+       sha256: 010ec3ba94cb5ad2d9c8f95f46f01c6d80f83deab9df0a0831334ea45afff3e2
+       uri: huggingface://openbmb/MiniCPM-Llama3-V-2_5-gguf/minicpm-llama3-Q4_K_M.gguf
+     - filename: minicpm-llama3-mmproj-f16.gguf
+       sha256: 391d11736c3cd24a90417c47b0c88975e86918fcddb1b00494c4d715b08af13e
+       uri: huggingface://openbmb/MiniCPM-Llama3-V-2_5-gguf/mmproj-model-f16.gguf
### ChatML
- url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "helpingai-9b"
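
Note on the "!!merge <<: *llama3" line at the top of the new entry: it copies every key from a mapping anchored as "&llama3" earlier in the same gallery file into this entry, so shared Llama 3 defaults do not have to be repeated. The anchor's real contents are not part of this diff, so the fields in the sketch below are hypothetical placeholders, used only to show how the merge key behaves:

import yaml  # PyYAML

# Placeholder document: the "&llama3" fields here are made up for illustration,
# not the real anchor from the gallery file, which is defined outside this diff.
doc = """
- &llama3
  license: placeholder-license
  config_file: placeholder-config
- !!merge <<: *llama3
  name: "minicpm-llama3-v-2_5"
"""

entries = yaml.safe_load(doc)
# The merge key copies the anchored mapping's pairs into the second entry,
# so it ends up with the inherited keys plus its own "name":
# {'license': 'placeholder-license', 'config_file': 'placeholder-config',
#  'name': 'minicpm-llama3-v-2_5'}
print(entries[1])

Keys written directly in the entry (name, urls, overrides, files, ...) take precedence over anything pulled in from the anchor, which is why the entry only has to spell out what differs from the shared base.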
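The sha256 and uri fields pin exact artifacts: the uri names the file to fetch and the sha256 lets the downloader check it. A minimal verification sketch in Python, assuming the huggingface://<repo>/<file> URI scheme resolves to https://huggingface.co/<repo>/resolve/main/<file> (that mapping is an assumption here, not stated in the diff):

import hashlib
import urllib.request

# Values copied from the mmproj file entry added above.
repo = "openbmb/MiniCPM-Llama3-V-2_5-gguf"
filename = "mmproj-model-f16.gguf"
expected = "391d11736c3cd24a90417c47b0c88975e86918fcddb1b00494c4d715b08af13e"

# Assumed resolution of the huggingface:// URI to a direct download URL.
url = f"https://huggingface.co/{repo}/resolve/main/{filename}"

sha = hashlib.sha256()
with urllib.request.urlopen(url) as resp:
    # Stream in 1 MiB chunks; the GGUF files are too large to hold in memory.
    for chunk in iter(lambda: resp.read(1 << 20), b""):
        sha.update(chunk)

digest = sha.hexdigest()
print("OK" if digest == expected else f"sha256 mismatch: {digest}")

If the printed digest differs from the value pinned in the gallery entry, the downloaded file is not the artifact the entry expects.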