chore(model gallery): add qwen_qwen2.5-vl-7b-instruct #5348

Merged
merged 1 commit on May 11, 2025
39 changes: 39 additions & 0 deletions gallery/index.yaml
@@ -713,7 +713,7 @@
    - gemma3
    - gemma-3
  overrides:
    #mmproj: gemma-3-27b-it-mmproj-f16.gguf
    parameters:
      model: gemma-3-27b-it-Q4_K_M.gguf
  files:
@@ -731,7 +731,7 @@
  description: |
    google/gemma-3-12b-it is an open-source, state-of-the-art, lightweight, multimodal model built from the same research and technology used to create the Gemini models. It is capable of handling text and image input and generating text output. It has a large context window of 128K tokens and supports over 140 languages. The 12B variant has been fine-tuned using the instruction-tuning approach. Gemma 3 models are suitable for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes them deployable in environments with limited resources such as laptops, desktops, or your own cloud infrastructure.
  overrides:
    #mmproj: gemma-3-12b-it-mmproj-f16.gguf
    parameters:
      model: gemma-3-12b-it-Q4_K_M.gguf
  files:
@@ -749,7 +749,7 @@
  description: |
    Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. Gemma-3-4b-it is a 4 billion parameter model.
  overrides:
    #mmproj: gemma-3-4b-it-mmproj-f16.gguf
    parameters:
      model: gemma-3-4b-it-Q4_K_M.gguf
  files:
@@ -3676,7 +3676,7 @@
      sha256: b9f01bead9e163db9351af036d8d63ef479d7d48a1bb44934ead732a180f371c
      uri: huggingface://bartowski/Menlo_ReZero-v0.1-llama-3.2-3b-it-grpo-250404-GGUF/Menlo_ReZero-v0.1-llama-3.2-3b-it-grpo-250404-Q4_K_M.gguf
- &qwen25
  name: "qwen2.5-14b-instruct" ## Qwen2.5
  icon: https://avatars.githubusercontent.com/u/141221163
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  license: apache-2.0
@@ -5637,7 +5637,7 @@
      sha256: 0fec82625f74a9a340837de7af287b1d9042e5aeb70cda2621426db99958b0af
      uri: huggingface://bartowski/Chuluun-Qwen2.5-72B-v0.08-GGUF/Chuluun-Qwen2.5-72B-v0.08-Q4_K_M.gguf
- &smollm
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master" ## SmolLM
name: "smollm-1.7b-instruct"
icon: https://huggingface.co/datasets/HuggingFaceTB/images/resolve/main/banner_smol.png
tags:
@@ -7135,8 +7135,47 @@
    - filename: cognition-ai_Kevin-32B-Q4_K_M.gguf
      sha256: 2576edd5b1880bcac6732eae9446b035426aee2e76937dc68a252ad34e185705
      uri: huggingface://bartowski/cognition-ai_Kevin-32B-GGUF/cognition-ai_Kevin-32B-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "qwen_qwen2.5-vl-7b-instruct"
  urls:
    - https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct
    - https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-7B-Instruct-GGUF
  description: |
    In the past five months since Qwen2-VL’s release, numerous developers have built new models on the Qwen2-VL vision-language models, providing us with valuable feedback. During this period, we focused on building more useful vision-language models. Today, we are excited to introduce the latest addition to the Qwen family: Qwen2.5-VL.
    Key Enhancements:

    Understand things visually: Qwen2.5-VL is not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but is also highly capable of analyzing texts, charts, icons, graphics, and layouts within images.

    Being agentic: Qwen2.5-VL acts directly as a visual agent that can reason and dynamically direct tools, making it capable of computer use and phone use.

    Understanding long videos and capturing events: Qwen2.5-VL can comprehend videos of over 1 hour, and it now has the new ability of capturing events by pinpointing the relevant video segments.

    Capable of visual localization in different formats: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes.

    Generating structured outputs: for data such as scans of invoices, forms, and tables, Qwen2.5-VL supports structured outputs of their contents, benefiting applications in finance, commerce, and beyond.

    Model Architecture Updates:

    Dynamic Resolution and Frame Rate Training for Video Understanding:

    We extend dynamic resolution to the temporal dimension by adopting dynamic FPS sampling, enabling the model to comprehend videos at various sampling rates. Accordingly, we update mRoPE in the time dimension with IDs and absolute time alignment, enabling the model to learn temporal sequence and speed, and ultimately acquire the ability to pinpoint specific moments.

    Streamlined and Efficient Vision Encoder

    We enhance both training and inference speeds by strategically implementing window attention into the ViT. The ViT architecture is further optimized with SwiGLU and RMSNorm, aligning it with the structure of the Qwen2.5 LLM.
  overrides:
    mmproj: mmproj-Qwen_Qwen2.5-VL-7B-Instruct-f16.gguf
    parameters:
      model: Qwen_Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf
  files:
    - filename: Qwen_Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf
      sha256: 3f4513330aa7f109922bd701d773575484ae2b4a4090d6511260a2a4f8e3d069
      uri: huggingface://bartowski/Qwen_Qwen2.5-VL-7B-Instruct-GGUF/Qwen_Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf
    - filename: mmproj-Qwen_Qwen2.5-VL-7B-Instruct-f16.gguf
      sha256: c24a7f5fcfc68286f0a217023b6738e73bea4f11787a43e8238d4bb1b8604cde
      uri: https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/mmproj-Qwen_Qwen2.5-VL-7B-Instruct-f16.gguf
- &llama31
  url: "github:mudler/LocalAI/gallery/llama3.1-instruct.yaml@master" ## LLama3.1
  icon: https://avatars.githubusercontent.com/u/153379578
  name: "meta-llama-3.1-8b-instruct"
  license: llama3.1
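
For context on how the new entry is meant to be consumed (this note is not part of the diff): the qwen_qwen2.5-vl-7b-instruct entry above wires the Q4_K_M weights in via "model" and the f16 multimodal projector via the "mmproj" override, so once the model is installed from the gallery it can accept image input through LocalAI's OpenAI-compatible API. The following is a minimal, hypothetical usage sketch, assuming a LocalAI instance listening on the default localhost:8080, the model installed under the gallery name, and a placeholder image URL.

from openai import OpenAI

# Assumed defaults: LocalAI's OpenAI-compatible endpoint on localhost:8080;
# a default local install does not require a real API key.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

# Ask the vision model about an image. The model name matches the "name"
# field of the new gallery entry; the image URL is a placeholder.
response = client.chat.completions.create(
    model="qwen_qwen2.5-vl-7b-instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image and transcribe any visible text."},
                {"type": "image_url", "image_url": {"url": "https://example.com/sample-invoice.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)

Both GGUF files listed under "files" are downloaded from the given URIs when the entry is installed and can be checked against the recorded sha256 values.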