     - filename: Qwen_Qwen3-30B-A3B-Q4_K_M.gguf
       sha256: a015794bfb1d69cb03dbb86b185fb2b9b339f757df5f8f9dd9ebdab8f6ed5d32
       uri: huggingface://bartowski/Qwen_Qwen3-30B-A3B-GGUF/Qwen_Qwen3-30B-A3B-Q4_K_M.gguf
+- !!merge <<: *qwen3
+  name: "qwen3-32b"
+  urls:
+    - https://huggingface.co/Qwen/Qwen3-32B
+    - https://huggingface.co/bartowski/Qwen_Qwen3-32B-GGUF
+  description: |
+    Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction following, agent capabilities, and multilingual support, with the following key features:
+
+    Unique support for seamless switching between thinking mode (for complex logical reasoning, math, and coding) and non-thinking mode (for efficient, general-purpose dialogue) within a single model, ensuring optimal performance across various scenarios.
+    Significant enhancement of its reasoning capabilities, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
+    Superior human preference alignment, excelling in creative writing, role-playing, multi-turn dialogue, and instruction following, delivering a more natural, engaging, and immersive conversational experience.
+    Expertise in agent capabilities, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
+    Support for 100+ languages and dialects, with strong capabilities for multilingual instruction following and translation.
+
+    Qwen3-32B has the following features:
+
+    Type: Causal Language Models
+    Training Stage: Pretraining & Post-training
+    Number of Parameters: 32.8B
+    Number of Parameters (Non-Embedding): 31.2B
+    Number of Layers: 64
+    Number of Attention Heads (GQA): 64 for Q and 8 for KV
+    Context Length: 32,768 tokens natively and 131,072 tokens with YaRN.
+
+    For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to the Qwen blog, GitHub repository, and documentation.
+  overrides:
+    parameters:
+      model: Qwen_Qwen3-32B-Q4_K_M.gguf
+  files:
+    - filename: Qwen_Qwen3-32B-Q4_K_M.gguf
+      sha256: e41ec56ddd376963a116da97506fadfccb50fb402bb6f3cb4be0bc179a582bd6
+      uri: huggingface://bartowski/Qwen_Qwen3-32B-GGUF/Qwen_Qwen3-32B-Q4_K_M.gguf
 - &gemma3
   url: "github:mudler/LocalAI/gallery/gemma.yaml@master"
   name: "gemma-3-27b-it"
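A note on the thinking/non-thinking switching highlighted in the qwen3-32b description above: once the entry is installed from the gallery, that behaviour can be exercised through LocalAI's OpenAI-compatible API. The sketch below is illustrative only; it assumes a LocalAI instance on the default `http://localhost:8080/v1` and relies on Qwen3's documented `/no_think` soft switch, so the endpoint address, placeholder API key, and prompt are assumptions rather than part of this change.

```python
# Minimal sketch (not part of the gallery change): toggling Qwen3's non-thinking mode
# through LocalAI's OpenAI-compatible API once the "qwen3-32b" entry above is installed.
# base_url and api_key are assumptions; LocalAI listens on port 8080 by default and
# typically does not validate the key unless one has been configured.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-local")

# Default request: Qwen3 may answer in thinking mode, emitting its reasoning trace first.
thinking = client.chat.completions.create(
    model="qwen3-32b",
    messages=[{"role": "user", "content": "How many primes are there below 30?"}],
)
print(thinking.choices[0].message.content)

# Qwen3's documented /no_think soft switch requests a direct answer without the trace.
direct = client.chat.completions.create(
    model="qwen3-32b",
    messages=[{"role": "user", "content": "How many primes are there below 30? /no_think"}],
)
print(direct.choices[0].message.content)
```

Whether the reasoning trace is actually surfaced also depends on the chat template shipped with the shared qwen3 base entry, so treat this as a usage sketch rather than guaranteed behaviour of the gallery entry.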
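Similarly, the "131,072 tokens with YaRN" figure in the description refers to Qwen3's optional rope-scaling configuration rather than anything this entry enables. Below is a hedged sketch of that configuration against the upstream Hugging Face checkpoint (not the GGUF file referenced here), using the rope_scaling values quoted in Qwen's model card.

```python
# Illustrative only: enabling YaRN rope scaling on the upstream Qwen/Qwen3-32B checkpoint
# with Hugging Face transformers (factor 4.0 over the native 32,768-token window gives
# roughly 131,072 tokens). The quantized GGUF in the gallery entry is served by LocalAI
# instead, and loading the full checkpoint requires suitable GPU memory.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-32B"

config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```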