701 | 701 | - filename: TheBeagle-v2beta-32B-MGS-Q4_K_M.gguf
702 | 702 | sha256: db0d3b3c5341d2d51115794bf5da6552b5c0714b041de9b82065cc0c982dd4f7
703 | 703 | uri: huggingface://bartowski/TheBeagle-v2beta-32B-MGS-GGUF/TheBeagle-v2beta-32B-MGS-Q4_K_M.gguf
| 704 | +- !!merge <<: *qwen25
| 705 | + name: "meraj-mini"
| 706 | + icon: https://i.ibb.co/CmPSSpq/Screenshot-2024-10-06-at-9-45-06-PM.png
| 707 | + urls:
| 708 | + - https://huggingface.co/arcee-ai/Meraj-Mini
| 709 | + - https://huggingface.co/QuantFactory/Meraj-Mini-GGUF
| 710 | + description: |
| 711 | + Arcee Meraj Mini is an open-source model fine-tuned from Qwen2.5-7B-Instruct and designed for both Arabic and English; this entry ships the GGUF quantization produced with llama.cpp. The model has been evaluated across multiple benchmarks in both languages, showing top-tier performance in Arabic and competitive results in English. Its development covered data preparation, initial training, iterative training and post-training, evaluation, and final model creation. It handles a wide range of language tasks and suits applications such as education, mathematics and coding, customer service, and content creation. On most benchmarks of the Open Arabic LLM Leaderboard (OALL) it consistently outperforms state-of-the-art models, underlining its effectiveness for Arabic-language content.
| 712 | + overrides:
| 713 | + parameters:
| 714 | + model: Meraj-Mini.Q4_K_M.gguf
| 715 | + files:
| 716 | + - filename: Meraj-Mini.Q4_K_M.gguf
| 717 | + sha256: f8f3923eb924b8f8e8f530a5bf07fcbd5b3dd10dd478d229d6f4377e31eb3938
| 718 | + uri: huggingface://QuantFactory/Meraj-Mini-GGUF/Meraj-Mini.Q4_K_M.gguf
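Since each gallery entry pins an exact `sha256` for its GGUF file, a quick way to sanity-check a download is to recompute the digest locally. The sketch below is not part of the gallery file: it assumes the quant was already downloaded and saved as `Meraj-Mini.Q4_K_M.gguf` (a hypothetical local path) and compares it against the digest from the entry above.

```python
import hashlib
from pathlib import Path

# Expected digest, copied from the gallery entry's sha256 field above.
EXPECTED_SHA256 = "f8f3923eb924b8f8e8f530a5bf07fcbd5b3dd10dd478d229d6f4377e31eb3938"

# Hypothetical local path; point this at wherever the GGUF was downloaded.
MODEL_PATH = Path("Meraj-Mini.Q4_K_M.gguf")


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so multi-gigabyte GGUFs don't fill RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    actual = sha256_of(MODEL_PATH)
    verdict = "OK" if actual == EXPECTED_SHA256 else "MISMATCH"
    print(f"{MODEL_PATH.name}: {verdict}")
    print(f"  expected {EXPECTED_SHA256}")
    print(f"  actual   {actual}")
```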
704 | 719 | - &archfunct
705 | 720 | license: apache-2.0
706 | 721 | tags:
4571 | 4586 | - filename: Llama-3-WhiteRabbitNeo-8B-v2.0.Q4_K_M.gguf
4572 | 4587 | sha256: cf01ba2ca5af2a3ecd6a2221d19b8b91ec0e9fe06fa8fdffd774d5e0a2459c4c
4573 | 4588 | uri: huggingface://QuantFactory/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/Llama-3-WhiteRabbitNeo-8B-v2.0.Q4_K_M.gguf
| 4589 | +- !!merge <<: *llama3
| 4590 | + name: "l3-nymeria-maid-8b"
| 4591 | + icon: https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2/resolve/main/Nymeria.png?
| 4592 | + urls:
| 4593 | + - https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B
| 4594 | + - https://huggingface.co/QuantFactory/L3-Nymeria-Maid-8B-GGUF
| 4595 | + description: |
| 4596 | + The model is a merge of pre-trained language models created with the mergekit library. It combines the following models:
| 4597 | + - Sao10K/L3-8B-Stheno-v3.2
| 4598 | + - princeton-nlp/Llama-3-Instruct-8B-SimPO
| 4599 | + The merge was performed using the slerp merge method, and the configuration used to produce the model is included on the original model card. The model is not suitable for all audiences and is intended for scientific purposes.
| 4600 | + Nymeria is the balanced version and does not force NSFW content; Nymeria-Maid carries more of Stheno's weights, leans more toward NSFW, and is more submissive.
| 4601 | + overrides:
| 4602 | + parameters:
| 4603 | + model: L3-Nymeria-Maid-8B.Q4_K_M.gguf
| 4604 | + files:
| 4605 | + - filename: L3-Nymeria-Maid-8B.Q4_K_M.gguf
| 4606 | + sha256: 05bce561daa59b38cf9b79973c3b1e2e27af6d1e8e41570760af54800a09bcc2
| 4607 | + uri: huggingface://QuantFactory/L3-Nymeria-Maid-8B-GGUF/L3-Nymeria-Maid-8B.Q4_K_M.gguf
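Once an entry like this is installed, LocalAI serves it through its OpenAI-compatible API under the entry's `name`. The following is a rough usage sketch, not taken from the repository: it assumes a LocalAI instance is already running on `http://localhost:8080` (the host and port here are assumptions) with this model installed, and addresses it as `l3-nymeria-maid-8b`.

```python
import json
import urllib.request

# Assumed LocalAI endpoint; adjust host/port to match your deployment.
URL = "http://localhost:8080/v1/chat/completions"

# The gallery entry's `name` field is used as the model identifier.
payload = {
    "model": "l3-nymeria-maid-8b",
    "messages": [
        {"role": "user", "content": "Introduce yourself in one sentence."},
    ],
    "temperature": 0.7,
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    body = json.load(response)

# OpenAI-style response: the reply text lives in choices[0].message.content.
print(body["choices"][0]["message"]["content"])
```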
4574 | 4608 | - &dolphin
4575 | 4609 | name: "dolphin-2.9-llama3-8b"
4576 | 4610 | url: "github:mudler/LocalAI/gallery/hermes-2-pro-mistral.yaml@master"