Commit 259ad3c

chore(model gallery): add all-hands_openhands-lm-1.5b-v0.1 (#5114)
Signed-off-by: Ettore Di Giacinto <[email protected]>
1 parent 18b320d commit 259ad3c

File tree

1 file changed: 30 additions, 0 deletions

gallery/index.yaml

Lines changed: 30 additions & 0 deletions
@@ -5485,6 +5485,36 @@
     - filename: all-hands_openhands-lm-7b-v0.1-Q4_K_M.gguf
       sha256: d50031b04bbdad714c004a0dc117c18d26a026297c236cda36089c20279b2ec1
       uri: huggingface://bartowski/all-hands_openhands-lm-7b-v0.1-GGUF/all-hands_openhands-lm-7b-v0.1-Q4_K_M.gguf
+- !!merge <<: *qwen25
+  name: "all-hands_openhands-lm-1.5b-v0.1"
+  icon: https://github.com/All-Hands-AI/OpenHands/blob/main/docs/static/img/logo.png?raw=true
+  urls:
+    - https://huggingface.co/all-hands/openhands-lm-1.5b-v0.1
+    - https://huggingface.co/bartowski/all-hands_openhands-lm-1.5b-v0.1-GGUF
+  description: |
+    This is a smaller 1.5B model trained following the recipe of all-hands/openhands-lm-32b-v0.1. It is intended to be used for speculative decoding. Autonomous agents for software development are already contributing to a wide range of software development tasks. But up to this point, strong coding agents have relied on proprietary models, which means that even if you use an open-source agent like OpenHands, you are still reliant on API calls to an external service.
+
+    Today, we are excited to introduce OpenHands LM, a new open coding model that:
+
+    Is open and available on Hugging Face, so you can download it and run it locally
+    Is a reasonable size, 32B, so it can be run locally on hardware such as a single 3090 GPU
+    Achieves strong performance on software engineering tasks, including 37.2% resolve rate on SWE-Bench Verified
+
+    Read below for more details and our future plans!
+    What is OpenHands LM?
+
+    OpenHands LM is built on the foundation of Qwen Coder 2.5 Instruct 32B, leveraging its powerful base capabilities for coding tasks. What sets OpenHands LM apart is our specialized fine-tuning process:
+
+    We used training data generated by OpenHands itself on a diverse set of open-source repositories
+    Specifically, we use an RL-based framework outlined in SWE-Gym, where we set up a training environment, generate training data using an existing agent, and then fine-tune the model on examples that were resolved successfully
+    It features a 128K token context window, ideal for handling large codebases and long-horizon software engineering tasks
+  overrides:
+    parameters:
+      model: all-hands_openhands-lm-1.5b-v0.1-Q4_K_M.gguf
+  files:
+    - filename: all-hands_openhands-lm-1.5b-v0.1-Q4_K_M.gguf
+      sha256: 30abd7860c4eb5f2f51546389407b0064360862f64ea55cdf95f97c6e155b3c6
+      uri: huggingface://bartowski/all-hands_openhands-lm-1.5b-v0.1-GGUF/all-hands_openhands-lm-1.5b-v0.1-Q4_K_M.gguf
 - &llama31
   url: "github:mudler/LocalAI/gallery/llama3.1-instruct.yaml@master" ## LLama3.1
   icon: https://avatars.githubusercontent.com/u/153379578
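
For reference, a minimal sketch (not part of the commit) of the integrity check implied by the entry's files block: it streams the GGUF from Hugging Face and compares the digest with the sha256 recorded above. The https resolve URL is an assumed expansion of the huggingface:// uri, and the script is illustrative rather than LocalAI's actual downloader.

    # Stream the GGUF listed in the gallery entry and verify its SHA-256.
    # URL and checksum mirror the entry above; the https form of the
    # huggingface:// uri is an assumption for illustration.
    import hashlib
    import urllib.request

    URL = ("https://huggingface.co/bartowski/all-hands_openhands-lm-1.5b-v0.1-GGUF/"
           "resolve/main/all-hands_openhands-lm-1.5b-v0.1-Q4_K_M.gguf")
    EXPECTED_SHA256 = "30abd7860c4eb5f2f51546389407b0064360862f64ea55cdf95f97c6e155b3c6"

    sha = hashlib.sha256()
    with urllib.request.urlopen(URL) as resp:
        # Hash in 1 MiB chunks so the multi-GB file is never fully in memory.
        for chunk in iter(lambda: resp.read(1 << 20), b""):
            sha.update(chunk)

    print("OK" if sha.hexdigest() == EXPECTED_SHA256 else "checksum mismatch")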
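
And a minimal sketch of calling the model once it has been installed from the gallery, assuming a LocalAI instance exposing its OpenAI-compatible API on the default localhost:8080; the prompt, port, and script are illustrative only.

    # Query the newly added gallery model through LocalAI's OpenAI-compatible API.
    # Assumes LocalAI is running locally and the gallery model has been installed.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8080/v1",  # LocalAI endpoint (assumed default port)
        api_key="not-needed",                 # LocalAI does not require a real key by default
    )

    response = client.chat.completions.create(
        model="all-hands_openhands-lm-1.5b-v0.1",  # the gallery entry's `name` field
        messages=[
            {"role": "user", "content": "Write a Python function that reverses a linked list."}
        ],
    )
    print(response.choices[0].message.content)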
