How to run Safetensor Models? #2589
Unanswered
0195b3a7cb6735967a8f asked this question in Q&A

Hello,
I recently found this project and want to run LLaMA-33B-HF. I saw that you need the vLLM backend for that.
I have the following file structure:
And this is the content of the LLaMA-33B-HF.yaml file:
But whenever I try to run the model, I get the following error:
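For reference, a minimal LocalAI model definition for the vLLM backend looks roughly like the sketch below. The model name and path here are placeholders, not the values from the setup described above:

```yaml
# LLaMA-33B-HF.yaml — a minimal sketch, assuming the HF-format model folder
# (config.json plus *.safetensors shards) is mounted at /models
name: llama-33b-hf        # the name used in API requests
backend: vllm             # route this model to the vLLM backend
parameters:
  model: /models/LLaMA-33B-HF   # local directory with the weights
```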
Replies: 2 comments 1 reply

- Did you find out how to do this?
1 reply
- Same issue: LocalAI will only run gguf models locally. When I try to load a safetensors folder, it tries to connect to huggingface.co. Is there a way to load a local model of this type? Edit: to be clear, I don't get the same HTTP error as above; since I'm offline, I get a "cannot establish connection" error instead.
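If LocalAI keeps reaching for huggingface.co despite the weights being on disk, one thing worth trying is forcing the Hugging Face tooling into offline mode and making sure the configured model path is an absolute local directory rather than a repo id. A docker-compose sketch along those lines (the service name and mount path are assumptions; adjust to your deployment):

```yaml
# docker-compose.override.yaml — hypothetical names, adjust to your setup
services:
  api:
    environment:
      - HF_HUB_OFFLINE=1        # huggingface_hub: never attempt network access
      - TRANSFORMERS_OFFLINE=1  # transformers: resolve everything from local files
    volumes:
      - ./models:/models        # host folder containing LLaMA-33B-HF/
```

With these set, a download attempt fails fast with an explicit offline-mode error instead of a connection timeout, which at least distinguishes a wrong model path from a network problem.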