1 parent 80ae52a · commit b95478f
README.md
@@ -150,10 +150,10 @@ ls ./models
 65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model
 
 # install Python dependencies
-python3 -m pip install torch numpy sentencepiece
+python3 -m pip install -r requirements.txt
 
 # convert the 7B model to ggml FP16 format
-python3 convert-pth-to-ggml.py models/7B/ 1
+python3 convert.py models/7B/
 
 # quantize the model to 4-bits (using method 2 = q4_0)
 ./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2
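For context on the first change: dependencies are now read from requirements.txt rather than named inline. That file is not part of this diff, so the sketch below is only an assumption based on the packages the old command installed; the real file may differ (for example, it may pin versions, or drop torch if convert.py no longer needs it).

# requirements.txt (assumed contents, mirroring the old inline install; not taken from this commit)
numpy
sentencepiece

The second change also drops the trailing 1 from the conversion command: convert-pth-to-ggml.py took an explicit output-type argument (1 = FP16), while convert.py is invoked here with only the model directory, so it presumably selects a suitable output type on its own.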
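On the unchanged quantize step: the trailing 2 selects the quantization type, which the inline comment identifies as q4_0. As a rough, assumed back-of-the-envelope size estimate (q4_0 packs weights into small blocks of 4-bit values with one scale factor per block; the exact block layout is not shown in this diff):

# assumed layout: 32 weights per block, 4 bits each, plus one 4-byte scale
#   bytes per block = 32 * 4 / 8 + 4      = 20 bytes  (~5 bits per weight)
# 7B model, FP16    : 7e9 * 2 bytes       ≈ 14 GB
# 7B model, q4_0    : 7e9 * 20 / 32 bytes ≈ 4.4 GB

Under those assumptions, ggml-model-q4_0.bin comes out at roughly a third of the size of ggml-model-f16.bin, which is the point of the quantize step.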