
# llama.cpp-simple-api-client

The repository contains a Python client script for interacting with an LLM served by llama-server over its HTTP API. The code lets you send user messages and receive the model's responses through various API routes (e.g. `/completions` or `/v1/chat/completions`). Streaming responses are also supported.
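
As a rough sketch of the non-streaming case, the snippet below posts a prompt to the native `/completions` route. The server address (default port 8080), the `n_predict` parameter, and the `content` field of the response are assumptions based on llama-server's usual defaults, not details taken from this repository's code.

```python
import requests

# Assumption: llama-server is running locally on its default port 8080.
BASE_URL = "http://localhost:8080"

def complete(prompt: str, n_predict: int = 128) -> str:
    """Send one non-streaming completion request and return the generated text."""
    payload = {"prompt": prompt, "n_predict": n_predict}
    resp = requests.post(f"{BASE_URL}/completions", json=payload, timeout=120)
    resp.raise_for_status()
    # llama-server's native completion response carries the text in "content".
    return resp.json()["content"]

print(complete("Hello, how are you?"))
```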

It is used for testing, learning, and interacting with a local llama.cpp server. This is a simple chat-client implementation, meant to show how to work with the API and process messages.
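
For the streaming case, a minimal sketch of the OpenAI-compatible chat route might look like the following. The SSE `data:` framing, the `[DONE]` terminator, and the `delta`/`content` fields follow the OpenAI-style schema that llama-server exposes; the port is again assumed to be the default 8080.

```python
import json
import requests

BASE_URL = "http://localhost:8080"  # assumed default llama-server address

def chat_stream(messages):
    """Yield text chunks from a streaming /v1/chat/completions request."""
    payload = {"messages": messages, "stream": True}
    with requests.post(f"{BASE_URL}/v1/chat/completions",
                       json=payload, stream=True, timeout=120) as resp:
        resp.raise_for_status()
        for raw in resp.iter_lines():
            if not raw:
                continue
            line = raw.decode("utf-8")
            if not line.startswith("data: "):
                continue
            data = line[len("data: "):]
            if data == "[DONE]":  # end-of-stream marker in the SSE protocol
                break
            chunk = json.loads(data)
            piece = chunk["choices"][0]["delta"].get("content")
            if piece:
                yield piece

# Usage: print the reply incrementally as chunks arrive.
for piece in chat_stream([{"role": "user", "content": "Hello!"}]):
    print(piece, end="", flush=True)
print()
```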
