feat: implement OpenAI compatible API provider support #500
Conversation
- Add WHODB_OPENAI_COMPATIBLE_LABEL environment variable for custom labeling
- Create OpenAI compatible provider when API key and endpoint are configured
- Add OpenAI-Compatible LLM type with request/response handling
- Support custom models from WHODB_CUSTOM_MODELS environment variable
- Reuse existing OpenAI-compatible API structure without requiring new API code

Fixes #496

Co-authored-by: modelorona <[email protected]>
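The commit message doesn't include code here, but a minimal sketch of how a provider could be assembled from these variables looks roughly like this. Only `WHODB_OPENAI_COMPATIBLE_LABEL` and `WHODB_CUSTOM_MODELS` come from the commit message; the key/endpoint variable names (`WHODB_OPENAI_COMPATIBLE_API_KEY`, `WHODB_OPENAI_COMPATIBLE_ENDPOINT`) and the `LLMProvider` struct are illustrative assumptions, not WhoDB's actual identifiers.

```go
package env

import (
	"os"
	"strings"
)

// LLMProvider describes one LLM backend entry (hypothetical struct for illustration).
type LLMProvider struct {
	Label    string
	Endpoint string
	APIKey   string
	Models   []string
}

// OpenAICompatibleProvider builds a provider entry from environment variables,
// returning nil when the required key/endpoint pair is not configured.
func OpenAICompatibleProvider() *LLMProvider {
	apiKey := os.Getenv("WHODB_OPENAI_COMPATIBLE_API_KEY")     // assumed name
	endpoint := os.Getenv("WHODB_OPENAI_COMPATIBLE_ENDPOINT") // assumed name
	if apiKey == "" || endpoint == "" {
		return nil // provider is only created when both are configured
	}

	// Custom label for the provider, with a sensible default.
	label := os.Getenv("WHODB_OPENAI_COMPATIBLE_LABEL")
	if label == "" {
		label = "OpenAI-Compatible"
	}

	// Comma-separated model list from WHODB_CUSTOM_MODELS.
	var models []string
	if raw := os.Getenv("WHODB_CUSTOM_MODELS"); raw != "" {
		for _, m := range strings.Split(raw, ",") {
			if m = strings.TrimSpace(m); m != "" {
				models = append(models, m)
			}
		}
	}

	return &LLMProvider{Label: label, Endpoint: endpoint, APIKey: apiKey, Models: models}
}
```

Returning nil when the pair is unset keeps the provider out of the frontend's provider list entirely, which matches the "create provider when API key and endpoint are configured" behavior described above.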
Jazzberry bot: Bug Report
@claude could you take a look at the issue by the Jazzberry bot?
Claude finished @modelorona's task — Fixed SSE Handling Bug ✅

I investigated and fixed the SSE handling issue reported by Jazzberry bot.

Bug Analysis: Streaming lines were passed to the JSON parser with their SSE "data: " prefix still attached, and the "[DONE]" control message was not handled, which caused JSON unmarshaling errors in streaming responses.

Fix Applied: strip the "data: " prefix from each SSE line before JSON parsing, and terminate the stream when "[DONE]" is received.

Fixed in commit:
- Strip "data: " prefix from SSE lines before JSON parsing - Handle "[DONE]" control message to terminate stream properly - Fixes JSON unmarshaling errors in streaming responses Co-authored-by: modelorona <[email protected]>
Jazzberry bot: Bug Report
@claude can you look at the latest Jazzberry comment?
Claude finished @modelorona's task — Fixed Truncated Streaming Response Bug ✅

I've analyzed and fixed the "Truncated Streaming Response" bug reported by Jazzberry bot.
Bug Analysis: Found the exact issue in the streaming loop:

```go
// Skip SSE control messages
if line == "[DONE]" {
    break // ❌ This breaks immediately without sending final response
}
```

Problem: When "[DONE]" is received, the function broke immediately, potentially losing the last accumulated response.

Fix Applied: send the accumulated response before breaking on "[DONE]".

Changes in commit:

This ensures complete responses are delivered regardless of whether the stream ends with "[DONE]" or EOF, preventing truncation issues.
- Send accumulated response before breaking on "[DONE]" message
- Ensure complete response is returned when stream ends normally
- Fixes issue where last part of response could be lost

Co-authored-by: modelorona <[email protected]>
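A minimal sketch of the flush-before-break fix follows. `streamResponse`, the channel-based delivery, and the `chunk` payload shape (reused from the earlier sketch) are all illustrative assumptions rather than the repository's actual implementation.

```go
package llm

import (
	"bufio"
	"encoding/json"
	"io"
	"strings"
)

// chunk mirrors the same minimal payload shape as in the previous sketch.
type chunk struct {
	Choices []struct {
		Delta struct {
			Content string `json:"content"`
		} `json:"delta"`
	} `json:"choices"`
}

// streamResponse accumulates delta content and guarantees the final text is
// delivered whether the stream ends with "[DONE]" or with plain EOF.
func streamResponse(body io.Reader, out chan<- string) error {
	defer close(out)
	var acc strings.Builder
	scanner := bufio.NewScanner(body)
	for scanner.Scan() {
		line := strings.TrimPrefix(strings.TrimSpace(scanner.Text()), "data: ")
		if line == "" {
			continue // blank separator between SSE events
		}
		if line == "[DONE]" {
			out <- acc.String() // flush the accumulated response BEFORE breaking
			return nil
		}
		var c chunk
		if err := json.Unmarshal([]byte(line), &c); err != nil {
			return err
		}
		for _, choice := range c.Choices {
			acc.WriteString(choice.Delta.Content)
			out <- acc.String() // incremental update for the client
		}
	}
	// Stream ended with EOF instead of "[DONE]": still deliver the final text.
	out <- acc.String()
	return scanner.Err()
}
```

A caller would run this in a goroutine and range over the channel; because the final accumulated string is sent on both the "[DONE]" and EOF paths, neither ending can truncate the response.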
Jazzberry bot: Bug Report
Any update on this?
@nicopace we're just doing some local testing on our end!
@nicopace we've merged this to main and are creating a new release! Make sure to pull the latest image from Docker :) and let us know how you get on. I was playing around with the same API you tried and didn't run into any issues. You might need to clear your browser cache/storage due to how the providers are stored/managed in the frontend.
This PR implements OpenAI-compatible API provider support to resolve the issue where environment variables for OpenAI-compatible endpoints were not working.
Changes

- Add WHODB_OPENAI_COMPATIBLE_LABEL environment variable for custom labeling
- Create an OpenAI compatible provider when an API key and endpoint are configured
- Add an OpenAI-Compatible LLM type with request/response handling
- Support custom models from the WHODB_CUSTOM_MODELS environment variable
- Reuse the existing OpenAI-compatible API structure without requiring new API code

Fixes #496
Generated with Claude Code