
[Docs] Feedback for self-hosting/environment-variables/model-provider: this document has no LM Studio environment variable examples #7462


Open
Thelast0802 opened this issue Apr 18, 2025 · 2 comments
Labels
📝 Documentation Improvements or additions to documentation | 文档问题

Comments


Thelast0802 commented Apr 18, 2025

📝 Reference

https://github.com/lobehub/lobe-chat/tree/main/docs/self-hosting/environment-variables/model-provider.zh-CN.mdx

🐛 Feedback

I deployed the 1.80.2 database (DB) version locally with Docker and have successfully configured providers such as Ollama and DeepSeek in the env, but I found the documentation has no example of how to configure LM Studio. I tried injecting the following, but it did not take effect:
ENABLED_LM_STUDIO=1
LM_STUDIO_PROXY_URL=http://169.888.88.91:8816/v1
LM_STUDIO_MODEL_LIST=deepseek-r1-distill-qwen-7b

Is setting a default LM Studio model in the environment configuration supported?
If so, how should it be written?
Additionally: could enabling Ollama at the same time cause a conflict that prevents the LM Studio configuration from taking effect?

@lobehubbot
Member

👀 @Thelast0802

Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.

@dosubot added the 📝 Documentation Improvements or additions to documentation | 文档问题 label on Apr 18, 2025

dosubot commented Apr 18, 2025

The Lobe Chat documentation does not currently provide specific examples for configuring LM Studio environment variables. However, you can try the same approach used for other model providers. Here are some steps and considerations:

1. Enable LM Studio: Set ENABLED_LM_STUDIO=1 to enable LM Studio as a provider.

2. Set the proxy URL: Use LM_STUDIO_PROXY_URL to specify the API service address. Ensure the URL is correctly formatted and reachable from where LobeChat runs.

3. Define the model list: Use LM_STUDIO_MODEL_LIST to list the models you want to expose, separated by commas if there are several.

4. Check the configuration: Make sure the syntax matches the examples given for other providers in the documentation, and double-check for typos or incorrect values.

5. Troubleshoot: If the configuration does not take effect, verify that the environment variables are actually reaching the container and that there are no connectivity issues, such as mismatched protocols or CORS restrictions.

If these steps do not resolve the issue, it might be helpful to check for any updates or additional guidance in the Lobe Chat community or documentation.
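
For concreteness, here is a minimal sketch of the configuration described above. It reuses the variable names from the report (ENABLED_LM_STUDIO, LM_STUDIO_PROXY_URL, LM_STUDIO_MODEL_LIST); whether these exact names are what the current release reads is an assumption, since the linked documentation page does not list them. The host address and model names are placeholders.

```bash
# .env — hypothetical LM Studio provider configuration for a self-hosted
# LobeChat Docker deployment. Variable names follow the issue report and
# the pattern of other providers (e.g. OLLAMA_PROXY_URL); verify them
# against the documentation for your release.

# Enable LM Studio as a provider
ENABLED_LM_STUDIO=1

# Address of LM Studio's OpenAI-compatible endpoint. LM Studio listens on
# port 1234 by default. If LobeChat runs in Docker and LM Studio on the
# host, localhost inside the container will not reach it — use the host's
# LAN IP or host.docker.internal (Docker Desktop) instead.
LM_STUDIO_PROXY_URL=http://host.docker.internal:1234/v1

# Comma-separated list of models to expose in the model picker
LM_STUDIO_MODEL_LIST=deepseek-r1-distill-qwen-7b

# Providers are configured independently, so keeping Ollama enabled
# alongside LM Studio is not expected to conflict (an assumption based
# on how other providers behave).
ENABLED_OLLAMA=1
```

Pass the file to the container with docker run --env-file .env (or an env_file entry in docker-compose) and restart the container so the new variables are picked up.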


