
Configuring an OpenAI API proxy URL in the AI provider settings affects the embedding model set via environment variables #7500


Open
samurai00 opened this issue Apr 21, 2025 · 3 comments
Labels
ollama Relative to Ollama Provider and ollama models unconfirm Issue not yet confirmed by maintainers

Comments

@samurai00
Contributor

📦 Deployment environment

Docker

📦 Deployment mode

Server-side mode (lobe-chat-database image)

📌 Software version

v1.81.7

💻 System environment

macOS

🌐 Browser

Chrome

🐛 Problem description

The service is deployed with Docker in server-side mode, with environment variables set so that file vectorization uses a local Ollama bge-m3 model:

- DEFAULT_FILES_CONFIG=embedding_model=ollama/bge-m3
- OLLAMA_PROXY_URL=https://ollama.proxy.example.com

When an API proxy URL for OpenAI is configured in the AI provider settings, embedding requests are sent to OpenAI's API proxy URL (https://openai.proxy.example.com/v1) instead of to the OLLAMA_PROXY_URL address.

Clearing OpenAI's API proxy URL makes vectorization work again.
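
To make the expected behaviour concrete, here is a minimal, hypothetical TypeScript sketch (not LobeChat's actual code; the function name resolveEmbeddingTarget and the fallback values are invented for illustration) of how a DEFAULT_FILES_CONFIG entry such as embedding_model=ollama/bge-m3 would be expected to resolve: the ollama provider should take its base URL from OLLAMA_PROXY_URL, regardless of any OpenAI proxy configured in the AI provider settings.

```ts
// Hypothetical sketch, not the project's actual implementation.
interface EmbeddingTarget {
  provider: string;
  model: string;
  baseURL: string;
}

// Parse DEFAULT_FILES_CONFIG (e.g. "embedding_model=ollama/bge-m3") and pick a
// base URL per provider. The fallback URLs below are assumptions for the sketch.
function resolveEmbeddingTarget(env: NodeJS.ProcessEnv): EmbeddingTarget {
  const entry = (env.DEFAULT_FILES_CONFIG ?? '')
    .split(',')
    .find((kv) => kv.startsWith('embedding_model='));
  const [provider = 'openai', model = 'unknown'] = (entry?.split('=')[1] ?? '').split('/');

  // Expected routing: the ollama provider always uses OLLAMA_PROXY_URL, so a proxy
  // configured for OpenAI in the AI provider settings should not affect it.
  const baseURL =
    provider === 'ollama'
      ? env.OLLAMA_PROXY_URL ?? 'http://127.0.0.1:11434'
      : env.OPENAI_PROXY_URL ?? 'https://api.openai.com/v1';

  return { provider, model, baseURL };
}

console.log(resolveEmbeddingTarget(process.env));
```

With the two environment variables above this prints baseURL: 'https://ollama.proxy.example.com', whereas the behaviour described here sends the embedding request to the OpenAI proxy address instead.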

📷 Reproduction steps

No response

🚦 Expected result

No response

📝 Additional information

No response

@samurai00 samurai00 added the unconfirm (issue not yet confirmed by maintainers) label Apr 21, 2025
@lobehubbot
Member

👀 @samurai00

Thank you for raising an issue. We will look into the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.


@dosubot dosubot bot added the ollama Relative to Ollama Provider and ollama models label Apr 21, 2025

dosubot bot commented Apr 21, 2025

I found a similar discussion regarding issues with proxy URL configurations when deploying Lobe Chat in server mode using Docker. The discussion titled 使用docker run和docker compose部署lobe,环境变量配置未生效 (environment variable configuration not taking effect when deploying lobe with docker run and docker compose) is still open and might provide insights into your issue [1].

To address your problem, ensure that the environment variables are correctly set and that the Ollama proxy URL is prioritized over the OpenAI API proxy address. Here are some steps you can follow:

  1. Verify that the DEFAULT_FILES_CONFIG and OLLAMA_PROXY_URL environment variables are correctly set in your Docker setup:

    DEFAULT_FILES_CONFIG="embedding_model=ollama/bge-m3"
    OLLAMA_PROXY_URL=https://ollama.proxy.example.com
    
  2. Ensure that the OpenAI API proxy address is not set in the AI service provider configuration if you want to use the Ollama proxy URL for embedding requests.

  3. If the issue persists, consider checking the configuration file src/config/llm.ts to ensure that the environment variables for AI service providers are correctly set and that the Ollama proxy URL is being used [2].

If clearing the OpenAI API proxy address resolves the issue, it indicates that the system defaults to using the OpenAI proxy when both are configured. Adjusting the configuration to prioritize the Ollama proxy URL should help.
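
A quick way to separate the two proxies is to send one embedding request straight to OLLAMA_PROXY_URL from inside the container, bypassing LobeChat entirely. The sketch below is a hypothetical diagnostic, not part of the project; it assumes Node 18+ (for the global fetch) and uses Ollama's /api/embeddings endpoint with the bge-m3 model.

```ts
// Hypothetical diagnostic: confirm the Ollama proxy is reachable and serves bge-m3.
const baseURL = process.env.OLLAMA_PROXY_URL ?? 'http://127.0.0.1:11434';

async function main(): Promise<void> {
  const res = await fetch(`${baseURL}/api/embeddings`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'bge-m3', prompt: 'hello world' }),
  });
  if (!res.ok) {
    throw new Error(`Ollama proxy responded with ${res.status} ${res.statusText}`);
  }
  const data = (await res.json()) as { embedding?: number[] };
  console.log(`embedding length: ${data.embedding?.length ?? 0}`);
}

main().catch((err) => {
  console.error('Embedding check failed:', err);
  process.exit(1);
});
```

If this succeeds while LobeChat's vectorization still hits the OpenAI proxy, the problem lies in how the base URL is chosen for the embedding request rather than in the Ollama setup itself.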

