
Document passing custom embedModel or llm per request to avoid global Settings usage #2016

Open

@eduardoxRib

Description

I'm currently integrating LlamaIndexTS into a multi-tenant backend API, where each client has their own OpenAI API key and model configuration.

Right now, it seems the only way to set a custom embedding model or LLM is through the global Settings.embedModel or Settings.llm, like this:

import { Settings } from 'llamaindex';
import { OpenAIEmbedding } from '@llamaindex/embeddings-openai';
import { openai } from '@llamaindex/openai';

Settings.embedModel = new OpenAIEmbedding({ apiKey: 'CLIENT_API_KEY' });
Settings.llm = openai({ apiKey: key, model: 'gpt-4o' }); // key: the client's API key

These globals are then consumed implicitly, for example here:

import { VectorStoreIndex } from 'llamaindex';
import { SimpleDirectoryReader } from '@llamaindex/readers/directory';

const reader = new SimpleDirectoryReader();
const documents = await reader.loadData({
  directoryPath: `${process.env.STORAGE_PATH}/llama`,
});
// fromDocuments embeds the documents using the global Settings.embedModel
const index = await VectorStoreIndex.fromDocuments(documents);

If the global embedModel setting is not set, this fails with: 'Error: Cannot find Embedding, please set Settings.embedModel = ... on the top of your code'

However, this creates issues when handling concurrent requests from multiple clients, because Settings is global and mutable. In a typical API server (such as NestJS or Express), this can lead to race conditions and incorrect behavior, as the sketch below illustrates.
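
To make the hazard concrete, here is a minimal sketch (the handler, tenant keys, and the docsA/docsB document arrays are hypothetical) of how two interleaved requests can leak each other's configuration:

import { Settings, VectorStoreIndex, type Document } from 'llamaindex';
import { OpenAIEmbedding } from '@llamaindex/embeddings-openai';

// Every request mutates the same process-wide global.
async function buildIndexForTenant(tenantApiKey: string, documents: Document[]) {
  Settings.embedModel = new OpenAIEmbedding({ apiKey: tenantApiKey });
  // fromDocuments awaits internally; another request can reassign
  // Settings.embedModel mid-flight, so this tenant's documents may be
  // embedded with a different tenant's key or model.
  return VectorStoreIndex.fromDocuments(documents);
}

// Two tenants hitting the API concurrently (docsA/docsB loaded elsewhere):
await Promise.all([
  buildIndexForTenant('TENANT_A_KEY', docsA),
  buildIndexForTenant('TENANT_B_KEY', docsB),
]);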

Is there any alternative to this that I might have missed?
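
In case it helps others who land here: recent versions of @llamaindex/core appear to ship AsyncLocalStorage-backed scoped setters on Settings (Settings.withEmbedModel and Settings.withLLM), which would confine an override to a single callback instead of mutating the global. A minimal sketch, assuming those helpers exist in your installed version (verify before relying on this):

import { Settings, VectorStoreIndex, type Document } from 'llamaindex';
import { OpenAIEmbedding } from '@llamaindex/embeddings-openai';
import { openai } from '@llamaindex/openai';

async function buildIndexForTenant(tenantApiKey: string, documents: Document[]) {
  const embedModel = new OpenAIEmbedding({ apiKey: tenantApiKey });
  const llm = openai({ apiKey: tenantApiKey, model: 'gpt-4o' });
  // Assumption: withEmbedModel/withLLM run the callback inside an
  // AsyncLocalStorage context, so the override follows this request's
  // async chain without touching the global.
  return Settings.withEmbedModel(embedModel, () =>
    Settings.withLLM(llm, () => VectorStoreIndex.fromDocuments(documents)),
  );
}

Because AsyncLocalStorage context propagates across awaits, this pattern would stay isolated under concurrent requests, unlike direct assignment to Settings.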

Metadata

Assignees: No one assigned

Labels: documentation (Improvements or additions to documentation), good first issue (Good for newcomers), help wanted (Extra attention is needed)
