Support interceptor pattern for LLM responses #947

Open · elroy-bot opened this issue Apr 22, 2025 · 0 comments
I'd like to implement a memory extension for llm. The best fit I see for this is adding a custom "model", which queries memories and appends them to the conversation before passing everything along to another model.
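
For concreteness, a minimal sketch of that wrapper approach, assuming llm's documented plugin API (`MemoryModel`, `get_relevant_memories`, and the hard-coded `gpt-4o-mini` underlying model are all placeholders):

```python
import llm


def get_relevant_memories(text):
    # Placeholder for a real memory-store lookup.
    return ["The user prefers concise answers."]


class MemoryModel(llm.Model):
    # A wrapper "model": registered like any other model, but it only
    # injects memories and then delegates to a fixed underlying model.
    model_id = "memory"

    def execute(self, prompt, stream, response, conversation):
        memories = get_relevant_memories(prompt.prompt)
        if memories:
            prompt.prompt = "\n".join(memories) + "\n\n" + prompt.prompt
        # Delegate instead of re-implementing any model logic.
        underlying = llm.get_model("gpt-4o-mini")
        yield from underlying.execute(prompt, stream, response, conversation)


@llm.hookimpl
def register_models(register):
    register(MemoryModel())
```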

The UX for selecting the underlying model wouldn't be very good, though: the user would need to specify a "model" that is really a model wrapper, and I'd have to re-implement the functionality for selecting a model.

What would be easier/cleaner is to have an execute function that simply calls super().execute() with the same params it received.
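
Something like this hypothetical shape (no such hook exists in llm today; `MemoryInterceptor`, `retrieve_memories`, and the composition step are all illustrative):

```python
import llm


def retrieve_memories(text):
    # Placeholder for a real memory-store lookup.
    return ["The user is working on a Python project."]


class MemoryInterceptor:
    # Hypothetical hook: mutate the prompt, then hand everything through
    # unchanged. Mixed in ahead of a real model class, super().execute()
    # resolves to that model's own execute().
    def execute(self, prompt, stream, response, conversation):
        memories = retrieve_memories(prompt.prompt)
        if memories:
            prompt.prompt = "\n".join(memories) + "\n\n" + prompt.prompt
        yield from super().execute(prompt, stream, response, conversation)


# Hypothetical composition llm itself could perform after normal model
# selection, leaving the model-picking UX untouched:
base_cls = type(llm.get_model("gpt-4o-mini"))
Wrapped = type("Wrapped", (MemoryInterceptor, base_cls), {})
```

Because the interceptor sits ahead of the real model class in the MRO, super().execute() resolves to the selected model's own execute, so none of the model-selection machinery has to be duplicated.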
