I'd like to implement a memory extension for `llm`. The best fit I see for this is adding a custom "model", which queries memories and appends them to the conversation before passing it all along to another model.
The UX for selecting the underlying model would not be great, though: the user would need to specify a "model" that is really a model wrapper, and I'd have to re-implement the functionality for selecting a model.
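A minimal sketch of that wrapper-model approach, assuming the `llm` plugin API where models subclass `llm.Model`, register via the `register_models` hook, and implement `execute()`. Here `retrieve_memories()` and the hard-coded `"gpt-4o-mini"` model ID are hypothetical placeholders; the hard-coding is exactly the model-selection problem described above:

```python
import llm


def retrieve_memories(prompt_text):
    # Hypothetical memory store; a real plugin would query a database
    # of stored memories relevant to prompt_text here.
    return ["(no stored memories yet)"]


class MemoryModel(llm.Model):
    model_id = "memory"

    def execute(self, prompt, stream, response, conversation):
        # Prepend retrieved memories to the user's prompt.
        memories = "\n".join(retrieve_memories(prompt.prompt))
        augmented = f"Relevant memories:\n{memories}\n\n{prompt.prompt}"
        # Delegate to a real model. Making this user-configurable means
        # re-implementing model selection inside the wrapper.
        underlying = llm.get_model("gpt-4o-mini")
        for chunk in underlying.prompt(augmented):
            yield chunk


@llm.hookimpl
def register_models(register):
    register(MemoryModel())
```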
What would be easier/cleaner is to just have an `execute` function that basically calls `super().execute()` with the same params it received.
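A sketch of that alternative, assuming there is a concrete model class to subclass; the `EchoModel` here is a made-up stand-in for such a class. The override only augments the prompt, then defers to the parent's `execute()` with the same arguments:

```python
import llm


class EchoModel(llm.Model):
    # Stand-in for whatever concrete model class would be wrapped.
    model_id = "echo"

    def execute(self, prompt, stream, response, conversation):
        yield prompt.prompt


class MemoryEchoModel(EchoModel):
    model_id = "memory-echo"

    def execute(self, prompt, stream, response, conversation):
        # Splice retrieved memories into the prompt, then pass everything
        # through unchanged to the parent implementation.
        prompt.prompt = "Relevant memories:\n...\n\n" + prompt.prompt
        return super().execute(prompt, stream, response, conversation)
```

This keeps prompting and streaming logic in the parent class, so nothing needs re-implementing; the remaining question is how the plugin would choose which concrete class to subclass.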