fix: adding fixHistory logic for agentic Chat #1050

Merged
merged 4 commits into main from fixReadIssue on Apr 22, 2025

Conversation

pras0131
Contributor

Problem

Solution

License

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@pras0131 pras0131 requested a review from a team as a code owner April 22, 2025 14:33
this.#features.logging.debug('No history message found, but new user message has tool results.')
newUserMessage.userInputMessage.userInputMessageContext.toolResults = undefined
// tool results are empty, so content must not be empty
newUserMessage.userInputMessage.content = 'Conversation history was too large, so it was cleared.'
Contributor

Is this a customer-facing notification, and does it correctly explain the current state?
And what exactly was cleared - the tool use or the whole history? As I read it, the whole history was cleared, which sounds scary.

Contributor

If this is customer-facing, I think it should be clearer.

Contributor Author

This is not a customer-facing notification.

if (message.toolUses) {
    try {
        for (const toolUse of message.toolUses) {
            count += JSON.stringify(toolUse).length
Contributor

This function is called inside the tool-use loop, which is already set to run for up to 100 iterations. I don't know if you've tested it, but I suspect that running a bunch of JSON.stringify calls over the whole conversation history on every iteration will hurt performance quite badly. Did you test the performance of this code with a large history?

Contributor Author

As of now, I focused on getting the functionality working first; so far I have not seen it cause any performance issues. Reference code: https://github.com/aws/aws-toolkit-vscode/blob/a597cd2817bf4bf861f947aab25af8fddcc01ff5/packages/core/src/shared/db/chatDb/chatDb.ts#L419
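
For illustration of one way around the re-serialization cost discussed above (a sketch only, not code from this PR): per-message sizes could be cached once, keyed by message id, so the size check inside the agent loop does not re-run JSON.stringify over the entire history each time. The ChatMessage shape, the messageId field, and the sizeCache map below are assumptions made for this example.

// Hypothetical sketch: cache each message's serialized size once, keyed by
// message id, so repeated size checks inside the agent loop stay cheap.
interface ChatMessage {
    messageId: string
    body?: string
    toolUses?: object[]
}

const sizeCache = new Map<string, number>()

function estimateMessageSize(message: ChatMessage): number {
    const cached = sizeCache.get(message.messageId)
    if (cached !== undefined) {
        return cached
    }
    let count = message.body?.length ?? 0
    for (const toolUse of message.toolUses ?? []) {
        try {
            count += JSON.stringify(toolUse).length
        } catch {
            // Skip values that cannot be serialized.
        }
    }
    sizeCache.set(message.messageId, count)
    return count
}

function estimateHistorySize(history: ChatMessage[]): number {
    return history.reduce((total, message) => total + estimateMessageSize(message), 0)
}

With a cache like this, only messages seen for the first time pay the serialization cost, so the per-iteration work grows with the new message rather than with the whole history.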

Comment on lines +364 to +366
if (currentMessage) {
    this.#chatHistoryDb.fixHistory(tabId, currentMessage, session.conversationId ?? '')
}
Contributor
@kmile kmile Apr 22, 2025

Shouldn't this be outside the agent loop? It doesn't influence the current loop since we add history in memory below (line 375) instead of in the db during the loop.

Contributor

I believe this should be done before we call generateAssistantResponse. The history needs to be fixed before sending the request.
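
For reference, a minimal sketch of the ordering being suggested here (illustrative only, not code from this PR; the helper name, the parameter shapes, and the generateAssistantResponse call shape are assumptions):

// Hypothetical placement: repair the persisted history once, before the request
// is built and sent, instead of on every pass of the agent loop.
async function sendWithFixedHistory(
    chatHistoryDb: { fixHistory: (tabId: string, message: unknown, conversationId: string) => void },
    session: { conversationId?: string; generateAssistantResponse: (input: unknown) => Promise<unknown> },
    tabId: string,
    currentMessage: unknown,
    requestInput: unknown
): Promise<unknown> {
    if (currentMessage) {
        // Fix the stored history before the request goes out so the service
        // never sees an inconsistent or oversized conversation.
        chatHistoryDb.fixHistory(tabId, currentMessage, session.conversationId ?? '')
    }
    return session.generateAssistantResponse(requestInput)
}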

@kmile kmile merged commit 4a7ad34 into main Apr 22, 2025
6 checks passed
@kmile kmile deleted the fixReadIssue branch April 22, 2025 18:31
kmile pushed a commit to kmile/language-servers that referenced this pull request Apr 22, 2025