
refactor: propagate model errors via LSP response errors. #1124


Merged · 3 commits merged into aws:main on Apr 25, 2025

Conversation

@Hweinstock Hweinstock (Contributor) commented Apr 24, 2025

Problem

Solution

  • Follow the LSP paradigm: return a response error to the client and handle it there.
  • This is done in combination with the client-side changes here.
  • Together these produce the following experience on model API failures (screenshot below, with a rough sketch of the server-side flow after it).
(screenshot: backend error message displayed in chat)
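For illustration, here is a minimal sketch of what the server-side handling looks like under this approach, assuming `ResponseError` and `LSPErrorCodes` come from the standard LSP libraries as in the diff below. The handler name, the `ChatParams`/`ChatResult` shapes, and the `callModelBackend` helper are placeholders, not the exact code in this PR:

```typescript
import { ResponseError, LSPErrorCodes } from 'vscode-languageserver-protocol'

// Placeholder types and helper; the real ChatParams/ChatResult shapes live in the
// server runtime's protocol definitions.
interface ChatParams { prompt: string }
interface ChatResult { body?: string }
declare function callModelBackend(params: ChatParams): Promise<ChatResult>

// Sketch of a chat request handler: on a model API failure, surface the backend
// error as an LSP response error instead of a fabricated "successful" ChatResult.
async function onChatPrompt(params: ChatParams): Promise<ChatResult | ResponseError<ChatResult>> {
    try {
        return await callModelBackend(params)
    } catch (err) {
        if (err instanceof Error && err.cause instanceof Error) {
            const backendError = err.cause
            // Send the backend error message directly to the client to be displayed in chat;
            // the optional data payload can carry extra context for the client.
            return new ResponseError<ChatResult>(LSPErrorCodes.RequestFailed, backendError.message, {})
        }
        throw err
    }
}
```

Returning the `ResponseError` (rather than throwing a raw error) keeps the failure inside the normal LSP response channel, which is what lets the client render it in chat.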

License

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

const backendError = err.cause
// Send the backend error message directly to the client to be displayed in chat.
return new ResponseError<ChatResult>(LSPErrorCodes.RequestFailed, backendError.message, {
A contributor commented:
will the error cause message be logged and then the other body be shown to the user?

@Hweinstock Hweinstock (Author) replied Apr 25, 2025

Yeah, see the screenshot I added above. I was debating parsing the requestId and including it in the chat message (done on the agentic-chat VSC branch), but I think it makes more sense for that to live in the logs, since it's a technical detail that's only useful for internal users.

I think ideally we'd add a button to this chat message that opens the logs (like we do in most VSC error messages). However, this is not a P0 to me, since it requires a decent amount of work to allow Flare to open IDE logs.
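For context, a rough sketch of the corresponding client-side handling, assuming the client talks to the server through `vscode-languageclient` (which re-exports `ResponseError`). The request method name, `showErrorInChat`, `renderChatResult`, and `logError` are illustrative placeholders rather than the actual client code:

```typescript
import { LanguageClient, ResponseError } from 'vscode-languageclient/node'

interface ChatParams { prompt: string }                      // placeholder shape
interface ChatResult { body?: string }                       // placeholder shape
declare function showErrorInChat(message: string): void      // hypothetical chat UI hook
declare function renderChatResult(result: ChatResult): void  // hypothetical renderer
declare function logError(message: string, data?: unknown): void

async function sendChatPrompt(client: LanguageClient, params: ChatParams): Promise<void> {
    try {
        const result = await client.sendRequest<ChatResult>('aws/chat/sendChatPrompt', params)
        renderChatResult(result)
    } catch (err) {
        if (err instanceof ResponseError) {
            // The human-readable backend message is shown in the chat UI...
            showErrorInChat(err.message)
            // ...while technical details such as a requestId stay in the logs,
            // since they are mainly useful for internal debugging.
            logError(`chat request failed (code ${err.code})`, err.data)
        } else {
            throw err
        }
    }
}
```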

@jpinkney-aws jpinkney-aws (Contributor) left a comment

Makes sense that we should be handling it this way; we just had to make a quick judgement call for the bug bash yesterday 😅

@volodkevych volodkevych (Contributor) commented Apr 25, 2025

Thanks for the follow-up 👍

@Hweinstock Hweinstock (Author) commented Apr 25, 2025

It should be safe to assume this gets merged together with the client-side changes given the version-fixing setup, but I want to double-check in case they get out of sync, since this is fairly critical. I'll take this out of draft once I've verified.

@Hweinstock Hweinstock (Author) commented Apr 25, 2025

Verified that these PRs can go in either order. Ideally the client-side change lands first, but the other order doesn't break anything.

I did notice a bug where the "thinking" indicator sometimes doesn't go away on the happy path, but that also happens on main and is not related to this change. Here is a demo on main, with hybrid chat on the latest VSC:

thinkingBug.mov

@Hweinstock Hweinstock marked this pull request as ready for review April 25, 2025 13:48
@Hweinstock Hweinstock requested a review from a team as a code owner April 25, 2025 13:48
@jpinkney-aws jpinkney-aws merged commit 455cd76 into aws:main Apr 25, 2025
7 checks passed
@Hweinstock Hweinstock deleted the responseError branch April 25, 2025 14:52