Cancel queued Publish RPCs if IDONTWANT arrives in the meantime #611

Open
raulk opened this issue May 6, 2025 · 6 comments

Comments

@raulk
Member

raulk commented May 6, 2025

See libp2p/rust-libp2p#5751 and sigp/rust-libp2p#570.

@MarcoPolo
Contributor

We effectively do this already here by checking if the message is unwanted right before sending it.
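For illustration, a minimal sketch of that "check the unwanted set right before writing" guard. This is not the actual go-libp2p-pubsub code; all names here (unwantedSet, sendIfWanted) are hypothetical:

```go
package main

import (
	"fmt"
	"sync"
)

type messageID string

// unwantedSet is a hypothetical stand-in for the router's record of
// message IDs that peers have sent IDONTWANT for.
type unwantedSet struct {
	mu  sync.Mutex
	ids map[messageID]struct{}
}

func (u *unwantedSet) add(id messageID) {
	u.mu.Lock()
	u.ids[id] = struct{}{}
	u.mu.Unlock()
}

func (u *unwantedSet) has(id messageID) bool {
	u.mu.Lock()
	defer u.mu.Unlock()
	_, ok := u.ids[id]
	return ok
}

// sendIfWanted checks the unwanted set immediately before the write, so an
// IDONTWANT that arrived while the message sat in the queue suppresses it.
func sendIfWanted(u *unwantedSet, id messageID, payload []byte) {
	if u.has(id) {
		fmt.Println("dropping", string(id), "- peer sent IDONTWANT")
		return
	}
	fmt.Println("writing", len(payload), "bytes for", string(id))
}

func main() {
	u := &unwantedSet{ids: make(map[messageID]struct{})}
	u.add("msg-1")
	sendIfWanted(u, "msg-1", []byte("hello")) // suppressed
	sendIfWanted(u, "msg-2", []byte("world")) // sent
}
```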

@raulk
Member Author

raulk commented May 6, 2025

sendRPC only queues the RPC for sending; the actual send happens asynchronously via a dispatch goroutine a few layers down the stack. Between queueing and the write we could've received an IDONTWANT. That's the cancellation I was referring to here. Some useful breadcrumbs:

One way to approach this is by adding an RPC#Cancelled() bool backed by a closure, supplied at RPC creation time, that queries the gs.unwanted map. But we'd need to guard that with a lock, which breaks the concurrency model and introduces contention, which ain't good. There are other ways.
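To make the shape of that proposal concrete, a hedged sketch with hypothetical names (the RPC struct, Cancelled, and the router's unwanted map follow the description above, not the real go-libp2p-pubsub API). The lock taken inside the closure is exactly the contention concern mentioned:

```go
package main

import (
	"fmt"
	"sync"
)

// RPC carries a closure, supplied at creation time, that the dispatch
// goroutine can consult just before writing the RPC out.
type RPC struct {
	msgID     string
	cancelled func() bool
}

// Cancelled reports whether the RPC should be dropped instead of sent.
func (r *RPC) Cancelled() bool {
	return r.cancelled != nil && r.cancelled()
}

// router is a hypothetical stand-in for the gossipsub router state.
type router struct {
	mu       sync.Mutex
	unwanted map[string]struct{}
}

func (g *router) newRPC(msgID string) *RPC {
	return &RPC{
		msgID: msgID,
		cancelled: func() bool {
			// Every send now takes the router's lock: this is the
			// contention / concurrency-model concern raised above.
			g.mu.Lock()
			defer g.mu.Unlock()
			_, ok := g.unwanted[msgID]
			return ok
		},
	}
}

func main() {
	g := &router{unwanted: make(map[string]struct{})}
	rpc := g.newRPC("msg-1")

	// An IDONTWANT arrives while the RPC sits in the send queue.
	g.mu.Lock()
	g.unwanted["msg-1"] = struct{}{}
	g.mu.Unlock()

	// The dispatch goroutine checks just before writing.
	if rpc.Cancelled() {
		fmt.Println("dropping queued RPC for msg-1")
	}
}
```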

@MarcoPolo
Contributor

This is likely a difference of <1ms on a machine that isn't under high CPU contention. Is the juice worth the squeeze? We can certainly try.

@MarcoPolo
Contributor

I'd be curious to look at our trace metrics and see the time difference between when we send the RPC (in doSendRPC) and when we receive an IDONTWANT for that RPC, and how often these are within 1 ms of each other.
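As a rough illustration of that measurement (hypothetical names, not the pubsub tracer API): record a timestamp per message at send time and compute the gap when the IDONTWANT arrives:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// sendTracker records when each message was written and counts how many
// IDONTWANTs arrive within 1ms of the corresponding send.
type sendTracker struct {
	mu       sync.Mutex
	sentAt   map[string]time.Time
	under1ms int
}

func (t *sendTracker) recordSend(msgID string) {
	t.mu.Lock()
	t.sentAt[msgID] = time.Now()
	t.mu.Unlock()
}

func (t *sendTracker) recordIDontWant(msgID string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if sent, ok := t.sentAt[msgID]; ok {
		if d := time.Since(sent); d < time.Millisecond {
			t.under1ms++
			fmt.Printf("IDONTWANT for %s arrived %v after send\n", msgID, d)
		}
	}
}

func main() {
	t := &sendTracker{sentAt: make(map[string]time.Time)}
	t.recordSend("msg-1")
	t.recordIDontWant("msg-1")
	fmt.Println("sends with IDONTWANT within 1ms:", t.under1ms)
}
```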

@raulk
Member Author

raulk commented May 7, 2025

I don't think this is CPU-bound? I was thinking it would be IO-bound on the multiplexer or transport congestion control (i.e. if the Yamux window is full), which can happen more frequently under bandwidth saturation (especially under bursty loads).

@MarcoPolo
Contributor

ah right, if the transport backpressures on stream writes. That seems more likely 👍
