[Core] Prevent side-channel attacks via cache salting #17045
Conversation
Signed-off-by: Marko Rosenmueller <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Signed-off-by: Marko Rosenmueller <[email protected]>
I wonder how much this helps given that vLLM already initializes hashes with a random number that's different each time vLLM is executed. Related: GHSA-rm76-4mrf-v9r8
Sorry, I hadn't read the paper yet!
I think maintainers of the KV cache should review this, but I like this conceptually from a security perspective. Thank you!
This is related too: #15297
Thanks! I am in touch with @comaniac about taking a look.
Both CI failures (each failing a bit differently) are in EntrypointsTest.
Otherwise LGTM. cc @WoosukKwon @ywang96
```python
cache_salt_keys: list[str] = [request.cache_salt] if (
    start_token_idx == 0 and request.cache_salt) else []
```
IIUC, we only include the cache salt in the first block of a prompt? Is there any particular reason not to include it in all blocks?
It is propagated to all blocks via the hash of the previous block, so adding it to every block would not improve anything.
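For illustration, a minimal sketch (not the actual vLLM hashing code) of why mixing the salt into only the first block's hash is enough: block hashes are chained, so the salt influences every downstream hash.

```python
# Illustrative sketch only, not vLLM's implementation: block hashes are
# chained, so a salt mixed into the first block propagates to all later blocks.
import hashlib


def block_hashes(token_blocks: list[list[int]], cache_salt: str | None) -> list[str]:
    hashes: list[str] = []
    prev_hash = ""
    for i, block in enumerate(token_blocks):
        # The salt enters only the first block's hash; later blocks include
        # the previous hash, which already depends on the salt.
        salt = cache_salt if (i == 0 and cache_salt) else ""
        prev_hash = hashlib.sha256(f"{prev_hash}|{salt}|{block}".encode()).hexdigest()
        hashes.append(prev_hash)
    return hashes


# Identical prompts with different salts yield disjoint hash chains,
# so their KV blocks can never be shared.
blocks = [[1, 2, 3, 4], [5, 6, 7, 8]]
assert block_hashes(blocks, "salt-A") != block_hashes(blocks, "salt-B")
assert block_hashes(blocks, "salt-A") == block_hashes(blocks, "salt-A")
```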
One thing I just realized @comaniac: I added it only to the V1 engine, but that's problematic for V0. In addition to a note in the docs, I think there should be an error for requests with a salt sent to the V0 engine; otherwise V0 users will provide a salt but get no feedback that it is not being used. Any suggestion? Or do you consider V0 deprecated, so that a note in the docs is good enough?
Yeah, we should definitely error out when the V0 engine receives a cache salt.
Context
Prefix caching in vLLM improves inference performance by reusing KV blocks across requests. However, this reuse introduces a potential privacy risk in shared environments, where an attacker could infer prompt reuse via timing side channels as demonstrated in Leaking Secrets from Prefix Caches.
To address this, we propose to isolate caches as described in an RFC: #16016
This PR implements the single-barrier approach from the RFC by adding support for an optional `cache_salt` field in the request schema. When present, the salt is injected into the hash of the first block, ensuring that only requests with the same salt can share cached blocks. This effectively segments cache reuse by salt and protects against timing-based attacks.

The change is compatible with OpenAI requests, as only an additional optional field is added. Users can still use the OpenAI client:
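For example, a request sketch (the server URL, model name, and salt value are placeholders; the vLLM-specific field is passed through the client's `extra_body` parameter):

```python
from openai import OpenAI

# Assumes a vLLM OpenAI-compatible server running locally; the model name
# and salt value below are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Summarize this document."}],
    # cache_salt is the optional field added by this PR; it is passed via
    # extra_body because the OpenAI client does not know about it.
    extra_body={"cache_salt": "tenant-1234-secret-salt"},
)
print(completion.choices[0].message.content)
```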
The scope of cache sharing can be configured per request as needed, e.g., full per-user isolation or cache sharing within a group of users.
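As an illustration (the helper below is hypothetical, not part of this PR), the cache-sharing boundary is simply whatever identity the salt is derived from:

```python
# Hypothetical helper, not part of this PR: choose the cache-sharing boundary
# by choosing how the salt is derived.
def cache_salt_for(user_id: str, org_id: str, per_user_isolation: bool) -> str:
    if per_user_isolation:
        # Full per-user protection: no two users ever share cached blocks.
        return f"user:{user_id}"
    # Group-level sharing: users in the same organization may share blocks,
    # but different organizations never do.
    return f"org:{org_id}"
```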
The change is in line with the cache protection applied by other providers such as OpenAI, while allowing more flexible, fine-grained configuration per request.