[Core] Prevent side-channel attacks via cache salting #17045

Open · wants to merge 2 commits into main

Conversation

@dr75 (Contributor) commented Apr 23, 2025

Context

Prefix caching in vLLM improves inference performance by reusing KV blocks across requests. However, this reuse introduces a potential privacy risk in shared environments, where an attacker could infer prompt reuse via timing side channels as demonstrated in Leaking Secrets from Prefix Caches.

To address this, we propose isolating caches as described in RFC #16016.

Suggested change

This PR implements the single-barrier approach from the RFC by adding support for an optional cache_salt field in the request schema. When present, the salt is injected into the hash of the first block, ensuring that only requests with the same salt can share cached blocks. This effectively segments cache reuse by salt and protects against timing-based attacks. Example request:

{
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Here is a document with details about the world series: ..."},
    {"role": "user", "content": "Who won the world series in 2020?"}
  ],
  "cache_salt": "Z3V2bmV3aGxza3ZubGFoZ3Zud3V3ZWZ2bmd0b3V2bnZmc2xpZ3RoZ2x2aQ=="
}

The change is compatible with the OpenAI API, as it only adds an optional field. Users can still use the OpenAI client:

from openai import OpenAI

# Assumes a vLLM OpenAI-compatible server, e.g. running locally on port 8000.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# cache_salt is not part of the standard OpenAI schema, so it is passed via extra_body.
response = client.chat.completions.create(
    model=model,
    messages=messages,
    extra_body={
        "cache_salt": "Z3V2bmV3aGxza3ZubGFoZ3Zud3V3ZWZ2bmd0b3V2bnZmc2xpZ3RoZ2x2aQ==",
    },
)

The scope of cache sharing can be configured per request as needed, e.g., full per-user isolation or cache sharing within a group of users.
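For illustration, a caller could derive the salt from whatever identity should delimit sharing. The helper below is a hypothetical, deployment-side sketch (derive_cache_salt and SERVER_SECRET are not part of this PR); any stable, hard-to-guess per-scope value works as a salt.

import hashlib
import hmac

# Hypothetical sketch (not part of this PR): derive cache_salt from the identity
# that should delimit cache sharing. Requests with the same salt can share cached
# prefix blocks; requests with different salts cannot.
SERVER_SECRET = b"replace-with-a-per-deployment-secret"

def derive_cache_salt(scope_id: str) -> str:
    """Map a user or group id to a stable, hard-to-guess salt."""
    return hmac.new(SERVER_SECRET, scope_id.encode(), hashlib.sha256).hexdigest()

# Full single-user protection: only this user's requests share cached blocks.
per_user_salt = derive_cache_salt("user:alice")

# Cache sharing within a group of users: the whole team shares cached blocks.
per_group_salt = derive_cache_salt("team:billing")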

The change is in line with the cache protection applied by other providers such as OpenAI, while allowing more flexibility through fine-grained, per-request configuration.


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@dr75 dr75 changed the title Prevent side-channel attacks via cache salting [Core] Prevent side-channel attacks via cache salting Apr 23, 2025
@mergify mergify bot added the documentation, frontend, multi-modality, and v1 labels Apr 23, 2025
@DarkLight1337 DarkLight1337 requested a review from russellb April 23, 2025 10:56
Signed-off-by: Marko Rosenmueller <[email protected]>
@russellb (Member)

I wonder how much this helps given that vLLM already initializes hashes with a random number that's different each time vLLM is executed. related: GHSA-rm76-4mrf-v9r8

@russellb (Member)

> I wonder how much this helps given that vLLM already initializes hashes with a random number that's different each time vLLM is executed. related: GHSA-rm76-4mrf-v9r8

sorry, I hadn't read the paper yet!

@russellb (Member) left a comment

I think maintainers of the KV cache should review this, but I like this conceptually from a security perspective. Thank you!

@dr75 (Contributor, Author) commented Apr 23, 2025

This is related too: #15297

@dr75 (Contributor, Author) commented Apr 23, 2025

> I think maintainers of the KV cache should review this, but I like this conceptually from a security perspective. Thank you!

Thanks! I am in touch with @comaniac about taking a look.

@dr75 (Contributor, Author) commented Apr 23, 2025

Both CI failures (each failing a bit differently) are in EntrypointsTest, in PEFTHelper.from_local_dir() while reading a file, so I guess they are unrelated.

@comaniac (Collaborator) left a comment

Otherwise LGTM. cc @WoosukKwon @ywang96

Comment on lines +386 to +387
cache_salt_keys: list[str] = [request.cache_salt] if (
    start_token_idx == 0 and request.cache_salt) else []
@comaniac (Collaborator)

IIUC, we only include the cache salt in the first block of a prompt? Is there any particular reason not to include it in all blocks?

@dr75 (Contributor, Author)

It is propagated to all blocks via the hash of the previous block, so adding it to every block would not improve anything.
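To illustrate why salting only the first block is sufficient, here is a purely illustrative sketch of chained block hashing (the actual hashing in vLLM differs in its details): because every block's hash includes the parent block's hash, a salt fed into block 0 changes the hash of every subsequent block as well.

import hashlib
from typing import Optional

def block_hashes(token_blocks: list[list[int]],
                 cache_salt: Optional[str]) -> list[str]:
    """Illustrative chained block hashing: the salt enters only block 0,
    but every later hash depends on it through the parent hash."""
    hashes: list[str] = []
    parent = cache_salt or ""           # salt (if any) seeds the chain
    for block in token_blocks:
        digest = hashlib.sha256(f"{parent}|{block}".encode()).hexdigest()
        hashes.append(digest)
        parent = digest                 # the next block hashes over this hash
    return hashes

# Identical prompts with different salts get disjoint hashes for every block,
# so they cannot share any cached KV block.
a = block_hashes([[1, 2, 3], [4, 5, 6]], cache_salt="salt-A")
b = block_hashes([[1, 2, 3], [4, 5, 6]], cache_salt="salt-B")
assert all(x != y for x, y in zip(a, b))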

@dr75 (Contributor, Author) commented Apr 23, 2025

One thing I just realized, @comaniac: I added it only to the V1 engine, but that's problematic for V0. In addition to a note in the docs, I guess there should be an error for requests with a salt sent to the V0 engine; otherwise V0 users will provide a salt but get no feedback that it is not used.

Any suggestions? Or do you consider V0 deprecated, so a note in the docs is good enough?

@comaniac (Collaborator)

> One thing I just realized, @comaniac: I added it only to the V1 engine, but that's problematic for V0. In addition to a note in the docs, I guess there should be an error for requests with a salt sent to the V0 engine; otherwise V0 users will provide a salt but get no feedback that it is not used.
>
> Any suggestions? Or do you consider V0 deprecated, so a note in the docs is good enough?

Yeah, we should definitely error out when the V0 engine receives a cache salt.
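A minimal sketch of what that check could look like (function name and location are hypothetical; the real validation would live in the request-processing path):

# Hypothetical sketch: reject cache_salt when the V1 engine is not in use,
# so V0 callers are not silently left without salting.
def validate_cache_salt(cache_salt: str | None, use_v1_engine: bool) -> None:
    if cache_salt is not None and not use_v1_engine:
        raise ValueError(
            "cache_salt is only supported by the V1 engine; "
            "remove the field or enable V1 (e.g. VLLM_USE_V1=1).")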

Labels: documentation, frontend, multi-modality, v1
3 participants