
[JS] Add support for Anthropic prompt caching #2885

Open
@amondnet

Description


Describe the solution you'd like

Anthropic prompt caching (cache control) is available in a pre-GA (Preview) state on Google Vertex AI. For more information, see the Google Vertex AI documentation on Anthropic prompt caching.

const llmResponse = await ai.generate({
  model: claude3Sonnet, // or another Anthropic model served through Vertex AI
  messages: [
    {
      role: 'system',
      content: [
        {
          text: 'This is an important instruction that can be cached.',
          // Proposed: per-part metadata marking this block for prompt caching.
          custom: {
            cacheControl: {
              type: 'ephemeral',
            },
          },
        },
      ],
    },
    {
      role: 'user',
      content: [{ text: 'What should I do when I visit Melbourne?' }],
    },
  ],
});
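
As additional context for how this might be implemented, here is a minimal sketch (not actual Genkit plugin code) of how the proposed `custom.cacheControl` metadata could be mapped onto Anthropic's native `cache_control` field on a content block. The part shape and the helper name `toAnthropicTextBlock` are assumptions for illustration only.

// Hypothetical sketch: translate the proposed custom.cacheControl metadata
// into Anthropic's native cache_control content-block field. The types and
// helper name are illustrative, not actual Genkit internals.

interface GenkitTextPart {
  text: string;
  custom?: {
    cacheControl?: { type: 'ephemeral' };
  };
}

interface AnthropicTextBlock {
  type: 'text';
  text: string;
  // Anthropic's Messages API accepts an optional cache_control marker on a
  // content block; the prefix up to and including that block becomes cacheable.
  cache_control?: { type: 'ephemeral' };
}

function toAnthropicTextBlock(part: GenkitTextPart): AnthropicTextBlock {
  const block: AnthropicTextBlock = { type: 'text', text: part.text };
  if (part.custom?.cacheControl) {
    block.cache_control = part.custom.cacheControl;
  }
  return block;
}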

Additional context
The Anthropic Claude models offer prompt caching to reduce latency and costs
when reusing the same content in multiple requests. When you send a query, you
can cache all or specific parts of your input so that subsequent queries can use
the cached results from the previous request. This avoids additional compute and
network costs. Caches are unique to your Google Cloud project and cannot be used
by other projects.
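
For reference, below is a sketch of where the marker sits in the underlying raw request that Anthropic's Messages API on Vertex AI expects: `cache_control` is set on the content block whose prefix should be cached, and subsequent requests sharing that prefix reuse the cache. Field names follow Anthropic's published prompt caching format; the concrete values are illustrative.

// Illustrative raw request body for Anthropic on Vertex AI, showing where
// the cache_control marker sits. Values are examples only.
const rawRequest = {
  anthropic_version: 'vertex-2023-10-16',
  max_tokens: 1024,
  system: [
    {
      type: 'text',
      text: 'This is an important instruction that can be cached.',
      // Marks the prefix ending here as cacheable for subsequent requests.
      cache_control: { type: 'ephemeral' },
    },
  ],
  messages: [
    {
      role: 'user',
      content: [{ type: 'text', text: 'What should I do when I visit Melbourne?' }],
    },
  ],
};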

For details about how to structure your prompts, see the Anthropic prompt caching documentation.
