Describe the solution you'd like
Anthropic cache control is currently in a Pre-GA (Preview) state on Google Vertex AI; for more, see the Google Vertex Anthropic prompt caching documentation. Genkit could expose it through per-part message metadata, for example:
// `ai` is a Genkit instance configured with the Vertex AI Model Garden plugin,
// and `claude3Sonnet` is that plugin's Anthropic model reference.
const llmResponse = await ai.generate({
  model: claude3Sonnet, // or another Anthropic model
  messages: [
    {
      role: 'system',
      content: [
        {
          text: 'This is an important instruction that can be cached.',
          // Proposed: per-part metadata that the plugin would translate
          // into Anthropic's cache_control block.
          custom: {
            cacheControl: {
              type: 'ephemeral',
            },
          },
        },
      ],
    },
    {
      role: 'user',
      content: [{ text: 'What should I do when I visit Melbourne?' }],
    },
  ],
});
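In this sketch, `custom` acts as a pass-through for provider-specific options: the Vertex Anthropic model implementation would presumably map `cacheControl` onto Anthropic's `cache_control` content-block field, leaving the core generate API unchanged.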
Additional context
The Anthropic Claude models offer prompt caching to reduce latency and costs
when reusing the same content in multiple requests. When you send a query, you
can cache all or specific parts of your input so that subsequent queries can use
the cached results from the previous request. This avoids additional compute and
network costs. Caches are unique to your Google Cloud project and cannot be used
by other projects.
For details about how to structure your prompts, see the Anthropic prompt caching documentation.
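For reference, here is a minimal sketch of the request the plugin would ultimately need to send to Vertex, using Anthropic's Vertex SDK (@anthropic-ai/vertex-sdk). The project ID, region, and model version below are illustrative assumptions; cache_control: { type: 'ephemeral' } is the marker Anthropic's API expects on cacheable content blocks.

import { AnthropicVertex } from '@anthropic-ai/vertex-sdk';

// Illustrative values; substitute your own project and region.
const client = new AnthropicVertex({
  projectId: 'my-gcp-project', // hypothetical project ID
  region: 'us-east5',
});

const response = await client.messages.create({
  model: 'claude-3-5-sonnet-v2@20241022', // example Vertex model ID
  max_tokens: 1024,
  system: [
    {
      type: 'text',
      text: 'This is an important instruction that can be cached.',
      // Anthropic's prompt-caching marker; the Genkit `custom.cacheControl`
      // metadata above would map onto this field.
      cache_control: { type: 'ephemeral' },
    },
  ],
  messages: [
    { role: 'user', content: 'What should I do when I visit Melbourne?' },
  ],
});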