
Using Cached Content

🗄️ Context Caching (10 min · 200 base XP)

Querying the Cache

Once a cache is created, you instantiate a GenerativeModel that points to the cache instead of resending the large context with every request.

from vertexai.generative_models import GenerativeModel

# `cache` is the CachedContent object created in the previous step;
# the model is bound to it instead of receiving the context again.
model = GenerativeModel.from_cached_content(cached_content=cache)

# Queries now run against the cached context
response = model.generate_content("When did the CEO enter the room?")
print(response.text)
SYNAPSE VERIFICATION
QUERY 1 // 1
How do you use a cached object after creating it?
  A) You append the cache ID to your prompt text
  B) You use the GenerativeModel.from_cached_content method
  C) You pass it in the safety_settings
  D) You upload it to Google Cloud Storage
Google Vertex AI Academy | Free Interactive Course | Infinity AI