create_context_cache#

langchain_google_vertexai.utils.create_context_cache(model: ChatVertexAI, messages: List[BaseMessage], expire_time: datetime | None = None, time_to_live: timedelta | None = None, tools: Sequence[Tool | Tool | _ToolDictLike | BaseTool | Type[BaseModel] | FunctionDescription | Callable | FunctionDeclaration | Dict[str, Any]] | None = None, tool_config: _ToolConfigDict | None = None) → str[source]#

Creates a context cache for content used with a given model.

Parameters:
  • model (ChatVertexAI) – ChatVertexAI model. Must be at least a Gemini 1.5 model (Pro or Flash).

  • messages (List[BaseMessage]) – List of messages to cache.

  • expire_time (datetime | None) – Timestamp at which this resource is considered expired. At most one of expire_time and time_to_live can be set; if neither is set, the default TTL on the API side will be used (currently 1 hour).

  • time_to_live (timedelta | None) – The TTL for this resource. If provided, the expiration time is computed as created_time + TTL. At most one of expire_time and time_to_live can be set; if neither is set, the default TTL on the API side will be used (currently 1 hour).

  • tools (Sequence[Tool | Tool | _ToolDictLike | BaseTool | Type[BaseModel] | FunctionDescription | Callable | FunctionDeclaration | Dict[str, Any]] | None) – A list of tool definitions to bind to this chat model. Can be a pydantic model, callable, or BaseTool. Pydantic models, callables, and BaseTools will be automatically converted to their schema dictionary representation.

  • tool_config (_ToolConfigDict | None) – Optional. Immutable. Tool config. This config is shared for all tools.

Raises:

ValueError – If the model doesn’t support context caching.

Returns:

String with the identifier of the created cache.

Return type:

str
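A short sketch of how this function might be used, and of the expiry semantics described above. The usage portion (model names, the `cached_content` argument, and the `long_document_text` variable) is an illustrative assumption requiring Google Cloud credentials, so it is shown as comments; the `resolve_expiration` helper below is not part of the library, only a local illustration of the documented expire_time / time_to_live rules:

```python
from datetime import datetime, timedelta, timezone

# Hedged usage sketch (assumes credentials are configured; not executed here):
#
#   from langchain_google_vertexai import ChatVertexAI
#   from langchain_google_vertexai.utils import create_context_cache
#   from langchain_core.messages import SystemMessage
#
#   model = ChatVertexAI(model_name="gemini-1.5-pro-002")
#   cache_id = create_context_cache(
#       model,
#       messages=[SystemMessage(content=long_document_text)],
#       time_to_live=timedelta(hours=2),
#   )
#   # The returned identifier can then be passed back to the model
#   # (the `cached_content` parameter is an assumption here):
#   cached_model = ChatVertexAI(
#       model_name="gemini-1.5-pro-002", cached_content=cache_id
#   )

# Illustrative helper mirroring the documented expiry rules (NOT library code).
def resolve_expiration(created_time, expire_time=None, time_to_live=None,
                       default_ttl=timedelta(hours=1)):
    # At most one of expire_time and time_to_live can be set.
    if expire_time is not None and time_to_live is not None:
        raise ValueError("Set at most one of expire_time and time_to_live.")
    if expire_time is not None:
        return expire_time
    if time_to_live is not None:
        return created_time + time_to_live  # expiration = created_time + TTL
    return created_time + default_ttl       # API-side default (currently 1 hour)

created = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(resolve_expiration(created, time_to_live=timedelta(hours=2)))
```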