promptguard.cache

class promptguard.PromptCache(max_size=10000, ttl_seconds=3600)[source]

Bases: object

In-memory LRU cache for prompt analysis results.

Uses MD5 hash of the prompt text as the cache key. Entries are evicted in least-recently-used order when the cache reaches max_size. An optional TTL causes stale entries to be discarded on read.

Parameters:
  • max_size (int) – Maximum number of entries to hold before evicting the LRU entry. Must be a positive integer. Defaults to 10,000.

  • ttl_seconds (int | None) – Optional time-to-live in seconds. Entries older than this value are treated as cache misses and silently removed on read. Pass None to disable expiry. Defaults to 3600 (one hour).
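The combination of MD5 keying, LRU eviction, and read-time TTL expiry described above can be sketched with the standard library. This is an illustrative sketch of the documented behavior, not the actual PromptCache implementation; the class name LRUTTLCache and its internal layout are assumptions.

```python
import hashlib
import time
from collections import OrderedDict


class LRUTTLCache:
    """Sketch of the documented behavior: MD5-keyed, LRU-evicting,
    with optional TTL. Not the real PromptCache implementation."""

    def __init__(self, max_size=10000, ttl_seconds=3600):
        if max_size <= 0:
            raise ValueError("max_size must be a positive integer")
        self.max_size = max_size
        self.ttl_seconds = ttl_seconds
        self._entries = OrderedDict()  # md5 hex key -> (timestamp, result)

    @staticmethod
    def _key(prompt):
        # MD5 hash of the prompt text serves as the cache key.
        return hashlib.md5(prompt.encode("utf-8")).hexdigest()

    def get(self, prompt):
        key = self._key(prompt)
        entry = self._entries.get(key)
        if entry is None:
            return None
        stored_at, result = entry
        # Stale entries are discarded on read and count as misses.
        if self.ttl_seconds is not None and time.time() - stored_at > self.ttl_seconds:
            del self._entries[key]
            return None
        self._entries.move_to_end(key)  # mark as most recently used
        return result

    def set(self, prompt, result):
        key = self._key(prompt)
        if key in self._entries:
            self._entries.move_to_end(key)
        elif len(self._entries) >= self.max_size:
            self._entries.popitem(last=False)  # evict the LRU entry
        self._entries[key] = (time.time(), result)

    def size(self):
        return len(self._entries)
```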

__init__(max_size=10000, ttl_seconds=3600)[source]
get(prompt)[source]

Return the cached RiskScore for prompt, or None on miss.

A miss is returned when:

  • the prompt has never been cached, or

  • the cached entry has exceeded ttl_seconds.
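The second miss condition reduces to a timestamp comparison at read time. A minimal sketch (the helper name is_expired is hypothetical):

```python
import time


def is_expired(stored_at, ttl_seconds, now=None):
    """Return True if an entry stored at `stored_at` (epoch seconds)
    has outlived ttl_seconds. A ttl_seconds of None never expires."""
    if ttl_seconds is None:
        return False
    if now is None:
        now = time.time()
    return now - stored_at > ttl_seconds
```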

set(prompt, result)[source]

Store result in the cache keyed by prompt.

If the cache is at capacity, the least-recently-used entry is evicted before the new entry is inserted.
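The eviction order can be illustrated with collections.OrderedDict, a common way to implement LRU behavior (an illustration only; the actual storage used by PromptCache is not documented here):

```python
from collections import OrderedDict

# Insertion order: k1 is oldest, k3 is newest.
entries = OrderedDict([("k1", "r1"), ("k2", "r2"), ("k3", "r3")])

# Reading an entry marks it most recently used.
entries.move_to_end("k1")

# At capacity, the entry at the front is least recently used.
evicted_key, evicted_result = entries.popitem(last=False)
print(evicted_key)  # k2 — the oldest entry that was never touched
```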

clear()[source]

Remove all cached entries.

size()[source]

Return the number of currently cached entries.

stats()[source]

Return a dictionary of cache statistics.
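The exact keys of the statistics dictionary are not documented here. A plausible sketch of the hit/miss bookkeeping such a method might report (class and key names are hypothetical):

```python
class CacheStats:
    """Hypothetical hit/miss bookkeeping; key names are illustrative only."""

    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record(self, hit):
        # Called once per lookup with whether it was a cache hit.
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    def as_dict(self):
        total = self.hits + self.misses
        return {
            "hits": self.hits,
            "misses": self.misses,
            "hit_rate": self.hits / total if total else 0.0,
        }
```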