Monitor cache performance and optimize LLM token usage. The cache statistics report the counters below; a minimal tracking sketch follows the list.
- Unique items with saved decisions: distinct items whose decisions are stored in the cache
- Times the cache was used: total cache hits across runs
- AI calls avoided: LLM requests skipped because a cached decision was reused
- Estimated LLM tokens saved: approximate token usage avoided by serving cached decisions
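
The metric names above map naturally onto a small set of counters. As a minimal sketch, assuming a Python implementation with a hypothetical `DecisionCache` class and an assumed fixed average-tokens-per-call constant (the real tool may estimate tokens differently), tracking could look like this:

```python
from dataclasses import dataclass


@dataclass
class CacheStats:
    unique_items: int = 0   # unique items with saved decisions
    hits: int = 0           # times the cache was used
    calls_avoided: int = 0  # AI calls avoided (equal to hits in this sketch)
    tokens_saved: int = 0   # estimated LLM tokens saved


class DecisionCache:
    # Assumed rough average tokens per LLM call, used only for the estimate.
    AVG_TOKENS_PER_CALL = 500

    def __init__(self) -> None:
        self._store: dict[str, str] = {}
        self.stats = CacheStats()

    def get(self, item_key: str) -> str | None:
        """Return a saved decision, updating hit counters on a cache hit."""
        decision = self._store.get(item_key)
        if decision is not None:
            self.stats.hits += 1
            self.stats.calls_avoided += 1
            self.stats.tokens_saved += self.AVG_TOKENS_PER_CALL
        return decision

    def put(self, item_key: str, decision: str) -> None:
        """Save a decision; each distinct item is counted only once."""
        if item_key not in self._store:
            self.stats.unique_items += 1
        self._store[item_key] = decision


if __name__ == "__main__":
    cache = DecisionCache()
    cache.put("item-42", "keep")
    cache.get("item-42")  # cache hit: one LLM call avoided
    print(cache.stats)
```

Counting calls avoided as identical to hits is a simplification; an implementation that retries or batches LLM requests would need to track the two separately.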