Cache
If your application often requests the same completion, caching can save you money by reducing the number of API calls you make to the LLM provider, and it can speed up your application by serving repeated requests from the cache instead of the network.
Cache Nodes:
- InMemory Cache
- InMemory Embedding Cache
- Momento Cache
- Redis Cache
- Redis Embeddings Cache
- Upstash Redis Cache
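All of the nodes above share the same core idea, differing mainly in where the cached completions live (process memory, Redis, Momento, etc.). The following is a minimal sketch of that idea in plain Python, not the implementation of any specific node; `fake_llm` is a hypothetical stand-in for a real provider call.

```python
call_count = 0  # tracks how many times the "LLM" is actually invoked

def fake_llm(prompt: str) -> str:
    """Hypothetical stand-in for an expensive LLM API call."""
    global call_count
    call_count += 1
    return f"completion for: {prompt}"

cache: dict[str, str] = {}  # in-memory cache keyed by the exact prompt

def cached_completion(prompt: str) -> str:
    # On a repeat prompt, return the stored completion; otherwise
    # call the (expensive) LLM once and store the result.
    if prompt not in cache:
        cache[prompt] = fake_llm(prompt)
    return cache[prompt]

first = cached_completion("What is caching?")
second = cached_completion("What is caching?")  # served from cache
```

Backed by a dict, the cache only lasts for the process lifetime (like the InMemory Cache node); the Redis, Momento, and Upstash variants persist entries externally so cache hits survive restarts and can be shared across instances.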