Blazing Fast Cache Performance
Lock-minimized architecture with sharded maps and async IO, optimized for multi-core systems and maximum throughput in distributed cache clusters.
Surgical cache invalidation through intelligent tagging.
Atomic operations at wire speed for distributed systems.
1M+ ops/sec throughput with <0.8ms P95 latency.
Built from the ground up for modern cloud-native applications and distributed microservices architectures.
Attach multiple tags to cache entries for surgical purges. Invalidate by user, tenant, or any custom dimension instantly across distributed backend systems.
Thread-safe INCR, DECR, and ADD operations perfect for rate limiting, counters, and distributed coordination.
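As a rough sketch of how atomic counters can back a rate limiter from a client, here is a minimal TypeScript example over the HTTP/JSON interface. The `/increment` path, payload fields, and response shape are assumptions made for illustration, not the documented TagCache API.

```typescript
// Hypothetical per-user rate limiter built on an atomic increment endpoint.
// BASE_URL, the /increment path, and the JSON fields are illustrative
// assumptions, not the documented TagCache API.
const BASE_URL = "http://localhost:8080";

async function allowRequest(userId: string, limit = 100): Promise<boolean> {
  // One counter per user per one-minute window.
  const windowKey = `rate:${userId}:${Math.floor(Date.now() / 60_000)}`;

  // Atomic INCR with a TTL so the counter expires with its window.
  const res = await fetch(`${BASE_URL}/increment`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ key: windowKey, by: 1, ttl_seconds: 60 }),
  });
  const { value } = (await res.json()) as { value: number };

  // Allow the request only while the per-minute budget remains.
  return value <= limit;
}
```

Because the increment is atomic on the server, concurrent callers never race on the counter, which is what makes the pattern safe for rate limiting and distributed coordination.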
Choose HTTP with JSON for simplicity or binary TCP protocol for maximum performance and minimal overhead.
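For the HTTP/JSON option, a plain fetch call is enough for a set/get round trip, as in the TypeScript sketch below. The `/keys` path, PUT/GET semantics, and field names are assumptions for this sketch rather than the official endpoint layout.

```typescript
// Minimal set/get round trip over the HTTP/JSON protocol.
// The /keys path and JSON field names are illustrative assumptions.
const BASE_URL = "http://localhost:8080";

async function setKey(key: string, value: unknown, ttlSeconds = 300): Promise<void> {
  await fetch(`${BASE_URL}/keys/${encodeURIComponent(key)}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ value, ttl_seconds: ttlSeconds }),
  });
}

async function getKey<T>(key: string): Promise<T | null> {
  const res = await fetch(`${BASE_URL}/keys/${encodeURIComponent(key)}`);
  if (res.status === 404) return null; // treat a miss as null
  const body = (await res.json()) as { value: T };
  return body.value;
}
```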
Beautiful React-based admin dashboard with real-time metrics, structured logging, health checks, and comprehensive performance monitoring for complete cache visibility.
Authentication, network hardening, and TLS support ensure your cached data remains protected. Learn about security configuration.
Docker images, Kubernetes manifests, systemd units, and cloud templates for seamless deployment.
Memory-safe, zero-cost abstractions, and fearless concurrency for unmatched reliability and performance.
DashMap-powered sharding eliminates lock contention, enabling true parallel access across CPU cores for maximum throughput.
Choose your preferred installation method
Tag-based invalidation makes cache management intuitive
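The sketch below shows what that workflow could look like from TypeScript over the HTTP/JSON interface: entries are written with tenant and user tags, and a single call then purges everything for one tenant. Endpoint paths and JSON shapes are illustrative assumptions, not the official TagCache reference.

```typescript
// Write entries carrying tenant and user tags, then purge one tenant.
// Endpoint paths and JSON shapes are illustrative assumptions.
const BASE_URL = "http://localhost:8080";

async function cacheUserProfile(tenantId: string, userId: string, profile: object): Promise<void> {
  await fetch(`${BASE_URL}/keys/${encodeURIComponent(`profile:${userId}`)}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      value: profile,
      ttl_seconds: 3600,
      // Tags are the dimensions a later purge can target.
      tags: [`tenant:${tenantId}`, `user:${userId}`],
    }),
  });
}

async function invalidateTenant(tenantId: string): Promise<void> {
  // One call removes every entry tagged with this tenant,
  // no matter which keys the entries were stored under.
  await fetch(`${BASE_URL}/invalidate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ tags: [`tenant:${tenantId}`] }),
  });
}
```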
Native cache client support for all major programming languages and cloud platforms
First-class runtime and native async client library. View API →
TypeScript client with connection pooling. Get Started →
Async support via httpx and aiohttp. Examples →
High-performance client with connection reuse. Samples →
PSR-compliant cache adapter with tag support. Examples →
Helm charts and horizontal scaling guides. Deploy →
Spring Boot integration with connection pooling. Examples →
Async HttpClient patterns for microservices. Examples →
Transparent about current constraints and exciting features coming soon.
Currently no persistence to disk. Cache data is lost on restart. Future: Optional persistence with configurable write-ahead logging.
No built-in replication or clustering support. Roadmap: Consistent hashing with peer discovery and automatic failover.
Binary protocol lacks compression layer for large payloads. Planned: Optional compression with configurable algorithms (LZ4, Zstd).
No automatic limits on unique tag counts. Monitor memory usage with high tag diversity. Future: Configurable tag limits and cleanup policies.
🗺️ Roadmap Priority: Clustering and replication are top priorities for v2.0, followed by persistence options and compression enhancements.
Adopt TagCache as your unified low-latency caching layer with surgical invalidation and observability from day one.
Everything you need to build with TagCache