Caching
Deep dive into Prism's multi-layer caching system and how to optimize it for your workload.
Cache Architecture
Prism implements a sophisticated three-tier caching system:
Cache Layers
┌─────────────────────────────────────────────────────────────┐
│ CacheManager │
│ (Orchestrates all caches, tracks chain state) │
└───────────────┬─────────────┬──────────────┬────────────────┘
│ │ │
┌───────▼────┐ ┌────▼────┐ ┌─────▼──────┐
│ BlockCache │ │LogCache │ │TransactionC│
│ │ │ │ │ache │
└────────────┘ └─────────┘ └────────────┘

1. Block Cache
Caches block headers and bodies with O(1) access for recent blocks.
Features:
Hot Window: Ring buffer for the most recent N blocks (default: 200)
LRU Cache: Least-recently-used cache for older blocks
Separate Storage: Headers and bodies cached independently
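The hot-window + LRU layering can be sketched in a few lines. This is an illustrative Python sketch under assumptions, not Prism's implementation; the class name and capacity parameters are invented here:

```python
from collections import OrderedDict

class BlockCacheSketch:
    """Illustrative two-tier block cache: a sliding window over the most
    recent N blocks, plus an LRU map holding blocks demoted from it."""

    def __init__(self, hot_window_size=200, lru_capacity=10000):
        self.hot_window_size = hot_window_size
        self.hot = {}                 # block_number -> block (recent window)
        self.lru = OrderedDict()      # older blocks, least-recently-used first
        self.lru_capacity = lru_capacity

    def insert(self, number, block):
        self.hot[number] = block
        # Slide the hot window: demote blocks that fell out of it.
        cutoff = max(self.hot) - self.hot_window_size
        for old in [n for n in self.hot if n < cutoff]:
            self._demote(old, self.hot.pop(old))

    def _demote(self, number, block):
        self.lru[number] = block
        self.lru.move_to_end(number)
        if len(self.lru) > self.lru_capacity:
            self.lru.popitem(last=False)   # evict least recently used

    def get(self, number):
        if number in self.hot:             # O(1) hit for recent blocks
            return self.hot[number]
        if number in self.lru:
            self.lru.move_to_end(number)   # refresh LRU position
            return self.lru[number]
        return None
```

Recent blocks stay in the hot window; anything pushed out of the window survives in the LRU tier until capacity forces eviction.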
Use Cases:
eth_getBlockByNumber
eth_getBlockByHash
Memory Formula:
Memory = (hot_window_size × avg_block_size) + (max_headers × header_size) + (max_bodies × body_size)

2. Log Cache
Advanced caching with partial-range fulfillment using roaring bitmaps.
Features:
Bitmap Indexing: Track which blocks have been queried
Partial Fulfillment: Serve cached logs + fetch missing ranges
Deduplicated Storage: Store each log only once
Range Tracking: Know which ranges are fully cached
Use Cases:
eth_getLogs queries with block ranges
Memory Formula:
Memory = (max_exact_results × avg_log_size) + (max_bitmap_entries × 8 bytes)

3. Transaction Cache
Caches transactions and receipts with automatic log resolution.
Features:
Transaction Storage: Raw transaction data
Receipt Storage: Transaction receipts
Log Resolution: Receipts reference logs from log cache
Use Cases:
eth_getTransactionByHash
eth_getTransactionReceipt
Memory Formula:
Memory = (max_transactions × avg_tx_size) + (max_receipts × avg_receipt_size)

Cache Status
Every response includes an X-Cache-Status header indicating how it was served.
Status Values
Status                  Description                             Upstream Calls
MISS                    No cached data, fetched from upstream   1+
FULL                    Complete cache hit                      0
PARTIAL                 Some cached, some fetched               1+
EMPTY                   Cached empty result                     0
PARTIAL_WITH_FAILURES   Partial cache + some failures           1+
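The status values reduce to a simple decision rule over the cache-lookup outcome. A hypothetical sketch of that mapping (the function and parameter names are invented here, not taken from Prism's source):

```python
def cache_status(cached, missing, fetch_failures=0, cached_empty=False):
    """Map a cache-lookup outcome to an X-Cache-Status value.
    `cached`/`missing` count the requested items found in / absent
    from the cache; `cached_empty` marks a stored empty result."""
    if cached_empty:
        return "EMPTY"                  # cached empty result, no upstream calls
    if missing == 0:
        return "FULL"                   # complete hit, no upstream calls
    if cached == 0:
        return "MISS"                   # nothing cached, fetch everything
    if fetch_failures > 0:
        return "PARTIAL_WITH_FAILURES"  # partial hit, some fetches failed
    return "PARTIAL"                    # partial hit, fetch only the rest
```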
Cache Status Flow
Request → Check Cache
│
├─ All data cached ────────────→ FULL
│
├─ No data cached ─────────────→ MISS → Fetch → Cache
│
├─ Some cached, some missing ──→ PARTIAL → Fetch missing → Merge
│
└─ Cached as empty ────────────→ EMPTY

Example: Observing Cache Status
# First request - MISS
curl -s -D - http://localhost:3030/ -H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1000000",false],"id":1}' \
| grep X-Cache-Status
# X-Cache-Status: MISS
# Second request - FULL HIT
curl -s -D - http://localhost:3030/ -H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1000000",false],"id":1}' \
| grep X-Cache-Status
# X-Cache-Status: FULL

Partial Cache Fulfillment
The killer feature for eth_getLogs: intelligently combine cached and fresh data.
How It Works
Request arrives: eth_getLogs for blocks 1000-1200
Cache check: Blocks 1000-1100 are cached, 1101-1200 are not
Retrieve cached: Get logs for 1000-1100 from cache
Fetch missing: Request 1101-1200 from upstream
Merge results: Combine cached + fetched logs
Cache new data: Store 1101-1200 for future use
Return: Response with X-Cache-Status: PARTIAL
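The steps above boil down to interval arithmetic over block ranges. A minimal sketch, with the simplifying assumption of a plain list of cached ranges instead of Prism's roaring bitmaps (the function name is invented here):

```python
def plan_fetch(request, cached_ranges):
    """Given a requested (from, to) block range and a list of cached
    (from, to) ranges, return the sub-ranges that must be fetched
    upstream. All ranges are inclusive."""
    start, end = request
    missing = []
    cursor = start
    for lo, hi in sorted(cached_ranges):
        if hi < cursor or lo > end:
            continue                          # cached range outside the request
        if lo > cursor:
            missing.append((cursor, lo - 1))  # gap before this cached range
        cursor = max(cursor, hi + 1)
    if cursor <= end:
        missing.append((cursor, end))         # tail not covered by any cache
    return missing
```

For the walkthrough above, `plan_fetch((1000, 1200), [(1000, 1100)])` yields `[(1101, 1200)]`: logs for 1000-1100 come from cache and only 1101-1200 are fetched upstream.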
Example
# Query logs for blocks 16000000-16001000
curl -X POST http://localhost:3030/ -H "Content-Type: application/json" -d '{
"jsonrpc": "2.0",
"method": "eth_getLogs",
"params": [{
"fromBlock": "0xF42400",
"toBlock": "0xF427E8",
"address": "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
}],
"id": 1
}' -v 2>&1 | grep X-Cache-Status

First call: X-Cache-Status: MISS (nothing cached)
Second call (slightly extended range):
# Query blocks 16000000-16002000 (extended by 1000 blocks)
curl -X POST http://localhost:3030/ -H "Content-Type: application/json" -d '{
"jsonrpc": "2.0",
"method": "eth_getLogs",
"params": [{
"fromBlock": "0xF42400",
"toBlock": "0xF42BD0",
"address": "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
}],
"id": 1
}' -v 2>&1 | grep X-Cache-Status

Result: X-Cache-Status: PARTIAL
Cached: blocks 16000000-16001000
Fetched: blocks 16001001-16002000
Merged and returned
Performance Impact
Without Partial Caching:
Cache miss for range 16000000-16002000
Fetch all 2000 blocks from upstream
~500ms latency
With Partial Caching:
Retrieve 1000 blocks from cache (~5ms)
Fetch 1000 blocks from upstream (~250ms)
Total: ~255ms (50% faster)
Cache Invalidation
Prism handles chain reorganizations automatically.
Reorg Detection
The ReorgManager continuously monitors the chain for reorganizations:
┌──────────────────────────────────────────────────────────┐
│ ReorgManager │
├──────────────────────────────────────────────────────────┤
│ 1. Monitor chain tip via WebSocket subscriptions │
│ 2. Detect block hash mismatches │
│ 3. Identify reorg depth │
│ 4. Invalidate affected cache entries │
│ 5. Update safe head (tip - safety_depth) │
└──────────────────────────────────────────────────────────┘

Safety Depth
Safety Depth is the number of blocks from the tip that are considered "safe" from reorganization.
[cache.manager_config.reorg_manager]
safety_depth = 12 # Blocks from tip considered "safe" (~2.5 min on mainnet)

Chain State:
Latest Block: 18500000
Safety Depth: 12
Safe Head: 18499988 (latest - safety_depth)

Cache Behavior:
Safe Blocks (≤ 18499988): Aggressively cached, unlikely to reorg
Unsafe Blocks (> 18499988): Cached but monitored for reorg
Reorg Detected: Invalidate blocks from reorg point to tip
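The safe-head rule is just an offset from the chain tip. A minimal sketch of the classification described above (function names are invented here for illustration):

```python
def safe_head(latest_block, safety_depth=12):
    """Blocks at or below the safe head are considered reorg-safe."""
    return latest_block - safety_depth

def classify(block_number, latest_block, safety_depth=12):
    """Classify a block for caching purposes, per the rules above."""
    if block_number <= safe_head(latest_block, safety_depth):
        return "safe"    # aggressively cached, unlikely to reorg
    return "unsafe"      # cached, but monitored for reorgs
```

With the chain state above (latest 18500000, safety depth 12), the safe head is 18499988, so block 18499988 is "safe" and 18499989 is "unsafe".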
Reorg Handling Flow
1. WebSocket receives new block 18500001
├─ Expected parent: 18500000 (hash: 0xabc...)
│
2. Compare with cached block 18500000
├─ Hash mismatch! (cached: 0xdef...)
│
3. Reorg detected!
├─ Invalidate cache from block 18499999 to 18500000
├─ Clear logs, transactions, receipts in affected range
│
4. Update safe head
├─ New safe head: 18500001 - 12 = 18499989
│
5. Continue normal operation

Configuration
[cache.manager_config.reorg_manager]
safety_depth = 12 # Blocks from tip considered safe (~2.5 min on mainnet)
max_reorg_depth = 100 # Maximum depth to search for divergence
coalesce_window_ms = 100 # Milliseconds to batch reorg events

Monitoring Reorgs
Reorgs are tracked in Prometheus metrics:
# Number of reorgs detected
rpc_reorgs_detected_total 5
# Reorg depth histogram
rpc_reorg_depth_bucket{le="2"} 3
rpc_reorg_depth_bucket{le="5"} 4
rpc_reorg_depth_bucket{le="10"} 5
# Last reorg block
rpc_last_reorg_block 18499995

Performance Tuning
Memory vs. Performance
The fundamental tradeoff:
More memory = Higher cache hit rate = Lower latency
Less memory = Lower cache hit rate = More upstream calls
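The three per-cache memory formulas given earlier can be combined into a rough estimator. The average sizes below are placeholder assumptions for illustration, not measured values:

```python
def estimate_cache_memory(
    hot_window_size, max_headers, max_bodies,
    max_exact_results, max_bitmap_entries,
    max_transactions, max_receipts,
    avg_block_size=100_000, header_size=600, body_size=100_000,
    avg_log_size=300, avg_tx_size=400, avg_receipt_size=800,
):
    """Apply the per-cache memory formulas and return total bytes."""
    block = (hot_window_size * avg_block_size
             + max_headers * header_size
             + max_bodies * body_size)
    log = max_exact_results * avg_log_size + max_bitmap_entries * 8
    tx = max_transactions * avg_tx_size + max_receipts * avg_receipt_size
    return block + log + tx
```

For a feel of the scale: a hot window of 200 blocks at an assumed ~100 KB per block already accounts for roughly 20 MB before headers, logs, or receipts.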
Tuning by Workload
DeFi Applications (Heavy eth_getLogs)
[cache.manager_config.log_cache]
chunk_size = 1000
max_exact_results = 100000 # Lots of log storage
max_bitmap_entries = 500000 # Better range tracking
[cache.manager_config.block_cache]
hot_window_size = 300 # More recent blocks
max_headers = 20000
max_bodies = 5000 # Don't need many full blocks
[cache.manager_config.transaction_cache]
max_transactions = 100000
max_receipts = 100000

Expected Memory: ~3-5GB
Block Explorer (Heavy eth_getBlockByNumber)
[cache.manager_config.block_cache]
hot_window_size = 500 # Very large hot window
max_headers = 50000 # Lots of headers
max_bodies = 50000 # Lots of bodies
[cache.manager_config.log_cache]
max_exact_results = 20000
max_bitmap_entries = 100000
[cache.manager_config.transaction_cache]
max_transactions = 200000 # Many transactions
max_receipts = 200000

Expected Memory: ~6-10GB
Wallet Application (Balanced)
[cache.manager_config.block_cache]
hot_window_size = 200
max_headers = 10000
max_bodies = 10000
[cache.manager_config.log_cache]
max_exact_results = 30000
max_bitmap_entries = 150000
[cache.manager_config.transaction_cache]
max_transactions = 50000
max_receipts = 50000

Expected Memory: ~2-4GB
Cache Hit Rate Optimization
Monitor cache hit rates:
curl -s http://localhost:3030/metrics | grep cache_hit_rate

Target hit rates:
eth_getBlockByNumber: > 90%
eth_getLogs: > 70% (depends on query patterns)
eth_getTransactionReceipt: > 85%
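Hit rate can be derived from the Prometheus counters. A small sketch that parses the text exposition format, assuming the counter names `rpc_cache_hits_total` and `rpc_requests_total` used elsewhere on this page:

```python
def hit_rate(metrics_text, hits_name="rpc_cache_hits_total",
             requests_name="rpc_requests_total"):
    """Parse Prometheus text output and return hits/requests, or None."""
    values = {}
    for line in metrics_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                     # skip comments and blank lines
        parts = line.split()
        if len(parts) == 2:
            values[parts[0]] = float(parts[1])
    requests = values.get(requests_name, 0.0)
    if requests == 0:
        return None                      # no traffic yet, rate undefined
    return values.get(hits_name, 0.0) / requests
```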
If hit rate is low:
Increase cache sizes
Increase retain_blocks
Check query patterns (random vs. sequential)
Memory Monitoring
# Check cache memory usage
curl -s http://localhost:3030/metrics | grep cache_bytes
# Example output:
# rpc_block_cache_bytes 25165824 # 25MB
# rpc_log_cache_bytes 104857600 # 100MB
# rpc_transaction_cache_bytes 52428800 # 50MB

Cleanup Interval Tuning
[cache.manager_config]
enable_auto_cleanup = true
cleanup_interval_seconds = 300 # 5 minutes
# Faster cleanup (higher CPU, fresher cache):
cleanup_interval_seconds = 60 # 1 minute
# Slower cleanup (lower CPU, stale data lingers):
cleanup_interval_seconds = 600 # 10 minutes

Best Practices
1. Size Caches for Your Working Set
Working Set: The range of blocks your application typically queries.
Example for DeFi app querying last 7 days:
7 days × 7200 blocks/day = ~50,000 blocks
Set retain_blocks = 50000
Size caches to hold ~50,000 blocks worth of data
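The working-set arithmetic above can be written as a one-liner; mainnet's ~12 s block time gives 86400 / 12 = 7200 blocks per day (a trivial sketch, with the function name invented here):

```python
def working_set_blocks(days, blocks_per_day=7200):
    """Blocks produced over `days` at mainnet's ~12 s block time."""
    return days * blocks_per_day
```

`working_set_blocks(7)` gives 50400, hence the ~50,000-block sizing for a 7-day DeFi working set.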
2. Use Appropriate Safety Depth
Default (balanced):
safety_depth = 12 # ~2.5 minutes on mainnet (default)

Conservative (more reorg protection):
safety_depth = 32 # ~6.5 minutes on mainnet

Very Conservative (wait for finality):
safety_depth = 64 # ~13 minutes, near finality

3. Monitor and Adjust
Weekly review:
Check cache hit rates
Review memory usage
Analyze query patterns
Adjust sizes accordingly
Metrics to watch:
rpc_cache_hits_total / rpc_requests_total = Hit rate
rpc_cache_evictions_total = How often evictions occur
rpc_partial_cache_fulfillments_total = Partial hits

4. Leverage Partial Caching
Design queries to benefit from partial caching:
Good (sequential ranges):
// Query 1: blocks 100-200
// Query 2: blocks 150-250 (50% cached!)
// Query 3: blocks 200-300 (50% cached!)

Bad (random ranges):
// Query 1: blocks 100-200
// Query 2: blocks 500-600 (0% cached)
// Query 3: blocks 900-1000 (0% cached)

5. WebSocket for Real-Time Updates
Enable WebSocket on upstreams for real-time cache updates:
[[upstreams.providers]]
https_url = "https://..."
wss_url = "wss://..." # Enable WebSocket!

Benefits:
Immediate reorg detection
Faster cache invalidation
Real-time chain tip tracking
Troubleshooting
Low Cache Hit Rates
Symptoms: X-Cache-Status: MISS on most requests
Causes:
Cache sizes too small
Query patterns don't match cache retention
Rapid block progression (tip moves too fast)
Solutions:
# Increase cache sizes
max_exact_results = 100000 # Double it
max_headers = 20000
# Increase retention
retain_blocks = 2000 # Keep more blocks

High Memory Usage
Symptoms: Process using more RAM than expected
Causes:
Cache sizes too large
Many unique addresses in log queries
Bitmap explosion
Solutions:
# Reduce cache sizes
max_exact_results = 5000 # Half it
max_bitmap_entries = 50000
# More aggressive cleanup
cleanup_interval_seconds = 60

Frequent Cache Evictions
Symptoms: rpc_cache_evictions_total growing rapidly
Causes:
Working set larger than cache capacity
Random access patterns
Solutions:
# Increase cache capacity
max_exact_results = 50000
retain_blocks = 2000
# Or: accept lower hit rate

Next: Learn about Routing Strategies or Monitoring.