Caching

Deep dive into Prism's multi-layer caching system and how to optimize it for your workload.

Cache Architecture

Prism implements a sophisticated three-tier caching system:

Cache Layers

┌─────────────────────────────────────────────────────────────┐
│                      CacheManager                            │
│  (Orchestrates all caches, tracks chain state)              │
└───────────────┬─────────────┬──────────────┬────────────────┘
                │             │              │
        ┌───────▼────┐  ┌────▼────┐   ┌─────▼──────┐
        │ BlockCache │  │LogCache │   │TransactionC│
        │            │  │         │   │ache        │
        └────────────┘  └─────────┘   └────────────┘

1. Block Cache

Caches block headers and bodies with O(1) access for recent blocks.

Features:

  • Hot Window: Ring buffer for recent N blocks (default: 200)

  • LRU Cache: Least-recently-used cache for older blocks

  • Separate Storage: Headers and bodies cached independently

Use Cases:

  • eth_getBlockByNumber

  • eth_getBlockByHash

Memory Formula:
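The exact formula isn't reproduced here; as a rough sketch, total usage scales with the number of cached entries times average entry size (the per-entry byte counts below are illustrative assumptions, not measured Prism values):

```python
# Rough block-cache memory estimate. Per-entry sizes are assumptions.
AVG_HEADER_BYTES = 600       # assumed average encoded header size
AVG_BODY_BYTES = 100_000     # assumed average block body size

def block_cache_bytes(hot_window: int, lru_entries: int) -> int:
    """Hot-window ring buffer plus LRU; headers and bodies cached separately."""
    entries = hot_window + lru_entries
    return entries * (AVG_HEADER_BYTES + AVG_BODY_BYTES)

# e.g. the default 200-block hot window plus a 10,000-entry LRU
# comes to roughly 1 GB under these assumptions.
```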

2. Log Cache

Advanced caching with partial-range fulfillment using roaring bitmaps.

Features:

  • Bitmap Indexing: Track which blocks have been queried

  • Partial Fulfillment: Serve cached logs + fetch missing ranges

  • Deduplicated Storage: Store each log only once

  • Range Tracking: Know which ranges are fully cached

Use Cases:

  • eth_getLogs queries with block ranges

Memory Formula:
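The original formula is missing here; a rough sketch, with assumed per-log and per-block costs, is log storage plus bitmap overhead:

```python
# Rough log-cache memory estimate: deduplicated log storage plus the
# roaring bitmaps tracking which blocks have been queried.
# All sizes are illustrative assumptions.
AVG_LOG_BYTES = 500           # assumed average size of one stored log
BITMAP_BYTES_PER_BLOCK = 2    # roaring bitmaps are compact for dense ranges

def log_cache_bytes(cached_logs: int, tracked_blocks: int) -> int:
    return cached_logs * AVG_LOG_BYTES + tracked_blocks * BITMAP_BYTES_PER_BLOCK
```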

3. Transaction Cache

Caches transactions and receipts with automatic log resolution.

Features:

  • Transaction Storage: Raw transaction data

  • Receipt Storage: Transaction receipts

  • Log Resolution: Receipts reference logs from log cache

Use Cases:

  • eth_getTransactionByHash

  • eth_getTransactionReceipt

Memory Formula:
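Again the formula itself isn't shown; a rough sketch under assumed entry sizes (note that receipts reference logs held in the log cache, so log bodies are not double-counted):

```python
# Rough transaction-cache memory estimate. Sizes are assumptions.
AVG_TX_BYTES = 400        # assumed average raw transaction size
AVG_RECEIPT_BYTES = 300   # assumed receipt size, excluding log bodies

def tx_cache_bytes(cached_txs: int, cached_receipts: int) -> int:
    return cached_txs * AVG_TX_BYTES + cached_receipts * AVG_RECEIPT_BYTES
```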


Cache Status

Every response includes an X-Cache-Status header indicating how it was served.

Status Values

| Status                | Meaning                               | Upstream Calls |
| --------------------- | ------------------------------------- | -------------- |
| MISS                  | No cached data, fetched from upstream | 1+             |
| FULL                  | Complete cache hit                    | 0              |
| PARTIAL               | Some cached, some fetched             | 1+             |
| EMPTY                 | Cached empty result                   | 0              |
| PARTIAL_WITH_FAILURES | Partial cache + some failures         | 1+             |

Cache Status Flow
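The flow diagram isn't reproduced here, but the decision logic implied by the table above can be sketched as follows (function and parameter names are assumptions, not Prism internals):

```python
def cache_status(cached_blocks: int, fetched_blocks: int,
                 failed_fetches: int = 0, cached_empty: bool = False) -> str:
    """Sketch of how a response's X-Cache-Status could be derived."""
    if cached_empty and fetched_blocks == 0:
        return "EMPTY"                   # cached empty result, 0 upstream calls
    if cached_blocks and fetched_blocks and failed_fetches:
        return "PARTIAL_WITH_FAILURES"   # partial cache + some failures
    if cached_blocks and fetched_blocks:
        return "PARTIAL"                 # some cached, some fetched
    if cached_blocks:
        return "FULL"                    # complete cache hit, 0 upstream calls
    return "MISS"                        # nothing cached, all fetched
```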

Example: Observing Cache Status
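The original example isn't preserved here; one way to observe the header from a client, sketched with the standard library (the endpoint URL is an assumption):

```python
# Hypothetical client sketch: POST a JSON-RPC request and read the
# X-Cache-Status response header.
import json
import urllib.request

def build_request(method, params):
    """Encode a JSON-RPC 2.0 request body."""
    return json.dumps({"jsonrpc": "2.0", "id": 1,
                       "method": method, "params": params}).encode()

def call_with_status(url, method, params):
    """Return (decoded JSON response, value of the X-Cache-Status header)."""
    req = urllib.request.Request(url, data=build_request(method, params),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read()), resp.headers.get("X-Cache-Status")

# Usage (needs a running Prism instance; the address is an assumption):
# result, status = call_with_status("http://localhost:8080",
#                                   "eth_getBlockByNumber", ["latest", False])
```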


Partial Cache Fulfillment

The killer feature for eth_getLogs: intelligently combine cached and fresh data.

How It Works

  1. Request arrives: eth_getLogs for blocks 1000-1200

  2. Cache check: Blocks 1000-1100 are cached, 1101-1200 are not

  3. Retrieve cached: Get logs for 1000-1100 from cache

  4. Fetch missing: Request 1101-1200 from upstream

  5. Merge results: Combine cached + fetched logs

  6. Cache new data: Store 1101-1200 for future use

  7. Return: Response with X-Cache-Status: PARTIAL
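The steps above can be sketched as follows (a simplified model, not Prism's implementation: the cache is a plain block-number-to-logs map, and missing blocks are fetched one at a time rather than as contiguous ranges):

```python
def get_logs_partial(start: int, end: int, cache: dict, upstream):
    """Serve a log query for [start, end] by merging cached and fetched blocks.

    `cache` maps block number -> list of logs; `upstream` is a fetch callback.
    """
    cached = {n: cache[n] for n in range(start, end + 1) if n in cache}
    missing = [n for n in range(start, end + 1) if n not in cache]
    fetched = {}
    for n in missing:                  # in practice, fetched as contiguous ranges
        fetched[n] = upstream(n)
        cache[n] = fetched[n]          # step 6: store new data for future use
    merged = {**cached, **fetched}
    logs = [log for n in range(start, end + 1) for log in merged[n]]
    status = "FULL" if not missing else ("MISS" if not cached else "PARTIAL")
    return logs, status
```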

Example

First call: eth_getLogs for blocks 16000000-16001000 → X-Cache-Status: MISS (nothing cached)

Second call: eth_getLogs for the slightly extended range 16000000-16002000

Result: X-Cache-Status: PARTIAL

  • Cached: blocks 16000000-16001000

  • Fetched: blocks 16001001-16002000

  • Merged and returned

Performance Impact

Without Partial Caching:

  • Cache miss for range 16000000-16002000

  • Fetch all 2000 blocks from upstream

  • ~500ms latency

With Partial Caching:

  • Retrieve 1000 blocks from cache (~5ms)

  • Fetch 1000 blocks from upstream (~250ms)

  • Total: ~255ms (50% faster)


Cache Invalidation

Prism handles chain reorganizations automatically.

Reorg Detection

The ReorgManager continuously monitors the chain for reorganizations:
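The monitoring code itself isn't reproduced here; the core idea, sketched under assumptions (walk back from the tip comparing stored hashes against the chain's current hashes until they agree again), looks like:

```python
def detect_reorg(stored_hashes: dict, latest_block: int, fetch_hash):
    """Return the lowest reorged height, or None if all stored hashes match.

    `stored_hashes` maps height -> hash recorded when the block was cached;
    `fetch_hash` stands in for an upstream eth_getBlockByNumber lookup.
    """
    reorg_point = None
    for height in sorted(stored_hashes, reverse=True):
        if height > latest_block:
            continue
        if fetch_hash(height) != stored_hashes[height]:
            reorg_point = height       # mismatch: this block was replaced
        else:
            break                      # hashes agree again; stop walking back
    return reorg_point
```

On detection, everything from `reorg_point` to the tip would be invalidated, as described below.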

Safety Depth

Safety Depth is the number of blocks from the tip that are considered "safe" from reorganization.

Chain State (example): with the chain tip at block 18500000 and a safety depth of 12, blocks up to 18499988 are considered safe.

Cache Behavior:

  • Safe Blocks (≤ 18499988): Aggressively cached, unlikely to reorg

  • Unsafe Blocks (> 18499988): Cached but monitored for reorg

  • Reorg Detected: Invalidate blocks from reorg point to tip
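The safe/unsafe split above reduces to a one-line check (the default depth of 12 is an assumption consistent with the cutoff in this example):

```python
def is_safe(block_number: int, tip: int, safety_depth: int = 12) -> bool:
    """A block is 'safe' once it is at least safety_depth blocks behind the tip.

    With tip 18500000 and depth 12, blocks <= 18499988 are safe.
    """
    return block_number <= tip - safety_depth
```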

Reorg Handling Flow

Configuration
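The configuration snippet is not preserved here; the fragment below is a hypothetical sketch of what reorg-related settings could look like (key names are assumptions, except `retain_blocks`, which appears elsewhere in this guide):

```toml
# Hypothetical reorg/cache settings; key names are illustrative.
[cache]
retain_blocks = 50000     # how many recent blocks to keep cached

[reorg]
safety_depth = 12         # blocks behind the tip considered safe
poll_interval = "2s"      # how often the ReorgManager re-checks the tip
```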

Monitoring Reorgs

Reorgs are tracked in Prometheus metrics:


Performance Tuning

Memory vs. Performance

The fundamental tradeoff:

  • More memory = Higher cache hit rate = Lower latency

  • Less memory = Lower cache hit rate = More upstream calls

Tuning by Workload

DeFi Applications (Heavy eth_getLogs)

Expected Memory: ~3-5GB

Block Explorer (Heavy eth_getBlockByNumber)

Expected Memory: ~6-10GB

Wallet Application (Balanced)

Expected Memory: ~2-4GB
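The per-workload config snippets aren't preserved here; as one illustrative sketch (all key names assumed) of how emphasis might shift between profiles:

```toml
# Illustrative only: which cache to grow per workload.
# DeFi (heavy eth_getLogs):        grow [cache.logs]
# Block explorer (heavy blocks):   grow [cache.blocks]
# Wallet (balanced):               moderate sizes across all three
[cache.logs]
max_entries = 2_000_000

[cache.blocks]
hot_window = 200
lru_entries = 10_000

[cache.transactions]
max_entries = 500_000
```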

Cache Hit Rate Optimization

Monitor cache hit rates:
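The monitoring query itself isn't shown here; the hit rate is simply hits over total requests, computed from whatever counters your metrics endpoint exposes (metric names vary):

```python
def hit_rate(hits: int, misses: int) -> float:
    """Cache hit rate as a fraction of all requests; 0.0 when no traffic."""
    total = hits + misses
    return hits / total if total else 0.0
```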

Target hit rates:

  • eth_getBlockByNumber: > 90%

  • eth_getLogs: > 70% (depends on query patterns)

  • eth_getTransactionReceipt: > 85%

If hit rate is low:

  1. Increase cache sizes

  2. Increase retain_blocks

  3. Check query patterns (random vs. sequential)

Memory Monitoring
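One quick way to spot-check process memory from the host (Prism's own metrics endpoint, if enabled, is the better source; note `resource` is Unix-only and `ru_maxrss` units differ across platforms):

```python
import resource

def max_rss_mib() -> float:
    """Peak resident set size of the current process.

    On Linux ru_maxrss is in KiB (so this returns MiB); on macOS it is bytes.
    """
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024
```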

Cleanup Interval Tuning
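The tuning snippet isn't preserved here; the general tradeoff is that shorter intervals reclaim memory sooner at the cost of more CPU. A hypothetical fragment (key name assumed):

```toml
# Illustrative cleanup setting; key name is an assumption.
[cache]
cleanup_interval = "60s"   # shorter = less peak memory, more CPU
```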


Best Practices

1. Size Caches for Your Working Set

Working Set: The range of blocks your application typically queries.

Example for DeFi app querying last 7 days:

  • 7 days × 7200 blocks/day = ~50,000 blocks

  • Set retain_blocks = 50000

  • Size caches to hold ~50,000 blocks worth of data

2. Use Appropriate Safety Depth

Default (balanced):

Conservative (more reorg protection):

Very Conservative (wait for finality):
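None of the three snippets survive here; one illustrative fragment covers all three presets. The values are assumptions: a default of 12 is consistent with the safe-block cutoff of 18499988 in the example earlier in this page, and 64 roughly matches Ethereum PoS finality (two 32-slot epochs).

```toml
# Illustrative values; key name and presets are assumptions.
[reorg]
safety_depth = 12    # default (balanced)
# safety_depth = 32  # conservative: more reorg protection, later caching
# safety_depth = 64  # very conservative: roughly wait for finality
```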

3. Monitor and Adjust

Weekly review:

  1. Check cache hit rates

  2. Review memory usage

  3. Analyze query patterns

  4. Adjust sizes accordingly

Metrics to watch:

4. Leverage Partial Caching

Design queries to benefit from partial caching:

Good (sequential ranges):

Bad (random ranges):
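The original query examples aren't preserved; the point can be illustrated directly: sequential, overlapping ranges let later calls reuse blocks cached by earlier calls, while scattered disjoint ranges share nothing and miss every time.

```python
def overlaps(a, b):
    """True if two (start, end) block ranges share at least one block."""
    return a[0] <= b[1] and b[0] <= a[1]

good = [(16_000_000, 16_001_000),   # first call warms the cache
        (16_000_000, 16_002_000)]   # extended call: first half from cache

bad = [(12_345_678, 12_345_778),    # disjoint ranges: every call is a MISS
       (15_000_000, 15_000_100)]
```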

5. WebSocket for Real-Time Updates

Enable WebSocket on upstreams for real-time cache updates:
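The config snippet isn't preserved here; a hypothetical upstream entry (key names and URLs are illustrative) might look like:

```toml
# Hypothetical upstream definition; key names and URLs are assumptions.
[[upstreams]]
http_url = "https://eth.example.com"
ws_url = "wss://eth.example.com"   # subscribe to new heads for tip/reorg tracking
```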

Benefits:

  • Immediate reorg detection

  • Faster cache invalidation

  • Real-time chain tip tracking


Troubleshooting

Low Cache Hit Rates

Symptoms: X-Cache-Status: MISS on most requests

Causes:

  1. Cache sizes too small

  2. Query patterns don't match cache retention

  3. Rapid block progression (tip moves too fast)

Solutions:
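The original solutions snippet is missing; the direction, sketched with assumed key names, is to grow the caches and the retained block range:

```toml
# Illustrative fixes for low hit rates; key names are assumptions.
[cache]
retain_blocks = 100_000   # keep more history cached

[cache.blocks]
lru_entries = 50_000      # larger LRU for older blocks
```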

High Memory Usage

Symptoms: Process using more RAM than expected

Causes:

  1. Cache sizes too large

  2. Many unique addresses in log queries

  3. Bitmap explosion

Solutions:
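The original snippet is missing; the direction, sketched with assumed key names, is to cap cache sizes and shrink the tracked range (which also bounds bitmap growth):

```toml
# Illustrative fixes for high memory use; key names are assumptions.
[cache.logs]
max_entries = 500_000     # bound deduplicated log storage

[cache]
retain_blocks = 20_000    # smaller tracked range = smaller bitmaps
```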

Frequent Cache Evictions

Symptoms: rpc_cache_evictions_total growing rapidly

Causes:

  1. Working set larger than cache capacity

  2. Random access patterns

Solutions:
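The original snippet is missing; the fix follows from the working-set guidance above: size the cache to cover the range your application actually queries.

```toml
# Illustrative: cover the measured working set; key name is an assumption.
[cache]
retain_blocks = 50_000    # >= your application's typical query window
```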


Next: Learn about Routing Strategies or Monitoring.
