Configuration Reference
Complete reference for all Prism configuration options.
Configuration File
Prism uses TOML configuration files. The configuration file location can be specified via the PRISM_CONFIG environment variable.
# Use custom config file
export PRISM_CONFIG=/path/to/config.toml
cargo make run-server

If PRISM_CONFIG is not set, Prism looks for config/config.toml in the current directory.
Configuration Loading Priority
1. TOML file values (from PRISM_CONFIG or the default path)
2. Environment variables with the PRISM__ prefix (override TOML values; see the example below)
3. Built-in defaults (used if not specified)
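For example, when the same field is set in both places, the environment variable wins (values here are arbitrary):

# config/config.toml contains: bind_port = 3030
export PRISM__SERVER__BIND_PORT=8080
# Effective value: 8080 (the environment variable overrides the TOML file)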
Environment Variable Overrides
Use the PRISM__ prefix with __ as the separator for nested fields:
# Override server port
export PRISM__SERVER__BIND_PORT=8080
# Override cache settings
export PRISM__CACHE__ENABLED=true
export PRISM__CACHE__CACHE_TTL_SECONDS=600
# Override logging
export PRISM__LOGGING__LEVEL=debug

Server Configuration
HTTP server settings for binding address, port, and request handling.
Configuration Block
[server]
bind_address = "127.0.0.1"
bind_port = 3030
max_concurrent_requests = 100
request_timeout_seconds = 30

Options
bind_address (string, default "127.0.0.1"): IP address to bind the server. Use "0.0.0.0" to listen on all interfaces.
bind_port (integer, default 3030): Port number for the HTTP server. Must be > 0.
max_concurrent_requests (integer, default 100): Maximum number of concurrent RPC requests. Limits memory use and prevents overload.
request_timeout_seconds (integer, default 30): Global request timeout in seconds. Requests exceeding it are cancelled.
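All of these can be overridden per environment without editing the file. PRISM__SERVER__BIND_PORT is shown earlier on this page; the other variable names below are inferred from the same PRISM__SECTION__FIELD pattern and worth verifying against your build:

export PRISM__SERVER__BIND_ADDRESS=0.0.0.0
export PRISM__SERVER__MAX_CONCURRENT_REQUESTS=500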
Examples
Public Server (listen on all interfaces):
[server]
bind_address = "0.0.0.0" # WARNING: Ensure firewall is configured!
bind_port = 80
max_concurrent_requests = 1000

High-Throughput Configuration:
[server]
bind_address = "127.0.0.1"
bind_port = 3030
max_concurrent_requests = 2000 # Handle more concurrent requests
request_timeout_seconds = 60 # Longer timeout for complex queries

Development Configuration:
[server]
bind_address = "127.0.0.1"
bind_port = 3030
max_concurrent_requests = 50
request_timeout_seconds = 15

Upstream Providers
Configuration for upstream Ethereum RPC providers.
Configuration Block
[upstreams]
[[upstreams.providers]]
name = "provider-name"
chain_id = 1
https_url = "https://eth-mainnet.provider.com"
wss_url = "wss://eth-mainnet.provider.com" # Optional
weight = 1
timeout_seconds = 30
circuit_breaker_threshold = 2
circuit_breaker_timeout_seconds = 1

Options
name (string, required): Human-readable identifier used in metrics and logs.
chain_id (integer, required): Ethereum chain ID (1 for mainnet, 11155111 for Sepolia, etc.).
https_url (string, required): HTTP(S) endpoint URL for JSON-RPC requests.
wss_url (string, optional): WebSocket URL for subscriptions (e.g., newHeads).
weight (integer, optional, default 1): Load balancing weight. Higher weight = more traffic.
timeout_seconds (integer, optional, default 30): Request timeout for this provider.
circuit_breaker_threshold (integer, optional, default 2): Consecutive failures before the circuit opens.
circuit_breaker_timeout_seconds (integer, optional, default 1): Seconds to wait before retrying after the circuit opens.
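As an illustration of how the two circuit breaker settings combine: with circuit_breaker_threshold = 3 and circuit_breaker_timeout_seconds = 30, three consecutive failures take the provider out of rotation, and Prism waits 30 seconds before sending it another request. The defaults (2 failures, 1 second) trip quickly and retry quickly, which suits fast failover between several healthy providers.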
Multiple Providers Example
[upstreams]
# Primary provider - higher weight
[[upstreams.providers]]
name = "alchemy-mainnet"
chain_id = 1
https_url = "https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY"
wss_url = "wss://eth-mainnet.g.alchemy.com/v2/YOUR_KEY"
weight = 3
timeout_seconds = 30
circuit_breaker_threshold = 5
circuit_breaker_timeout_seconds = 60
# Backup provider
[[upstreams.providers]]
name = "infura-mainnet"
chain_id = 1
https_url = "https://mainnet.infura.io/v3/YOUR_PROJECT_ID"
wss_url = "wss://mainnet.infura.io/ws/v3/YOUR_PROJECT_ID"
weight = 2
timeout_seconds = 30
circuit_breaker_threshold = 3
circuit_breaker_timeout_seconds = 30
# Fallback public endpoint
[[upstreams.providers]]
name = "publicnode"
chain_id = 1
https_url = "https://ethereum-rpc.publicnode.com"
wss_url = "wss://ethereum-rpc.publicnode.com"
weight = 1
timeout_seconds = 45 # Slower public endpoint
circuit_breaker_threshold = 2
circuit_breaker_timeout_seconds = 10

Provider Selection Behavior
- Weight: Determines relative traffic distribution. A provider with weight 3 receives roughly 3x the traffic of a weight 1 provider (see the worked split below).
- Circuit Breaker: Automatically isolates failing providers to prevent cascade failures.
- WebSocket: Required for real-time chain tip updates and reorg detection.
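For the three-provider example above (weights 3, 2, and 1), the total weight is 6, so the expected split is 3/6 = 50% of requests to alchemy-mainnet, 2/6 ≈ 33% to infura-mainnet, and 1/6 ≈ 17% to publicnode.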
Cache Configuration
Advanced caching system with multi-layer architecture.
Basic Cache Settings
[cache]
enabled = true
cache_ttl_seconds = 300

enabled (boolean, default true): Master switch for caching. Set to false to disable all caching.
cache_ttl_seconds (integer, default 300): Default TTL for cache entries (currently not actively used).
Cache Manager Configuration
[cache.manager_config]
retain_blocks = 1000
enable_auto_cleanup = true
cleanup_interval_seconds = 300

retain_blocks (integer, default 1000): Number of recent blocks to retain in cache.
enable_auto_cleanup (boolean, default true): Enable automatic background cleanup of old cache entries.
cleanup_interval_seconds (integer, default 300): Interval for background cleanup tasks (5 minutes).
Log Cache Configuration
Event log caching with partial-range support.
[cache.manager_config.log_cache]
chunk_size = 1000
max_exact_results = 10000
max_bitmap_entries = 100000
safety_depth = 12

chunk_size (integer, default 1000): Number of blocks per chunk for log storage (see the chunking example below).
max_exact_results (integer, default 10000): Maximum number of individual log records to cache.
max_bitmap_entries (integer, default 100000): Maximum bitmap entries for efficient range tracking.
safety_depth (integer, default 12): Blocks from tip to consider "safe" (reorg protection).
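To make chunk_size concrete, here is a worked example, assuming chunks are aligned to multiples of chunk_size (the exact alignment is an implementation detail): with chunk_size = 1000, an eth_getLogs query for blocks 1,000,500 through 1,002,300 spans three chunks ([1,000,000..1,000,999], [1,001,000..1,001,999], and [1,002,000..1,002,999]). Fully cached chunks are served locally, and the partial-range support means a partially covered chunk only needs the missing remainder fetched from an upstream.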
Memory Impact:
- max_exact_results = 10000 ≈ 30-60MB (depending on log size)
- max_bitmap_entries = 100000 ≈ 10-20MB
Block Cache Configuration
Block header and body caching with hot window optimization.
[cache.manager_config.block_cache]
hot_window_size = 200
max_headers = 10000
max_bodies = 10000
safety_depth = 12

hot_window_size (integer, default 200): Number of recent blocks kept in an O(1) hot window.
max_headers (integer, default 10000): Maximum block headers to cache.
max_bodies (integer, default 10000): Maximum block bodies to cache.
safety_depth (integer, default 12): Blocks from tip to consider "safe".
Memory Impact:
- max_headers = 10000 ≈ 5-10MB
- max_bodies = 10000 ≈ 10-30MB (varies with transaction count)
- hot_window_size = 200 ≈ 1-2MB
Transaction Cache Configuration
Transaction and receipt caching.
[cache.manager_config.transaction_cache]
max_transactions = 50000
max_receipts = 50000
safety_depth = 12

max_transactions (integer, default 50000): Maximum transactions to cache.
max_receipts (integer, default 50000): Maximum receipts to cache.
safety_depth (integer, default 12): Blocks from tip to consider "safe".
Memory Impact:
- max_transactions = 50000 ≈ 20-40MB
- max_receipts = 50000 ≈ 30-50MB
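Summing the per-cache estimates above, the default configuration works out to roughly 105-210MB of cache memory in total: 40-80MB for logs, 16-42MB for blocks, and 50-90MB for transactions and receipts. This is a useful sanity check when sizing containers.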
Reorg Manager Configuration
Chain reorganization detection and cache invalidation.
[cache.manager_config.reorg_manager]
safety_depth = 12
max_reorg_depth = 100
coalesce_window_ms = 100

safety_depth (integer, default 12): Blocks from tip considered safe from reorgs (~2.5 minutes on mainnet).
max_reorg_depth (integer, default 100): Maximum depth to search for the divergence point during a reorg.
coalesce_window_ms (integer, default 100): Milliseconds to batch reorg events (prevents cache thrashing).
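The mainnet figure is easy to rederive: at ~12 seconds per block, safety_depth = 12 covers 12 × 12s = 144s, about 2.4 minutes. A chain with faster blocks needs a proportionally deeper setting for the same wall-clock protection; for example (hypothetical values for a chain with ~2s blocks):

[cache.manager_config.reorg_manager]
safety_depth = 75 # ~150s of protection at 2s per block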
Cache Tuning Examples
Low Memory Configuration (< 1GB):
[cache.manager_config]
retain_blocks = 500
[cache.manager_config.log_cache]
max_exact_results = 5000
max_bitmap_entries = 50000
[cache.manager_config.block_cache]
hot_window_size = 100
max_headers = 5000
max_bodies = 5000
[cache.manager_config.transaction_cache]
max_transactions = 10000
max_receipts = 10000

High Memory Configuration (8GB+):
[cache.manager_config]
retain_blocks = 5000
[cache.manager_config.log_cache]
max_exact_results = 100000
max_bitmap_entries = 500000
[cache.manager_config.block_cache]
hot_window_size = 500
max_headers = 50000
max_bodies = 50000
[cache.manager_config.transaction_cache]
max_transactions = 200000
max_receipts = 200000

DeFi Application (high eth_getLogs traffic):
[cache.manager_config]
retain_blocks = 2000
[cache.manager_config.log_cache]
chunk_size = 1000
max_exact_results = 50000 # More log storage
max_bitmap_entries = 200000 # Better range tracking
[cache.manager_config.block_cache]
hot_window_size = 300
max_headers = 20000
max_bodies = 5000 # Don't need many full blocks
[cache.manager_config.transaction_cache]
max_transactions = 100000
max_receipts = 100000

Authentication Configuration
API key authentication and quota management.
Configuration Block
[auth]
enabled = true
database_url = "sqlite://db/auth.db"enabled
boolean
false
Enable API key authentication
database_url
string
"sqlite://./db/auth.db"
SQLite database URL for API keys
Creating API Keys
Use the CLI to manage API keys:
# Create a new key
cargo run --bin cli -- auth create \
--name "production-api" \
--description "Main production service" \
--rate-limit 100 \
--refill-rate 10 \
--daily-limit 100000 \
--expires-in-days 365 \
--methods "eth_getLogs,eth_getBlockByNumber"
# List all keys
cargo run --bin cli -- auth list
# Revoke a key
cargo run --bin cli -- auth revoke --name "production-api"

Using API Keys
Include the API key in requests:
Header (recommended):
curl -X POST http://localhost:3030/ \
-H "Content-Type: application/json" \
-H "X-API-Key: your-api-key-here" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'Query parameter:
curl -X POST "http://localhost:3030/?api_key=your-api-key-here" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'Metrics and Logging
Observability configuration for Prometheus metrics and structured logging.
Metrics Configuration
[metrics]
enabled = true
prometheus_port = 9090

enabled (boolean, default true): Enable Prometheus metrics collection.
prometheus_port (integer, default 9090): Port for the /metrics endpoint (optional).
Access metrics at: http://localhost:3030/metrics
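A quick way to verify metrics are being served (assuming the default bind address and port):

curl http://localhost:3030/metrics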
Logging Configuration
[logging]
level = "info"
format = "pretty"level
string
"info"
Log level: trace, debug, info, warn, error
format
string
"pretty"
Output format: pretty (human-readable) or json (structured)
Logging Examples
Production (JSON logs for aggregation):
[logging]
level = "info"
format = "json"Development (readable logs):
[logging]
level = "debug"
format = "pretty"Troubleshooting (verbose):
[logging]
level = "trace"
format = "pretty"Health Check Configuration
[health_check]
interval_seconds = 60

interval_seconds (integer, default 60): Interval between upstream health checks (using eth_blockNumber).
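Worth noting when tuning: with interval_seconds = 60, an upstream that dies quietly may go unflagged by health checks for up to a minute, although the circuit breaker still reacts to failures on live traffic in the meantime. The production example later on this page uses 30 for faster detection, at the cost of twice as many health check calls per upstream.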
Advanced Features
Advanced routing and optimization features.
Consensus Validation
Multi-upstream validation for critical methods.
[consensus]
enabled = true
max_count = 3
min_count = 2
timeout_seconds = 10
dispute_behavior = "PreferBlockHeadLeader"
methods = ["eth_getBlockByNumber", "eth_getBlockByHash", "eth_getLogs"]enabled
boolean
false
Enable consensus validation
max_count
integer
3
Maximum upstreams to query
min_count
integer
2
Minimum upstreams for consensus
timeout_seconds
integer
10
Timeout for consensus requests
dispute_behavior
string
"PreferBlockHeadLeader"
How to resolve disputes
methods
array
[eth_getBlockByNumber, ...]
Methods requiring consensus
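Reading the example values together: a request for one of the listed methods fans out to at most max_count = 3 upstreams, and at least min_count = 2 of them must return matching responses within timeout_seconds = 10. When responses disagree, dispute_behavior picks the winner; the name "PreferBlockHeadLeader" suggests favoring the upstream whose chain head is furthest ahead.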
See Routing Strategies for details.
Hedging (Tail Latency Reduction)
Parallel request execution to reduce P99 latency.
[hedging]
enabled = true
latency_quantile = 0.95
min_delay_ms = 50
max_delay_ms = 2000
max_parallel = 2

enabled (boolean, default false): Enable request hedging.
latency_quantile (float, default 0.95): Latency percentile used as the hedge trigger.
min_delay_ms (integer, default 50): Minimum delay before hedging.
max_delay_ms (integer, default 2000): Maximum delay before hedging.
max_parallel (integer, default 2): Maximum parallel requests.
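One plausible reading of how these settings interact: Prism tracks the latency_quantile (here P95) latency per upstream and, if the first request has not completed by that point, sends a hedged duplicate. The trigger delay is clamped to the [min_delay_ms, max_delay_ms] range, so a P95 of 20ms still waits the 50ms floor, and a P95 of 5s is capped at the 2000ms ceiling. max_parallel = 2 limits each logical request to the original attempt plus one hedge.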
See Routing Strategies for details.
Scoring Engine
Multi-factor upstream ranking and selection.
[scoring]
enabled = true
window_seconds = 1800
min_samples = 10
max_block_lag = 5
top_n = 3
[scoring.weights]
latency = 8.0
error_rate = 4.0
throttle_rate = 3.0
block_head_lag = 2.0
total_requests = 1.0

enabled (boolean, default false): Enable scoring-based selection.
window_seconds (integer, default 1800): Metric collection window (30 minutes).
min_samples (integer, default 10): Minimum samples before scoring kicks in.
max_block_lag (integer, default 5): Maximum block lag before a penalty applies.
top_n (integer, default 3): Number of top-ranked upstreams used for selection.
weights.latency (float, default 8.0): Latency factor weight.
weights.error_rate (float, default 4.0): Error rate factor weight.
weights.throttle_rate (float, default 3.0): Throttle factor weight.
weights.block_head_lag (float, default 2.0): Block lag factor weight.
weights.total_requests (float, default 1.0): Request count factor weight.
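The weights express relative influence rather than absolute units: with the defaults, latency (8.0) counts twice as much as error rate (4.0) and eight times as much as raw request volume (1.0), so a fast upstream with a slight error rate can still outrank a slow but flawless one. Assuming the factors are normalized before the weighted combination, tuning is mostly a matter of adjusting these ratios rather than the absolute numbers.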
See Routing Strategies for details.
Example Configurations
Development
environment = "development"
[server]
bind_address = "127.0.0.1"
bind_port = 3030
max_concurrent_requests = 50
[[upstreams.providers]]
name = "local-geth"
chain_id = 1337
https_url = "http://localhost:8545"
weight = 1
[cache]
enabled = true
[auth]
enabled = false
[logging]
level = "debug"
format = "pretty"Production
environment = "production"
[server]
bind_address = "0.0.0.0"
bind_port = 3030
max_concurrent_requests = 1000
request_timeout_seconds = 60
[[upstreams.providers]]
name = "alchemy-primary"
chain_id = 1
https_url = "https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY"
wss_url = "wss://eth-mainnet.g.alchemy.com/v2/YOUR_KEY"
weight = 3
circuit_breaker_threshold = 5
[[upstreams.providers]]
name = "infura-backup"
chain_id = 1
https_url = "https://mainnet.infura.io/v3/YOUR_ID"
wss_url = "wss://mainnet.infura.io/ws/v3/YOUR_ID"
weight = 2
circuit_breaker_threshold = 3
[cache]
enabled = true
[cache.manager_config]
retain_blocks = 2000
[cache.manager_config.log_cache]
max_exact_results = 50000
max_bitmap_entries = 200000
[cache.manager_config.block_cache]
hot_window_size = 300
max_headers = 20000
max_bodies = 20000
[auth]
enabled = true
database_url = "sqlite://db/auth.db"
[metrics]
enabled = true
[logging]
level = "info"
format = "json"
[health_check]
interval_seconds = 30

High-Performance DeFi
environment = "production"
[server]
bind_address = "0.0.0.0"
bind_port = 3030
max_concurrent_requests = 2000
[[upstreams.providers]]
name = "alchemy-1"
chain_id = 1
https_url = "https://eth-mainnet.g.alchemy.com/v2/KEY1"
wss_url = "wss://eth-mainnet.g.alchemy.com/v2/KEY1"
weight = 2
[[upstreams.providers]]
name = "alchemy-2"
chain_id = 1
https_url = "https://eth-mainnet.g.alchemy.com/v2/KEY2"
wss_url = "wss://eth-mainnet.g.alchemy.com/v2/KEY2"
weight = 2
[[upstreams.providers]]
name = "quicknode"
chain_id = 1
https_url = "https://your-endpoint.quiknode.pro/YOUR_KEY"
wss_url = "wss://your-endpoint.quiknode.pro/YOUR_KEY"
weight = 1
[cache]
enabled = true
[cache.manager_config]
retain_blocks = 5000
[cache.manager_config.log_cache]
chunk_size = 1000
max_exact_results = 100000
max_bitmap_entries = 500000
[cache.manager_config.block_cache]
hot_window_size = 500
max_headers = 50000
max_bodies = 10000 # DeFi apps don't need many full blocks
[cache.manager_config.transaction_cache]
max_transactions = 200000
max_receipts = 200000
[scoring]
enabled = true
window_seconds = 1800
[hedging]
enabled = true
min_delay_ms = 50
max_parallel = 2
[auth]
enabled = true
[logging]
level = "info"
format = "json"Configuration Validation
Always validate configuration before deploying:
# Validate config file
cargo make cli-config-validate
# Show resolved configuration (with environment overrides)
cargo make cli-config-show
# Test upstream connectivity
cargo make cli-test-upstreams

Environment-Specific Configs
Organize configs by environment:
config/
├── config.toml # Default config
├── development.toml # Development settings
├── staging.toml # Staging environment
└── production.toml # Production settings

Load a specific config:
export PRISM_CONFIG=config/production.toml
cargo make run-server-release

Next: Learn about the Caching System or explore the API Reference.