Getting Started with Prism
This guide will help you install, configure, and run Prism, the high-performance Ethereum JSON-RPC aggregator.
Prerequisites
Before installing Prism, ensure you have the following:
Required
Rust Nightly Toolchain: Prism uses nightly Rust features
```bash
# Install rustup if you don't have it
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Install and set nightly as default for this project
rustup install nightly
rustup default nightly
```

cargo-make: Build automation tool

```bash
cargo install cargo-make
```
Optional but Recommended
cargo-nextest: Fast test runner
```bash
cargo install cargo-nextest
```

Docker and Docker Compose: For running the local devnet

```bash
# On Ubuntu/Debian
sudo apt-get install docker.io docker-compose

# On macOS
brew install docker docker-compose
```
System Requirements
CPU: 2+ cores recommended
RAM: Minimum 4GB, 8GB+ recommended for production
Disk: 10GB+ free space (more if caching large amounts of data)
Network: Stable internet connection for upstream RPC providers
Installation
Option 1: Build from Source (Recommended)
Clone the repository
```bash
git clone https://github.com/prismrpc/prism.git
cd prism
```

Development build

```bash
cargo make build
```

This builds all workspace crates with default optimizations.
Production build
```bash
cargo make build-release
```

This creates optimized binaries with full release optimizations. The binaries will be located in:

Server: `target/release/server`
CLI: `target/release/cli`
Option 2: Docker
Prism can also run in Docker (configuration pending - see deployment guide).
First Configuration
Prism uses TOML configuration files. You can specify the config path via the PRISM_CONFIG environment variable or use the default config/config.toml.
Minimal Configuration
Create a file config/config.toml with the following minimal setup:
```toml
[server]
bind_address = "127.0.0.1"
bind_port = 3030
max_concurrent_requests = 100

# Configure at least one upstream RPC provider
[[upstreams.providers]]
name = "primary"
chain_id = 1  # Ethereum Mainnet
https_url = "https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY"
weight = 1
timeout_seconds = 30

# Optional: Add a second provider for redundancy
[[upstreams.providers]]
name = "fallback"
chain_id = 1
https_url = "https://mainnet.infura.io/v3/YOUR_PROJECT_ID"
weight = 1
timeout_seconds = 30

[cache]
enabled = true

[auth]
enabled = false  # Start without authentication for testing
```

Replace YOUR_API_KEY and YOUR_PROJECT_ID with your actual RPC provider credentials.
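The `weight` fields control how traffic is split across providers. As a rough illustration of weighted selection (this is not Prism's actual routing code, and `pick_provider` is a hypothetical name), equal weights mean roughly equal traffic:

```python
import random

# Hypothetical sketch of weight-based provider selection; Prism's real
# routing logic lives in the server and may differ.
def pick_provider(providers: list[dict]) -> dict:
    """Pick one provider at random, proportionally to its `weight`."""
    weights = [p["weight"] for p in providers]
    return random.choices(providers, weights=weights, k=1)[0]

providers = [
    {"name": "primary", "weight": 1},
    {"name": "fallback", "weight": 1},
]
# With equal weights, each provider is chosen about half the time.
chosen = pick_provider(providers)
```

Raising `weight` on one provider biases selection toward it without removing the other from rotation.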
Configuration with Free Public Endpoints
If you don't have API keys, you can start with public endpoints:
```toml
[server]
bind_address = "127.0.0.1"
bind_port = 3030
max_concurrent_requests = 100

[[upstreams.providers]]
name = "publicnode"
chain_id = 1
https_url = "https://ethereum-rpc.publicnode.com"
wss_url = "wss://ethereum-rpc.publicnode.com"
weight = 1
timeout_seconds = 30

[[upstreams.providers]]
name = "mevblocker"
chain_id = 1
https_url = "https://rpc.mevblocker.io/fast"
wss_url = "wss://rpc.mevblocker.io/fast"
weight = 1
timeout_seconds = 30

[cache]
enabled = true

[auth]
enabled = false
```

Note: Public endpoints have rate limits and may not be suitable for production use.
Validate Configuration
Before running Prism, validate your configuration:
```bash
cargo make cli-config-validate
```

This will check:
At least one upstream is configured
URLs are properly formatted
All required fields are present
Numeric values are valid
Running Prism
Start the Server
Using cargo-make (recommended):
```bash
# Development mode with default config
cargo make run-server

# Production mode
cargo make run-server-release

# With custom config file
PRISM_CONFIG=config/my-config.toml cargo make run-server
```

Or directly with cargo:
```bash
# Development build
cargo run --bin server

# Release build
cargo run --release --bin server
```

Verify It's Running
The server should start and display:
```
INFO Prism RPC Aggregator starting
INFO Server binding to 127.0.0.1:3030
INFO Loaded configuration from config/config.toml
INFO Added upstream: primary (https://eth-mainnet.g.alchemy.com/...)
INFO Health checker started (interval: 60s)
INFO Server listening on http://127.0.0.1:3030
```

Check the health endpoint:
```bash
curl http://localhost:3030/health
```

Expected response:

```json
{
  "status": "healthy",
  "upstreams": [
    {
      "name": "primary",
      "healthy": true,
      "latency_ms": 45,
      "latest_block": 18500000
    }
  ],
  "cache": {
    "enabled": true,
    "blocks_cached": 0,
    "logs_cached": 0,
    "transactions_cached": 0
  }
}
```

Your First Request
Basic RPC Call
Make a simple eth_blockNumber request:
```bash
curl -X POST http://localhost:3030/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "eth_blockNumber",
    "params": [],
    "id": 1
  }'
```

Response:

```json
{
  "jsonrpc": "2.0",
  "result": "0x11a6e3b",
  "id": 1
}
```

Cached Request
Try a request that will be cached:
```bash
# First request - will fetch from upstream
curl -X POST http://localhost:3030/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "eth_getBlockByNumber",
    "params": ["0x1000000", true],
    "id": 1
  }' -v
```

Look for the X-Cache-Status header in the response. The first request shows MISS; subsequent identical requests show FULL.
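Conceptually, the cache can treat the method and params as the lookup key, which is why a repeated identical request is a full hit. A simplified sketch of such a key scheme (illustrative only, not Prism's actual implementation) might look like:

```python
import hashlib
import json

def cache_key(method: str, params: list) -> str:
    """Derive a stable cache key from method + params (illustrative only)."""
    # Canonical JSON encoding so logically identical requests map to one key.
    payload = json.dumps({"method": method, "params": params}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Identical requests share a key, so the second can be answered from cache.
k1 = cache_key("eth_getBlockByNumber", ["0x1000000", True])
k2 = cache_key("eth_getBlockByNumber", ["0x1000000", True])
k3 = cache_key("eth_getBlockByNumber", ["0x1000001", True])
```

Note that the JSON-RPC `id` field is excluded: two requests with different `id`s but the same method and params are still the same lookup.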
Get Logs with Partial Caching
```bash
curl -X POST http://localhost:3030/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "eth_getLogs",
    "params": [{
      "fromBlock": "0x1000000",
      "toBlock": "0x1000100",
      "address": "0xA0b86a33a603e5e0e4a1d72e7e7e7e4e4e4e4e4e"
    }],
    "id": 1
  }'
```

Batch Requests
Send multiple requests in one HTTP call (send array to / endpoint):
```bash
curl -X POST http://localhost:3030/ \
  -H "Content-Type: application/json" \
  -d '[
    {
      "jsonrpc": "2.0",
      "method": "eth_blockNumber",
      "params": [],
      "id": 1
    },
    {
      "jsonrpc": "2.0",
      "method": "eth_chainId",
      "params": [],
      "id": 2
    }
  ]'
```

Understanding Cache Status
Every response includes an X-Cache-Status header:
MISS
Request served entirely from upstream (not cached)
FULL
Complete cache hit, no upstream call needed
PARTIAL
Some data from cache, rest fetched from upstream
EMPTY
Cached empty result (e.g., no logs in range)
PARTIAL_WITH_FAILURES
Partial cache hit with some upstream failures
Example: Observing Cache Behavior
```bash
# First call - MISS
curl -s -D - http://localhost:3030/ -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1000000",false],"id":1}' \
  | grep -i cache-status
# X-Cache-Status: MISS

# Second call - FULL hit
curl -s -D - http://localhost:3030/ -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1000000",false],"id":1}' \
  | grep -i cache-status
# X-Cache-Status: FULL
```

Monitoring Your Instance
Prometheus Metrics
Metrics are exposed at /metrics:
```bash
curl http://localhost:3030/metrics
```

You'll see metrics like:

```
# HELP rpc_requests_total Total number of RPC requests
# TYPE rpc_requests_total counter
rpc_requests_total{method="eth_blockNumber",upstream="primary"} 42

# HELP rpc_cache_hits_total Cache hits by method
# TYPE rpc_cache_hits_total counter
rpc_cache_hits_total{method="eth_getBlockByNumber"} 15

# HELP rpc_request_duration_seconds Request latency histogram
# TYPE rpc_request_duration_seconds histogram
rpc_request_duration_seconds_bucket{method="eth_blockNumber",upstream="primary",le="0.05"} 40
```

Health Checks
The /health endpoint provides detailed status:
```bash
curl http://localhost:3030/health | jq
```

Next Steps
Now that you have Prism running:
Configure Advanced Features
Enable authentication with API keys
Configure consensus validation for critical methods
Enable hedging to reduce tail latency
Optimize Caching
Review cache configuration
Adjust cache sizes based on your workload
Configure reorg detection
Set Up Monitoring
Integrate Prometheus metrics
Configure alerting for upstream failures
Monitor cache hit rates
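The counters exposed under /metrics are enough to derive a cache hit rate. The sketch below parses Prometheus text output naively; the metric names come from the sample shown earlier, while `counter_total` and `hit_rate` are hypothetical helpers, not part of Prism:

```python
def counter_total(metrics_text: str, name: str) -> float:
    """Sum all samples of a Prometheus counter across label sets (naive parse)."""
    total = 0.0
    for line in metrics_text.splitlines():
        # Counter sample lines start with the metric name, e.g. name{labels} value
        if line.startswith(name):
            total += float(line.rsplit(" ", 1)[1])
    return total

def hit_rate(metrics_text: str) -> float:
    """Fraction of requests answered from cache, 0.0 if no traffic yet."""
    hits = counter_total(metrics_text, "rpc_cache_hits_total")
    requests = counter_total(metrics_text, "rpc_requests_total")
    return hits / requests if requests else 0.0

# Sample scrape output, taken from the metrics example in this guide.
sample = """\
rpc_requests_total{method="eth_blockNumber",upstream="primary"} 42
rpc_cache_hits_total{method="eth_getBlockByNumber"} 15
"""
```

A production setup would let Prometheus do this with a `rate()` ratio query instead of scraping by hand; the sketch only shows which counters matter.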
Add More Upstreams
Configure multiple RPC providers
Set up WebSocket subscriptions for real-time updates
Implement circuit breakers
Production Hardening
Review deployment guide
Enable authentication
Configure rate limiting
Set up proper logging
Common Issues
Port Already in Use
If port 3030 is already taken:
```toml
[server]
bind_port = 8080  # Use a different port
```

Upstream Connection Failures
Check your upstream URLs and network connectivity:
```bash
# Test upstream connectivity
cargo make cli-test-upstreams
```

Cache Not Working
Ensure caching is enabled and methods are cacheable:
```toml
[cache]
enabled = true
```

Only these methods are cached:

eth_getBlockByHash
eth_getBlockByNumber
eth_getLogs
eth_getTransactionByHash
eth_getTransactionReceipt
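A client can tell in advance whether a call can benefit from the cache by checking it against this list. A trivial sketch (`is_cacheable` is a hypothetical helper; the set contents come from the list above):

```python
# The cacheable-method list from this guide; anything else always goes upstream.
CACHEABLE_METHODS = {
    "eth_getBlockByHash",
    "eth_getBlockByNumber",
    "eth_getLogs",
    "eth_getTransactionByHash",
    "eth_getTransactionReceipt",
}

def is_cacheable(method: str) -> bool:
    """True if this JSON-RPC method is in the cacheable set."""
    return method in CACHEABLE_METHODS
```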
High Memory Usage
Reduce cache sizes in configuration:
```toml
[cache.manager_config.block_cache]
max_headers = 5000  # Reduce from default 10000
max_bodies = 5000

[cache.manager_config.log_cache]
max_exact_results = 5000  # Reduce from default 10000
```

Getting Help
Documentation: Browse the full docs for detailed guides
Configuration Reference: See configuration-reference.md
API Reference: See api-reference.md
Routing: Understand routing strategies in routing-strategies.md
Issues: Check existing component documentation for troubleshooting
Quick Reference
Useful Commands
```bash
# Build
cargo make build                 # Development build
cargo make build-release         # Production build

# Run
cargo make run-server            # Start server
cargo make run-server-release    # Start optimized server

# CLI Tools
cargo make cli-config-validate   # Validate configuration
cargo make cli-config-show       # Show resolved config
cargo make cli-test-upstreams    # Test upstream connectivity

# Testing
cargo make test                  # Run all tests
cargo make test-core             # Core library tests only
cargo make bench                 # Run benchmarks

# Development
cargo make                       # Format + clippy + test
cargo make dev                   # Full development workflow
```

Default Ports
HTTP Server: 3030
Prometheus Metrics: 9090 (if enabled)
Important Files
Configuration: `config/config.toml`
Database: `db/auth.db` (if auth enabled)
Logs: stdout/stderr (configure in the logging section)
Ready to go deeper? Continue with the Configuration Reference to learn about all available options.