Deployment Guide
Complete guide for deploying Prism RPC Aggregator in production and development environments.
Prerequisites
Required Software
For Docker deployment:
Docker Engine 20.10 or later
Docker Compose v2.0 or later (for stack deployment)
For binary deployment:
Rust nightly toolchain (1.93.0 or later)
cargo-make for build automation
OpenSSL development libraries (not required at runtime - Prism uses rustls)
For all deployments:
Upstream RPC provider endpoints (Alchemy, Infura, QuickNode, or public endpoints)
Minimum 4GB RAM, 8GB+ recommended for production
10GB+ disk space for database and cache
Network Requirements
Port 3030 (RPC endpoint) - TCP
Port 9090 (Prometheus metrics) - TCP, optional
Outbound HTTPS/WSS access to upstream RPC providers
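As a quick pre-flight sketch, you can verify these requirements from the host before deploying (the provider URL is a placeholder; substitute your own endpoint):
# Confirm nothing else is listening on the RPC port
ss -tln | grep ':3030' || echo "Port 3030 is free"
# Confirm outbound HTTPS access to an upstream provider
curl -sf -X POST https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"web3_clientVersion","params":[],"id":1}' \
&& echo "Upstream reachable"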
System Requirements
Minimum Requirements
| Resource | Minimum | Recommended |
| --- | --- | --- |
| CPU | 1 core | 2+ cores |
| RAM | 2GB | 4-8GB |
| Disk | 5GB | 20GB+ |
| Network | 10 Mbps | 100 Mbps+ |
Recommended Production Specifications
| Resource | Specification |
| --- | --- |
| CPU | 4+ cores (for high concurrency) |
| RAM | 8GB (16GB for large cache configurations) |
| Disk | SSD with 50GB+ (for database and extensive caching) |
| Network | Low-latency connection to upstream providers |
Memory Planning
Cache size directly impacts memory usage:
# Low memory profile (~500MB-1GB)
[cache.manager_config]
retain_blocks = 500
[cache.manager_config.log_cache]
max_exact_results = 5000
max_bitmap_entries = 50000
[cache.manager_config.block_cache]
max_headers = 5000
max_bodies = 5000
[cache.manager_config.transaction_cache]
max_transactions = 10000
max_receipts = 10000
# High memory profile (~4GB-8GB)
[cache.manager_config]
retain_blocks = 5000
[cache.manager_config.log_cache]
max_exact_results = 100000
max_bitmap_entries = 500000
[cache.manager_config.block_cache]
max_headers = 50000
max_bodies = 50000
[cache.manager_config.transaction_cache]
max_transactions = 200000
max_receipts = 200000
See Configuration Reference for detailed cache tuning.
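As a rough sanity check, you can estimate the raw cache footprint of a profile from its limits, using the approximate per-entry sizes given under Resource Sizing below; actual usage runs higher due to indexes, allocator overhead, and in-flight request buffers:
# Back-of-the-envelope estimate for the high memory profile (sizes in KB)
headers=$((50000 * 2))      # max_headers x ~2KB
bodies=$((50000 * 5))       # max_bodies x ~5KB
logs=$((100000 * 3))        # max_exact_results x ~3KB
txs=$((200000 * 2 * 2))     # transactions + receipts x ~2KB each
echo "~$(( (headers + bodies + logs + txs) / 1024 )) MB of raw cache entries"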
Building from Source
Development Build
# Clone repository
git clone https://github.com/prismrpc/prism.git
cd prism
# Install Rust nightly
rustup install nightly
rustup default nightly
# Install cargo-make
cargo install cargo-make
# Development build
cargo make build
# Binaries located at:
# - target/debug/server
# - target/debug/cli
Production Release Build
# Optimized release build
cargo make build-release
# Binaries with full optimizations:
# - target/release/server
# - target/release/cli
The release profile includes:
opt-level = 3: Maximum optimization
lto = true: Link-time optimization
codegen-units = 1: Single codegen unit for best optimization
strip = true: Stripped symbols for smaller binary size
Expected binary sizes:
server: ~25-35MB (stripped)
cli: ~15-20MB (stripped)
Verify Build
# Check version
./target/release/server --version
# Validate configuration
./target/release/cli config validate --config config/config.toml
# Test server startup (dry-run)
PRISM_CONFIG=config/config.toml ./target/release/server
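Expanding on the dry run, a minimal smoke-test sketch (assumes the default port 3030 from the example config): start the server in the background, poll /health until it responds, then stop it.
PRISM_CONFIG=config/config.toml ./target/release/server &
SERVER_PID=$!
# Wait up to 15 seconds for the health endpoint to come up
for i in $(seq 1 15); do
curl -sf http://localhost:3030/health > /dev/null && break
sleep 1
done
curl -sf http://localhost:3030/health > /dev/null && echo "startup OK" || echo "startup FAILED"
kill "$SERVER_PID"
Docker Deployment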
Quick Start
Pull and run the pre-built image:
# Pull latest image
docker pull ghcr.io/prismrpc/prism:0.1.0
# Create config directory
mkdir -p config db
# Download example config
curl -o config/config.toml https://raw.githubusercontent.com/prismrpc/prism/main/config/docker.toml
# Edit config with your RPC endpoints
nano config/config.toml
# Run container
docker run -d \
--name prism-rpc \
--restart unless-stopped \
-p 3030:3030 \
-p 9090:9090 \
-v $(pwd)/config/config.toml:/app/config/config.toml:ro \
-v prism-db:/app/db \
-e RUST_LOG=info,prism_core=info,server=info \
ghcr.io/prismrpc/prism:0.1.0
Verify Deployment
# Check container status
docker ps | grep prism
# View logs
docker logs prism-rpc -f
# Test health endpoint
curl http://localhost:3030/health
# Test RPC call
curl -X POST http://localhost:3030/ \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'Docker Build from Source
# Build custom image
docker build -t prism-rpc:custom .
# Run custom build
docker run -d \
--name prism-rpc \
-p 3030:3030 \
-v $(pwd)/config/config.toml:/app/config/config.toml:ro \
prism-rpc:custom
Docker Configuration Requirements
Important: Use bind_address = "0.0.0.0" in Docker configs:
[server]
bind_address = "0.0.0.0" # Required for Docker - not "127.0.0.1"
bind_port = 3030
Volume Mounts
docker run -d \
--name prism-rpc \
# Configuration (read-only recommended)
-v $(pwd)/config/config.toml:/app/config/config.toml:ro \
# Database (persistent, read-write)
-v prism-db:/app/db \
# Optional: External logs directory
-v $(pwd)/logs:/app/logs \
ghcr.io/prismrpc/prism:0.1.0
Container Management
# Stop container
docker stop prism-rpc
# Start container
docker start prism-rpc
# Restart container
docker restart prism-rpc
# View logs (last 100 lines)
docker logs prism-rpc --tail 100
# Follow logs
docker logs prism-rpc -f
# Execute CLI inside container
docker exec prism-rpc prism-cli auth list
# Remove container (preserves volumes)
docker rm prism-rpc
# Remove container and volumes
docker rm -v prism-rpc
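Before removing volumes, one common pattern for backing up the named database volume is a throwaway container (archive name and backup path are illustrative):
docker run --rm \
-v prism-db:/data:ro \
-v $(pwd)/backups:/backup \
alpine tar czf /backup/prism-db-$(date +%F).tar.gz -C /data .
Docker Compose Deployment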
Basic Deployment
Create docker-compose.yml:
version: '3.8'
services:
prism:
image: ghcr.io/prismrpc/prism:0.1.0
container_name: prism-rpc
restart: unless-stopped
ports:
- "3030:3030"
- "9090:9090"
volumes:
- ./config/config.toml:/app/config/config.toml:ro
- prism-db:/app/db
- prism-logs:/app/logs
environment:
- PRISM_CONFIG=/app/config/config.toml
- RUST_LOG=info,prism_core=info,server=info
- RUST_BACKTRACE=0
healthcheck:
test: ["CMD", "curl", "-sf", "http://localhost:3030/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 15s
# Resource limits
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '0.5'
memory: 512M
volumes:
prism-db:
prism-logs:
Start the service:
# Start in background
docker compose up -d
# View logs
docker compose logs -f prism
# Stop service
docker compose down
# Stop and remove volumes
docker compose down -v
Full Production Stack with Monitoring
Use the included production compose file:
# Clone repository
git clone https://github.com/prismrpc/prism.git
cd prism
# Copy and configure
cp config/docker.toml config/config.toml
nano config/config.toml # Add your RPC endpoints
# Start Prism only
docker compose -f docker/docker-compose.prod.yml up -d
# Or start with monitoring stack (Prometheus + Grafana)
docker compose -f docker/docker-compose.prod.yml --profile monitoring up -d
The monitoring stack includes:
Prism: RPC aggregator on ports 3030 (RPC) and 9090 (metrics)
Prometheus: Metrics collection on port 9091
Grafana: Visualization on port 3001 (login: admin/admin)
Access points:
RPC Endpoint: http://localhost:3030/
Health Check: http://localhost:3030/health
Metrics: http://localhost:3030/metrics
Prometheus UI: http://localhost:9091
Grafana: http://localhost:3001
Monitoring Stack Configuration
The docker-compose.prod.yml creates:
services:
prism:
# Main RPC service
prometheus:
# Metrics collection (--profile monitoring)
grafana:
# Metrics visualization (--profile monitoring)
Prometheus configuration (deploy/prometheus/prometheus.yml):
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'prism'
static_configs:
- targets: ['prism:3030']
metrics_path: '/metrics'
scrape_interval: 5s
See Monitoring Guide for detailed metrics documentation.
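To confirm Prometheus is actually scraping Prism, you can query its targets API (port 9091 as mapped above; requires jq):
curl -s http://localhost:9091/api/v1/targets \
| jq '.data.activeTargets[] | {job: .labels.job, health: .health}'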
Binary Deployment
Installation Steps
# 1. Create application user
sudo useradd -r -s /bin/false prism
# 2. Create directory structure
sudo mkdir -p /opt/prism/{config,db,logs}
# 3. Copy binaries
sudo cp target/release/server /opt/prism/prism-server
sudo cp target/release/cli /opt/prism/prism-cli
# 4. Copy configuration
sudo cp config/config.toml /opt/prism/config/
# 5. Set ownership
sudo chown -R prism:prism /opt/prism
# 6. Make binaries executable
sudo chmod +x /opt/prism/prism-server /opt/prism/prism-cli
# 7. Symlink to system path (optional)
sudo ln -s /opt/prism/prism-server /usr/local/bin/
sudo ln -s /opt/prism/prism-cli /usr/local/bin/
Directory structure:
/opt/prism/
├── config/
│ └── config.toml
├── db/
│ └── auth.db (created on first run)
├── logs/ (optional)
├── prism-server
└── prism-cli
Manual Execution
# Run as prism user
sudo -u prism /opt/prism/prism-server
# With custom config
sudo -u prism PRISM_CONFIG=/opt/prism/config/config.toml /opt/prism/prism-server
# With custom logging
sudo -u prism RUST_LOG=debug /opt/prism/prism-server
systemd Service Setup
Create Service File
Create /etc/systemd/system/prism.service:
[Unit]
Description=Prism RPC Aggregator
Documentation=https://docs.prismrpc.dev
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=prism
Group=prism
WorkingDirectory=/opt/prism
# Environment
Environment="PRISM_CONFIG=/opt/prism/config/config.toml"
Environment="RUST_LOG=info,prism_core=info,server=info"
Environment="RUST_BACKTRACE=0"
# Execution
ExecStart=/opt/prism/prism-server
ExecReload=/bin/kill -HUP $MAINPID
# Restart policy
Restart=on-failure
RestartSec=5s
KillMode=mixed
KillSignal=SIGTERM
TimeoutStopSec=30s
# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/prism/db /opt/prism/logs
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictRealtime=true
RestrictNamespaces=true
LockPersonality=true
MemoryDenyWriteExecute=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
# Resource limits
LimitNOFILE=65536
LimitNPROC=4096
[Install]
WantedBy=multi-user.target
Enable and Start Service
# Reload systemd daemon
sudo systemctl daemon-reload
# Enable service (start on boot)
sudo systemctl enable prism
# Start service
sudo systemctl start prism
# Check status
sudo systemctl status prism
# View logs
sudo journalctl -u prism -f
# Stop service
sudo systemctl stop prism
# Restart service
sudo systemctl restart prism
# Disable service
sudo systemctl disable prism
Service Management
# View recent logs
sudo journalctl -u prism -n 100
# View logs since boot
sudo journalctl -u prism -b
# Follow logs in real-time
sudo journalctl -u prism -f
# View only error logs
sudo journalctl -u prism -p err
# Check service is enabled
systemctl is-enabled prism
# Check service is running
systemctl is-active prism
Log Rotation
Create /etc/logrotate.d/prism:
/opt/prism/logs/*.log {
daily
rotate 14
compress
delaycompress
missingok
notifempty
create 0640 prism prism
sharedscripts
postrotate
systemctl reload prism > /dev/null 2>&1 || true
endscript
}
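Dry-run the rotation rules before relying on them:
sudo logrotate --debug /etc/logrotate.d/prism
Reverse Proxy Configuration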
nginx Configuration
Basic Setup
Create /etc/nginx/sites-available/prism:
upstream prism_backend {
server 127.0.0.1:3030;
keepalive 32;
}
server {
listen 80;
server_name rpc.example.com;
# Redirect to HTTPS
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name rpc.example.com;
# TLS configuration (see TLS section below)
ssl_certificate /etc/ssl/certs/rpc.example.com.crt;
ssl_certificate_key /etc/ssl/private/rpc.example.com.key;
# Logging
access_log /var/log/nginx/prism-access.log;
error_log /var/log/nginx/prism-error.log;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "DENY" always;
# Main RPC endpoint
location / {
proxy_pass http://prism_backend;
proxy_http_version 1.1;
# Headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
# Timeouts
proxy_connect_timeout 10s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# Buffering
proxy_buffering off;
proxy_request_buffering off;
# Keep-alive
proxy_set_header Connection "";
}
# Health check endpoint (no auth required)
location /health {
proxy_pass http://prism_backend;
proxy_http_version 1.1;
access_log off;
}
# Metrics endpoint (restrict access)
location /metrics {
proxy_pass http://prism_backend;
proxy_http_version 1.1;
# Restrict to internal networks
allow 10.0.0.0/8;
allow 172.16.0.0/12;
allow 192.168.0.0/16;
deny all;
}
}
Enable and test:
# Create symlink
sudo ln -s /etc/nginx/sites-available/prism /etc/nginx/sites-enabled/
# Test configuration
sudo nginx -t
# Reload nginx
sudo systemctl reload nginx
Rate Limiting
Add rate limiting to nginx config:
# Define rate limit zones (add to http block)
http {
limit_req_zone $binary_remote_addr zone=rpc_limit:10m rate=100r/s;
limit_req_zone $http_x_api_key zone=api_key_limit:10m rate=1000r/s;
server {
# ... SSL config ...
location / {
# Apply rate limiting
limit_req zone=rpc_limit burst=200 nodelay;
limit_req zone=api_key_limit burst=2000 nodelay;
# Return 429 on rate limit
limit_req_status 429;
proxy_pass http://prism_backend;
# ... rest of config ...
}
}
}
WebSocket Support
If you need WebSocket subscriptions:
location /ws {
proxy_pass http://prism_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
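A quick way to exercise this proxy path, assuming WebSocket subscriptions are enabled in your Prism config and wscat is installed (npm install -g wscat):
wscat -c wss://rpc.example.com/ws \
-x '{"jsonrpc":"2.0","method":"eth_subscribe","params":["newHeads"],"id":1}'
Caddy Configuration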
Caddy is simpler with automatic HTTPS:
Create Caddyfile:
rpc.example.com {
# Automatic HTTPS with Let's Encrypt
# Main RPC endpoint
reverse_proxy localhost:3030 {
header_up X-Real-IP {remote_host}
header_up X-Forwarded-Proto {scheme}
}
# Rate limiting (100 requests/second per IP; requires the third-party caddy-ratelimit plugin)
rate_limit {
zone dynamic 100r/s
}
# Logging
log {
output file /var/log/caddy/prism-access.log
}
# Restrict metrics endpoint
@metrics path /metrics
handle @metrics {
@internal {
remote_ip 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
}
handle @internal {
reverse_proxy localhost:3030
}
respond 403
}
}
Run Caddy:
caddy run --config Caddyfile
HAProxy Configuration
For advanced load balancing:
global
log /dev/log local0
maxconn 4096
tune.ssl.default-dh-param 2048
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 10s
timeout client 60s
timeout server 60s
frontend prism_frontend
bind *:443 ssl crt /etc/ssl/certs/prism.pem
default_backend prism_backend
# Rate limiting
stick-table type ip size 100k expire 30s store http_req_rate(10s)
http-request track-sc0 src
http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
backend prism_backend
balance roundrobin
option httpclose
option forwardfor
server prism1 127.0.0.1:3030 check inter 5s rise 2 fall 3
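Validate the file before reloading HAProxy:
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl reload haproxy
TLS/HTTPS Configuration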
Let's Encrypt with Certbot (nginx)
# Install certbot
sudo apt-get install certbot python3-certbot-nginx
# Obtain certificate
sudo certbot --nginx -d rpc.example.com
# Test auto-renewal
sudo certbot renew --dry-run
# Auto-renewal is configured via systemd timer
sudo systemctl status certbot.timer
Let's Encrypt with Caddy
Caddy handles this automatically:
rpc.example.com {
# HTTPS is automatic with valid domain
reverse_proxy localhost:3030
}
Manual TLS Configuration
Generate self-signed certificate (development only):
# Generate private key and certificate
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/ssl/private/prism-selfsigned.key \
-out /etc/ssl/certs/prism-selfsigned.crt \
-subj "/C=US/ST=State/L=City/O=Organization/CN=rpc.example.com"
# Set permissions
sudo chmod 600 /etc/ssl/private/prism-selfsigned.key
sudo chmod 644 /etc/ssl/certs/prism-selfsigned.crt
Use commercial certificate:
# Copy certificate files
sudo cp your-cert.crt /etc/ssl/certs/rpc.example.com.crt
sudo cp your-cert.key /etc/ssl/private/rpc.example.com.key
sudo cp ca-bundle.crt /etc/ssl/certs/ca-bundle.crt
# Set permissions
sudo chmod 600 /etc/ssl/private/rpc.example.com.key
sudo chmod 644 /etc/ssl/certs/rpc.example.com.crt
TLS Best Practices
Strong SSL configuration for nginx:
# SSL configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
# SSL session cache
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_session_tickets off;
# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/ssl/certs/ca-bundle.crt;
# DNS resolver for OCSP
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
Test SSL configuration:
# Test with SSL Labs
# Visit: https://www.ssllabs.com/ssltest/
# Test with testssl.sh
testssl.sh https://rpc.example.com
Environment-Specific Configurations
Development
config/development.toml:
environment = "development"
[server]
bind_address = "127.0.0.1"
bind_port = 3030
max_concurrent_requests = 100
request_timeout_seconds = 30
[[upstreams.providers]]
name = "local-geth"
chain_id = 1337
https_url = "http://localhost:8545"
weight = 1
[cache]
enabled = true
[cache.manager_config]
retain_blocks = 500
[auth]
enabled = false
[metrics]
enabled = true
[logging]
level = "debug"
format = "pretty"Staging
config/staging.toml:
environment = "staging"
[server]
bind_address = "0.0.0.0"
bind_port = 3030
max_concurrent_requests = 500
[[upstreams.providers]]
name = "alchemy-sepolia"
chain_id = 11155111
https_url = "https://eth-sepolia.g.alchemy.com/v2/YOUR_KEY"
wss_url = "wss://eth-sepolia.g.alchemy.com/v2/YOUR_KEY"
weight = 1
[cache]
enabled = true
[cache.manager_config]
retain_blocks = 1000
[auth]
enabled = true
database_url = "sqlite:///app/db/auth.db"
[metrics]
enabled = true
[logging]
level = "info"
format = "json"Production
config/production.toml:
environment = "production"
[server]
bind_address = "0.0.0.0"
bind_port = 3030
max_concurrent_requests = 2000
request_timeout_seconds = 60
[[upstreams.providers]]
name = "alchemy-primary"
chain_id = 1
https_url = "https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY"
wss_url = "wss://eth-mainnet.g.alchemy.com/v2/YOUR_KEY"
weight = 3
timeout_seconds = 30
circuit_breaker_threshold = 5
circuit_breaker_timeout_seconds = 60
[[upstreams.providers]]
name = "infura-backup"
chain_id = 1
https_url = "https://mainnet.infura.io/v3/YOUR_PROJECT_ID"
wss_url = "wss://mainnet.infura.io/ws/v3/YOUR_PROJECT_ID"
weight = 2
timeout_seconds = 30
circuit_breaker_threshold = 3
circuit_breaker_timeout_seconds = 30
[[upstreams.providers]]
name = "quicknode-fallback"
chain_id = 1
https_url = "https://your-endpoint.quiknode.pro/YOUR_KEY"
wss_url = "wss://your-endpoint.quiknode.pro/YOUR_KEY"
weight = 1
timeout_seconds = 45
circuit_breaker_threshold = 2
circuit_breaker_timeout_seconds = 15
[cache]
enabled = true
[cache.manager_config]
retain_blocks = 2000
enable_auto_cleanup = true
cleanup_interval_seconds = 300
[cache.manager_config.log_cache]
chunk_size = 1000
max_exact_results = 50000
max_bitmap_entries = 200000
safety_depth = 12
[cache.manager_config.block_cache]
hot_window_size = 300
max_headers = 20000
max_bodies = 20000
safety_depth = 12
[cache.manager_config.transaction_cache]
max_transactions = 100000
max_receipts = 100000
safety_depth = 12
[cache.manager_config.reorg_manager]
safety_depth = 64
max_reorg_depth = 256
reorg_detection_threshold = 3
[health_check]
interval_seconds = 30
[auth]
enabled = true
database_url = "sqlite:///app/db/auth.db"
[metrics]
enabled = true
prometheus_port = 9090
[logging]
level = "info"
format = "json"Loading Environment-Specific Config
# Development
export PRISM_CONFIG=config/development.toml
./prism-server
# Staging
export PRISM_CONFIG=config/staging.toml
./prism-server
# Production
export PRISM_CONFIG=config/production.toml
./prism-server
For systemd, update the service file:
[Service]
Environment="PRISM_CONFIG=/opt/prism/config/production.toml"Health Checks & Readiness Probes
Health Check Endpoint
Endpoint: GET /health
Response (healthy):
{
"status": "healthy",
"upstreams": [
{
"name": "alchemy-primary",
"healthy": true,
"latency_ms": 45,
"latest_block": 18500000
},
{
"name": "infura-backup",
"healthy": true,
"latency_ms": 52,
"latest_block": 18500000
}
],
"cache": {
"enabled": true,
"blocks_cached": 1523,
"logs_cached": 8942,
"transactions_cached": 12453
}
}
Response (degraded):
{
"status": "degraded",
"upstreams": [
{
"name": "alchemy-primary",
"healthy": true,
"latency_ms": 45,
"latest_block": 18500000
},
{
"name": "infura-backup",
"healthy": false,
"error": "Connection timeout"
}
],
"cache": {
"enabled": true,
"blocks_cached": 1523,
"logs_cached": 8942,
"transactions_cached": 12453
}
}
Docker Health Check
Built-in Docker health check (already in Dockerfile):
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD curl -sf http://localhost:3030/health || exit 1
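You can read back the health status Docker records for the container:
docker inspect --format '{{.State.Health.Status}}' prism-rpc
Custom health check in docker-compose.yml: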
healthcheck:
test: ["CMD", "curl", "-sf", "http://localhost:3030/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 15s
Kubernetes Probes
Liveness probe (container is alive):
livenessProbe:
httpGet:
path: /health
port: 3030
initialDelaySeconds: 15
periodSeconds: 20
timeoutSeconds: 5
failureThreshold: 3
Readiness probe (ready to accept traffic):
readinessProbe:
httpGet:
path: /health
port: 3030
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
Startup probe (container has started):
startupProbe:
httpGet:
path: /health
port: 3030
initialDelaySeconds: 0
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 30
Load Balancer Health Checks
AWS ALB/NLB:
Protocol: HTTP
Port: 3030
Path: /health
Interval: 30 seconds
Timeout: 5 seconds
Healthy threshold: 2
Unhealthy threshold: 3
GCP Load Balancer:
Protocol: HTTP
Port: 3030
Request path: /health
Check interval: 30 seconds
Timeout: 5 seconds
Healthy threshold: 2
Unhealthy threshold: 3
Monitoring Script
Create /usr/local/bin/prism-health-check.sh:
#!/bin/bash
set -e
ENDPOINT="http://localhost:3030/health"
MAX_ATTEMPTS=3
RETRY_DELAY=2
for i in $(seq 1 $MAX_ATTEMPTS); do
if curl -sf "$ENDPOINT" > /dev/null; then
echo "Prism is healthy"
exit 0
fi
if [ $i -lt $MAX_ATTEMPTS ]; then
echo "Health check failed (attempt $i/$MAX_ATTEMPTS), retrying in ${RETRY_DELAY}s..."
sleep $RETRY_DELAY
fi
done
echo "Prism health check failed after $MAX_ATTEMPTS attempts"
exit 1
Make executable and use in cron:
chmod +x /usr/local/bin/prism-health-check.sh
# Add to crontab
*/5 * * * * /usr/local/bin/prism-health-check.sh || systemctl restart prism
Resource Sizing
CPU Sizing
| Workload | Cores | Notes |
| --- | --- | --- |
| Development | 1-2 | Sufficient for testing |
| Low traffic (<100 req/s) | 2 | Light production workload |
| Medium traffic (100-500 req/s) | 4 | Typical production |
| High traffic (500-2000 req/s) | 8-16 | High concurrency, multiple upstreams |
Memory Sizing
Base memory: ~200-500MB (without cache)
Cache memory estimation:
Total Memory = Base + Block Cache + Log Cache + Transaction Cache
Block Cache:
- Headers: max_headers × 2KB = 10000 × 2KB = 20MB
- Bodies: max_bodies × 5KB = 10000 × 5KB = 50MB
Log Cache:
- Exact results: max_exact_results × 3KB = 10000 × 3KB = 30MB
- Bitmaps: max_bitmap_entries × 100B = 100000 × 100B = 10MB
Transaction Cache:
- Transactions: max_transactions × 2KB = 50000 × 2KB = 100MB
- Receipts: max_receipts × 2KB = 50000 × 2KB = 100MB
Example configurations:
| Profile | Cache Size | Memory | Use Case |
| --- | --- | --- | --- |
| Minimal | Small cache (5k entries) | 1-2GB | Development, testing |
| Standard | Default cache (10k-50k) | 2-4GB | Low-medium traffic |
| Large | Large cache (50k-100k) | 4-8GB | High traffic, DeFi apps |
| XL | Extra large (200k+) | 8-16GB | Very high traffic |
Disk Sizing
| Component | Size | Notes |
| --- | --- | --- |
| Binaries | ~50MB | Server + CLI |
| Configuration | <1MB | TOML files |
| Database | 10-100MB | SQLite auth database (grows with API keys) |
| Logs | Variable | Depends on retention policy |
| System overhead | 1GB | OS and dependencies |
Recommended disk space:
Minimum: 10GB
Production: 50GB+ with log rotation
Network Sizing
Bandwidth estimation:
Avg RPC request: ~500 bytes
Avg RPC response: ~2-20KB (varies by method)
Upstream latency: 50-200ms
At 100 req/s:
- Inbound: ~50KB/s = 0.4 Mbps
- Outbound: ~500KB/s = 4 Mbps
- Total: ~4.4 Mbps
At 1000 req/s:
- Inbound: ~500KB/s = 4 Mbps
- Outbound: ~5MB/s = 40 Mbps
- Total: ~44 Mbps
Cache hit rate significantly reduces upstream bandwidth:
80% cache hit rate: ~10x reduction in upstream traffic
95% cache hit rate: ~20x reduction in upstream traffic
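As a sketch, upstream bandwidth at a given request rate and hit rate, assuming ~10KB average responses (a mid-range figure from the estimates above):
REQ_PER_SEC=1000
HIT_RATE=80   # percent
MISSES=$(( REQ_PER_SEC * (100 - HIT_RATE) / 100 ))
echo "upstream: ~$(( MISSES * 10 * 8 / 1000 )) Mbps (${MISSES} misses/s)"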
Resource Limits
Docker Compose:
deploy:
resources:
limits:
cpus: '4.0'
memory: 8G
reservations:
cpus: '1.0'
memory: 2G
systemd:
Add to service file:
[Service]
# CPU limit (40% of 4 cores)
CPUQuota=160%
# Memory limit (8GB)
MemoryMax=8G
MemoryHigh=7G
# File descriptor limit
LimitNOFILE=65536
# Process limit
LimitNPROC=4096
ulimit (manual runs):
# Set file descriptor limit
ulimit -n 65536
# Set process limit
ulimit -u 4096
# Run server
./prism-server
Production Checklist
Pre-Deployment
Security
Monitoring
Performance
Reliability
Operations
Troubleshooting
Service Won't Start
Check logs:
# Docker
docker logs prism-rpc
# systemd
sudo journalctl -u prism -n 100 --no-pager
# Binary
RUST_LOG=debug ./prism-server
Common issues:
Port already in use:
# Find process using port 3030
sudo lsof -i :3030
sudo netstat -tulpn | grep 3030
# Change port in config
[server]
bind_port = 8080
Configuration errors:
# Validate config
prism-cli config validate --config config/config.toml
# Show resolved config
prism-cli config show
Permission denied:
# Check file ownership
ls -la /opt/prism/
# Fix ownership
sudo chown -R prism:prism /opt/prism/
Database errors:
# Check database directory exists
mkdir -p /opt/prism/db
# Check permissions
chmod 755 /opt/prism/db
chown prism:prism /opt/prism/db
Upstream Connection Failures
Test upstream connectivity:
# Using CLI
prism-cli upstream test
# Manual test
curl -X POST https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'Common issues:
Invalid API keys: Update keys in config
Firewall blocking: Check outbound HTTPS/WSS access
Rate limiting: Use multiple providers or upgrade plan
DNS issues: Test DNS resolution
High Memory Usage
Check memory usage:
# Docker
docker stats prism-rpc
# System
ps aux | grep prism-server
top -p $(pgrep prism-server)
Reduce memory:
[cache.manager_config]
retain_blocks = 500 # Reduce from 2000
[cache.manager_config.log_cache]
max_exact_results = 5000 # Reduce from 50000
max_bitmap_entries = 50000 # Reduce from 200000
[cache.manager_config.block_cache]
max_headers = 5000 # Reduce from 20000
max_bodies = 5000
[cache.manager_config.transaction_cache]
max_transactions = 10000 # Reduce from 100000
max_receipts = 10000
Cache Not Working
Check cache status:
# Make request and check header
curl -si -X POST http://localhost:3030/ \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1000000",false],"id":1}' \
| grep -i x-cache-status
Verify cache is enabled:
[cache]
enabled = true
Only these methods are cached:
eth_getBlockByHash
eth_getBlockByNumber
eth_getLogs
eth_getTransactionByHash
eth_getTransactionReceipt
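To see caching in action, issue the same cacheable request twice; the second response should report a hit via the header checked above:
REQ='{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1000000",false],"id":1}'
for i in 1 2; do
curl -si -X POST http://localhost:3030/ \
-H "Content-Type: application/json" -d "$REQ" \
| grep -i x-cache-status
done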
High CPU Usage
Check CPU:
# Docker
docker stats prism-rpc
# System
top -p $(pgrep prism-server)
Possible causes:
Too many concurrent requests: Reduce max_concurrent_requests
Complex log queries: Enable caching, increase cache sizes
Insufficient CPU: Scale up instance
TLS/SSL Errors
Test certificate:
# Check certificate
openssl s_client -connect rpc.example.com:443 -servername rpc.example.com
# Verify certificate chain
curl -vI https://rpc.example.com/health
Common issues:
Expired certificate: Renew with certbot
Certificate mismatch: Ensure CN/SAN matches domain
Missing intermediate certs: Install CA bundle
Database Errors
Check database:
# Verify database exists
ls -la /opt/prism/db/auth.db
# Check integrity
sqlite3 /opt/prism/db/auth.db "PRAGMA integrity_check;"
# View tables
sqlite3 /opt/prism/db/auth.db ".tables"Reset database (WARNING: deletes all API keys):
rm /opt/prism/db/auth.db
# Database will be recreated on next startup
Performance Issues
Check metrics:
# Request latency
curl -s http://localhost:3030/metrics | grep rpc_request_duration
# Cache hit rate
curl -s http://localhost:3030/metrics | grep cache_hit
# Upstream health
curl http://localhost:3030/health | jq .upstreams
Optimize:
Enable caching: Set [cache] enabled = true
Add more upstreams: Distribute load
Tune cache sizes: Increase for better hit rates
Enable hedging: Reduce tail latency (see Routing Strategies)
Getting Help
Collect diagnostics:
# System info
uname -a
free -h
df -h
# Prism version
prism-server --version
# Configuration (redact API keys)
prism-cli config show
# Recent logs
journalctl -u prism -n 200 --no-pager
# Health status
curl http://localhost:3030/health | jq
# Metrics snapshot
curl -s http://localhost:3030/metrics > metrics.txt
Resources:
Next Steps
After successful deployment:
Configure Advanced Features
Set up authentication with API keys
Enable consensus validation
Configure hedging
Set Up Monitoring
Configure Prometheus and Grafana
Set up alerting rules
Monitor cache performance
Optimize Performance
Tune cache configuration
Add more upstream providers
Enable advanced routing features
Harden Security
Review authentication settings
Configure rate limiting
Set up automated backups
Production deployment complete! Your Prism instance is now ready to handle Ethereum JSON-RPC requests with intelligent caching, routing, and monitoring.