Getting Started with Prism

This guide will help you install, configure, and run Prism, the high-performance Ethereum JSON-RPC aggregator.

Prerequisites

Before installing Prism, ensure you have the following:

Required

  • Rust Nightly Toolchain: Prism uses nightly Rust features

    # Install rustup if you don't have it
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    
    # Install nightly and make it your default toolchain
    # (or run `rustup override set nightly` inside the repo to scope it to this project)
    rustup install nightly
    rustup default nightly
  • cargo-make: Build automation tool

    cargo install cargo-make
  • cargo-nextest: Fast test runner

    cargo install cargo-nextest
  • Docker and Docker Compose: For running the local devnet

    # On Ubuntu/Debian
    sudo apt-get install docker.io docker-compose
    
    # On macOS (Docker Desktop includes Compose)
    brew install --cask docker
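
Once everything is installed, a quick version check confirms the tools are on your PATH:

# Sanity-check the toolchain and helpers
rustc +nightly --version
cargo make --version
cargo nextest --version
docker --version && docker-compose --version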

System Requirements

  • CPU: 2+ cores recommended

  • RAM: Minimum 4GB, 8GB+ recommended for production

  • Disk: 10GB+ free space (more if caching large amounts of data)

  • Network: Stable internet connection for upstream RPC providers


Installation

Option 1: Build from Source

  1. Clone the repository

    git clone https://github.com/prismrpc/prism.git
    cd prism
  2. Development build

    cargo make build

    This builds all workspace crates with default optimizations.

  3. Production build

    cargo make build-release

    This creates fully optimized release binaries. They will be located at:

    • Server: target/release/server

    • CLI: target/release/cli

Option 2: Docker

Prism can also run in Docker (configuration pending - see deployment guide).


First Configuration

Prism uses TOML configuration files. You can specify the config path via the PRISM_CONFIG environment variable or use the default config/config.toml.

Minimal Configuration

Create a file config/config.toml with the following minimal setup:

[server]
bind_address = "127.0.0.1"
bind_port = 3030
max_concurrent_requests = 100

# Configure at least one upstream RPC provider
[[upstreams.providers]]
name = "primary"
chain_id = 1  # Ethereum Mainnet
https_url = "https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY"
weight = 1
timeout_seconds = 30

# Optional: Add a second provider for redundancy
[[upstreams.providers]]
name = "fallback"
chain_id = 1
https_url = "https://mainnet.infura.io/v3/YOUR_PROJECT_ID"
weight = 1
timeout_seconds = 30

[cache]
enabled = true

[auth]
enabled = false  # Start without authentication for testing

Replace YOUR_API_KEY and YOUR_PROJECT_ID with your actual RPC provider credentials.
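
To keep credentials out of version control, one common approach is a template file plus envsubst. This is a generic shell pattern, not a Prism feature, and the config/config.toml.tpl name here is hypothetical:

# config/config.toml.tpl would contain e.g.:
#   https_url = "https://eth-mainnet.g.alchemy.com/v2/${ALCHEMY_API_KEY}"
export ALCHEMY_API_KEY="your-key-here"
envsubst '${ALCHEMY_API_KEY}' < config/config.toml.tpl > config/config.toml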

Configuration with Free Public Endpoints

If you don't have API keys, you can start with public endpoints:

[server]
bind_address = "127.0.0.1"
bind_port = 3030
max_concurrent_requests = 100

[[upstreams.providers]]
name = "publicnode"
chain_id = 1
https_url = "https://ethereum-rpc.publicnode.com"
wss_url = "wss://ethereum-rpc.publicnode.com"
weight = 1
timeout_seconds = 30

[[upstreams.providers]]
name = "mevblocker"
chain_id = 1
https_url = "https://rpc.mevblocker.io/fast"
wss_url = "wss://rpc.mevblocker.io/fast"
weight = 1
timeout_seconds = 30

[cache]
enabled = true

[auth]
enabled = false

Note: Public endpoints have rate limits and may not be suitable for production use.

Validate Configuration

Before running Prism, validate your configuration:

cargo make cli-config-validate

This will check:

  • At least one upstream is configured

  • URLs are properly formatted

  • All required fields are present

  • Numeric values are valid


Running Prism

Start the Server

Using cargo-make (recommended):

# Development mode with default config
cargo make run-server

# Production mode
cargo make run-server-release

# With custom config file
PRISM_CONFIG=config/my-config.toml cargo make run-server

Or directly with cargo:

# Development build
cargo run --bin server

# Release build
cargo run --release --bin server

Verify It's Running

The server should start and display:

INFO Prism RPC Aggregator starting
INFO Server binding to 127.0.0.1:3030
INFO Loaded configuration from config/config.toml
INFO Added upstream: primary (https://eth-mainnet.g.alchemy.com/...)
INFO Health checker started (interval: 60s)
INFO Server listening on http://127.0.0.1:3030

Check the health endpoint:

curl http://localhost:3030/health

Expected response:

{
  "status": "healthy",
  "upstreams": [
    {
      "name": "primary",
      "healthy": true,
      "latency_ms": 45,
      "latest_block": 18500000
    }
  ],
  "cache": {
    "enabled": true,
    "blocks_cached": 0,
    "logs_cached": 0,
    "transactions_cached": 0
  }
}
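
For scripting, a minimal liveness check can key off the status field shown above (assumes jq is installed):

#!/usr/bin/env bash
# Exit non-zero unless the health endpoint reports "healthy"
status=$(curl -sf http://localhost:3030/health | jq -r '.status')
if [ "$status" != "healthy" ]; then
  echo "Prism unhealthy: ${status:-no response}" >&2
  exit 1
fi
echo "Prism healthy"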

Your First Request

Basic RPC Call

Make a simple eth_blockNumber request:

curl -X POST http://localhost:3030/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "eth_blockNumber",
    "params": [],
    "id": 1
  }'

Response:

{
  "jsonrpc": "2.0",
  "result": "0x11a6e3b",
  "id": 1
}
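
The result is a hex-encoded block number, as with any Ethereum JSON-RPC endpoint; you can convert it in the shell:

# 0x11a6e3b in decimal
printf '%d\n' 0x11a6e3b   # 18509371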

Cached Request

Try a request that will be cached:

# First request - will fetch from upstream
curl -X POST http://localhost:3030/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "eth_getBlockByNumber",
    "params": ["0x1000000", true],
    "id": 1
  }' -v

Look for the X-Cache-Status header in the response: the first request shows MISS, and subsequent identical requests show FULL.

Get Logs with Partial Caching

eth_getLogs requests can be served partially from cache; see Understanding Cache Status below for how partial hits are reported:

curl -X POST http://localhost:3030/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "eth_getLogs",
    "params": [{
      "fromBlock": "0x1000000",
      "toBlock": "0x1000100",
      "address": "0xA0b86a33a603e5e0e4a1d72e7e7e7e4e4e4e4e4e"
    }],
    "id": 1
  }'
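
To see a partial hit, re-request an overlapping but wider range: blocks covered by the first query can come from cache while the new tail is fetched upstream. Exact splitting behavior depends on the log cache, so you may see FULL or MISS instead of PARTIAL:

# Overlaps 0x1000000-0x1000100 from the previous query
curl -s -D - http://localhost:3030/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "eth_getLogs",
    "params": [{
      "fromBlock": "0x1000000",
      "toBlock": "0x1000200",
      "address": "0xA0b86a33a603e5e0e4a1d72e7e7e7e4e4e4e4e4e"
    }],
    "id": 1
  }' | grep -i cache-status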

Batch Requests

Send multiple requests in one HTTP call by POSTing a JSON array to the / endpoint:

curl -X POST http://localhost:3030/ \
  -H "Content-Type: application/json" \
  -d '[
    {
      "jsonrpc": "2.0",
      "method": "eth_blockNumber",
      "params": [],
      "id": 1
    },
    {
      "jsonrpc": "2.0",
      "method": "eth_chainId",
      "params": [],
      "id": 2
    }
  ]'
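
Per the JSON-RPC 2.0 spec, the response is an array with one entry per request, matched by id (the result values will differ on your node):

[
  { "jsonrpc": "2.0", "result": "0x11a6e3b", "id": 1 },
  { "jsonrpc": "2.0", "result": "0x1", "id": 2 }
]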

Understanding Cache Status

Every response includes an X-Cache-Status header:

Status                  Meaning
---------------------   --------------------------------------------------
MISS                    Request served entirely from upstream (not cached)
FULL                    Complete cache hit, no upstream call needed
PARTIAL                 Some data from cache, rest fetched from upstream
EMPTY                   Cached empty result (e.g., no logs in range)
PARTIAL_WITH_FAILURES   Partial cache hit with some upstream failures

Example: Observing Cache Behavior

# First call - MISS
curl -s -D - http://localhost:3030/ -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1000000",false],"id":1}' \
  | grep -i cache-status
# X-Cache-Status: MISS

# Second call - FULL HIT
curl -s -D - http://localhost:3030/ -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1000000",false],"id":1}' \
  | grep -i cache-status
# X-Cache-Status: FULL
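
You can also see the difference in latency; curl's built-in timing is enough for a rough comparison:

# The second (FULL) call should be noticeably faster than the first (MISS)
for i in 1 2; do
  curl -s -o /dev/null -w "call $i: %{time_total}s\n" http://localhost:3030/ \
    -H "Content-Type: application/json" \
    -d '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1000001",false],"id":1}'
done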

Monitoring Your Instance

Prometheus Metrics

Metrics are exposed at /metrics:

curl http://localhost:3030/metrics

You'll see metrics like:

# HELP rpc_requests_total Total number of RPC requests
# TYPE rpc_requests_total counter
rpc_requests_total{method="eth_blockNumber",upstream="primary"} 42

# HELP rpc_cache_hits_total Cache hits by method
# TYPE rpc_cache_hits_total counter
rpc_cache_hits_total{method="eth_getBlockByNumber"} 15

# HELP rpc_request_duration_seconds Request latency histogram
# TYPE rpc_request_duration_seconds histogram
rpc_request_duration_seconds_bucket{method="eth_blockNumber",upstream="primary",le="0.05"} 40
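
Even without a Prometheus server you can eyeball the counters; a rough cache hit rate is hits divided by total requests:

# Show just the request and cache-hit counters
curl -s http://localhost:3030/metrics | grep -E '^rpc_(requests|cache_hits)_total'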

Health Checks

The /health endpoint provides detailed status:

curl http://localhost:3030/health | jq
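
Using the field names from the example response earlier, jq can pull out just the upstream view:

# Per-upstream health and latency only
curl -s http://localhost:3030/health | jq '.upstreams[] | {name, healthy, latency_ms}'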

Next Steps

Now that you have Prism running:

  1. Configure Advanced Features

  2. Optimize Caching

  3. Set Up Monitoring

    • Configure alerting for upstream failures

    • Monitor cache hit rates

  4. Add More Upstreams

    • Configure multiple RPC providers

    • Set up WebSocket subscriptions for real-time updates

  5. Production Hardening

    • Enable authentication

    • Configure rate limiting

    • Set up proper logging


Common Issues

Port Already in Use

If port 3030 is already taken:

[server]
bind_port = 8080  # Use a different port
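
To find out what is holding the port:

# Show the process listening on 3030
ss -ltnp | grep 3030            # Linux
lsof -iTCP:3030 -sTCP:LISTEN    # macOS alternative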

Upstream Connection Failures

Check your upstream URLs and network connectivity:

# Test upstream connectivity
cargo make cli-test-upstreams
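
You can also probe an upstream by hand with a raw JSON-RPC call, bypassing Prism entirely (using the public endpoint from the earlier config):

# Should return {"jsonrpc":"2.0","result":"0x1","id":1} for mainnet
curl -sX POST https://ethereum-rpc.publicnode.com \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'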

Cache Not Working

Ensure caching is enabled and methods are cacheable:

[cache]
enabled = true

Only these methods are cached:

  • eth_getBlockByHash

  • eth_getBlockByNumber

  • eth_getLogs

  • eth_getTransactionByHash

  • eth_getTransactionReceipt
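
A quick way to confirm: methods outside this list should report MISS on every call, while a repeated call to a cacheable method flips to FULL:

# eth_chainId is not cacheable, so this should stay MISS no matter how often you run it
curl -s -D - http://localhost:3030/ -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' | grep -i cache-status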

High Memory Usage

Reduce cache sizes in configuration:

[cache.manager_config.block_cache]
max_headers = 5000  # Reduce from default 10000
max_bodies = 5000

[cache.manager_config.log_cache]
max_exact_results = 5000  # Reduce from default 10000

Getting Help

If you hit a problem not covered above, check the issue tracker on the project repository (https://github.com/prismrpc/prism).

Quick Reference

Useful Commands

# Build
cargo make build                # Development build
cargo make build-release        # Production build

# Run
cargo make run-server           # Start server
cargo make run-server-release   # Start optimized server

# CLI Tools
cargo make cli-config-validate  # Validate configuration
cargo make cli-config-show      # Show resolved config
cargo make cli-test-upstreams   # Test upstream connectivity

# Testing
cargo make test                 # Run all tests
cargo make test-core            # Core library tests only
cargo make bench                # Run benchmarks

# Development
cargo make                      # Format + clippy + test
cargo make dev                  # Full development workflow

Default Ports

  • HTTP Server: 3030

  • Prometheus Metrics: 9090 (if enabled)

Important Files

  • Configuration: config/config.toml

  • Database: db/auth.db (if auth enabled)

  • Logs: Stdout/stderr (configure in logging section)


Ready to go deeper? Continue with the Configuration Reference to learn about all available options.
