# Getting Started with Prism

## Table of Contents

* [Prerequisites](#prerequisites)
* [Installation](#installation)
* [First Configuration](#first-configuration)
* [Running Prism](#running-prism)
* [Your First Request](#your-first-request)
* [Understanding Cache Status](#understanding-cache-status)
* [Monitoring Your Instance](#monitoring-your-instance)
* [Next Steps](#next-steps)
* [Common Issues](#common-issues)
* [Getting Help](#getting-help)
* [Quick Reference](#quick-reference)

***

## Prerequisites

Before installing Prism, ensure you have the following:

### Required

* **Rust Nightly Toolchain**: Prism uses nightly Rust features

  ```bash
  # Install rustup if you don't have it
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

  # Install nightly and make it the default toolchain
  # (or use `rustup override set nightly` to scope it to this project only)
  rustup install nightly
  rustup default nightly
  ```
* **cargo-make**: Build automation tool

  ```bash
  cargo install cargo-make
  ```

### Optional but Recommended

* **cargo-nextest**: Fast test runner

  ```bash
  cargo install cargo-nextest
  ```
* **Docker and Docker Compose**: For running the local devnet

  ```bash
  # On Ubuntu/Debian
  sudo apt-get install docker.io docker-compose

  # On macOS (installs Docker Desktop, which bundles Compose)
  brew install --cask docker
  ```
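
You can sanity-check which of these tools are already on your `PATH` with a short loop (a generic shell sketch, not a Prism command; extend the tool list as needed):

```bash
# Report which prerequisite tools are already installed
status=""
for tool in rustup cargo docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    status="$status ok:$tool"
  else
    status="$status missing:$tool"
  fi
done
echo "$status"
```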

### System Requirements

* **CPU**: 2+ cores recommended
* **RAM**: Minimum 4GB, 8GB+ recommended for production
* **Disk**: 10GB+ free space (more if caching large amounts of data)
* **Network**: Stable internet connection for upstream RPC providers

***

## Installation

### Option 1: Build from Source (Recommended)

1. **Clone the repository**

   ```bash
   git clone https://github.com/prismrpc/prism.git
   cd prism
   ```
2. **Development build**

   ```bash
   cargo make build
   ```

   This builds all workspace crates with default optimizations.
3. **Production build**

   ```bash
   cargo make build-release
   ```

   This creates optimized binaries with full release optimizations. The binaries will be located in:

   * Server: `target/release/server`
   * CLI: `target/release/cli`

### Option 2: Docker

Prism can also run in Docker; container configuration is still pending (see the deployment guide).

***

## First Configuration

Prism uses TOML configuration files. You can specify the config path via the `PRISM_CONFIG` environment variable or use the default `config/config.toml`.

### Minimal Configuration

Create a file `config/config.toml` with the following minimal setup:

```toml
[server]
bind_address = "127.0.0.1"
bind_port = 3030
max_concurrent_requests = 100

# Configure at least one upstream RPC provider
[[upstreams.providers]]
name = "primary"
chain_id = 1  # Ethereum Mainnet
https_url = "https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY"
weight = 1
timeout_seconds = 30

# Optional: Add a second provider for redundancy
[[upstreams.providers]]
name = "fallback"
chain_id = 1
https_url = "https://mainnet.infura.io/v3/YOUR_PROJECT_ID"
weight = 1
timeout_seconds = 30

[cache]
enabled = true

[auth]
enabled = false  # Start without authentication for testing
```

Replace `YOUR_API_KEY` and `YOUR_PROJECT_ID` with your actual RPC provider credentials.

### Configuration with Free Public Endpoints

If you don't have API keys, you can start with public endpoints:

```toml
[server]
bind_address = "127.0.0.1"
bind_port = 3030
max_concurrent_requests = 100

[[upstreams.providers]]
name = "publicnode"
chain_id = 1
https_url = "https://ethereum-rpc.publicnode.com"
wss_url = "wss://ethereum-rpc.publicnode.com"
weight = 1
timeout_seconds = 30

[[upstreams.providers]]
name = "mevblocker"
chain_id = 1
https_url = "https://rpc.mevblocker.io/fast"
wss_url = "wss://rpc.mevblocker.io/fast"
weight = 1
timeout_seconds = 30

[cache]
enabled = true

[auth]
enabled = false
```

> **Note**: Public endpoints have rate limits and may not be suitable for production use.

### Validate Configuration

Before running Prism, validate your configuration:

```bash
cargo make cli-config-validate
```

The validator checks that:

* At least one upstream is configured
* URLs are properly formatted
* All required fields are present
* Numeric values are valid
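
To illustrate the kind of checks involved, the sketch below greps a sample config for a few required keys. This is only a rough pre-flight illustration (the sample file and key list are assumptions for the example); the real validation is `cargo make cli-config-validate`.

```bash
# Write a sample config to a temp file, then check that required keys exist
config=$(mktemp)
cat > "$config" <<'EOF'
[server]
bind_address = "127.0.0.1"
bind_port = 3030

[[upstreams.providers]]
name = "primary"
chain_id = 1
https_url = "https://ethereum-rpc.publicnode.com"
EOF

missing=0
for key in bind_address bind_port chain_id https_url; do
  grep -Eq "^[[:space:]]*$key[[:space:]]*=" "$config" || missing=$((missing + 1))
done
echo "missing keys: $missing"
rm -f "$config"
```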

***

## Running Prism

### Start the Server

Using cargo-make (recommended):

```bash
# Development mode with default config
cargo make run-server

# Production mode
cargo make run-server-release

# With custom config file
PRISM_CONFIG=config/my-config.toml cargo make run-server
```

Or directly with cargo:

```bash
# Development build
cargo run --bin server

# Release build
cargo run --release --bin server
```

### Verify It's Running

The server should start and log output similar to:

```
INFO Prism RPC Aggregator starting
INFO Server binding to 127.0.0.1:3030
INFO Loaded configuration from config/config.toml
INFO Added upstream: primary (https://eth-mainnet.g.alchemy.com/...)
INFO Health checker started (interval: 60s)
INFO Server listening on http://127.0.0.1:3030
```

Check the health endpoint:

```bash
curl http://localhost:3030/health
```

Expected response:

```json
{
  "status": "healthy",
  "upstreams": [
    {
      "name": "primary",
      "healthy": true,
      "latency_ms": 45,
      "latest_block": 18500000
    }
  ],
  "cache": {
    "enabled": true,
    "blocks_cached": 0,
    "logs_cached": 0,
    "transactions_cached": 0
  }
}
```
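
For scripting, you can branch on the `status` field. Below is a plain-shell sketch that assumes the response shape shown above; with `jq` installed, `curl -s http://localhost:3030/health | jq -r .status` is cleaner.

```bash
# In real use: response=$(curl -s http://localhost:3030/health)
response='{"status": "healthy", "upstreams": []}'
case "$response" in
  *'"status": "healthy"'* | *'"status":"healthy"'*) health=ok ;;
  *) health=bad ;;
esac
echo "health: $health"
```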

***

## Your First Request

### Basic RPC Call

Make a simple `eth_blockNumber` request:

```bash
curl -X POST http://localhost:3030/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "eth_blockNumber",
    "params": [],
    "id": 1
  }'
```

Response (the exact block number will differ):

```json
{
  "jsonrpc": "2.0",
  "result": "0x11a6e3b",
  "id": 1
}
```
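
JSON-RPC quantities like this are hex-encoded strings; `printf` can decode them to decimal:

```bash
# Decode the hex quantity returned above to a decimal block number
block_hex=0x11a6e3b
block_dec=$(printf '%d' "$block_hex")
echo "$block_dec"
```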

### Cached Request

Try a request that will be cached:

```bash
# First request - will fetch from upstream
curl -X POST http://localhost:3030/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "eth_getBlockByNumber",
    "params": ["0x1000000", true],
    "id": 1
  }' -v
```

Look for the `X-Cache-Status` header in the response: the first request shows `MISS`, and subsequent identical requests show `FULL`.

### Get Logs with Partial Caching

```bash
curl -X POST http://localhost:3030/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "eth_getLogs",
    "params": [{
      "fromBlock": "0x1000000",
      "toBlock": "0x1000100",
      "address": "0xA0b86a33a603e5e0e4a1d72e7e7e7e4e4e4e4e4e"
    }],
    "id": 1
  }'
```
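
Note the range size: many upstream providers cap the block span a single `eth_getLogs` call may cover, so it's worth checking how many blocks a hex range actually includes (shell arithmetic understands `0x` literals):

```bash
# Blocks covered by the range above, inclusive on both ends
from=0x1000000
to=0x1000100
blocks=$(( to - from + 1 ))
echo "$blocks"
```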

### Batch Requests

Send multiple requests in one HTTP call by POSTing a JSON array to the `/` endpoint:

```bash
curl -X POST http://localhost:3030/ \
  -H "Content-Type: application/json" \
  -d '[
    {
      "jsonrpc": "2.0",
      "method": "eth_blockNumber",
      "params": [],
      "id": 1
    },
    {
      "jsonrpc": "2.0",
      "method": "eth_chainId",
      "params": [],
      "id": 2
    }
  ]'
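
For larger batches, a small loop can generate the payload. This is a generic sketch (the starting block number is illustrative); it builds requests for a run of consecutive blocks:

```bash
# Build a JSON-RPC batch requesting headers for consecutive blocks
start=16000000   # decimal; encoded as hex below
count=3
payload="["
i=0
while [ "$i" -lt "$count" ]; do
  block=$(printf '0x%x' $((start + i)))
  payload="$payload{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBlockByNumber\",\"params\":[\"$block\",false],\"id\":$((i + 1))}"
  i=$((i + 1))
  [ "$i" -lt "$count" ] && payload="$payload,"
done
payload="$payload]"
echo "$payload"
# Then: curl -X POST http://localhost:3030/ \
#   -H "Content-Type: application/json" -d "$payload"
```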
```

***

## Understanding Cache Status

Every response includes an `X-Cache-Status` header:

| Status                  | Meaning                                            |
| ----------------------- | -------------------------------------------------- |
| `MISS`                  | Request served entirely from upstream (not cached) |
| `FULL`                  | Complete cache hit, no upstream call needed        |
| `PARTIAL`               | Some data from cache, rest fetched from upstream   |
| `EMPTY`                 | Cached empty result (e.g., no logs in range)       |
| `PARTIAL_WITH_FAILURES` | Partial cache hit with some upstream failures      |

### Example: Observing Cache Behavior

```bash
# First call - MISS
curl -s -D - http://localhost:3030/ -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1000000",false],"id":1}' \
  | grep -i cache-status
# X-Cache-Status: MISS

# Second call - FULL
curl -s -D - http://localhost:3030/ -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1000000",false],"id":1}' \
  | grep -i cache-status
# X-Cache-Status: FULL
```

***

## Monitoring Your Instance

### Prometheus Metrics

Metrics are exposed at `/metrics`:

```bash
curl http://localhost:3030/metrics
```

You'll see metrics like:

```
# HELP rpc_requests_total Total number of RPC requests
# TYPE rpc_requests_total counter
rpc_requests_total{method="eth_blockNumber",upstream="primary"} 42

# HELP rpc_cache_hits_total Cache hits by method
# TYPE rpc_cache_hits_total counter
rpc_cache_hits_total{method="eth_getBlockByNumber"} 15

# HELP rpc_request_duration_seconds Request latency histogram
# TYPE rpc_request_duration_seconds histogram
rpc_request_duration_seconds_bucket{method="eth_blockNumber",upstream="primary",le="0.05"} 40
```
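
Individual counters are easy to pull out of the Prometheus text format with `awk`. The metrics lines below are inlined samples so the snippet is self-contained; in practice you would capture `curl -s http://localhost:3030/metrics` instead:

```bash
# In real use: metrics=$(curl -s http://localhost:3030/metrics)
metrics='rpc_requests_total{method="eth_blockNumber",upstream="primary"} 42
rpc_cache_hits_total{method="eth_getBlockByNumber"} 15'
hits=$(printf '%s\n' "$metrics" | awk '/^rpc_cache_hits_total/ { print $2 }')
echo "cache hits: $hits"
```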

### Health Checks

The `/health` endpoint provides detailed status:

```bash
curl http://localhost:3030/health | jq
```

***

## Next Steps

Now that you have Prism running:

1. **Configure Advanced Features**
   * Enable [authentication](https://docs.prismrpc.dev/features/authentication) with API keys
   * Configure [consensus validation](https://docs.prismrpc.dev/features/routing-strategies#consensus-validation) for critical methods
   * Enable [hedging](https://docs.prismrpc.dev/features/routing-strategies#hedging-for-tail-latency) to reduce tail latency
2. **Optimize Caching**
   * Review [cache configuration](https://docs.prismrpc.dev/configuration-reference#cache-configuration)
   * Adjust cache sizes based on your workload
   * Configure [reorg detection](https://docs.prismrpc.dev/features/caching-guide#cache-invalidation)
3. **Set Up Monitoring**
   * Integrate [Prometheus metrics](https://docs.prismrpc.dev/features/monitoring#prometheus-metrics)
   * Configure alerting for upstream failures
   * Monitor cache hit rates
4. **Add More Upstreams**
   * Configure multiple RPC providers
   * Set up WebSocket subscriptions for real-time updates
   * Implement [circuit breakers](https://docs.prismrpc.dev/features/routing-strategies#circuit-breaker)
5. **Production Hardening**
   * Review [deployment guide](https://docs.prismrpc.dev/deployment)
   * Enable authentication
   * Configure rate limiting
   * Set up proper logging

***

## Common Issues

### Port Already in Use

If port 3030 is already taken:

```toml
[server]
bind_port = 8080  # Use a different port
```

### Upstream Connection Failures

Check your upstream URLs and network connectivity:

```bash
# Test upstream connectivity
cargo make cli-test-upstreams
```

### Cache Not Working

Ensure caching is enabled and methods are cacheable:

```toml
[cache]
enabled = true
```

Only these methods are cached:

* `eth_getBlockByHash`
* `eth_getBlockByNumber`
* `eth_getLogs`
* `eth_getTransactionByHash`
* `eth_getTransactionReceipt`

### High Memory Usage

Reduce cache sizes in configuration:

```toml
[cache.manager_config.block_cache]
max_headers = 5000  # Reduce from default 10000
max_bodies = 5000

[cache.manager_config.log_cache]
max_exact_results = 5000  # Reduce from default 10000
```
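
To pick sensible limits, a back-of-envelope estimate helps. The per-entry sizes below are rough assumptions for illustration, not Prism measurements (actual sizes vary with block contents):

```bash
# Back-of-envelope cache sizing: assume ~2 KB per header, ~100 KB per body
max_headers=5000
max_bodies=5000
est_kb=$(( max_headers * 2 + max_bodies * 100 ))
echo "rough upper bound: $(( est_kb / 1024 )) MB"
```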

***

## Getting Help

* **Documentation**: Browse the full docs for detailed guides
* **Configuration Reference**: See [configuration-reference.md](https://docs.prismrpc.dev/configuration-reference)
* **API Reference**: See [api-reference.md](https://docs.prismrpc.dev/api-reference)
* **Routing**: Understand routing strategies in [routing-strategies.md](https://docs.prismrpc.dev/features/routing-strategies)
* **Issues**: Check the component documentation for troubleshooting tips

***

## Quick Reference

### Useful Commands

```bash
# Build
cargo make build                # Development build
cargo make build-release        # Production build

# Run
cargo make run-server           # Start server
cargo make run-server-release   # Start optimized server

# CLI Tools
cargo make cli-config-validate  # Validate configuration
cargo make cli-config-show      # Show resolved config
cargo make cli-test-upstreams   # Test upstream connectivity

# Testing
cargo make test                 # Run all tests
cargo make test-core            # Core library tests only
cargo make bench                # Run benchmarks

# Development
cargo make                      # Format + clippy + test
cargo make dev                  # Full development workflow
```

### Default Ports

* **HTTP Server**: 3030
* **Prometheus Metrics**: 9090 (if enabled)

### Important Files

* **Configuration**: `config/config.toml`
* **Database**: `db/auth.db` (if auth enabled)
* **Logs**: Stdout/stderr (configure in logging section)

***

**Ready to go deeper?** Continue with the [Configuration Reference](https://docs.prismrpc.dev/configuration-reference) to learn about all available options.
