⚠️ This software is in alpha. Use in production environments at your own risk.
This relayer service offers the following features:
- Multi-Chain Support: Interact with multiple blockchain networks, including Solana and EVM-based chains.
- Transaction Relaying: Submit transactions to supported blockchain networks efficiently.
- Transaction Signing: Securely sign transactions using configurable key management.
- Transaction Fee Estimation: Estimate transaction fees for better cost management.
- Solana Gasless Transactions: Support for gasless transactions on Solana, enabling users to interact without transaction fees.
- Transaction Nonce Management: Handle nonce management to ensure transaction order.
- Transaction Status Monitoring: Track the status of submitted transactions.
- SDK Integration: Easily interact with the relayer through our companion JavaScript/TypeScript SDK.
- Extensible Architecture: Easily add support for new blockchain networks.
- Configurable Network Policies: Define and enforce network-specific policies for transaction processing.
- Metrics and Observability: Monitor application performance using Prometheus and Grafana.
- Docker Support: Deploy the relayer using Docker for both development and production environments.
Supported networks:
- Solana
- EVM (🚧 Partial support)
For details about current development status and upcoming features, check our Project Roadmap.
View the Installation documentation for detailed information. For a quicker introduction, check out the Quickstart guide.
View the Usage documentation for more information.
The repository includes several ready-to-use examples to help you get started with different configurations:
| Example | Description |
|---|---|
| `basic-example` | Simple setup with Redis |
| `basic-example-logging` | Configuration with file-based logging |
| `basic-example-metrics` | Setup with Prometheus and Grafana metrics |
| `vault-secret-signer` | Using HashiCorp Vault for key management |
| `vault-transit-signer` | Using Vault Transit for secure signing |
Each example includes:
- A README with step-by-step instructions
- Docker Compose configuration
- Required configuration files
The OpenZeppelin Relayer is built using Actix-web and provides HTTP endpoints for transaction submission, in-memory repository implementations, and configurable network policies.
The following diagram illustrates the architecture of the relayer service, highlighting key components and their interactions.
```mermaid
%%{init: {
  'theme': 'base',
  'themeVariables': {
    'background': '#ffffff',
    'mainBkg': '#ffffff',
    'primaryBorderColor': '#cccccc'
  }
}}%%
flowchart TB
  subgraph "Clients"
    client[API/SDK]
  end
  subgraph "OpenZeppelin Relayer"
    subgraph "API Layer"
      api[API Routes & Controllers]
      middleware[Middleware]
    end
    subgraph "Domain Layer"
      domain[Domain Logic]
      relayer[Relayer Services]
      policies[Policy Enforcement]
    end
    subgraph "Infrastructure"
      repositories[Repositories]
      jobs[Job Queue System]
      signer[Signer Services]
      provider[Network Providers]
    end
    subgraph "Services Layer"
      transaction[Transaction Services]
      vault[Vault Services]
      webhook[Webhook Notifications]
      monitoring[Monitoring & Metrics]
    end
    subgraph "Configuration"
      config_files[Config Files]
      env_vars[Environment Variables]
    end
  end
  subgraph "External Systems"
    blockchain[Blockchain Networks]
    redis[Redis]
    vault_ext[HashiCorp Vault]
    metrics[Prometheus/Grafana]
    notification[Notification Services]
  end

  %% Client connections
  client -- "HTTP Requests" --> api

  %% API Layer connections
  api -- "Processes requests" --> middleware
  middleware -- "Validates & routes" --> domain

  %% Domain Layer connections
  domain -- "Uses" --> relayer
  domain -- "Enforces" --> policies
  relayer -- "Processes" --> transaction

  %% Services Layer connections
  transaction -- "Signs with" --> signer
  transaction -- "Connects via" --> provider
  transaction -- "Queues jobs" --> jobs
  webhook -- "Notifies" --> notification
  monitoring -- "Collects" --> metrics
  signer -- "May use" --> vault

  %% Infrastructure connections
  repositories -- "Stores data" --> redis
  jobs -- "Processes async" --> redis
  vault -- "Secrets management" --> vault_ext
  provider -- "Interacts with" --> blockchain

  %% Configuration connections
  config_files -- "Configures" --> domain
  env_vars -- "Configures" --> domain

  %% Styling
  classDef apiClass fill:#f9f,stroke:#333,stroke-width:2px
  classDef domainClass fill:#bbf,stroke:#333,stroke-width:2px
  classDef infraClass fill:#bfb,stroke:#333,stroke-width:2px
  classDef serviceClass fill:#fbf,stroke:#333,stroke-width:2px
  classDef configClass fill:#fbb,stroke:#333,stroke-width:2px
  classDef externalClass fill:#ddd,stroke:#333,stroke-width:1px

  class api,middleware apiClass
  class domain,relayer,policies domainClass
  class repositories,jobs,signer,provider infraClass
  class transaction,vault,webhook,monitoring serviceClass
  class config_files,env_vars configClass
  class blockchain,redis,vault_ext,metrics,notification externalClass
```
The project follows a standard Rust project layout:
```text
openzeppelin-relayer/
├── src/
│   ├── api/           # Route and controllers logic
│   ├── bootstrap/     # Service initialization logic
│   ├── config/        # Configuration logic
│   ├── constants/     # Constant values used in the system
│   ├── domain/        # Domain logic
│   ├── jobs/          # Asynchronous processing logic (queueing)
│   ├── logging/       # Log file rotation logic
│   ├── metrics/       # Metrics logic
│   ├── models/        # Data structures and types
│   ├── repositories/  # Configuration storage
│   ├── services/      # Services logic
│   └── utils/         # Helper functions
│
├── config/    # Configuration files
├── tests/     # Integration tests
├── docs/      # Documentation
├── scripts/   # Utility scripts
├── examples/  # Configuration examples
├── helpers/   # Rust helper scripts
└── ... other root files (Cargo.toml, README.md, etc.)
```
You will need the following installed:
- Docker
- Rust
- Redis
- Sodium
To get started, clone the repository:

```shell
git clone https://github.com/openzeppelin/openzeppelin-relayer
cd openzeppelin-relayer
```
Run the following commands to install pre-commit hooks:

- Install pre-commit hooks:

  ```shell
  pip install pre-commit
  pre-commit install --install-hooks -t commit-msg -t pre-commit -t pre-push
  ```

  ⚠️ If you encounter issues with pip, consider using pipx for a global installation.

- Install the toolchain:

  ```shell
  rustup component add rustfmt
  ```
- Install a stable libsodium version from here.
- Follow the steps in the libsodium installation guide to install it.
To run tests, use the following commands:

```shell
cargo test
cargo test properties
cargo test integration
```
Create a `config/config.json` file. You can use `config/config.example.json` as a starting point:

```shell
cp config/config.example.json config/config.json
```
Refer to the Configuration References section for a complete list of configuration options.
Create a `.env` file with values appropriate for your environment, using `.env.example` as a starting point:

```shell
cp .env.example .env
```
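As a reference, the environment entries used throughout this guide include the following (a sketch based on the steps below; all values are placeholders you will fill in as you go):

```shell
# .env — placeholder values; see .env.example for the full list
KEYSTORE_PASSPHRASE=YourSecurePassword123!  # password used when creating the keystore
WEBHOOK_SIGNING_KEY=<uuid>                  # used to sign webhook notification payloads
API_KEY=<uuid>                              # bearer token for API requests
METRICS_ENABLED=false                       # set to true to start metrics services
```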
To create a new signer keystore, use the provided key generation tool:

```shell
cargo run --example create_key -- \
  --password DEFINE_YOUR_PASSWORD \
  --output-dir config/keys \
  --filename local-signer.json
```
Then update the `KEYSTORE_PASSPHRASE` field in your `.env` file with the password you used in the key creation example.
The tool supports the following options:

- `--password`: Required. Must contain at least:
  - 12 characters
  - One uppercase letter
  - One lowercase letter
  - One number
  - One special character
- `--output-dir`: Directory for the keystore file (created if it does not exist)
- `--filename`: Optional. Uses a timestamp-based name if not provided
- `--force`: Optional. Allows overwriting existing files
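The password rules above can be pre-checked with a small shell snippet before running the tool (a sketch; `PASSWORD` is a placeholder to replace with your own candidate):

```shell
# Pre-check a candidate keystore password against the documented rules.
# PASSWORD is a placeholder — substitute your own candidate.
PASSWORD='YourSecurePassword123!'
ok=true
[ "${#PASSWORD}" -ge 12 ]                        || ok=false  # at least 12 characters
printf '%s' "$PASSWORD" | grep -q '[A-Z]'        || ok=false  # one uppercase letter
printf '%s' "$PASSWORD" | grep -q '[a-z]'        || ok=false  # one lowercase letter
printf '%s' "$PASSWORD" | grep -q '[0-9]'        || ok=false  # one number
printf '%s' "$PASSWORD" | grep -q '[^A-Za-z0-9]' || ok=false  # one special character
echo "password ok: $ok"
```

Note that `create_key` performs its own validation; this check only mirrors the documented rules for convenience.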
Example with all options:

```shell
cargo run --example create_key -- \
  --password "YourSecurePassword123!" \
  --output-dir config/keys \
  --filename local-signer.json \
  --force
```
The `config/config.json` file is partially pre-configured. You need to specify the webhook URL that will receive updates from the relayer service.

For simplicity, visit Webhook.site, copy your unique URL, and then update the `notifications[0].url` field in `config/config.json` with this value.
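After the update, the relevant entry in `config/config.json` should look roughly like this (a sketch: only the `url` field is shown, and any sibling fields present in the example config are elided):

```json
{
  "notifications": [
    {
      "url": "https://webhook.site/<your-unique-id>"
    }
  ]
}
```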
To sign webhook notification payloads, populate the `WEBHOOK_SIGNING_KEY` entry in the `.env` file.
For development purposes, you can generate the signing key using:

```shell
cargo run --example generate_uuid
```
Note: Alternatively, you can use any online UUID generator.
Copy the generated UUID and update the `WEBHOOK_SIGNING_KEY` entry in the `.env` file.
Generate an API key signing key for development purposes using:

```shell
cargo run --example generate_uuid
# or run this command to generate a UUID
# uuidgen
```
Note: Alternatively, you can use any online UUID generator.
Copy the generated UUID and update the `API_KEY` entry in the `.env` file.
Run the Redis container:

```shell
docker run --name openzeppelin-redis \
  -p 6379:6379 \
  -d redis:latest
```
Install dependencies:

```shell
cargo build
```
Run the relayer:

```shell
cargo run
```
The service is available at `http://localhost:8080/api/v1`:

```shell
curl -X GET http://localhost:8080/api/v1/relayers \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
If you use `docker-compose` rather than `docker compose`, please read the Compose V1 vs Compose V2 section.
Depending on your `.env` file, docker compose may or may not start the metrics server (within the relayer app container), Prometheus, and Grafana.

Note: If you want to start the metrics server, Prometheus, and Grafana, make sure to set `METRICS_ENABLED=true` in your `.env` file.
To start the services using the make target, run:

```shell
cargo make docker-compose-up
```

Note: By default, the docker compose command uses `Dockerfile.development` to build the image. If you want to use `Dockerfile.production`, set `DOCKERFILE=Dockerfile.production` before running `cargo make docker-compose-up`.
The make target starts the services with docker compose, applying the metrics profile based on your `.env` file. For the metrics server, make sure `METRICS_ENABLED=true` is set in your `.env` file. If you want to start the services directly with docker compose, use the following commands:

```shell
# without metrics profile (METRICS_ENABLED=false by default)
# starts only the relayer app container and the redis container
docker compose up -d

# or with metrics profile (METRICS_ENABLED=true in .env file)
# docker compose --profile metrics up -d
```
Make sure the containers are running without any restarts or issues:

```shell
docker ps -a
```
To stop the services, run the following command:

```shell
cargo make docker-compose-down

# or, using docker compose without the make target:
# without metrics profile
# docker compose down
# or with metrics profile
# docker compose --profile metrics down
```
To check the logs of the services/containers, run the following command:

```shell
docker compose logs -f
```
- If you use the `docker-compose` command, it uses Compose V1 by default, which is deprecated. We recommend using the `docker compose` command.
- You can read more about the differences between Compose V1 and Compose V2 here.
- You can also check out the issue here.
- Prerequisites:
  - You need the `antora` site generator and the `mermaid` extension to generate the documentation.
  - You can install these dependencies directly by running `cd docs && npm i --include dev`. If you want to install them manually, follow the steps below.
  - Install `antora` locally by following the steps mentioned here; if you already have it, you can skip this step. Note: If you want to install globally, you can run:

    ```shell
    npm install -g @antora/[email protected] @antora/[email protected] @sntke/antora-mermaid-extension
    ```

  - Verify the installation by running `antora --version` (or `npx antora --version` if you installed it locally).
- To generate documentation locally, run the following command:

  ```shell
  cargo make rust-antora
  ```

- The site will be generated in the `docs/build/site/openzeppelin-relayer/<version>/` directory.
- To view the documentation, open `docs/build/site/openzeppelin-relayer/<version>/index.html` in your browser.
- We currently support logs and metrics (using Prometheus and Grafana) for the relayer server.
- For logs, the app defaults to writing to stdout/console. You can also configure it to write logs to a file path by setting `LOG_MODE` to `file`. See the docker compose file for more details.
- The metrics server starts on port `8081` by default and collects metrics from the relayer server.
  - It exposes the list of metrics on the `/metrics` endpoint.
    Note: By default, we don't map this port to the host machine. If you want to access the metrics server from the host machine, update the `docker-compose.yaml` file.
  - It exposes a `/debug/metrics/scrape` endpoint for Prometheus to scrape metrics.
- To view Prometheus metrics in a UI, open `http://localhost:9090` in your browser.
- To view the Grafana dashboard, open `http://localhost:3000` in your browser.
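If you do want to reach the metrics server from the host, the port mapping in `docker-compose.yaml` can be extended along these lines (a sketch; the `relayer` service name and the existing `8080` mapping are assumptions about your compose file):

```yaml
services:
  relayer:
    ports:
      - "8080:8080"  # API (assumed already mapped)
      - "8081:8081"  # metrics server (add this mapping)
```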
We welcome contributions from the community! Here's how you can get involved:
- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
If you are looking for a good place to start, find a good first issue here.
You can open an issue for a bug report, feature request, or documentation request.
You can find more details in our Contributing guide.
Please read our Code of Conduct and check the Security Policy for reporting vulnerabilities.
This project is licensed under the GNU Affero General Public License v3.0 - see the LICENSE file for details.
For security concerns, please refer to our Security Policy.
If you have any questions, first see if the answer can be found in the User Documentation. If the answer is not there, we encourage you to reach out with any questions or feedback.
See CODEOWNERS file for the list of project maintainers.