This page covers the configuration options available to validator and Super Validator (SV) operators on Canton Network: Splice app configuration, Canton participant settings, database setup, authentication, traffic management, pruning, and observability. For application developer configuration (Canton + DPM), see the AppDev Configuration Reference.
Documentation Index
Fetch the complete documentation index at: https://docs.canton.network/llms.txt
Use this file to discover all available pages before exploring further.
Configuration format
Helm chart configuration
When deploying with Helm, pass ADDITIONAL_CONFIG values through the additionalEnvVars field in your Helm values file. See the standalone-validator-values.yaml section below for the full set of Helm values.
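As a sketch, an ADDITIONAL_CONFIG entry passed through additionalEnvVars might look like the following. The variable suffix and the config path inside the value are placeholders, not taken from this page:

```yaml
additionalEnvVars:
  # Environment variables whose names start with ADDITIONAL_CONFIG carry
  # extra HOCON snippets appended to the app's configuration.
  # The suffix and the setting below are illustrative placeholders.
  - name: ADDITIONAL_CONFIG_EXAMPLE_OVERRIDE
    value: |
      canton.parameters.example-setting = true
```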
Custom bootstrap scripts
Custom bootstrap scripts run Canton Console commands at node startup. This section will be expanded in a future update. See the Canton Console Scripting page for script syntax and examples.
Validator node configuration
Required network parameters
Your validator needs these values to connect to a network (DevNet, TestNet, or MainNet):
- MIGRATION_ID — The current migration ID of the target network. Find it at sync.global/sv-network.
- SPONSOR_SV_URL — The URL of your sponsoring SV’s app (e.g., the GSF SV URL).
- ONBOARDING_SECRET — A one-time secret from your sponsor SV. Secrets expire after 48 hours.
- TRUSTED_SCAN_URL — The Scan URL of a trusted SV, used to obtain additional Scan URLs for BFT reads.
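Gathered together, the four parameters might be exported for a deployment script as follows. All values are placeholders; obtain real ones from your sponsor SV and sync.global/sv-network:

```shell
# All values below are illustrative placeholders, not real network parameters.
export MIGRATION_ID=2
export SPONSOR_SV_URL="https://sv.sponsor.example.com"
export ONBOARDING_SECRET="one-time-secret-from-sponsor"   # expires after 48 hours
export TRUSTED_SCAN_URL="https://scan.trusted-sv.example.com"
```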
Helm values: standalone-validator-values.yaml
This file defines your validator’s identity and network binding.
Helm values: validator-values.yaml
This file covers the validator app’s behavior, authentication, and wallet settings.
Helm values: participant-values.yaml
This file configures the Canton participant that underlies the validator. On Kubernetes versions earlier than 1.24, set enableHealthProbes to false to disable the gRPC liveness and readiness probes.
Synchronizer connection options
By default, the validator discovers multiple Scan instances and sequencer connections for BFT reads. This is the recommended production configuration, as it distributes trust across multiple SVs.
BFT via Scan proxy (recommended)
The default behavior uses the TRUSTED_SCAN_URL to discover additional Scan instances and sequencer endpoints. No extra configuration is needed beyond the required network parameters above. The validator reads from multiple Scans and connects to multiple sequencers automatically.
Single trusted Scan
The validator can instead be configured to connect to only one trusted Scan, accepting the single-point-of-failure trade-off.
Single trusted sequencer
Similarly, the validator can be configured to route through a single sequencer instead of the set discovered from Scan.
Database configuration
PostgreSQL setup (Helm)
The Helm-deployed PostgreSQL instance and all Splice apps share a password stored in a Kubernetes secret. All apps use the cnadmin database user.
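A minimal Kubernetes Secret for this shared password could be sketched as follows. The secret name and key are assumptions, not taken from this page; check your chart's values for the names it actually expects:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secrets        # name assumed for illustration
type: Opaque
stringData:
  postgresPassword: change-me   # shared by PostgreSQL and all Splice apps
```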
PostgreSQL configuration (standalone Canton)
For a standalone Canton participant connected to PostgreSQL, configure the storage section of the participant’s configuration file.
PostgreSQL SSL
Enable SSL on the database connection with these PGSimpleDataSource properties:
- ssl = true — verify the SSL certificate and hostname
- sslmode = "verify-ca" — check the certificate chain against the root certificate
- sslrootcert = "path/to/root.cert" — path to the root CA certificate
For client-certificate authentication, also set sslcert and sslkey to point to the client certificate and key.
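Assembled into Canton’s HOCON storage configuration, these properties might look like the following sketch. The participant name, host, and file paths are placeholders:

```hocon
canton.participants.participant1.storage {
  type = postgres
  config {
    dataSourceClass = "org.postgresql.ds.PGSimpleDataSource"
    properties = {
      serverName = "db.example.com"     # placeholder host
      portNumber = "5432"
      databaseName = "participant"
      user = "cnadmin"
      password = "change-me"
      # SSL properties from the list above
      ssl = true
      sslmode = "verify-ca"
      sslrootcert = "path/to/root.cert"
    }
  }
}
```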
Authentication
Validator components authenticate to each other and to external users through JWT tokens issued by an OpenID Connect (OIDC) provider. Full setup instructions, including the required configuration secrets, are on the OIDC Providers page. For testing, authentication can be disabled by setting disableAuth: true in both validator-values.yaml and participant-values.yaml.
TLS configuration
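As a sketch, a minimal server-side TLS block for the Ledger API in Canton’s HOCON configuration might look like this. The participant name, port, and file paths are placeholders:

```hocon
canton.participants.participant1.ledger-api {
  address = "0.0.0.0"
  port = 5001                               # placeholder port
  tls {
    cert-chain-file = "./tls/ledger-api.crt"   # server certificate chain
    private-key-file = "./tls/ledger-api.pem"  # server private key
    # For mutual authentication, additionally configure a trust
    # collection so client certificates can be validated:
    # trust-collection-file = "./tls/root-ca.crt"
  }
}
```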
Canton APIs (Ledger API and Admin API) support TLS with optional mutual authentication. A minimal server-side configuration supplies a certificate chain and private key for the Ledger API.
Traffic configuration
Your validator automatically purchases traffic (sequencer throughput) using Canton Coin. Configure the top-up behavior in standalone-validator-values.yaml:
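A sketch of the top-up stanza, using the enabled and targetThroughput fields this page refers to; the stanza key, the minTopupInterval field, and all values are assumptions for illustration:

```yaml
topup:
  enabled: true              # set to false to disable automatic top-ups
  targetThroughput: 20000    # desired throughput, bytes per second (placeholder)
  minTopupInterval: 1m       # field name assumed; minimum time between top-ups
```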
Set enabled: false or targetThroughput: 0 to disable automatic top-ups. Current traffic parameters (base rate limits, extra traffic price, minimum top-up amount) are recorded on the AmuletRules contract and can be queried from any Scan instance.
For Docker Compose deployments, set the equivalent values as environment variables instead.
Participant pruning
By default, participants preserve full transaction history. Enabling pruning removes history older than the retention window, keeping only the active contract set. Pruning does not affect Splice app data (wallet history, for example, is never pruned). Add this to validator-values.yaml:
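As a sketch, a pruning schedule in validator-values.yaml might look like the following. The key name participantPruningSchedule, the cron format, and all values are assumptions for illustration:

```yaml
participantPruningSchedule:
  cron: "0 0 1 * * ?"   # Quartz-style cron: run daily at 01:00
  maxDuration: 8h       # longest a single pruning run may take
  retention: 30d        # history older than this is pruned
```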
Monitoring and observability
Metrics endpoint
Helm deployments: set metrics.enable: true in your Helm values to create a ServiceMonitor custom resource (requires the Prometheus Operator). Alternatively, add Prometheus scrape annotations targeting port 10013.
Docker Compose deployments: metrics are enabled by default at http://validator.localhost/metrics (validator app) and http://participant.localhost/metrics (participant).
Histograms
Metrics are built with OpenTelemetry and exposed as Prometheus native histograms. Enable native-histogram ingestion in Prometheus with --enable-feature=native-histograms. If your Prometheus cannot ingest native histograms, the apps can be configured to fall back to regular histograms.
Topology metrics
The validator app can export synchronizer topology metrics (prefixed splice.synchronizer-topology) by enabling a polling trigger in its configuration.
Health checks
All Splice apps provide /readyz and /livez endpoints on port 5003. In Kubernetes, liveness and readiness probes are preconfigured. You can also check them manually:
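For example, assuming local or port-forwarded access to an app on port 5003 (the host is a placeholder for your deployment):

```shell
# Hypothetical manual checks; adjust the host to your deployment.
curl -s http://localhost:5003/readyz   # succeeds when the app is ready
curl -s http://localhost:5003/livez    # succeeds when the app is live
```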
Grafana dashboards
The release bundle includes Grafana dashboards under the grafana-dashboards folder. These dashboards assume a Kubernetes deployment and use Prometheus native histogram queries.
HTTP proxy configuration
If your environment routes egress through an HTTP forward proxy, set the proxy host and port in your Helm values. Use https.nonProxyHosts to exclude specific addresses from the proxy. Proxy authentication is not currently supported.
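One way to sketch this, assuming the chart forwards JVM options through additionalEnvVars (these are the standard JVM networking system properties; the exact Helm keys your chart expects may differ):

```yaml
additionalEnvVars:
  - name: JAVA_TOOL_OPTIONS
    value: >-
      -Dhttps.proxyHost=proxy.example.com
      -Dhttps.proxyPort=3128
      -Dhttps.nonProxyHosts=localhost|*.svc.cluster.local
```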