Quick-reference solutions for common Canton Network issues
Each entry provides immediate diagnostic steps and solutions derived from production support cases.
Insufficient Docker memory (OOM kills)
Symptom: Container crashes with memory-related errors or OOM (Out of Memory) kills.
Solution:
Increase Docker memory allocation to 8GB minimum in Docker Desktop settings:
Open Docker Desktop → Settings → Resources
Set Memory to 8GB or higher
Set CPU to 4 cores or higher
Apply & Restart
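To confirm the new allocation took effect, you can ask the Docker daemon for its total memory (reported in bytes; roughly 8.6 billion for 8 GiB):
docker info --format '{{.MemTotal}}'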
For Colima users on Mac:
colima stop
colima start --memory 8 --cpu 4
Docker Compose version mismatch
Symptom: Error message containing 'env_file[1]' expected type 'string', got unconvertible type 'map[string]interface {}'
Cause: Docker Compose version is below 2.26.0.
Solution:
# Check your version
docker compose version

# Upgrade if below 2.26.0
# On Mac with Homebrew:
brew install docker-compose
# Or update Docker Desktop to the latest version
Canton Quickstart requires Docker Compose version 2.26.0 or higher.
Splice container keeps restarting (unhealthy)
Symptom: The splice container appears to start and then fails after about 20 seconds; other containers start fine.
Diagnostic Steps:
Check container logs:
docker logs splice-validator-participant-1
Verify resource allocation (memory/CPU)
Check for configuration errors in .env file
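To watch the crash loop as it happens, you can follow the logs and filter for errors (a generic diagnostic; container name as in step 1):
docker logs -f splice-validator-participant-1 2>&1 | grep -iE 'error|exception|fatal'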
Common Causes:
Insufficient system resources (increase Docker memory to 8GB+)
Network/DNS resolution issues
Configuration errors in environment variables
Solution:
If using Colima, restart with increased resources:
colima stop
colima start --memory 8 --cpu 4
Then restart the quickstart from scratch, following the documentation exactly.
Java version incompatibility
Symptom: Build or startup failures mentioning Java version issues.
Solution:
Canton Network requires Java 17 or 21. Java 22+ is not supported.
# Check Java version
java -version

# Use SDKMAN to manage Java versions
sdk install java 21-tem
sdk use java 21-tem
GENERIC_CONFIG_ERROR: empty auth-services URL
Symptom:
GENERIC_CONFIG_ERROR(8,0): Cannot convert configuration...at 'canton.participants.participant.ledger-api.auth-services.0.url': The value you gave for this configuration setting ('') was the empty string
Cause: LEDGER_API_AUTH_AUDIENCE is not properly configured.
Solution:
Ensure all authentication environment variables are set in your .env file:
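A minimal sketch of what this can look like; LEDGER_API_AUTH_AUDIENCE and WALLET_ADMIN_USER appear elsewhere in this guide, while the other names and values are illustrative placeholders for your OIDC settings:
# Illustrative placeholders; check your quickstart's .env template for exact names
LEDGER_API_AUTH_AUDIENCE=https://your-audience.example.com
AUTH_URL=https://your-tenant.auth0.example.com   # hypothetical variable name
WALLET_ADMIN_USER=auth0|1234567890               # user ID from your OIDC provider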
Do not leave any authentication variable as an empty string. Either set a valid value or remove the variable entirely if not using authentication.
ACCESS_TOKEN_EXPIRED errors
Symptom:
ACCESS_TOKEN_EXPIRED(2,0): Claims were valid until 2025-09-08T08:32:07Z, current time is 2025-09-08T08:32:07.254413051Z
Cause: Token refresh timing is too tight. The default 5-minute token lifetime may be insufficient for newer splice versions.
Solution:
Increase the access token timeout in your OIDC provider (Auth0/Keycloak):
In Auth0: Applications → Your App → Settings → Advanced → Access Token Lifetime
Set to 15 minutes or higher
Restart your validator
For Keycloak:
Realm Settings → Tokens → Access Token Lifespan → Set to 900 (15 minutes)
Cause: JWT token is misconfigured or expired. The first command (listing parties) uses the admin API without auth, but party enablement queries the Ledger API, which requires valid authentication.
Solution:
Verify your JWT token is valid and not expired
Check that the token has the correct scope (daml_ledger_api)
Ensure the token audience matches LEDGER_API_AUTH_AUDIENCE (see the decode sketch below)
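To inspect a token's claims directly, you can decode its payload; a generic JWT decode sketch assuming bash, base64, and jq, with $TOKEN holding the raw JWT:
payload=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
# Pad base64url to a multiple of 4 so base64 can decode it
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
printf '%s' "$payload" | base64 -d | jq '{exp, aud, scope}'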
Wallet shows 'Onboard Yourself' button after upgrade
Symptom: After upgrading, logging into the wallet shows the onboarding screen instead of existing wallet data.
Cause: User-to-party mapping may not have been migrated correctly during authentication setup.
Solution:
Verify the logged-in user ID matches your configured WALLET_ADMIN_USER
If migrating to authenticated setup, follow the documentation precisely:
Stop validator with ./stop.sh
Restart with -a flag: ./start.sh -a
Ensure WALLET_ADMIN_USER matches the user ID in your OIDC provider
Gave up getting 'app version of https://scan.sv-1.global.canton.network...
Cause: Configuration is using the scan address instead of the SV address.
Solution:
Replace the scan URL with the SV URL in your configuration:
❌ Wrong: https://scan.sv-2.global.canton.network.digitalasset.com/api/sv
✅ Correct: https://sv.sv-2.global.canton.network.digitalasset.com
The SV (Super Validator) URL is for onboarding. The scan URL is for viewing network data.
TLS handshake failed
Symptom: Connection errors mentioning TLS, SSL, or certificate issues.
Diagnostics:
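Certificate expiry and an incomplete chain are the usual suspects. A generic way to inspect the handshake and certificate (hostname is a placeholder):
openssl s_client -connect your.synchronizer.example.com:443 \
  -servername your.synchronizer.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -dates -subject -issuer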
NOT_FOUND: No package vetting state found for domain Some(global-domain::1220b1431ef2...)
Cause: Validator startup failed because package vetting didn’t complete successfully.
Solution:
Ensure you’re using the correct SV sponsor URL (not the scan URL)
Check network connectivity to the synchronizer
Restart the validator to retry package vetting
If the issue persists, check validator logs for the specific vetting failure.
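With the quickstart scripts referenced elsewhere in this guide, a full restart looks like:
./stop.sh
./start.sh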
Authorization failed
Symptom: Commands fail with authorization-related errors.
Checklist:
Party has signatory rights on the contract
Observer list includes the requesting party
Choice controller is correctly specified
Delegation pattern is properly configured
Example Fix:
Ensure the party is properly authorized:
// Check party authorization
participant.parties.list().filter(_.party.contains("yourParty"))

// Verify signatory status on contract
ledger.contracts.filter(c => c.signatories.contains(yourParty))
Rejected transaction as the mediator did not receive sufficient confirmations within the expected timeframe. context = 'unresponsiveParties=>...'
Cause: One or more parties in the transaction didn’t respond in time. Common reasons:
Insufficient Canton Coin balance for traffic top-ups
Network connectivity issues
Validator unhealthy or offline
Solution:
Check Canton Coin balance for all involved parties
Top up if necessary:
curl -X POST "http://localhost/api/validator/v0/admin/traffic/purchase" \
  -H "Authorization: Bearer $TOKEN"
Verify all validators involved are healthy
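For a local Docker Compose deployment, a quick health overview of all containers:
docker ps --format 'table {{.Names}}\t{{.Status}}'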
Synchronizer outbox flusher failed
Symptom:
Copy
synchronizer outbox flusher failed
Did not observe transactions in target synchronizer store
Cause: Topology transactions may not be getting submitted to the synchronizer properly. This can indicate network latency or participant synchronization issues.
Solution:
Check network connectivity to the synchronizer
If the issue persists, consider enabling topology transaction batching
Consider using PQS (Participant Query Store) for read-heavy workloads
Implement retry logic with exponential backoff in your application
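A minimal shell sketch of that retry pattern (your_command is a placeholder for the failing call):
attempt=0
until your_command; do                     # placeholder for the call to retry
  attempt=$((attempt + 1))
  [ "$attempt" -ge 5 ] && { echo "giving up after $attempt attempts"; exit 1; }
  sleep $((2 ** attempt))                  # back off: 2s, 4s, 8s, 16s
done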
Pruning not running on MainNet
Symptom: Pruning schedule is configured but /v2/state/latest-pruned-offsets shows no pruning activity.
Cause: First-time pruning on MainNet with a large history may exceed maxDuration.
Solution:
Increase maxDuration significantly for the initial pruning run; once the backlog has been cleared, it can be lowered again.
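While the initial prune runs, you can poll the endpoint from the symptom above to watch for progress (host and port are assumptions for a default local JSON Ledger API):
curl -s "http://localhost:7575/v2/state/latest-pruned-offsets" | jq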
Version mismatch after upgrade (still showing old version)
Symptom: Public explorers show the old validator version despite a successful helm upgrade.
Cause: Using the --reuse-values flag with helm upgrade can cause version configuration to be retained from the previous release.
Solution:
Upgrade without the --reuse-values flag:
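A sketch of the corrected upgrade, with release name, chart, and version as placeholders; the values file matches the one used in the migration section below:
helm upgrade validator <validator-chart> -n validator \
  --version <new-version> \
  -f standalone-validator-values.yaml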
Then verify the deployed image:
kubectl -n validator get deploy validator-app \
  -o "jsonpath={.spec.template.spec.containers[0].image}"
Node crashes after upgrade - UnrecoverableError
Symptom:
Fatal error occurred, crashing the node to recover from invalid state: UnrecoverableError(Synchronizer 'global', handler returned error: ApplicationHandlerException
Recovery Steps:
DO NOT repeatedly restart - this may compound the issue
Check if you have a pre-upgrade snapshot/backup
Verify migration configuration in standalone-validator-values.yaml:
migration:
  id: 4  # Should match current network migration
  migrating: true
Ensure participant database name is updated:
persistence:
  databaseName: participant_4  # Matches migration id
If stuck, contact support with full logs
Always take snapshots/backups before upgrading. For Kubernetes: snapshot both Validator and Participant Persistent Volumes.
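As a sketch, a CSI volume snapshot for one of the PVCs might look like this (all names are placeholders; your cluster needs a VolumeSnapshotClass, and you would repeat this for the validator volume):
kubectl -n validator apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: participant-pre-upgrade
spec:
  volumeSnapshotClassName: csi-snapclass        # placeholder class name
  source:
    persistentVolumeClaimName: participant-pvc  # placeholder PVC name
EOF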
'You don't speak 0.5.x' errors
Symptom: Version compatibility errors during upgrade attempts.
Cause: The network has upgraded to a newer version than your node.
Solution:
Upgrade your node to the version currently running on the network before reconnecting.
Node ID dump restore fails with OwnerToKeyMapping error
Symptom: Restoration fails with Expected exactly one OwnerToKeyMapping for old node id... but got List()
Cause: The node ID dump format changed between versions; the old format used different key names.
Solution:
Update the JSON key names in your node ID dump to match the current format.
Old format: