There are three ways to recover from disasters:
- In simple cases where only a single node is affected but the overall network is still healthy, a restore from backup is usually sufficient.
- If a full backup is unavailable but an identities backup has been created, the balance of the validator can be recovered on a new validator.
- If the global synchronizer breaks, the super validators will initiate a roll-forward Logical Synchronizer Upgrade (LSU) to a new physical synchronizer. Validators will need to initiate the procedure on their node based on the information communicated by the SVs.
Recovery is possible as long as at least one of the following holds:
- A recent database backup is available, or:
- An up-to-date identities backup is available, or:
- The validator participant was using an external KMS to manage its keys and the KMS still retains those keys. (Note that recovering the validator from only KMS keys, i.e., without an identities backup or database backup, is an involved process that is not explicitly documented here.)
If none of the above holds, it is not possible to recover the relevant participant secret keys to prove asset ownership.
Restoring a validator from backups
The entire node can be restored from backups as long as all of the following hold:
- A database backup is available.
- The database backup is less than 30 days old. Due to sequencer pruning, a participant that is more than 30 days behind will be unable to catch up on the synchronizer to become fully operational again.
- If the backup was taken before the synchronizer underwent a logical synchronizer upgrade, then restoring the node from the backup will only be possible if synchronizer nodes on the old physical synchronizer are still available. If this is true, you must restore the node on the old physical synchronizer first so it can catch up and become fully operational on the new physical synchronizer.
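As a quick sanity check on the 30-day condition, you can compare a backup file's modification time against the limit. This is a sketch only: it assumes GNU date and that the file's mtime reflects when the backup was taken; the path shown is hypothetical.

```shell
#!/usr/bin/env bash
# Sketch: check whether a backup file is recent enough (assumes GNU date and
# that the file's mtime reflects when the backup was taken).
backup_file="/path/to/backup.dump"   # hypothetical path: point this at your backup
age_days=$(( ( $(date +%s) - $(date -r "$backup_file" +%s) ) / 86400 ))
if [ "$age_days" -ge 30 ]; then
  echo "backup is ${age_days} days old: too old to catch up past sequencer pruning"
else
  echo "backup is ${age_days} days old: within the 30-day window"
fi
```

This only checks file age; if your backup tooling records the actual backup timestamp elsewhere, prefer that.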
- Scale down all components in the validator node to 0 replicas.
- Restore the storage and DBs of all components from the backups. The exact process for this depends on the storage and DBs used by the components, and is not documented here.
- Once all storage has been restored, scale up all components in the validator node back to 1 replica.
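For Kubernetes deployments, the scale-down and scale-up steps might look like the following. The namespace and the assumption that components run as statefulsets/deployments are illustrative; adjust to the resource names your charts actually create.

```shell
# Hypothetical namespace "validator"; adjust resource types and names
# to match your actual deployment.
kubectl -n validator scale statefulset,deployment --all --replicas=0
# ... restore the storage and databases of all components from backups here ...
kubectl -n validator scale statefulset,deployment --all --replicas=1
```
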
- Stop the validator and participant using ./stop.sh.
- Wipe out the existing database volume: docker volume rm compose_postgres-splice
- Start only the postgres container: docker compose up -d postgres-splice
- Check whether postgres is ready with docker exec splice-validator-postgres-splice-1 pg_isready (rerun this command until it succeeds).
- Restore the validator database (assuming validator_dump_file contains the filename of the dump from which you wish to restore): docker exec -i splice-validator-postgres-splice-1 psql -U cnadmin validator < $validator_dump_file
- Restore the participant database (assuming participant_dump_file contains the filename of the dump from which you wish to restore, and migration_id contains the latest migration ID): docker exec -i splice-validator-postgres-splice-1 psql -U cnadmin participant-$migration_id < $participant_dump_file
- Stop the postgres instance: docker compose down
- Start your validator as usual.
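Taken together, the docker-compose restore steps above could be scripted roughly as follows. This is a sketch: the readiness wait loop and the variable guards are additions, and the container and volume names must match your compose setup.

```shell
#!/usr/bin/env bash
set -euo pipefail
# Assumed to be set by you: paths to the dumps and the latest migration ID.
validator_dump_file=${validator_dump_file:?set to the validator dump path}
participant_dump_file=${participant_dump_file:?set to the participant dump path}
migration_id=${migration_id:?set to the latest migration ID}

./stop.sh                                 # stop validator and participant
docker volume rm compose_postgres-splice  # wipe the existing database volume
docker compose up -d postgres-splice      # start only the postgres container

# Wait until postgres accepts connections (convenience loop, not in the docs).
until docker exec splice-validator-postgres-splice-1 pg_isready; do sleep 2; done

docker exec -i splice-validator-postgres-splice-1 \
  psql -U cnadmin validator < "$validator_dump_file"
docker exec -i splice-validator-postgres-splice-1 \
  psql -U cnadmin "participant-$migration_id" < "$participant_dump_file"

docker compose down                       # stop the postgres instance
# Finally, start your validator as usual.
```
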
Recovery from an identities backup: Re-onboard a validator and recover balances of all users it hosts
In the case of a catastrophic failure of the validator node, some data owned by the validator and the users it hosts can be recovered from the SVs. This data includes Canton Coin balances and CNS entries. This is achieved by deploying a new validator node with control over the original validator’s namespace key. The namespace key must be provided via an identities backup file. It is used by the new validator to migrate the parties hosted on the original validator to the new validator. SVs assist this process by providing information about all contracts known to them that the migrated parties are stakeholders of.

The following steps assume that you have a backup of the identities of the validator, as created in the Backup of Node Identities section. In case you do not have such a backup but instead have a backup of the validator participant’s database, you can assemble an identities backup manually.

To recover from the identities backup, we deploy a new validator with some special configuration described below. Refer to either the docker-compose deployment instructions or the kubernetes instructions, depending on which setup you chose. Once the new validator is up and running, you should be able to log in as the administrator and see its balance. Other users hosted on the validator will need to re-onboard, but their coin balances and CNS entries should be recovered and will be accessible to users that have re-onboarded. In case of issues, please consult the troubleshooting section below.

Kubernetes Deployment
To re-onboard a validator in a Kubernetes deployment and recover the balances of all users it hosts, repeat the steps described in helm-validator-install for installing the validator app and participant. While doing so, please note the following:
- Create a Kubernetes secret with the content of the identities backup file. Assuming you set the environment variable PARTICIPANT_BOOTSTRAP_DUMP_FILE to a backup file path, you can create the secret with the following command:
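The secret-creation command itself is not reproduced in this copy of the docs. As an illustrative sketch, it could look like the following; the secret name, the key name (content), and the namespace are assumptions that must match what your validator helm values expect.

```shell
# Sketch only: the secret name, key, and namespace below are assumptions;
# use the names your helm chart values reference.
kubectl create secret generic participant-bootstrap-dump \
  --from-file=content="${PARTICIPANT_BOOTSTRAP_DUMP_FILE}" \
  -n validator
```
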
- Uncomment the following lines in the standalone-validator-values.yaml file. This will specify a new participant ID for the validator. Replace put-some-new-string-never-used-before with a string that was never used before. Make sure to also adjust nodeIdentifier to match the same value.
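As an illustration, the uncommented lines might look roughly like this; the exact key names depend on your chart version (newParticipantIdentifier is the key referred to in the troubleshooting section, and nodeIdentifier must carry the same value).

```yaml
# Sketch of standalone-validator-values.yaml after uncommenting; key names
# may differ in your chart version.
newParticipantIdentifier: put-some-new-string-never-used-before
nodeIdentifier: put-some-new-string-never-used-before
```
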
Docker-Compose Deployment
To re-onboard a validator in a Docker-compose deployment and recover the balances of all users it hosts, type:

Here, <node_identities_dump_file> is the path to the file containing the node identities backup, and <new_participant_id> is a new identifier to be used for the new participant. It must be one never used before. Note that in subsequent restarts of the validator, you should keep providing -P with the same <new_participant_id>.
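The exact command block is not reproduced in this copy of the docs. An illustrative invocation could look like the following; only the -P flag is confirmed by the text above, and the flag used to pass the identities file (-i here) is an assumption to be checked against your start.sh usage output.

```shell
# Illustrative only; confirm flag names with your start.sh help text or the
# deployment docs. Only -P <new_participant_id> is confirmed by this page.
./start.sh -i <node_identities_dump_file> -P <new_participant_id>
```
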
Obtaining an Identities Backup from a Participant Database Backup
In case you do not have a usable identities backup but instead have a backup of the validator participant’s database, you can assemble an identities backup manually. Here is one possible way to do so:
- Restore the database backup into a temporary postgres instance and deploy a temporary participant against that instance.
- See the section on restoring a validator from backups for pointers that match your deployment model.
- You only need to restore and scale up the participant, i.e., you can ignore the validator app and its database.
- In case the restored participant shuts down immediately due to failures, add the following additional configuration:
- Open a Canton console to the temporary participant.
- Run the commands below in the opened console. This will store the backup into a local file (relative to the local directory from which you opened the console) called identities-dump.json.
Note that the above commands need to be adapted if your participant is configured to store keys in an external KMS.
Limitations and Troubleshooting
In some non-standard cases, the automated re-onboarding from key backup might not succeed in migrating (i.e., recovering) a party. Please check the logs of the validator for warnings or error entries that may give clues.

Parties not migrated automatically
The following types of parties will not be migrated by default:
- Parties that are hosted on multiple participants. These may get unhosted from the original (failed) participant, but will remain hosted on any other participants.
- External parties that are hosted on the validator. These may get unhosted from the original (failed) participant. Please refer to validator_recover_external_party for instructions on how to recover external parties.
If needed, you can explicitly specify the parties to migrate via the parties-to-migrate configuration option on your validator app. A migration will be attempted for each party that you pass to this option. The initialization of the validator app will be interrupted on the first failed migration attempt.
Troubleshooting failed ACS imports
If you still observe issues, in particular ACS_COMMITMENT_MISMATCH warnings in your participant logs, something has likely gone wrong while importing the active contracts of at least one of the parties hosted on your node. Another common symptom (in case the validator party is affected) is that your validator initialization fails with an Unknown secret error and your validator logs contain a ValidatorLicense not found message. To address a failed ACS import, you can usually:
- First make sure all parties are hosted on the same node. The most common case is that either the parties are still on the old node with the old participant ID, or they have been migrated to the new node. You can check by opening a Canton console to any participant on the network (i.e., you can also ask another validator or SV operator for this information) and running the following query, where <namespace> is the part after the :: in, for example, your validator party ID.
If all parties are on the same node, proceed to the next step. If some are on the old node and some are on the new node, migrate the ones on the old node to the new node by opening a console to the new node and running the following command (adjust the parameters as required for your parties):
- If all parties are on the new node already, you can attempt to (re-)import the ACS for those parties manually. The following steps concern your new validator node:
- Stop your validator app.
- Open a participant console to that new validator and keep it open for the next steps.
- From the Canton console, run:
- For each PARTY_ID you want to migrate / re-import the ACS for:
Run from a regular shell (same working directory as the one you started your Canton console from):
Then, from the Canton console:
- From the Canton console, run participant.synchronizers.reconnect_all().
- Start your validator app again.
- If the previous step failed or you chose not to attempt it, you can retry the migration procedure with a fresh participant. If your parties are still on the original node that you took the identities backup from, you can use your existing backup. If your parties have been migrated to the new node already, take a new identities dump from the new node. If the new node is in a state where you cannot take a fresh dump, use the old dump but edit the id field to the participant ID of the new node. You can obtain the id in the correct format by, for example, running participant.id.toProtoPrimitive in a Canton console to the participant. You can now take down the node to which you originally tried to restore, and try the restore procedure again with your adjusted dump on a fresh node with a different participant ID prefix (i.e., a different newParticipantIdentifier / <new_participant_id>, depending on your deployment model).
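The manual ACS re-import described above typically follows a disconnect / import / reconnect pattern in the Canton console. The sketch below should be treated as pseudocode: command names and signatures vary across Canton versions, and only participant.synchronizers.reconnect_all() is taken from the text above, so check your console's built-in help before running anything.

```scala
// Pseudocode-level sketch of a manual ACS re-import in a Canton console.
// Command names and signatures vary across Canton versions; verify against
// your console's help output.
participant.synchronizers.disconnect_all()   // repair commands need the node disconnected
// For each PARTY_ID: import the ACS snapshot file fetched for that party
// (file name here is a hypothetical placeholder).
participant.repair.import_acs("acs_snapshot_PARTY_ID")
participant.synchronizers.reconnect_all()    // reconnect, then restart the validator app
```
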
Troubleshooting rejected topology snapshots
In rare cases, the re-onboarding process may fail at the ImportTopologySnapshot step because an OwnerToKeyMapping for the old participant ID has an insufficient number of signatures in the topology snapshot. This only affects validators that were originally onboarded on Splice 0.4.1 or earlier, which used a Canton version that did not require the mapped keys to co-sign OwnerToKeyMapping transactions. You can identify this issue by looking for the following messages in your participant logs:
- Start only the new participant (without the validator app). Do not wipe its state from the previous (failed) re-onboarding attempt.
- Open a Canton console to the new participant and run the following commands to propose the corrected OwnerToKeyMapping. Replace the key ID prefixes with those from the rejected OwnerToKeyMapping in your participant logs, and replace the old participant ID with your actual old participant ID:
- Start the validator app using your original identities dump configuration.
Recover the Coin balance of an external party
For a party relying on external signing, a similar procedure can be used to recover its coin balance in case the validator originally hosting it becomes unusable for whatever reason. First, set up a new validator following the standard validator deployment docs. Next, connect a Canton console to that new validator. We now need to sign and submit the topology transaction to host the external party on the new node and import the ACS for that party. To do so, first generate the topology transaction. Note that the instructions here assume that the party is only hosted on a single participant node. If you want to host it on multiple nodes, you will need to adjust this. The example uses a validFrom time of 2025-05-14T10:19:33.534074Z; adjust it as needed.
We can now query CC Scan to get the active contract set (ACS) for a party and write it to the file acs_snapshot:
Alternatively, you can use the endpoints /v0/admin/external-party/setup-proposal, /v0/admin/external-party/setup-proposal/prepare-accept and /v0/admin/external-party/setup-proposal/submit-accept. For details, refer to the docs for the validator external signing API.
Roll Forward Logical Synchronizer Upgrade
In case the SVs communicate that they are recovering from a loss of the physical synchronizer, they will communicate the newPhysicalSynchronizerId and the sequencerSuccessors.
Validators then need to:
- Wait for their node to finish catching up to the latest transaction on the existing synchronizer. A good indicator for that is that you no longer see any new logs containing Processing event at in your participant INFO logs.
- Initiate the roll-forward LSU through a Canton console: