Documentation Index
Fetch the complete documentation index at: https://docs.canton.network/llms.txt
Use this file to discover all available pages before exploring further.
Upgrade To a New Release
This section covers the processes to upgrade Canton participant nodes and synchronizers. Upgrading Daml applications is covered elsewhere. As elaborated in the versioning guide, new features, improvements, and fixes are released regularly. To benefit from these changes, the Canton-based system must be upgraded. There are two key aspects that need to be addressed when upgrading a system:
- Upgrading the Canton binary that is used to run a node.
- Upgrading the protocol version that defines how nodes interact with each other.
Upgrade Canton Binary
A Canton node consists of one or more processes, where each process is defined by:
- A Java Virtual Machine application running a versioned JAR of Canton.
- A set of configuration files describing the node that is being run.
- An optional bootstrap script passed via --bootstrap, which runs on startup.
- A database (with a specific schema), holding the data of the node.

Upgrading the binary of a node therefore means that you need to:
- Replace the Canton binary (which contains the Canton JAR).
- Test that the configuration files can still be parsed by the new process.
- Test that the bootstrap script you are using is still working.
- Upgrade the database schema.
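On a Linux host, the binary-replacement step might look like the following sketch. The paths, archive name, and symlink layout are illustrative assumptions, not a prescribed layout:

```shell
# Illustrative layout: versioned unpack directories plus a 'current' symlink.
cd /opt/canton
tar xzf /tmp/canton-open-source-x.y.z.tar.gz      # unpack the new release
ln -sfn canton-open-source-x.y.z current          # switch the symlink atomically
./current/bin/canton --version                    # confirm the new binary runs
```

Keeping the previous versioned directory around makes it easy to switch the symlink back if the upgrade has to be rolled back.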
Preparation
First, please download the new Canton binary that you want to upgrade to and store it on the test system where you plan to test the upgrade process. Then, obtain a recent backup of the database of the node and deploy it to a database server of your convenience, such that you can test the upgrade process without affecting your production system. While we extensively test the upgrade process ourselves, we cannot exclude the eventuality that you are using the system in a non-anticipated way. Testing is cumbersome, but breaking a production system is worse.

If you are upgrading a participant, then we suggest that you also use an in-memory synchronizer which you can tear down after you have tested that the upgrade of the participant is working. You might do that by adding a simple synchronizer definition as a configuration mixin to your participant configuration.

Generally, if you are running a high-availability setup, please take all nodes offline before performing an upgrade. If the upgrade requires a database migration (check the release notes), avoid running older and newer binaries in a replicated setup, as the two binaries might expect a different database layout.

You can upgrade the binaries of a microservice-based synchronizer in any order, as long as you upgrade the binaries of nodes accessing the same database at the same time. For example, you could upgrade the binary of a replicated mediator node on one weekend and an active-active database sequencer on another weekend.

Back Up Your Database
Before you upgrade the database and binary, please ensure that you have backed up your data, such that you can roll back to the previous version in case of an issue. You can back up your data by cloning it. In Postgres, a database can be cloned with a CREATE DATABASE ... TEMPLATE statement, which requires that no clients are connected to the source database.

Test Your Configuration
Test that the configuration still works. The file names storage-for-upgrade-testing.conf and mynode.conf need to be adjusted to match your case.
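A sketch of such a check, assuming the binary is invoked from its installation directory and the two configuration files are the ones named above:

```shell
# Pass each configuration file with -c; --manual-start keeps the node stopped
# because the database still needs to be migrated first.
./bin/canton -c storage-for-upgrade-testing.conf -c mynode.conf --manual-start
```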
If Canton starts and shows the command prompt of the console, then the configuration was parsed successfully.
The command line option --manual-start prevents the node from starting up automatically, as we first need to migrate the database.
Migrating the Database
Canton does not perform a database migration automatically. Migrations need to be forced, for example with the node's db.migrate console command, before starting the node. If you start a node that still requires a database migration, you will observe a Flyway error about pending migrations.

Test Your Upgrade
Once your node is up and running, you can test it by running a ping (for example, using the console's health.ping command). If you are testing the upgrade of your participant node, then you might want to connect to the test synchronizer set up during preparation.

Version Specific Notes
Currently, nothing.

Change the Canton Protocol Version
The Canton protocol is defined by the semantics and the wire format used by the nodes to communicate with each other. In order to process transactions, all nodes must be able to understand and speak the same protocol. Therefore, a new protocol version can be introduced only once all nodes have been upgraded to a binary that can run that version.

Upgrade the Synchronizer to a New Protocol Version
A synchronizer is tied to a protocol version. This protocol version is configured when the synchronizer is initialized and cannot be changed afterward. Therefore, you cannot upgrade the protocol version of a synchronizer. Instead, you deploy a new synchronizer side by side with the old synchronizer process. This applies to all synchronizer services, be it sequencer, mediator, or topology manager.

Please note that currently, the synchronizer id cannot be preserved during upgrades. The new synchronizer must have a different synchronizer id because the participant associates a synchronizer connection with a synchronizer id, and that association must be unique.

Therefore, the protocol upgrade process boils down to:
- Deploy a new synchronizer next to the old synchronizer. Ensure that the new synchronizer is using the desired protocol version. Also make sure to use different databases (or at least different schemas in the same database) for the synchronizer services (mediator, sequencer node, and topology manager), channel names, smart contract addresses, etc. The new synchronizer must be completely separate, but you can reuse your DLT backend as long as you use different sequencer contract addresses or Fabric channels.
- Instruct the participants individually to use the new synchronizer by performing a hard synchronizer connection upgrade.
If you use different schemas in the same database, set a different currentSchema either in the JDBC URL or as a parameter in storage.config.properties.
Hard Synchronizer Connection Upgrade
A hard synchronizer connection upgrade can be performed using the respective migration command. Again, please ensure that you have appropriate backups in place and that you have tested this procedure before applying it to your production system. You will have to enable these commands using a special config switch, as they are repair commands.

Assuming that all participants are connected to the old synchronizer under the alias oldsynchronizer, ensure that there are no pending transactions. You can do that by either controlling your applications, or by setting the resource limits to 0 on all participants:
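A sketch in the Canton console, assuming a participant reference named participant; the ResourceLimits constructor arguments shown here follow recent releases and should be checked against your version's console API:

```scala
// Reject new commands so that pending transactions can drain before migration.
participant.resources.set_resource_limits(
  ResourceLimits(maxDirtyRequests = Some(0), maxRate = Some(0))
)
```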
Once there are no pending transactions, migrate using the repair.migrate_synchronizer command. The command expects two input arguments: the alias of the source synchronizer and a synchronizer connection configuration describing the new synchronizer.
In order to build a synchronizer connection config, we can just write one out in the console. Alternatively, if the participant is already connected to the new synchronizer newsynchronizer (which is what we are doing in this example), you can grab the connection details from the existing connection and then migrate the participant from oldsynchronizer to the new synchronizer.
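Putting the steps together, a sketch in the Canton console. The aliases match the example above, but the accessors for fetching the connection config and disconnecting may differ between releases, so treat the call names here as assumptions to verify against your console API:

```scala
// Grab the connection details of the already-registered new synchronizer.
val config = participant.synchronizers.config("newsynchronizer")
// The participant must be disconnected from both synchronizers first.
participant.synchronizers.disconnect("oldsynchronizer")
participant.synchronizers.disconnect("newsynchronizer")
// Rewrite the participant's stores from oldsynchronizer to the new synchronizer.
participant.repair.migrate_synchronizer("oldsynchronizer", config)
```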
Once all participants have performed the migration, they can reconnect to the new synchronizer.
Note that currently, the hard migration is the only supported way to migrate a production system. This is because unique contract keys are restricted to a single synchronizer.
Expected Performance
Performance-wise, we can note the following: when we migrate contracts, we write directly into the respective event logs. This means that on the source synchronizer, we insert a transfer-out, while on the target synchronizer, we write a transfer-in and the contract. Writing this information is substantially faster than any kind of transaction processing (several thousand migrations per second on a single CPU/16-core test server). However, with very large datasets, the process can still take quite some time. Therefore, we advise you to measure the time the migration takes during the upgrade test to understand the necessary downtime required for the migration. Furthermore, upon reconnecting, the participant needs to recompute the new set of commitments. This can take a while for large numbers of contracts.

Soft Synchronizer Connection Upgrade
The soft synchronizer connection upgrade is currently only supported as an alpha feature.

The hard synchronizer connection upgrade requires coordination among all participants in a network. The soft synchronizer connection upgrade is operationally much simpler, and can be leveraged using multi-synchronizer support (which exists as a pre-alpha feature only for now). By turning off contract key uniqueness, participants can connect to multiple synchronizers and transfer contracts between synchronizers. This allows us to avoid the repair.migrate_synchronizer step.
Assuming the same setup as before, where the participant is connected to the old synchronizer, we can just connect it to the new synchronizer as well. To ensure that new transactions are preferably submitted to the new synchronizer, set a higher priority flag on the new synchronizer connection:
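A sketch of such a connection in the Canton console, assuming the connection config exposes a priority field; the accessor and field names are illustrative and should be checked against your release:

```scala
// Reconnect to the new synchronizer with a higher priority (default is 0),
// so that new commands prefer newsynchronizer over oldsynchronizer.
val config = participant.synchronizers.config("newsynchronizer")
participant.synchronizers.connect(config.copy(priority = 10))
```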
Using the transfer commands, contracts can be moved over to the new synchronizer one by one, such that eventually, all contracts are associated with the new synchronizer, allowing the old synchronizer to be decommissioned and turned off.
The soft upgrade path provides a smooth user experience that does not require a hard migration of the synchronizer connection to be coordinated across all participants. Instead, participants upgrade individually, whenever they are ready, allowing them to reverse the process if needed.