The Canton Console provides direct, interactive access to Canton node internals. It’s a Scala-based REPL that connects to participant, sequencer, and mediator nodes for debugging, diagnostics, and advanced operations that aren’t exposed through the standard APIs.
When You Need the Console
Most day-to-day operations happen through the Ledger API, Admin API, or web UIs. The Canton Console is for situations that require deeper access:
- Debugging stuck transactions or unexpected behavior
- Inspecting topology state (party-to-participant mappings, package vetting)
- Managing parties and packages at a low level
- Disaster recovery or repair operations
- Advanced key management and rotation
Starting the Console
To access the Canton Console, you need:
- Direct network access to the Canton node process (participant, sequencer, or mediator)
- The Canton binary (typically via the Canton Docker image)
Participant Console
- Obtain an authentication token (see the Canton authentication docs for JWT setup)
- Ensure you can access the participant’s ports 5001 (Ledger API) and 5002 (Admin API)
- Create a configuration file
console.conf:
canton {
  remote-participants {
    participant {
      admin-api {
        port = 5002
        address = localhost
      }
      ledger-api {
        port = 5001
        address = localhost
      }
      token = "<auth token>"
    }
  }
  features.enable-preview-commands = yes
  features.enable-testing-commands = yes
  features.enable-repair-commands = yes
}
- Run the Canton Console via Docker:
docker run -it --rm --network host \
-v $(pwd)/console.conf:/app/app.conf \
digitalasset/canton:3.3.0 --console
Replace 3.3.0 with the Canton version matching your deployment. Check available versions at Docker Hub.
If you run the participant with Docker Compose, attach the console container to the participant's Docker network (e.g., --network splice-validator) and set the address in the config to the participant's container name (e.g., address = participant).
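For a Docker Compose setup, the adjusted connection block might look like the following sketch. The container name participant and the network splice-validator are examples from a typical Splice validator deployment; substitute the names from your own compose file:

```
canton {
  remote-participants {
    participant {
      admin-api {
        port = 5002
        address = participant   # container name from the compose file
      }
      ledger-api {
        port = 5001
        address = participant
      }
      token = "<auth token>"
    }
  }
}
```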
Sequencer Console
- Ensure you can access the sequencer’s ports 5008 (public API) and 5009 (Admin API)
- Create a configuration file
console.conf:
canton {
  remote-sequencers {
    sequencer {
      public-api {
        port = 5008
        address = localhost
      }
      admin-api {
        port = 5009
        address = localhost
      }
    }
  }
  features.enable-preview-commands = yes
  features.enable-testing-commands = yes
  features.enable-repair-commands = yes
}
- Run the Docker command as shown above for the participant.
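Once the console starts, a quick liveness check against the remote sequencer (named sequencer, matching the configuration above) could look like this. health.status is a standard Canton console command, but the exact output format varies by version:

```
@ sequencer.health.status
@ sequencer.health.help()   // discover further sequencer health commands
```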
Mediator Console
- Ensure you can access the mediator’s port 5007 (Admin API)
- Create a configuration file
console.conf:
canton {
  remote-mediators {
    mediator {
      admin-api {
        port = 5007
        address = localhost
      }
    }
  }
  features.enable-preview-commands = yes
  features.enable-testing-commands = yes
  features.enable-repair-commands = yes
}
- Run the Docker command as shown above.
K8s Cluster Access
In a Kubernetes cluster, use a debug pod to access the console directly:
kubectl debug "${POD_NAME}" \
--image "$(kubectl get pod "${POD_NAME}" -o json | jq -re '.spec.containers[0].image')" \
-i -t -- bash
Where POD_NAME is the name of the participant, sequencer, or mediator pod. Inside the pod:
apt-get update && apt-get install -y vim
vim console.conf # paste in the appropriate config from above
/app/bin/canton -v -c console.conf
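As an alternative to installing an editor in the debug pod, you can write the configuration with a heredoc. This sketch assumes a participant pod with the default ports from the participant section above; adjust ports, and add the token line, to match your deployment:

```shell
# Write a minimal remote-participant console config without an editor.
cat > console.conf <<'EOF'
canton {
  remote-participants {
    participant {
      admin-api {
        port = 5002
        address = localhost
      }
      ledger-api {
        port = 5001
        address = localhost
      }
    }
  }
}
EOF

# Then start the console against it:
# /app/bin/canton -v -c console.conf
```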
Console Structure
Once connected, the console provides access to node-specific objects:
participant — For participant consoles: party management, package management, ledger operations
sequencer — For sequencer consoles: synchronizer status, sequencing operations
mediator — For mediator consoles: mediator state inspection
Each object exposes a hierarchy of commands organized by area: health, topology, parties, packages, synchronizers (on participants), and more.
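On a participant console, for example, a first diagnostic pass might look like the following. These are standard Canton console commands, but their availability and output format vary by version, so confirm with help() before relying on them:

```
@ participant.health.status                    // node connectivity and initialization state
@ participant.parties.list()                   // parties known to this participant
@ participant.synchronizers.list_connected()   // synchronizers this participant is connected to
@ participant.dars.list()                      // DAR packages uploaded to this participant
```

The object name (participant here) matches the instance name used in the remote-participants block of your console.conf.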
Help System
Use the built-in help to discover available commands:
@ participant1.help()
Top-level Commands
------------------
config - Return participant config
help - Help for specific commands (use help() or help("method") for more information)
id - Yields the globally unique id of this participant
is_initialized - Check if the local instance is running and is fully initialized
is_running - Check if the local instance is running
maybeId - Yields Some(id) of this participant if id present
simClock - Returns the node specific simClock
start - Start the instance
stop - Stop the instance
Command Groups
--------------
dars - Manage DAR packages
db - Database related operations
health - Health and diagnostic related commands
keys - Manage public and secret keys
ledger_api - Group of commands that access the ledger-api
metrics - Access the local nodes metrics
packages - Manage raw Daml-LF packages
parties - Inspect and manage parties
pruning - Commands to pruning the archive of the ledger
replication - Manage participant replication
resources - Functionality for managing resources
synchronizers - Manage synchronizer connections
topology - Topology management related commands
traffic_control - Traffic control related commands
@ participant1.topology.help()
Top-level Commands
------------------
help - Help for specific commands (use help() or help("method") for more information)
init_id - Initialize the node with a unique identifier
init_id_from_uid - Initialize the node with a unique identifier
Command Groups
--------------
decentralized_namespaces - Manage decentralized namespaces
mediators - Inspect mediator synchronizer state
namespace_delegations - Manage namespace delegations
owner_to_key_mappings - Manage owner to key mappings
participant_synchronizer_permissions - Inspect participant synchronizer permissions
participant_synchronizer_states - Inspect participant synchronizer states
party_hosting_limits - Manage party hosting limits
party_to_key_mappings - Manage party to key mappings
party_to_participant_mappings - Manage party to participant mappings
sequencers - Inspect sequencer synchronizer state
stores - Inspect topology stores
synchronizer_parameters - Manage synchronizer parameters state
synchronizer_trust_certificates - Manage synchronizer trust certificates
transactions - Inspect all topology transactions at once
vetted_packages - Manage package vettings
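Each of these groups supports the same help pattern, so you can drill down to a concrete method signature before running anything. For example, to inspect the list methods of two groups shown above (parameters vary by Canton version, which is exactly what the help output will tell you):

```
@ participant1.topology.party_to_participant_mappings.help("list")
@ participant1.topology.vetted_packages.help("list")
```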
@ participant1.parties.help("hosted")
hosted(filterParty: String, synchronizerIds: Set[com.digitalasset.canton.topology.SynchronizerId], asOf: Option[java.time.Instant], limit: com.digitalasset.canton.config.RequireTypes.PositiveInt): Seq[com.digitalasset.canton.admin.api.client.data.ListPartiesResult]
Inspect the parties hosted by this participant as used for synchronisation.
The response is built from the timestamped topology transactions of each synchronizer,
excluding the authorized store of the given node. The search will include all hosted
parties and is equivalent to running the `list` method using the participant id of the
invoking participant.
Parameters:
- filterParty: Filter by parties starting with the given string.
- filterSynchronizerId: Filter by synchronizers whose id starts with the given string.
- asOf: Optional timestamp to inspect the topology state at a given point in time.
- limit: How many items to return (defaults to canton.parameters.console.default-limit)
Example: participant1.parties.hosted(filterParty="alice")
Next Steps