PQS (Participant Query Store) subscribes to a validator’s transaction stream and projects contract data into a PostgreSQL database. Your backend queries this database with standard SQL, enabling filtered queries, aggregations, joins, and full-text search that would be impractical through the Ledger API alone.

How PQS Works

PQS runs as a sidecar service alongside a validator. It connects to the validator’s transaction stream and maintains a PostgreSQL database that reflects the current ledger state and historical data. As the validator emits update events, PQS inserts corresponding rows for contract create and archive events, as well as for exercise actions. Your SQL queries always reflect current ledger state, with a small propagation delay (typically milliseconds).
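
The projection loop can be pictured as a fold over the event stream. This hypothetical Python sketch illustrates the idea only; the real PQS writes rows to PostgreSQL rather than keeping an in-memory dictionary:

```python
# Toy sketch of the projection idea (not the real PQS implementation):
# fold a stream of create/archive events into the active-contract set.

def project(events):
    """Apply ledger events in order, returning the resulting active set."""
    active = {}  # contract_id -> payload
    for event in events:
        if event["type"] == "create":
            active[event["contract_id"]] = event["payload"]
        elif event["type"] == "archive":
            active.pop(event["contract_id"], None)
    return active

stream = [
    {"type": "create", "contract_id": "c1", "payload": {"owner": "Alice"}},
    {"type": "create", "contract_id": "c2", "payload": {"owner": "Bob"}},
    {"type": "archive", "contract_id": "c1"},
]
current = project(stream)  # only c2 remains active
```

Because archives remove rows from the active set while the full event history is kept separately, PQS can serve both "current state" and "historical" queries from the same database.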

When to Use PQS

PQS is the right choice when you need:
  • Filtered queries across many contracts — “all licenses expiring this month” or “all assets owned by this party”
  • Aggregations and reporting — Sums, counts, averages across contract data
  • Complex joins — Combining data from multiple template types
  • Full-text search — Searching contract fields by keyword
  • A read path that offloads the Ledger API — PQS queries hit PostgreSQL, not the participant
For simple point lookups (“get this contract by ID”), the Ledger API’s active contract set query works fine without PQS.

Setup

PQS requires a PostgreSQL database and a connection to a participant node’s transaction stream.

In LocalNet

The cn-quickstart LocalNet configuration includes PQS instances pre-configured for each validator. When you run make start, PQS starts automatically and begins projecting data.

Standalone Setup

For a standalone deployment:
  1. Provision a PostgreSQL database (version 14 or later recommended)
  2. Configure PQS with the participant node’s Ledger API address and authentication credentials
  3. Start the PQS service. PQS downloads Daml packages from the validator and uses them to create its schema; this happens automatically at startup and whenever it detects a new template.
PQS configuration includes:
  • Participant connection — Host, port, and authentication token for the Ledger API
  • Database connection — PostgreSQL connection string
  • Party filter — Which parties’ contract data to project (reduces storage and processing)
  • Template filter — Which templates to include (optional, projects all by default)
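
The settings above can be sketched as a simple structure with a validation step. The key names here are illustrative only, not the actual PQS option names; consult the PQS reference documentation for the real configuration keys:

```python
# Hypothetical configuration shape covering the four areas listed above.
# Key names are invented for illustration and do not match PQS's real options.

config = {
    "ledger_host": "participant1.example.com",  # participant connection
    "ledger_port": 5001,
    "ledger_token": "<auth token>",             # Ledger API credentials
    "db_dsn": "postgresql://pqs:secret@db:5432/pqs",  # database connection
    "parties": ["Alice::1234..."],              # party filter
    "templates": ["Licensing:License"],         # template filter (optional)
}

REQUIRED = {"ledger_host", "ledger_port", "db_dsn"}

def validate(cfg):
    """Fail fast on missing required settings before starting the service."""
    missing = REQUIRED - cfg.keys()
    if missing:
        raise ValueError(f"missing PQS settings: {sorted(missing)}")
    return cfg
```

Failing fast on incomplete configuration is preferable to discovering a missing credential after the projection has partially started.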

SQL Query Examples

Active Contracts

Query all active contracts for a specific template:
SELECT *
FROM active_contracts
WHERE template_id = 'Licensing:License'
  AND payload->>'owner' = 'Alice::1234...';

Transaction History

View recent transactions for a party:
SELECT event_id, template_id, event_type, effective_at
FROM events
WHERE witness_parties @> ARRAY['Alice::1234...']
ORDER BY effective_at DESC
LIMIT 50;

Aggregations

Count active contracts by template:
SELECT template_id, COUNT(*) as contract_count
FROM active_contracts
GROUP BY template_id
ORDER BY contract_count DESC;

Party Filtering

Find all contracts visible to a specific party:
SELECT contract_id, template_id, payload
FROM active_contracts
WHERE stakeholders @> ARRAY['Alice::1234...'];

Performance Optimization

Indexes

Add indexes on columns you query frequently. PQS creates basic indexes on startup, but your application may benefit from additional ones:
-- Index on a payload field you filter by often
CREATE INDEX idx_license_owner
ON active_contracts ((payload->>'owner'))
WHERE template_id = 'Licensing:License';

Connection Pooling

Use a connection pool (like HikariCP for Java or pg-pool for Node.js) between your backend and the PQS database. PQS itself maintains a separate connection to PostgreSQL for writes.
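
The checkout/return pattern those libraries implement can be shown with a toy pool. This is a minimal sketch for illustration; production code should use a battle-tested pool such as HikariCP or pg-pool rather than this class:

```python
import queue

class SimplePool:
    """Toy fixed-size connection pool illustrating the checkout/return
    pattern that HikariCP or pg-pool implement robustly."""

    def __init__(self, connect, size=4):
        self._idle = queue.Queue(maxsize=size)
        for _ in range(size):
            self._idle.put(connect())  # pre-open all connections

    def acquire(self, timeout=5.0):
        # Blocks until a connection is returned if the pool is exhausted.
        return self._idle.get(timeout=timeout)

    def release(self, conn):
        self._idle.put(conn)

# In real code, connect would open a psycopg/JDBC connection to the PQS database.
pool = SimplePool(connect=lambda: object(), size=2)
conn = pool.acquire()
pool.release(conn)
```

Pooling matters here because each request to your backend may issue several PQS queries; reusing connections avoids paying the PostgreSQL connection-setup cost on every query.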

Query Patterns

  • Avoid SELECT * on large tables; specify the columns you need
  • Use template_id filters to narrow the dataset before applying payload filters
  • For time-range queries, add indexes on effective_at or created_at columns
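
The first two patterns can be combined in a small query builder. This helper is hypothetical, not part of PQS or cn-quickstart; it shows explicit column lists, the template_id filter applied first, and parameterized payload values:

```python
def build_query(columns, template_id, payload_filters):
    """Assemble a parameterized query following the patterns above:
    explicit columns, template_id filter first, payload filters after.
    Field names are interpolated directly, so pass only trusted,
    application-defined field names; values are bound as parameters."""
    where = ["template_id = %s"]
    params = [template_id]
    for field, value in payload_filters.items():
        where.append(f"payload->>'{field}' = %s")
        params.append(value)
    sql = (f"SELECT {', '.join(columns)} FROM active_contracts "
           f"WHERE {' AND '.join(where)}")
    return sql, params

sql, params = build_query(
    ["contract_id", "payload"],
    "Licensing:License",
    {"owner": "Alice::1234..."},
)
```

The resulting statement and parameter list can be passed to any PostgreSQL driver that supports %s placeholders, such as psycopg.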

High Availability and Database Sharing

PQS High Availability

Each hosting validator runs its own PQS instance backed by its own PostgreSQL database. If one validator’s PQS becomes unavailable, your application can fail over to another hosting validator’s PQS. Design your backend to support PQS failover:
  • Configure multiple PQS connection strings, one per hosting validator
  • Implement connection health checks and automatic failover
  • Accept that different PQS instances may be at slightly different offsets during normal operation — they converge once the synchronizer delivers all pending updates
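
The failover steps above can be sketched as a function that walks an ordered list of connection strings and returns the first healthy connection. The connect and healthy callables are placeholders for your driver's connect call and health-check query:

```python
def connect_with_failover(dsns, connect, healthy):
    """Try each PQS connection string in order; return the first
    connection that passes the health check.

    dsns    -- ordered connection strings, one per hosting validator
    connect -- e.g. your driver's connect(dsn) (placeholder here)
    healthy -- e.g. runs "SELECT 1" and returns True on success
    """
    errors = []
    for dsn in dsns:
        try:
            conn = connect(dsn)
            if healthy(conn):
                return conn
        except OSError as exc:  # connection refused, timeout, etc.
            errors.append((dsn, str(exc)))
    raise ConnectionError(f"all PQS instances unavailable: {errors}")
```

A production version would also retry with backoff and periodically probe the preferred instance so traffic can move back once it recovers.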

Sharing PQS Databases with Application Tables

An application making use of the PQS datastore may also manage its own database migrations via Flyway (embedded, command-line, or another supported mechanism). A typical example is creating application-specific indexes. With default settings, the application’s Flyway produces an error because its view of available/valid migrations differs from PQS’s. However, it is trivial to instruct the application’s Flyway to use a different, non-default table name to store its versioning information, which allows both Flyways to coexist in the same database:
flyway -configFiles=conf/flyway.toml migrate \
  -table=myapp_version \
  -baselineOnMigrate=true \
  -baselineVersion=0
Now both PQS and the application can manage their own schema versions independently. The application should limit itself to adding indexes and other non-conflicting changes so the two Flyways can coexist without issues.

PQS in cn-quickstart

The cn-quickstart backend demonstrates PQS usage in the repository/ and pqs/ modules. The Pqs class generates SQL queries, and DamlRepository provides domain-specific methods that combine PQS reads with Ledger API writes. See Backend Development for detailed code examples.