Replicating databases across clusters

Preview Feature

The cross-cluster database replication feature is offered AS-IS as described in your agreement with Neo4j and should only be used for internal development purposes.

When this feature becomes generally available, you will need to upgrade to the latest Neo4j version (which may require downtime) to use the feature for non-development purposes.

During the Preview period, customers with active contracts may contact Neo4j Support through the standard support channels. Please note that any cases related to the Preview feature will be classified as Severity 4 by default, in accordance with the Support Terms.

Overview

Cross-cluster database replication (CCDR), introduced in 2026.03, provides the functionality for a database in one cluster to be replicated to another cluster. The original database is called the upstream or source database, while the database that replicates its data is called the replica. The replica continuously polls the upstream database for new transactions and applies them as soon as they become available (see Figure 1).

Figure 1. Schema of replicating a database across clusters

Prerequisites for creating a replica database

Let’s assume you have two separate clusters running independently from each other: Cluster A and Cluster B.

Ensure that both clusters run the same Neo4j version, which must be 2026.03 or later.

The procedures for creating and promoting replica databases are supported in both Cypher 5 and Cypher 25.

In the preview version, only the block store format is supported for the upstream and replica databases.

Configuring inter-cluster encryption

First, ensure that each cluster is encrypted using intra-cluster encryption. To ensure that Cluster A and Cluster B can mutually authenticate, each cluster must trust the Certificate Authority (CA) that signs the other cluster’s certificates.

If clusters use different CAs, the CA certificate of Cluster A must be installed in the trusted/ directory of Cluster B, and vice versa.

For example, here is one possible certificate folder configuration:

Cluster A

cluster/
├── private.key                 ← Cluster A node private key
├── public.crt                  ← Cluster A node certificate
├── trusted/
│   └── clusterB-ca.crt         ← CA that signs Cluster B node certs
└── revoked/


Cluster B

cluster/
├── private.key                 ← Cluster B node private key
├── public.crt                  ← Cluster B node certificate
├── trusted/
│   └── clusterA-ca.crt         ← CA that signs Cluster A node certs
└── revoked/
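With a layout like the one above, exchanging the CA certificates could look like the following sketch. The hostnames and paths are illustrative; adjust them to your certificates directory and SSL policy configuration.

```shell
# Illustrative only: copy each cluster's CA certificate into the other
# cluster's trusted/ directory, then restart the servers so the new
# trust roots are picked up. Hostnames and paths are examples.
scp clusterB-ca.crt clusterA-host:/var/lib/neo4j/certificates/cluster/trusted/
scp clusterA-ca.crt clusterB-host:/var/lib/neo4j/certificates/cluster/trusted/
```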

Creating a replica database

Use the following procedure to create a replica database.

Table 1. internal.dbms.createReplicaDatabase()

Syntax: internal.dbms.createReplicaDatabase(databaseName, numPrimaries, numSecondaries, {remote: upstreamDatabaseName, addresses: remoteAddresses})

Description: Create a replica database with a topology of primaries and secondaries.

Input arguments:

- databaseName (STRING): The name of the replica in the current cluster. Note that the databaseName can be the same as the upstreamDatabaseName.
- numPrimaries (INTEGER): The number of replica primaries. These primaries will align with the mode constraints set on the server, but will not be writable because they are still replicas.
- numSecondaries (INTEGER): The number of replica secondaries.
- upstreamDatabaseName (STRING): The name of the upstream database in the remote cluster.
- remoteAddresses (LIST<STRING>): A list of the cluster endpoints of the servers in the remote cluster.

Mode: WRITE

On Cluster A, the database foo is running on three servers with the cluster addresses server01.example.com:6000, server02.example.com:6000, and server03.example.com:6000.

To create a replica database foo-replica with three primaries and two secondaries on Cluster B that will replicate from the upstream database foo from Cluster A, run the following procedure:

CALL internal.dbms.createReplicaDatabase("foo-replica", 3, 2, {remote: "foo", addresses:["server01.example.com:6000","server02.example.com:6000","server03.example.com:6000"]});

All replica databases are read-only. A replica database can have a topology with primary and secondary copies that follow the servers' mode constraints; however, both primaries and secondaries remain read-only.

The command succeeds only if the servers of Cluster B can contact the servers of Cluster A, and if the upstream database foo exists and is running. Otherwise, the command results in an error.

Verify that the foo-replica database is online on the desired number of servers, in the desired roles. Note that the type is labelled as replica.

SHOW DATABASE foo-replica;
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| name          | type      | aliases | access       | address          | role         | writer | requestedStatus | currentStatus | statusMessage | default | home  | constituents|
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "foo-replica" | "replica" | []      | "read-write" | "localhost:7687" | "primary"    | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []          |
| "foo-replica" | "replica" | []      | "read-write" | "localhost:7688" | "primary"    | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []          |
| "foo-replica" | "replica" | []      | "read-write" | "localhost:7689" | "primary"    | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []          |
| "foo-replica" | "replica" | []      | "read-write" | "localhost:7690" | "secondary"  | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []          |
| "foo-replica" | "replica" | []      | "read-write" | "localhost:7691" | "secondary"  | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []          |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

5 rows available after 3 ms, consumed after another 1 ms

Accessing replica databases via drivers or Cypher Shell

Because replica databases are read-only, ensure that drivers either have the access mode set to READ or only use read transaction functions such as executeRead. Attempts to write to the replica database fail.
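For example, with the Neo4j Python driver (v5) a read-only query can be routed explicitly to readers. This is a minimal sketch with illustrative URI, credentials, and query; it needs a running cluster, so it is shown for orientation only.

```python
from neo4j import GraphDatabase, RoutingControl

# Illustrative connection details; adjust for your deployment.
URI = "neo4j://localhost:7687"
AUTH = ("neo4j", "secret-password")

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    # routing_=RoutingControl.READ routes the query to readers only,
    # which is required because the replica rejects writes.
    records, summary, keys = driver.execute_query(
        "MATCH (n) RETURN count(n) AS nodes",
        database_="foo-replica",
        routing_=RoutingControl.READ,
    )
    print(records[0]["nodes"])
```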

Furthermore, when accessing the database via Cypher Shell, ensure that --access-mode is also set to read when connecting.

bin/cypher-shell --access-mode=read -a neo4j://localhost:7684 -d foo-replica

This is necessary because all replicas are read-only; without a read access mode, the driver fails to find a WRITE server.

When accessing the database via the Neo4j Browser, ensure that the Access mode is set to Read. This can be found in the Browser Settings drawer.

Setting up routing policies

Replica databases respect the built-in routing policies.

Example 1

For example, if you have no routing policies, calling CALL dbms.routing.getRoutingTable({}, "foo-replica") returns all five servers as readers and no writers:

+-------------------------------------------------------------------------------------------------------------------------------+
| ttl | servers                                                                                                                 |
+-------------------------------------------------------------------------------------------------------------------------------+
| 300 | [{addresses: ["localhost:7687", "localhost:7688", "localhost:7689", "localhost:7690", "localhost:7691"], role: "READ"}] |
+-------------------------------------------------------------------------------------------------------------------------------+
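Programmatically, a routing-table result of this shape can be split into reader and writer addresses. The sketch below assumes the servers structure shown above (ROUTE entries, if present, are ignored); the sample data is illustrative.

```python
def split_routing_table(servers):
    """Return (readers, writers) address lists from routing-table entries."""
    readers = [a for s in servers if s["role"] == "READ" for a in s["addresses"]]
    writers = [a for s in servers if s["role"] == "WRITE" for a in s["addresses"]]
    return readers, writers

# Data mirroring Example 1: five readers, no writers.
servers = [
    {"addresses": ["localhost:7687", "localhost:7688", "localhost:7689",
                   "localhost:7690", "localhost:7691"], "role": "READ"},
]
readers, writers = split_routing_table(servers)
# readers holds five addresses; writers is empty, as expected for a replica.
```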

Example 2

If you set dbms.routing.reads_on_primaries_enabled=false in the neo4j.conf file to disable reads on primaries, the routing policy results in the following routing table, where only the two secondaries are readers:

+-------------------------------------------------------------------------+
| ttl | servers                                                           |
+-------------------------------------------------------------------------+
| 300 | [{addresses: ["localhost:7690", "localhost:7691"], role: "READ"}] |
+-------------------------------------------------------------------------+

Managing user roles and privileges

User privileges and roles are not copied over when replicating a database.

Permissions and role-based access control have to be set up separately on Cluster A and Cluster B. Treat the two clusters as independent entities, with a standard database replicating between them. Furthermore, the system database cannot be replicated.
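Because grants do not replicate, any required roles must be recreated on the replica cluster. A minimal sketch using standard role-management commands; the role and user names are examples only:

```cypher
// Run on Cluster B against the system database; names are illustrative.
CREATE ROLE replicaReaders IF NOT EXISTS;
GRANT ACCESS ON DATABASE `foo-replica` TO replicaReaders;
GRANT MATCH {*} ON GRAPH `foo-replica` TO replicaReaders;
GRANT ROLE replicaReaders TO alice;
```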

Monitoring replica databases

Replicas asynchronously replicate data from the upstream database, so it is important to track the replication lag between the upstream and replica databases. To do this, plot the <prefix>.database.<db>.transaction.last_committed_tx_id metric for both the upstream and the replica databases. The difference between the two values captures the lag between the upstream and the replica.
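The lag computation itself is a simple difference of the two metric samples. A sketch with illustrative values; in practice the tx-ids come from your metrics backend:

```python
def replication_lag(upstream_tx_id, replica_tx_id):
    """Number of committed transactions the replica is behind the upstream."""
    return max(0, upstream_tx_id - replica_tx_id)

# Illustrative samples of last_committed_tx_id: the upstream has committed
# 10,542 transactions and the replica has applied 10,539 of them.
lag = replication_lag(upstream_tx_id=10_542, replica_tx_id=10_539)
# lag == 3
```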

Disaster recovery scenario

It is recommended to set up two independent clusters in two different cloud regions. If the main region goes down, you can promote the replica database in the disaster recovery cluster (DR-cluster) and redirect data traffic to the promoted database.

Figure 2. Disaster
Figure 3. Promoting the replica database
Figure 4. Redirecting data traffic

Promoting a replica database

If the source cluster becomes unavailable, you can promote a replica database to accept writes.

Promotion is a one-way operation. Once a replica is promoted, it cannot be re-attached to the original source. To resume replication, you need to create a new replica.

A replica can be promoted in two ways: by retaining its topology or by modifying it.

Promote a replica database without modifying its topology
Table 2. internal.dbms.promoteReplicaDatabase()

Syntax: internal.dbms.promoteReplicaDatabase(replicaDatabaseName)

Description: Convert a replica database to a standard database while keeping the data and topology it currently has.

Input arguments:

- replicaDatabaseName (STRING): The name of the replica in the current cluster which is going to be promoted.

Mode: WRITE

On Cluster B, to promote the replica database foo-replica with its existing topology of three primaries and two secondaries, call the procedure:

CALL internal.dbms.promoteReplicaDatabase('foo-replica');

The database becomes write-available.

Verify that the foo-replica database is online on the desired number of servers and has type standard.

SHOW DATABASE foo-replica;
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| name          | type       | aliases | access       | address          | role         | writer | requestedStatus | currentStatus | statusMessage | default | home  | constituents|
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "foo-replica" | "standard" | []      | "read-write" | "localhost:7687" | "primary"    | TRUE   | "online"        | "online"      | ""            | FALSE   | FALSE | []          |
| "foo-replica" | "standard" | []      | "read-write" | "localhost:7688" | "primary"    | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []          |
| "foo-replica" | "standard" | []      | "read-write" | "localhost:7689" | "primary"    | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []          |
| "foo-replica" | "standard" | []      | "read-write" | "localhost:7690" | "secondary"  | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []          |
| "foo-replica" | "standard" | []      | "read-write" | "localhost:7691" | "secondary"  | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []          |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

5 rows available after 3 ms, consumed after another 1 ms

One of the primary databases is now write-available, as indicated by writer being TRUE. At this point, the routing table returns both writers and readers, as for a standard database.

Promote a replica database while modifying its topology

You can alter the replica’s topology during promotion by using the following procedure:

Table 3. internal.dbms.promoteReplicaDatabaseWithTopology()

Syntax: internal.dbms.promoteReplicaDatabaseWithTopology(replicaDatabaseName, numPrimaries, numSecondaries)

Description: Convert a replica database to a standard database with a new topology while keeping the data it currently has.

Input arguments:

- replicaDatabaseName (STRING): The name of the replica in the current cluster which is going to be promoted.
- numPrimaries (INTEGER): The new number of primaries. These primaries will align with the mode constraints set on the server, and will become writable.
- numSecondaries (INTEGER): The new number of secondaries.

Mode: WRITE

You must specify the number of primaries and secondaries:

CALL internal.dbms.promoteReplicaDatabaseWithTopology('foo-replica', 3, 4);

This will promote the replica database with three primaries and four secondaries.

Failback

Failback is the process of restoring the main cluster back into operation after a DR scenario. This assumes that the promoted database is currently running in the DR-cluster.

To restore the previous source database, follow these steps:

  1. Create a replica database foo in the main cluster, Cluster A, with the promoted database foo-replica in the DR-cluster, Cluster B, as its upstream (see Figure 3).

    Figure 5. Recovery after a disaster
  2. Stop write operations on the promoted database foo-replica in Cluster B.

  3. Check the metrics to ensure that last_committed_tx_id is exactly the same for both the promoted database and the replica.

  4. Promote the replica database foo in Cluster A. This database then starts serving write operations and once again becomes the main database serving traffic.

  5. Stop and drop the database foo-replica in the DR-cluster. Create a new replica foo-new-replica with the upstream pointing to the main database foo in the main cluster (see Figure 5).

    Figure 6. Restoring the upstream database in the main cluster
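The catch-up check in step 3 amounts to comparing the two tx-id values once writes have stopped. A sketch with illustrative numbers; in practice they come from the last_committed_tx_id metric:

```python
def safe_to_promote(promoted_tx_id, replica_tx_id):
    """True when the replica has fully caught up with the promoted database."""
    return replica_tx_id == promoted_tx_id

# Writes are stopped and the replica has caught up: safe to promote.
caught_up = safe_to_promote(promoted_tx_id=20_317, replica_tx_id=20_317)

# Still lagging: wait before promoting.
lagging = safe_to_promote(promoted_tx_id=20_317, replica_tx_id=20_314)
```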