When you start a GemFire XD locator in a WAN deployment, you must
specify additional configuration options to identify each GemFire XD cluster in
the WAN deployment.
Important Configuration Notes
- The -distributed-system-id option specifies a unique
integer that identifies the local cluster in which the locator participates.
Configure all locators for a given WAN site using the same
-distributed-system-id value.
- The -remote-locators option specifies the host names and port
numbers of one or more locators that identify remote GemFire XD clusters. The
local cluster uses the remote locators to connect to the remote GemFire XD systems
for WAN replication. If a GemFire XD cluster will replicate to multiple clusters
(or if a remote cluster uses more than one locator), specify multiple remote
locator addresses in a comma-separated list when starting a locator.
- The -conserve-sockets option
determines whether the locator shares a minimum number of socket connections
between applications that connect to the local system and local GemFire XD members
that distribute DML operations to remote WAN sites. Because of the
increased messaging requirements involved in WAN replication, always set
-conserve-sockets to false for GemFire XD members that
participate in a WAN deployment.
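As a sketch of how these three settings fit together, they can also be supplied as boot properties in a gemfirexd.properties file rather than on the command line. The property names and values below are illustrative assumptions (they mirror the command-line option names; verify them against the configuration property reference for your release):

# gemfirexd.properties -- illustrative sketch; names and hosts are assumptions
distributed-system-id=1
remote-locators=localhost2[20202],localhost3[30303]
conserve-sockets=false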
- Start a locator for a GemFire XD cluster. For
example, to start a locator for GemFire XD cluster 1 shown in Figure 2
(the host names and port numbers below are placeholders; substitute your own):
gfxd locator start -peer-discovery-address=localhost1 -peer-discovery-port=10101 \
  -locators=localhost1[10101] -distributed-system-id=1 \
  -remote-locators=localhost2[20202],localhost3[30303] -conserve-sockets=false
The above command configures the local GemFire XD distributed system (with
-distributed-system-id=1) to replicate to two different remote GemFire XD
distributed systems using standalone locators.
The -peer-discovery-address and -peer-discovery-port options identify the network connection
that local GemFire XD members use to discover each other in the distributed
system. The -locators option identifies all of the locators used in
the distributed system (the above example uses the single, standalone
locator that the gfxd command starts). These parameters are always used when
starting locators, regardless of whether WAN replication is configured.
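For instance, a GemFire XD data store member joins the local distributed system by pointing at that same locator address. The directory path below is an illustrative assumption:

gfxd server start -locators=localhost1[10101] -dir=/path/to/server1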
- To start a locator in GemFire XD cluster 2, you
would enter a command similar to:
gfxd locator start -peer-discovery-address=localhost2 -peer-discovery-port=20202 \
  -locators=localhost2[20202] -distributed-system-id=2 \
  -remote-locators=localhost1[10101],localhost3[30303] -conserve-sockets=false
The preceding commands ensure that both cluster 1 and cluster 2 are
associated with one another (and with cluster 3), and can replicate or receive
data as necessary.
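To complete the three-site topology, a locator for cluster 3 could be started following the same pattern. As above, the host names and port numbers are placeholders:

gfxd locator start -peer-discovery-address=localhost3 -peer-discovery-port=30303 \
  -locators=localhost3[30303] -distributed-system-id=3 \
  -remote-locators=localhost1[10101],localhost2[20202] -conserve-sockets=false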