Introduction guide for Apache Geode
How to configure the cluster based on Multi-site (WAN) configuration
Akihiro Kitada
2018/1/31
What is “Apache Geode?”
What is “Multi-site (WAN) configuration?”
Apache Geode and Multi-site (WAN) configuration
 What is “Apache Geode?”
– The open-source (OSS) version of Pivotal GemFire, an In-Memory Data Grid product.
– A Key-Value NoSQL in-memory database that scales out as a distributed system. It lets users query data entries with the OQL language (similar to SQL), provides listeners to execute logic triggered by data modification, executes functions according to data location, and so on.
– Similar products include Apache Ignite, Infinispan, Hazelcast, and so on.
– http://geode.apache.org/
 What is “Multi-site (WAN) configuration?”
– One of the topologies for replicating data between different Apache Geode clusters, similar to Oracle GoldenGate or other database replication solutions.
– To absorb the communication latency of replication over a WAN, data is replicated asynchronously, in batches of chunked updates.
Let’s first configure two single Apache Geode clusters, connect them with a Multi-site (WAN) configuration, and replicate data between them!
To observe the data replication, let’s run a simple client application that reports the replication behavior!
Target single cluster configuration (1/2)
[Diagram: one server machine hosting Locator 1, Locator 2, Cache Server 1, and Cache Server 2; two Partitioned Regions (redundant copy x1) persisted to local disk]
This time, let’s configure a single Apache Geode cluster with a minimally redundant configuration on your macOS or Linux host. Building it on virtual machines is also fine.
A “Locator” is the communication point for connecting to the target cluster and the manager of the cluster itself. One Locator is enough, but let’s have two Locators for fault tolerance.
Target single cluster configuration (2/2)
A “Cache Server” is the actual in-memory data store. By aggregating multiple Cache Servers, they behave like one huge in-memory data store. This time, let’s have two Cache Servers, again for fault tolerance.
A “Region” is similar to a “Table” in an RDBMS. This time, let’s create two “Partitioned Regions”, which are like an HDFS data store with replication factor = 2, persisted to a local disk store.
Target Multi-site (WAN) configuration
[Diagram: Cluster 1 and Cluster 2 connected over a WAN; each Cache Server hosts a Gateway Sender (S) and a Gateway Receiver (R)]
For HA purposes, each Cache Server has its own Gateway Receiver and Gateway Sender. The former is a service that receives updates from the other cluster; the latter is a service that sends updates to the other cluster.
By that point you will already have created two regions: one is configured for receiving updates from the remote cluster, and the other for sending updates to the remote cluster.
Now, let’s configure a single
Apache Geode cluster!
Preparation (1/8)
On your machine, let’s prepare the configuration files, start-up scripts, and local directories for logs and persistence files required by the Locators and Cache Servers.
Preparation (2/8)
 Set up your network so that all the machines can communicate with each other over TCP/IP.
 Check the IP address assigned to your machine.
 Confirm your host name: $ hostname
 Add the following entry in /etc/hosts of your machine.
– [your IP] [your hostname]
– Ex) 192.168.2.100 akitada-mcbk13
In an environment with multiple NICs, Locators may fail to connect to the JMX Manager server without this configuration.
Preparation (3/8)
 Deploy the base configuration and application
– Get the required artifacts from GitHub (https://github.com/quitada41/Geode-Multisite-handson) and deploy them under /Users/[user name]/Geode. An example deployment looks like this:
▪ /Users/akitada/Geode/GEO130.profile – profile to set required environment variables
▪ /Users/akitada/Geode/cache-wan.xml – cache configuration file for Multi-site (WAN) configuration
▪ /Users/akitada/Geode/cache.xml – cache configuration file for single cluster configuration
▪ /Users/akitada/Geode/client – client application directory to check replication behavior on Multi-site (WAN) configuration
— lib/GeodeCacheListenerClient.jar – jar file for the client application
— src/io/pivotal/akitada/GeodeCacheListenerClient.java – source code for the client application (just for your reference)
— startListener.sh – start up script for the client application
▪ /Users/akitada/Geode/geode-wan-diff.properties – properties file for Multi-site configuration (delta from single cluster)
▪ /Users/akitada/Geode/geode.properties – properties file for single cluster configuration
▪ /Users/akitada/Geode/locator* – local directory for Locators
▪ /Users/akitada/Geode/server* – local directory for Cache Servers
▪ /Users/akitada/Geode/startLocator*.sh – start up script for Locators
▪ /Users/akitada/Geode/startServer*.sh – start up script for Cache Servers
▪ /Users/akitada/Geode/stopLocator*.sh – shut down script for Locators
▪ /Users/akitada/Geode/stopServer*.sh – shut down script for Cache Servers
Preparation (4/8)
 Unzip/untar the Apache Geode binary, install it in your preferred directory, and set the environment variables required to run gfsh*.
– Set JAVA_HOME environment variable
– Determine install directory: Ex) /opt/Apache/apache-geode-1.3.0
– Set PATH environment to run gfsh*
– Example profile to set required environment variable (GEO130.profile)
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
#export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_152.jdk/Contents/Home
export GEODE_HOME=/opt/Apache/apache-geode-1.3.0
export PATH=$JAVA_HOME/bin:$GEODE_HOME/bin:$PATH
*= pronounced “jee-fish” – command line tool to manage and configure Apache Geode.
Preparation (5/8)
 Set the required environment variables
– After sourcing the profile, execute all subsequent operations from this console.
 Modify the property file (1/3)
– You can find the file “geode.properties” under your Geode directory (/Users/[OS user name]/Geode). Modify it according to your machine environment.
– The default name of the property file is “gemfire.properties,” which derives from Pivotal GemFire (the commercial version of Apache Geode). Any file name is fine as long as you specify the actual file name when starting Locators and Cache Servers.
$ cd /Users/[your OS user name]/Geode
$ . ./GEO130.profile
Preparation (6/8)
 Modify the property file (2/3)
– Mainly, modify the IP addresses to match your environment.
▪ x.x.x.x = IP address of your machine
log-level=config
locators=x.x.x.x[55221],x.x.x.x[55222]
bind-address=x.x.x.x
server-bind-address=x.x.x.x
jmx-manager-bind-address=x.x.x.x
enable-cluster-configuration=false
Preparation (7/8)
 Modify the property file (3/3)
– Explanation of each property:
▪ log-level : specifies the logging level. Setting “config” is convenient for checking the actual configuration as well as general info-level log messages.
▪ locators : specifies the list of Locators (hostname[port],hostname[port],…) to connect to. This is required to start the cluster properly.
▪ bind-address : specifies the host name used for peer-to-peer communication in the cluster. If the target host has only one NIC, you don’t have to specify this parameter, but with multiple NICs it is recommended so that communication goes through the expected network segment.
▪ server-bind-address : specifies the host name used for client-server communication. As with bind-address, it can be omitted on single-NIC hosts. This is applicable only to Cache Server processes.
▪ jmx-manager-bind-address : specifies the host name for communication with the JMX Manager service. As with bind-address, it can be omitted on single-NIC hosts. This is applicable only to members that host the JMX Manager service; by default, that is the Locator which starts first.
▪ enable-cluster-configuration : specifies whether the cluster configuration service is enabled. This time it is set to false, because you set the cache configuration via cache.xml. If the service is enabled, all cache configuration done via gfsh is persisted in the Locators’ directories and automatically reapplied when the cluster restarts.
▪ For parameters that take a host name, you can specify IP address(es) instead, as long as your system can resolve names via DNS, the /etc/hosts file, or similar.
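To make this concrete: assuming your machine’s address is 192.168.2.100 (the same hypothetical address used in the /etc/hosts example earlier), the filled-in geode.properties would look like this:

```
log-level=config
locators=192.168.2.100[55221],192.168.2.100[55222]
bind-address=192.168.2.100
server-bind-address=192.168.2.100
jmx-manager-bind-address=192.168.2.100
enable-cluster-configuration=false
```

Substitute your own address. All the address parameters point at the same machine here because both Locators and both Cache Servers run on a single host in this hands-on.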
Preparation (8/8)
 On each machine, you now have directories for the Locators and Cache Servers, which will store log files, persistent disk stores, and so on.
$ cd /Users/[your OS user name]/Geode
$ ls -l
:
drwxr-xr-x 9 akitada staff 306 1 17 14:04 locator1/
:
drwxr-xr-x 5 akitada staff 170 1 17 14:04 server1/
:
In this case, “locator1” is a directory for Locator 1.
“server1” is a directory for Cache Server 1.
Start Locators (1/3)
Start two Locators on each machine using gfsh from the command line.
Start Locators (2/3)
 Start the Locators on your machine
– Modify the start-up scripts (startLocator*.sh): set each Locator’s name with the “--name” parameter. Any name is fine as long as it is unique within the cluster.
– Start the Locators
$ cd /Users/[your OS user name]/Geode
$ sh ./startLocator1.sh
$ sh ./startLocator2.sh
#! /bin/sh
gfsh start locator --name=**** --dir=locator --port=55221 --properties-file=geode.properties
Start Locators (3/3)
 Explanation of the parameters specified with the “gfsh start locator” command
– --name : specifies a name for the target Locator, unique within the cluster, used to identify it. Here, use each Locator’s directory name. Any name is fine as long as it is unique in the cluster.
– --dir : specifies the path to the Locator directory created in the Preparation section. Either a full path or a relative path is fine. Here, specify the path relative to the root directory of the cluster (/Users/[your OS user name]/Geode), where you start the Locators.
– --port : specifies the Locator’s listen port, matching the locators list in geode.properties. Here, assign “55221” to Locator 1 and “55222” to Locator 2.
– --properties-file : specifies the path to the properties file (geode.properties). Either a full path or a relative path is fine.
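Putting the parameters together, the two start-up scripts might look like this (a sketch; the names are just examples, and only --name, --dir, and --port differ between them):

```
# startLocator1.sh
gfsh start locator --name=locator1 --dir=locator1 --port=55221 --properties-file=geode.properties

# startLocator2.sh
gfsh start locator --name=locator2 --dir=locator2 --port=55222 --properties-file=geode.properties
```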
Start Cache Servers (1/4)
Start two Cache Servers using the gfsh command.
Start Cache Servers (2/4)
 Start the Cache Servers on your machine
– Modify the start-up scripts (startServer*.sh): set each Cache Server’s name with the “--name” parameter. Any name is fine as long as it is unique within the cluster.
– Start the Cache Servers
$ cd /Users/[your OS user name]/Geode
$ sh ./startServer1.sh
$ sh ./startServer2.sh
#! /bin/sh
gfsh start server --name=**** --dir=server --server-port=0 --properties-file=geode.properties --cache-xml-file=cache.xml
Start Cache Servers (3/4)
 Explanation of the parameters specified with the “gfsh start server” command
– --name : specifies a name for the target Cache Server, unique within the cluster, used to identify it. Here, use each Cache Server’s directory name. Any name is fine as long as it is unique in the cluster.
– --dir : specifies the path to the Cache Server directory created in the Preparation section. Either a full path or a relative path is fine. Here, specify the path relative to the root directory of the cluster (/Users/[your OS user name]/Geode), where you start the Cache Servers.
– --server-port : specifies the listen port used for client-server communication. Setting “0” makes an unused port be assigned automatically. The assigned port of each Cache Server is registered with the Locators, so client applications can learn the target Cache Server’s listen port from the Locators. Of course, you can also set a specific port number.
– --properties-file : specifies the path to the properties file (geode.properties). Either a full path or a relative path is fine.
– --cache-xml-file : specifies the path to the cache configuration file. Each Cache Server creates Regions and so on according to this file.
Start Cache Servers (4/4)
 (Reference) Example cache configuration file (cache.xml)
<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://geode.apache.org/schema/cache"
xsi:schemaLocation="http://geode.apache.org/schema/cache
http://geode.apache.org/schema/cache/cache-1.0.xsd"
version="1.0" lock-lease="120" lock-timeout="60" search-timeout="300"
is-server="true" copy-on-read="false">
<region name="ExRegion1" refid="PARTITION_REDUNDANT_PERSISTENT" />
<region name="ExRegion2" refid="PARTITION_REDUNDANT_PERSISTENT" />
</cache>
This definition creates two regions, ExRegion1 and ExRegion2, as Partitioned Regions with one redundant copy and data persisted to disk (PARTITION_REDUNDANT_PERSISTENT).
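For reference, if you had left enable-cluster-configuration=true instead of using cache.xml, the same two regions could also be created interactively from gfsh (a sketch; not used in this hands-on, where the cluster configuration service is disabled):

```
gfsh>create region --name=ExRegion1 --type=PARTITION_REDUNDANT_PERSISTENT
gfsh>create region --name=ExRegion2 --type=PARTITION_REDUNDANT_PERSISTENT
```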
Now you’ve configured an Apache Geode cluster. Let’s confirm the actual configuration and behavior using gfsh.
Cluster management by gfsh (1/2)
 If you execute the gfsh command without any arguments, you get the “gfsh>” prompt shown below, from which you can run sub-commands interactively.
 As in “bash”, you can list all available sub-commands and complete their parameters with the TAB key.
$ gfsh
_________________________ __
/ _____/ ______/ ______/ /____/ /
/ / __/ /___ /_____ / _____ /
/ /__/ / ____/ _____/ / / / /
/______/_/ /______/_/ /_/ 1.3.0
Monitor and Manage Apache Geode
gfsh>
Cluster management by gfsh (2/2)
 First, connect to a Locator with the following command; almost all sub-commands require this connection.
– Specify one of the Locators’ host names and its port number with the “--locator” parameter.
 Now, let’s execute the following management sub commands and check results.
gfsh>connect --locator=machine1[55221]
gfsh>help
gfsh>list members
gfsh>show log --member=server1 --lines=100
gfsh>change loglevel --loglevel=fine --members=server1
gfsh>show log --member=server1 --lines=100
gfsh>change loglevel --loglevel=config --members=server1
gfsh>status server --name=server1
gfsh>describe member --name=server1
You’ve confirmed the
configuration of your Apache
Geode cluster. Next, let’s put
and get data entries via gfsh
and query them.
Put, get, query data entries (1/2)
 First of all, execute the following command to confirm that the two Regions - ExRegion1 and ExRegion2 - were actually created.
 Next, check the detailed configuration of each Region.
 Once you’ve confirmed both Regions were created successfully, let’s put, get, and query some data entries.
gfsh>list region
gfsh>describe region --name=/ExRegion1
gfsh>describe region --name=/ExRegion2
Put, get, query data entries (2/2)
 Now, put a data entry. Execute the following command after connecting to the cluster via gfsh; it inserts a key-value pair. Specify any strings you like with the --key and --value parameters. Let’s put multiple data entries by changing the key and value strings.
 Get a data entry by specifying an existing key, like the following.
 Query all the values you put in the previous step, like the following. The query language is similar to SQL.
 Query all the key-value pairs you put in the previous step, like the following. You can use an alias to refer to the target region.
 You can use a “where” clause to filter the result by specifying conditions, as in SQL, like the example below.
gfsh>put --region=/ExRegion1 --key='<any string>' --value='<any string>'
gfsh>get --region=/ExRegion1 --key='<existing key>'
gfsh>query --query="select distinct * from /ExRegion1"
gfsh>query --query="select ex1.key,ex1.value from /ExRegion1.entrySet ex1"
gfsh>query --query="select ex1.key,ex1.value from /ExRegion1.entrySet ex1 where ex1.key='<existing key>'"
Post process
 To prepare for the next session, shut down your cluster.
 (Reference) gfsh commands to shut down Locators and Cache Servers
– The shortest command line, specifying the local directory
$ cd /Users/[your OS user name]/Geode
$ sh ./stopServer1.sh &
$ sh ./stopServer2.sh &
$ sh ./stopLocator1.sh
$ sh ./stopLocator2.sh
gfsh stop server --dir=server1
gfsh stop locator --dir=locator1
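For reference, gfsh can also shut down a whole cluster at once after connecting to a Locator; a sketch:

```
gfsh>connect --locator=192.168.2.100[55221]
gfsh>shutdown --include-locators=true
```

This hands-on uses the stop scripts instead, so that each member is stopped individually.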
Now, Apache Geode clusters are running on each machine. Next, let’s reconfigure them for Multi-site (WAN) and replicate data entries between the clusters!
Multi-site (WAN) configuration – cluster configuration (1/2)
Let’s add the additional properties to the property file needed to replicate data entries between the two clusters.
Multi-site (WAN) configuration – cluster configuration (2/2)
 Set the properties required for the Multi-site (WAN) configuration
– Delta file (geode-wan-diff.properties) : modify the following values for each additional property.
▪ x = the cluster ID. Set a unique ID per cluster; any positive integer is fine, but it must be the same for all members of the same cluster.
▪ y = the Locator list of the remote cluster. In this case, set the remote cluster’s Locator IPs.
– Append the above properties to the existing geode.properties file.
$ cd /Users/[your OS user name]/Geode
$ cat geode-wan-diff.properties >> geode.properties
distributed-system-id=x
remote-locators=y.y.y.y[55221],y.y.y.y[55222]
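As a concrete sketch, assuming Cluster 1’s Locators run at 192.168.2.100 and Cluster 2’s at 192.168.2.200 (hypothetical addresses), the deltas for the two sides mirror each other:

```
# Cluster 1 (geode-wan-diff.properties)
distributed-system-id=1
remote-locators=192.168.2.200[55221],192.168.2.200[55222]

# Cluster 2 (geode-wan-diff.properties)
distributed-system-id=2
remote-locators=192.168.2.100[55221],192.168.2.100[55222]
```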
Multi-site (WAN) configuration – cache configuration (1/4)
Modify the cache configuration file to add settings for the Gateway Sender, which sends updates to the remote cluster, and the Gateway Receiver, which receives updates from the remote cluster.
Multi-site (WAN) configuration – cache configuration (2/4)
 Modify and review the cache configuration file with the Gateway Sender/Receiver configuration added (cache-wan.xml) : check the gateway-related settings below.
<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns:xsi=…>
<gateway-sender id="GwSender1" enable-persistence="true"
manual-start="true" batch-size="100" batch-time-interval="1000"
remote-distributed-system-id="2" parallel="true" />
<gateway-receiver manual-start="false" start-port="41000" end-port="41999" />
<region name="ExRegion1" refid="PARTITION_REDUNDANT_PERSISTENT">
<region-attributes gateway-sender-ids="GwSender1" />
</region>
<region name="ExRegion2" refid="PARTITION_REDUNDANT_PERSISTENT"/>
</cache>
Gateway Sender configuration: sends updates to the remote cluster.
Gateway Receiver configuration: receives updates from the remote cluster.
The Gateway Sender with ID “GwSender1” is attached to the ExRegion1 region to send its updates to the remote cluster.
The Gateway Receiver service starts automatically when the Cache Server starts (manual-start attribute).
Multi-site (WAN) configuration – cache configuration (3/4)
 Explanation of the attributes specified in cache-wan.xml (1/2)
– gateway-sender element:
▪ id : specifies the ID of this Gateway Sender configuration. In the example it is “GwSender1” because it is attached to “ExRegion1”; if you attached one to “ExRegion2”, it could be “GwSender2”.
▪ enable-persistence : specifies whether updates in the Gateway Sender queue are persisted. In this case the target region itself is persisted, so you have to set this value to true.
▪ manual-start : specifies whether the Gateway Sender service must be started manually. In the example it is set to true, meaning you have to start the Gateway Sender service manually via gfsh or the API. Otherwise, the service starts automatically when the target Cache Server starts.
▪ remote-distributed-system-id : specifies the remote cluster to which this Gateway Sender sends updates. In this example the remote cluster’s distributed system ID is “2”, as set by the distributed-system-id property on the remote cluster.
▪ parallel : specifies whether updates are sent in parallel. Here it is “true”, which makes this a “Parallel Gateway Sender”.
Multi-site (WAN) configuration – cache configuration (4/4)
 Explanation of the attributes specified in cache-wan.xml (2/2)
– gateway-receiver element:
▪ manual-start : specifies whether the Gateway Receiver service must be started manually. In the example it is set to false, meaning the Gateway Receiver service starts automatically when the target Cache Server starts.
– region element:
▪ name : specifies the region name. In the Multi-site (WAN) configuration, this is the region to which Gateway Sender(s) are attached in order to send its updates to the remote cluster. In this example the Gateway Sender is attached to “ExRegion1”, while at the remote cluster it is attached to “ExRegion2”.
– region-attributes element:
▪ gateway-sender-ids : specifies the Gateway Sender ID(s) (as set in the gateway-sender element) to attach to the target region. You can specify multiple Gateway Sender IDs as a comma-separated list.
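For reference, with the cluster configuration service enabled, equivalent gateway objects can also be created from gfsh instead of cache-wan.xml (a sketch; this hands-on uses the XML file):

```
gfsh>create gateway-receiver --start-port=41000 --end-port=41999
gfsh>create gateway-sender --id=GwSender1 --remote-distributed-system-id=2 --parallel --enable-persistence=true --manual-start=true
```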
Multi-site (WAN) configuration – start your cluster
 Modify the existing start-up scripts for the Cache Servers to reference the cache configuration file that includes the Gateway Sender/Receiver configuration.
– Modify your start-up scripts for the Cache Servers (startServer*.sh) as follows.
 Restart your cluster.
 Confirm that the remote cluster on the other machine also starts successfully.
#! /bin/sh
gfsh start server --name=c1s1 --dir=server --server-port=0 --properties-file=geode.properties --cache-xml-file=cache-wan.xml
$ cd /Users/[your OS user name]/Geode
$ sh ./startLocator1.sh
$ sh ./startLocator2.sh
$ sh ./startServer1.sh &
$ sh ./startServer2.sh &
Now you’ve reconfigured both Apache Geode clusters with the Multi-site (WAN) configuration. Let’s confirm that data entries are replicated between them.
Start the client application to check the behavior (1/3)
 Start the Java client application to check the behavior of the Multi-site (WAN) configuration.
– Check the start-up script (client/startListener.sh)
▪ geode-dependencies.jar contains only a manifest file with the classpath information required by Apache Geode applications.
– Start the client : see the next slide for the arguments…
#!/bin/sh
java -classpath ${GEODE_HOME}/lib/geode-dependencies.jar:./lib/GeodeCacheListenerClient.jar io.pivotal.akitada.GeodeCacheListenerClient $1 $2 $3
$ cd /Users/[your OS user name]/Geode/client
$ sh ./startListener.sh
Usage: java io.pivotal.akitada.GeodeCacheListenerClient [region name] [locator host
name] [locator port]
$ sh ./startListener.sh ExRegion2 192.168.2.100 55221
Start the client application to check the behavior (2/3)
 Arguments for the client application
– 1st argument : the name of the region that receives updates from the remote cluster. If you attach the Gateway Sender to “ExRegion1” at the local cluster, your counterpart attaches one to “ExRegion2” at the remote cluster. In this case, specify “ExRegion2”, because the remote cluster sends updates for ExRegion2.
– 2nd argument : the address of one of the Locators.
– 3rd argument : the listen port of that Locator.
Start the client application to check the behavior (3/3)
 (Reference) Details about the client application
– For full details, refer to the source code, which you can find at the following path.
▪ /Users/[your OS user name]/Geode/client/src/io/pivotal/akitada/GeodeCacheListenerClient.java
– In the main() function, it connects to the region specified by the 1st argument and adds a Cache Listener to it. The Cache Listener triggers logic that prints messages whenever it receives updates in the target region.
– The application class itself extends CacheListenerAdapter and implements that logic as a Cache Listener.
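As an illustration, a minimal listener client along these lines might look like the following. This is a sketch based on the standard Apache Geode 1.x client API, not the shipped source; the actual GeodeCacheListenerClient.java may differ in details:

```java
package io.pivotal.akitada;

import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.cache.util.CacheListenerAdapter;

// Prints a message whenever a new entry arrives in the subscribed region.
public class GeodeCacheListenerClient extends CacheListenerAdapter<String, String> {

    @Override
    public void afterCreate(EntryEvent<String, String> event) {
        System.out.println("Received afterCreate event for entry: "
            + event.getKey() + ", " + event.getNewValue()
            + ", isOriginRemote=" + event.isOriginRemote());
    }

    public static void main(String[] args) {
        String regionName = args[0];
        String locatorHost = args[1];
        int locatorPort = Integer.parseInt(args[2]);

        // Subscription must be enabled for server-side events to reach this client.
        ClientCache cache = new ClientCacheFactory()
            .addPoolLocator(locatorHost, locatorPort)
            .setPoolSubscriptionEnabled(true)
            .create();

        Region<String, String> region = cache
            .<String, String>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
            .addCacheListener(new GeodeCacheListenerClient())
            .create(regionName);

        // Register interest in all keys so the servers push updates to this client.
        region.registerInterest("ALL_KEYS");
    }
}
```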
Multi-site (WAN) configuration – start Gateway Sender
 If you start a Gateway Sender at the local cluster before the Gateway Receiver at the remote cluster is up, you may see a flood of log messages from repeated attempts to connect to the remote Gateway Receiver. You probably don’t want that, so in this hands-on session you and your counterpart will start the Gateway Senders manually, after confirming that the Gateway Receivers have started at the remote cluster.
 Start gfsh without any arguments, connect to your cluster, and start the Gateway Senders as follows.
– Check the status of the Gateway Senders/Receivers before and after starting the Gateway Senders manually, using the “list gateways” sub-command.
gfsh>connect --locator=192.168.2.100[55221]
gfsh>list gateways
gfsh>start gateway-sender --id=GwSender1
gfsh>list gateways
Multi-site (WAN) configuration – check behavior
 Start gfsh and connect to your cluster. Put some data entries into the region that has the Gateway Sender attached.
 Those updates should be sent to the remote cluster. The client application running against the remote cluster will then react to those updates and log messages like the following in its console. This means data replication over the Multi-site (WAN) configuration succeeded.
gfsh>connect --locator=192.168.2.100[55221]
gfsh>put --region=/ExRegion1 --key='Something' --value='Data'
Received afterCreate event for entry: Something, Data, isOriginRemote=true
Now you have configured two Apache Geode clusters with the Multi-site (WAN) configuration and confirmed the data replication behavior!
Big data interview questions and answersKalyan Hadoop
 
Power Hadoop Cluster with AWS Cloud
Power Hadoop Cluster with AWS CloudPower Hadoop Cluster with AWS Cloud
Power Hadoop Cluster with AWS CloudEdureka!
 
Introduction to Apache Mesos
Introduction to Apache MesosIntroduction to Apache Mesos
Introduction to Apache MesosJoe Stein
 
Hadoop operations basic
Hadoop operations basicHadoop operations basic
Hadoop operations basicHafizur Rahman
 
Technical Overview of Apache Drill by Jacques Nadeau
Technical Overview of Apache Drill by Jacques NadeauTechnical Overview of Apache Drill by Jacques Nadeau
Technical Overview of Apache Drill by Jacques NadeauMapR Technologies
 
3. v sphere big data extensions
3. v sphere big data extensions3. v sphere big data extensions
3. v sphere big data extensionsChiou-Nan Chen
 
Introduction to Apache Spark :: Lagos Scala Meetup session 2
Introduction to Apache Spark :: Lagos Scala Meetup session 2 Introduction to Apache Spark :: Lagos Scala Meetup session 2
Introduction to Apache Spark :: Lagos Scala Meetup session 2 Olalekan Fuad Elesin
 
Hadoop Interview Questions And Answers Part-1 | Big Data Interview Questions ...
Hadoop Interview Questions And Answers Part-1 | Big Data Interview Questions ...Hadoop Interview Questions And Answers Part-1 | Big Data Interview Questions ...
Hadoop Interview Questions And Answers Part-1 | Big Data Interview Questions ...Simplilearn
 
Zookeeper Introduce
Zookeeper IntroduceZookeeper Introduce
Zookeeper Introducejhao niu
 

Similar to How to configure the cluster based on Multi-site (WAN) configuration (20)

Hadoop configuration & performance tuning
Hadoop configuration & performance tuningHadoop configuration & performance tuning
Hadoop configuration & performance tuning
 
Dell linux cluster sap
Dell linux cluster sapDell linux cluster sap
Dell linux cluster sap
 
Best Practices for Deploying Hadoop (BigInsights) in the Cloud
Best Practices for Deploying Hadoop (BigInsights) in the CloudBest Practices for Deploying Hadoop (BigInsights) in the Cloud
Best Practices for Deploying Hadoop (BigInsights) in the Cloud
 
ACADGILD:: HADOOP LESSON
ACADGILD:: HADOOP LESSON ACADGILD:: HADOOP LESSON
ACADGILD:: HADOOP LESSON
 
Learn to setup a Hadoop Multi Node Cluster
Learn to setup a Hadoop Multi Node ClusterLearn to setup a Hadoop Multi Node Cluster
Learn to setup a Hadoop Multi Node Cluster
 
Hadoop Interview Questions and Answers by rohit kapa
Hadoop Interview Questions and Answers by rohit kapaHadoop Interview Questions and Answers by rohit kapa
Hadoop Interview Questions and Answers by rohit kapa
 
A presentaion on Panasas HPC NAS
A presentaion on Panasas HPC NASA presentaion on Panasas HPC NAS
A presentaion on Panasas HPC NAS
 
July 2010 Triangle Hadoop Users Group - Chad Vawter Slides
July 2010 Triangle Hadoop Users Group - Chad Vawter SlidesJuly 2010 Triangle Hadoop Users Group - Chad Vawter Slides
July 2010 Triangle Hadoop Users Group - Chad Vawter Slides
 
Setting up mongodb sharded cluster in 30 minutes
Setting up mongodb sharded cluster in 30 minutesSetting up mongodb sharded cluster in 30 minutes
Setting up mongodb sharded cluster in 30 minutes
 
Design and Research of Hadoop Distributed Cluster Based on Raspberry
Design and Research of Hadoop Distributed Cluster Based on RaspberryDesign and Research of Hadoop Distributed Cluster Based on Raspberry
Design and Research of Hadoop Distributed Cluster Based on Raspberry
 
Big data interview questions and answers
Big data interview questions and answersBig data interview questions and answers
Big data interview questions and answers
 
Power Hadoop Cluster with AWS Cloud
Power Hadoop Cluster with AWS CloudPower Hadoop Cluster with AWS Cloud
Power Hadoop Cluster with AWS Cloud
 
Introduction to Apache Mesos
Introduction to Apache MesosIntroduction to Apache Mesos
Introduction to Apache Mesos
 
Hadoop operations basic
Hadoop operations basicHadoop operations basic
Hadoop operations basic
 
Technical Overview of Apache Drill by Jacques Nadeau
Technical Overview of Apache Drill by Jacques NadeauTechnical Overview of Apache Drill by Jacques Nadeau
Technical Overview of Apache Drill by Jacques Nadeau
 
3. v sphere big data extensions
3. v sphere big data extensions3. v sphere big data extensions
3. v sphere big data extensions
 
Introduction to Apache Spark :: Lagos Scala Meetup session 2
Introduction to Apache Spark :: Lagos Scala Meetup session 2 Introduction to Apache Spark :: Lagos Scala Meetup session 2
Introduction to Apache Spark :: Lagos Scala Meetup session 2
 
Hadoop
HadoopHadoop
Hadoop
 
Hadoop Interview Questions And Answers Part-1 | Big Data Interview Questions ...
Hadoop Interview Questions And Answers Part-1 | Big Data Interview Questions ...Hadoop Interview Questions And Answers Part-1 | Big Data Interview Questions ...
Hadoop Interview Questions And Answers Part-1 | Big Data Interview Questions ...
 
Zookeeper Introduce
Zookeeper IntroduceZookeeper Introduce
Zookeeper Introduce
 

More from Akihiro Kitada

Reactive Streams に基づく非同期処理プログラミング 〜 Reactor を使ってみた
Reactive Streams に基づく非同期処理プログラミング 〜 Reactor を使ってみたReactive Streams に基づく非同期処理プログラミング 〜 Reactor を使ってみた
Reactive Streams に基づく非同期処理プログラミング 〜 Reactor を使ってみたAkihiro Kitada
 
〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!Akihiro Kitada
 
【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!Akihiro Kitada
 
Apache Geode の Apache Lucene Integration を試してみた
Apache Geode の Apache Lucene Integration を試してみたApache Geode の Apache Lucene Integration を試してみた
Apache Geode の Apache Lucene Integration を試してみたAkihiro Kitada
 
Apache Calcite の Apache Geode Adapter を弄った
Apache Calcite の Apache Geode Adapter を弄ったApache Calcite の Apache Geode Adapter を弄った
Apache Calcite の Apache Geode Adapter を弄ったAkihiro Kitada
 
Grafana を使った Apache Geode クラスター監視
Grafana を使った Apache Geode クラスター監視Grafana を使った Apache Geode クラスター監視
Grafana を使った Apache Geode クラスター監視Akihiro Kitada
 
〜Apache Geode 入門 Multi-site(WAN)構成による クラスター連携
〜Apache Geode 入門 Multi-site(WAN)構成によるクラスター連携〜Apache Geode 入門 Multi-site(WAN)構成によるクラスター連携
〜Apache Geode 入門 Multi-site(WAN)構成による クラスター連携Akihiro Kitada
 
My first reactive programming - To deliver a deathblow “Shoryuken” with using...
My first reactive programming - To deliver a deathblow “Shoryuken” with using...My first reactive programming - To deliver a deathblow “Shoryuken” with using...
My first reactive programming - To deliver a deathblow “Shoryuken” with using...Akihiro Kitada
 
【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!Akihiro Kitada
 
〜Apache Geode 入門 gfsh によるクラスター構築・管理
〜Apache Geode 入門 gfsh によるクラスター構築・管理〜Apache Geode 入門 gfsh によるクラスター構築・管理
〜Apache Geode 入門 gfsh によるクラスター構築・管理Akihiro Kitada
 
はじめての Cloud Foundry: .NET アプリケーションのはじめ方
はじめての Cloud Foundry: .NET アプリケーションのはじめ方はじめての Cloud Foundry: .NET アプリケーションのはじめ方
はじめての Cloud Foundry: .NET アプリケーションのはじめ方Akihiro Kitada
 
Apache Geode で始める Spring Data Gemfire
Apache Geode で始めるSpring Data GemfireApache Geode で始めるSpring Data Gemfire
Apache Geode で始める Spring Data GemfireAkihiro Kitada
 
Reactor によるデータインジェスチョン
Reactor によるデータインジェスチョンReactor によるデータインジェスチョン
Reactor によるデータインジェスチョンAkihiro Kitada
 

More from Akihiro Kitada (13)

Reactive Streams に基づく非同期処理プログラミング 〜 Reactor を使ってみた
Reactive Streams に基づく非同期処理プログラミング 〜 Reactor を使ってみたReactive Streams に基づく非同期処理プログラミング 〜 Reactor を使ってみた
Reactive Streams に基づく非同期処理プログラミング 〜 Reactor を使ってみた
 
〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
 
【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
 
Apache Geode の Apache Lucene Integration を試してみた
Apache Geode の Apache Lucene Integration を試してみたApache Geode の Apache Lucene Integration を試してみた
Apache Geode の Apache Lucene Integration を試してみた
 
Apache Calcite の Apache Geode Adapter を弄った
Apache Calcite の Apache Geode Adapter を弄ったApache Calcite の Apache Geode Adapter を弄った
Apache Calcite の Apache Geode Adapter を弄った
 
Grafana を使った Apache Geode クラスター監視
Grafana を使った Apache Geode クラスター監視Grafana を使った Apache Geode クラスター監視
Grafana を使った Apache Geode クラスター監視
 
〜Apache Geode 入門 Multi-site(WAN)構成による クラスター連携
〜Apache Geode 入門 Multi-site(WAN)構成によるクラスター連携〜Apache Geode 入門 Multi-site(WAN)構成によるクラスター連携
〜Apache Geode 入門 Multi-site(WAN)構成による クラスター連携
 
My first reactive programming - To deliver a deathblow “Shoryuken” with using...
My first reactive programming - To deliver a deathblow “Shoryuken” with using...My first reactive programming - To deliver a deathblow “Shoryuken” with using...
My first reactive programming - To deliver a deathblow “Shoryuken” with using...
 
【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
【古いスライド】〜僕の初めてのリアクティブプログラミング Reactor を使ってリアクティブに昇龍拳を繰り出してみた!
 
〜Apache Geode 入門 gfsh によるクラスター構築・管理
〜Apache Geode 入門 gfsh によるクラスター構築・管理〜Apache Geode 入門 gfsh によるクラスター構築・管理
〜Apache Geode 入門 gfsh によるクラスター構築・管理
 
はじめての Cloud Foundry: .NET アプリケーションのはじめ方
はじめての Cloud Foundry: .NET アプリケーションのはじめ方はじめての Cloud Foundry: .NET アプリケーションのはじめ方
はじめての Cloud Foundry: .NET アプリケーションのはじめ方
 
Apache Geode で始める Spring Data Gemfire
Apache Geode で始めるSpring Data GemfireApache Geode で始めるSpring Data Gemfire
Apache Geode で始める Spring Data Gemfire
 
Reactor によるデータインジェスチョン
Reactor によるデータインジェスチョンReactor によるデータインジェスチョン
Reactor によるデータインジェスチョン
 

Recently uploaded

How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesThousandEyes
 
Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Kaya Weers
 
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS:  6 Ways to Automate Your Data IntegrationBridging Between CAD & GIS:  6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integrationmarketing932765
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfIngrid Airi González
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersNicole Novielli
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality AssuranceInflectra
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxLoriGlavin3
 
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...Nikki Chapple
 
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...itnewsafrica
 
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security ObservabilityGlenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security Observabilityitnewsafrica
 
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Mark Goldstein
 
Varsha Sewlal- Cyber Attacks on Critical Critical Infrastructure
Varsha Sewlal- Cyber Attacks on Critical Critical InfrastructureVarsha Sewlal- Cyber Attacks on Critical Critical Infrastructure
Varsha Sewlal- Cyber Attacks on Critical Critical Infrastructureitnewsafrica
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesBernd Ruecker
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesKari Kakkonen
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxGenerative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxfnnc6jmgwh
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Hiroshi SHIBATA
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxLoriGlavin3
 

Recently uploaded (20)

How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
 
Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)
 
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS:  6 Ways to Automate Your Data IntegrationBridging Between CAD & GIS:  6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integration
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdf
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software Developers
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptx
 
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...
 
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
 
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security ObservabilityGlenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
 
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
 
Varsha Sewlal- Cyber Attacks on Critical Critical Infrastructure
Varsha Sewlal- Cyber Attacks on Critical Critical InfrastructureVarsha Sewlal- Cyber Attacks on Critical Critical Infrastructure
Varsha Sewlal- Cyber Attacks on Critical Critical Infrastructure
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architectures
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examples
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxGenerative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 

How to configure the cluster based on Multi-site (WAN) configuration

  • 1. - Introduction guide for Apache Geode
How to configure the cluster based on Multi-site (WAN) configuration
Akihiro Kitada
2018/1/31
  • 2. What is "Apache Geode?" What is "Multi-site (WAN) configuration?"
  • 3. Apache Geode and Multi-site (WAN) configuration
 What is "Apache Geode?"
– The OSS version of Pivotal GemFire, an In-Memory Data Grid product.
– A key-value NoSQL in-memory database that scales out as a distributed system, lets users query data entries with the SQL-like OQL language, provides listeners that run logic triggered by data modification, can execute functions close to where the data resides, and so on.
– Similar products include Apache Ignite, Infinispan and Hazelcast.
– http://geode.apache.org/
 What is "Multi-site (WAN) configuration?"
– One of the topologies for replicating data between separate Apache Geode clusters, similar to Oracle GoldenGate or other database replication solutions.
– It allows for communication latency over the WAN: replication is executed asynchronously as batch processing, chunking updates together.
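As a quick, hypothetical illustration of OQL: once a cluster is running, a gfsh session can query a region with SQL-like syntax. The Locator address, region name /exampleRegion and value predicate below are made-up examples, not part of this guide's setup:

```shell
# Connect through a Locator, then run an OQL query against a region.
# Address, region name and field are illustrative placeholders.
gfsh -e "connect --locator=192.168.2.100[55221]" \
     -e "query --query=\"SELECT e.key, e.value FROM /exampleRegion.entries e WHERE e.value > 100\""
```

The query runs on the servers and returns matching entries to the gfsh console.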
  • 4. 4 Let’s configure two single Apache Geode clusters first, work them together based on Multi-site (WAN) and replicate data each others! For the purpose of observing data replication, let’s run a simple client application to notify the behavior of data replication!
  • 5. Target single cluster configuration (1/2)
Server machine: Locator 1, Locator 2, Cache Server 1, Cache Server 2 (redundant configuration), Partitioned Region x2 (redundant copy x1), data persisted to local disk.
This time, let's configure a single Apache Geode cluster with a minimum-redundancy configuration on your macOS or Linux host. It's O.K. to build it on virtual machines.
A "Locator" is the communication point for connecting to the target cluster and the manager of the cluster itself. One Locator is enough, but let's run two Locators for fault tolerance.
  • 6. Target single cluster configuration (2/2)
A "Cache Server" is the actual in-memory data store. Multiple Cache Servers aggregate so that they behave like one huge in-memory data store. This time, let's run two Cache Servers, again for fault tolerance.
A "Region" is similar to a "table" in an RDBMS. This time, let's create one "partitioned region", which distributes data much like the Hadoop HDFS data store (with a replication factor of 2), persisted to a local disk store.
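For reference only: this guide disables the cluster configuration service and defines the region in cache.xml, but with that service enabled, the same kind of region could be created from gfsh using a built-in region shortcut. The Locator address and region name are hypothetical:

```shell
# PARTITION_REDUNDANT_PERSISTENT = partitioned region with 1 redundant copy,
# persisted to disk - matching the layout described on this slide.
gfsh -e "connect --locator=192.168.2.100[55221]" \
     -e "create region --name=exampleRegion --type=PARTITION_REDUNDANT_PERSISTENT"
```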
  • 7. Target Multi-site (WAN) configuration
Cluster 1 and Cluster 2 are connected over the WAN (S = Gateway Sender, R = Gateway Receiver).
For HA purposes, each Cache Server hosts one Gateway Receiver and one Gateway Sender. The former is the service that receives updates from the other cluster; the latter is the service that sends updates to the other cluster.
As you can see, you will have created two regions: one configured to receive updates from the remote cluster, and another configured to send updates to the remote cluster.
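For illustration, the gateway wiring on one cluster can be sketched with gfsh commands like the following. The sender id, region name and remote distributed-system id are hypothetical; this guide actually configures them via cache-wan.xml:

```shell
# On cluster 1: receive updates from cluster 2, and send updates to it.
gfsh -e "connect --locator=192.168.2.100[55221]" \
     -e "create gateway-receiver" \
     -e "create gateway-sender --id=to-cluster-2 --remote-distributed-system-id=2 --parallel=true" \
     -e "create region --name=exampleRegion --type=PARTITION_REDUNDANT_PERSISTENT --gateway-sender-id=to-cluster-2"
```

On cluster 2 the mirror-image commands would be run, with --remote-distributed-system-id pointing back at cluster 1.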
  • 8. Now, let's configure a single Apache Geode cluster!
  • 9. Preparation (1/8)
On your machine, let's prepare the configuration files, start-up scripts, and local directories for logs and persistence files required by the Locators and Cache Servers.
  • 10. Preparation (2/8)
 Set up your network so that all the machines can communicate with each other over TCP/IP.
 Check the IP address assigned to your machine.
 Confirm your host name: $ hostname
 Add the following entry to /etc/hosts on your machine:
– [your IP] [your hostname]
– Ex) 192.168.2.100 akitada-mcbk13
In an environment with multiple NICs, connecting from the Locators to the JMX Manager server may fail without this configuration.
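The /etc/hosts step above can be scripted as follows. 192.168.2.100 is the example address from this slide; replace it with your machine's actual IP:

```shell
# Append "<IP> <hostname>" to /etc/hosts so Geode members can resolve this host.
echo "192.168.2.100 $(hostname)" | sudo tee -a /etc/hosts
```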
  • 11. Preparation (3/8)
 Deploy the base configuration and application
– Get the required artifacts from GitHub (https://github.com/quitada41/Geode-Multisite-handson) and deploy them under /Users/[user name]/Geode. An example deployment looks like this:
▪ /Users/akitada/Geode/GEO130.profile – profile that sets the required environment variables
▪ /Users/akitada/Geode/cache-wan.xml – cache configuration file for the Multi-site (WAN) configuration
▪ /Users/akitada/Geode/cache.xml – cache configuration file for the single-cluster configuration
▪ /Users/akitada/Geode/client – client application directory used to check replication behavior in the Multi-site (WAN) configuration
— lib/GeodeCacheListenerClient.jar – jar file for the client application
— src/io/pivotal/akitada/GeodeCacheListenerClient.java – source code for the client application (for your reference)
— startListener.sh – start-up script for the client application
▪ /Users/akitada/Geode/geode-wan-diff.properties – properties file for the Multi-site configuration (delta from the single-cluster configuration)
▪ /Users/akitada/Geode/geode.properties – properties file for the single-cluster configuration
▪ /Users/akitada/Geode/locator* – local directories for the Locators
▪ /Users/akitada/Geode/server* – local directories for the Cache Servers
▪ /Users/akitada/Geode/startLocator*.sh – start-up scripts for the Locators
▪ /Users/akitada/Geode/startServer*.sh – start-up scripts for the Cache Servers
▪ /Users/akitada/Geode/stopLocator*.sh – shutdown scripts for the Locators
▪ /Users/akitada/Geode/stopServer*.sh – shutdown scripts for the Cache Servers
  • 12. Preparation (4/8)
 Unzip/untar the Apache Geode binary, install it in your preferred directory, and set the environment variables required to run gfsh*.
– Set the JAVA_HOME environment variable
– Determine the install directory: Ex) /opt/Apache/apache-geode-1.3.0
– Set the PATH environment variable so that gfsh* can be run
– Example profile setting the required environment variables (GEO130.profile):
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
#export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_152.jdk/Contents/Home
export GEODE_HOME=/opt/Apache/apache-geode-1.3.0
export PATH=$JAVA_HOME/bin:$GEODE_HOME/bin:$PATH
* = pronounced "jee-fish" – the command line tool used to manage and configure Apache Geode.
  • 13. Preparation (5/8)
 Set the required environment variables
$ cd /Users/[your OS user name]/Geode
$ . ./GEO130.profile
– After this, execute every operation from this console.
 Modify the property file (1/3)
– You can find the file "geode.properties" under your Geode directory (/Users/[OS user name]/Geode). Modify it according to your machine environment.
– The default name of the property file is "gemfire.properties", inherited from Pivotal GemFire (the commercial version of Apache Geode). Any file name is O.K. as long as you specify the actual file name when starting the Locators and Cache Servers.
  • 14. Preparation (6/8)
 Modify the property file (2/3)
– Mainly, modify the IP addresses according to your environment.
▪ x.x.x.x = IP address of your machine
log-level=config
locators=x.x.x.x[55221],x.x.x.x[55222]
bind-address=x.x.x.x
server-bind-address=x.x.x.x
jmx-manager-bind-address=x.x.x.x
enable-cluster-configuration=false
  • 15. Preparation (7/8)
 Modify the property file (3/3)
– Explanation of each specified property:
▪ log-level : specifies the logging level. Setting "config" is convenient for checking the actual configuration in addition to the general info-level log messages.
▪ locators : specifies the list of Locators to connect to (hostname[port],hostname[port],…). This is required to start the cluster properly.
▪ bind-address : specifies the host name used for peer-to-peer communication in the cluster. If the target host has only one NIC, you don't have to specify this parameter, but with multiple NICs it is recommended so that communication goes over the expected network segment.
▪ server-bind-address : specifies the host name used for client-server communication. As with bind-address, it can be omitted on a single-NIC host. This applies only to Cache Server processes.
▪ jmx-manager-bind-address : specifies the host name used for communication with the JMX Manager service. As with bind-address, it can be omitted on a single-NIC host. This applies only to members hosting the JMX Manager service; by default, that is only the Locator that starts first.
▪ enable-cluster-configuration : specifies whether to enable the cluster configuration service. Here it is set to false, disabling the service, because the cache is configured via cache.xml. When the service is enabled, all cache configuration done via gfsh is persisted in the Locators' directories and automatically re-applied when the cluster restarts.
▪ For any parameter that takes a host name, you can specify an IP address instead, provided your system can resolve host names via DNS, /etc/hosts, or similar.
  • 16. Preparation (8/8)  On each machine, you have directories for Locators and Cache Servers which store log files, persistent disk stores, and so on. 16 $ cd /Users/[your OS user name]/Geode $ ls -l : drwxr-xr-x 9 akitada staff 306 1 17 14:04 locator1/ : drwxr-xr-x 5 akitada staff 170 1 17 14:04 server1/ : In this case, "locator1" is the directory for Locator 1 and "server1" is the directory for Cache Server 1.
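If the directories do not exist yet on your machine, a minimal sketch for creating the layout shown above (GEODE_ROOT mirrors the /Users/[your OS user name]/Geode path used in the slides; adjust it to your environment):

```shell
#!/bin/sh
# Create the per-process working directories the slides expect.
# GEODE_ROOT is an assumption based on the slide's path.
GEODE_ROOT="${HOME}/Geode"
mkdir -p "${GEODE_ROOT}/locator1" "${GEODE_ROOT}/locator2" \
         "${GEODE_ROOT}/server1"  "${GEODE_ROOT}/server2"
ls -l "${GEODE_ROOT}"
```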
  • 17. Start Locators (1/3) 17 Server machine Locator 1 Cache Server 1 Locator 2 Cache Server 2 Partitioned Region x2 (Redundant copy x1) Local Disk Local Disk Data Persistence Data Persistence Redundant Configuration Start two Locators on each machine using gfsh from the command line.
  • 18. Start Locators (2/3)  Start Locators on your machine – Modify the start-up scripts (startLocator*.sh) : set each Locator's name with the "--name" parameter. Any name is O.K. as long as it is unique within the cluster. – Start the Locators 18 $ cd /Users/[your OS user name]/Geode $ sh ./startLocator1.sh $ sh ./startLocator2.sh #! /bin/sh gfsh start locator --name=**** --dir=locator --port=55221 --properties-file=geode.properties
  • 19. Start Locators (3/3)  Explanation of the parameters specified with the "gfsh start locator" command – --name : specifies a cluster-unique name that identifies the target Locator. Here, use each Locator's directory name; any name is O.K. as long as it is unique within the cluster. – --dir : specifies the path of the Locator directory created in the Preparation section. Either a full path or a relative path is fine. Here, specify the path relative to the cluster's root directory (/Users/[your OS user name]/Geode), where you start the Locators. – --port : specifies the Locator's listen port number, matching the locators list in geode.properties. Here, assign "55221" to Locator 1 and "55222" to Locator 2. – --properties-file : specifies the path of geode.properties. Either a full path or a relative path is fine. 19
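Since the two start-up scripts differ only in name, directory, and port, they can be generated in one pass. A minimal sketch; the locator1/locator2 names are assumptions (any cluster-unique names work), and the ports follow the slides:

```shell
#!/bin/sh
# Generate startLocator1.sh and startLocator2.sh with matching
# name/dir/port per Locator. Names here are illustrative.
for i in 1 2; do
  port=$((55220 + i))
  cat > "startLocator${i}.sh" <<EOF
#!/bin/sh
gfsh start locator --name=locator${i} --dir=locator${i} --port=${port} --properties-file=geode.properties
EOF
  chmod +x "startLocator${i}.sh"
done
cat startLocator1.sh startLocator2.sh
```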
  • 20. Start Cache Servers (1/4) 20 Server machine Locator 1 Cache Server 1 Locator 2 Cache Server 2 Partitioned Region x2 (Redundant copy x1) Local Disk Local Disk Data Persistence Data Persistence Redundant Configuration Start two Cache Servers using the gfsh command.
  • 21. Start Cache Servers (2/4)  Start Cache Servers on your machine – Modify the start-up scripts (startServer*.sh) : set each Cache Server's name with the "--name" parameter. Any name is O.K. as long as it is unique within the cluster. – Start the Cache Servers 21 $ cd /Users/[your OS user name]/Geode $ sh ./startServer1.sh $ sh ./startServer2.sh #! /bin/sh gfsh start server --name=**** --dir=server --server-port=0 --properties-file=geode.properties --cache-xml-file=cache.xml
  • 22. Start Cache Servers (3/4)  Explanation of the parameters specified with the "gfsh start server" command – --name : specifies a cluster-unique name that identifies the target Cache Server. Here, use each Cache Server's directory name; any name is O.K. as long as it is unique within the cluster. – --dir : specifies the path of the Cache Server directory created in the Preparation section. Either a full path or a relative path is fine. Here, specify the path relative to the cluster's root directory (/Users/[your OS user name]/Geode), where you start the Cache Servers. – --server-port : specifies the listen port number used for client-server communication. By setting it to "0", an unused port number is assigned automatically. The assigned port of each Cache Server is registered with the Locators, so client applications can learn the target Cache Server's listen port from the Locators. Of course, you can also set a specific port number. – --properties-file : specifies the path of geode.properties. Either a full path or a relative path is fine. – --cache-xml-file : specifies the path of the cache configuration file. Each Cache Server creates Regions and so on according to this file. 22
  • 23. Start Cache Servers (4/4)  (reference) the example of cache configuration file (cache.xml) 23 <?xml version="1.0" encoding="UTF-8"?> <cache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://geode.apache.org/schema/cache" xsi:schemaLocation="http://geode.apache.org/schema/cache http://geode.apache.org/schema/cache/cache-1.0.xsd" version="1.0" lock-lease="120" lock-timeout="60" search-timeout="300" is-server="true" copy-on-read="false"> <region name="ExRegion1" refid="PARTITION_REDUNDANT_PERSISTENT" /> <region name="ExRegion2" refid="PARTITION_REDUNDANT_PERSISTENT" /> </cache> This definition creates two regions, ExRegion1 and ExRegion2, as Partitioned Regions with one redundant copy and data persisted to disk (PARTITION_REDUNDANT_PERSISTENT).
  • 24. 24 Now you've configured your Apache Geode cluster. Let's confirm the actual configuration and behavior using gfsh.
  • 25. Cluster management by gfsh (1/2)  If you execute the gfsh command without any arguments, you get the command prompt - "gfsh>" - shown below, where you can execute each sub command interactively.  As in "bash", you can list all available sub commands and complete their parameters using the TAB key. 25 $ gfsh _________________________ __ / _____/ ______/ ______/ /____/ / / / __/ /___ /_____ / _____ / / /__/ / ____/ _____/ / / / / /______/_/ /______/_/ /_/ 1.3.0 Monitor and Manage Apache Geode gfsh>
  • 26. Cluster management by gfsh (2/2)  First, you have to execute the following command to connect to a Locator; almost all sub commands become available only after connecting. – Specify one of the Locators' host names and port numbers with the "--locator" parameter.  Now, let's execute the following management sub commands and check the results. 26 gfsh>connect --locator=machine1[55221] gfsh>help gfsh>list members gfsh>show log --member=server1 --lines=100 gfsh>change loglevel --loglevel=fine --members=server1 gfsh>show log --member=server1 --lines=100 gfsh>change loglevel --loglevel=config --members=server1 gfsh>status server --name=server1 gfsh>describe member --name=server1
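Instead of typing each sub command interactively, the same checks can be saved to a script file and executed in one shot with gfsh's run command. A sketch, assuming the file name status-check.gfsh (an illustrative name):

```
connect --locator=machine1[55221]
list members
status server --name=server1
describe member --name=server1
show log --member=server1 --lines=100
```

You would execute it from the shell with `gfsh run --file=status-check.gfsh`, which is handy for repeating the same health check after each restart.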
  • 27. 27 You’ve confirmed the configuration of your Apache Geode cluster. Next, let’s put and get data entries via gfsh and query them.
  • 28. Put, get, query data entries (1/2)  First of all, execute the following command to confirm that the two Regions - ExRegion1 and ExRegion2 - are actually created.  Next, confirm the detailed configuration of each Region.  Once you have confirmed both Regions are successfully created, let's put, get and query some data entries. 28 gfsh>list region gfsh>describe region --name=/ExRegion1 gfsh>describe region --name=/ExRegion2
  • 29. Put, get, query data entries (2/2)  Now, put a data entry. Execute the following command after connecting to the cluster via gfsh; it actually inserts a key-value pair. Specify any strings you like with the --key and --value parameters. Put multiple data entries by changing the key string and the value string.  Get a data entry by specifying an existing key as follows.  Query all the values you put in the previous step as follows. The query language is similar to SQL.  Query all the key-value pairs you put in the previous step as follows. You can use an alias to refer to the target region.  You can use a "where" clause to filter the result by specifying conditions, as in SQL. 29 gfsh>put --region=/ExRegion1 --key='<any string>' --value='<any string>' gfsh>get --region=/ExRegion1 --key='<existing key>' gfsh>query --query="select distinct * from /ExRegion1" gfsh>query --query="select ex1.key,ex1.value from /ExRegion1.entrySet ex1" gfsh>query --query="select ex1.key,ex1.value from /ExRegion1.entrySet ex1 where ex1.key='<existing key>'"
  • 30. Post process  To prepare for the next session, shut down your cluster.  (reference) gfsh commands to shut down Locators and Cache Servers – The shortest command lines, specifying the local directory 30 $ cd /Users/[your OS user name]/Geode $ sh ./stopServer1.sh & $ sh ./stopServer2.sh & $ sh ./stopLocator1.sh $ sh ./stopLocator2.sh gfsh stop server --dir=server1 gfsh stop locator --dir=locator1
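As an alternative to stopping each process one by one, a connected gfsh session can stop the whole cluster at once with the shutdown sub command; a sketch:

```
gfsh>connect --locator=machine1[55221]
gfsh>shutdown --include-locators=true
```

Without --include-locators=true, shutdown stops only the Cache Servers and leaves the Locators running, which is useful when you plan to restart the servers right away.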
  • 31. 31 Now Apache Geode clusters are running on each machine. Next, let's reconfigure them with the Multi-site (WAN) configuration and replicate data entries between the clusters!
  • 32. Multi-site (WAN) configuration – cluster configuration (1/2) 32 Cluster 1 Cluster 2 WAN S R S R S R S R S = Gateway Sender R = Gateway Receiver Let's add the additional properties to the property file to replicate data entries between both clusters.
  • 33. Multi-site (WAN) configuration – cluster configuration (2/2)  Set the properties required for the Multi-site (WAN) configuration – Delta file (geode-wan-diff.properties) : set the following value for each additional property. ▪ x = the cluster ID. Set a unique ID for each cluster; any positive integer is fine. It must be the same for all members of the same cluster. ▪ y = the Locator list of the remote cluster. In this case, set the remote cluster's Locator IPs. – Append the above properties to the existing geode.properties file. 33 $ cd /Users/[your OS user name]/Geode $ cat geode-wan-diff.properties >> geode.properties distributed-system-id=x remote-locators=y.y.y.y[55221],y.y.y.y[55222]
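The append step can be verified end to end without a running cluster. A minimal self-contained sketch (file contents reproduce the slides; the IP addresses are placeholders for illustration):

```shell
#!/bin/sh
# Recreate the base properties file and the WAN delta, then append the
# delta as the slide does and show the merged result.
cat > geode.properties <<'EOF'
log-level=config
locators=192.168.2.100[55221],192.168.2.100[55222]
enable-cluster-configuration=false
EOF

cat > geode-wan-diff.properties <<'EOF'
distributed-system-id=1
remote-locators=192.168.2.200[55221],192.168.2.200[55222]
EOF

# Append the WAN-specific properties to the existing file
cat geode-wan-diff.properties >> geode.properties
cat geode.properties
```

Because the delta lives in its own file, the same base geode.properties can be reused for both the single-cluster and the WAN exercise.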
  • 34. Multi-site (WAN) configuration – cache configuration (1/4) 34 Cluster 1 Cluster 2 WAN S R S R S R S R S = Gateway Sender R = Gateway Receiver Modify the cache configuration file and add settings for the Gateway Sender, which sends updates to the remote cluster, and the Gateway Receiver, which receives updates from the remote cluster.
  • 35. Multi-site (WAN) configuration – cache configuration (2/4)  Modify and confirm the cache configuration file with the Gateway Sender/Receiver configuration added (cache-wan.xml) : check the highlighted values below 35 <?xml version="1.0" encoding="UTF-8"?> <cache xmlns:xsi=…> <gateway-sender id="GwSender1" enable-persistence="true" manual-start="true" batch-size="100" batch-time-interval="1000" remote-distributed-system-id="2" parallel="true" /> <gateway-receiver manual-start="false" start-port="41000" end-port="41999" /> <region name="ExRegion1" refid="PARTITION_REDUNDANT_PERSISTENT"> <region-attributes gateway-sender-ids="GwSender1" /> </region> <region name="ExRegion2" refid="PARTITION_REDUNDANT_PERSISTENT"/> </cache> Gateway Sender configuration: configuration to send updates to the remote cluster Gateway Receiver configuration: configuration to receive updates from the remote cluster A Gateway Sender with ID "GwSender1" is added to the ExRegion1 region to send its updates to the remote cluster. The Gateway Receiver service starts automatically when the Cache Server starts (manual-start="false").
  • 36. Multi-site (WAN) configuration – cache configuration (3/4)  Explanation of the attributes specified in cache-wan.xml (1/2) – gateway-sender element: ▪ id : specifies the ID of this Gateway Sender configuration. In the example it's "GwSender1" because it is added to "ExRegion1." If you add one to "ExRegion2", it could be "GwSender2." ▪ enable-persistence : specifies whether updates in the Gateway Sender queue are persisted. In this case the target region itself is persisted, so set this value to true as well. ▪ manual-start : specifies whether you have to start the Gateway Sender service manually. In the example it is set to true, which means you have to start the Gateway Sender service manually via gfsh or the API. Otherwise, the Gateway Sender service starts automatically when the target Cache Server starts. ▪ remote-distributed-system-id : specifies the remote cluster to which this Gateway Sender sends updates. In this example the remote cluster's distributed system ID is "2", which was set via the distributed-system-id property at the remote cluster. ▪ parallel : specifies whether updates are sent in parallel. In this case it's "true", which is called a "Parallel Gateway Sender." 36
  • 37. Multi-site (WAN) configuration – cache configuration (4/4)  Explanation of the attributes specified in cache-wan.xml (2/2) – gateway-receiver element: ▪ manual-start : specifies whether you have to start the Gateway Receiver service manually. In the example it is set to false, which means the Gateway Receiver service starts automatically when the target Cache Server starts. – region element: ▪ name : specifies the region name. In the Multi-site (WAN) configuration, it is the region to which the Gateway Sender(s) sending updates to the remote cluster are added. In this example, the Gateway Sender is added to "ExRegion1" here, while it is added to "ExRegion2" at the remote cluster. – region-attributes element: ▪ gateway-sender-ids : specifies the IDs of the Gateway Senders (set in the gateway-sender elements) to attach to the target region. You can specify multiple Gateway Sender IDs as a comma-separated list. 37
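For reference, the mirrored cache-wan.xml the remote cluster (distributed-system-id=2) would use is sketched below: GwSender2 attached to ExRegion2, pointing back at cluster 1. This mirror file is an assumption derived from the explanations above; it is not shown in the original slides, and the remote side should adapt it to its own setup.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns="http://geode.apache.org/schema/cache"
       xsi:schemaLocation="http://geode.apache.org/schema/cache
                           http://geode.apache.org/schema/cache/cache-1.0.xsd"
       version="1.0" is-server="true">
  <!-- Send ExRegion2 updates back to cluster 1 (distributed-system-id=1) -->
  <gateway-sender id="GwSender2" enable-persistence="true" manual-start="true"
                  batch-size="100" batch-time-interval="1000"
                  remote-distributed-system-id="1" parallel="true" />
  <!-- Receiver starts automatically with the Cache Server -->
  <gateway-receiver manual-start="false" start-port="41000" end-port="41999" />
  <region name="ExRegion1" refid="PARTITION_REDUNDANT_PERSISTENT" />
  <region name="ExRegion2" refid="PARTITION_REDUNDANT_PERSISTENT">
    <region-attributes gateway-sender-ids="GwSender2" />
  </region>
</cache>
```

Comparing the two files side by side makes the symmetry clear: each cluster sends on one region and receives on the other.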
  • 38. Multi-site (WAN) configuration – start your cluster  Modify the existing start-up scripts for the Cache Servers to refer to the cache configuration file that includes the Gateway Sender/Receiver configuration. – Modify your start-up scripts for the Cache Servers (startServer*.sh) as follows.  Restart your cluster.  Confirm that the remote cluster on the other machine starts successfully. 38 #! /bin/sh gfsh start server --name=c1s1 --dir=server --server-port=0 --properties-file=geode.properties --cache-xml-file=cache-wan.xml $ cd /Users/[your OS user name]/Geode $ sh ./startLocator1.sh $ sh ./startLocator2.sh $ sh ./startServer1.sh & $ sh ./startServer2.sh &
  • 39. 39 Now you've reconfigured both Apache Geode clusters with the Multi-site (WAN) configuration. Let's confirm that data entries are replicated to each other.
  • 40. Start the client application to check the behavior (1/3)  Start the Java client application to check the behavior of the Multi-site (WAN) configuration. – Check the start-up script (client/startListener.sh) ▪ geode-dependencies.jar only includes a manifest file with the classpath information required by Apache Geode applications. – Start the client : see the next slide on arguments… 40 #!/bin/sh java -classpath ${GEODE_HOME}/lib/geode-dependencies.jar:./lib/GeodeCacheListenerClient.jar io.pivotal.akitada.GeodeCacheListenerClient $1 $2 $3 $ cd /Users/[your OS user name]/Geode/client $ sh ./startListener.sh Usage: java io.pivotal.akitada.GeodeCacheListenerClient [region name] [locator host name] [locator port] $ sh ./startListener.sh ExRegion2 192.168.2.100 55221
  • 41. Start the client application to check the behavior (2/3)  Arguments for the client application – 1st argument : Specify the name of the region which receives updates from the remote cluster. If you attach the Gateway Sender to "ExRegion1" at the local cluster, your mate attaches a Gateway Sender to "ExRegion2" at the remote cluster. In this case, you should specify "ExRegion2" as the argument because the remote cluster sends updates for ExRegion2. – 2nd argument : Specify the address of one of the Locators. – 3rd argument : Specify the listen port of one of the Locators. 41
  • 42. Start the client application to check the behavior (3/3)  (reference) details about the client application – For full details, please refer to the source code, which you can find at the following path. ▪ /Users/[your OS user name]/Geode/client/src/io/pivotal/akitada/GeodeCacheListenerClient.java – In the main() function, it connects to the region specified by the 1st argument and adds a Cache Listener to it. The Cache Listener triggers logic that prints messages when updates arrive in the target region. – The application class itself extends CacheListenerAdapter and contains the Cache Listener logic. 42
  • 43. Multi-site (WAN) configuration – start Gateway Sender  If you start the Gateway Sender at the local cluster before the Gateway Receiver at the remote cluster is running, you may see tons of log messages about trying to connect to the remote Gateway Receiver. You may not like this behavior, so in this hands-on session, your mate and you will start the Gateway Senders manually after confirming that the Gateway Receivers are running at the remote cluster.  Start gfsh without any arguments, connect to your cluster, and start the Gateway Sender as follows. – Check the status of the Gateway Senders/Receivers before and after starting the Gateway Sender manually with the "list gateways" sub command. 43 gfsh>connect --locator=192.168.2.100[55221] gfsh>list gateways gfsh>start gateway-sender --id=GwSender1 gfsh>list gateways
  • 44. Multi-site (WAN) configuration – check behavior  Start gfsh and connect to your cluster. Put some data entries into the region to which the Gateway Sender is attached.  Those updates are sent to the remote cluster. The client application running against the remote cluster then responds to those updates and logs the following kind of message in the console, which means data replication based on the Multi-site (WAN) configuration succeeded. 44 gfsh>connect --locator=192.168.2.100[55221] gfsh>put --region=/ExRegion1 --key='Something' --value='Data' Received afterCreate event for entry: Something, Data, isOriginRemote=true
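Replication can also be confirmed without the client application: from a gfsh session connected to the remote cluster, read the entry back directly. A sketch, where the remote Locator host below is a placeholder for your mate's machine:

```
gfsh>connect --locator=<remote locator host>[55221]
gfsh>get --region=/ExRegion1 --key='Something'
gfsh>query --query="select distinct * from /ExRegion1"
```

If the get returns the value 'Data' you put at the local cluster, the WAN replication path is working end to end.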
  • 45. 45 Now you've completed configuring two Apache Geode clusters with the Multi-site (WAN) configuration and confirmed the data replication behavior!