Pivotal GemFire® v8.0

Running the Tutorial

The tutorial shows a sample social networking application that stores people's profiles and posts. Each person has a profile that lists their friends, and each person can write posts.

Running the tutorial encompasses the parts that follow.

Before you begin, verify that you meet the prerequisites.

Part 1: Start a Locator

JVMs running GemFire discover each other through multicast messaging or through a TCP locator service, called a locator. Multicasting is a good way to run quick tests, but it is less robust and flexible than the locator service. The locator runs as a separate process; each new member connects to it to discover the list of available peers, and clients connect to it to get server connection information. For more information, see How Member Discovery Works. This example uses a locator.
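The discovery settings in this tutorial are passed to the cache programmatically, but the same two properties can also be sketched declaratively. A minimal gemfire.properties fragment for a peer that discovers members through this tutorial's locator (illustrative; the property values match the ones set later in Part 2) would look like:

```properties
# Discover peers through the locator from Part 1 instead of multicast.
locators=localhost[55221]
mcast-port=0
```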

Figure 1. Peer Discovery Using a Locator
  1. Change directories to a working directory (if necessary, create a directory) where you have write permissions. For example:
    $ cd /Users/username/<tutorial_working_directory>
  2. Create a jar file with the Java classes required for this tutorial. You will deploy this jar file later using the deploy command. For example:
    $ jar cf tutorial.jar -C $GEMFIRE/SampleCode/tutorial/classes .
  3. Start the gfsh command-line interface using the following command:
    $ gfsh
        _________________________     __
       / _____/ ______/ ______/ /____/ /
      / /  __/ /___  /_____  / _____  / 
     / /__/ / ____/  _____/ / /    / /  
    /______/_/      /______/_/    /_/    v8.0.0
    Monitor and Manage GemFire

    Keep the window where this gfsh command-line interface is running open. You will use this window to run gfsh commands throughout this tutorial.

  4. Start a locator.
    Type the following command at the gfsh prompt:
    gfsh>start locator --name=locator1 --port=55221
    The locator process runs in the background, listening for connections on port 55221. To stop the process, you can type stop locator --dir=locator1 at the gfsh prompt. But don't stop it yet.

Part 2: Create a Cache and Start Peers

You will store the data on several GemFire peers. The first step to starting up a GemFire peer is to create a com.gemstone.gemfire.cache.Cache. The cache is the central component of GemFire. It manages connections to other GemFire peers. For more information, see Cache Management.
  1. Use a text editor or your favorite IDE to open the GemfireDAO.java file in the tutorial application source code. This file is located in the <product_directory>/SampleCode/tutorial/src/com/gemstone/gemfire/tutorial/storage directory, where <product_directory> corresponds to the location where you installed Pivotal GemFire. Look at the cache creation in the initPeer method. The first thing this method does is create a cache:
    Cache cache = new CacheFactory()
          .set("locators", "localhost[55221]")
          .set("mcast-port", "0")
          .set("log-level", "error")
          .create();
    The properties set on the factory configure the cache:
    • The GemFire locators property tells the cache which locators to use to discover other GemFire JVMs.
    • The mcast-port property tells GemFire not to use multicast discovery to find peers.
    • The log-level property controls the log level of GemFire's internal logging. Here it is set to "error" to limit the amount of messaging that will show up in the console.
    When this code is run, after the call to create finishes, this peer will discover other peers and connect to them.
  2. Run the Peer application in two terminal sessions. You already have one window open where you started the locator. Start another terminal window. In each window, run the Peer application:
    $ java -cp "$GEMFIRE/SampleCode/tutorial/classes:$GEMFIRE/lib/server-dependencies.jar" com.gemstone.gemfire.tutorial.Peer
    The peers start up and connect to each other. You should see the interactive command prompt for the tutorial's social networking application in each window:
    person [name] [friend] [friend] ...
      Add a person. The person's name and friends cannot
      contain spaces.
    post [author] [text]
      Add a post.
    people
      List all people.
    posts
      List all posts.
    quit
      Exit the shell.

Part 3: Create Replicated Regions

The GemFire com.gemstone.gemfire.cache.Region interface defines a key-value collection. Region extends the java.util.concurrent.ConcurrentMap interface. The simplest type of region is a replicated region. Every peer that hosts a replicated region stores a copy of the entire region locally. Changes to the replicated region are sent synchronously to all peers that host the region. For more information on regions, see Data Regions.
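Because Region extends java.util.concurrent.ConcurrentMap, code written against the map interface carries over to regions directly. As a minimal sketch of those map semantics (using a plain ConcurrentHashMap as a local stand-in for a region, so no GemFire classes are required):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class RegionAsMapDemo {
    public static void main(String[] args) {
        // A replicated region behaves like a ConcurrentMap whose contents
        // are copied to every hosting peer; a ConcurrentHashMap stands in
        // for the region here to show the map semantics only.
        ConcurrentMap<String, String> people = new ConcurrentHashMap<>();

        people.put("Isabella", "Profile [friends=[]]");
        // putIfAbsent inserts only when the key is absent -- a useful
        // primitive when several peers may race to create the same entry.
        people.putIfAbsent("Isabella", "Profile [friends=[Ethan]]");

        System.out.println(people.get("Isabella")); // original value kept
    }
}
```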

This procedure walks you through the creation of a replicated region called People.

Perform the following steps:

  1. Look at the initPeer method in GemfireDAO.java to see where it creates the people region.

    To create a com.gemstone.gemfire.cache.Region, you use the com.gemstone.gemfire.cache.RegionFactory class. This is the code in the initPeer method that creates the people region:

        people = cache.<String, Profile>createRegionFactory(REPLICATE)
            .addCacheListener(new LoggingCacheListener())
            .create("people");
    The people region is constructed with com.gemstone.gemfire.cache.RegionShortcut.REPLICATE, which tells the factory to start with the configuration for a replicated region. The addCacheListener call adds a cache listener to the region. You can use a cache listener to receive notifications when the data in the region changes. This sample application includes a LoggingCacheListener class, which prints changes to the region to System.out and lets you see how entries are distributed.
  2. Look at the addPerson method. It adds the entry to the region by calling the put method of Region.
      public void addPerson(String name, Profile profile) {
        people.put(name, profile);
      }
    Calling put on the people region distributes the person to all other peers that host that region. After this call completes, each peer will have a copy of this person.
  3. Add people. In one of your terminal windows, type:
    person Isabella
    person Ethan
    You will see the users show up in the other window:
    In region people created key Isabella value Profile [friends=[]]
    In region people created key Ethan value Profile [friends=[]]

Part 4: Create Partitioned Regions

You expect to create a lot of posts. Because of that, you do not want to host a copy of the posts on every server. Thus you store them in a partitioned region. The API to use the partitioned region is the same as that for a replicated region, but the data is stored differently. A partitioned region lets you control how many copies of your data will exist in the distributed system. The data is partitioned over all peers that host the partitioned region. GemFire automatically maintains the number of copies of each partition that you request.

In the above diagram, the application code writes the post "I like toast" to two JVMs; it stores a primary copy in one peer (JVM 2) and a redundant copy in another peer (JVM 3). You will also notice that while each post appears in the local data of two peers, the logical view contains all three posts.
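The way entries spread across peers can be illustrated with a simple hashing sketch. This is not GemFire's actual bucket-assignment code (the real implementation uses a configurable bucket count and its own placement logic); it only shows the idea of hashing each key to a bucket whose primary and redundant copies live on different peers:

```java
public class PartitionSketch {
    // Hypothetical helper: map a key to one of totalBuckets buckets.
    static int bucketFor(Object key, int totalBuckets) {
        // floorMod avoids a negative bucket for negative hash codes.
        return Math.floorMod(key.hashCode(), totalBuckets);
    }

    public static void main(String[] args) {
        int totalBuckets = 4; // assumed small count for the demo
        int peers = 3;        // three peers, as in this part of the tutorial
        String[] posts = {"I like toast", "LOL!", "Hello"};
        for (String post : posts) {
            int bucket = bucketFor(post, totalBuckets);
            // With one redundant copy, the primary and the redundant copy
            // of a bucket are placed on two different peers.
            int primaryPeer = bucket % peers;
            int redundantPeer = (bucket + 1) % peers;
            System.out.println("\"" + post + "\" -> bucket " + bucket
                + " (primary on peer " + primaryPeer
                + ", redundant on peer " + redundantPeer + ")");
        }
    }
}
```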

  1. Look at the code that creates the posts region. You can create partitioned regions with the PARTITION_XXX shortcuts.
    To create the posts region, the initPeer method uses the com.gemstone.gemfire.cache.RegionShortcut.PARTITION_REDUNDANT shortcut to tell GemFire to create a partitioned region that keeps one primary and one redundant copy of each post on different machines.
    posts = cache.<PostID, String>createRegionFactory(PARTITION_REDUNDANT)
        .addCacheListener(new LoggingCacheListener())
        .create("posts");
  2. Start another terminal window and launch the peer application in that window.
    $ java -cp "$GEMFIRE/SampleCode/tutorial/classes:$GEMFIRE/lib/server-dependencies.jar" com.gemstone.gemfire.tutorial.Peer
    You should have three peers running now. Each post you create will be stored in only two of these peers.
  3. Add some posts to the posts region.
    > post Isabella I like toast
    > post Isabella LOL!
    > post Ethan Hello
    You see that the listener in only one of the JVMs is invoked for each post. For example, in only one of your open terminal windows, a message for each of your posts similar to the following appears:
    > In region posts created key PostID [author=Isabella, timestamp=1375987268621]
    value  I like toast
    That's because partitioned regions make one copy of the post the primary copy. By default GemFire only invokes the listener in the peer that holds the primary copy of each post.
  4. From any window, list the available posts with the posts command.

    You should be able to list all posts, because GemFire fetches them from the peer that hosts each post.
    > posts
    Isabella: I like toast
    Ethan: Hello
    Isabella: LOL!
    If you kill one of the JVMs, you should still be able to list all posts. You can bring that member back up and kill another one, and you should still see all of the posts.
  5. Kill your peers before moving on to the next section. Type quit in each window.

Part 5: Set Up Client/Server Caching

You have a fully working system now, but the UI code is running in the same member in which you are storing data. That works well for some use cases, but it may be undesirable for others. For example, if you have a Web server for the UI layer, you may want to increase or decrease the number of Web servers without modifying your data servers. Or you may need to host only 100 GB of data, but have thousands of clients that might access it. For these use cases, it makes more sense to have dedicated GemFire servers and access the data through a GemFire client.

GemFire servers are GemFire peers, but they also listen on a separate port for connections from GemFire clients. GemFire clients connect to a limited number of these cache servers.

Like regular peers, GemFire servers still need to define the regions they will host. You could take the peer code you already have and create a com.gemstone.gemfire.cache.server.CacheServer instance programmatically, but this example shows you how to define the regions and create a configuration for the distributed system using the gfsh command-line interface.
  1. From the gfsh command-line interface that you started in a previous step, start two cache servers.
    Type the following command at the gfsh prompt:
    gfsh>start server --name=server1 --locators=localhost[55221] --server-port=0
    This command automatically creates a working directory for this server named server1 under <tutorial_working_directory> where you executed the command.
    Next start a second cache server by executing the following command in the same terminal:
    gfsh>start server --name=server2 --locators=localhost[55221] --server-port=0
    This command automatically creates a working directory for this server named server2 under <tutorial_working_directory> where you executed the command.

    You should now have two GemFire peers (cache servers) that are listening for client connections.

  2. Deploy the jar file containing the Java classes for this tutorial by running the following command in the gfsh command-line interface:
    gfsh>deploy --jar=tutorial.jar

    The tutorial.jar file contains the Java class you specify as the cache listener when creating regions in the next step.

  3. Create the two regions required for the Social Networking application by running the following commands in the gfsh command-line interface:
    gfsh>create region --name=people --type=REPLICATE
    gfsh>create region --name=posts --type=PARTITION_REDUNDANT
  4. Use a text editor or your favorite IDE to open the GemfireDAO.java file in the tutorial application code. This file is located in the <product_directory>/SampleCode/tutorial/src/com/gemstone/gemfire/tutorial/storage directory, where <product_directory> corresponds to the location where you installed Pivotal GemFire.
    1. Look at the code to see how it starts a client.

      Starting a GemFire client is very similar to starting a GemFire peer. In the GemFire client, you create an instance of com.gemstone.gemfire.cache.client.ClientCache, which connects to the locator to discover servers.

      Examine the GemfireDAO.initClient method. The first thing the method does is create a ClientCache:
      ClientCache cache = new ClientCacheFactory()
            .addPoolLocator("localhost", 55221)
            .set("log-level", "error")
            .setPoolSubscriptionEnabled(true)
            .setPoolSubscriptionRedundancy(1)
            .create();
      Once you create a ClientCache, it maintains a pool of connections to the servers, similar to a JDBC connection pool. However, with GemFire you do not need to retrieve connections from the pool and return them; that happens automatically as you perform operations on the regions. The pool locator setting tells the client how to discover the servers; the client uses the same locator that the peers use to discover cache servers. Setting the subscription enabled and subscription redundancy properties allows the client to subscribe to updates for entries that change in the server regions. You are going to subscribe to notifications about any people that are added. The updates are sent asynchronously to the client, so they need to be queued on the server side. The subscription redundancy setting controls how many copies of the queue are kept on the server side. Setting the redundancy level to 1 means that you can lose one server without missing any updates.
    2. Look at the code that creates a proxy region called posts in the client.
      Creating regions in the client is similar to creating regions in a peer. There are two main types of client regions, PROXY regions and CACHING_PROXY regions. PROXY regions store no data on the client. CACHING_PROXY regions allow the client to store keys locally on the client. This example uses a lot of posts, so you won't cache any posts on the client. You can create a proxy region with the shortcut PROXY. Look at the GemFireDAO.initPeer method. This method creates the posts region like this:
          posts = cache.<PostID, String>createClientRegionFactory(PROXY)
              .create("posts");
    3. Look at the code that creates a caching proxy region called people in the client.
      You do not have many people, so for this sample the client caches all people on the client. First you create a region that has local storage enabled. You can create a region with local storage on the client with ClientRegionShortcut.CACHING_PROXY. In the initClient method, here's where the people region is created.
          people = cache.<String, Profile>createClientRegionFactory(CACHING_PROXY)
              .addCacheListener(new LoggingCacheListener())
              .create("people");
    4. Look at the code that calls the registerInterest method to subscribe to notifications from the server.

      By creating a CACHING_PROXY, you told GemFire to cache any people that you create from this client. However, you can also choose to receive any updates to the people region that occur on other peers or other clients by invoking the registerInterest methods on the client. In this case you want to register interest in all people, so you cache the entire people region on the client. The regular expression ".*" matches all keys in the people region.

      Look at the initClient method. After the people region is created, the next line calls registerInterestRegex:
          people.registerInterestRegex(".*");
      When the registerInterestRegex method is invoked, the client downloads all existing people from the server. When a new person is added on the server, it is pushed to the client.
    5. Look at the code that iterates over keys from the client.
      Look at the getPeople and getPosts methods in GemFireDAO.
        public Set<String> getPeople() {
          return people.keySet();
        }

        public Set<PostID> getPosts() {
          if (isClient) {
            return posts.keySetOnServer();
          } else {
            return posts.keySet();
          }
        }

      GemFireDAO.getPeople calls people.keySet(), whereas GemFireDAO.getPosts calls posts.keySetOnServer() for the client. That's because the keySet method returns the keys that are stored locally on the client. For the People region you can use your locally cached copy of the keys, but for the Post region you need to go to the server to get the list of keys.

  5. Open two terminal windows, and start the client application in both terminals.
    Type the following command to start the client:
    $ java -cp "$GEMFIRE/SampleCode/tutorial/classes:$GEMFIRE/lib/server-dependencies.jar" com.gemstone.gemfire.tutorial.Client
    Add people and posts from one of the clients. For example, add a person:
    >person Ethan Isabella
    Notice that the following message appears in both clients:
    In region people created key Ethan value Profile [friends=[Isabella]]
    In either one of the client windows, add more people and add some posts:
    >person Isabella Ethan
    >post Ethan Waaa!!! 
    >post Isabella I want toast
    Switch to the other client window, and observe that the data you entered with one client can be seen by the other client:
    Ethan:  Waaa!!!
    Isabella: I want toast
    Leave the clients running for the next part.

Part 6: Add and Stop Cache Servers (Peers)

You can dynamically add peers to the system while it is running. The new peers are automatically discovered by the other peers and the client. The new peers automatically receive a copy of any replicated regions they create. However, partitioned regions do not automatically redistribute data to the new peer unless you explicitly instruct GemFire to rebalance the partitioned region. With the gfsh start server command, you can pass a command line flag indicating that the new peer should trigger a rebalance of all partitioned regions.

  1. Add a new peer.
    Type the following command at the gfsh command prompt:
    gfsh>start server --name=server3 --locators=localhost[55221] --server-port=0 --rebalance
    This command automatically creates a working directory for this server named server3 under <tutorial_working_directory> where you executed the command.
  2. Stop the cache servers.
    1. Stop the first cache server by entering the following command in the gfsh shell:
      gfsh>stop server --dir=server1
    2. Observe that the data you entered is still available and the client automatically ignores the dead server.
      Ethan:  Waaa!!!
      Isabella: I want toast
    3. Stop the distributed system. You can leave the client running. Run the following command in the gfsh window:
      gfsh>shutdown --include-locators=true

Part 7: Configure Persistence

GemFire supports shared-nothing persistence. Each member writes its portion of the region data to its own disk files. For the People region, each peer will be writing the entire region to its own disk files. For the Posts region, a copy of each post will reside on two different peers.
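The shared-nothing idea can be sketched in plain Java: each "member" writes only its own portion of the data to its own directory, and the full data set exists only after every member's files are read back. This is an illustration of the concept, not GemFire's disk-store format:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class SharedNothingSketch {
    public static void main(String[] args) throws IOException {
        // Each member gets its own disk files; nothing is shared.
        Path member1 = Files.createTempDirectory("member1");
        Path member2 = Files.createTempDirectory("member2");

        // Each member persists only the posts it hosts.
        Files.write(member1.resolve("posts.txt"), List.of("Isabella: Hello"));
        Files.write(member2.resolve("posts.txt"), List.of("Ethan: Goodbye"));

        // Recovery needs every member's files before the view is complete,
        // which is why persistent members are restarted together.
        List<String> recovered =
            new ArrayList<>(Files.readAllLines(member1.resolve("posts.txt")));
        recovered.addAll(Files.readAllLines(member2.resolve("posts.txt")));
        System.out.println(recovered); // full posts view after recovery
    }
}
```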

  1. Walk through configuring persistent regions. We will now create a new configuration that uses persistent regions. Because shared configurations are stored by locators, we create the new configuration by starting a new locator. Note that the shutdown command you issued in the last step stops all servers and locators in the distributed system.
  2. Start the gfsh command-line interface by running the following command:
    $ gfsh
        _________________________     __
       / _____/ ______/ ______/ /____/ /
      / /  __/ /___  /_____  / _____  / 
     / /__/ / ____/  _____/ / /    / /  
    /______/_/      /______/_/    /_/    v8.0.0
    Monitor and Manage GemFire
  3. Start a new locator by running the following command in the gfsh command-line interface:
    gfsh>start locator --name=locator2 --port=55221
  4. Start two servers. Enter the following commands in the gfsh command-line interface:
    gfsh>start server --name=server1 --locators=localhost[55221] --server-port=0
    gfsh>start server --name=server2 --locators=localhost[55221] --server-port=0

    These commands use working directories for the servers, named server1 and server2, under <tutorial_working_directory> where you executed the command. These directories were created in a previous step when you started server1 and server2 for the first time. Note that you do not have to issue the deploy command to deploy the tutorial.jar file because the jar file already exists in the server directories.

  5. Create the two regions required for the Social Networking application by running the following commands in the gfsh command-line interface:
    gfsh>create region --name=people --type=REPLICATE_PERSISTENT
    gfsh>create region --name=posts --type=PARTITION_REDUNDANT_PERSISTENT

    Note that, unlike the earlier step where you created regions without persistence, these create region commands specify the region types as REPLICATE_PERSISTENT and PARTITION_REDUNDANT_PERSISTENT respectively.

  6. Add some posts from the client.
    > post Isabella Hello
    > post Ethan Goodbye
    > posts
    Ethan:  Goodbye
    Isabella:  Hello
  7. Kill both servers and restart them to make sure the posts are recovered. Run the following commands in the gfsh window:
    gfsh>stop server --dir=server1
    gfsh>stop server --dir=server2

    When you restart persistent members, run gfsh start server in parallel for each server.

    The reason is that GemFire ensures that your complete data set is recovered before allowing the JVMs to finish starting. Remember that each member persists only part of the Posts region. Each GemFire member waits until all posts are available before starting, so a member restarted on its own would wait for the others. This protects you from seeing an incomplete view of the posts region.

    To start the servers in parallel, open a new terminal window and run the following commands.

    On Linux/Unix/Mac:

    $ gfsh start server --name=server1 --locators=localhost[55221] --server-port=0 &
    $ gfsh start server --name=server2 --locators=localhost[55221] --server-port=0 &

    On Windows:

    prompt> start /B gfsh start server --name=server1 --locators=localhost[55221] --server-port=0
    prompt> start /B gfsh start server --name=server2 --locators=localhost[55221] --server-port=0
  8. View the posts from the client.
    > posts
    Ethan:  Goodbye
    Isabella:  Hello
  9. Shut down the client applications by typing quit at the prompt.
  10. Stop the distributed system. Run the following command in the gfsh window:
    gfsh>shutdown --include-locators=true
  11. Review the cluster configuration created on the locator. Under the locator's working directory, look at the shared-config directory.

    gfsh creates a shared configuration by writing out configuration files, including cache.xml files, in the shared-config/all subdirectory of the locator's working directory. For example, the cache.xml file created after running this example looks like the following. (The file is named all.xml.)

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE cache PUBLIC "-//GemStone Systems, Inc.//GemFire Declarative Cache 8.0//EN" "">
    <cache lock-lease="120" lock-timeout="60" search-timeout="300" is-server="false" copy-on-read="false">
      <region name="people">
        <region-attributes scope="distributed-ack" data-policy="persistent-replicate" concurrency-checks-enabled="true"/>
      </region>
      <region name="posts">
        <region-attributes data-policy="persistent-partition">
          <partition-attributes redundant-copies="1"/>
        </region-attributes>
      </region>
    </cache>

    For more information about using gfsh and shared configurations, see Overview of the Cluster Configuration Service.