Pivotal GemFire in 15 Minutes or Less

Need a quick introduction to Pivotal GemFire? Take this brief tour to try out basic features and functionality.

About Pivotal GemFire briefly describes some components and concepts you will explore here.

Step 1: Download and install Pivotal GemFire.

Visit the GemFire product download page, and download the latest GemFire distribution:

https://network.pivotal.io/products/pivotal-gemfire

Locate the Downloads link, and download the GemFire ZIP file. Use any graphical unzip utility or the command line to install GemFire from the ZIP file. For example:
unzip Pivotal_GemFire_XXX_bNNNNN_platform.zip -d /home/your_userid/pivotal-gemfire
where XXX and NNNNN correspond to the latest GemFire version and build number, and platform corresponds to the OS you are using (Linux, Solaris, or Windows).
Note: If you have not done so already, download and install Java for your system. See Pivotal GemFire Supported Configurations for a list of supported Java versions. GemFire requires the JDK (and not just a JRE) to run gfsh commands and to launch servers and locators using the ServerLauncher or LocatorLauncher APIs.

Step 2: Set GemFire, Java, and OS environment variables.

Open two terminal windows and configure GemFire and Java environment variables in both. For example, on Linux:
JAVA_HOME=/usr/java/jdk1.6.0_38;export JAVA_HOME
GEMFIRE=/home/your_userid/pivotal-gemfire/Pivotal_GemFire_800_b54567_Linux;export GEMFIRE
GF_JAVA=$JAVA_HOME/bin/java;export GF_JAVA 
PATH=$PATH:$JAVA_HOME/bin:$GEMFIRE/bin;export PATH 
See Windows/Unix/Linux: Install Pivotal GemFire from a ZIP or tar.gz File if you need additional guidance on setting environment variables for your platform.

Step 3: Use gfsh to start a Locator.

In one of the terminal windows, use the gfsh command line interface to start up a locator. Pivotal GemFire gfsh (pronounced "gee-fish") provides a single, intuitive command-line interface from which you can launch, manage, and monitor Pivotal GemFire processes, data, and applications. See Using the Pivotal GemFire SHell (gfsh).

The locator is a Pivotal GemFire process that tells new, connecting members where running members are located and provides load balancing for server use. By default, a locator also starts a JMX Manager, which is used for monitoring and managing a GemFire cluster. The cluster configuration service uses locators to persist and distribute cluster configurations to cluster members. See Pivotal GemFire Locator Processes and Overview of the Cluster Configuration Service.
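
You can also start a locator from within a Java application through the LocatorLauncher API mentioned in Step 1. The following is a minimal sketch, not used by this tutorial: it assumes the GemFire 8 package name (com.gemstone.gemfire.distributed), uses an illustrative class name, and mirrors the locator name and port from the procedure below.

import com.gemstone.gemfire.distributed.LocatorLauncher;

public class LocatorRunner {
    public static void main(String[] args) {
        // Roughly equivalent to: gfsh>start locator --name=locator1 --port=40401
        LocatorLauncher locator = new LocatorLauncher.Builder()
            .setMemberName("locator1")
            .setPort(40401)
            .build();
        locator.start(); // the locator runs in this JVM until stopped
    }
}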

  1. Create a scratch working directory (for example, my_gemfire) and change directories into it. gfsh saves locator and server working directories and log files in this location.
  2. Start gfsh by typing gfsh at the command line (or gfsh.bat if you are using Windows).
        _________________________     __
       / _____/ ______/ ______/ /____/ /
      / /  __/ /___  /_____  / _____  /
     / /__/ / ____/  _____/ / /    / /
    /______/_/      /______/_/    /_/    v8.0.0
    
    Monitor and Manage GemFire
    gfsh>
  3. At the gfsh prompt, type:
    gfsh>start locator --name=locator1 --port=40401
    Starting a GemFire Locator in /home/username/gf80/jul11/test11/locator1...
    .............................
    Locator in /home/username/gf80/jul11/test11/locator1 on 172.16.196.144[40401] as locator1 is currently online.
    Process ID: 11162
    Uptime: 15 seconds
    GemFire Version: 8.0
    Java Version: 1.6.0_33
    Log File: /home/username/gf80/jul11/test11/locator1/locator1.log
    JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false 
    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
    Class-Path: /home/username/gf80/jul11/lib/locator-dependencies.jar:/home/username/java/lib/tools.jar
    
    Successfully connected to: [host=172.16.196.144, port=1099]
    
    Cluster configuration service is up and running.
     

Step 4: Start GemFire Pulse.

Start up the browser-based GemFire Pulse monitoring tool. GemFire Pulse is a web application that provides a graphical dashboard for monitoring the vital, real-time health and performance of GemFire clusters, members, and regions. See GemFire Pulse.
gfsh>start pulse
This command launches Pulse and automatically connects you to the JMX Manager running in the Locator. At the Pulse login screen, type in the default username admin and password admin.

The Pulse application now displays the locator you just started (locator1).

Step 5: Start a server.

A GemFire server is a Pivotal GemFire process that runs as a long-lived, configurable member of a cluster (also called a distributed system). The GemFire server is used primarily for hosting long-lived data regions and for running standard GemFire processes such as the server in a client/server configuration. See Pivotal GemFire Server Processes.
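
Servers can likewise be started from Java through the ServerLauncher API. Here is another minimal sketch under the same assumptions (GemFire 8 package name, illustrative class name); the set call that points the member at the locator from Step 3 assumes the Builder accepts distributed system properties:

import com.gemstone.gemfire.distributed.ServerLauncher;

public class ServerRunner {
    public static void main(String[] args) {
        // Roughly equivalent to: gfsh>start server --name=server1 --server-port=40411
        ServerLauncher server = new ServerLauncher.Builder()
            .setMemberName("server1")
            .setServerPort(40411)
            .set("locators", "localhost[40401]") // join via the locator from Step 3
            .build();
        server.start();
    }
}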

Start the cache server:
gfsh>start server --name=server1 --server-port=40411

This command starts a cache server named "server1" on the specified port, 40411.

Observe the changes (new member and server) in Pulse. Try expanding the distributed system icon to see the locator and cache server graphically.

Step 6: Create a replicated, persistent region.

In this step you create a region with the gfsh command line utility. Regions are the core building blocks of the GemFire cluster and provide the means for organizing your data. The region you create for this exercise employs replication to replicate data across members of the cluster and utilizes persistence to save the data to disk. See Data Regions.
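
For comparison, an embedded peer can define roughly the same region in Java using a region shortcut; this tutorial uses gfsh instead. A minimal sketch, assuming the GemFire 8 package names and an illustrative class name:

import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.CacheFactory;
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.RegionShortcut;

public class RegionSetup {
    public static void main(String[] args) {
        // Join the cluster as a peer through the locator from Step 3.
        Cache cache = new CacheFactory()
            .set("locators", "localhost[40401]")
            .create();

        // The shortcut mirrors: create region --name=regionA --type=REPLICATE_PERSISTENT
        Region<String, String> regionA = cache
            .<String, String>createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT)
            .create("regionA");
    }
}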

  1. Create a replicated, persistent region:
    gfsh>create region --name=regionA --type=REPLICATE_PERSISTENT
    Member  | Status
    ------- | --------------------------------------
    server1 | Region "/regionA" created on "server1"
    

    Note that the region is hosted on server1.

  2. Use the gfsh command line to view a list of the regions in the cluster.
    gfsh>list regions
    List of regions
    ---------------
    regionA
    
  3. List the members of your cluster. The locator and cache servers you started appear in the list:
    gfsh>list members
      Name   | Id
    -------- | ------------------------------------------------
    server1  | 172.16.196.144(server1:29319)<v7>:22266
    locator1 | 172.16.196.144(locator1:28635:locator)<v0>:35313
  4. To view specifics about a region, type the following:
    gfsh>describe region --name=regionA
    ..........................................................
    Name            : regionA
    Data Policy     : persistent replicate
    Hosting Members : server1
                      
    Non-Default Attributes Shared By Hosting Members  
    
     Type  | Name | Value
    ------ | ---- | -----
    Region | size | 0
    
    
  5. In Pulse, click the green cluster icon to see all the new members and new regions that you just added to your cluster.
Note: Keep this gfsh prompt open for the next steps.

Step 7: Manipulate data in the region and demonstrate persistence.

Pivotal GemFire manages data as key/value pairs. In most applications, a Java program adds, deletes, and modifies stored data. You can also use gfsh commands to add and retrieve data. See Data Commands.
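
The gfsh data commands used below have direct Java equivalents on the Region and QueryService APIs. As an illustration, here is a minimal client sketch, assuming the GemFire 8 client package names and an illustrative class name; it connects through the locator from Step 3:

import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;
import com.gemstone.gemfire.cache.client.ClientRegionShortcut;
import com.gemstone.gemfire.cache.query.SelectResults;

public class DataClient {
    public static void main(String[] args) throws Exception {
        // Connect to the cluster through the locator.
        ClientCache cache = new ClientCacheFactory()
            .addPoolLocator("localhost", 40401)
            .create();

        // PROXY keeps no local state; every operation goes to the servers.
        Region<String, String> regionA = cache
            .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
            .create("regionA");

        // Like: put --region=regionA --key="1" --value="one"
        regionA.put("1", "one");

        // Like: query --query="select * from /regionA"
        SelectResults<?> results = (SelectResults<?>) cache.getQueryService()
            .newQuery("select * from /regionA")
            .execute();
        System.out.println(results.asList());

        cache.close();
    }
}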

  1. Run the following put commands to add some data to the region:
    gfsh>put --region=regionA --key="1" --value="one"
    Result      : true
    Key Class   : java.lang.String
    Key         : 1
    Value Class : java.lang.String
    Old Value   : <NULL>
    
    
    gfsh>put --region=regionA --key="2" --value="two"
    Result      : true
    Key Class   : java.lang.String
    Key         : 2
    Value Class : java.lang.String
    Old Value   : <NULL>
    
  2. Run the following command to retrieve data from the region:
    gfsh>query --query="select * from /regionA"
    
    Result     : true
    startCount : 0
    endCount   : 20
    Rows       : 2
    
    Result
    ------
    two
    one

    Note that the result displays the values for the two data entries you created with the put commands.

    See Data Entries.

  3. Stop the cache server using the following command:
    gfsh>stop server --dir=server1
    Stopping Cache Server running in /home/username/gf80/jul11/test8/server1 on 172.16.196.144[40411] as server1...
    Process ID: 28853
    Log File: /home/username/gf80/jul11/test8/server1/server1.log
  4. Restart the cache server using the following command:
    gfsh>start server --name=server1 --server-port=40411
    
  5. Run the following command to retrieve data from the region:
    gfsh>query --query="select * from /regionA"
    
    Result     : true
    startCount : 0
    endCount   : 20
    Rows       : 2
    
    Result
    ------
    two
    one

    Because regionA uses persistence, it writes a copy of the data to disk. When a server hosting regionA starts, the data is populated into the cache. Note that the result displays the values for the two data entries you created with the put commands before stopping the server.

    See Data Entries.

    See Data Regions.

Step 8: Examine the effects of replication.

In this step, you start a second cache server. Because regionA is replicated, the data will be available on any server hosting the region.

See Data Regions.

  1. Start a second cache server:
    gfsh>start server --name=server2 --server-port=40412
  2. Run the describe region command to view information about regionA:
    gfsh>describe region --name=regionA
    ..........................................................
    Name            : regionA
    Data Policy     : persistent replicate
    Hosting Members : server1
                      server2
    
    Non-Default Attributes Shared By Hosting Members  
    
     Type  | Name | Value
    ------ | ---- | -----
    Region | size | 2
    
    

    Note that you do not need to create regionA again for server2. The output of the command shows that regionA is hosted on both server1 and server2. When gfsh starts a server, it requests the configuration from the cluster configuration service, which then distributes the shared configuration to any new servers joining the cluster.

  3. Add a third data entry:
    gfsh>put --region=regionA --key="3" --value="three"
    Result      : true
    Key Class   : java.lang.String
    Key         : 3
    Value Class : java.lang.String
    Old Value   : <NULL>
    
  4. Open the Pulse application (in a Web browser) and observe the cluster topology. You should see a locator with two attached servers. Click the Data tab to view information about regionA.
  5. Stop the first cache server with the following command:
    gfsh>stop server --dir=server1
    Stopping Cache Server running in /home/username/gf80/jul11/test8/server1 on 172.16.196.144[40411] as server1...
    Process ID: 28853
    Log File: /home/username/gf80/jul11/test8/server1/server1.log
    
  6. Retrieve data from the remaining cache server.
    gfsh>query --query="select * from /regionA"
    
    Result     : true
    startCount : 0
    endCount   : 20
    Rows       : 3
    
    Result
    ------
    two
    one
    three

    Note that the data contains 3 entries, including the entry you just added.

  7. Add a fourth data entry:
    gfsh>put --region=regionA --key="4" --value="four"
    Result      : true
    Key Class   : java.lang.String
    Key         : 4
    Value Class : java.lang.String
    Old Value   : <NULL>
    

    Note that only server2 is running. Because the data is replicated and persisted, all of the data is still available. But the new data entry is currently available only on server2.

  8. Stop the remaining cache server:
    gfsh>stop server --dir=server2
    Stopping Cache Server running in /home/username/gf80/jul11/test8/server2 on 172.16.196.144[40412] as server2...
    Process ID: 28991
    Log File: /home/username/gf80/jul11/test8/server2/server2.log

Step 9: Restart the cache servers in parallel.

In this step, you restart the cache servers in parallel. Because the data is persisted, it is available when the servers restart. Because the data is replicated, you must start the servers in parallel so that they can synchronize their data with each other before completing startup.

  1. Start server1. Because regionA is replicated, it needs data from the other server to start and waits for the server to start:
    gfsh>start server --name=server1 --server-port=40411
    Starting a GemFire Server in /home/username/gf80/jul11/test9/server1...
    ......................................
    Region /regionA has potentially stale data. It is waiting for another member to recover the latest data.
    My persistent id:
    
      DiskStore ID: e0854a05-cd30-4e0d-a630-541e49fb05db
      Name: server1
      Location: /172.16.196.144:/home/username/gf80/jul11/test9/server1/.
    
    Members with potentially new data:
    [
      DiskStore ID: c1054ff6-183d-41bc-92ce-71c152f333fa
      Name: server2
      Location: /172.16.196.144:/home/username/gf80/jul11/test9/server2/.
    ]
    Use the "gemfire list-missing-disk-stores" command to see all disk stores that are being waited on by other members.
    .............................
    
  2. In the second terminal window, start gfsh:
    [username@localhost test8]$ gfsh
        _________________________     __
       / _____/ ______/ ______/ /____/ /
      / /  __/ /___  /_____  / _____  / 
     / /__/ / ____/  _____/ / /    / /  
    /______/_/      /______/_/    /_/    v8.0.0
    
    Monitor and Manage GemFire
    
  3. Run the following command to connect to the cluster:
    gfsh>connect --locator=localhost[40401]
    Connecting to Locator at [host=localhost, port=40401] ..
    Connecting to Manager at [host=172.16.196.144, port=1099] ..
    Successfully connected to: [host=172.16.196.144, port=1099]
  4. Start server2:
    gfsh>start server --name=server2 --server-port=40412
    When server2 starts, note that server1 completes its startup in the first gfsh window:
    Server in /home/username/gf80/jul11/test9/server1 on 172.16.196.144[40411] as server1 is currently online.
    Process ID: 6773
    Uptime: 2 minutes 55 seconds
    GemFire Version: 8.0
    Java Version: 1.6.0_33
    Log File: /home/username/gf80/jul11/test9/server1/server1.log
    JVM Arguments: -Dgemfire.default.locators=172.16.196.144[40401] -Dgemfire.use-cluster-configuration=true 
    -XX:OnOutOfMemoryError="kill -9 %p" -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true 
    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
    Class-Path: /home/username/gf80/jul11/lib/server-dependencies.jar:/home/username/java/lib/tools.jar
  5. Verify that the locator and two servers are running:
    gfsh>list members
      Name   | Id
    -------- | ------------------------------------------------
    server2  | 172.16.196.144(server2:29328)<v7>:13516
    server1  | 172.16.196.144(server1:29319)<v7>:22266
    locator1 | 172.16.196.144(locator1:28635:locator)<v0>:35313
  6. Run a query to verify that all the data you entered with the put commands is available:
    gfsh>query --query="select * from /regionA"
    
    Result     : true
    startCount : 0
    endCount   : 20
    Rows       : 4
    
    Result
    ------
    one
    two
    four
    three
    
    NEXT_STEP_NAME : END
    
  7. Stop server2 with the following command:
    gfsh>stop server --dir=server2
    Stopping Cache Server running in /home/username/gf80/jul11/test11/server2 on 172.16.196.144[40412] as server2...
    Process ID: 12404
    Log File: /home/username/gf80/jul11/test11/server2/server2.log
    ....
    
  8. Run a query to verify that all the data you entered with the put commands is still available:
    gfsh>query --query="select * from /regionA"
    
    Result     : true
    startCount : 0
    endCount   : 20
    Rows       : 4
    
    Result
    ------
    one
    two
    four
    three
    
    NEXT_STEP_NAME : END
    

Step 10: Shut down the system.

To shut down your cluster, do the following:

  1. In the current gfsh session, stop the cluster:
    gfsh>shutdown

    See shutdown.

  2. When prompted, type 'Y' to confirm the shutdown of the cluster.

Step 11: What to do next...

Here are some suggestions on what to explore next with Pivotal GemFire: