<% set_title(product_name_long, "in 15 Minutes or Less") %>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<a id="topic_FE3F28ED18E145F787431EC87B676A76"></a>
Need a quick introduction to <%=vars.product_name_long%>? Take this brief tour to try out basic features and functionality.
## <a id="topic_FE3F28ED18E145F787431EC87B676A76__section_ECE5170BAD9B454E875F13BEB5762DDD" class="no-quick-link"></a>Step 1: Install <%=vars.product_name_long%>
See [How to Install](installation/install_standalone.html#concept_0129F6A1D0EB42C4A3D24861AF2C5425) for instructions.
## <a id="topic_FE3F28ED18E145F787431EC87B676A76__section_582F8CBBD99D42F1A55C07591E2E9E9E" class="no-quick-link"></a>Step 2: Use gfsh to start a locator
In a terminal window, use the `gfsh` command line interface to start up a locator. <%=vars.product_name_long%> *gfsh* (pronounced "jee-fish") provides a single, intuitive command-line interface from which you can launch, manage, and monitor <%=vars.product_name_long%> processes, data, and applications. See [gfsh](../tools_modules/gfsh/chapter_overview.html).
The *locator* is a <%=vars.product_name%> process that tells new, connecting members where running members are located and provides load balancing for server use. A locator, by default, starts up a JMX Manager, which is used for monitoring and managing a <%=vars.product_name%> cluster. The cluster configuration service uses locators to persist and distribute cluster configurations to cluster members. See [Running <%=vars.product_name%> Locator Processes](../configuring/running/running_the_locator.html) and [Overview of the Cluster Configuration Service](../configuring/cluster_config/gfsh_persist.html).
1. Create a scratch working directory (for example, `my_geode`) and change directories into it. `gfsh` saves locator and server working directories and log files in this location.
2. Start gfsh by typing `gfsh` at the command line (or `gfsh.bat` if you are using Windows).
``` pre
_________________________ __
/ _____/ ______/ ______/ /____/ /
/ / __/ /___ /_____ / _____ /
/ /__/ / ____/ _____/ / / / /
/______/_/ /______/_/ /_/ <%=vars.product_version%>
Monitor and Manage <%=vars.product_name%>
gfsh>
```
3. At the `gfsh` prompt, type the `start locator` command and specify a name for the locator:
``` pre
gfsh>start locator --name=locator1
Starting a <%=vars.product_name%> Locator in /home/username/my_geode/locator1...
.................................
Locator in /home/username/my_geode/locator1 on ubuntu.local[10334] as locator1 is currently online.
Process ID: 3529
Uptime: 18 seconds
<%=vars.product_name%> Version: <%=vars.product_version%>
Java Version: 1.8.0_<%=vars.min_java_update%>
Log File: /home/username/my_geode/locator1/locator1.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false
-Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
-Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /home/username/Apache_Geode_Linux/lib/geode-core-1.0.0.jar:
/home/username/Apache_Geode_Linux/lib/geode-dependencies.jar
Successfully connected to: JMX Manager [host=10.118.33.169, port=1099]
Cluster configuration service is up and running.
```
If you run `start locator` from gfsh without specifying the member name, gfsh will automatically pick a random member name. This is useful for automation.
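The member-name note above also matters when you drive `gfsh` from a script rather than interactively. Recent `gfsh` versions can execute commands directly from the operating-system shell, which is handy for automation; the following is only a sketch that reuses the locator name from this tutorial, so check `gfsh help` in your installation if the `-e` option is not available:
``` pre
# Run gfsh commands without entering the interactive shell (a sketch; adjust names and paths).
gfsh -e "start locator --name=locator1"
gfsh -e "connect --locator=localhost[10334]" -e "list members"
```
A longer command sequence can also be placed in a file and executed with the `run --file=<path>` command.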
## <a id="topic_FE3F28ED18E145F787431EC87B676A76__section_02C79BFFB5334E78A5856AE1EB1F1F84" class="no-quick-link"></a>Step 3: Start Pulse
Start up the browser-based Pulse monitoring tool. Pulse is a Web Application that provides a graphical dashboard for monitoring vital, real-time health and performance of <%=vars.product_name%> clusters, members, and regions. See [<%=vars.product_name%> Pulse](../tools_modules/pulse/pulse-overview.html).
``` pre
gfsh>start pulse
```
This command launches Pulse and automatically connects you to the JMX Manager running in the Locator. At the Pulse login screen, type in the default username `admin` and password `admin`.
The Pulse application now displays the locator you just started (locator1):
<img src="../images/pulse_locator.png" id="topic_FE3F28ED18E145F787431EC87B676A76__image_ign_ff5_t4" class="image" />
## <a id="topic_FE3F28ED18E145F787431EC87B676A76__section_C617BC1C70EB41B8BCA3691D6E3C891A" class="no-quick-link"></a>Step 4: Start a server
A <%=vars.product_name%> server is a process that runs as a long-lived, configurable member of a cluster. The <%=vars.product_name%> server is used primarily for hosting long-lived data regions and for running standard <%=vars.product_name%> processes such as the server in a client/server configuration. See [Running <%=vars.product_name%> Server Processes](../configuring/running/running_the_cacheserver.html).
Start the cache server:
``` pre
gfsh>start server --name=server1 --server-port=40411
```
This command starts a cache server named "server1" on port 40411.
If you run `start server` from gfsh without specifying the member name, gfsh will automatically pick a random member name. This is useful for automation.
Observe the changes (new member and server) in Pulse. Try expanding the distributed system icon to see the locator and cache server graphically.
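You can also confirm the new server directly from the `gfsh` prompt. The following sketch uses the member name chosen above; output is omitted because it varies by environment:
``` pre
gfsh>status server --name=server1
gfsh>list members
```
`status server` reports whether the process is online, and `list members` should now show both locator1 and server1.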
## <a id="topic_FE3F28ED18E145F787431EC87B676A76__section_3EA12E44B8394C6A9302DF4D14888AF4" class="no-quick-link"></a>Step 5: Create a replicated, persistent region
In this step you create a region with the `gfsh` command line utility. Regions are the core building blocks of the <%=vars.product_name%> cluster and provide the means for organizing your data. The region you create for this exercise employs replication to replicate data across members of the cluster and utilizes persistence to save the data to disk. See [Data Regions](../basic_config/data_regions/chapter_overview.html#data_regions).
1. Create a replicated, persistent region:
``` pre
gfsh>create region --name=regionA --type=REPLICATE_PERSISTENT
Member | Status
------- | --------------------------------------
server1 | Region "/regionA" created on "server1"
```
Note that the region is hosted on server1.
2. Use the `gfsh` command line to view a list of the regions in the cluster.
``` pre
gfsh>list regions
List of regions
---------------
regionA
```
3. List the members of your cluster. The locator and cache server you started appear in the list:
``` pre
gfsh>list members
Name | Id
------------ | ---------------------------------------
Coordinator: | 192.0.2.0(locator1:3529:locator)<ec><v0>:59926
locator1 | 192.0.2.0(locator1:3529:locator)<ec><v0>:59926
server1 | 192.0.2.0(server1:3883)<v1>:65390
```
4. To view specifics about a region, type the following:
``` pre
gfsh>describe region --name=regionA
..........................................................
Name : regionA
Data Policy : persistent replicate
Hosting Members : server1
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ---- | -----
Region | size | 0
```
5. In Pulse, click the green cluster icon to see all the new members and new regions that you just added to your cluster.
**Note:** Keep this `gfsh` prompt open for the next steps.
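Other region types are created the same way; only the `--type` value changes. As an optional aside that is not needed for the rest of this tutorial, the following sketch creates a partitioned region and then removes it again (the name `scratchRegion` is purely illustrative):
``` pre
gfsh>create region --name=scratchRegion --type=PARTITION
gfsh>describe region --name=scratchRegion
gfsh>destroy region --name=scratchRegion
```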
## Step 6: Manipulate data in the region and demonstrate persistence
<%=vars.product_name_long%> manages data as key/value pairs. In most applications, a Java program adds, deletes and modifies stored data. You can also use gfsh commands to add and retrieve data. See [Data Commands](../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_C7DB8A800D6244AE8FF3ADDCF139DCE4).
1. Run the following `put` commands to add some data to the region:
``` pre
gfsh>put --region=regionA --key="1" --value="one"
Result : true
Key Class : java.lang.String
Key : 1
Value Class : java.lang.String
Old Value : <NULL>
gfsh>put --region=regionA --key="2" --value="two"
Result : true
Key Class : java.lang.String
Key : 2
Value Class : java.lang.String
Old Value : <NULL>
```
2. Run the following command to retrieve data from the region:
``` pre
gfsh>query --query="select * from /regionA"
Result : true
startCount : 0
endCount : 20
Rows : 2
Result
------
two
one
```
Note that the result displays the values for the two data entries you created with the `put` commands.
See [Data Entries](../basic_config/data_entries_custom_classes/chapter_overview.html).
3. Stop the cache server using the following command:
``` pre
gfsh>stop server --name=server1
Stopping Cache Server running in /home/username/my_geode/server1 on ubuntu.local[40411] as server1...
Process ID: 3883
Log File: /home/username/my_geode/server1/server1.log
....
```
4. Restart the cache server using the following command:
``` pre
gfsh>start server --name=server1 --server-port=40411
```
5. Run the following command to retrieve data from the region again -- notice that the data is still available:
``` pre
gfsh>query --query="select * from /regionA"
Result : true
startCount : 0
endCount : 20
Rows : 2
Result
------
two
one
```
Because regionA uses persistence, it writes a copy of the data to disk. When a server hosting regionA starts, the data is populated into the cache. Note that the result displays the values for the two data entries you created with the `put` commands prior to stopping the server.
See [Data Entries](../basic_config/data_entries_custom_classes/chapter_overview.html).
See [Data Regions](../basic_config/data_regions/chapter_overview.html#data_regions).
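`put` and `query` are not the only gfsh data commands. When you know the key, `get` retrieves a single entry, and `remove` deletes one. A minimal sketch against the entries created above:
``` pre
gfsh>get --region=regionA --key="1"
```
A corresponding `remove --region=regionA --key="1"` would delete that entry, but do not run it here, because the remaining steps expect both entries to be present.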
## Step 7: Examine the effects of replication
In this step, you start a second cache server. Because regionA is replicated, the data will be available on any server hosting the region.
See [Data Regions](../basic_config/data_regions/chapter_overview.html#data_regions).
1. Start a second cache server:
``` pre
gfsh>start server --name=server2 --server-port=40412
```
2. Run the `describe region` command to view information about regionA:
``` pre
gfsh>describe region --name=regionA
..........................................................
Name : regionA
Data Policy : persistent replicate
Hosting Members : server1
server2
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ---- | -----
Region | size | 2
```
Note that you do not need to create regionA again for server2. The output of the command shows that regionA is hosted on both server1 and server2. When gfsh starts a server, it requests the configuration from the cluster configuration service, which then distributes the shared configuration to any new servers joining the cluster.
3. Add a third data entry:
``` pre
gfsh>put --region=regionA --key="3" --value="three"
Result : true
Key Class : java.lang.String
Key : 3
Value Class : java.lang.String
Old Value : <NULL>
```
4. Open the Pulse application (in a Web browser) and observe the cluster topology. You should see a locator with two attached servers. Click the <span class="ph uicontrol">Data</span> tab to view information about regionA.
5. Stop the first cache server with the following command:
``` pre
gfsh>stop server --name=server1
Stopping Cache Server running in /home/username/my_geode/server1 on ubuntu.local[40411] as server1...
Process ID: 4064
Log File: /home/username/my_geode/server1/server1.log
....
```
6. Retrieve data from the remaining cache server.
``` pre
gfsh>query --query="select * from /regionA"
Result : true
startCount : 0
endCount : 20
Rows : 3
Result
------
two
one
three
```
Note that the result contains three entries, including the entry you just added.
7. Add a fourth data entry:
``` pre
gfsh>put --region=regionA --key="4" --value="four"
Result : true
Key Class : java.lang.String
Key : 4
Value Class : java.lang.String
Old Value : <NULL>
```
Note that only server2 is running. Because the data is replicated and persisted, all of the data is still available. However, the new data entry is currently available only on server2.
``` pre
gfsh>describe region --name=regionA
..........................................................
Name : regionA
Data Policy : persistent replicate
Hosting Members : server2
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ---- | -----
Region | size | 4
```
8. Stop the remaining cache server:
``` pre
gfsh>stop server --name=server2
Stopping Cache Server running in /home/username/my_geode/server2 on ubuntu.local[40412] as server2...
Process ID: 4185
Log File: /home/username/my_geode/server2/server2.log
.....
```
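At this point both cache servers are stopped and only the locator is still running; regionA's data survives because each server wrote it to a persistent disk store under its own working directory. As an optional check from the same `gfsh` session, `list members` should now report only locator1:
``` pre
gfsh>list members
```
While servers are running, you can also inspect their persistent stores with the `list disk-stores` command.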
## Step 8: Restart the cache servers in parallel
In this step you restart the cache servers in parallel. Because the data is persisted, it is available again when the servers restart. Because the data is replicated, the servers must be started in parallel so that they can synchronize their data with each other before completing startup.
1. Start server1. Because regionA is replicated and persistent, server1 needs the latest data from the other server and waits for that server to start before it finishes its own startup:
``` pre
gfsh>start server --name=server1 --server-port=40411
Starting a <%=vars.product_name%> Server in /home/username/my_geode/server1...
............................................................................
............................................................................
```
Note that if you look in the <span class="ph filepath">server1.log</span> file for the restarted server, you will see a log message similar to the following (the `show missing-disk-stores` command mentioned in that message is sketched at the end of this step):
``` pre
[info 2015/01/14 09:08:13.610 PST server1 <main> tid=0x1] Region /regionA has potentially
stale data. It is waiting for another member to recover the latest data.
My persistent id:
DiskStore ID: 8e2d99a9-4725-47e6-800d-28a26e1d59b1
Name: server1
Location: /192.0.2.0:/home/username/my_geode/server1/.
Members with potentially new data:
[
DiskStore ID: 2e91b003-8954-43f9-8ba9-3c5b0cdd4dfa
Name: server2
Location: /192.0.2.0:/home/username/my_geode/server2/.
]
Use the "gfsh show missing-disk-stores" command to see all disk stores that
are being waited on by other members.
```
2. In a second terminal window, change directories to the scratch working directory (for example, `my_geode`) and start gfsh:
``` pre
[username@localhost ~/my_geode]$ gfsh
_________________________ __
/ _____/ ______/ ______/ /____/ /
/ / __/ /___ /_____ / _____ /
/ /__/ / ____/ _____/ / / / /
/______/_/ /______/_/ /_/ <%=vars.product_version%>
Monitor and Manage <%=vars.product_name%>
```
3. Run the following command to connect to the cluster:
``` pre
gfsh>connect --locator=localhost[10334]
Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=ubuntu.local, port=1099] ..
Successfully connected to: [host=ubuntu.local, port=1099]
```
4. Start server2:
``` pre
gfsh>start server --name=server2 --server-port=40412
```
When server2 starts, note that **server1 completes its startup** in the first gfsh window:
``` pre
Server in /home/username/my_geode/server1 on ubuntu.local[40411] as server1 is currently online.
Process ID: 3402
Uptime: 1 minute 46 seconds
<%=vars.product_name%> Version: <%=vars.product_version%>
Java Version: 1.8.0_<%=vars.min_java_update%>
Log File: /home/username/my_geode/server1/server1.log
JVM Arguments: -Dgemfire.default.locators=192.0.2.0[10334] -Dgemfire.use-cluster-configuration=true
-XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true
-Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /home/username/Apache_Geode_Linux/lib/geode-core-1.0.0.jar:
/home/username/Apache_Geode_Linux/lib/geode-dependencies.jar
```
5. Verify that the locator and two servers are running:
``` pre
gfsh>list members
Name | Id
------------ | ---------------------------------------
Coordinator: | ubuntu(locator1:2813:locator)<ec><v0>:46644
locator1 | ubuntu(locator1:2813:locator)<ec><v0>:46644
server2 | ubuntu(server2:3992)<v8>:21507
server1 | ubuntu(server1:3402)<v7>:36532
```
6. Run a query to verify that all the data you entered with the `put` commands is available:
``` pre
gfsh>query --query="select * from /regionA"
Result : true
startCount : 0
endCount : 20
Rows : 4
Result
------
one
two
four
three
NEXT_STEP_NAME : END
```
7. Stop server2 with the following command:
``` pre
gfsh>stop server --dir=server2
Stopping Cache Server running in /home/username/my_geode/server2 on 192.0.2.0[40412] as server2...
Process ID: 3992
Log File: /home/username/my_geode/server2/server2.log
....
```
8. Run a query to verify that all the data you entered with the `put` commands is still available:
``` pre
gfsh>query --query="select * from /regionA"
Result : true
startCount : 0
endCount : 20
Rows : 4
Result
------
one
two
four
three
NEXT_STEP_NAME : END
```
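The restart log in sub-step 1 refers to the `show missing-disk-stores` command. If a persistent, replicated member ever appears to hang during startup while waiting for another member, run that command from a connected `gfsh` session to see which disk stores are being waited on; once all members are up, it reports nothing missing. A minimal sketch:
``` pre
gfsh>show missing-disk-stores
```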
## <a id="topic_FE3F28ED18E145F787431EC87B676A76__section_E417BEEC172B4E96A92A61DC7601E572" class="no-quick-link"></a>Step 9: Shut down the system including your locators
To shut down your cluster, do the following:
1. In the current `gfsh` session, stop the cluster:
``` pre
gfsh>shutdown --include-locators=true
```
See [shutdown](../tools_modules/gfsh/command-pages/shutdown.html).
2. When prompted, type 'Y' to confirm the shutdown of the cluster.
``` pre
As a lot of data in memory will be lost, including possibly events in queues,
do you really want to shutdown the entire distributed system? (Y/n): Y
Shutdown is triggered
gfsh>
No longer connected to ubuntu.local[1099].
gfsh>
```
3. Type `exit` to quit the gfsh shell.
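As an optional variation, running `shutdown` without `--include-locators=true` stops the cache servers but leaves the locator, and the cluster configuration it holds, running; the locator can then be stopped separately. A sketch assuming the member names used in this tutorial:
``` pre
gfsh>shutdown
gfsh>stop locator --name=locator1
```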
## <a id="topic_FE3F28ED18E145F787431EC87B676A76__section_C8694C6BB07E4430A73DDD72ABB473F1" class="no-quick-link"></a>Step 10: What to do next...
Here are some suggestions on what to explore next with <%=vars.product_name_long%>:
- Continue reading the next section to learn more about the components and concepts that were just introduced.
- To get more practice using `gfsh`, see [Tutorial—Performing Common Tasks with gfsh](../tools_modules/gfsh/tour_of_gfsh.html#concept_0B7DE9DEC1524ED0897C144EE1B83A34).
- To learn about the cluster configuration service, see [Tutorial—Creating and Using a Cluster Configuration](../configuring/cluster_config/persisting_configurations.html#task_bt3_z1v_dl).