If you want to know more about Kerberos, check out this [**link**](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/Using_Kerberos.html)<br>
* <a href="#intro_1"/> Pre-requisite
* <a href="#intro_2"/> KDC Installation and Configuration
* <a href="#intro_3"/> Enabling Kerberos Authentication Using the Wizard
* <a href="#intro_4"/> Create the HDFS Superuser
* <a href="#intro_5"/> Create a Kerberos Principal and Prepare the Cluster for Each User Account
* <a href="#intro_6"/> Verify that Kerberos Security is Working

## <center> <a name="intro_1"/> Pre-requisite

<code># yum install -y openldap-clients</code><p>

## <center> <a name="intro_3"/> Enabling Kerberos Authentication Using the Wizard
**Step 1:** Create a Kerberos Principal for the Cloudera Manager Server<br>
<code># kadmin.local</code><br>
<code>kadmin.local: addprinc -pw <Password> ***cloudera-scm/[email protected]***</code><p>

**Step 2:** Copy **/etc/krb5.conf** to each cluster node, including both service nodes and gateway nodes<br>

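Copying the file to every node can be scripted. A minimal sketch, assuming passwordless root SSH; the hostnames are placeholders for your own service and gateway nodes, and the `echo` makes this a dry run (drop it to copy for real):

```shell
#!/bin/sh
# Dry run: print the copy command for each cluster node instead of executing it.
# Hostnames are placeholders; remove the leading `echo` to actually copy.
for host in node1.sebc.sin node2.sebc.sin gateway.sebc.sin; do
  echo scp /etc/krb5.conf "root@${host}:/etc/krb5.conf"
done
```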
**Step 3:** If you are using **AES-256** encryption, install the JCE policy file on each host. There are two ways to do this:<br>
* In the Cloudera Manager Admin Console, navigate to the **Hosts** page. Both the **Add New Hosts to Cluster** wizard and the **Re-run Upgrade Wizard** give you the option to have Cloudera Manager install the JCE policy file for you.
* You can follow the JCE policy file installation instructions in the README.txt file included in the jce_policy-x.zip file.

**Step 4:** (Optional) To verify the type of encryption used in your cluster:<br>
* On the local KDC host, type this command in the kadmin.local or kadmin shell to create a test principal:
<code>kadmin: addprinc test</code><br>
* On a cluster host, type this command to start a Kerberos session as test:
<code># kinit test</code><br>
* On a cluster host, type this command to view the encryption type in use:
<code># klist -e</code><br>
* If AES is being used, output like the following is displayed after you type the klist command (note that AES-256 is included in the output):
```bash
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: test@SEBC.SIN

Valid starting     Expires            Service principal
05/19/15 13:25:04  05/20/15 13:25:04  krbtgt/SEBC.SIN@SEBC.SIN
    Etype (skey, tkt): AES-256 CTS mode with 96-bit SHA-1 HMAC, AES-256 CTS mode with 96-bit SHA-1 HMAC
```

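The check above is easy to script. A sketch only: the sample string below stands in for real `klist -e` output, so on a kerberized host you would replace it with `klist_out=$(klist -e)`:

```shell
#!/bin/sh
# Sketch: scan klist output for AES-256. The sample string stands in for the
# real output of `klist -e` (this is a dry run, not a live check).
klist_out='Etype (skey, tkt): AES-256 CTS mode with 96-bit SHA-1 HMAC'
case "$klist_out" in
  *AES-256*) echo "AES-256 encryption is in use" ;;
  *)         echo "AES-256 not found in ticket cache" ;;
esac
```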
**Step 5:** Configure the Kerberos settings: navigate to "**Administration**" -> "**Security**" -> "**Enable Kerberos**"<br>
* KDC Type: MIT KDC
* KDC Server Host: **the FQDN of the KDC server**
* Kerberos Security Realm: **SEBC.SIN**
* Kerberos Encryption Type

**Step 6:** Do **NOT** select "**Manage krb5.conf through Cloudera Manager**" when asked. Click **Continue**<br>

**Step 7:** Import KDC Account Manager Credentials<br>
* Enter the **username** and **password** for a user that can create principals for the CDH cluster in the KDC. This is the user/principal you created in **Step 1**: Create a Kerberos Principal for the Cloudera Manager Server

**Step 8:** (Optional) Configure custom Kerberos principals<br>

**Step 9:** Configure HDFS DataNode ports<br>
* Use the checkbox to confirm you are ready to restart the cluster. Click **Continue**

**Step 10:** Enable Kerberos<br>
* This page lets you track the wizard's progress as it stops all services on your cluster, deploys krb5.conf, generates keytabs for the CDH services, deploys the client configuration, and finally restarts all services. Click **Continue**

**Step 11:** **Congratulations**<br>
* The final page lists the cluster(s) for which Kerberos has been successfully enabled. Click **Finish** to return to the Cloudera Manager Admin Console home page

## <center> <a name="intro_4"/> Create the HDFS Superuser
**Step 1:** In the kadmin.local or kadmin shell, type the following command to create a Kerberos principal called hdfs<br>
<code>kadmin: addprinc [email protected]</code><br>
* **Note:** This command prompts you to create a password for the hdfs principal. Use a strong password, because access to this principal provides superuser access to all of the files in HDFS.

**Step 2:** To run commands as the HDFS superuser, you must obtain Kerberos credentials for the hdfs principal. To do so, run the following command and provide the appropriate password when prompted:<br>
<code># kinit [email protected]</code><br>
Alternatively, if you have an hdfs keytab file such as *hdfs.keytab*, you can authenticate without a password:<br>
<code># kinit -k -t hdfs.keytab hdfs</code><p>

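It is easy to forget which principal is currently cached before running superuser commands. A small guard sketch; the sample string stands in for real `klist` output (on a live host you would use `klist_out=$(klist 2>/dev/null)`):

```shell
#!/bin/sh
# Sketch: confirm the cached principal looks like hdfs before acting as the
# HDFS superuser. The sample string stands in for real `klist` output.
klist_out='Default principal: hdfs@SEBC.SIN'
case "$klist_out" in
  *"Default principal: hdfs@"*) echo "hdfs credentials active" ;;
  *)                            echo "run kinit hdfs first" ;;
esac
```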
## <center> <a name="intro_5"/> Create a Kerberos Principal and Prepare the Cluster for Each User Account
**Step 1:** In the kadmin.local or kadmin shell, use the following command to create a user principal, replacing SEBC.SIN with the name of your realm and rainy with a username:<br>
```bash
kadmin: addprinc [email protected]

# Enter and re-enter a password when prompted
```
OR<br>
```bash
# kadmin.local -q "addprinc -randkey rainy"
# kadmin.local -q "xst -norandkey -k rainy.keytab [email protected]"
# kinit -k -t rainy.keytab rainy
# klist -e
```
If you want to destroy the Kerberos ticket, type "kdestroy"

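When there are many user accounts, the `-randkey` variant is easy to loop over. A dry-run sketch with placeholder usernames (the commands are printed, not executed, so you can review them first):

```shell
#!/bin/sh
# Dry run: print a principal-creation command for each user account.
# Usernames are placeholders; run the printed commands once they look right.
for user in rainy alice bob; do
  printf 'kadmin.local -q "addprinc -randkey %s"\n' "$user"
done
```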
**Step 2:** Create a user directory under */user* on HDFS for each user account, and change the owner and group of that directory to the user. First log in as the *hdfs* user:<br>
<code># kinit hdfs</code><br>
<code># hadoop fs -mkdir /user/rainy</code><br>
<code># hadoop fs -chown rainy /user/rainy</code><p>

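The same per-user pattern can be scripted. A dry-run sketch with placeholder usernames; the loop prints the commands, which you would then run (or pipe to a shell) while holding hdfs credentials:

```shell
#!/bin/sh
# Dry run: print the HDFS home-directory setup for each user account.
# Usernames are placeholders; run the printed commands as the hdfs superuser.
for user in rainy alice bob; do
  printf 'hadoop fs -mkdir /user/%s\n' "$user"
  printf 'hadoop fs -chown %s /user/%s\n' "$user" "$user"
done
```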
**Step 3:** Create the user principal and authenticate the user<br>
* Create the user principal: <code># kadmin.local -q "addprinc -randkey rainy"</code><br>
* Retrieve the keytab file: <code># kadmin.local -q "xst -norandkey -k rainy.keytab [email protected]"</code><br>
* Authenticate the user: <code># kinit -k -t rainy.keytab rainy</code><br>
* List the authenticated user's tickets: <code># klist</code><br>
* If you want to destroy the Kerberos ticket, type "kdestroy"<p>

## <center> <a name="intro_6"/> Verify that Kerberos Security is Working
**Step 1:** (Optional) If the Linux box is already integrated with AD (or another LDAP service) for authentication, skip this step. Make sure all hosts in the cluster have a Unix user account with the same name as the first component of the user's principal name. Also, to allow users to submit jobs, review the MapReduce configuration for **banned.users** and **min.user.id**: usually **banned.users** is set to mapred, hdfs, and bin to prevent jobs from being submitted via those user accounts, and the default for the **min.user.id** property is 1000 to prevent jobs from being submitted with a user ID less than 1000. You can change these two settings if necessary.<br>

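A small sketch for checking the Unix accounts described above; the usernames are placeholders for the first components of your principals, and `getent` is assumed to be available (standard on Linux):

```shell
#!/bin/sh
# Sketch: report whether each expected Unix account exists on this host.
# Usernames are placeholders for the first components of your principals.
for user in rainy alice; do
  if getent passwd "$user" > /dev/null 2>&1; then
    echo "${user}: present"
  else
    echo "${user}: MISSING"
  fi
done
```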
**Step 2:** Acquire Kerberos credentials for your user account.<br>
```bash
# kinit [email protected]
```

**Step 3:** Enter a password when prompted.<br>

**Step 4:** Submit a sample pi calculation as a test MapReduce job. Use the following command if you use a parcel-based setup for Cloudera Manager:<br>
```bash
# hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 10 10000
Number of Maps  = 10
Samples per Map = 10000
...
Job Finished in 38.572 seconds
Estimated value of Pi is 3.14120000000000000000
```

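For intuition about what the example job computes: it estimates pi from the fraction of sample points that land inside the quarter circle of the unit square. The sketch below does the same deterministically on a midpoint grid with plain `awk`; it is an illustration only, not the Hadoop implementation (which samples points from a Halton sequence):

```shell
#!/bin/sh
# Estimate pi by counting grid-cell midpoints inside the quarter circle.
awk 'BEGIN {
  n = 200; inside = 0
  for (i = 0; i < n; i++)
    for (j = 0; j < n; j++) {
      x = (i + 0.5) / n; y = (j + 0.5) / n   # cell midpoint in the unit square
      if (x * x + y * y <= 1) inside++       # inside the quarter circle?
    }
  # fraction inside ~ pi/4, so multiply by 4
  printf "Estimated value of Pi is %.4f\n", 4 * inside / (n * n)
}'
```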
**Step 5:** You have now verified that Kerberos security is working on your cluster.