fix: Bad code fence indenting in ordered lists
nschonni committed Nov 24, 2020
1 parent e0cae0a commit 8506c52
Showing 5 changed files with 146 additions and 128 deletions.
@@ -57,32 +57,36 @@ Before you define a secondary datacenter for your passive nodes, ensure that you
mysql-master = <em>HOSTNAME</em>
redis-master = <em>HOSTNAME</em>
<strong>primary-datacenter = default</strong>
```

- Optionally, change the name of the primary datacenter to something more descriptive or accurate by editing the value of `primary-datacenter`.
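
  For example, to use a hypothetical, more descriptive name in place of `default`:

  ```
  primary-datacenter = us-east-primary
  ```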

4. {% data reusables.enterprise_clustering.configuration-file-heading %} Under each node's heading, add a new key-value pair to assign the node to a datacenter. Use the same value as `primary-datacenter` from step 3 above. For example, if you want to use the default name (`default`), add the following key-value pair to the section for each node.
```
datacenter = default
```
When you're done, the section for each node in the cluster configuration file should look like the following example. {% data reusables.enterprise_clustering.key-value-pair-order-irrelevant %}

```shell
[cluster "<em>HOSTNAME</em>"]
<strong>datacenter = default</strong>
hostname = <em>HOSTNAME</em>
ipv4 = <em>IP ADDRESS</em>
...
```

{% note %}

**Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.

```
consul-datacenter = primary
```

{% endnote %}

{% data reusables.enterprise_clustering.apply-configuration %}

@@ -103,31 +107,37 @@ For an example configuration, see "[Example configuration](#example-configuratio

1. For each node in your cluster, provision a matching virtual machine with identical specifications, running the same version of {% data variables.product.prodname_ghe_server %}. Note the IPv4 address and hostname for each new cluster node. For more information, see "[Prerequisites](#prerequisites)."

{% note %}

**Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.
{% endnote %}
{% data reusables.enterprise_clustering.ssh-to-a-node %}
3. Back up your existing cluster configuration.
```
cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
```
4. Create a copy of your existing cluster configuration file in a temporary location, like _/home/admin/cluster-passive.conf_. Delete unique key-value pairs for IP addresses (`ipv*`), UUIDs (`uuid`), and public keys for WireGuard (`wireguard-pubkey`).
```
grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
```
5. Remove the `[cluster]` section from the temporary cluster configuration file that you copied in the previous step.
```
git config -f ~/cluster-passive.conf --remove-section cluster
```
6. Decide on a name for the secondary datacenter where you provisioned your passive nodes, then update the temporary cluster configuration file with the new datacenter name. Replace `SECONDARY` with the name you choose.
```shell
sed -i 's/datacenter = default/datacenter = <em>SECONDARY</em>/g' ~/cluster-passive.conf
```
7. Decide on a pattern for the passive nodes' hostnames.
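
   For example, one workable pattern (an assumption here, not a requirement) is to append a `-replica` suffix to each active node's hostname:

   ```
   # Hypothetical naming pattern
   ghe-data-node-1  ->  ghe-data-node-1-replica
   ghe-data-node-2  ->  ghe-data-node-2-replica
   ```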

@@ -140,7 +150,7 @@ For an example configuration, see "[Example configuration](#example-configuratio
8. Open the temporary cluster configuration file from step 4 in a text editor. For example, you can use Vim.

```shell
sudo vim ~/cluster-passive.conf
```

9. In each section within the temporary cluster configuration file, update the node's configuration. {% data reusables.enterprise_clustering.configuration-file-heading %}
@@ -150,37 +160,37 @@ For an example configuration, see "[Example configuration](#example-configuratio
- Add a new key-value pair, `replica = enabled`.
```shell
[cluster "<em>NEW PASSIVE NODE HOSTNAME</em>"]
...
hostname = <em>NEW PASSIVE NODE HOSTNAME</em>
ipv4 = <em>NEW PASSIVE NODE IPV4 ADDRESS</em>
<strong>replica = enabled</strong>
...
```
10. Append the contents of the temporary cluster configuration file that you created in step 4 to the active configuration file.
```shell
cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
```
11. Designate the primary MySQL and Redis nodes in the secondary datacenter. Replace `REPLICA MYSQL PRIMARY HOSTNAME` and `REPLICA REDIS PRIMARY HOSTNAME` with the hostnames of the passive nodes that you provisioned to match your existing MySQL and Redis primaries.
```shell
git config -f /data/user/common/cluster.conf cluster.mysql-master-replica <em>REPLICA MYSQL PRIMARY HOSTNAME</em>
git config -f /data/user/common/cluster.conf cluster.redis-master-replica <em>REPLICA REDIS PRIMARY HOSTNAME</em>
```
12. Enable MySQL to fail over automatically when you fail over to the passive replica nodes.
```shell
git config -f /data/user/common/cluster.conf cluster.mysql-auto-failover true
```
{% warning %}
**Warning**: Review your cluster configuration file before proceeding.
- In the top-level `[cluster]` section, ensure that the values for `mysql-master-replica` and `redis-master-replica` are the correct hostnames for the passive nodes in the secondary datacenter that will serve as the MySQL and Redis primaries after a failover.
- In each section for an active node named <code>[cluster "<em>ACTIVE NODE HOSTNAME</em>"]</code>, double-check the following key-value pairs.
@@ -194,9 +204,9 @@ For an example configuration, see "[Example configuration](#example-configuratio
- `replica` should be configured as `enabled`.
- Take the opportunity to remove sections for offline nodes that are no longer in use.
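
For instance, a matching pair of active and passive node sections might look like the following sketch (hostnames are hypothetical):

```
[cluster "ghe-data-node-1"]
datacenter = default
...

[cluster "ghe-data-node-1-replica"]
datacenter = <em>SECONDARY</em>
replica = enabled
...
```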

To review an example configuration, see "[Example configuration](#example-configuration)."

{% endwarning %}

13. Initialize the new cluster configuration. {% data reusables.enterprise.use-a-multiplexer %}

@@ -207,7 +217,7 @@ For an example configuration, see "[Example configuration](#example-configuratio
14. After the initialization finishes, {% data variables.product.prodname_ghe_server %} displays the following message.

```shell
Finished cluster initialization
```

{% data reusables.enterprise_clustering.apply-configuration %}
@@ -294,19 +304,27 @@ You can monitor the progress on any node in the cluster, using command-line tool
- Monitor replication of databases:
```
/usr/local/share/enterprise/ghe-cluster-status-mysql
```
- Monitor replication of repository and Gist data:
```
ghe-spokes status
```
- Monitor replication of attachment and LFS data:
```
ghe-storage replication-status
```
- Monitor replication of Pages data:
```
ghe-dpages replication-status
```
You can use `ghe-cluster-status` to review the overall health of your cluster. For more information, see "[Command-line utilities](/enterprise/admin/configuration/command-line-utilities#ghe-cluster-status)."
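
For example, over SSH on any node in the cluster:

```shell
$ ghe-cluster-status
```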
@@ -27,7 +27,7 @@ Before launching {% data variables.product.product_location %} on Google Cloud P
{% data variables.product.prodname_ghe_server %} is supported on the following Google Compute Engine (GCE) machine types. For more information, see [the Google Cloud Platform machine types article](https://cloud.google.com/compute/docs/machine-types).

| High-memory |
| ------------- |
| n1-highmem-4 |
| n1-highmem-8 |
| n1-highmem-16 |
@@ -54,7 +54,7 @@ Based on your user license count, we recommend these machine types.
1. Using the [gcloud compute](https://cloud.google.com/compute/docs/gcloud-compute/) command-line tool, list the public {% data variables.product.prodname_ghe_server %} images:
```shell
$ gcloud compute images list --project github-enterprise-public --no-standard-images
```

2. Take note of the image name for the latest GCE image of {% data variables.product.prodname_ghe_server %}.
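
   If you want to isolate the newest image name directly, the standard `gcloud` list flags can help; this sketch assumes GHES image names contain `github-enterprise`:

   ```shell
   $ gcloud compute images list --project github-enterprise-public --no-standard-images \
       --filter="name~github-enterprise" --sort-by=~name --limit=1
   ```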

@@ -63,18 +63,18 @@
GCE virtual machines are created as a member of a network, which has a firewall. For the network associated with the {% data variables.product.prodname_ghe_server %} VM, you'll need to configure the firewall to allow the required ports listed in the table below. For more information about firewall rules on Google Cloud Platform, see the Google guide "[Firewall Rules Overview](https://cloud.google.com/vpc/docs/firewalls)."

1. Using the gcloud compute command-line tool, create the network. For more information, see "[gcloud compute networks create](https://cloud.google.com/sdk/gcloud/reference/compute/networks/create)" in the Google documentation.
```shell
$ gcloud compute networks create <em>NETWORK-NAME</em> --subnet-mode auto
```
2. Create a firewall rule for each of the ports in the table below. For more information, see "[gcloud compute firewall-rules](https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/)" in the Google documentation.
```shell
$ gcloud compute firewall-rules create <em>RULE-NAME</em> \
--network <em>NETWORK-NAME</em> \
--allow tcp:22,tcp:25,tcp:80,tcp:122,udp:161,tcp:443,udp:1194,tcp:8080,tcp:8443,tcp:9418,icmp
```
This table identifies the required ports and what each port is used for.

{% data reusables.enterprise_installation.necessary_ports %}

### Allocating a static IP and assigning it to the VM
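
For example, reserving a static external IP might look like this minimal sketch (the address name and region are hypothetical):

```shell
$ gcloud compute addresses create ghe-address --region us-central1
```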

@@ -87,21 +87,21 @@ In production High Availability configurations, both primary and replica applian
To create the {% data variables.product.prodname_ghe_server %} instance, you'll need to create a GCE instance with your {% data variables.product.prodname_ghe_server %} image and attach an additional storage volume for your instance data. For more information, see "[Hardware considerations](#hardware-considerations)."

1. Using the gcloud compute command-line tool, create a data disk to use as an attached storage volume for your instance data, and configure the size based on your user license count. For more information, see "[gcloud compute disks create](https://cloud.google.com/sdk/gcloud/reference/compute/disks/create)" in the Google documentation.
```shell
$ gcloud compute disks create <em>DATA-DISK-NAME</em> --size <em>DATA-DISK-SIZE</em> --type <em>DATA-DISK-TYPE</em> --zone <em>ZONE</em>
```

2. Then create an instance using the name of the {% data variables.product.prodname_ghe_server %} image you selected, and attach the data disk. For more information, see "[gcloud compute instances create](https://cloud.google.com/sdk/gcloud/reference/compute/instances/create)" in the Google documentation.
```shell
$ gcloud compute instances create <em>INSTANCE-NAME</em> \
--machine-type n1-standard-8 \
--image <em>GITHUB-ENTERPRISE-IMAGE-NAME</em> \
--disk name=<em>DATA-DISK-NAME</em> \
--metadata serial-port-enable=1 \
--zone <em>ZONE</em> \
--network <em>NETWORK-NAME</em> \
--image-project github-enterprise-public
```

### Configuring the instance

36 changes: 18 additions & 18 deletions content/admin/policies/creating-a-pre-receive-hook-environment.md
@@ -21,10 +21,10 @@ You can use a Linux container management tool to build a pre-receive hook enviro
{% data reusables.linux.ensure-docker %}
2. Create the file `Dockerfile.alpine-3.3` that contains this information:

```
FROM gliderlabs/alpine:3.3
RUN apk add --no-cache git bash
```
3. From the working directory that contains `Dockerfile.alpine-3.3`, build an image:

@@ -36,37 +36,37 @@ You can use a Linux container management tool to build a pre-receive hook enviro

```shell
> ---> Using cache
> ---> 0250ab3be9c5
> Successfully built 0250ab3be9c5
```
4. Create a container:

```shell
$ docker create --name pre-receive.alpine-3.3 pre-receive.alpine-3.3 /bin/true
```
5. Export the Docker container to a `gzip` compressed `tar` file:

```shell
$ docker export pre-receive.alpine-3.3 | gzip > alpine-3.3.tar.gz
```

This file `alpine-3.3.tar.gz` is ready to be uploaded to the {% data variables.product.prodname_ghe_server %} appliance.
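
For example, you might copy it over the appliance's administrative SSH port, 122 (the hostname is a placeholder):

```shell
$ scp -P 122 alpine-3.3.tar.gz admin@<em>HOSTNAME</em>:/home/admin
```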

### Creating a pre-receive hook environment using chroot

1. Create a Linux `chroot` environment.
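
   For example, one way to build a minimal chroot on a Debian-based host is `debootstrap`; the suite and target path here are assumptions:

   ```shell
   $ sudo debootstrap --variant=minbase stable /path/to/chroot
   ```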
2. Create a `gzip` compressed `tar` file of the `chroot` directory.
```shell
$ cd /path/to/chroot
$ tar -czf /path/to/pre-receive-environment.tar.gz .
```

{% note %}

**Notes:**
- Do not include leading directory paths of files within the tar archive, such as `/path/to/chroot`.
- `/bin/sh` must exist and be executable, as the entry point into the chroot environment.
- Unlike traditional chroots, the `dev` directory is not required by the chroot environment for pre-receive hooks.

{% endnote %}

For more information about creating a chroot environment, see "[Chroot](https://wiki.debian.org/chroot)" from the *Debian Wiki*, "[BasicChroot](https://help.ubuntu.com/community/BasicChroot)" from the *Ubuntu Community Help Wiki*, or "[Installing Alpine Linux in a chroot](http://wiki.alpinelinux.org/wiki/Installing_Alpine_Linux_in_a_chroot)" from the *Alpine Linux Wiki*.

@@ -94,4 +94,4 @@ For more information about creating a chroot environment see "[Chroot](https://w
```shell
admin@ghe-host:~$ ghe-hook-env-create AlpineTestEnv /home/admin/alpine-3.3.tar.gz
> Pre-receive hook environment 'AlpineTestEnv' (2) has been created.
```