There are two supported deployment scenarios, implemented as separate playbooks:

- `iroha-docker-cluster`
- `iroha-standalone-nodes`

These playbooks use different roles and inventories. Each role and inventory file is well documented; in some cases they are also described in the role's README file.
This playbook allows deploying multiple `iroha` peers on a single host. For example, suppose you want to run 21 `iroha` nodes but have only 4 hosts in the same network. You might then decide that:

- 8 `iroha` nodes should be launched on the 1st host (you can vary the number of `iroha` nodes per host)
- 5 `iroha` nodes on the 2nd host
- 4 `iroha` nodes on the 3rd host
- 4 `iroha` nodes on the 4th host

In fact, you can run as many `iroha` peers on one host as you want (but no more than 30, due to the maximum number of docker networks per host) - this is just an example to show the flexibility of this playbook.
It works in the following way:
- [pre-generation phase] - `peers.list` is generated and stored locally in the `{{ filesDir }}` directory
- [generation phase] - all configs are generated using `iroha-cli` and also stored in `{{ filesDir }}`: `genesis.block`, `node$KEY.priv`, `node$KEY.pub`, where `$KEY` is an iroha node ID in the P2P network
- [deliver phase] - a `config.sample` file is generated from the template and delivered to `{{ confPath }}`, which is set to `/opt/docker/iroha/conf$KEY` by default. Files from the generation phase are also delivered to these locations. Then a `docker-compose.yml` file is generated and stored at the `{{ composeDir }}` location (see section 1.4 of this file) for each host (the number of nodes for each host is set by the `nodes_in_region` variable in `playbooks/iroha/group_vars/<group_name>.yml` - see the `inventory/hosts_docker_cluster.list` file for more instructions)
- [deploy phase] - all previously launched `iroha` and `postgres` containers are stopped and removed using the `docker-compose down` command. After that, the `iroha` and `postgres` nodes are started using the `docker-compose up -d` command
NOTE: During the [deploy phase] you may see error messages while the task `stop and remove all docker-compose containers before operations` is executed. This means that you have no running `iroha` and `postgres` containers. This error is handled and will not affect playbook execution.
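For reference, the generated `docker-compose.yml` for a host looks roughly like the sketch below. This is only an illustration of the structure: the real file is rendered from the templates in `roles/iroha-cluster-deploy-node/templates/`, and the service names, port numbers, image tags and mount paths shown here are assumptions, not the template's actual values.

```yaml
# Illustrative sketch only - see roles/iroha-cluster-deploy-node/templates/ for the real template.
version: '2'
services:
  iroha0:                                   # one iroha service per peer on this host (key=0)
    image: hyperledger/iroha:latest         # {{ irohaDockerImage }}:{{ irohaDockerImageTag }} (assumed values)
    ports:
      - "50051:50051"                       # torii_port + key (port numbers are placeholders)
      - "10001:10001"                       # internal_port + key (placeholders)
    volumes:
      - /opt/docker/iroha/conf0:/opt/iroha_data   # {{ confPath }} -> {{ containerConfPath }} (assumed mount point)
    depends_on:
      - postgres0
  postgres0:
    image: postgres:9.5                     # {{ dbDockerImage }}:{{ dbDockerImageTag }}
    environment:
      POSTGRES_USER: postgres               # {{ postgresUser }} (placeholder)
      POSTGRES_PASSWORD: mysecretpassword   # {{ postgresPassword }} (placeholder)
```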
Let's discuss how it works in detail.
```ini
[iroha-east]
iroha-bench1 ansible_host=0.0.0.0 ansible_user=root key=0

[iroha-west]
iroha-bench2 ansible_host=0.0.0.0 ansible_user=root key=8

[iroha-south]
iroha-bench3 ansible_host=0.0.0.0 ansible_user=root key=13

[iroha-north]
iroha-bench4 ansible_host=0.0.0.0 ansible_user=root key=17
```
As you can see, a basic host entry in a group contains the `hostname`, `ansible_host <ip>`, `ansible_user`, and `key` fields.

`key` is a node ID in the iroha network. This value is used to pass only the node-specific keypair to the `iroha` node being started. In this particular playbook, this value is used as the starting point of the peer ID count for the host.
`nodes_in_region` is the number of `iroha` nodes running on each host.
The values `key` and `nodes_in_region` are used in the following manner:

- for host iroha-bench1 we have 8 iroha peers. The first peer ID will be `key=0`, the second `key=1`, and so on up to `key=7`
- for host iroha-bench2 we have another 5 iroha peers. Their IDs will go from 8 to 12.
- for host iroha-bench3 we want to run 4 iroha peers. Their IDs will go from 13 to 16.
- for host iroha-bench4 we want to run 4 iroha peers. Their IDs will go from 17 to 20.
The `nodes_in_region` variable can be set in `playbooks/iroha-docker-cluster/group_vars/<group_name>.yml`; otherwise the default value from `playbooks/iroha-docker-cluster/group_vars/all.yml` will be used.
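For instance, to get the 8/5/4/4 layout from the example above, one could create a group_vars file per inventory group (the file name must match the group name; this is a hypothetical sketch, not a file shipped with the repository):

```yaml
# playbooks/iroha-docker-cluster/group_vars/iroha-east.yml (hypothetical)
nodes_in_region: 8
```

and similarly `iroha-west.yml` with `nodes_in_region: 5`, and `iroha-south.yml` and `iroha-north.yml` with `nodes_in_region: 4`.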
There is no need to describe them as they are generated automatically. Port management is also automated.

If you want to see how it works, look at `roles/iroha-cluster-deploy-node/tasks/ubuntu.yml` and the templates in `roles/iroha-cluster-deploy-node/templates/`.
This section provides the full list of variables used in the playbook.
`playbooks/iroha-docker-cluster/group_vars/all.yml`:

- `confPath` - config files directory on the target host
- `filesDir` - local directory with files generated by `iroha-cli`
- `composeDir` - `docker-compose.yml` file location on the target
- `torii_port` - torii port start value
- `internal_port` - iroha port start value
- `nodes_in_region` - default of 4 iroha nodes on each target host
If you want everything to work from scratch, these variables should not be changed.
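As a rough sketch, such an `all.yml` might look like the following. Only `nodes_in_region: 4` and the `/opt/docker/iroha/...` conf path are stated in this document; the other directories and port numbers below are placeholders, so check the actual file in the repository before relying on them.

```yaml
# Placeholder values - consult playbooks/iroha-docker-cluster/group_vars/all.yml for the real defaults.
confPath: /opt/docker/iroha/conf     # the node key is appended per peer, e.g. /opt/docker/iroha/conf0
filesDir: /tmp/iroha-files           # placeholder: local directory for generated keys and genesis.block
composeDir: /opt/docker/iroha        # placeholder: where docker-compose.yml is stored on the target
torii_port: 50051                    # placeholder torii start port
internal_port: 10001                 # placeholder internal (p2p) start port
nodes_in_region: 4                   # default number of iroha nodes per host
```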
`roles/iroha-cluster-config-gen/defaults/main.yml`:

- `filesDir` - local directory on localhost where keys and genesis.block files will be stored after the pre-generation and generation phases (if you want to change it, do it here - this value has higher precedence)
`roles/iroha-cluster-deploy-node/defaults/main.yml`:

- `postgresName` - docker container name of postgres
- `postgresPort` - docker container port exposed by postgres
- `postgresUser` - postgres username
- `postgresPassword` - postgres password
- `iroha_net` - prefix name of the docker network
- `containerConfPath` - path to the folder with config files inside the docker container (mount point for the docker volume)
- `irohaDockerImage` - `iroha` docker image name
- `irohaDockerImageTag` - `iroha` docker image tag
- `dbDockerImage` - image name for `postgres`
- `dbDockerImageTag` - image tag for `postgres`
After the `hosts_docker_cluster.list` inventory file is configured, you can launch the playbook:

```sh
ansible-playbook -i inventory/hosts_docker_cluster.list playbooks/iroha-docker-cluster/iroha-deploy.yml --private-key=~/.ssh/<key>
```

where you should specify your SSH key.
NOTE: you might notice the tags property defined in `playbooks/iroha-docker-cluster/iroha-deploy.yml`:

```yaml
- { role: iroha-cluster-deploy-node, tags: ["deliver", "deploy"] }
```

Tags are used to separate tasks in case you want to run only some of them without changing the role. In this case, if you exclude the `"deploy"` tag, only the configuration files will be delivered. After that, running the playbook with only

```yaml
- hosts: all
  gather_facts: True
  roles:
    - { role: iroha-cluster-deploy-node, tags: ["deploy"] }
```

will start the `iroha` nodes without changing the configuration. This is just an option for flexibility; there is no need to use it.
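For completeness, the deliver-only variant mentioned above (the same roles entry with the `"deploy"` tag excluded) would look like this sketch:

```yaml
- hosts: all
  gather_facts: True
  roles:
    - { role: iroha-cluster-deploy-node, tags: ["deliver"] }
```

Ansible's standard `--tags` and `--skip-tags` command-line options can achieve a similar selection without editing the playbook, although the exact behaviour depends on how the tasks inside the roles are tagged.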
- `iroha-cli` must be installed and accessible via the PATH variable (e.g. placed in /usr/bin). This is required because keys and `genesis.block` are generated on your local host and stored in the `{{ filesDir }}` folder.
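A quick way to check this requirement on the control machine before running the playbook (this check is not part of the playbook itself):

```sh
command -v iroha-cli   # should print the binary's location, e.g. /usr/bin/iroha-cli
```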
This playbook allows running an iroha cluster by delivering the previously generated `genesis.block`, a keypair for each node, and `config.sample` to the target hosts (a single `iroha` node per host). It runs `iroha` and `postgres:9.5` in docker containers.
It works in the following way:
- [pre-generation phase] - `peers.list` is generated and stored locally in the `{{ filesDir }}` directory
- [generation phase] - all configs are generated using `iroha-cli` and also stored in `{{ filesDir }}`: `genesis.block`, `node$KEY.priv`, `node$KEY.pub`, where `$KEY` is an iroha node ID in the P2P network
- [deliver phase] - a `config.sample` file is generated from the template and delivered to `{{ confPath }}`, which is set to `/opt/docker/iroha/conf` by default. Files from the generation phase are also delivered to these locations
- [deploy phase] - all previously launched `iroha` and `postgres` containers are stopped and removed, then the images are updated using the `docker pull` command, and after that `iroha` and `postgres` are started using the `docker run` command
NOTE: During the [deploy phase] you may see error messages while the task `Stop and remove previous running docker containers` is executed. This means that you have neither running `iroha` and `postgres` containers nor an existing `docker-compose.yml` file. This error is handled and will not affect playbook execution.
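Conceptually, the deploy phase on each target host boils down to something like the commands below. This is a hedged sketch only: the actual tasks live in `roles/iroha-standalone-deploy-node/tasks/`, and the image names, network name, credentials, ports and mount path used here are assumptions derived from the variable names, not the role's real values.

```sh
# Illustrative only - see roles/iroha-standalone-deploy-node/tasks/ubuntu.yml for the real tasks.
# {{ irohaDockerImage }}:{{ irohaDockerImageTag }} and {{ dbDockerImage }}:{{ dbDockerImageTag }}
docker pull hyperledger/iroha:latest
docker pull postgres:9.5

# {{ iroha_net }} - docker network shared by the two containers
docker network create iroha_net

# {{ postgresName }}, {{ postgresUser }}, {{ postgresPassword }} (placeholders)
docker run -d --name iroha-postgres --network iroha_net \
  -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=mysecretpassword postgres:9.5

# mount {{ confPath }} (/opt/docker/iroha/conf) to {{ containerConfPath }} (assumed /opt/iroha_data)
# and expose the torii and internal ports (port numbers are placeholders)
docker run -d --name iroha --network iroha_net \
  -v /opt/docker/iroha/conf:/opt/iroha_data \
  -p 50051:50051 -p 10001:10001 \
  hyperledger/iroha:latest
```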
Let's discuss how it works in detail.
```ini
[iroha-nodes]
iroha-1 ansible_host=0.0.0.0 ansible_user=root key=0
iroha-2 ansible_host=0.0.0.0 ansible_user=root key=1
iroha-3 ansible_host=0.0.0.0 ansible_user=root key=2
```
As you can see, a basic host entry in a group contains the `hostname`, `ansible_host <ip>`, `ansible_user`, and `key` fields.

`key` is a node ID in the iroha network.
The value of `key` is used in the following manner:

- for host iroha-1 the peer ID will be `key=0`
- for host iroha-2 the peer ID will be `key=1`
- each following host in the list should increment this value

You can use multiple groups of hosts with different names. The only requirement is that the value of `key` keeps increasing throughout the whole list of hosts, as shown in the sketch below.
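For example, a hypothetical `hosts_standalone_nodes.list` with two groups could look like this (group and host names are made up; note that `key` keeps increasing across the groups):

```ini
[iroha-nodes-dc1]
iroha-1 ansible_host=0.0.0.0 ansible_user=root key=0
iroha-2 ansible_host=0.0.0.0 ansible_user=root key=1

[iroha-nodes-dc2]
iroha-3 ansible_host=0.0.0.0 ansible_user=root key=2
iroha-4 ansible_host=0.0.0.0 ansible_user=root key=3
```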
There is no need to describe them as they are generated automatically. Port management is also automated.

If you want to see how it works, look at `roles/iroha-standalone-deploy-node/tasks/ubuntu.yml` and the templates in `roles/iroha-standalone-deploy-node/templates/`.
This section provides the full list of variables used in the playbook.
`playbooks/iroha-docker-cluster/group_vars/all.yml`:

- `confPath` - config files directory on the target host
- `filesDir` - local directory with files generated by `iroha-cli`
- `torii_port` - torii port start value
- `internal_port` - iroha port start value
If you want everything to work from scratch, these variables should not be changed.
`roles/iroha-cluster-config-gen/defaults/main.yml`:

- `filesDir` - local directory on localhost where keys and genesis.block files will be stored after the pre-generation and generation phases (if you want to change it, do it here - this value has higher precedence)
`roles/iroha-cluster-deploy-node/defaults/main.yml`:

- `postgresName` - docker container name of postgres
- `postgresPort` - docker container port exposed by postgres
- `postgresUser` - postgres username
- `postgresPassword` - postgres password
- `iroha_net` - prefix name of the docker network
- `containerConfPath` - path to the folder with config files inside the docker container (mount point for the docker volume)
- `irohaDockerImage` - `iroha` docker image name
- `irohaDockerImageTag` - `iroha` docker image tag
- `dbDockerImage` - image name for `postgres`
- `dbDockerImageTag` - image tag for `postgres`
After the `hosts_standalone_nodes.list` inventory file is configured, you can launch the playbook:

```sh
ansible-playbook -i inventory/hosts_standalone_nodes.list playbooks/iroha-standalone-nodes/iroha-deploy.yml --private-key=~/.ssh/<key>
```

where you should specify your SSH key.
- `iroha-cli` must be installed and accessible via the PATH variable (e.g. placed in /usr/bin). This is required because keys and `genesis.block` are generated on your local host and stored in /tmp/iroha-bench.