CB-Tumblebug (CB-TB for short) is a system for managing multi-cloud infrastructure consisting of resources from multiple cloud service providers. (Cloud-Barista)
- CB-Tumblebug Overview
- CB-Tumblebug Features
- CB-Tumblebug Architecture
- CB-Tumblebug Operation Sequence
- Hot use case of CB-Tumblebug
- Deploy a Multi-Cloud Infra with GPUs and Enjoy multiple LLMs in parallel (YouTube)
- LLM-related scripts
[Note] Development of CB-Tumblebug is ongoing
CB-TB is not v1.0 yet.
We welcome any new suggestions, issues, opinions, and contributors!
Please note that the functionalities of Cloud-Barista are not stable and secure yet.
Be careful if you plan to use the current release in production.
If you have any difficulties in using Cloud-Barista, please let us know.
(Open an issue or join the Cloud-Barista Slack)
[Note] Localization and Globalization of CB-Tumblebug
As an open-source project initiated by Korean members,
we would like to promote participation of Korean contributors during the initial stage of this project.
So, the CB-TB repo will accept the use of the Korean language in its early stages.
However, we hope this project will eventually flourish regardless of the contributors' countries.
So, the maintainers recommend using English at least for the titles of Issues, Pull Requests, and Commits,
while the CB-TB repo accommodates local languages in their contents.
- Linux (recommended: Ubuntu 22.04)
- Golang (recommended: v1.21.6)
Open source packages used in this project
CB-TB welcomes improvements from both new and experienced contributors!
Check out CONTRIBUTING.
- Clone CB-TB repository

  ```bash
  git clone --depth 1 https://github.com/cloud-barista/cb-tumblebug.git $HOME/go/src/github.com/cloud-barista/cb-tumblebug
  cd ~/go/src/github.com/cloud-barista/cb-tumblebug
  ```

  The `--depth 1` option reduces the download size by limiting the commit history. For contributing, it is recommended not to specify this option, or to restore the commit history afterwards using the following command.

  ```bash
  git fetch --unshallow
  ```

  Register aliases for the CB-TB directories (optional action for convenience: `cdtb`, `cdtbsrc`, `cdtbtest`).

  ```bash
  echo "alias cdtb='cd $HOME/go/src/github.com/cloud-barista/cb-tumblebug'" >> ~/.bashrc
  echo "alias cdtbsrc='cd $HOME/go/src/github.com/cloud-barista/cb-tumblebug/src'" >> ~/.bashrc
  echo "alias cdtbtest='cd $HOME/go/src/github.com/cloud-barista/cb-tumblebug/src/testclient/scripts'" >> ~/.bashrc
  source ~/.bashrc
  ```
- Setup required tools

  - Install: git, gcc, make

    ```bash
    sudo apt update
    sudo apt install make gcc git
    ```

  - Install: Golang

    - Check https://golang.org/dl/ and set up Go

      - Download

        ```bash
        wget https://go.dev/dl/go1.21.6.linux-amd64.tar.gz
        sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.21.6.linux-amd64.tar.gz
        ```

      - Set up the environment

        ```bash
        echo 'export PATH=$PATH:/usr/local/go/bin:$HOME/go/bin' >> ~/.bashrc
        echo 'export GOPATH=$HOME/go' >> ~/.bashrc
        source ~/.bashrc
        echo $GOPATH
        go env
        go version
        ```
- Build the Golang source code using the Makefile

  ```bash
  cd ~/go/src/github.com/cloud-barista/cb-tumblebug/src
  make
  ```

  All dependencies will be downloaded automatically by Go.
  The initial build will take some time, but subsequent builds will be faster thanks to the Go build cache.

  Note: To update the Swagger API documentation, run `make swag` in `cb-tumblebug/src/`.

  - The API documentation file will be generated at `cb-tumblebug/src/api/rest/docs/swagger.yaml`
  - The API documentation can be viewed in a web browser at http://localhost:1323/tumblebug/swagger/ (provided while CB-TB is running)
  - Detailed information on how to update the API
- Run CB-Spider

  CB-Tumblebug requires CB-Spider to control multiple cloud service providers.

  - (Recommended method) Run the CB-Spider container using the CB-TB script (preferably use the specified version; see the quick check after this list)

    ```bash
    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    ./scripts/runSpider.sh
    ```

    Docker must be installed. If it is not installed, you can use the following script (not for production setup)

    ```bash
    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    ./scripts/installDocker.sh
    ```

  - For installation methods other than the container, refer to CB-Spider
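To confirm that the CB-Spider container actually started, here is a quick sketch (the exact container name depends on the script version, so we simply grep the container list):

```bash
# The CB-Spider container should be listed with an 'Up' status
docker ps | grep -i spider
```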
- Clone the repository
- Build and Setup
- Set environment variables required to run CB-TB (in another tab)

  - Check and configure the contents of `cb-tumblebug/conf/setup.env` (CB-TB environment variables; modify as needed)
  - Apply the environment variables to the system

    ```bash
    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    source conf/setup.env
    ```

  - (Note) If needed, automatically set the SELF_ENDPOINT environment variable (an externally accessible address) using a script
    - This is necessary if you want to access and control the Swagger API Dashboard from outside while CB-TB is running

    ```bash
    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    source ./scripts/setPublicIP.sh
    ```

  - Check and configure the contents of `store_conf.yaml` in `cb-tumblebug/conf` (cb-store environment variables; modify as needed)
    - Specify the storetype (NUTSDB or ETCD)
    - When using NUTSDB (local DB), it is necessary to specify the path (by default, `cb-tumblebug/meta_db/dat`)

- Execute the built cb-tumblebug binary by using `make run`

  ```bash
  cd ~/go/src/github.com/cloud-barista/cb-tumblebug/src
  make run
  ```
- Check the available CB-TB docker image tags (https://hub.docker.com/r/cloudbaristaorg/cb-tumblebug/tags)
- Run the container image (two options)

  - Run a script to execute the CB-TB docker image (recommended)

    ```bash
    ./scripts/runTumblebug.sh
    ```

  - Run docker directly

    ```bash
    docker run -p 1323:1323 \
      -v ${HOME}/go/src/github.com/cloud-barista/cb-tumblebug/meta_db:/app/meta_db \
      --name cb-tumblebug \
      cloudbaristaorg/cb-tumblebug:x.x.x
    ```
You will see the following messages:

```
██████╗██████╗ ████████╗██████╗
██╔════╝██╔══██╗ ╚══██╔══╝██╔══██╗
██║ ██████╔╝█████╗██║ ██████╔╝
██║ ██╔══██╗╚════╝██║ ██╔══██╗
╚██████╗██████╔╝ ██║ ██████╔╝
╚═════╝╚═════╝ ╚═╝ ╚═════╝

██████╗ ███████╗ █████╗ ██████╗ ██╗ ██╗
██╔══██╗██╔════╝██╔══██╗██╔══██╗╚██╗ ██╔╝
██████╔╝█████╗ ███████║██║ ██║ ╚████╔╝
██╔══██╗██╔══╝ ██╔══██║██║ ██║ ╚██╔╝
██║ ██║███████╗██║ ██║██████╔╝ ██║
╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝╚═════╝ ╚═╝

Multi-cloud infrastructure management framework
________________________________________________

https://github.com/cloud-barista/cb-tumblebug

Access to API dashboard (username: default / password: default)
http://xxx.xxx.xxx.xxx:1323/tumblebug/api

⇨ http server started on [::]:1323
```
- By default (per `cb-tumblebug/conf/setup.env`), you can find the system log in `cb-tumblebug/log/tumblebug.log` (logging is based on `zerolog`)
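Once the server is running, a quick way to confirm the REST API is reachable is to list namespaces. This is a minimal sketch assuming the default port (1323) and the default API credentials (default / default) from `setup.env`:

```bash
# List namespaces; a valid JSON response (even an empty list) means CB-TB is up
curl -s -u default:default http://localhost:1323/tumblebug/ns
```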
To provision multi-cloud infrastructures with CB-TB, you first need to register the connection information (credentials) for the clouds, as well as commonly used images and specifications.
- Create a `credentials.yaml` file and input your cloud credentials

  - Overview
    - `credentials.yaml` is a file that includes multiple credentials for using the APIs of the clouds supported by CB-TB (AWS, GCP, AZURE, ALIBABA, etc.)
    - It should be located in the `~/.cloud-barista/` directory and securely managed.
    - Refer to the `template.credentials.yaml` for the template

  - Create the `credentials.yaml` file

    Automatically generate the `credentials.yaml` file in the `~/.cloud-barista/` directory using the CB-TB script

    ```bash
    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    ./scripts/init/genCredential.sh
    ```

  - Input credential data

    Put credential data into `~/.cloud-barista/credentials.yaml` (Reference: How to obtain a credential for each CSP)

    ```yaml
    ### Cloud credentials for credential holders (default: admin)
    credentialholder:
      admin:
        alibaba:
          # ClientId(ClientId): client ID of the EIAM application
          # Example: app_mkv7rgt4d7i4u7zqtzev2mxxxx
          ClientId:
          # ClientSecret(ClientSecret): client secret of the EIAM application
          # Example: CSEHDcHcrUKHw1CuxkJEHPveWRXBGqVqRsxxxx
          ClientSecret:
        aws:
          # ClientId(aws_access_key_id)
          # ex: AKIASSSSSSSSSSS56DJH
          ClientId:
          # ClientSecret(aws_secret_access_key)
          # ex: jrcy9y0Psejjfeosifj3/yxYcgadklwihjdljMIQ0
          ClientSecret:
        ...
    ```
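Since `credentials.yaml` holds live cloud keys, it is worth restricting its file permissions after editing (a general precaution, not a CB-TB requirement):

```bash
# Make the credentials file readable by the owner only, then verify
chmod 600 ~/.cloud-barista/credentials.yaml
ls -l ~/.cloud-barista/credentials.yaml
```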
- Register all multi-cloud connection information and common resources

  - How to register

    Refer to the README.md for init.py, and execute the `init.sh` script (enter 'y' at the confirmation prompts). A quick way to verify the result is sketched after this list.

    ```bash
    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    ./scripts/init/init.sh
    ```

    - The credentials in `~/.cloud-barista/credentials.yaml` will be registered automatically (all CSP and region information recorded in `cloudinfo.yaml` will also be registered in the system)
      - Note: You can check the latest regions and zones of each CSP using `update-cloudinfo.py` and review the file for updates. (Contributions to updates are welcome!)
    - Common images and specifications recorded in the `cloudimage.csv` and `cloudspec.csv` files in the `assets` directory will be registered automatically.
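To verify the registration, one option is to query the registered connection configurations through the REST API. A sketch assuming the default endpoint and credentials (the `/tumblebug/connConfig` path follows the Swagger API docs; check your dashboard if your version differs):

```bash
# List the cloud connection configurations created from credentials.yaml
curl -s -u default:default http://localhost:1323/tumblebug/connConfig
```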
- Shutting down the CB-TB & CB-Spider servers

  - CB-Spider: Shut down the server using `ctrl` + `c`
  - CB-TB: Shut down the server using `ctrl` + `c` (when a shutdown event occurs, the system shuts down gracefully: API requests that can be processed within 10 seconds will be completed)
  - In case cleanup is needed due to internal system errors
    - Check and delete resources created through CB-TB
    - Delete CB-TB & CB-Spider metadata using the provided script

      ```bash
      cd ~/go/src/github.com/cloud-barista/cb-tumblebug
      ./scripts/cleanDB.sh
      ```

- Upgrading the CB-TB & CB-Spider versions

  The following cleanup steps are unnecessary if you clearly understand the impact of the upgrade

  - Check and delete resources created through CB-TB
  - Delete CB-TB & CB-Spider metadata

    ```bash
    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    ./scripts/cleanDB.sh
    ```

  - Restart with the upgraded version
- Using CB-TB MapUI (recommended)
- Using CB-TB REST API (recommended)
- Using CB-TB Test Scripts
- With CB-MapUI, you can create, view, and control multi-cloud infra.
  - CB-MapUI is a project that visualizes the deployment of MCIS in a map GUI.
  - Run the CB-MapUI container using the CB-TB script

    ```bash
    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    ./scripts/runMapUI.sh
    ```

  - Access via web browser at http://{HostIP}:1324
- Access the REST API dashboard (http://{HostIP}:1323/tumblebug/api, username: default / password: default)
- Using individual APIs (see the sketch after this list)
  - Create resources required for VM provisioning by using the MCIR (multi-cloud infrastructure resources) management APIs
  - Create, view, control, execute remote commands on, shut down, and delete MCIS using the MCIS (multi-cloud infrastructure service) management APIs
- CB-TB optimal and dynamic provisioning
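A minimal sketch of the individual-API flow (paths and payloads follow the Swagger API docs of this version; adjust host, credentials, and names to your setup — `ns01` here is just an example name):

```bash
# Create a namespace to hold MCIR/MCIS objects
curl -s -X POST -u default:default http://localhost:1323/tumblebug/ns \
  -H 'Content-Type: application/json' \
  -d '{"name": "ns01", "description": "example namespace"}'

# List MCIS in the namespace (empty until an MCIS is created)
curl -s -u default:default http://localhost:1323/tumblebug/ns/ns01/mcis
```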
`src/testclient/scripts/` provides Bash shell-based scripts that simplify and automate the otherwise complex MCIS (MC-Infra) provisioning procedures.
- Step 1: Setup Test Environment
- Step 2: Integrated Tests
- Step 3: Experience Use Cases
- Go to `src/testclient/scripts/`
- Configure `conf.env`
  - Provides basic test information such as CB-Spider and CB-TB server endpoints, cloud regions, test image names, test spec names, etc.
  - Much of the information for various cloud types has already been investigated and filled in, so it can be used without modification. (However, check for charges based on the specified spec.)
    - How to modify the test VM image: `IMAGE_NAME[$IX,$IY]=ami-061eb2b23f9f8839c`
    - How to modify the test VM spec: `SPEC_NAME[$IX,$IY]=m4.4xlarge`
- Configure `testSet.env`
  - Set the cloud and region configurations to be used for MCIS provisioning in a file (you can change the existing `testSet.env` or copy and use it)
  - Specify the types of CSPs to combine (see the sketch after this list)
    - Change the number in `NumCSP=` to specify the total number of CSPs to combine
    - Specify the types of CSPs to combine by rearranging the lines in L15-L24 of the file (use up to the number specified in `NumCSP`)
    - Example: To combine aws and alibaba, set `NumCSP=2` and rearrange `IndexAWS=$((++IX))`, `IndexAlibaba=$((++IX))`
  - Specify the regions of the CSPs to combine
    - Go to each CSP setting item (e.g., `# AWS (Total: 21 Regions)`)
    - Specify the number of regions to configure in `NumRegion[$IndexAWS]=2` (in the example, it is set to 2)
    - Set the desired regions by rearranging the lines of the region list (if `NumRegion[$IndexAWS]=2`, the top 2 listed regions will be selected)
  - Be aware!
    - Creating VMs on public CSPs such as AWS, GCP, Azure, etc. may incur charges.
    - With the default setting of `testSet.env`, TestClouds (`TestCloud01`, `TestCloud02`, `TestCloud03`) will be used to create mock VMs.
    - `TestCloud01`, `TestCloud02`, and `TestCloud03` are not real CSPs; they are used for testing purposes (SSH into the VMs is not supported).
    - In any case, please be aware of cloud usage costs when using public CSPs.
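For illustration, the relevant `testSet.env` lines might look like this after combining aws and alibaba with two AWS regions (an assumed fragment based on the descriptions above; the actual file contains more entries):

```bash
NumCSP=2                  # total number of CSPs to combine

# CSP index order: the top NumCSP entries are used
IndexAWS=$((++IX))
IndexAlibaba=$((++IX))

# AWS (Total: 21 Regions)
NumRegion[$IndexAWS]=2    # the top 2 listed regions will be selected
```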
- You can test the entire process at once by executing `create-all.sh` and `clean-all.sh` included in `src/testclient/scripts/sequentialFullTest/`

  ```
  └── sequentialFullTest                    # Automatic testing from cloud information registration to NS creation, MCIR creation, and MCIS creation
      ├── check-test-config.sh              # Check the multi-cloud infrastructure configuration specified in the current testSet
      ├── create-all.sh                     # Automatic testing from cloud information registration to NS creation, MCIR creation, and MCIS creation
      ├── gen-sshKey.sh                     # Generate SSH key files to access MCIS
      ├── command-mcis.sh                   # Execute remote commands on the created MCIS (multiple VMs)
      ├── deploy-nginx-mcis.sh              # Automatically deploy Nginx on the created MCIS (multiple VMs)
      ├── create-mcis-for-df.sh             # Create MCIS for hosting CB-Dragonfly
      ├── deploy-dragonfly-docker.sh        # Automatically deploy CB-Dragonfly on MCIS and set up the environment
      ├── clean-all.sh                      # Delete all objects in reverse order of creation
      ├── create-cluster-only.sh            # Create a K8s cluster for the multi-cloud infrastructure specified in the testSet
      ├── get-cluster.sh                    # Get K8s cluster information for the multi-cloud infrastructure specified in the testSet
      ├── clean-cluster-only.sh             # Delete the K8s cluster for the multi-cloud infrastructure specified in the testSet
      ├── force-clean-cluster-only.sh       # Force delete the K8s cluster for the multi-cloud infrastructure specified in the testSet if deletion fails
      ├── add-nodegroup.sh                  # Add a new node group to the created K8s cluster
      ├── remove-nodegroup.sh               # Delete the newly created node group in the K8s cluster
      ├── set-nodegroup-autoscaling.sh      # Change the autoscaling setting of the created node group to off
      ├── change-nodegroup-autoscalesize.sh # Change the autoscale size of the created node group
      ├── deploy-weavescope-to-cluster.sh   # Deploy weavescope to the created K8s cluster
      └── executionStatus                   # Logs of the tests performed (information is added when testAll is executed and removed when cleanAll is executed; you can check ongoing tasks)
  ```
- MCIS Creation Test

  ```bash
  ./create-all.sh -n shson -f ../testSetCustom.env  # Create MCIS with the cloud combination configured in ../testSetCustom.env
  ```

  - The process automatically proceeds to check the MCIS creation configuration specified in `../testSetCustom.env`
  - Example of execution result

    ```
    Table: All VMs in the MCIS : cb-shson

    ID                    Status   PublicIP       PrivateIP      CloudType  CloudRegion     CreatedTime
    --                    ------   --------       ---------      ---------  -----------     -----------
    aws-ap-southeast-1-0  Running  xx.250.xx.73   192.168.2.180  aws        ap-southeast-1  2021-09-17 14:59:30
    aws-ca-central-1-0    Running  x.97.xx.230    192.168.4.98   aws        ca-central-1    2021-09-17 14:59:58
    gcp-asia-east1-0      Running  xx.229.xxx.26  192.168.3.2    gcp        asia-east1      2021-09-17 14:59:42

    [DATE: 17/09/2021 15:00:00] [ElapsedTime: 49s (0m:49s)] [Command: ./create-mcis-only.sh all 1 shson ../testSetCustom.env 1]

    [Executed Command List]
    [MCIR:aws-ap-southeast-1(28s)] create-mcir-ns-cloud.sh (MCIR) aws 1 shson ../testSetCustom.env
    [MCIR:aws-ca-central-1(34s)] create-mcir-ns-cloud.sh (MCIR) aws 2 shson ../testSetCustom.env
    [MCIR:gcp-asia-east1(93s)] create-mcir-ns-cloud.sh (MCIR) gcp 1 shson ../testSetCustom.env
    [MCIS:cb-shsonvm4(19s+More)] create-mcis-only.sh (MCIS) all 1 shson ../testSetCustom.env

    [DATE: 17/09/2021 15:00:00] [ElapsedTime: 149s (2m:29s)] [Command: ./create-all.sh -n shson -f ../testSetCustom.env -x 1]
    ```
- MCIS Removal Test (use the same input parameters used for creation)

  ```bash
  ./clean-all.sh -n shson -f ../testSetCustom.env  # Remove the created resources according to ../testSetCustom.env
  ```

  - Be aware!
    - If you created MCIS (VMs) for testing in public clouds, the VMs may incur charges.
    - You need to terminate the MCIS by using `clean-all` to avoid unexpected billing.
    - In any case, please be aware of cloud usage costs when using public CSPs.
- Generate MCIS SSH access keys and access each VM

  ```bash
  ./gen-sshKey.sh -n shson -f ../testSetCustom.env  # Return access keys for all VMs configured in the MCIS
  ```

  - Example of execution result

    ```
    ...
    [GENERATED PRIVATE KEY (PEM, PPK)]
    [MCIS INFO: mc-shson]
    [VMIP]: 13.212.254.59   [MCISID]: mc-shson   [VMID]: aws-ap-southeast-1-0
    ./sshkey-tmp/aws-ap-southeast-1-shson.pem
    ./sshkey-tmp/aws-ap-southeast-1-shson.ppk
    ...
    [SSH COMMAND EXAMPLE]
    [VMIP]: 13.212.254.59   [MCISID]: mc-shson   [VMID]: aws-ap-southeast-1-0
    ssh -i ./sshkey-tmp/aws-ap-southeast-1-shson.pem [email protected] -o StrictHostKeyChecking=no
    ...
    [VMIP]: 35.182.30.37   [MCISID]: mc-shson   [VMID]: aws-ca-central-1-0
    ssh -i ./sshkey-tmp/aws-ca-central-1-shson.pem [email protected] -o StrictHostKeyChecking=no
    ```
- Verify MCIS via SSH remote command execution

  ```bash
  ./command-mcis.sh -n shson -f ../testSetCustom.env  # Execute IP and hostname retrieval for all VMs in the MCIS
  ```
- K8s Cluster Test (WIP: stability work in progress for each CSP)

  ```bash
  ./create-mcir-ns-cloud.sh -n tb -f ../testSet.env                      # Create the MCIR required for K8s cluster creation
  ./create-cluster-only.sh -n tb -f ../testSet.env -x 1 -z 1             # Create a K8s cluster (-x: maximum number of nodes, -z: additional name for the node group and cluster)
  ./get-cluster.sh -n tb -f ../testSet.env -z 1                          # Get K8s cluster information
  ./add-nodegroup.sh -n tb -f ../testSet.env -x 1 -z 1                   # Add a new node group to the K8s cluster
  ./change-nodegroup-autoscalesize.sh -n tb -f ../testSet.env -x 1 -z 1  # Change the autoscale size of the new node group
  ./deploy-weavescope-to-cluster.sh -n tb -f ../testSet.env -y n         # Deploy weavescope to the created cluster
  ./set-nodegroup-autoscaling.sh -n tb -f ../testSet.env -z 1            # Change the autoscaling setting of the new node group to off
  ./remove-nodegroup.sh -n tb -f ../testSet.env -z 1                     # Delete the newly created node group
  ./clean-cluster-only.sh -n tb -f ../testSet.env -z 1                   # Delete the created K8s cluster
  ./force-clean-cluster-only.sh -n tb -f ../testSet.env -z 1             # Force delete the created K8s cluster if deletion fails
  ./clean-mcir-ns-cloud.sh -n tb -f ../testSet.env                       # Delete the created MCIR
  ```
Thanks goes to these wonderful people (emoji key):