Home
A single page to make searching and finding things easier.
pks login -a pks.corp.local -u pks-admin -p VMware1! --skip-ssl-validation
pks create-cluster my-cluster --external-hostname my-cluster.corp.local --plan small
pks get-credentials my-cluster
kubectl create ns planespotter
kubectl config set-context my-cluster --namespace planespotter
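To verify the cluster and the kubectl context (standard PKS CLI and kubectl commands, using the my-cluster name from above):
pks clusters
pks cluster my-cluster
kubectl config use-context my-cluster
kubectl get nodes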
1.1) Login doesn't work: "Credentials were rejected, please try again."
Use UAAC to check the user and password. On the OpsMan VM (ssh as ubuntu, password VMware1!):
uaac target https://pks.corp.local:8443 --skip-ssl-validation
uaac token client get admin -s <PKS Tile->Credentials->PKS Uaa Management Admin Client>
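To confirm the user exists and reset the password if needed (a sketch; uaac users and uaac password set are standard UAAC commands, run with the admin client token obtained above):
uaac users | grep pks-admin
uaac password set pks-admin -p VMware1!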
Run BOSH from the OpsMan VM:
The BOSH client (CLI) is on the OpsMan VM, not on cli-vm.
VM: opsman.corp.local (also shown in the OpsMan Web UI as the BOSH VM); log in as ubuntu, password VMware1!.
Set up the BOSH target/director URL (see below), then run commands such as:
bosh instances --ps
Detail lab: https://github.com/ragsgit/PKS-Ninja/tree/master/LabGuides/PksControlPlaneBosh-CP3546
To run the BOSH CLI, get the BOSH Director VM IP and credentials from the OpsMan UI:
OpsMan -> BOSH Tile -> Status: get the VM IP (e.g. 172.31.0.2)
Credentials -> Director Credentials: copy the password
Set up the BOSH director env and CA cert location (as ENV variables in .profile, or on the command line):
bosh alias-env my-bosh -e 172.31.0.2 --ca-cert /var/tempest/workspaces/default/root_ca_certificate
export BOSH_ENVIRONMENT=my-bosh
bosh -e my-bosh login
Log in as user director with the password copied above (e.g. ydz1jJqLdvwHDtXrBQUqYjStCEv_xMKQ).
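Alternatively, export the standard BOSH CLI environment variables so no interactive login is needed (the secret is the Director Credentials password copied above):
export BOSH_ENVIRONMENT=172.31.0.2
export BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate
export BOSH_CLIENT=director
export BOSH_CLIENT_SECRET=<Director Credentials password>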
# Check deployments
bosh -e my-bosh deployments
bosh instances --ps
bosh tasks --recent
Get the PKS deployment name into an env variable, then execute commands against that deployment.
PKS=$(bosh -e my-bosh deployments | grep ^pivotal | awk '{print $1}')
bosh -d $PKS vms
bosh -d $PKS instances
bosh -d $PKS tasks
bosh -d $PKS tasks -ar
bosh -d $PKS task <ID>
bosh -d $PKS task <ID> --debug
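To pull only the failure lines from a long debug log (plain grep filtering; <ID> is a task ID from the list above):
bosh -d $PKS task <ID> --debug | grep -iE 'error|fail'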
CLUSTER=$(bosh deployments | grep ^service-instance | awk '{print $1}')
bosh -d $CLUSTER vms --vitals
bosh -d $CLUSTER tasks --recent=9
bosh -d $CLUSTER task 91 --debug
bosh -d $CLUSTER ssh master/0
bosh -d $CLUSTER ssh worker/0
bosh -d $CLUSTER logs
bosh -d $CLUSTER cloud-check
bosh -d $CLUSTER releases
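Once on a master or worker node via bosh ssh (above), monit, which supervises jobs on every BOSH-managed VM, shows job health, e.g.:
sudo su -
monit summary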
OpsMan VM: Increase or disable ssh session timeout
edit /etc/ssh/sshd_config
change ClientAliveInterval to 0 or some high value
restart the ssh service
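For example (a sketch for Ubuntu; if ClientAliveInterval is commented out, edit the file by hand instead of using sed):
sudo sed -i 's/^ClientAliveInterval.*/ClientAliveInterval 0/' /etc/ssh/sshd_config
sudo service ssh restart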
For 'kubectl config use-context my-cluster' to work, an alias for the cluster's master node has to be manually added to the /etc/hosts file, like:
127.0.0.1       localhost
10.40.14.34     my-cluster.corp.local my-cluster
127.0.1.1       cli-vm.corp.local cli-vm
192.168.100.100 cli-vm.corp.local cli-vm
....
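One way to append the entry from the shell (IP and hostname taken from the example above):
echo '10.40.14.34 my-cluster.corp.local my-cluster' | sudo tee -a /etc/hosts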
From cli-vm, ssh to OpsMan (the IP is in the OpsMan web UI, BOSH Tile -> Status).
In the OpsMan VM, set up the BOSH env and do a BOSH login to run BOSH commands.
To get a shell on the BOSH Director VM itself, do ssh vcap@<director_vm_ip> (e.g. 172.31.0.2).
The vcap password is in BOSH Tile -> Credentials -> VM Credentials.
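To confirm the director is reachable once the env is set up (standard BOSH CLI command):
bosh -e my-bosh env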