To build an OpenShift release, run the make release target on a system with Docker. This creates a build environment image and then executes a cross-platform Go build within it. The build output is copied to _output/releases as a set of tars containing each version. It also builds the openshift/origin-base image, which is the common parent image for all OpenShift Docker images.
$ make release
NOTE: Only committed code is built. Today, all of our container image builds require imagebuilder.
Once a release has been created, it can be pushed:
$ hack/push-release.sh
To cut an official tag release, we generally use the images built by ci.openshift.redhat.com under the devenv_ami job.
- Create a new git tag: git tag vX.X.X -a -m "vX.X.X" HEAD
- Push the tag to GitHub: git push origin --tags, where origin is github.com/openshift/origin.git
- Run the "devenv_ami" job
- Once the images are pushed to the repository, run OS_PUSH_TAG="vX.X.X" hack/push-release.sh. Your tag must match the Git tag.
- Upload the binary artifacts generated by that build to the GitHub release page
- Send an email to the dev list, including the important changes prior to the release.
We generally cut a release before disruptive changes land.
We provide the openshift/origin-release
container in which all of our build
dependencies live, so that one can build a full release of OpenShift without
having to install anything other than a container runtime on their local system.
To run scripts or make
targets from the Origin repo inside of the container,
use:
$ hack/env ${COMMAND}
For instance, to build the oc
binary:
$ hack/env make build WHAT=cmd/oc
The release container works by streaming a copy of the repository into a volume, sharing that volume with the container as its working directory, executing the desired command and streaming the results back to your filesystem.
One can configure what files are uploaded to the container by setting the $OS_BUILD_ENV_EXCLUDE variable. By default, the entire repository is streamed into the container except for anything under _output/.
Similarly, to configure what files are downloaded from the container after the action is taken, use $OS_BUILD_ENV_PRESERVE. By default, _output/local/bin, _output/local/releases, and _output/scripts will be downloaded.
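For example, a hypothetical invocation that narrows what is streamed into and preserved from the container (the specific paths and the whitespace-separated format are illustrative assumptions):
# stream everything except _output/ and docs/, bring back only the built binaries
$ OS_BUILD_ENV_EXCLUDE="_output/ docs/" OS_BUILD_ENV_PRESERVE="_output/local/bin" hack/env make build WHAT=cmd/oc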
While make release and make build will both build all of the binaries required for a full release of OpenShift, it is also possible to build individual binaries. Binary entrypoints are kept under the cmd/ directory and can be selected for building with the WHAT parameter. For instance, to build just oc:
$ make build WHAT=cmd/oc
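Multiple entrypoints can typically be built in a single invocation; a hedged sketch (the space-separated WHAT value is an assumption):
$ make build WHAT="cmd/oc cmd/openshift"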
The make release target also builds the entire suite of images necessary for an OpenShift release. This can take a very long time and will re-build the full RPM during the process. Individual images can be built by ADD-ing updated binaries or files with the hack/build-local-images.py script. For example:
# see help-text
build-local-images.py -h
# build all images
build-local-images.py
# build only the f5-router image
build-local-images.py f5-router
# build with a different image prefix
OS_IMAGE_PREFIX=openshift3/ose build-local-images.py
OpenShift uses three levels of testing - unit tests, integration tests, and end-to-end tests (much like Kubernetes).
Unit tests follow standard Go conventions and are intended to test the behavior and output of a single package in isolation. All code is expected to be easily testable with mock interfaces and stubs, and when they are not it usually means that there's a missing interface or abstraction in the code. A unit test should focus on testing that branches and error conditions are properly returned and that the interface and code flows work as described. Unit tests can depend on other packages but should not depend on other components (an API test should not be writing to etcd).
The unit tests for an entire package should not take more than 0.5s to run, and if they do, are probably not really unit tests or need to be rewritten to avoid sleeps or pauses. Coverage on a unit test should be above 70% unless the units are a special case.
See pkg/template/generator
for examples of unit tests. Unit tests should
follow Go conventions.
Run the unit tests with:
$ hack/test-go.sh
or an individual package using its relative path with:
$ hack/test-go.sh pkg/build
or an individual package and all packages nested under it:
$ hack/test-go.sh pkg/build/...
To run only a certain regex of tests in a package, use:
$ hack/test-go.sh pkg/build -test.run=SynchronizeBuildRunning
To get verbose output for the above example:
$ hack/test-go.sh pkg/build -test.run=SynchronizeBuildRunning -v
To run all tests with verbose output:
$ hack/test-go.sh -v
To change the timeout for individual unit tests, which defaults to one minute, use:
$ TIMEOUT=<timeout> hack/test-go.sh
To enable running the kubernetes unit tests:
$ TEST_KUBE=true hack/test-go.sh
To run unit test for an individual kubernetes package:
$ hack/test-go.sh vendor/k8s.io/kubernetes/examples
To change the coverage mode, which is -cover -covermode=atomic
by default,
use:
$ COVERAGE_SPEC="<some coverage specification>" hack/test-go.sh
To turn off coverage calculation, which is on by default, use:
$ COVERAGE_SPEC= hack/test-go.sh
To run tests without the go race detector, which is on by default, use:
$ DETECT_RACES= hack/test-go.sh
To create a line coverage report, set COVERAGE_OUTPUT_DIR to a path where the report should be stored. For example:
$ COVERAGE_OUTPUT_DIR='/path/to/dir' hack/test-go.sh
After that you can open /path/to/dir/coverage.html
in the browser.
To generate a jUnit XML report from the output of the tests, and see a summary of the test output instead of the full test output, use:
$ JUNIT_REPORT=true hack/test-go.sh
hack/test-go.sh
cannot generate jUnit XML and a coverage report for all
packages at once. If you require both, you must call hack/test-go.sh
twice.
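For example, a hypothetical pair of invocations that collects both for a single package, one run per report:
# first run: line coverage report
$ COVERAGE_OUTPUT_DIR='/tmp/origin-coverage' hack/test-go.sh pkg/build
# second run: jUnit XML, with coverage disabled
$ JUNIT_REPORT=true COVERAGE_SPEC= hack/test-go.sh pkg/build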
Integration tests cover multiple components acting together (generally, 2 or 3). These tests should focus on ensuring that naturally related components work correctly. They should not be extensively testing branches or error conditions inside packages (that's what unit tests do), but they should validate that important success and error paths work across layers (especially when errors are being converted from lower level errors). Integration tests should not be testing details of the inter-component connections - API tests should not test that the JSON serialized to the wire is correctly converted back and forth (unit test responsibility), but they should test that those connections have the expected outcomes. The underlying goal of integration tests is to wire together the most important components in isolation. Integration tests should be as fast as possible in order to enable them to be run repeatedly during testing. Integration tests that take longer than 0.5s are probably trying to test too much together and should be reorganized into separate tests. Integration tests should generally be written so that they are starting from a clean slate, but if that involves costly setup those components should be tested in isolation.
We break integration tests into two categories, those that use Docker and those that do not. In general, high-level components that depend on the behavior of code running inside a Docker container should have at least one or two integration tests that test all the way down to Docker, but those should be part of their own test suite. Testing the API and high level API functions should generally not depend on calling into Docker. They are denoted by special test tags and should be in their own files so we can selectively build them.
All integration tests are located under test/integration/*. For special function sets, please create subdirectories like test/integration/deployimages.
Run all of the integration tests with:
$ hack/test-integration.sh
The script launches an instance of etcd and then invokes the integration tests. If you need to execute a subset of integration tests, run:
$ hack/test-integration.sh <regex>
Where <regex> is some regular expression that matches the names of all of the integration tests you want to run. The regular expression is passed into go test -run, so ensure that the syntax or features you use are supported.
Each integration test is executed in parallel using test/integration/runner.
There is a CLI integration test suite which covers general non-Docker functionality of the CLI tool working against the API. Run it with:
$ hack/test-cmd.sh
This suite comprises many smaller suites, which are found under test/cmd
and
can be run individually by specifying a regex filter, passed through grep -E
like with integration tests above:
$ hack/test-cmd.sh <regex>
During development, you can run a file test/cmd/*.sh
directly to test against
a running server. This can speed up the feedback loop considerably. All
test/cmd/*
tests are expected
to be executable repeatedly - please file bugs if a test needs cleanup before
running.
For example, start the OpenShift server, create a "test" project, and then run
oc new-app
tests against the server:
$ oc new-project test
$ test/cmd/newapp.sh
In order to run the suite, generate a jUnit XML report, and see a summary of the test suite, use:
$ JUNIT_REPORT='true' hack/test-cmd.sh
The final test category is end to end tests (e2e) which should verify a long set of flows in the product as a user would see them. Two e2e tests should not overlap more than 10% of function, and are not intended to test error conditions in detail. The project examples should be driven by e2e tests. e2e tests can also test external components working together.
The end-to-end suite is currently implemented primarily in Bash, but will be folded into the extended suite (located in test/extended) over time. The extended suite is closer to the upstream Kubernetes e2e suite and tests the full behavior of a running system.
Run the end to end tests with:
$ hack/test-end-to-end.sh
Run the extended tests with:
$ test/extended/core.sh
This suite comprises many smaller suites, which are found under test/extended
and can be run individually by specifying --ginkgo.focus
and a regex filter:
$ test/extended/core.sh --ginkgo.focus=<regex>
In addition, the extended tests can be run against an existing OpenShift cluster:
$ KUBECONFIG=/path/to/admin.kubeconfig TEST_ONLY=true test/extended/core.sh --ginkgo.focus=<regex>
Extended tests should be Go tests in the test/extended
directory that use
the Ginkgo library. They must be able to be run remotely, and cannot depend on
any local interaction with the filesystem or Docker.
More information about running extended tests can be found in test/extended/README.
OpenShift is split into three major repositories:
- https://github.com/openshift/api/ - which holds all the external API objects definitions.
- https://github.com/openshift/client-go/ - which holds all the client code (written in Go).
- https://github.com/openshift/origin/ - which holds the actual code behind OpenShift.
This split requires additional effort to introduce any API change. The following steps should guide you through the process.
- The first place to introduce the changes is openshift/api. Here, you put your external API updates and when you are done run make generate. If you need to introduce a new dependency, run make update-deps, and almost never update glide.yaml directly. When you're done, open a PR against the aforementioned repository and ping @openshift/api-review for a review.
- The next step is updating openshift/client-go with the changes from step 1, since it vendors it. To do so, run make update-deps to pick up the changes from step 1 and then run make generate to update the client code with the necessary changes. When you're done, open a PR against the aforementioned repository and ping @openshift/sig-master for a review.
- The final step happens in the openshift/origin repository. As previously, run make update-deps to pick up the changes from the previous two steps. Afterwards, run make update to generate the remaining bits in the origin repository. When you're done, open a PR against the aforementioned repository and ping @openshift/sig-master for a review. A command-level sketch of the whole flow is shown after this list.
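A hypothetical command-level walk-through of the three steps above (the GOPATH layout is an assumption; the make targets are the ones named in each step):
# step 1: openshift/api
$ cd $GOPATH/src/github.com/openshift/api
$ make generate
# step 2: openshift/client-go
$ cd $GOPATH/src/github.com/openshift/client-go
$ make update-deps && make generate
# step 3: openshift/origin
$ cd $GOPATH/src/github.com/openshift/origin
$ make update-deps && make update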
If at any point you have doubts about any step of the flow reach out to @openshift/sig-master team for help.
NOTE: It may happen that during the make update-deps step you pick up changes introduced by someone else in their PR. In that case, sync with the other PR's author and include their changes in your PR, noting the fact to your reviewer.
OpenShift and Kubernetes use Godep for dependency management. Godep allows versions of dependent packages to be locked at a specific commit by vendoring them (checking a copy of them into vendor/). This means that everything you need for OpenShift is checked into this repository.
To install godep
locally run:
$ go get github.com/tools/godep
If you are not updating packages you should not need godep installed.
Origin carries patches inside of vendor/ on top of each rebase. Thus, Origin carries upstream patches in two ways:
- Periodic rebases against a Kubernetes commit. Eventually, any code you have in upstream Kubernetes will land in OpenShift via this mechanism.
- Cherry-picked patches for important bug fixes. We try hard to avoid back-porting features entirely.
You can manually try to cherry-pick a commit (by using git apply). This can easily be done in a couple of steps:
- wget the patch, e.g. wget -O /tmp/mypatch https://github.com/kubernetes/kubernetes/pull/34624.patch
- PATCH=/tmp/mypatch; git apply --directory vendor/k8s.io/kubernetes $PATCH
If this fails, then it's possible you may need to pick multiple commits.
Assuming you have read the bullets above: if your patch is really far behind (for example, if there have been 5 commits modifying the directory you care about), cherry-picking will be increasingly difficult, and you should consider waiting for the next rebase, which will likely include the commit you care about or at least decrease the number of cherry-picks you need to do to merge.
To know for sure, you need to check how many commits behind you are in a particular directory.
To do this, just use git log, like so (using pkg/scheduler/ as an example).
MYDIR=pkg/scheduler/algorithm ; git log --oneline -- vendor/k8s.io/kubernetes/${MYDIR} | grep UPSTREAM | cut -d' ' -f 4-10 | head -1
The commit message printed above will tell you:
- what the LAST commit in Kubernetes was that affected pkg/scheduler/algorithm
- how recently that directory changed, which will give you an intuition about how "hot" the code you are cherry-picking is. If it has changed a lot recently, you probably will want to wait for a rebase to land.
For convenience, you can use hack/cherry-pick.sh
to generate patches for
Origin from upstream commits.
The purpose of this command is to allow you to pull individual commits from a local Kubernetes repository into Origin's vendored Kubernetes in a fully automated manner.
To use this command, be sure to set up remote pull request branches in the Kubernetes repository you are using (e.g. as in https://gist.github.com/piscisaureus/3342247). Specifically, you will be adding this to the git config you probably already have for Kubernetes:
[remote "origin"]
url = https://github.com/kubernetes/kubernetes
fetch = +refs/heads/*:refs/remotes/origin/*
### Add this line
fetch = +refs/pull/*/head:refs/remotes/origin/pr/*
so that git show origin/pr/<number>
displays information about your branch
after a git fetch
.
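Equivalently, instead of editing the config file by hand, you can add the fetch refspec from the command line (assuming the Kubernetes remote is named origin):
$ git config --add remote.origin.fetch '+refs/pull/*/head:refs/remotes/origin/pr/*'
$ git fetch origin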
You must also have the Kubernetes repository checked out in your GOPATH
(visible as ../../../k8s.io/kubernetes
),
with openshift/kubernetes as a remote and fetched:
$ pushd $GOPATH/src/k8s.io/kubernetes
$ git remote add openshift https://github.com/openshift/kubernetes.git
$ git fetch openshift
$ popd
There must be no modified or uncommitted files in either repository.
To pull an upstream commit, run:
$ hack/cherry-pick.sh <pr_number>
This will attempt to create a patch from the current Kube rebase version in Origin that contains the commits added in the PR. If the PR has already been merged to the Kube version, you'll get an error. If there are conflicts, you'll have to resolve them in the upstream repo, then hit ENTER to continue. The end result will be a single commit in your Origin repo that contains the changes.
If you want to run without a rebase option, set NO_REBASE=1
before the
command is run. You can also specify a commit range directly with:
$ hack/cherry-pick.sh origin/master...<some_branch>
All upstream commits should have a commit message where the first line is:
UPSTREAM: <PR number|drop|carry>: <short description>
drop
indicates the commit should be removed during the next
rebase. carry
means that the change cannot go into upstream, and we
should continue to use it during the next rebase. PR number
means
that the commit will be dropped during a rebase, as long as that
rebase includes the given PR number.
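For illustration, some example subject lines (the PR-number form mirrors the move-upstream example later in this document; the carry/drop descriptions are hypothetical):
UPSTREAM: 14618: Refactor exec to allow reuse from server
UPSTREAM: <carry>: Keep this change across future rebases
UPSTREAM: <drop>: Temporary workaround until the next rebase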
You can also target repositories other than Kube by setting UPSTREAM_REPO
and
UPSTREAM_PACKAGE
env vars. UPSTREAM_REPO
should be the full name of the Git
repo as Go sees it, i.e. github.com/coreos/etcd
, and UPSTREAM_PACKAGE
must be
a package inside that repo that is currently part of the Godeps.json file. Example:
$ UPSTREAM_REPO=github.com/coreos/etcd UPSTREAM_PACKAGE=store hack/cherry-pick.sh <pr_number>
By default hack/cherry-pick.sh
uses git remote named origin
to fetch
kubernetes repository, if your git configuration is different, you can pass the git
remote name by setting UPSTREAM_REMOTE
env var:
$ UPSTREAM_REMOTE=upstream hack/cherry-pick.sh <pr_number>
The hack/move-upstream.sh
script takes the current feature branch, finds any
changes to the
requested upstream project (as defined by UPSTREAM_REPO
and
UPSTREAM_PACKAGE
) that differ from origin/master
, and then creates a new
commit in that upstream project on a branch with the same name as your current
branch.
For example, to upstream a commit to OpenShift source-to-image while working from Origin:
$ git checkout my_feature_branch_in_origin
$ git log --oneline
70ffe7e Docker and STI builder support binary extraction
75a22de UPSTREAM: <sti>: Allow prepared directories to be passed to STI
86eefdd UPSTREAM: 14618: Refactor exec to allow reuse from server
# we want to move our STI changes to upstream
$ UPSTREAM_REPO=github.com/openshift/source-to-image UPSTREAM_PACKAGE=pkg/api hack/move-upstream.sh ...
# All changes to source-to-image in Godeps/. are now in a commit UPSTREAMED in the s2i repo
$ cd ../source-to-image
$ git log --oneline
c0029f6 UPSTREAMED
... # older commits
The default is to work against Kube.
There are a few steps involved in rebasing Origin to a new version of
Kubernetes. We need to make sure
that not only the Kubernetes packages were updated correctly into Godeps
, but
also that all tests are
still running without errors and code changes, refactorings or the
inclusion/removal of attributes
were properly reflected in the Origin codebase.
Before you begin, make sure you have both openshift/origin and kubernetes/kubernetes in your $GOPATH. You may want to work on a separate $GOPATH just for the rebase:
$ go get github.com/openshift/origin
$ go get k8s.io/kubernetes
You must add the Origin GitHub fork as a remote in your k8s.io/kubernetes repo:
$ cd $GOPATH/src/k8s.io/kubernetes
$ git remote add openshift [email protected]:openshift/kubernetes.git
$ git fetch openshift
Check out the version of Kubernetes you want to rebase as a branch or tag named
stable_proposed
in
kubernetes/kubernetes. For example,
if you are going to rebase the latest master
of Kubernetes:
$ cd $GOPATH/src/k8s.io/kubernetes
$ git checkout master
$ git pull
$ git checkout -b stable_proposed
If all requirements described in Preparation were met, you should not have any trouble rebasing the Kubernetes code using the script that automates this process.
$ cd $GOPATH/src/github.com/openshift/origin
$ hack/rebase-kube.sh
Read over the changes with git status
and make sure it looks reasonable.
Check especially the Godeps/Godeps.json
file to make sure no dependency
is unintentionally missing.
Commit using the message bump(k8s.io/kubernetes):<commit SHA>
, where
<commit SHA>
is the commit id for the Kubernetes version we are including in
our Godeps. It can be found in our Godeps/Godeps.json
in the declaration of
any Kubernetes package.
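For example, a hypothetical way to pull that revision out of Godeps/Godeps.json with standard tools (the exact package path used to anchor the search is an assumption):
$ grep -m1 -A3 '"k8s.io/kubernetes/' Godeps/Godeps.json | grep '"Rev"'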
If for any reason you had trouble rebasing using the script, you may need to do it manually.
After following all requirements described in the Preparation topic, you will
need to run
godep restore
from both the Origin and the Kubernetes directories and then
godep save ./...
from the Origin directory. Follow these steps:
- $ cd $GOPATH/src/github.com/openshift/origin and run make clean ; godep restore, which will restore the package versions specified in the Godeps/Godeps.json of Origin to your GOPATH.
- $ cd $GOPATH/src/k8s.io/kubernetes and run $ git checkout stable_proposed, which will check out the desired version of Kubernetes as branched in Preparation.
- $ godep restore will restore the package versions specified in the Godeps/Godeps.json of Kubernetes to your GOPATH.
- $ cd $GOPATH/src/github.com/openshift/origin.
- $ make clean ; godep save ./... will save a list of the checked-out dependencies to the file Godeps/Godeps.json, and copy their source code into vendor.
- If in the previous step godep complains about the checked-out revision of a package being different than the wanted revision, this probably means there are new packages in Kubernetes that we need to add. Do a godep save <pkgname> with the package specified by the error message and then run $ godep save ./... again.
- Read over the changes with git status and make sure it looks reasonable. Check especially the Godeps/Godeps.json file to make sure no dependency is unintentionally missing. The whole Godeps directory will be added to version control, including _workspace.
- Commit using the message bump(k8s.io/kubernetes):<commit SHA>, where <commit SHA> is the commit id for the Kubernetes version we are including in our Godeps. It can be found in our Godeps/Godeps.json in the declaration of any Kubernetes package.
If in the process of rebasing manually you found any corner case not attended
by the hack/rebase-kube.sh
script, make sure you update it accordingly to help future rebases.
Occasionally during the development cycle we introduce changes to dependencies directly in the Origin repository. This is not a generally recommended practice, but it's useful if we need something that, for example, is in the Kubernetes repository but we are not doing a rebase yet. So, when doing the next rebase, we need to make sure we get all these changes, otherwise they will be overridden by godep save.
- Check the Godeps directory commit history for commits tagged with the UPSTREAM keyword. We will need to cherry-pick all UPSTREAM commits since the last Kubernetes rebase (remember you can find the last rebase commit by looking for a message like bump(k8s.io/kubernetes):...). A command sketch for listing these follows this list.
- For every commit tagged UPSTREAM, do git cherry-pick <commit SHA>.
- Notice that occasionally the cherry-pick will be empty. This probably means the given change was already merged in Kubernetes and we don't need to specifically add it to our Godeps. Nice!
- Read over the commit history and make sure you have every UPSTREAM commit since the last rebase (except only for the empty ones).
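A hypothetical sketch for listing those commits (assuming the bump commit message form described above and that UPSTREAM commits are what you want to enumerate):
# find the most recent Kubernetes bump commit
$ LAST_BUMP=$(git log --grep='bump(k8s.io/kubernetes)' -1 --format=%H)
# list UPSTREAM-tagged commits made since then
$ git log --oneline --grep='UPSTREAM' ${LAST_BUMP}..HEAD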
After making sure we have all the dependencies in place and up-to-date, we need to work in the Origin codebase to make sure the compilation is not broken, all tests pass and it's compliant with any refactorings, architectural changes or behavior changes introduced in Kubernetes. Make sure:
- make clean ; hack/build-go.sh compiles without errors and the standalone server starts correctly.
- All of our generated code is up to date, by running all hack/update-* scripts.
- hack/verify-open-ports.sh runs without errors.
- hack/copy-kube-artifacts.sh has been run so Kubernetes tests can be fully functional. The diff resulting from this script should be squashed into the Kube bump commit.
- TEST_KUBE=1 hack/test-go.sh runs without errors.
- hack/test-cmd.sh runs without errors.
- hack/test-integration.sh runs without errors.
- hack/test-end-to-end.sh runs without errors. See Building a Release above for setting up the environment for the test-end-to-end.sh tests.
It is helpful to look at the Kubernetes commit history to be aware of the major topics. Although it can potentially break or change any part of Origin, the most affected parts are usually:
- https://github.com/openshift/origin/blob/master/pkg/cmd/server/start
- https://github.com/openshift/origin/blob/master/pkg/cmd/server/kubernetes/master.go
- https://github.com/openshift/origin/blob/master/pkg/cmd/server/origin/master.go
- https://github.com/openshift/origin/blob/master/pkg/cmd/util/clientcmd/factory.go
- https://github.com/openshift/origin/blob/master/pkg/cmd/cli/cli.go
- https://github.com/openshift/origin/blob/master/pkg/api/meta/meta.go
Place all your changes in a commit called "Refactor to match changes upstream".
A typical pull request for your Kubernetes rebase will contain:
- One commit for the Kubernetes Godeps bump (bump(k8s.io/kubernetes):<commit SHA>).
- Zero, one, or more bump commits for any shared dependencies between Origin and Kubernetes that have been bumped. Any transitive dependencies coming from Kubernetes should be squashed into the Kube bump commit.
- Zero, one, or more cherry-picked commits tagged UPSTREAM.
- One commit "Boring refactor to match changes upstream" that includes boring changes like imports rewriting, etc.
- One commit "Interesting refactor to match changes upstream" that includes interesting changes like new plugins or controller changes.
To update to a new version of a dependency that's not already included in Kubernetes, check out the correct version in your GOPATH and then run godep save <pkgname>. This should create a new version of Godeps/Godeps.json, and update vendor. Create a commit that includes both of these changes with message bump(<pkgname>): <pkgcommit>.
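A hypothetical walk-through of such a bump, using github.com/coreos/etcd as the dependency (the commit placeholder is illustrative):
$ cd $GOPATH/src/github.com/coreos/etcd
$ git checkout <desired_commit>
$ cd $GOPATH/src/github.com/openshift/origin
$ godep save github.com/coreos/etcd/...
$ git commit -am "bump(github.com/coreos/etcd): <desired_commit>"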
hack/update-external-example.sh
will pull down example files from external
repositories and deposit them under the examples
directory.
Run this script if you need to refresh an example file, or add a new one. See
the script and examples/quickstarts/README.md
for more details.
If you run into difficulties running OpenShift, start by reading through the [troubleshooting guide](https://github.com/openshift/origin/blob/master/docs/debugging-openshift.md).
A specfile is included in this repo which can be used to produce RPMs including the openshift binary. While the specfile will be kept up to date with build requirements, its version is not updated. You will need to either update the Version, %commit, and %ldflags values on your own, or you may use tito to build and tag releases.
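If you go the tito route, a minimal hypothetical invocation (assuming tito is installed and the repository carries tito metadata) is:
$ tito build --test --rpm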
When built with GSSAPI support, the oc
client supports logging in with
Kerberos credentials on Linux and OS X.
GSSAPI-enabled builds of oc
cannot be cross-compiled, but must be built on
the target platform with the GSSAPI header files available.
On Linux, ensure the krb5-devel
package is installed:
$ sudo yum install -y krb5-devel
On OS X, you can obtain header files via Homebrew:
$ brew install homebrew/dupes/heimdal --without-x11
Once dependencies are in place, build with the gssapi
tag:
$ hack/build-go.sh cmd/oc -tags=gssapi
Verify that the GSSAPI feature is enabled with oc version
:
$ oc version
...
features: Basic-Auth GSSAPI Kerberos SPNEGO
OpenShift and Kubernetes integrate with the Swagger 2.0 API
framework which aims to make it easier to document and
write clients for RESTful APIs. When you start OpenShift, the Swagger API
endpoint is exposed at https://localhost:8443/swaggerapi
. The Swagger UI
makes it easy to view your documentation - to view the docs for your local
version of OpenShift start the server with CORS enabled:
$ openshift start --cors-allowed-origins=.*
and then browse to http://openshift3swagger-claytondev.rhcloud.com (which runs a copy of the Swagger UI that points to localhost:8080 by default). Expand the operations available on v1 to see the schemas (and to try the API directly). Additionally, you can download swagger-ui from http://swagger.io/swagger-ui/ and use it to point to your local swagger API endpoint.
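For a quick check that the endpoint is serving (assuming a default local server on port 8443 with a self-signed certificate):
$ curl -k https://localhost:8443/swaggerapi/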
Note: Hosted API documentation can be found here.
OpenShift integrates the go pprof
tooling to make it easy to capture CPU and
heap dumps for running systems. The following modes are available for the openshift binary (including all the CLI variants), selected via the OPENSHIFT_PROFILE environment variable:
- cpu - will start a CPU profile on startup and write ./cpu.pprof. Contains samples for the entire run at the native sampling resolution (100hz). Note: CPU profiling for Go does not currently work on Mac OS X - the stats are not correctly sampled.
- mem - generate a running heap dump that tracks allocations to ./mem.pprof
- block - will start a block wait time analysis and write ./block.pprof
- web - start the pprof webserver in process at http://127.0.0.1:6060/debug/pprof (you can open this in a browser). This supports OPENSHIFT_PROFILE_HOST= and OPENSHIFT_PROFILE_PORT= to change the default IP 127.0.0.1 and the default port 6060.
In order to start the server in CPU profiling mode, run:
$ OPENSHIFT_PROFILE=cpu sudo ./_output/local/bin/linux/amd64/openshift start
Or, if running OpenShift under systemd, append this to
/etc/sysconfig/atomic-openshift-{master,node}
OPENSHIFT_PROFILE=cpu
To view profiles, you use
pprof which is
part of go tool
. You must pass the binary you are debugging (for symbols)
and a captured pprof. For instance, to view a cpu
profile from above, you
would run OpenShift to completion, and then run:
$ go tool pprof ./_output/local/bin/linux/amd64/openshift cpu.pprof
or
$ go tool pprof $(which openshift) /var/lib/origin/cpu.pprof
This will open the pprof
shell, and you can then run:
# see the top 20 results
(pprof) top20
# see the top 50 results
(pprof) top50
# show the top20 sorted by cumulative time
(pprof) cum=true
(pprof) top20
to see the top20 CPU consuming fields or
(pprof) web
to launch a web browser window showing you where CPU time is going.
pprof
supports CLI arguments for looking at profiles in different ways -
memory profiles by default show allocated space:
$ go tool pprof ./_output/local/bin/linux/amd64/openshift mem.pprof
but you can also see the allocated object counts:
$ go tool pprof --alloc_objects ./_output/local/bin/linux/amd64/openshift mem.pprof
Finally, when using the web
profile mode, you can have the go tool directly
fetch your profiles via HTTP:
# for a 30s CPU trace
$ go tool pprof ./_output/local/bin/linux/amd64/openshift http://127.0.0.1:6060/debug/pprof/profile
# for a snapshot heap dump at the current time, showing total allocations
$ go tool pprof --alloc_space ./_output/local/bin/linux/amd64/openshift http://127.0.0.1:6060/debug/pprof/heap
See debugging Go programs for more
info. pprof
has many modes and is very powerful (try tree
) - you can pass
a regex to many arguments to limit your results to only those samples that
match the regex (basically the function name or the call stack).
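For example, a hedged sketch of narrowing a report in the interactive shell (focus and peek are standard pprof features; the function name is illustrative):
# restrict the report to samples whose call stacks match a regex
(pprof) focus=ServeHTTP
(pprof) top20
# inspect callers and callees of matching functions
(pprof) peek ServeHTTP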