This folder contains Istio integration tests that use the test framework checked in at istio.io/istio/pkg/test/framework.
The goal of the framework is to make it as easy as possible to author and run tests. In its simplest case, just typing `go test ./...` should be sufficient to run tests.
The test framework is designed to work with standard Go tooling and allows developers to write environment-agnostic tests in a high-level fashion. The quickest way to get started with authoring new tests is to check out the code in the framework folder.
All tests that use the framework must run as part of a suite. Only a single suite can be defined per package, since it is bootstrapped by a Go `TestMain`, which has the same restriction.
To begin, create a new folder for your suite under tests/integration.
```console
$ cd ${ISTIO}/tests/integration
$ mkdir mysuite
```
Within that package, create a `TestMain` to bootstrap the test suite:
```go
func TestMain(m *testing.M) {
    framework.
        NewSuite("mysuite", m).
        Run()
}
```
Next, define your tests in the same package:
```go
func TestMyLogic(t *testing.T) {
    framework.
        NewTest(t).
        Run(func(ctx framework.TestContext) {
            // Create a component
            g := galley.NewOrFail(ctx, ctx, cfg)

            // Use the component.
            g.ApplyConfigOrFail(ctx, nil, mycfg)
            defer g.DeleteConfigOrFail(ctx, nil, mycfg)

            // Do more stuff here.
        })
}
```
The `framework.TestContext` is a wrapper around the underlying `testing.T` and implements the same interface. Test code should generally not interact with the `testing.T` directly.
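Because the context implements the same interface, standard logging and failure methods are called on it directly. A minimal sketch (the validation helper is hypothetical):

```go
func TestMyLogic(t *testing.T) {
    framework.
        NewTest(t).
        Run(func(ctx framework.TestContext) {
            // Log and fail through the context rather than the raw testing.T.
            ctx.Logf("starting validation")
            if err := validateSomething(); err != nil { // hypothetical helper
                ctx.Fatalf("validation failed: %v", err)
            }
        })
}
```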
In the `TestMain`, you can also restrict the test to particular environments, apply labels, or do test-wide setup, such as deploying Istio.
```go
func TestMain(m *testing.M) {
    framework.
        NewSuite("mysuite", m).
        // Restrict the test to the K8s environment only; tests will be skipped in the native environment.
        RequireEnvironment(environment.Kube).
        // Deploy Istio on the cluster
        Setup(istio.SetupOnKube(nil, nil)).
        // Run your own custom setup
        Setup(mySetup).
        Run()
}

func mySetup(ctx resource.Context) error {
    // Your own setup code
    return nil
}
```
Go allows you to run sub-tests with `t.Run()`. Similarly, this framework supports nesting tests with `ctx.NewSubTest()`:
```go
func TestMyLogic(t *testing.T) {
    framework.
        NewTest(t).
        Run(func(ctx framework.TestContext) {
            // Create a component
            g := galley.NewOrFail(ctx, ctx, cfg)

            configs := []struct {
                name string
                yaml string
            }{
                // Some array of YAML
            }

            for _, cfg := range configs {
                ctx.NewSubTest(cfg.name).
                    Run(func(ctx framework.TestContext) {
                        g.ApplyConfigOrFail(ctx, nil, cfg.yaml)
                        defer g.DeleteConfigOrFail(ctx, nil, cfg.yaml)
                        // Do more stuff here.
                    })
            }
        })
}
```
Under the hood, calling `subtest.Run()` delegates to `t.Run()` in order to create a child `testing.T`.
Many tests can take a while to start up for a variety of reasons, such as waiting for pods to start or waiting for a particular piece of configuration to propagate throughout the system. Where possible, it may be desirable to run these sorts of tests in parallel:
```go
func TestMyLogic(t *testing.T) {
    framework.
        NewTest(t).
        RunParallel(func(ctx framework.TestContext) {
            // ...
        })
}
```
Under the hood, this relies on Go's `t.Parallel()` and will, therefore, have the same behavior.
A parallel test will run in parallel with siblings that share the same parent test. The parent test function will exit before the parallel children are executed. It should be noted that if the parent test is prevented from exiting (e.g. the parent test is waiting for something to occur within the child test), the test will deadlock.
Consider the following example:
```go
func TestMyLogic(t *testing.T) {
    framework.NewTest(t).
        Run(func(ctx framework.TestContext) {
            ctx.NewSubTest("T1").
                Run(func(ctx framework.TestContext) {
                    ctx.NewSubTest("T1a").
                        RunParallel(func(ctx framework.TestContext) {
                            // Run in parallel with T1b
                        })
                    ctx.NewSubTest("T1b").
                        RunParallel(func(ctx framework.TestContext) {
                            // Run in parallel with T1a
                        })
                    // Exits before T1a and T1b are run.
                })

            ctx.NewSubTest("T2").
                Run(func(ctx framework.TestContext) {
                    ctx.NewSubTest("T2a").
                        RunParallel(func(ctx framework.TestContext) {
                            // Run in parallel with T2b
                        })
                    ctx.NewSubTest("T2b").
                        RunParallel(func(ctx framework.TestContext) {
                            // Run in parallel with T2a
                        })
                    // Exits before T2a and T2b are run.
                })
        })
}
```
In the example above, non-parallel parents T1 and T2 contain parallel children T1a, T1b, T2a, T2b.
Since both T1 and T2 are non-parallel, they are run synchronously: T1 followed by T2. After T1 exits, T1a and T1b are run asynchronously with each other. After T1a and T1b complete, T2 is then run in the same way: T2 exits, then T2a and T2b are run asynchronously to completion.
The framework, itself, is just a platform for running tests and tracking resources. Without these resources, there isn't much added value. Enter: components.
Components are utilities that provide abstractions for Istio resources. They are maintained in the components package, which defines various Istio components such as galley, pilot, and namespaces.
Each component defines its own API, which simplifies its use from test code by abstracting away the environment-specific details. This means that test code can (and should, where possible) be written in an environment-agnostic manner, so that it can be run against any Istio implementation.
For example, the following code creates and then interacts with a Galley and Pilot component:
```go
func TestMyLogic(t *testing.T) {
    framework.
        NewTest(t).
        Run(func(ctx framework.TestContext) {
            // Create the components.
            g := galley.NewOrFail(ctx, ctx, galley.Config{})
            p := pilot.NewOrFail(ctx, ctx, pilot.Config{
                Galley: g,
            })

            // Apply configuration via Galley.
            g.ApplyConfigOrFail(ctx, nil, mycfg)
            defer g.DeleteConfigOrFail(ctx, nil, mycfg)

            // Wait until Pilot has received the configuration update.
            p.StartDiscoveryOrFail(t, discoveryRequest)
            p.WatchDiscoveryOrFail(t, timeout,
                func(response *xdsapi.DiscoveryResponse) (b bool, e error) {
                    // Validate that the discovery response has the configuration applied.
                    return true, nil
                })

            // Do more stuff...
        })
}
```
When a component is created, the framework tracks its lifecycle. When the test exits, any components that were created during the test are automatically closed.
To add a new component, you'll first need to create a top-level folder for your component under the components folder.
```console
$ cd ${ISTIO}/pkg/test/framework/components
$ mkdir mycomponent
```
You'll then need to define your component's API.
```go
package mycomponent

type Instance interface {
    resource.Resource

    DoStuff() error
    DoStuffOrFail(t test.Failer)
}
```
NOTE: A common pattern is to provide two versions of many methods: one that returns an error, as well as an OrFail version that fails the test upon encountering an error. This provides options to the calling test and helps to simplify the calling logic.
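The OrFail variant is typically just a thin wrapper over the error-returning variant. A minimal sketch, assuming an implementation type myComponent whose DoStuff does the real work:

```go
// Sketch only: fail the test if the underlying call returns an error.
func (c *myComponent) DoStuffOrFail(t test.Failer) {
    if err := c.DoStuff(); err != nil {
        t.Fatal(err)
    }
}
```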
Next you need to implement your component for one or more environments. If possible, create both a native and Kubernetes version.
```go
package mycomponent

type nativeComponent struct {
    id resource.ID
    // ...
}

func newNative(ctx resource.Context) (Instance, error) {
    instance := &nativeComponent{}
    instance.id = ctx.TrackResource(instance)
    //...
    return instance, nil
}

func (c *nativeComponent) ID() resource.ID {
    return c.id
}
```
Each implementation of the component must implement `resource.Resource`, which just exposes a unique identifier for your component instance that is used for resource tracking by the framework. To get the ID, the component must call `ctx.TrackResource` during construction.
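If your implementation holds state that needs cleanup (processes, temporary files, etc.), it can also implement a Close method. This is a sketch under the assumption that the framework closes tracked resources that implement io.Closer when the enclosing test or suite exits (see the lifecycle note above):

```go
// Sketch: release anything held by the component. Assumes the framework
// invokes Close on tracked resources implementing io.Closer at scope exit.
func (c *nativeComponent) Close() error {
    // Stop processes, delete temp files, etc.
    return nil
}
```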
Finally, you'll need to provide an environment-agnostic constructor for your component:
```go
package mycomponent

func New(ctx resource.Context) (i Instance, err error) {
    err = resource.UnsupportedEnvironment(ctx.Environment())

    ctx.Environment().Case(environment.Native, func() {
        i, err = newNative(ctx)
    })

    ctx.Environment().Case(environment.Kube, func() {
        i, err = newKube(ctx)
    })
    return
}

func NewOrFail(t test.Failer, ctx resource.Context) Instance {
    i, err := New(ctx)
    if err != nil {
        t.Fatal(err)
    }
    return i
}
```
Now that everything is in place, you can begin using your component:
```go
func TestMyLogic(t *testing.T) {
    framework.
        NewTest(t).
        Run(func(ctx framework.TestContext) {
            // Create the component.
            g := mycomponent.NewOrFail(ctx, ctx)

            // Do more stuff...
        })
}
```
The test framework builds on top of the Go testing infrastructure and is, therefore, compatible with the standard `go test` command line. For example, to run the tests under /tests/integration/mycomponent using the default (native) environment, you can simply type:
```console
$ go test ./tests/integration/mycomponent/...
```
Note that the samples below that invoke variations of `go test ./...` are intended to be run from the tests/integration directory.
WARNING: Many tests, including integration tests, assume that a Helm client is installed and on the path.
By default, Go will run tests within the same package (i.e. suite) synchronously. However, tests in other packages may be run concurrently.
When running in the Kubernetes environment, this can be problematic for suites that deploy Istio. The Istio deployment, as it stands, is a singleton per cluster. If multiple suites attempt to deploy/configure Istio, they can corrupt each other and/or simply fail. To avoid this issue, you have a couple of options:
- Run one suite per command (e.g. `go test ./tests/integration/mysuite/...`).
- Disable parallelism with `-p 1` (e.g. `go test -p 1 ./...`). A major disadvantage to doing this is that it will also disable parallelism within the suite, even when explicitly specified via `RunParallel`.
When no flags are specified, the test framework will run all applicable tests. It is possible to filter in/out specific tests using 2 mechanisms:
- The standard `-run <regexp>` flag, as exposed by Go's own test framework (see the example below).
- The `--istio.test.select <filter-expr>` flag to select/skip framework-aware tests that use labels.
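For instance, the standard flag alone can narrow a run down to a single test (the suite and test names here are hypothetical):

```console
$ go test ./tests/integration/mysuite/... -run TestMyLogic
```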
For example, if a test or test suite uses labels in this fashion:
```go
func TestMain(m *testing.M) {
    framework.
        NewSuite("galley_conversion", m).
        // Test is tagged with the "CustomSetup" label
        Label(label.CustomSetup).
        Run()
}
```
Then you can explicitly select execution of such tests using label-based selection. For example, the following expression will select only the tests that have the `label.CustomSetup` label:
```console
$ go test ./... --istio.test.select +customsetup
```
Similarly, you can exclude tests that use the `label.CustomSetup` label by:
```console
$ go test ./... --istio.test.select -customsetup
```
You can "and" the predicates by separating with commas:
```console
$ go test ./... --istio.test.select +customsetup,-postsubmit
```
This will select tests that have `label.CustomSetup` only. It will not select tests that have both `label.CustomSetup` and `label.Postsubmit`.
Istio's CI/CD system is composed of 2 parts:
| Tool | Description |
| --- | --- |
| Prow | A Kubernetes-based CI/CD system developed by the Kubernetes community and deployed in Google Kubernetes Engine (GKE). |
| TestGrid | A Kubernetes dashboard used for visualizing the status of the Prow jobs. |
This section describes the steps for adding new tests to Prow and TestGrid.
To simplify the process of running tests from Prow, each suite is given its own test script under the prow folder.
Embedded in the name of the script is the following:
- Type of test (unit, end-to-end, integration)
- Component/feature being tested
- The environment used (i.e. native/local or k8s)
- Job execution (i.e. presubmit, postsubmit)
For example, the file `integ-security-k8s-presubmit-tests.sh` runs integration tests for various Istio security features on Kubernetes during PR pre-submit.
In general, when creating a new script, use similar scripts as a guide.
Istio's Prow jobs are configured in the istio/test-infra repository.
The prow/cluster/jobs/istio/istio folder contains configuration files for running Prow jobs against various Istio branches.
For example, istio.istio.master.yaml configures Prow jobs that run against Istio's master branch.
Each configuration file contains sections for both presubmit and postsubmit. To add a new job, add a new config stanza to one of these sections, using an existing config stanza as a template.
In general, all tests should be required to succeed. However, as flaky tests appear we may need to temporarily disable certain jobs from gating PR submission. This can be done by adding the following to the configuration:
optional: true
When this is done, however, a GitHub issue should be raised to address the flake and move the job back to required.
TestGrid is owned by the Kubernetes team and its configuration is located in the kubernetes/test-infra repository. Configuring TestGrid is explained here.
When running tests, only one environment can be specified. Currently, the framework supports the following:
The test binaries run on the native platform, either in-memory or as processes. This is the default; however, you can also explicitly specify the native environment:
```console
$ go test ./... -istio.test.env native
```
The components under test run in a Kubernetes cluster, but the test logic itself runs in the test binary. To specify the Kubernetes environment:
```console
$ go test ./... -p 1 --istio.test.env kube
```
WARNING: `-p 1` is required when running directly in the tests/integration/ folder with the kube environment.
When running the tests against the Kubernetes environment, you will need to provide a K8s cluster to run the tests against. (See here for info about how to set up a suitable GKE cluster.) You can specify the kube config file that should be used for connecting to the cluster through the command line:
```console
$ go test ./... -p 1 --istio.test.env kube --istio.test.kube.config ~/.kube/config
```
If not specified, ~/.kube/config will be used by default.
Be aware that any existing content will be altered and/or removed from the cluster.
Note that the HUB and TAG environment variables must be set when running tests in the Kubernetes environment.
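For example, a typical Kubernetes invocation might look like the following (the hub and tag values here are placeholders):

```console
$ HUB=gcr.io/my-project TAG=my-build go test ./... -p 1 --istio.test.env kube
```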
The test framework will generate additional diagnostic output in its work directory. Typically, this is created under the host operating system's temporary folder (which can be overridden using the `--istio.test.work_dir` flag). The name of the work dir will be based on the test id that is supplied in a test's TestMain method. These files typically contain some of the logging & diagnostic output that components spew out as part of test execution.
```console
$ go test galley/... --istio.test.work_dir /foo
  ...

$ ls /foo
  galley-test-4ef25d910d2746f9b38/

$ ls /foo/galley-test-4ef25d910d2746f9b38/
  istio-system-1537332205890088657.yaml
  ...
```
When executing in the CI systems, the makefiles use the `--istio.test.ci` flag. This flag causes a few changes in behavior. Specifically, more verbose logging output will be displayed, some of the timeout values will be more relaxed, and additional diagnostic data will be dumped into the working directory at the end of the test execution.
The flag is not enabled by default to provide a better U/X when running tests locally (i.e. additional logging can clutter test output and error dumping can take quite a while). However, if you see a behavior difference between local and CI runs, you can enable the flag to make the tests work in a similar fashion.
By default, the test framework will clean up all deployed artifacts after the test run, especially in the Kubernetes environment. You can specify the `--istio.test.nocleanup` flag to stop the framework from cleaning up the state, to allow for investigation.
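For example, to leave the deployed state in place after a Kubernetes run:

```console
$ go test ./... -p 1 --istio.test.env kube --istio.test.nocleanup
```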
The framework accepts standard Istio logging flags. You can use these flags to enable additional logging for both the framework and some of the components that are used inline in the native environment:
```console
$ go test ./... --log_output_level=tf:debug,mcp:debug
```
The above example enables debug logging for the test framework (`tf`) and the MCP protocol stack (`mcp`).
The tests authored in the new test framework can be debugged directly under GoLand using the debugger. If you want to pass command-line flags to the test while running under the debugger, you can use the Run/Debug configurations dialog to specify these flags as program arguments.
If your tests require special Helm values, you can specify them via additional command-line flags for Kubernetes environments. See mtls_healthcheck_test.go for an example.
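For example, using the `--istio.test.kube.helm.values` flag documented below (the override value shown is only illustrative):

```console
$ go test ./... -p 1 --istio.test.env kube --istio.test.kube.helm.values=global.mtls.enabled=true
```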
The test framework supports the following command-line flags:
```
  -istio.test.env string
        Specify the environment to run the tests against. Allowed values are: [native kube] (default "native")
  -istio.test.work_dir string
        Local working directory for creating logs/temp files. If left empty, os.TempDir() is used.
  -istio.test.ci
        Enable CI Mode. Additional logging and state dumping will be enabled.
  -istio.test.nocleanup
        Do not cleanup resources after test completion
  -istio.test.select string
        Comma separated list of labels for selecting tests to run (e.g. 'foo,+bar-baz').
  -istio.test.hub string
        Container registry hub to use (default HUB environment variable)
  -istio.test.tag string
        Common container tag to use when deploying container images (default TAG environment variable)
  -istio.test.pullpolicy string
        Common image pull policy to use when deploying container images
  -istio.test.kube.config string
        The path to the kube config file for cluster environments
  -istio.test.kube.deploy
        Deploy Istio into the target Kubernetes environment. (default true)
  -istio.test.kube.deployTimeout duration
        Timeout applied to deploying Istio into the target Kubernetes environment. Only applies if DeployIstio=true.
  -istio.test.kube.undeployTimeout duration
        Timeout applied to undeploying Istio from the target Kubernetes environment. Only applies if DeployIstio=true.
  -istio.test.kube.systemNamespace string
        The namespace where the Istio components reside in a typical deployment. (default "istio-system")
  -istio.test.kube.helm.chartDir string
        Helm chart dir for Istio. Only valid when deploying Istio. (default "/Users/ozben/go/src/istio.io/istio/install/kubernetes/helm/istio")
  -istio.test.kube.helm.values string
        Manual overrides for Helm values file. Only valid when deploying Istio.
  -istio.test.kube.helm.valuesFile string
        Helm values file. This can be an absolute path or relative to chartDir. Only valid when deploying Istio. (default "test-values/values-e2e.yaml")
  -istio.test.kube.minikube
        Indicates that the target environment is Minikube. Used by Ingress component to obtain the right IP address. This also pertains to any environment that doesn't support a LoadBalancer type.
```
- Currently some native tests fail when being run on a Mac with an error like `unable to locate an Envoy binary`. This is documented in this PR. Once the Envoy binary is available for the Mac, these tests will hopefully succeed.
- If one uses Docker for Mac for the Kubernetes environment, be sure to specify the `-istio.test.kube.minikube` parameter (see the example below). This solves an error like `service ingress is not available yet`.
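A typical invocation in that case might combine the flags documented above:

```console
$ go test ./... -p 1 --istio.test.env kube --istio.test.kube.minikube
```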