End-to-end tests in the project
In every CI build, a set of end-to-end tests is run to verify, as much as possible, that the changes don’t include regressions from a user point of view. Please refer to the CI documentation for further information. The current end-to-end tests are browser tests only.
These tests are run by the script script/e2e-test.sh. In particular, this script:
- Installs Kubeapps using the images built during the CI process (cf. the CI config file) by setting the proper arguments for the Helm command.
- If the USE_MULTICLUSTER_OIDC_ENV option is enabled, a set of flags is passed to configure the Kubeapps installation in a multicluster environment.
- Waits for:
  - the different deployments to be ready.
  - the Bitnami repo sync job to be completed (see the sketch below).
- Installs some dependencies:
  - ChartMuseum.
  - Operator framework (not in GKE).
- Runs the web browser tests.
If all of the above succeeds, control is returned to the CI with the proper exit code.
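As an illustration of the waiting step mentioned above, similar checks can be performed with plain kubectl. This is only a sketch: the deployment name, job name and namespace below are assumptions for illustration, not necessarily the exact resources script/e2e-test.sh waits for.
# Hypothetical example: resource names and namespace may differ in the actual script.
kubectl rollout status deployment/kubeapps -n kubeapps --timeout=300s
# The Bitnami repo sync job name below is an assumption for illustration.
kubectl wait --for=condition=complete job/apprepo-kubeapps-sync-bitnami -n kubeapps --timeout=600s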
Web Browser tests
Apart from the basic functionality tests run by the chart tests, this project contains web browser tests that you can find in the integration folder.
These tests are based on Puppeteer. Puppeteer is a NodeJS library that provides a high-level API to control Chrome or Chromium (in headless mode by default).
On top of Puppeteer, we are using the jest-puppeteer module, which allows us to run these tests using the same syntax as the rest of the unit tests in the project.
NOTE: this information is now outdated. We are using Playwright instead. This documentation will eventually be updated accordingly.
The aforementioned integration folder is self-contained, that is, it declares every dependency required to run the browser tests in its own package.json. Furthermore, a Dockerfile is used to generate an image with all the dependencies needed to run the browser tests.
These tests can be run either locally or in a container environment.
You can set up a configured Kubeapps instance in your cluster with the script/setup-kubeapps.sh script.
Running browser tests locally
To run the tests locally, you just need to install the dependencies and set the required environment variables:
cd integration
yarn install
INTEGRATION_ENTRYPOINT=http://kubeapps.local USE_MULTICLUSTER_OIDC_ENV=false ADMIN_TOKEN=foo1 VIEW_TOKEN=foo2 EDIT_TOKEN=foo3 yarn test
If a test happens to fail, besides the test logs, a screenshot will be generated and saved in the reports/screenshots folder.
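If you want to iterate on a single spec, jest accepts a path pattern as an extra argument, so something like the following usually works. This assumes the yarn test script invokes jest; the file name below is only an example.
# Run a single test file; the path below is hypothetical.
INTEGRATION_ENTRYPOINT=http://kubeapps.local USE_MULTICLUSTER_OIDC_ENV=false ADMIN_TOKEN=foo1 VIEW_TOKEN=foo2 EDIT_TOKEN=foo3 yarn test tests/main.js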
Running browser tests in a pod
Since the CI environment doesn’t have the required dependencies, and to provide a reproducible environment, it’s possible to run the browser tests in a Kubernetes pod.
To do so, you can spin up an instance running the kubeapps/integration-tests image. This image contains all the required dependencies, and it waits indefinitely so you can run commands within it. We also provide a simple Kubernetes Deployment manifest for launching this container.
The goal of this setup is that you can copy the latest tests into the pod, run them, and extract the screenshots in case of failure:
cd integration
# Deploy the e2e-runner pod
kubectl apply -f manifests/e2e-runner.yaml
pod=$(kubectl get po -l run=integration -o jsonpath="{.items[0].metadata.name}")
# Copy latest tests
kubectl cp ./tests ${pod}:/app/
# If you also modify the test configuration, you will need to update the files
# for f in *.js; do kubectl cp "./${f}" "${pod}:/app/"; done
# Run tests (you must fill these vars accordingly)
kubectl exec -it ${pod} -- /bin/sh -c "INTEGRATION_ENTRYPOINT=http://kubeapps.kubeapps USE_MULTICLUSTER_OIDC_ENV=${USE_MULTICLUSTER_OIDC_ENV} ADMIN_TOKEN=${admin_token} VIEW_TOKEN=${view_token} EDIT_TOKEN=${edit_token} yarn test"
# If the tests fail, get report screenshot
kubectl cp ${pod}:/app/reports ./reports
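Once you are done, you can remove the runner pod using the same manifest (this is standard kubectl cleanup, not a step performed by the test script itself):
# Tear down the e2e-runner pod when finished.
kubectl delete -f manifests/e2e-runner.yaml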
Building the “kubeapps/integration-tests” image
Our CI system relies on the kubeapps/integration-tests image to run the browser tests (cf. the CI config file and the CI documentation). Consequently, this image should be properly versioned to avoid CI issues.
The kubeapps/integration-tests image is built using this Makefile, which uses the IMAGE_TAG variable to pass the version with which the image is built. It is important to increase the version each time the image is built and pushed:
# Get the latest tag from https://hub.docker.com/r/kubeapps/integration-tests/tags?page=1&ordering=last_updated
# and then increment the patch version of the latest tag to get the IMAGE_TAG that you'll use below.
cd integration
IMAGE_TAG=v1.0.1 make build
IMAGE_TAG=v1.0.1 make push
It will build and push the image using this Dockerfile (we use the same base image as the Kubeapps Dashboard build image). The dependencies of this image are defined in the package.json.
When pushing a new image, also update the version field in the package.json.
Then, update the Kubernetes Deployment manifest to point to the version you have built and pushed.
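As a quick sanity check before sending the change, you can verify that the pushed tag, the package.json version and the manifest reference all match. The commands below are only one way to do this (they assume jq is installed):
cd integration
# Show the version declared in package.json.
jq -r .version package.json
# Show which image tag the Kubernetes Deployment manifest references.
grep -n "kubeapps/integration-tests" manifests/e2e-runner.yaml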
To sum up, whenever a change requires a new kubeapps/integration-tests version (a new NodeJS base image, updated integration dependencies, other changes, etc.), you will have to release a new image version. This process involves:
- Checking that the integration Dockerfile is using the proper base image version.
- Ensuring we are not using any deprecated dependency in the package.json.
- Updating the Makefile with the new version tag.
- Running make build && make push to release a new image version.
- Modifying the Kubernetes Deployment manifest with the new version.