Learn about different types of software testing by visiting Levels of Software Testing.
We focus mainly on system-level testing to ensure the entire application works as expected. We also write unit tests when:
- They won't break easily due to changes in how the code is implemented.
- We need to make sure all possible edge cases are handled correctly.
To start the test environment, run the following command in the project root:

```bash
bin/test
```

To run all system tests, use:

```bash
bin/system-tests-run-tests
```

For debugging, run:

```bash
bin/system-tests-debug
```
When debugging, add `.only` to the definition of the test you want to focus on. This runs only that test, skipping all others:

```ts
test.only("debugging a specific test", async ({ page }) => {
  // Test implementation
})
```

Remember to remove the `.only` once debugging is complete so all tests run as usual.
Tests are located in the `system-tests/src/tests/` folder.
Here are the commands you can use to run and manage system tests:

- `bin/system-tests-debug` — Debug tests step by step.
- `bin/system-tests-record-test-admin` — Record a test as an admin user.
- `bin/system-tests-record-test-teacher` — Record a test as a teacher user.
- `bin/system-tests-record-test-user` — Record a test as a regular user.
- `bin/system-tests-record-test-without-resetting-db` — Record a test without resetting the database.
- `bin/system-tests-run-tests` — Run all tests.
- `bin/system-tests-update-snapshots` — Update image snapshots.
- `bin/system-tests-run-tests-record-video` — Run tests and save a video of the process (in `system-tests/test-results`).
- `bin/system-tests-run-tests-slowmo` — Run tests in slow motion with the browser visible.
Tip: These `bin/xxx` commands are not magic. They print the exact shell commands they run, so take a moment to look at the output to understand what they're doing. Often they also print custom messages that help you resolve the problems you're having.
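As an illustration (this is a hypothetical sketch, not one of the actual scripts), a wrapper in this style typically enables command echoing so you can see exactly what runs:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Print a custom hint before doing anything, like the real wrappers do.
echo "Running the test suite. If this fails, check that the services are up."

# `set -x` makes the shell print each command before executing it,
# which is how a wrapper can show you the exact commands it runs.
set -x
echo "pretend this line is the real test command"
```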
To create a new test:

- Use one of the recording commands listed above.
- Create a new file in `system-tests/src/tests/`, such as `example.spec.ts`.
- Copy the recorded test code into this file.
- Add assertions to verify the expected outcomes.
- Edit the generated test as described below.
Always update recorded tests to make them more reliable. Key changes include:

- Use Helper Functions: Replace repetitive steps with helper functions (e.g., `selectCourseInstanceIfPrompted`).
- Improve Locators: Replace unstable locators (like autogenerated CSS classes) with stable ones. See Playwright Locators for tips.
- Create New Helper Functions: If you find a repeated pattern that's tricky to get right, write a helper function for it.
Here are some helpful functions available in the codebase:

- `getLocatorForNthExerciseServiceIframe`: Helps interact with or capture screenshots of exercise service iframes. See an example.
- `selectCourseInstanceIfPrompted`: Selects a course instance when prompted. Always use this instead of writing custom code for the dialog.
- `showNextToastsInfinitely` / `showToastsNormally`: Keep toast notifications visible for screenshots. Use `showNextToastsInfinitely` before triggering the notification and `showToastsNormally` afterward.
- Basic Example: `login.spec.ts`
- Screenshot Comparison: `mediaUpload.spec.ts`
We use screenshots to track UI changes over time. The `expectScreenshotsToMatchSnapshots` function helps with this by:
- Waiting for content to load and stabilize.
- Taking screenshots at different screen sizes.
- Storing images in version control for future comparison.
- Failing the test if the new image doesn't match the old one.
- Running accessibility checks.
If your UI changes, update snapshots using:

```bash
bin/system-tests-update-snapshots
```

Then, commit the updated snapshots. If you accidentally overwrite snapshots, restore them with:

```bash
bin/git-restore-screenshots-from-origin-master
```
```ts
test("test with screenshots", async ({ headless, page }, testInfo) => {
  await expectScreenshotsToMatchSnapshots({
    page,
    testInfo,
    headless,
    snapshotName: "example-snapshot", // Unique name for the snapshot
    waitForThisToBeVisibleAndStable: page.getByText("Welcome to the course"), // Element to wait for
  })
})
```
To analyze test results:

- Enable tracing so that a `trace.zip` file is generated.
- View the trace using:

```bash
npx playwright show-trace test-results/path_to/trace.zip
```

This will open a detailed view of each test step.
Run `bin/system-tests-debug` and add `test.only` to focus on specific tests. A browser and debugger window will open, allowing you to step through the test and adjust locators.
Use the Playwright for VSCode extension. It should already be installed as part of the project workspace. Refer to its documentation for setup instructions.
Use label-based locators for clear and reliable interactions.
To fill a text field with the label "First name":

```ts
await page.getByLabel("First name").fill("User")
```

To check a checkbox or radio button labeled "Draft":

```ts
await page.getByLabel("Draft").check()
```
If you need to skip a known accessibility violation:

- Get the Rule ID:
  - In the browser, click More Info on the violation to see the Rule ID.
  - Alternatively, find the ID in the console output.
- Add the Rule ID to `axeSkip`:

```ts
await expectScreenshotsToMatchSnapshots({
  axeSkip: ["landmark-one-main"], // Replace with the relevant Rule ID(s)
  page,
  testInfo,
  headless,
  snapshotName: "example-snapshot",
  waitForThisToBeVisibleAndStable: page.getByText("Example Text"),
})
```
Unit tests are located in each service's directory.

- For JavaScript/Node.js Services: Navigate to the service directory and run:

```bash
npm run test
```

- For the Backend (Rust): From the project root, run:

```bash
bin/cargo-test
```