Test Summary#

Feature in beta

Beta features are subject to change.

Working with tests in your projects should be efficient and easy. The bigger your project becomes, the harder it is to track all the failures in your workflows. Without a summary, you first have to navigate to a specific job and then search through its logs for failure messages:

Job Logs

The test summary provides a dashboard where you can track your suites and quickly find failing specs:

Test Summary Tab

How does it work?#

Test summary relies on JUnit XML format report files generated by your test runner. The JUnit XML report is compiled by the Test Results CLI and sent to your artifact store. See the Test Results CLI documentation page for more details about the compiler.
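
For orientation, here is a rough sketch of what a JUnit XML report looks like; exact attributes vary by test runner, so treat the names below as illustrative rather than a canonical schema:

<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
  <!-- One testsuite per group of tests; counts are aggregated by the runner -->
  <testsuite name="Authentication" tests="2" failures="1" time="0.42">
    <testcase name="logs the user in" classname="AuthTest" time="0.30"/>
    <testcase name="rejects a bad password" classname="AuthTest" time="0.12">
      <!-- Failed cases carry a failure element with the message and trace -->
      <failure message="expected 401, got 200">...stack trace...</failure>
    </testcase>
  </testsuite>
</testsuites>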

How to use it?#

First, you have to generate a JUnit report. Most test runners ship configurable JUnit formatters that can be used for this.
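
For example, assuming the relevant formatter is installed, the commands look roughly like this; flags vary by runner and version, so check your runner's documentation:

# pytest has built-in JUnit XML output
pytest --junitxml=junit.xml

# RSpec needs the rspec_junit_formatter gem
bundle exec rspec --format RspecJunitFormatter --out junit.xml

# Go tests via gotestsum
gotestsum --junitfile junit.xml ./...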

After this step is complete, you need to compile and publish the test results to the artifact store. This is usually done in the epilogue block:

epilogue:
  always:
    commands:
      - test-results publish PATH_TO_YOUR_JUNIT_FILE

Because the epilogue's always section runs regardless of the job's outcome, the report is published even if your test runner exits with a non-zero status (due to a spec failure).
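
Putting both steps together, a complete block for a hypothetical Python project might look like this (the job name and junit.xml path are illustrative):

blocks:
  - name: Tests
    task:
      jobs:
        - name: Unit tests
          commands:
            - checkout
            # Writes junit.xml even when some specs fail
            - pytest --junitxml=junit.xml
      epilogue:
        always:
          commands:
            - test-results publish junit.xml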

There might be circumstances where only some jobs produce JUnit reports. You can configure your workflow to reflect that:

blocks:
  - name: Static checks and tests
    task:
      prologue:
        commands:
          - checkout
          - make install

      jobs:
        - name: Lint code
          commands:
            - make code.lint
        - name: Static code analysis
          commands:
            - make code.static-check
        - name: Run tests
          commands:
            - export PUBLISH_TEST_RESULTS=true
            # Generates the junit.xml file
            - make test

      epilogue:
        always:
          commands:
            # Publishes test results only if the PUBLISH_TEST_RESULTS variable is set
            - '[[ -v PUBLISH_TEST_RESULTS ]] && test-results publish junit.xml'
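
Note that [[ -v VAR ]] requires bash 4.2 or newer. If your agent runs an older shell, a roughly equivalent check (which treats an empty value the same as unset) is:

- '[[ -n "${PUBLISH_TEST_RESULTS:-}" ]] && test-results publish junit.xml'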

If you're looking for stack-specific configuration, see the Framework configuration section below.

Interface#

The UI gives you access to:

Suite ordering:

  • by name, alphabetically
  • by execution time, slowest first
  • with failed cases first

Filtering your results by a query string; tests whose names don't include the string are hidden.

By default, skipped and passed tests are not displayed.

Advanced configuration#

Framework configuration#

Currently, the following stacks are fully supported. More will be added in the future.