test.guide connects test execution and management.

1. Introduction

test.guide is a centralized application for the supervision, analysis and follow-up processing of test procedures, which has been specially developed for use in the automotive sector. It significantly facilitates the management of test resources. At the same time, it encourages cross-role cooperation, thereby closing the gap between test execution and test management.

1.1. Release Notes

Information on the latest changes to the product can be found at:

1.2. Installing

See the Operations Manual, which is included in the release ZIP in the /doc directory.

1.2.1. System view

The following system view provides an initial overview of the most important connected subsystems with their interfaces.

[Figure: system view of the connected subsystems and their interfaces]

1.3. Key modules

This section briefly describes the key modules in test.guide.

1.3.1. Test report management

Project-specific management, analysis and visualization of arbitrary test reports and result data

Test execution in development projects leads to the accumulation of various reports and data. Test automation intensifies this, producing very high numbers of test executions, which are distributed over several users and test environments.

Area of Application

test.guide is a centralized database application, flexibly deployable in different development and test environments. It has been developed specifically for use in the automotive area and provides the capability to manage, view and analyze test executions and results via an intuitive, web-based interface.

Usage Scenarios

[Figure: test report management features]

Functionality

test.guide centrally stores test reports and related file-based test data such as log files or images. By capturing all relevant metadata for every single test execution and providing extensive filtering capabilities, test.guide makes trend analysis and root cause analysis of test failures a breeze.

In order to import test reports from any test execution system, test.guide provides an ASAM ATX-compliant interface and ready-to-use adapters for the most common test report formats.

The user interface is fully browser-based and supports using test.guide on various user devices such as notebooks, smartphones and tablets.

1.3.2. Test case coverage

Area of application (Just “cover” everything…​)

The test case coverage module provides features to analyze test case executions regarding scopes, variants and requirements.

Usage Scenarios

[Figure: questions answered by the test case coverage module]

Functionality

[Figure: test case coverage workflow]

1.3.3. Monitoring

Area of application

The monitoring module continuously collects status and health data from the test benches. From this data, test.guide provides an overview of the complete test environment’s state and allows test results to be related to test bench data. Furthermore, test benches observed by the monitoring module can be used for automated test execution distribution.

[Figure: monitoring overview]

Usage Scenarios

Monitoring provides answers to the following questions:

  • Are all test benches available?

  • Which test bench is currently showing problems?

  • Which tests are currently running on which test benches?

  • How are the test benches currently configured?

  • How well are the test benches utilized?

Functionality

On each test bench, a small application called the ResourceAdapter is executed. It transfers the node’s vital data, test executions and configurations to the central test.guide server, where the data is aggregated and related to test case executions.

The ResourceAdapter can also be extended via plug-ins, e.g. to capture specific test bench data.

2. Using

test.guide provides multi-project support. If multiple projects exist, you can switch to another project at the top of the navigation. The navigation is also used to access the different features of test.guide. This chapter aims to provide an overview of these features and how to use them.

If you want to use test.guide in German, just change the locale at the bottom of the navigation.

2.1. Using the test report management

The main requirement of our customers is to analyze and manage their test report data, which is usually very extensive. For this purpose, the Test report management section can be used. This section provides many different views of your test reports:

  • Project statistics: This view provides high level information about test cases and an overview on the project progress. You can have a look at the verdict distribution and some meta information about the uploads.

  • Test plans: You can use this view if you are interested in the test suites contained in your test reports.

  • Report filter: The report filter contains the key features of this section.

  • Import ATX: This view allows you to manually upload test reports.

  • History: Here you can see a list of the latest uploads.

  • Runtime analysis: In this view you can manage analyses which examine test case executions by their runtime.

2.1.1. Which test report formats are supported?

test.guide works with the ASAM ATX standardized report format. Furthermore, the application also provides an API (X2ATX) to import data from different formats.

At present, the following formats are supported:

If your report format is not listed, there are two solutions available:

  • Create an X2ATX plugin matching your specific needs.

  • (Recommended) Make use of the existing JSON2ATX plugin by converting the report to the JSON structure defined in its schema.

To download the JSON schema, go to the Import page. Select a ZIP file and choose the JSON2ATX plugin from the drop-down menu. A link for downloading the schema file will appear.
An example of an upload via API can be found in chapter Upload via REST API.
Limitations imposed by the ATX format

The way in which the report information is contained in the ASAM ATX report format imposes some constraints on identifiers used for some report elements. In particular test case names, parameter set names, test constant keys and environment names must comply with the schema for identifiers defined in ATX. The valid pattern for those identifiers is:

[a-zA-Z]([a-zA-Z0-9]|_[a-zA-Z0-9])*_?

So identifiers

  • may only contain the letters A to Z (in upper and lower case), numbers and underscores

  • must not contain multiple underscores in sequence

  • must not start with a number or underscore

When uploading a report from ecu.test, the identifiers will be adjusted automatically (for more information see Why is the name of my test case in test.guide different than in ecu.test?). If you upload your report by other means, you have to ensure compliance with this schema yourself. Reports that do not follow this definition will be rejected on upload.
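If you upload reports by other means, you can check identifiers against the quoted pattern up front. A minimal Python sketch (the pattern is taken verbatim from above; the function and variable names are illustrative):

import re

# The ATX identifier pattern quoted above, applied as a full-string match.
ATX_IDENTIFIER = re.compile(r"[a-zA-Z]([a-zA-Z0-9]|_[a-zA-Z0-9])*_?")

def is_valid_atx_identifier(name: str) -> bool:
    return ATX_IDENTIFIER.fullmatch(name) is not None

assert is_valid_atx_identifier("Test_5_System")
assert not is_valid_atx_identifier("5A")        # must not start with a number
assert not is_valid_atx_identifier("My__test")  # no consecutive underscores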

2.1.2. How do I upload my test reports to test.guide?

You can upload your test reports manually at Import ATX. Select a stand-alone ATX XML file or a ZIP archive containing the report and, optionally, arbitrary additional artifacts such as traces, plots, the original TRF report and many more.

Since automation is a major present-day goal, it is recommended to upload all test reports automatically; see Upload via REST API.
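As a rough illustration of such an automated upload in Python (the endpoint path, parameter names and authentication details below are assumptions for this sketch; the authoritative interface is described in Upload via REST API):

import requests

BASE_URL = "https://testguide.example.com:8085/testguide"  # server, port and context path
AUTH_KEY = "YZiA5tf8ORBCbKq..."                            # upload authentication key
PROJECT_ID = 2

with open("report.zip", "rb") as report_zip:
    response = requests.post(
        f"{BASE_URL}/api/upload/file",                     # assumed endpoint path
        params={"authKey": AUTH_KEY, "projectId": PROJECT_ID},
        files={"file": ("report.zip", report_zip)},
        timeout=300,
    )
response.raise_for_status()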

My additional files (like traces, plots, etc.) are very big. How can I speed up the upload process?

It is possible to upload additional files externally in advance in order to reduce the amount of data that needs to be uploaded to test.guide. However, there are some prerequisites for using this feature:

  • You need to use the JSON format for the upload of the report data (see supported formats for more information).

  • A depository needs to be configured in the artifact management.

    • This depository needs to have a storage of type SMB, SFTP, Artifactory or AWS S3 configured.

    • Note that this storage does not necessarily have to be the active storage of that depository.

  • This depository must be the active file repository.

If these requirements are satisfied, you can upload additional files independently of the report itself:

  • First, upload the files to the specified storage that is configured in the depository.

  • Second, specify these files in your JSON report.

    • In the test case specification, in addition to the attached artifacts included in the uploaded ZIP file, specify artifact references to the externally uploaded files as shown below.

    • Note that you have to provide the MD5 hash of the file’s content.

...
  {
    "@type": "testcase",
    ...
    "artifacts": [
      "file/in/zip.trace"
    ],
    "artifactRefs": [
      {
        "ref": "\\\\SMB-Host\\Share\\folder\\in\\share\\uploadedFile.trace",
        "md5": "bd01856bfd2065d0d1ee20c03bd3a9af"
      },
      ...
    ]
  }
...
  • Third, pack this JSON file together with all files not already uploaded into a ZIP file and upload it like a normal report in the JSON format.
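The MD5 hash required for an artifact reference can be computed up front, for example with this small Python helper (the file path is illustrative):

import hashlib

# Compute the MD5 hash of a file's content for use in "artifactRefs".
def md5_of_file(path: str) -> str:
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(md5_of_file("uploadedFile.trace"))  # e.g. "bd01856bfd2065d0d1ee20c03bd3a9af"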

For more information about the JSON schema, download the schema file: go to the Import page, select a ZIP file and choose the JSON2ATX plugin from the drop-down menu. A link for downloading the schema file will appear.
How can I upload my ecu.test reports to test.guide?

To upload ecu.test reports to test.guide, a connection to test.guide must be configured.

At Info  Report management you find your ecu.test to test.guide upload configuration settings.

This can be achieved in the following ways:

  • Preferred way: Via the ecu.test Options  Settings  test.guide

    • [Screenshot: ecu.test settings]

    • a new test.guide instance must be created (using the +)

    • the test.guide server and port must be configured

    • the following ATX generator settings must be set:

      • on the one hand the uploadAuthenticationKey (preferably via the created technical user):

        • [Screenshot: authentication key from the technical user]

      • on the other hand the projectId:

        • this can be determined for the selected project in test.guide by hovering the mouse cursor over the selected project

        • [Screenshot: project ID from the user profile]

  • The same options can also be set via the TCF ATX report settings.

    • [Screenshot: ecu.test TCF settings]

  • After configuration (via Options  Settings  test.guide) an upload in the Test report tab of ecu.test to the specific instance is possible.

    • [Screenshot: upload to test.guide from ecu.test]

    • if configured by the TCF, it must be loaded and activated for execution

External server upload settings

By default, the required parameters for an upload to test.guide (marked orange) are part of the settings in ecu.test.

  • [Screenshot: test.guide settings in ecu.test]

To allow for a more flexible configuration, these parameters (and only these) can be overridden with an external JSON file. The configuration parameter uploadSettings must point to a file within the ecu.test workspace. The keys correspond to the general settings.

{
    "url": "192.168.178.134",
    "useHttps": true,
    "port": 8085,
    "contextPath": "/testguide",
    "authKey": "YZiA5tf8ORBCbKq...",
    "proxies": {"https": "...", "http": "..."},
    "projectId": 2
}
Why is the name of my test case in test.guide different than in ecu.test?

Reports are transferred to test.guide using the ASAM ATX format, in which identifiers have to comply with a given schema (for more information see Limitations imposed by the ATX format). Identifiers that do not fit the schema are adjusted to allow the upload to test.guide. The following changes are made on report generation:

  • German umlauts are converted into two vowels, example: "Stoßdämpfer" → "Stossdaempfer"

  • special characters are replaced with underscores; if a special character is the last character of the name, it is omitted instead, example: "Test-5(System)" → "Test_5_System"

  • white spaces are removed, example: "My test" → "Mytest"

  • identifiers starting with numbers will be preceded by an "i", example: "5A" → "i5A"

  • leading underscores are omitted, example: "_test_" → "test_"

  • finally, multiple underscores in sequence will be reduced to only one underscore, example: "My__test: (speed == 5)" → "My_test_speed_5"

These changes are not only applied to test case names but also to test plan names, constant keys and some other identifiers.
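For illustration, the documented rules can be approximated in a few lines of Python. This is only a sketch of the rules listed above, not the actual ATX generator implementation in ecu.test, which may differ in details such as rule ordering and character coverage:

import re

UMLAUTS = {"ä": "ae", "ö": "oe", "ü": "ue", "Ä": "Ae", "Ö": "Oe", "Ü": "Ue", "ß": "ss"}

def adjust_identifier(name: str) -> str:
    for umlaut, replacement in UMLAUTS.items():   # "Stoßdämpfer" -> "Stossdaempfer"
        name = name.replace(umlaut, replacement)
    name = re.sub(r"\s+", "", name)               # "My test" -> "Mytest"
    name = re.sub(r"[^A-Za-z0-9_]+$", "", name)   # trailing special characters are omitted
    name = re.sub(r"[^A-Za-z0-9_]", "_", name)    # other special characters -> "_"
    name = name.lstrip("_")                       # "_test_" -> "test_"
    if name and name[0].isdigit():                # "5A" -> "i5A"
        name = "i" + name
    return re.sub(r"__+", "_", name)              # collapse consecutive underscores

assert adjust_identifier("My__test: (speed == 5)") == "My_test_speed_5"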

2.1.3. How do I find my test reports of interest?

The report filter page is a cornerstone of test.guide.

[Screenshot: report filter]

The report filter can be used to create complex filter queries. Just enter your query terms in the filter mask.

Be aware that the filter form is divided into sections; not all filter fields are visible by default.
The filter and its fields are configurable. See Project settings  Filter settings

2.1.4. How do I save my complex filters?

Filters can be stored easily in order to use them again at a later date. To do so, click the Save filter button.

If you want to allow your colleagues to also use the saved filter, change the filter type setting to Public filter.
If you have the permission for managing projects, also another filter type called Project filter will be available. Filters of this type are usable by all project users and in addition, may be used for creating project subscriptions.

To make the filter easier to use, the start date and end date can be set relative to the current time. The following keywords can be used:

  • "now": current point in time (replaces date and time)

  • "today": current date (replaces date and must be followed by a set time)

In addition, time shifts are possible, for example: "today 19:00 - 1d" (for yesterday 19:00). Available time shifts are plus and minus minute (min), hour (h), day (d), week (w), month (m) and year (y).
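For illustration, "today 19:00 - 1d" resolves as follows (this sketch only mimics the semantics of the example; it is not test.guide’s parser):

from datetime import datetime, timedelta

# "today 19:00 - 1d": today's date with the time set to 19:00, shifted back one day.
resolved = datetime.now().replace(hour=19, minute=0, second=0, microsecond=0) - timedelta(days=1)
print(resolved)  # yesterday, 19:00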

2.1.5. How can I get notified about new test cases matching my filter?

You can create subscriptions for your current filter. Use the Subscribe filter button above the filter form.

You can also subscribe and unsubscribe to existing project filters. Subscribing to a project filter creates a project subscription, which will be visible to the project manager at Project settings  Project filters. There the project manager can also delete project subscriptions of the project users or manually create a new project subscription for a project member at any time.
To use this feature, the email configuration of test.guide must be configured correctly. This must be done by an admin and is described at System configuration  Email.

2.1.6. How do I share a filter quickly?

You can create a link for sharing the filter by using the Share button above the filter form. The drop-down menu located there also offers additional options for sharing a filter.

2.1.7. How do I get detailed information for a single test case?

You can analyze a specific test case by clicking on the test case name in the report filter result. This triggers a redirection to the test case detail page. There you can see all executions of the chosen test case. For a specific run, you can use the double arrow button at the end of the line to show further information.

If you want to see a particular test case execution, don’t click on the test case name. Instead, directly click on the verdict slot of the run you are interested in.

2.1.8. How do I compare test cases to see what has changed?

In the test case detail view, you can select multiple lines using the checkboxes. At the bottom of the table is a Compare selection button; click it to open the Test report diff page for the selected test case executions.

2.1.9. How do I analyze tests with parameter variations?

In the test case detail view, you can select multiple lines using the checkboxes. At the bottom of the table is a Heatmap button. Configure the heatmap by selecting the points of interest. Further details are explained directly on the page.

2.1.10. How do I restructure the result list of the report filter?

The report filter provides a number of different views for displaying the results. You can select them using the Active view drop-down menu above the filter result. At the moment, test.guide provides the following views:

[Screenshot: report filter views]

2.1.11. How do I export results to …​

For exporting data, you have to change the active view of the report filter to Export view. In this view, select the test case executions to export. By clicking the checkbox in front of a particular line all executions of this test case are selected, while clicking on a verdict slot selects one particular test case execution. Furthermore, there are some selection strategies. You can use them via the Select all button and the drop-down menu located next to it.

…​ another test.guide?

To export to another test.guide instance, click on the Test management tool button and select the target instance. After that, just submit the form.

The other test.guide instance has to be configured at Project settings  Management tools.
…​ test management tools?

To export to a test management tool, click on the Test management tool button and select the tool in the drop-down choice. After that, just submit the form.

The test management tool has to be configured at Project settings  Management tools. Supported tools are listed at Integrating test management tools.

…​ PDF documents?

You can export PDF documents by clicking the button PDF report. test.guide contains a default test summary report template, which is ready to be used.

If the layout or content of the resulting PDF document does not meet your requirements, you can add custom templates. See Customizing the PDF export.
…​ Excel documents?

test.guide can also export Excel files for your test report data. You can generate them by using the Excel button.

…​ or add the selection to a release?

On the right side of report filter result toolbar, there is a Record release button and also a drop-down menu offering further options. Using them, you can add the selection to an existing release or create a new release.

2.1.12. How do I get test plan focused test report information?

You can use a special page for this feature at Test report management  Test plans.

2.2. Using the review feature

With automated test executions, it is common that someone has to check the results, especially for failed runs or when errors have occurred. After analyzing the result, you may want to save your notes, set a reevaluation or create a ticket. These procedures and many more can be addressed with the review feature.

2.2.1. How do I make a review?

You can just click on the reevaluation icon (for the test case execution you want to review) on the test case detail page. The icon may look different depending on the review state: If the execution does not have a review, the icon shows binoculars. If at least one review exists, it shows a clipboard, which is either empty (if the review has no reevaluation) or contains a letter, which represents the reevaluation.

2.2.2. How do I create a defect/issue during the review?

You can select an issue tracker in the Create Ticket in Issue Tracker drop-down choice. After that, select an issue type and provide all information for the issue form.

It is best practice to first complete the review form and then use the Create Ticket feature: in case an issue tracker template exists, the ticket fields can be prefilled with information from the review.
The issue tracker and (if applicable) the issue tracker template have to be configured beforehand. See Project settings  Issue Tracker.

2.2.3. How do I reuse my previous reviews?

If you notice, for example, that yesterday’s analysis is still applicable, you can also reuse your review of yesterday.

[Screenshot: reusing a previous review]

2.2.4. How do I bulk review test cases with the same error?

If your analysis shows that there is an error affecting multiple executions of one particular test case, you can do a multi-selection of these items on the Test report diff page and click on Add reviews. This creates a review for all selected test case executions.

If your analysis shows that there is an error affecting multiple test cases, you can use the Bulk review view of the report filter. Filter for the affected test case executions you want to review. Next, multi-select the test cases or particular test case executions and click on Edit selected test case reports. On the page that opens, you can further limit your selection and do the bulk review. As a shortcut, if you want to review the whole selection of the report filter page, you can also click Perform review directly.

2.2.5. How do I perform reviews within ecu.test?

You can also create reviews using ecu.test. Reviews created this way are included in the test report that is uploaded from ecu.test to test.guide. There are two ways to create reviews in ecu.test:

  1. While the test is executed, by using the test step Report, which can be found in the category Default test steps  Miscellaneous. Activate the Report comment in order to create a review whose comment is the specified text.

  2. After the test has run by using the TRF-Viewer. Right-click on a test step and select Manage report comments and revaluations.

After the review has been created (either way), upload the report from ecu.test to test.guide as described in Integrating ecu.test.

Advanced usage

You may have noticed that neither the Report test step nor the TRF-Viewer offers to specify the same number of properties for the review as test.guide does. In order to compensate, there is a special syntax for the review comment that allows you to specify more review properties.

The following review properties can be set this way:

  • Summary, syntax example: #!summary Bus communication failed!#

  • Verdict, syntax example: #!verdict FAILED!#
    Using this syntax, the verdict can only be worsened and never improved. Also be aware that the TRF-Viewer will not show the revaluation verdict.

  • Tags, syntax example: #!tag buscomm!# or #buscomm#
    The expected tags must be configured in ecu.test's test.guide setting detectReviewTags. They must match the tags that are configured in the test.guide project (see Project settings  Review settings).

  • Defect class, syntax example: #!defectClass Moderate!# or |Moderate|
    The expected defect classes must be configured in ecu.test's test.guide setting detectReviewDefects. They must match the defect classes that are configured in the test.guide project (see Project settings  Review settings).
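Several of these properties can be combined in one review comment, for example (assuming the tag buscomm and the defect class Moderate are configured as described above):

 #!summary Bus communication failed!# #!verdict FAILED!# #!tag buscomm!# #!defectClass Moderate!#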

2.3. Using the release management

2.3.1. Using the coverage

In projects that follow the V-model, test cases are supposed to verify certain elements from the left side of the V. In the context of requirements based testing, most often these elements are requirements. But one can also think of other specification elements like components of a system’s architecture or variants of the system under test that need to be verified. Usually there is a traceability link that connects a test case with the element that it verifies.

The Coverage filter in test.guide allows you to get insights into the completeness of the testing efforts in your project. A so-called Coverage Filter Definition contains all elements from the left side of the V that are relevant to you and connects each of them with a test case filter. This filter selects specifically the test case executions that verify the particular specification element, employing traceability links. test.guide can then apply the Coverage Filter Definition to the set of test case executions that were uploaded to produce a Coverage Filter Result. This artifact provides information about the completeness of the coverage of your specification elements.

Coverages can be created and managed fully through the GUI or the REST API.

[Figure: coverage scopes and requirements tree]

How do I make a coverage?

A coverage is created using a so-called coverage filter definition.

This specifies the criteria to be checked in XML format (see also the corresponding XSD). Examples and further information can be found in the TraceTronic example filter definitions for Cartesian Product and Filter Tree General.

...
<node>
        <label>Requirements</label>
        <!-- Nodes of the third hierarchy level: -->
        <node>
                <label>Traffic light detection</label>
                <url>http://myreq.company.com/req/42</url>
                <!-- Here, the label of the node (i.e. "Traffic light detection") is
                        used as the scopes value: -->
                <scope name="Crossing assistant" />
                <filters>
                        <attribute name="ReqId">
                                <value>42</value>
                        </attribute>
                </filters>
...

The predicates that can be specified are always transformed into individual result filters, so it is important that the criteria to be checked can also be found as information in the uploaded report, e.g. in the form of a constant or an attribute such as the requirement ID.

How do I use placeholders?

If coverages differ only in a build number or another identifier, for example in regression tests, a placeholder can be defined in the coverage, which must then be specified before execution.

If placeholders are defined without default values, certain features such as coverage subscribing will not work!
How do I create coverages based on my requirements?

If a list or an extract of the requirements to be tested is available, e.g. in Excel, then a script can be used to easily read the requirements from Excel and convert them into a test.guide coverage filter definition.
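A minimal sketch of such a script, assuming openpyxl is installed, that the requirement ID and title are in the first two columns of the first sheet, and that reports carry the requirement ID in an attribute named ReqId (as in the XML example above); the file names and the scope are illustrative:

import xml.etree.ElementTree as ET

from openpyxl import load_workbook

def build_filter_definition(xlsx_path: str, scope: str) -> ET.ElementTree:
    root = ET.Element("node")
    ET.SubElement(root, "label").text = "Requirements"
    sheet = load_workbook(xlsx_path, read_only=True).active
    for req_id, title in sheet.iter_rows(min_row=2, max_col=2, values_only=True):  # skip header row
        if req_id is None:
            continue
        node = ET.SubElement(root, "node")
        ET.SubElement(node, "label").text = str(title)
        ET.SubElement(node, "scope", name=scope)
        filters = ET.SubElement(node, "filters")
        attribute = ET.SubElement(filters, "attribute", name="ReqId")
        ET.SubElement(attribute, "value").text = str(req_id)
    return ET.ElementTree(root)

build_filter_definition("requirements.xlsx", "Crossing assistant").write(
    "coverage_filter_definition.xml", encoding="utf-8", xml_declaration=True
)

Note that the real filter definition may require additional elements; consult the XSD mentioned above.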

How do I create coverages based on test suites of ALM tools?

It is also possible to import test suites directly from an ALM tool as a coverage via the ALM import button in the filter definition section.

For this, an ALM connection needs to be configured via Project settings  Management tools.

This has the advantage, for example, that for very large test suites that take several days to complete, you can use the coverage to check whether all the results are available, i.e. whether all the tests in the planned test suite have been executed.

How do I export coverage data?

Coverage data can be queried and processed via the REST API.

However, via the Export to button in the Coverage filter, it is also possible to export the data to Excel or to create a PDF coverage report.

2.3.2. Using the release train

In test.guide, a Release is a set of test case executions. Releases can be used to collect test case executions that are relevant for an upcoming release of the system under test.

[Figure: release tree]

You can either manually add test case executions to a Release or use a coverage filter to specify which test case executions belong to a Release. These are then automatically added as soon as they are uploaded.

Releases can be created and managed fully through the GUI or the REST API.

[Figure: release train timeline]

How do I create a release?

Releases are mapped as tree structures. A release can contain multiple sub-releases.

Release folders (the nodes of the tree) can be created to structure releases, whereas the actual release information comes from the releases (the leaves).

Releases and release folders can be created via the Release overview. To fill a release with existing test data, you can open a report filter in the test report management and either add the results to an existing release or record a new release.

[Screenshot: adding test case executions to a release]

When creating a release, various metadata such as ticket numbers and release attributes can be specified. This helps in searches, but also offers additional possibilities.

For example, if a release is directly linked to a Jira ticket, all created defect Jira tickets are automatically linked to the release ticket during a review. This makes it easy to track in Jira which bug tickets were created for the release ticket.

If attributes are specified, it is possible that these attributes are automatically mapped to coverage placeholders. So if a placeholder is defined in a coverage, e.g. for a buildNumber, and the value is set via the release attribute buildNumber, the release attribute is automatically set for the coverage.

To benefit from the full functionality of releases, you should assign a coverage to each release. This means that the progress of a release is automatically determined with each test report upload and an overview is easily possible even at the root release level.
How do I assign a coverage to a release?

When a release (leaf) is created, coverage can be assigned. This is also possible at a later date and the progress of the release is then calculated based on the stored release data.

If a release with coverage is created, this release coverage is duplicated in a separate release folder under the coverage filter definitions.
How do I use the release result?

For each release, the result distribution and, if coverage is specified, the progress is also displayed. If the progress is 100%, this means that all reports to be tested have been uploaded. Now you can see from the result distribution how many faults still need to be fixed or whether the release can be released in this state.

A review with a re-evaluation has a direct effect on the release result distribution for releases that have not been locked!
How do I lock/unlock a release?

When a release is completed, it should also be locked, because once a release is locked, further changes to the release, e.g. through reviews, are no longer possible. Only a user with the necessary permissions can unlock the release again if a change is still necessary.

Another advantage of locked releases is that they are not taken into consideration when uploading new test reports, which automatically speeds up all uploads. If you keep too many releases with large coverages open for a long time, you have to accept increased upload times, because each upload needs to be checked against every non-locked release.
How do I export release data?

Release data can be queried and processed via the REST API.

However, it is also possible to create a PDF test summary report by clicking on the drop-down under the Actions button and selecting Export pdf report.

If the layout or content of the resulting PDF document does not meet your requirements, you can add custom templates. See Project settings  Reporting.

2.3.3. Using Quality-Gates (Q-gates)

Quality gates (Q-gates) enable test processes to be automated beyond simple continuous testing.

They prevent, for example, test artifacts from being processed further if the test criteria for the artifact are not met. But at the same time, it is also possible to pass the artifacts on to other teams, to inform colleagues via mail or to start further tests for successful tests.

With the help of Q-gates, it is possible to achieve easier control of the processes in the project through staging.

Q-gates can be created and managed fully through the GUI or the REST API.

[Figure: Q-gate graph with preconditions]

How do I set up a Q-gate staging?

Staging involves creating a Q-gate plan that describes the process for releasing a test artifact, for example.

Once the plan has been developed, it can be started. This is similar to an automation pipeline that is designed and activated. The difference with Q-gate plans, however, is that each plan is created for one specific process execution. If you want to use a Q-gate plan to validate different versions of a test artifact, you have to create a new Q-gate plan for each version of the test artifact. The simplest approach is then to copy the old plan when a new version is committed, change only the parameters for the new test artifact to be tested if necessary, and start the plan for the release.

Q-gate states only change after the plan has been started. After starting the plan execution, changing, adding or deleting plan elements is no longer possible, but it is still possible to add attributes to the plan or to update them. A plan can be in the following states:

  • UNDER_CONSTRUCTION - if the plan is currently being created and has not yet started.

  • IN_EXECUTION - if the plan has been started.

  • INVALIDATED - if the plan has been cancelled, for example, because of a misconfiguration.

  • FINALIZED - if the plan has been completed.

A new plan can be created via the plan overview using Add plan.

Which Q-gates are there?

There are currently six different Q-gates:

  • A Manual Q-gate is equivalent to a switch. Here the user can trigger an action either via the GUI or the REST API. This can be used, for example, to confirm a release or for automation by an external system.

  • An Approval Q-gate is similar to a Manual Q-gate. The gate can be used to ensure that users explicitly approve further processing: STOPPED states are not propagated automatically until a user has explicitly approved them.

  • A Release Q-gate is based on a defined test.guide release (with coverage) and is switched automatically as soon as the defined release conditions are fulfilled.

  • The Join Q-gate is a connecting Q-gate that waits for several Q-gate results and combines the current status. This is useful for better visualizing important milestones in the validation process.

  • A Pull Q-gate uses the state of a shared Q-gate from another Q-gate plan, project or test.guide instance. See chapter Sharing and pulling Q-gate states for details.

  • A Workflow Q-gate is designed to integrate with the Workflow Automation. Once its preconditions are met, it triggers a new flow task execution via the specified flow trigger. After the flow task has finished, the Q-gate switches to either the STOPPED or PASSED state, depending on the task’s final state. This allows the staging process to continue or halts it.

How do Q-gates interact with each other?

Q-gates switch depending on their state.

Q-gates can have four different states (in order of increasing state weight):

  • UNDEFINED, if there is no result for this stage available

  • IN_PROGRESS, if all preconditions for the stage are fulfilled but the Q-gate itself does not yet have a result.

    • 💡 This means that all Q-gates are automatically in progress if they have no preconditions!

  • PASSED, if the stage is fulfilled

  • STOPPED, if the stage is not fulfilled

Q-gates can be connected to each other, which means that a Q-gate can define as an input condition that one or more other Q-gate results must be finished. This is how the different Q-gate states are achieved.

On the one hand, it is important to know that STOPPED has a high priority and is propagated immediately through a Q-gate plan, so that the analysis can start immediately in the event of problems.

On the other hand, if a Q-gate is PASSED, it always remains PASSED and cannot be changed. 💡 A release Q-gate with STOPPED can still be switched to PASSED, for example, by a PASSED re-evaluation.
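The propagation rules above can be illustrated with a small sketch (an illustration only, not test.guide’s implementation; it ignores, for example, that a PASSED gate keeps its state):

from enum import IntEnum

class QGateState(IntEnum):  # in order of increasing state weight
    UNDEFINED = 0
    IN_PROGRESS = 1
    PASSED = 2
    STOPPED = 3

def derived_state(preconditions, own_result=None):
    # STOPPED has a high priority and propagates immediately.
    if QGateState.STOPPED in preconditions:
        return QGateState.STOPPED
    # A Q-gate without preconditions (empty list) is automatically IN_PROGRESS.
    if all(p is QGateState.PASSED for p in preconditions):
        return own_result if own_result is not None else QGateState.IN_PROGRESS
    return QGateState.UNDEFINED

assert derived_state([]) is QGateState.IN_PROGRESS
assert derived_state([QGateState.IN_PROGRESS]) is QGateState.UNDEFINED
assert derived_state([QGateState.PASSED, QGateState.STOPPED]) is QGateState.STOPPED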

Q-gates can be shared across Q-gate plans, projects or even test.guide instances. The state of these shared Q-gates is then accessible via so-called Pull Q-gates. Technically, this is achieved via a REST API. For authorization, every shared Q-gate specifies which project roles are allowed to access it. For authentication, a technical user is usually used, who must have at least one of the specified project roles.

The following procedure is usually advisable to use this feature. At the sharing party:

  • Create a specific project role for the share.

  • Create a technical user and assign this project role.

  • Create an authentication key for the technical user.

  • Assign the project role to the Q-gate to be shared.

These steps can be carried out in one operation at Release management  Settings by clicking the Create user and project role to share Q-gates button and using the dialog that opens.

The other party attempting to retrieve the status of the shared Q-gate must be provided with the connection data of the sharing test.guide instance (URL and project ID) and the authentication key of the technical user:

  • Create a new tool of the type test.guide with this data at Project settings  Management tools.

  • Create a Pull Q-gate in the desired Q-gate plan. Define a filter that unambiguously determines the shared Q-gate and specify the authentication key with which the Pull Q-gate authenticates itself to the sharing party.

2.4. Using the monitoring of test resources

A new module called Test infrastructure is currently under development. Some features of the monitoring module will be successively integrated into this new module.
Using the test infrastructure management
All content within this collapsible section describes features of the development preview of the new module Test infrastructure. Please be aware that changes are possible at any time. Any kind of feedback is much appreciated. Please contact us at support@tracetronic.com.

You can manage your test infrastructure in test.guide. To this end, you can create test resources and test resource machines, add information such as attributes, and/or connect a sensor/client called the ResourceAdapter. Besides filtering and a live view of test resource machines with a connected ResourceAdapter, you can distribute test executions to your test resource machines (see Using the test execution).

Test resources and test resource machines

test.guide defines two levels for managing test infrastructure: test resource (TR) and test resource machine (TRM). A TRM is always a child of exactly one TR. The other way round, a TR can consist of multiple TRMs. The main reason for this two-level hierarchy is the need to map complex (mainly hardware) test benches that are controlled by more than one PC. Here, the TR level corresponds to the test bench itself and the TRM level to the individual PCs. In most cases, a 1:1 relationship between TR and TRM is recommended.

Existing users may know these two levels from the ResourceLocationId, where they were the 4th and 5th element, formerly known as test bench and machine.

The picture below shows the general structure of TRs, TRMs and their relationship.

[Figure: test resources and test resource machines]

Each test resource in test.guide receives a UUID, called the TR-ID, and has both a location and a name. Each test resource machine also receives a UUID, called the TRM-ID, and has a name. Different kinds of data are stored either at the TR or at the TRM level. The following overview shows the different data types:

  • Appointments (level: TR): booking of the test resource in a TR calendar

  • Attributes (level: TRM): additional information to describe a TRM, which can be added by the user

  • Monitoring data (level: TRM): information regarding the TRM, mainly gathered by the ResourceAdapter
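For orientation, the two-level model can be summarized in a short sketch (illustrative only; this is not test.guide’s actual data model or API):

from dataclasses import dataclass, field
from uuid import UUID, uuid4

@dataclass
class TestResourceMachine:
    # A TRM is always the child of exactly one TR.
    trm_id: UUID = field(default_factory=uuid4)                # TRM-ID
    name: str = ""
    attributes: dict[str, str] = field(default_factory=dict)   # custom attribute entries (strings only)

@dataclass
class TestResource:
    # A TR (e.g. a test bench) can consist of multiple TRMs.
    tr_id: UUID = field(default_factory=uuid4)                 # TR-ID
    location: str = ""                                         # "Building/Level/Room"
    name: str = ""
    machines: list[TestResourceMachine] = field(default_factory=list)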

Pages in the test.guide UI for managing the test infrastructure

There are different pages to get all jobs done regarding the management of test infrastructure. Please find an overview below. Functions that can be used from the pages are described in the following text.

Page Test resource overview (TR overview)
  • Main Purpose: Filter and find all test resources within the respective project.

  • How to get there? In the main navigation bar select [Test infrastructure  Test resources].

Page Activity distribution
  • Main Purpose: Find out which activities were carried out on the TR and its TRMs as a percentage.

  • How to get there? In the main navigation bar select [Test infrastructure  Activity].

Settings page
  • Main Purpose: Change settings regarding test infrastructure e.g. activity types.

  • How to get there? In the main navigation bar select [Test infrastructure  Settings].

Test resource detail page (TR detail page)
  • Main Purpose: See all information belonging to a specific test resource.

  • How to get there?

    • At the TR overview click on the name of a test resource.

    • At the TRM detail page click on the TR icon in the upper right toolbar.

    • At the Comparison of test resource configurations click on the name of a test resource.

    • At the Booking view click on the name of a test resource.

Test resource machine detail page (TRM detail page)
  • Main Purpose: See all information belonging to a specific test resource machine.

  • How to get there?

    • At the TR overview click on the TRM icon in the column "TRM".

    • At the TR detail page click on the TRM name within the TRM panel.

  • This page is deprecated. The TRM information will be moved to the TR detail page in the future!

Page Comparison of test resource configurations
  • Main Purpose: Compare multiple test resources by the configuration items of the underlying TRMs.

  • How to get there? At the TR overview select at least two TRs and click on Compare configurations on the bottom of the page.

Page Booking view
  • Main Purpose: Find a suitable time slot within multiple TR calendars.

  • How to get there? At the TR overview select at least one TR and click on Booking view on the bottom of the page.

Page Central ResourceAdapter settings
  • Main Purpose: Change ResourceAdapter configurations e.g. to activate a new plug-in.

  • How to get there? In the main navigation bar select [Project settings  ResourceAdapters] (Monitoring management rights are required).

Page Monitoring database settings
  • Main Purpose: Configure the settings for the monitoring database connection.

  • How to get there? In the main navigation bar select [System configuration  Monitoring] (Server Admin rights are required).

Page Monitoring IT view
  • Main Purpose: Overview of all test benches within the respective test.guide project (based on the information in the ResourceLocationId).

  • How to get there? In the main navigation bar select [Monitoring  IT View].

  • This page is deprecated and it will be replaced by the TR overview.

Page Monitoring Project view
  • Main Purpose: Overview of all test benches within the respective test.guide project structured by ecu.test projects.

  • How to get there? In the main navigation bar select [Monitoring  Project view].

  • This page is deprecated and it will be replaced by a new filtered view.


Create new TRs and TRMs

New test resources and test resource machines can be created within an existing test.guide project via the GUI or the API. To do so, you need the Monitoring management rights.

  • Creation of a new TR can be done using the button Create TR at the TR overview.

  • To create a new TRM, please go to the TR detail page of the desired parent test resource and click on Create TRM within the TRM panel.

When creating a new TRM using the API, a parent TR-ID is required. Since test resources and test resource machines can be created within test.guide, a connected ResourceAdapter is no longer a prerequisite for the creation of a TR and TRM (as it was before). Accordingly, TRs and TRMs can exist without a ResourceAdapter.

Connecting a ResourceAdapter to a TRM

By connecting a ResourceAdapter to a TRM, monitoring data from the machine on which the ResourceAdapter is running can be collected. A so-called "ResourceLocationId" is required for this. The ResourceLocationId will be replaced by the new TR-ID and TRM-ID in the future. To ensure the easiest possible transition for existing connected test benches, TR-IDs and TRM-IDs are created automatically when new data from a connected ResourceAdapter is processed by test.guide. The information from the ResourceLocationId is automatically linked and appended to the generated TR and TRMs as follows:

The ResourceLocationId has the format Building/Level/Room/TestBench/Machine. Its elements are mapped as follows:

  • Building, Level and Room: the Location of the test resource

  • TestBench: the Name of the test resource

  • Machine: the Name of the test resource machine

For as long as the ResourceLocationId exists, starting a ResourceAdapter with a new ResourceLocationId (see also How do I add a new test resource to my test.guide project?) will result in the automated creation of the corresponding TR and TRM. In the future, the TR and TRM will have to be created in test.guide first. At the same time, the Location and Name information of TRs and TRMs is read-only, and the location must consist of the three parts "Building", "Level" and "Room" separated by slashes.

To connect a ResourceAdapter after creating the TR and TRM in test.guide, please ensure that the ResourceLocationId contains all necessary information from the TR and TRM. For example, to connect a ResourceAdapter to a TRM with the name "PC1" that is the child of a TR named "HiL1" with the location "HQ/Level2/Room2.5", the resulting ResourceLocationId is "HQ/Level2/Room2.5/HiL1/PC1".
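Using the example above, the ResourceLocationId can be derived from the existing TR and TRM data like this:

# Format: Building/Level/Room/TestBench/Machine
tr_location = "HQ/Level2/Room2.5"   # the three location parts Building/Level/Room
tr_name = "HiL1"                    # maps to the TestBench element
trm_name = "PC1"                    # maps to the Machine element

resource_location_id = "/".join([tr_location, tr_name, trm_name])
assert resource_location_id == "HQ/Level2/Room2.5/HiL1/PC1"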

Changing the ResourceLocationId of an existing ResourceAdapter will cause the creation of a new TR- and TRM-ID!

Remove TRs and TRMs

The deletion of test resources and test resource machines is possible via the GUI and API. The TR overview page provides the ability to delete multiple TRs at once. The TR detail page provides a button to delete the currently displayed TR and buttons to delete underlying TRMs. Monitoring management rights are required for this.

When a TR is deleted via the GUI, all underlying TRMs with all their data will be deleted as well! Deleting a TR via the API requires all underlying TRMs to be deleted first. When deleting a single TRM, all data (monitoring data and custom attribute entries) belonging to this TRM will be deleted as well.

The deletion process may take a while, depending on the amount of data attached to a TRM. Please deactivate all ResourceAdapters before deleting a TR or TRMs, if any are connected.

Add data to TRs and TRMs

Calendars and appointments

To improve collaboration on a shared test resource, a calendar for a TR can be created. By adding appointments, the resource can be "booked". These booking features only exist on the TR level, because booking a single PC of a test bench makes no sense (see the two-level hierarchy of TR and TRM described above).

To add appointments, a calendar per TR has to be created at the TR detail page. An appointment consists of at least a subject, a start time and an end time. Furthermore, additional content can be provided in the body and a category can be assigned. Categories have to be defined beforehand on the Settings page (Monitoring management rights are required for this). Each category has a user-defined color. The appointments belonging to a category are displayed in the calendar in the respective color.

Appointments can be edited and deleted by users with the appropriate rights (Monitoring data: Create/Update and Delete). For better traceability and collaboration, the user and time of creation and of the last change are stored as metadata and displayed.

API endpoints for managing TR-appointments are also available.

Attributes for test resource machines

For a better description of your test resource machines, attributes can be defined at the test.guide project level. Monitoring management rights are required to create these attributes at the Settings page. The defined attributes can then be filled with values for each TRM (so-called custom attribute entries). Currently, only strings are supported. The right to create and update monitoring data is required to fill in the values. The attributes are visible and their values are editable at the TRM detail page.

Managing attributes is also possible using the Test infrastructure API.

To "delete" a single entry manually, please send the value null using the endpoint POST api/testinfrastructure/testresourcemachine/{testResourceMachineId}/customattribute/{customAttributeId}.
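A hedged sketch of such a request in Python: the endpoint path is taken from above, while the base URL, the authentication header and the exact body shape are assumptions; consult the Test infrastructure API documentation:

import requests

BASE_URL = "https://testguide.example.com"        # assumed server URL
AUTH_KEY = "YZiA5tf8ORBCbKq..."                   # authentication key of a suitable user
TRM_ID = "123e4567-e89b-12d3-a456-426614174000"   # TRM-ID (illustrative UUID)
ATTRIBUTE_ID = "42"                               # custom attribute ID (illustrative)

response = requests.post(
    f"{BASE_URL}/api/testinfrastructure/testresourcemachine/{TRM_ID}/customattribute/{ATTRIBUTE_ID}",
    data="null",                                  # the JSON value null "deletes" the entry
    headers={"Authorization": AUTH_KEY, "Content-Type": "application/json"},
    timeout=30,
)
response.raise_for_status()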

Monitoring data

To keep informed about states, activities and more details regarding a single test resource machine, monitoring data can be collected by a connected ResourceAdapter. Please refer to the chapters starting with How do I add a new test resource to my test.guide project? for setting up and using this feature.

Using test resources and test resource machines

Test resources overview

At the Test resource overview page, a list of all TRs and their aggregated state is displayed. On this page, TRs can be filtered based on different information, e.g. configuration items or attributes. Selection of TRs e.g. for comparisons is also possible here.

Filtering of test resources

The filtering of test resources is mainly based on the data that is attached to the underlying TRMs of a TR. Filtering is only possible by TR name, locations, configuration items and attributes. These filters are performed in multiple steps:

Filtering by test resource name

  1. Filtering of the Value

Filtering by locations:

  1. Filtering of the Value(s)

Filtering by configuration items:

  1. Filtering of the Key, which consists of Category and Label of the configuration item

  2. Filtering of the Value(s)

Filtering by attributes:

  1. Filtering of the Key, which is the name of the custom attribute

  2. Filtering of the Value(s), which is the corresponding custom attribute entry

To make filtering of complex expressions for configuration items easier, quick filters for the Category/Label level can be defined by a user with Monitoring management rights by clicking on the disk button in the filter area after entering a filter expression (see image below). After saving a quick filter, it can be used by any project user with the right to see the test resource list. Quick filters are only available for configuration items. Deleting quick filters also requires Monitoring management rights and can be done by clicking on the x at the beginning of the respective quick filter button.

[Screenshot: quick filters for configuration filtering]

If several values are available, more than one entry can be selected for the values. The selection of several values corresponds to an OR link in filtering. In the following example, all TRs are shown that have TRMs with the configuration item SOFTWARE/Notepad++ Team/Notepad++ with the value 6.7.7 or 6.9.2. This could be the case when the software scan plug-in of the ResourceAdapter detected an installed Notepad++ with one of the mentioned versions.

[Screenshot: OR-linked filtering of test resources]

If several keys are used for filtering, the individual elements are AND-linked on the test resource level. In the example shown below, all TRs are shown that have TRMs with the configuration item SOFTWARE/Notepad++ Team/Notepad++ with the value 6.9.2 and with the configuration item SOFTWARE/tracetronic GmbH, Dresden/ecu.test 2024.1 with the value *. This could mean, for example, that you are looking for a TR with a specific version of Notepad++ and any version of ecu.test 2024.1, e.g. ecu.test 2024.1.2.0.

[Screenshot: AND-linked filtering of test resources]

This applies to both two-step filters (custom attribute and configuration). Generally, each filter criterion must be fulfilled by at least one underlying TRM of a TR. It is not necessary that one or more underlying TRMs fulfill all filter criteria.

Wildcards such as * can be used. When using wildcards for both key and value within a filter, only TRs for which the underlying TRMs have values for the respective key are displayed. E.g. filtering attributes by the expression *:* will only return TRs with TRMs having any values (that are not equal to null) for any custom attribute.

Compare test resources by their configurations

To compare multiple test resources, you can select the desired TRs on the left-hand side and click Compare configurations. You can use the same features known from other comparisons in test.guide.

The comparison of test resources is based on configuration items of their associated TRMs. If more than one TRM has the same configuration item, only the latest value is displayed.

Find a suitable time slot of desired test resources

If you have found several suitable test resources for your purposes after filtering, you can view all TR calendars at the same time to quickly find a free time slot. Just mark the desired test resources in the TR overview and click on Booking view. In this view you see all calendars of the selected TRs one below the other. Creating new appointments here is also possible by selecting a suitable time in the calendar.

The monitoring module provides a live status and additional information for all connected test resources. The connection to test.guide is established by the ResourceAdapter application, which comes with test.guide and needs to be executed on the respective test resource machine. All provided information is gathered by plug-ins. Typical information includes a heartbeat (connection check), system information (CPU, RAM, HDD, …), execution states, test bench activity and the hardware and software configuration of the test resource machine and the system under test.

All types of test resources are supported. Thus, a test resource can be a huge Hardware-in-the-Loop (HiL) test bench, a node within a container-based cloud infrastructure used, for example, for Software- or Model-in-the-Loop (SiL/MiL) tests, or a developer’s local computer on which tests are also executed. Workstation computers that are used for test data pre- or post-processing (e.g. complex trace analysis) can also be seen as test resources.

All provided data from any connected test resource is stored in a separate database and can automatically be linked to test executions stored within the test report management of test.guide. This is helpful for comparisons of test executions and reviews of test reports. To use this feature, a test execution info plug-in has to be configured. Further information can be found below.

[Figure: monitoring overview]

Access to the monitoring features can be restricted by special user permissions, which can be assigned on a per-user basis via the authentication settings page. The Monitoring management permission allows a user to set up monitoring features.

2.4.1. Prerequisites for using the monitoring

First of all, the monitoring feature has to be enabled by a server manager by configuring a monitoring database. See also System configuration  Monitoring.

A retention period has to be set at the database configuration page. This means that all data is erased after the configured time period (the default setting is 31 days). Navigate to System configuration  Test infrastructure to set the retention period.

After activating the monitoring feature, test resources can be connected to test.guide.

2.4.2. How do I add a new test resource to my test.guide project?

To connect a test resource to test.guide, the ResourceAdapter must be running on the respective machine, e.g. a control computer of a HiL test bench. An authentication key of a technical user with appropriate permissions (see also Project settings  Project users) is necessary to operate the ResourceAdapter. There are different ways to set up the ResourceAdapter that are described below.

IMPORTANT SAFETY NOTICE

You are about to install the test.guide ResourceAdapter. When using test.guide test execution, the ResourceAdapter enables automated test execution on the respective test bench.

ATTENTION: The ResourceAdapter must be deactivated when performing manual testing or other manual activities on the test bench. Otherwise, automated testing may be started during such time (which may, for example, affect switches on a terminal automatically). Non-compliance with this safety notice may cause serious bodily harm and damage to property.

Please forward this important safety notice to your company’s responsible safety representative. Please also contact this safety representative for any further questions or concerns.

There are test resources that consist of more than one node/computer. In general, the ResourceAdapter should be executed at least on the node where the test automation is running. It is also possible to add more than one node for one test resource in order to monitor all nodes.

Way 1: Setup using a preconfigured package
  • Create a technical user and an authentication key for this user (Project management and Manage project user rights required)

    • Navigate to Project settings  Project users

    • Click on the dropdown next to Create new user located in the upper left corner and then Create technical user

    • Fill in at least the required fields

    • Assign the following permissions:

      • Synchronize ResourceAdapter configuration

      • Monitoring data: Read and Create/Update

    • Click on Add user

    • Find the newly created technical user in the list and click on the dropdown next to Edit and then Show authentication key

    • Click on Create key and follow the dialog to create a new authentication key which needs to be copied before closing the window again

  • Create a preconfigured ResourceAdapter package

    • Navigate to Project settings  ResourceAdapters

    • Click on Preconfigure ResourceAdapter package located in the upper right corner

    • Fill in the required fields using the authentication key you copied earlier

    • Choose a template for the new configuration

The first time you create a preconfigured package, you’ll have to choose "Example configuration". As soon as there is a ResourceAdapter connected, you can also choose the respective configurations as templates.
  • Download the preconfigured package and transfer it to the desired test resource

  • Extract the package

  • If necessary: Adapt the config file resourceAdapter.preconfigured.config

Some plug-ins are configured and activated by default (see also General information); others are not. So if you need a special plug-in configuration from the beginning, you have to adapt the config file before starting the ResourceAdapter. How to do so is explained here. Configuration changes are of course possible later on.
  • Run the included starter script (.bat for Windows, .sh for Linux)

Way 2: Manual setup
  • Download the ResourceAdapter

    • Navigate to Info → Monitoring

    • In the upper right corner you’ll find a button for downloading a zip archive

    • Download it, transfer it to the desired test resource and extract it

  • Adapt the config

    • Open the config file (resourceAdapter.example.config)

    • Configure at least all necessary parameters under "test.guide Host"

    • Fill in an authentication key of a technical user with appropriate permissions

    • Helpful information, e.g. the necessary authentication key, can be found at Info → Monitoring

    • After that you can configure all plug-ins you need

    • Save the config file

  • Run the ResourceAdapter

    • Start the application by running the included starter script (.bat for Windows, .sh for Linux)

General information
  • Check whether the connection has been established successfully

    • Navigate in test.guide to Monitoring → IT view

    • The previously connected test resource should now be listed in this view

  • In both packages, some plug-ins are configured and activated

  • To deactivate a plug-in just comment out the respective line by using # or !

  • After the first start, a workspace directory is created

    • If there are no changes in the starter script, the workspace directory is located in

      • %USERPROFILE%/TTS-TM-ResourcenAdapter for Windows

      • /home/<user>/TTS-TM-ResourcenAdapter for Linux

    • In the workspace directory you can find the resulting configuration file resourceAdapter.config

    • After the first start, local configuration changes have to be made in the resulting configuration file named above

    • Changes in the resourceAdapter.preconfigured.config or resourceAdapter.example.config will no longer be taken into account after the first start of the ResourceAdapter, as long as a workspace exists

    • Logs are also available in this directory

You can change the workspace directory by editing the starter script. Just set the parameter CONFIG_DIR to the desired path.

For technical reasons, the "AutoUpdate" functionality provided by the ResourceAdapter can only update the ResourceAdapter application itself and the included plug-ins. The necessary bootstrap.jar file and especially the included Java Runtime Environment (JRE) cannot be updated automatically.

Please refer to the ResourceAdapterUpdateReadme.txt in the extracted installation package or the ResourceAdapter installation folder for further details and manual update steps! If the effort for the manual update of the included JRE is too high, please use your own JRE.

2.4.3. How can I change existing ResourceAdapter configurations…

Configuration changes of a test resource that is already connected can be made both in the test.guide web UI and in the configuration file locally on the connected test resource. The configuration is key-value based. You can find all possible keys in the resourceAdapter.example.config file located in the downloaded .zip file.

Because there are two ways to edit the configuration, test.guide validates the configuration file using a hash. That is why all comments are removed from the resulting configuration file.
… via the test.guide web UI?
  • Navigate to Project settings → ResourceAdapters

  • Here you can configure one or more existing ResourceAdapter configurations

  • First of all, you need to select the desired configuration(s) in the section Configure ResourceAdapters

    • If you have a large amount of test resources connected, use the filter field to find the desired ResourceAdapters

    • Select all filter results or only a specific one

    • On the right side you see your selection

  • To edit the selected configurations, use the Edit configurations button

RA Config WebUI

  • The first column contains all existing keys, the concrete values of the respective configurations are listed in the following columns

  • Locate the key you want to change and edit the value for the desired ResourceAdapter in the matching column on the right side by clicking into the respective cell

  • Bulk changes are also possible by using the small pen-button on the right side within the key-cell

  • A restart of the affected ResourceAdapters will be performed automatically

Please be aware that the ResourceAdapters on changed test resources are restarted automatically after hitting Save configuration. If an automated test execution via the ResourceAdapter is currently running on an updated node, this might lead to problems.
In a bulk change only visible columns are edited. If you want to change only a few configurations, hide the columns you don’t want to change beforehand.
… by editing the configuration file locally?
  • To change the configuration of an existing test resource locally please edit the resourceAdapter.config within the workspace directory

  • A manual restart of the ResourceAdapter is required

2.4.4. How do I add/configure a new ResourceAdapter plug-in?

Every plug-in needs to be initialized and configured properly. The syntax is as follows:

plugin.[internal plug-in name].[instance].config.[parameter/property]

  • [internal plug-in name]: Internal name of the plug-in

  • [instance]: Unique number of the instance, because a plug-in can be instantiated multiple times

  • [parameter/property]: There are default parameters common to all plug-ins and plug-in-specific parameters, called properties

All possible parameters and properties can be found in the resourceAdapter.example.config.
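
For illustration, two instances of the same plug-in could be configured like in the following sketch. The HeartBeatPlugin polling key is taken from the web UI example later in this section; the polling values and the second instance are illustrative, and the real parameter set is documented in the resourceAdapter.example.config. The last line shows a plug-in-specific property, which uses the config.properties prefix (example from the Execution Plug-in):

plugin.de.tracetronic.ttstm.monitoring.plugin.heartbeat.HeartBeatPlugin.1.config.polling=10
plugin.de.tracetronic.ttstm.monitoring.plugin.heartbeat.HeartBeatPlugin.2.config.polling=30
plugin.de.tracetronic.ttstm.monitoring.plugin.execution.ExecutionPlugin.1.config.properties.dispatcher=generic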

To add a new plug-in, you just need to edit the configuration file of the desired ResourceAdapter as described above. Please refer to the resourceAdapter.example.config file, which can be found in the extracted installation package. It is recommended to simply copy and paste the respective lines and fill in the needed values.

In the web UI you can also add new rows by clicking on Add row. Make sure to insert the whole path of the desired key, e.g. plugin.de.tracetronic.ttstm.monitoring.plugin.heartbeat.HeartBeatPlugin.1.config.polling. After adding a key, you can add the value(s) as described above. If you add more than one key for a plug-in, test.guide will automatically group them.

2.4.5. How do I use test resource monitoring data?

All monitoring information can be found in the monitoring menu. There are different views for different use cases. In addition to the views, which all mainly focus on the test resources, monitoring details can also be linked to test results automatically. This is very helpful for evaluating test results.

How do I get a quick overview of all my test resources?

To get a quick overview of all connected test resources, navigate to Monitoring → IT view. Here you can find all connected test resources and their status information, e.g. vital data of a node such as CPU and RAM usage. Vital data is evaluated using thresholds; there are distinct thresholds for serious and critical values, which can be configured in the ResourceAdapter setup. The state of the entire node is derived from the worst single value state.
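
As an illustration only, threshold values for a monitoring plug-in could be set in the ResourceAdapter configuration along the following lines. The plug-in path placeholder and the property names are hypothetical; the real keys are listed in the resourceAdapter.example.config:

plugin.<internal plug-in name>.1.config.properties.thresholdYellow=80
plugin.<internal plug-in name>.1.config.properties.thresholdRed=95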

MonitoringItViewNoTestRun

All test resources are sorted by their location and the respective test resource name. The full location path is displayed at the top. By clicking on a lower location level (the row beneath), only test resources matching this location are displayed. The view of a test resource shows the health state (provided by the TestbenchHealth Plug-in) on the top left, the test resource name in the middle and the threshold-evaluated vital data of the test resource on the right. Below that, the current action is shown. As long as there is no active test execution, the lower line shows an icon on the right indicating whether there is manual user activity on the test resource.

The view of a test resource depends on the configured plug-ins. Hence, a test resource where no user activity plug-in is configured will not show the button for user activity.

MonitoringItViewTestRun

As soon as a test has started, various information about the current test run is displayed, for example the test project, test suite and test case name. The test execution status is also shown at the executing test resource. During a test run, the icon in the bottom line is hidden to make room for the elapsed time and the number of finished test cases.

By clicking on a test resource name you’ll get a detail view of the selected test resource with all information provided by the configured plug-ins.

How can I find the last test executions of my test project?

Use Monitoring → Project view to get an overview of all configured test projects and the assigned test resources. By selecting a project at the top of the list, you will get an overview of the last test executions of the respective project, arranged by the test resource on which they were executed.

To use this feature, a test execution info plug-in has to be configured accordingly. For ecu.test, for example, the parameter fixedProjectLabel has to be provided to map test resources (or more specifically: ResourceAdapter instances) to test projects. Another possibility is to set the parameter workspaceMappingLabels to map workspaces (by the name of their root folder) to test projects.
Can I find out what activities were ongoing on my test resources?

Yes, by using the Activity view. Navigate to Test infrastructure → Activity, where you can filter test resources first by their location and then by their name.

Because wildcard filtering is provided on both levels, location and test resource name, it is not possible to start filtering by the test resource name alone. If you don’t know the location of your test resource, use the wildcard * to see all available locations.

Beneath the test resource filter you can also use time filters to show only a desired period of time.

After filling in the filter parameters, click on Filter to see the filtered results. At the bottom you can now see the activity distribution.

In the top right corner of the distribution area of the activity view you can export the shown data to Excel. Exporting is also possible via the API. See Monitoring API Docs for further information.
How to set up (custom) test resource machine activities?
Please be aware that there is the TRM (test resource machine) activity (more details) on the one hand and the TRM state (more details) on the other. Activity refers to what is going on at the test resource machine. The state of a test resource machine contains information about the system health or parts of it.

There are five activity types configured by default:

  1. Offline

  2. Idle

  3. User activity

  4. CI activity

  5. Automated test execution

Every activity type has an ID, a name, a priority and a color. You can add additional activity types and give them a name, priority and color at Test infrastructure → Settings. The ID will be assigned automatically. It is also possible to change the attributes of existing activity types.

MonitoringActivityConfiguration

Deleting activity types is not possible because this could cause gaps within the activity distribution of test resources. Furthermore, it is not recommended to change the five default activity types shown in the picture above, because several mechanisms in test.guide assume their default settings.

Test resource activities are set by some plug-ins within the ResourceAdapter. For all plug-ins that set the test resource activity, the resourceAdapter.example.config contains a suggestion for the activity type ID. The suggested activity type ID can be changed in the ResourceAdapter configuration; look for the property testBenchActivityType within the plug-in configuration. The most powerful and flexible plug-in for setting test resource activities is the Test Bench Activity REST API Plug-in, which provides a REST API endpoint on the machine where the ResourceAdapter is running. Users can set any configured activity with simple POST requests and optionally provide additional information.
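
As a sketch only, setting an activity via the Test Bench Activity REST API Plug-in could look like the following. The port, the endpoint path, the payload field names and the activity type ID are assumptions for illustration; the actual interface is described in the resourceAdapter.example.config:

POST http://<test resource host>:<plug-in port>/activity
{
  "activityTypeId": 3,
  "comment": "manual debugging session at the test bench"
}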

For more information, please refer to the resourceAdapter.example.config file, which can be found in the extracted ResourceAdapter installation package.

Because an activity can be set from multiple sources, each activity type needs a priority. The higher the priority, the more important the activity: if activities overlap, only the one with the highest priority is displayed in the activity view.

How can I find out what test resource configuration was active during my test run?

test.guide can automatically link monitoring details, such as configuration information, to test reports uploaded to test.guide. For this to work, the ResourceAdapter has to be running during the test execution and a test execution info plug-in must be configured. For EXAM, this only works when the parameter createAtxTestReport is set to true in the EXAM test execution info plug-in.
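
In the ResourceAdapter configuration, this setting could look like the following sketch; the internal plug-in name is a placeholder here, the real key can be found in the resourceAdapter.example.config:

plugin.<internal EXAM test execution info plug-in name>.1.config.properties.createAtxTestReport=true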

In the standard ResourceAdapter configuration, the ecu.test test execution info plug-in is already activated.

After the activation you can find monitoring details at the bottom of the detail view of test reports. You can use the provided information, e.g. to compare test runs and find out whether a configuration change caused a change in the test result.

How to use the aggregated overall state of a test resource?
Please be aware that there is the TRM (test resource machine) activity (more details) on the one hand and the TRM state (more details) on the other. Activity refers to what is going on at the test resource machine. The state of a test resource machine contains information about the system health or parts of it.

The aggregated test resource state combines all state information of a test resource. The connection state (gathered by the Heart-Beat Plug-in), the monitoring state (gathered by different monitoring plug-ins, e.g. the CPU Load Plug-in) and the component state (provided by the TestbenchHealth Plug-in, the Jenkins Agent Plug-in and the Execution Plug-in) are taken into account for the aggregated state of a test resource.

Currently, all state information is used for display purposes only. Using state information to control workflows, e.g. in automated test execution, is planned for the future.

There are six states, listed below in ascending order of severity together with their meaning and possible reasons.

  • Unknown: the state could not be determined (e.g. some information is missing due to missing configuration of plug-ins)

  • OK: everything is ok (none of the configured plug-ins reported a problem)

  • Warning: a non-critical problem has occurred (a yellow threshold of a monitoring plug-in has been exceeded)

  • Not available: the test resource is not available for automated test execution (a manual interaction is in progress and/or an executor plug-in is in pause mode)

  • Critical: a critical problem has occurred (a problem was reported by at least one component state and/or a red threshold of a monitoring plug-in has been exceeded)

  • Offline: there is no connection between the respective ResourceAdapter and test.guide (no heartbeats are sent by the Heart-Beat Plug-in)

Details about the state-affecting values of the individual plug-ins can be found on the respective test resource detail page.

Mapping of single (component) states to the aggregated test resource state

Connection state (Heart-Beat Plug-in)

The connection between a ResourceAdapter and test.guide is a one-way polling connection. This means that the ResourceAdapter sends data to test.guide at cyclic intervals. To signal the availability of the test resource to test.guide, the ResourceAdapter sends so-called heartbeats with the aid of the Heart-Beat plug-in. If the heartbeats do not occur for a certain time, it is assumed that the respective ResourceAdapter no longer has a connection to test.guide and is therefore "offline". This can mean that there is a problem with the network connection or that the ResourceAdapter is simply not running on the respective host system.

The following list shows how the connection state (Heart-Beat Plug-in) is mapped to the aggregated test resource state:

  • No known connection state (the Heart-Beat Plug-in is not activated, i.e. no active configuration exists in the ResourceAdapter; not recommended!): Unknown

  • ResourceAdapter is online and connected to test.guide (the Heart-Beat Plug-in sends heartbeats): any other state not shown in this list

  • ResourceAdapter is offline and not connected to test.guide (no heartbeats are sent by the Heart-Beat Plug-in): Offline

Monitoring state

There are different plug-ins that gather system information and compare it with thresholds (green, yellow, red). The thresholds can be defined in the plug-in configurations. The following plug-ins are monitoring plug-ins:

  • CPU Load Plug-in

  • CPU Load Per Process Plug-in

  • RAM Usage Plug-in

  • RAM Usage Per Process Plug-in

  • HDD Usage Plug-in

  • Network Traffic Plug-in

  • Network Adapter Plug-in

In addition to this list, there is the test.guide Report Upload Plug-in, which also has built-in monitoring that can optionally be enabled.

The following list shows how the monitoring state is mapped to the aggregated test resource state:

  • Grey (occurs e.g. if a monitored process is not running, depending on the plug-in configuration): Unknown

  • Green (no yellow threshold is exceeded in the configured monitoring plug-ins): OK

  • Yellow (at least one yellow threshold is exceeded in the configured monitoring plug-ins): Warning

  • Red (at least one red threshold is exceeded in the configured monitoring plug-ins): Critical

Component state

Users can set custom component states by using the TestbenchHealth Plug-in. The main use case for this plug-in is reporting automatic healing processes at the test resources to test.guide. Further information about the plug-in can be found in the resourceAdapter.example.config.

The following list shows how TestbenchHealth Plug-in information is mapped to a component state and to the aggregated test resource state:

  • Default value: component state INSUFFICIENT_DATA, aggregated state Unknown

  • User set the component state to "fixed": component state INTERVENTION_SUCCESSFUL, aggregated state OK

  • User set the component state to "error", which could mean that an automatic healing is in progress: component state ACTION_IN_PROGRESS, aggregated state Not available

  • User set the component state to "broken", which could mean that all automatic healing attempts have failed and a manual interaction is now necessary to fix the problem: component state MANUAL_ACTION_NECESSARY, aggregated state Critical

In addition to the TestbenchHealth Plug-in, plug-ins used to make the respective test resource an executor also report information about their state to test.guide using the component state. These plug-ins are the Jenkins Agent Plug-in, the Execution Plug-in, and the FlowAutomation Plug-in.

The following list shows the plug-in information for each component state and the mapped aggregated test resource state:

  • Component state INSUFFICIENT_DATA, mapped to aggregated state Unknown

    • Jenkins Agent Plug-in: no connection to the Jenkins controller, with no additional information about the reason (e.g. during an initialization)

    • Execution Plug-in: the test resource is not ready for executing an execution task due to configuration restraints (e.g. ecu.test is running)

    • FlowAutomation Plug-in: -

  • Component state OK, mapped to aggregated state OK

    • Jenkins Agent Plug-in: the executor is ready to receive a job or a job is currently being executed

    • Execution Plug-in: the executor is ready to receive an execution task or an execution task is currently being executed

    • FlowAutomation Plug-in: the FlowAutomation Plug-in is ready to receive a flow task or a flow task is currently being executed

  • Component state ACTION_IN_PROGRESS, mapped to aggregated state Not available

    • Jenkins Agent Plug-in: the agent is being restarted (e.g. due to a restart of the Jenkins controller)

    • Execution Plug-in: -

    • FlowAutomation Plug-in: -

  • Component state OFFLINE (EXECUTOR_OFFLINE), mapped to aggregated state Not available

    • Jenkins Agent Plug-in: the agent (executor) is temporarily set offline by the user; the agent is still connected to the Jenkins controller

    • Execution Plug-in: the executor is manually set offline by the user

    • FlowAutomation Plug-in: -

  • Component state DISCONNECTED (EXECUTOR_DISCONNECTED), mapped to aggregated state Critical

    • Jenkins Agent Plug-in: the agent (executor) has been disconnected by the user

    • Execution Plug-in: the executor has no connection to test.guide

    • FlowAutomation Plug-in: -

  • Component state CRASHED, mapped to aggregated state Critical

    • Jenkins Agent Plug-in: connection to the Jenkins controller is not possible (e.g. due to a non-matching token)

    • Execution Plug-in: the Execution Plug-in has crashed

    • FlowAutomation Plug-in: the FlowAutomation Plug-in has crashed

  • Component state MANUAL_ACTION_NECESSARY, mapped to aggregated state Critical

    • Jenkins Agent Plug-in: there is a configuration error, the initialization was not successful or the Jenkins controller is offline

    • Execution Plug-in: -

    • FlowAutomation Plug-in: the ResourceAdapter user is missing the FLOW_TASK_EXECUTE permission in test.guide

The state of a single component is displayed by an icon in the panel "Component state" on the respective test resource detail page. The name of the state is shown on mouseover.

2.5. Using the test execution

Test execution enables distributed processing of test jobs on the test benches connected to test.guide. The so-called execution tasks can be prioritized and are sorted in a queue by priority first and creation date second. Scheduled tasks can be deleted, updated and reprioritized. Execution tasks are based on test bundles, which can be executed directly with ecu.test or via user-defined Jenkins jobs.

TestExecutionOverview

You can get a short introduction to the basic concepts of the automated test execution and playbooks with the following videos:

Click here for introduction videos.

There are also four advanced videos which explain the concepts and possibilities of the automated test execution with playbooks in more detail:

Click here for advanced videos.

2.5.1. Requirements

In any case, the Execution Plug-in in the ResourceAdapter has to be activated and configured properly. See ResourceAdapter plug-in configuration for more details and refer to the resourceAdapter.example.config file, which can be found in the extracted ResourceAdapter package.

The plug-in is not activated by default because the necessary parameter ecutest.executable depends on the installation path of ecu.test. Thus, the parameter cannot have a generic initial value.
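
A minimal activation could therefore look like the following sketch in the ResourceAdapter config. The key prefix follows the generic dispatcher example later in this document; the installation path is illustrative, and whether ecutest.executable sits below config.properties should be verified against the resourceAdapter.example.config:

plugin.de.tracetronic.ttstm.monitoring.plugin.execution.ExecutionPlugin.1.config.properties.ecutest.executable=C:\Program Files\ecu.test 2024.1\ECU-TEST.exe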

To use the test execution feature without Jenkins, a test bench needs to have ecu.test 2020.3 or newer installed together with the test.guide ResourceAdapter.

2.5.2. Execution tasks

What is a test bundle?

We distinguish between direct test execution with ecu.test, test execution via user-defined Jenkins jobs and generic task execution.

The direct test execution with ecu.test supports multiple types of test bundles:

  • single ecu.test packages

  • single ecu.test projects

  • whole ecu.test workspaces (zipped or as folder) and

  • archived ecu.test projects (in ecu.test → right-click on the project file → Export project… → as archive…)

Test bundles which are executed with user-defined Jenkins jobs can be designed freely. The job has to take care of the execution itself, including starting and stopping ecu.test (or another test tool) and uploading test reports to test.guide.

For the generic execution of tasks, an arbitrary workspace needs to be specified. Within this workspace, the specified commands are executed.

Test bundles can either be managed outside test.guide and provided by reference (see "location" and "scm" in the next section), or they can be uploaded to the artifact management and used from there (see "artifact" in the next section).

How do I create execution tasks via file upload?

Execution tasks can be created via "Upload execution tasks" under Test execution → Tasks. There you can upload a JSON file (*.json) to test.guide that describes the execution task. The file must at least contain the following information:

  • "name" - a freely selectable name of the execution task

  • "priority" - the priority of the task as a whole number between 0 (lowest) and 100 (highest)

  • "source" - the specification of the test bundle source

  • "xilConfigRequirements" - a list of all requirements that a test bench must fulfil to execute the task

The test bundle source has to be one of the following options:

  • location - defines a path to a test bundle that is accessible on the test bench and has the following JSON data:

    • "sourceType" - needs to be set to "location"

    • "path" - the path to the test bundle, either on a network or a local drive

  • artifact - defines the ID of the test bundle artifact in the test.guide artifact management and has the following JSON data:

    • "sourceType" - needs to be set to "artifact"

    • "artifactId" - the ID of the test bundle artifact in the format "<depository ID>-<number>"

  • scm - defines the path and version of a test bundle in a source control management tool and has the following JSON data:

    • "sourceType" - needs to be set to "scm"

    • "scmLabel" - label of the previously integrated SCM tool (if specified, "scmUrl" is not allowed!)

    • "scmUrl" - url of the previously integrated SCM tool (if specified, "scmLabel" is not allowed!)

    • "scmCommit" - if Git: a commit hash, branch name or tag; if SVN: a commit number or "HEAD" for the latest commit

If you use a Git repository as a source, the test bench must have Git version 2.10 or newer installed. Otherwise, a JGit client will be used as a fallback (not recommended for performance reasons). You can also explicitly configure the Git client to be used in the resourceAdapter.config file: plugin.de.tracetronic.ttstm.monitoring.plugin.execution.ExecutionPlugin.1.config.properties.gitClient=git or =jgit.
SCM clone and checkout operations are retried several times before an error is passed to test.guide. This is done to bypass sporadic errors at the test bench.

The scheme of a test bundle’s requirement corresponds to that of a configuration property of a test bench from the Monitoring section. A requirement is therefore a triple with:

  • "label" - the identifier of the configuration property

  • "category" - the category of the configuration property

  • "value" - the value to be matched of the configuration property

Optionally, there can also be details:

  • "details" - the list of details to be checked against the configuration property (a detail again consists of "label", "category" and "value")

In all of these fields you can use the wildcards * (matching multiple characters) and ? (matching exactly one character).
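
Putting the pieces together, a minimal execution task file could look like the following sketch; the name, path and requirement values are illustrative:

{
  "name": "Nightly smoke test",
  "priority": 50,
  "source": {
    "sourceType": "location",
    "path": "//fileserver/bundles/SmokeTest.prz"
  },
  "xilConfigRequirements": [
    {
      "category": "XIL",
      "label": "Type",
      "value": "HiL*"
    }
  ]
}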

You can find additional setting options, such as workspace dependencies, user-defined parameters or playbooks, in the Execution API Docs.

After pressing the "Create execution task" button, an execution task is created and queued according to its priority.

How do I create execution tasks for delayed or recurring execution?

When uploading execution tasks via "Upload execution tasks" under Test execution → Tasks, the type of the tasks can also be specified. In addition to immediate one-time execution, a task may be scheduled as delayed one-time execution or recurring execution.

If you select the delayed one-time execution, a project job will be created which generates one execution task at a certain time. You have to specify the time of the execution task creation.

If you select the recurring execution, a project job will be created which generates new execution tasks at times that match a certain cron expression in the UNIX format. Additionally, you may specify a time of first execution of this recurring task. However, note that this may not be the actual first time the task is created: the first execution time will be moved to the first timestamp after the specified time that matches the cron expression. If Check last execution is enabled, the recurring job checks whether the execution tasks it created last are already finished; if they are not, no new execution tasks are created.
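
For example, the following standard UNIX cron expression (fields: minute, hour, day of month, month, day of week) would create new execution tasks every weekday at 02:00:

0 2 * * 1-5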

You can find and manage all project jobs under User Profile → Manage jobs.

Can I also assign execution tasks directly to test benches?

Yes, this is possible. If the test.guide ResourceAdapter of a test bench has been correctly configured for test execution, the test bench automatically gets the following configuration property:

  • Category: XIL

  • Label: Identifier

  • Value: resource location ID of the test bench

You can reference this configuration in the execution task XIL configuration requirements:

"xilConfigRequirements": [{
    "category": "XIL",
    "label": "Identifier",
    "value": "<resource location ID of the test bench>"
}]
How do I create execution tasks through artifacts?

When using the test execution with ecu.test, it is possible to create execution tasks through test bundle sources uploaded to the artifact management. All uploaded artifacts that have the attribute "ArtifactType" set to one of the following values can be used for creating execution tasks:

  • "Workspace" - zipped ecu.test workspace or

  • "ProjectArchive" - archived ecu.test project

To assign the artifact to a certain test bench (or a group of test benches), an additional attribute "RequiredXilType" can be added. In this case, the test bundle will only be executed on test benches with the following configuration property:

  • Category: XIL

  • Label: Type

  • Value: value of attribute RequiredXilType

You can reference this configuration in the execution task XIL configuration requirements:

"xilConfigRequirements": [{
    "category": "XIL",
    "label": "Type",
    "value": "<value of RequiredXilType>"
}]

Creating an execution task through an artifact can either be triggered by clicking Rerun as execution task on an artifact’s details page or by clicking the corresponding menu entry on the page Artifact Management → Artifacts.

Doing so brings up a new window. After pressing the "Create" button, an execution task is created and queued according to its priority. Furthermore, a release coverage will be created for each of these execution tasks.

User-defined parameters can be used to filter your execution tasks on the Test execution → Tasks page. They are also available during the execution. See also task parameters in Jenkins pipelines, task parameters in ecu.test and task parameters during playbook execution.
How can execution tasks reference artifacts that are not part of the workspace?

Execution tasks can contain a "dependencies" section. Workspace dependencies reference sources that are not part of the workspace source itself but are required for the execution.

As a source for a workspace dependency you can provide an artifact in the test.guide artifact management, an SCM repository or an HTTP link to a file (e.g. in the test.guide file storage). You can specify whether the artifact shall be additionally copied to a local path on the test bench or whether it shall be unzipped (if the dependency references a zipped folder).

In the JSON description, dependencies could look like the following:

"dependencies": [
    {
        "label": "A2L",
        "source": {
            "sourceType": "artifact",
            "artifactId": "mydepot-123"
        }
    },
    {
        "label": "TRACES",
        "source": {
            "sourceType": "link",
            "link": "http://link/in/file/storage"
        },
        "unzip": true,
        "copyTo": "local/file/path"
    }
]

The dependency source has to be one of the following options:

  • Artifact - defines the ID of an artifact in the test.guide artifact management and has the following JSON data:

    • "sourceType" - needs to be set to "artifact"

    • "artifactId" - the ID of the artifact in the format "<depository ID>-<number>"

  • SCM - defines the path and version of a repository in a source control management tool and has the following JSON data:

    • "sourceType" - needs to be set to "scm"

    • "scmLabel" - label of the previously integrated SCM tool

    • "scmCommit" - if Git: a commit hash, branch name or tag; if SVN: a commit number or "HEAD" for the latest commit

  • Link - defines the HTTP link to a file (e.g. in the test.guide file storage) and has the following JSON data:

    • "sourceType" - needs to be set to "link"

    • "link" - a hyperlink to the referenced resource, e.g. https://example.com/resource.zip

If you want to use the same SCM repository with different commit hashes, branch names or tags in different dependencies, you need to use the copyTo option in each dependency that uses this SCM repository. Furthermore, it is not allowed to specify an SCM repository in the dependencies that is already used as the source SCM of the task or playbook.

Workspace dependency labels can be referenced with the notation ${LABEL} in some places of a playbook. See variables replacement in playbooks for more information. At the beginning of an execution, the referenced labels will be replaced with the actual absolute paths of the workspace dependencies.
If you use the execution with Jenkins together with workspace dependencies, the dependencies will be available as build parameters in the Jenkins job. The dependency label will be given as parameter name and the path of the provided dependency will be given as parameter value.

You can find a more detailed description of the workspace dependencies schema in the Execution API Docs.

Where can I see, edit and delete execution tasks?

The page under Test execution → Tasks provides an overview of all created execution tasks.

Each task can be downloaded, deleted or rescheduled. Furthermore, not yet started execution tasks can be edited or re-prioritized. Finished execution tasks provide a link to their test results.

It is also possible to filter execution tasks by their properties using the filter form.

The filter connects multiple entries for the same property using the logical OR operation while entries in different properties are connected using the logical AND operation.

In addition to the user interface on the page, a filter may be provided using URL page parameters. The following list gives an overview of the filterable execution task properties (as in JSON) and their respective parameter keys that can be used in the page parameters.

  • name: taskName

  • matchingXils: matchingXil

  • state: state (provide the state in capital letters, e.g. BLOCKED, WAITING, …)

  • taskId: taskId

  • uploadUser.userName: uploadUser

  • source.artifactId: artifactId (only for execution tasks with source type "artifact")

  • source.scmLabel: scmTool (only for execution tasks with source type "scm")

  • source.scmCommit: scmReference (only for execution tasks with source type "scm")

  • source.path: location (only for execution tasks with source type "location")

  • playbook.playbookId: playbookId

  • parameters: parameter (provide the key and value you want to filter by using the [key]value syntax)

  • attributes: attribute (provide the key and value you want to filter by using the [key]value syntax)

  • priority: priorityMin and priorityMax (provide the lower bound (priorityMin) and/or the upper bound (priorityMax) for the priority)

  • creationDate: createdAfter and createdBefore (provide the lower bound (createdAfter) and/or the upper bound (createdBefore) for the creation date; the provided date expression will be evaluated using the timezone of the browser). Examples (make sure to URL encode the expression):

  • now + 2d (current timestamp +2 days)

  • now - 7y (current timestamp -7 years)

  • today 19:00 (current day 19:00)

  • today 19:00 - 1d (yesterday 19:00)

  • 23.04.2012

  • 23.04.2012 16:55

  • 23.04.2012 04:55 PM (12-hour-system)

  • 04/23/2012 16:55 (month/day/year format)

  • 23.04.2012 16:55 - 7h (-7 hours)

  • 23.04.2012 16:55 + 7min (+7 minutes)

  • 23.04.2012 16:55 - 2d (-2 days)

  • 23.04.2012 16:55 - 2w (-2 weeks)

  • 23.04.2012 16:55 - 2m (-2 months)

  • 23.04.2012 16:55 - 2y (-2 years)

Example: Assume you want to filter for all execution tasks that have the name "My task" and the state "finished". The according URL looks like the following:

<test-guide-base-url>/execution/overview?taskName=My+task&state=FINISHED

If you want to provide multiple entries for the same property, just specify the page parameter multiple times:

<test-guide-base-url>/execution/overview?taskName=My+task&taskName=Your+task

Using this query, the page will show all execution tasks that have the name "My task" or "Your task".

You may need to properly escape special characters in URLs. For example, spaces may be inserted using + or %20, while slashes (e.g. in order to filter on the matchingXils property) can be escaped by %2F.
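
As an illustrative example of the [key]value syntax from the list above, the following URL filters for tasks with a user-defined parameter branch set to main (the square brackets are URL-encoded as %5B and %5D):

<test-guide-base-url>/execution/overview?parameter=%5Bbranch%5Dmain
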
What do the different task states mean?

After creating an execution task, all currently suitable test benches will be calculated based on the task’s XIL configuration requirements, and a corresponding state will be set. The following states are possible:

  • blocked: There is currently no test bench that meets the requirements.

  • waiting: There are one or more suitable test benches, but they are currently occupied.

  • running: The task is currently being executed on a suitable test bench.

  • finished: The execution of the task on a suitable test bench has successfully completed.

  • failed: An error has occurred while executing the task on a test bench.

  • unknown: The current state of the task could not be determined.

If the configuration of a test bench changes, the states and suitable test benches of all execution tasks are recalculated automatically.

Please note the mandatory order of events: Set test bench offline → Change test bench configuration → Set test bench online.
How can I abort an execution task?

There are two different ways to abort an execution task.

  • Via the context menu of the ResourceAdapter.

During the execution of an execution task, the ResourceAdapter offers a context menu entry to abort the execution task. Using the respective button leads to an immediate abort of the task: the currently running test step will be aborted, no teardown will be executed and ecu.test will not be shut down. If the execution plug-in of the ResourceAdapter uses an automation strategy that is not blocked by a running ecu.test, it might be helpful to first take the execution plug-in offline and then abort the execution task.

  • Via test.guide.

On the overview page of execution tasks as well as on the details page of an execution task there is an action that aborts the task. If the task is not yet running, the task is immediately set to the error state and assigned an error message. If the task is currently being executed on a test bench, the task is marked accordingly and will be aborted soon. Aborting a task via test.guide will abort the current test case and continue with the teardown block. If the teardown block is already executing, the abort request will be ignored and the task will be finished normally.

Both abort strategies only affect the execution if the dispatcher used by the execution plug-in is the ecu.test dispatcher or the generic dispatcher. Aborting tasks that use the Jenkins dispatcher is not yet supported.

Can I set an expiration date for an execution task?

In the additionalSettings section of an execution task, it is possible to specify the expirationTimeInMinutes field. If it is not specified, the execution task will wait indefinitely until its execution is finally started on a matching test bench. If the expiration time is set and the task has not been started within the specified time, the execution task will be set to the error state.

Example of an execution task with an expiration time of 6 hours:

{
  "name" : "My Task",
  ...
  "additionalSettings" : {
    "expirationTimeInMinutes" : 360,
    ...
  }
}
Can I also create, delete, and edit execution tasks programmatically?

In addition to the control via the graphical user interface, a REST API is provided for all important functionalities of the execution distribution. The documentation of the API and examples for the JSON files described above can be found at the following link: Execution API Docs.

2.5.3. Execution with ecu.test

How does the task execution work directly with ecu.test?

You can execute your execution tasks directly with ecu.test. The ResourceAdapter then executes the task on the test bench using the configured ecu.test and automatically uploads the test results to test.guide.

All user-defined parameters of the execution task are available as constants in ecu.test during the execution. See details about constants in ecu.test during execution for more information.
How should I configure the ResourceAdapter for the execution with ecu.test?

The test bench needs to have ecu.test installed and the test.guide ResourceAdapter started. An ecu.test process must not be running; otherwise the test bench will be considered busy and will not accept new tasks. The following ResourceAdapter plug-ins have to be configured:

  • Execution-Plugin

  • ecu.test-TestExecutionInfo-Plugin

Example configurations for each plugin can be found in ResourceAdapter example config.

How can I set up the ResourceAdapter to use multiple versions of ecu.test?

As stated in the requirements for the test execution, it is necessary to configure a path to an ecu.test executable using the ecutest.executable property of the Execution Plug-in.

If there are multiple versions of ecu.test installed on the test bench you can configure these versions in the ResourceAdapter config using labels (for example ecutest.executable.<myLabel> = path/to/executable). See the ResourceAdapter example config for an explicit example.
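
Based on the pattern above, configuring two ecu.test installations could look like the following sketch; the label name and the installation paths are illustrative, and the full key prefix is shown in the ResourceAdapter example config:

ecutest.executable=C:\Program Files\ecu.test 2023.4\ECU-TEST.exe
ecutest.executable.ET2024=C:\Program Files\ecu.test 2024.1\ECU-TEST.exe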

For each label configured in this way, the following configuration property is sent to test.guide:

  • Category: XIL/ecu.test

  • Label: label of the ecu.test installation (<myLabel>)

  • Value: path to the ecu.test executable (path/to/executable)

If you want to use one specific configured ecu.test version for task execution, you can add an entry to the XIL configuration requirements of the execution task (or playbook). You can use the * wildcard for the "value" field, as the ecu.test executable may be installed in different locations on different test benches:

"xilConfigRequirements": [{
    "category": "XIL/ecu.test",
    "label": "<myLabel>",
    "value": "*"
}]
At most one XIL configuration requirement with "category": "XIL/ecu.test" per execution task (or playbook) is allowed. If none is specified, the default executable (given at ecutest.executable in the ResourceAdapter config) will be used.

2.5.4. Execution with Jenkins

How does the task execution work with Jenkins?

The execution via Jenkins is based on Jenkins jobs, which have to be preconfigured by the user. The connection to Jenkins can be configured in the ResourceAdapter of the respective test benches (see the sample configuration of the ResourceAdapter). Each execution task is linked to an executable test bundle, which is located on a network drive, in the test.guide artifact management or in a source control management tool. The design of the test bundles is therefore completely arbitrary and depends only on their executability in the job. The following three parameters are passed to Jenkins job builds:

  • "taskId" - the ID of the corresponding execution task in test.guide

  • "location" - the path to the test bundle located on a network drive or path to the downloaded artifact

  • "agentName" - the name of the Jenkins agent running on the same PC as the accessed ResourceAdapter

Additionally, all workspace dependencies and all user-defined parameters of the execution task will be provided as build parameters to the Jenkins job. User-defined parameters overwrite workspace dependencies and workspace dependencies overwrite "taskId", "location" and "agentName".
If you want to upload test reports to test.guide, it is important to set the additional constant TT_TASK_ID to the value of the "taskId" parameter! Otherwise, the result link in the execution task overview will not show any test results.
How should I configure the ResourceAdapter for the execution with Jenkins?

The test bench needs to have the test.guide ResourceAdapter started. If you use ecu.test to run the test bundles in your Jenkins job, no other ecu.test instance may be running there; otherwise the test bench will be considered busy and will not accept new tasks. The following ResourceAdapter plug-ins have to be configured:

  • Execution-Plugin

  • Jenkins-Agent-Plugin

Example configurations for each plugin can be found in ResourceAdapter example config.

2.5.5. Execution of generic tasks

How does the task execution work for generic tasks?

The ResourceAdapter allows the execution of arbitrary commands on the test bench. To this end, the generic dispatcher has to be configured in the ResourceAdapter configuration.

The execution task that shall be executed on the test bench needs to have a playbook that only contains steps of type "command". Steps of this type contain a list of strings that represent the command to be executed. Furthermore, additional environment variables can be specified.

Example: the following command step will output the value of the additional environment variable NEW_ENV through a Python script:

{
  "type": "command",
  "command": [
    "python",
    "-c",
    "import os; print(os.environ['NEW_ENV'])"
  ],
  "environmentVariables": [
    {
      "label": "NEW_ENV",
      "value": "Hello World"
    }
  ]
}

For more information on the command playbook step, see the Execution API Docs.

If you want to create a report for your command step, you can create a report as a ZIP in JSON2ATX format. The path where the ResourceAdapter will search for a report is provided via the environment variable TT_TASK_REPORT_DESTINATION. If the ResourceAdapter finds a report at the specified path, it will automatically upload the report to test.guide for you. See here for more information about the JSON2ATX format.

The command to be executed will not be run in a command line tool like CMD, PowerShell or bash. You may execute your command inside a command line tool by specifying it in the command step, for example, if you want to run a batch script: cmd /C .\Scripts\myScript.bat parameter. Note that you may have to address the batch script using backslashes on Windows.
How should I configure the ResourceAdapter for the Execution of generic tasks?

The test bench needs to have the test.guide ResourceAdapter started. The following ResourceAdapter plug-ins have to be configured:

  • Execution-Plugin

In order to use the generic task execution, the generic dispatcher has to be activated:

plugin.de.tracetronic.ttstm.monitoring.plugin.execution.ExecutionPlugin.1.config.properties.dispatcher=generic

More information on the configuration of the Execution-Plugin can be found in the ResourceAdapter example config.

2.5.6. Playbooks

What is a playbook?

A playbook describes a plan for a test execution. It references what shall be tested (the workspace and additional data), what shall be executed (setup steps, test cases and teardown steps) and where it shall be executed (XIL requirements).

For the actual execution you can create one or more execution tasks from a playbook. These tasks can then run independently on different test benches. This way you can parallelize your test executions.

Playbooks can only be executed directly with ecu.test or via the generic test execution, not with user-defined Jenkins jobs. For the generic test execution, only "command" playbook steps are allowed within the playbook.
How can I create a playbook?

Playbooks can be created via "Upload playbook" under Test execution → Playbooks. There you can upload a JSON file (*.json) to test.guide that describes the playbook. The file must at least contain the following information:

  • "metadata" - metadata that describe the playbook

    • "name" - the name of the playbook

  • "workspace" - all sources that are required to run the tests

  • "execution" - a container for the test cases

    • "testcases" - a list of the test cases that will be executed

An example playbook could look like this:

{
    "metadata": {
        "name": "MyPlaybook"
    },
    "workspace": {
        "source": {
            "sourceType": "scm",
            "scmType": "git",
            "scmUrl": "https://my/repo.git",
            "scmCommit": "master",
            "relativeWorkspacePath": "path/to/workspace"
        },
        "dependencies": [
            {
                "label": "A2L",
                "source": {
                    "sourceType": "artifact",
                    "artifactId": "mydepot-123"
                }
            }
        ]
    },
    "setup" : {
        "steps" : [
            {
                "type" : "ecuTestStart"
            }
        ]
    },
    "execution": {
        "testcases": [
            {
                "type": "command",
                "command": ["python", "my_tests.py"]
            },
            {
                "type": "project",
                "relativePath": "Packages/MyProject.prj",
                "constants": [
                    {
                        "label": "a2lPath",
                        "value": "'${A2L}'"
                    }
                ],
                "timeout": 10
            },
            {
                "type": "package",
                "relativePath": "Packages/MyPackage.pkg",
                "packageParameters": [
                    {
                        "label": "startingSpeed",
                        "value": "42.0"
                    }
                ]
            }
        ]
    },
    "teardown" : {
        "steps" : [
            {
                "type" : "ecuTestStop"
            }
        ]
    },
    "xilConfigRequirements": [
        {
            "label": "Identifier",
            "category": "Type",
            "value": "FAS"
        }
    ]
}

You can find a more detailed description of the playbook schema in the Execution API Docs.

After uploading, each playbook gets a revision number. If you upload a playbook with a specific name for the first time, the playbook will get the revision number 1. If you upload a playbook with an already existing name, the playbook will get the next higher revision number. This way, different versions of a playbook can be tracked, and you can see the whole history of your playbook in test.guide.

You can also upload playbooks directly from ecu.test via the playbook export (see ecu.test documentation).

Another possibility is to create playbooks via the test.guide user interface. Under Test execution → Playbooks there is a button called Add new playbook. You then have to provide the workspace source of the newly created playbook. After submitting the workspace source, you will find yourself in the playbook editor with an empty playbook.

How can I prepare and clean up the environment for the test cases?

In addition to test cases ("execution"-section), playbooks can also contain a "setup" and a "teardown" section. Both contain steps for preparing or cleaning up the environment that is required for executing the test cases.

In the JSON description, the setup and teardown steps look quite similar to the test cases. A step type that exists only in the setup and teardown sections is the configuration step. This step contains relative paths to *.tbc and *.tcf files which should be loaded before the next package or project step. The keyword "KEEP" ensures that the previously loaded configuration is retained.
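
A configuration step could look like the following sketch; the field names tbcPath and tcfPath are assumptions based on the description above, so see the Execution API Docs for the exact schema. Here an illustrative test bench configuration is loaded while the previously loaded test configuration is retained via "KEEP":

{
  "type": "configuration",
  "tbcPath": "Configurations/MyBench.tbc",
  "tcfPath": "KEEP"
}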

Starting and stopping ecu.test can be done explicitly by placing ecu.test start and stop steps anywhere in the setup or teardown section. If no start step is defined in setup or teardown, ecu.test will be started by the ResourceAdapter right before the setup section begins. Accordingly, an ecu.test stop step will be automatically placed right after the teardown section if no such step has been defined explicitly. Therefore, it is possible to execute one or more command steps before starting or after shutting down ecu.test.

An example of using ecu.test start and stop steps could look like the following:

"setup": {
    "steps": [
        {
            "command": [
                "./scripts/someSetupScript.sh"
            ],
            "type": "command"
        },
        {
            "type": "ecuTestStart",
            "reuse": "never"
        }
    ]
},
"execution": {
    "testcases": [
        {
            "relativePath": "Packages/Testpackage.pkg",
            "type": "package"
        }
    ]
},
"teardown": {
    "steps": [
        {
            "type": "ecuTestStop"
        },
        {
            "command": [
                "./scripts/someTeardownScript.sh"
            ],
            "type": "command"
        }
    ]
}

You can also configure the ecu.test start steps to reuse a running ecu.test instance instead of restarting it. For this purpose, the field "reuse" must be set to the value "always". In that case, ecu.test will also not be closed at the end of the current execution, so that it can be reused for the following execution task (unless ecu.test stop steps are used). This way you can save the time for starting and stopping ecu.test over multiple playbook executions.

You can find a more detailed description of the playbook schema in the Execution API Docs.

How do I find my playbooks in test.guide?

You can find an overview of the latest revisions of all uploaded playbooks under Test execution  Playbooks. Click on the details button next to a playbook to view the playbook details and switch to older revisions.

In both views you can also download the originally uploaded playbook via the "Download playbook" button. The prerequisite for this is that a playbook archive has been configured under Test execution → Settings.

How can I create execution tasks from a playbook?

After pressing the play button next to a playbook under Test execution → Playbooks, a new window will open that shows some settings for creating execution tasks from the playbook. Here you can adjust the following information:

  • the name and priority of the created execution tasks

  • advanced settings for execution

    • the strategy of distributing the playbook to multiple execution tasks

      • If you choose "distribution by fixed number of partitions", you also need to specify how many tasks the playbook should be distributed to.

    • the execution mode of trace analyses attached to the test cases

    • user-defined parameters for the created execution tasks

    • Frequency of executions

      • As for execution tasks, the execution of playbooks can be scheduled for delayed or recurring execution.

      • As with execution tasks, enabling Check last execution causes the recurring job to check whether the execution tasks it created last are already terminated; if they are not, no new execution tasks are created.

The delayed or recurring execution will always execute the latest persisted version of the corresponding playbook. Thus, even if the execution has been created through the REST API providing a concrete playbook ID, a newer version of the playbook might be executed later on. Therefore, the option Frequency of executions will not be available if there are any unsaved changes on the playbook details page.
All user-defined parameters of the execution task are available as constants in ecu.test steps and as environment variables in command steps.

After pressing the "Create" button, one or multiple execution tasks are created and queued according to their priority. Depending on the chosen distribution strategy, the created execution tasks have different contents:

  • No distribution: Only one execution task is created. The source description, the test cases, the setup and teardown steps and the test resource requirements are taken over to the execution task.

  • Distribution by fixed number of partitions: The source description, the setup and teardown steps and the test resource requirements are taken over to the execution tasks. The test cases will be distributed over the created execution tasks. This way you can parallelize your test executions.

  • Distribution on all currently matching test resources: The source description, the test cases and the setup and teardown steps are taken over to the execution tasks. The currently matching test benches are determined. The test resource requirements are extended such that every matching test bench executes exactly one execution task. This way you can clone your test executions to all suitable test benches.

Is it possible to have default parameters for the creation of execution tasks from playbooks?

In the playbook editor as well as via the test.guide REST API it is possible to define playbook parameters. Playbook parameters have a name, which corresponds to the name of the parameter in the created execution task. Furthermore, a default value can be set and parameters can be marked as required. In addition, a description can be stored.

If a default value is provided by the playbook parameter, you may override it by providing another value while creating the execution task from the playbook. Parameters containing a default value cannot be deleted and will always have at least their default value assigned to them after creating the execution task. If a parameter is marked as required (and no default value is set), an execution task can only be created if a parameter with this name is provided (see How to create execution tasks from playbooks for more information).
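
A sketch of what such a parameter definition could look like. All key names here are hypothetical stand-ins for the properties described above (name, default value, required flag, description); the actual schema is defined in the Execution API Docs:

"parameters": [{                // hypothetical section and key names
    "name": "TEST-BENCH",
    "defaultValue": "bench-01",
    "required": false,
    "description": "Identifier of the target test bench"
}]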

Can I change test resource requirements dynamically when creating execution tasks from a playbook?

It is possible to use variables of the form ${VAR} in the xilConfigRequirements section of a playbook.

Example:

"xilConfigRequirements": [{
    "category": "XIL",
    "label": "Identifier",
    "value": "${TEST-BENCH}"
}]

When creating execution tasks from the playbook you have to provide a value for each used variable as a parameter of the playbook execution (a parameter named TEST-BENCH in the above example). In the resulting execution task, the variables will be replaced by the corresponding parameter values. The creation of the execution task will fail if you use a variable without providing a corresponding parameter. However, you can escape a variable that should not be replaced by using the form $${VAR}. Variables may be used in every field of a test resource requirement and even in the details of the requirements.
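
For illustration, the following variant of the example above combines a substituted and an escaped value (the second requirement with the "Comment" label is purely illustrative). Assuming the playbook is executed with the parameter TEST-BENCH set to "bench-01", the first value resolves to "bench-01", while the escaped second value is not treated as a variable:

"xilConfigRequirements": [{
    "category": "XIL",
    "label": "Identifier",
    "value": "${TEST-BENCH}"
}, {
    "category": "XIL",
    "label": "Comment",
    "value": "$${KEEP-ME}"
}]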

For more information on test resource requirements, see here.

Which constants can be used in ecu.test during execution?

The test execution provides ecu.test with some special global constants that can be used during package or project execution. These are at least:

  • TT_TASK_ID: the ID of the execution task

  • all user-defined parameters of the execution task

If the execution task was created from a playbook, the following global constants are additionally provided:

  • TT_PLAYBOOK_ID: the ID of the playbook

  • TT_PLAYBOOK_SECTION: the playbook section the step belongs to, with the value SETUP, EXECUTION or TEARDOWN

  • TT_PLAYBOOK_RUN_ID: the ID of the playbook run

  • TT_PLAYBOOK_STEP_ID: the ID of the playbook step to be executed

If the execution task's source is an artifact from the test.guide artifact management, the following global constants are also provided:

  • the ID of the artifact with the key TT_ARTIFACT_ID

  • the download URL to the artifact with the key TT_ARTIFACT_URL

Additional global constants for package and project steps can be defined at the playbook step definition itself with the keyword "constants" (see the Execution API Docs).
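
A sketch of a package step defining an additional global constant via the "constants" keyword. The surrounding step structure ("testcases", "path") is illustrative and the exact step schema is described in the Execution API Docs; note that the constant value is written as a Python string expression with inner quotation marks, following the variable substitution rules below:

"testcases": [{                 // illustrative step structure
    "path": "Packages/MyTest.pkg",
    "constants": {
        "MY_CONSTANT": "'someValue'"
    }
}]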

This example shows how to use global constants in ecu.test: Example.pkg. For more information, please see the ecu.test documentation.

Which environment variables can be used in script executions?

In a playbook you can use command steps for executing scripts (see generic dispatcher). The test execution provides the following environment variables in all script executions:

  • LOCATION: location part of the resource location ID

  • TESTBENCH: test bench part of the resource location ID

  • MACHINE: machine part of the resource location ID

  • TEST_GUIDE_URL: URL of the test.guide server defined in ResourceAdapter config

  • TEST_GUIDE_PROJECT_ID: ID of the test.guide project defined in ResourceAdapter config

  • TEST_GUIDE_AUTH_KEY: the authentication key of the ResourceAdapter for the access to test.guide defined in ResourceAdapter config

  • SOURCE_PATH: absolute path to the source on the local machine

  • WORKSPACE_PATH: absolute path to the workspace on the local machine (equals the path to the source on the local machine concatenated with the relative workspace path specified in the task)

  • ECUTEST_EXECUTABLE: absolute path to the ecu.test executable which is used for the test execution (if existing)

  • all key-value pairs which are also provided as global constants in ecu.test (see Which constants can be used in ecu.test during execution?)

Additional environment variables for command steps can be defined at the playbook step definition itself with the keyword "environmentVariables" (see the Execution API Docs).

Environment variables can be used in Batch-Files with %VAR% and in Shell-Files with $VAR.
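
A sketch of a command step defining an extra environment variable. Only the "environmentVariables" keyword is taken from the Execution API Docs reference above; the remaining step fields and the script path are illustrative:

"execution": [{                 // illustrative step structure
    "command": "scripts/run_smoke_test.bat",
    "environmentVariables": {
        "MY_VAR": "42"
    }
}]

Inside the invoked batch file, MY_VAR would then be read as %MY_VAR%; in a shell file, as $MY_VAR.
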
Which variables are automatically replaced in a playbook at the beginning of each execution?

You can use some special variables as placeholders in playbooks. At the beginning of an execution via the ResourceAdapter, these variables are automatically replaced by their resolved values.

The test execution provides and replaces the following variables:

  • LOCATION: location part of the resource location ID

  • TESTBENCH: test bench part of the resource location ID

  • MACHINE: machine part of the resource location ID

  • TEST_GUIDE_URL: URL of the test.guide server defined in ResourceAdapter config

  • TEST_GUIDE_PROJECT_ID: ID of the test.guide project defined in ResourceAdapter config

  • TEST_GUIDE_AUTH_KEY: authentication key of the ResourceAdapter for the access to test.guide defined in ResourceAdapter config

  • all defined workspace dependencies labels (see workspace dependencies of execution tasks)

  • SOURCE_PATH: absolute path to the source on the local machine

  • WORKSPACE_PATH: absolute path to the workspace on the local machine (equals the path to the source on the local machine concatenated with the relative workspace path specified in the task)

  • ECUTEST_EXECUTABLE: absolute path to the ecu.test executable which is used for the test execution (if existing)

  • all defined system environment variables

  • all labels becoming global constants (see Which constants can be used in ecu.test during execution?)

All of these variables can be referenced with the notation ${VARIABLE_NAME} in the following places of a playbook (a combined sketch follows the list):

  • in relative paths of testcases, setup steps and teardown steps

  • in constant values of package or project steps

    • constant values have to be Python expressions; if you want to use a string value, additional quotation marks have to be set (e.g. "'${VARIABLE_NAME}'")

    • special characters in expanded variables will be escaped

    • if you want to avoid escaping of the variable's value, you may prefix the variable's name with "raw:" (e.g. "${raw:VARIABLE_NAME}")

  • in package parameter values of package steps

    • package parameter values have to be Python expressions; if you want to use a string value, additional quotation marks have to be set (e.g. "'${VARIABLE_NAME}'")

    • special characters in expanded variables will be escaped

    • if you want to avoid escaping of the variable's value, you may prefix the variable's name with "raw:" (e.g. "${raw:VARIABLE_NAME}")

  • in the relative workspace path of the workspace source

  • in the "copy to"-path of workspace dependencies

  • in the commit reference and sparse checkout path of SCM sources

  • in the path of location sources

  • in the URL of link sources

  • in the artifact ID of artifact sources
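
The following fragment combines several of these places in one sketch. The step structure is illustrative, but the substitution forms follow the rules above: plain substitution in a relative test case path, extra quotation marks for Python string values, the "raw:" prefix to avoid escaping, and "$$" to escape a substitution:

"testcases": [{                 // illustrative step structure
    "path": "Packages/${TESTBENCH}/Smoke.pkg",
    "constants": {
        "BENCH": "'${LOCATION}'",
        "RAW_PATH": "'${raw:SOURCE_PATH}'",
        "LITERAL": "'$${NOT_REPLACED}'"
    }
}]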

Some variables are not available for substitution in specific parts of a playbook/task:

  • the following variables are not available for substitution in the source- and dependency-blocks of a playbook/task:

    • workspace dependencies labels

    • the variables TT_ARTIFACT_ID and TT_ARTIFACT_URL

    • the variables SOURCE_PATH and WORKSPACE_PATH

  • the variable TT_PLAYBOOK_SECTION can only be replaced in setup-, execution- and teardown-blocks of a playbook

  • the variables TT_PLAYBOOK_STEP_ID and TT_PLAYBOOK_STEP_ANALYSIS_ID can only be replaced in the execution-block of a playbook

  • the variables TT_ARTIFACT_ID and TT_ARTIFACT_URL can only be replaced if the workspace source is an artifact

Occurrences of ${…} in playbooks which are not supposed to be replaced have to be escaped with $${…}.

What happens if the execution of a playbook step fails?

The following table describes which states a playbook step assumes for which test results. A general error in this context means that the step could not be executed, or the test report could not be created:

Step type | Verdict SUCCESS, NONE or INCONCLUSIVE | Verdict FAILED or ERROR | General error

Setup step | execution continues | setup aborted, remaining test cases skipped, task state error | setup aborted, remaining test cases skipped, task state error

Test case | execution continues | remaining test cases executed, task state finished | remaining test cases executed, task state error

Teardown step | execution continues | remaining teardown steps executed, task state finished | remaining teardown steps executed, task state error

The state of the execution task results from the aggregated states of all playbook steps.

Setup steps

The whole setup may fail, for example, in the following cases:

  • a setup step is finished with verdict "ERROR" or "FAILED"

  • the package or project associated with a setup step does not exist

  • a setup step runs into a general error

  • a setup step is cancelled manually

In all of these cases the execution of the setup is aborted immediately, the test cases are skipped and the execution of the teardown steps is started. The state of the execution task is set to error.

Test cases

Test cases are completely independent of each other during the execution. This means if a test case fails with the verdict "FAILED" or "ERROR", the execution is not aborted, but the remaining test cases are executed normally. The state of the execution task will be set to finished at the end of the execution.

If a test case runs into a general error, the remaining test cases are executed normally and the state of the execution task is set to error at the end of the execution.

If you cancel the execution of a test case yourself, for example by manually aborting the test execution in ecu.test, the execution of the test cases will be stopped immediately, the execution of the teardown steps is started and the state of the execution task is set to error at the end of the execution.

Teardown steps

If a teardown step fails with the verdict "FAILED" or "ERROR", the execution is not aborted, but the remaining teardown steps are executed normally. The state of the execution task will be set to finished at the end of the execution.

If you cancel the execution of a teardown step yourself, for example by manually aborting the test execution in ecu.test, or the teardown step runs into a general error the remaining teardown steps are executed normally and the state of the execution task is set to error at the end of the execution.

2.6. Cross-company collaboration

In order to be able to collaborate across project, instance or company boundaries, test.guide offers the possibility to export report data directly to other instances. To do this, test.guide itself must be configured as a test management tool. The data can then be transferred to other projects or instances via the export view.

CrossCompanyCollaboration

If direct access to the other test.guide instance is not possible, an ATX document can also be created as a zip file in the export view via the option Export ATX documents, which contains all important test report data for colleagues to import manually. This way, a classic exchange via e-mail or a shared drive is also possible at any time.

2.7. Using the workflow automation

2.7.1. Introduction

With the workflow automation, you can easily automate your individual test processes within test.guide. Using simple blocks, you can construct user-defined workflows which are triggered by events in test.guide. Triggering events for the workflows can be

  • test.guide internal (e.g. a new artifact was uploaded to test.guide or a test.guide release was locked) or

  • test.guide external (e.g. call triggered by a status change of a Jira ticket).

Example use cases for the workflow automation:

  • Automatically start a test execution if a new software deliverable was uploaded to test.guide.

  • Automatically change the status of a related Jira ticket if a release in test.guide was locked.

How can you use the workflow automation?

The following image and description give a high-level overview of the elements and steps involved in the workflow automation.

WorkflowAutomationOverview

Manual setup

1. Flow bundle

A flow bundle is an artifact which automates your user-defined workflow. It consists of

  • the flow definition (a runnable description of your workflow) and

  • the flow.kit (a framework to create and execute a flow definition).

In the flow definition you represent your workflow by configuring and connecting flow blocks from the flow.kit library. Each flow block encapsulates an action to perform, e.g. a request to test.guide or an external tool, or a local operation.

2. Flow trigger

You create a flow trigger for your flow bundle. The flow trigger connects the flow bundle with a specification for test.guide events. This event specification defines the events on which your flow bundle reacts. These events can be test.guide internal (e.g. a new artifact was uploaded to test.guide) or external (e.g. call triggered by a status change of a Jira ticket).

Automatic execution

3. Flow task

If an event occurs that matches a flow trigger's event specification, a flow task is created. The flow task contains the specific event information (e.g. the ID of the uploaded artifact) and a link to the flow bundle which is specified in the flow trigger.

4. Flow bundle execution

The flow task is fetched for execution by a ResourceAdapter with an activated flow automation plugin. It runs your flow bundle, providing it with the information of the specific event that triggered the flow task.

5. Flow report

After running your flow bundle, the Flow Automation plug-in creates a report for the flow task execution: a flow report. The flow report is uploaded to test.guide and shows detailed information about the execution of your flow bundle.

2.7.2. Requirements to setup workflow automation

The following initial steps are required to use the workflow automation:

  • Flow bundle creation:

    • You need to create a flow bundle for your workflow using the flow.kit. The flow.kit is a framework to create flow bundles which automate your user-defined workflows.

You can download a template flow bundle with the flow.kit from the Flow trigger overview page (see section Flow trigger). You build your flow bundle by adapting this downloaded flow bundle.
A description on how to create your flow bundle is provided within the downloaded flow bundle.
  • Flow source preparation:

    • To use a flow bundle for the workflow automation you need to make it available to test.guide. You have two options to make flow bundles available:

      • push the flow bundle to an SCM (source code management) repository and add a reference to the repository in test.guide (see Project settings > SCM tools for details)

      • add a flow bundle as a zipped artifact to the artifact management (See Artifact Management for details)

  • ResourceAdapter configuration:

    • To have execution resources for the workflow automation you need to enable and configure the flow automation plugin in the ResourceAdapter.

    • See ResourceAdapter plug-in configuration for how to configure the ResourceAdapter.

    • Refer to the resourceAdapter.example.config file for full information about the configuration options of the flow automation plugin.

The resourceAdapter.example.config can be found in the extracted ResourceAdapter package.
  • User permissions for ResourceAdapter user:

    • The test.guide user of the ResourceAdapter needs the project permission Execute flow tasks.

    • If the flow bundles are provided via the artifact management, the test.guide user of the ResourceAdapter also requires Read/download permission for the corresponding depositories.

2.7.3. Flow trigger

WorkflowAutomationOverview Trigger

This chapter addresses the central part of the test.guide workflow automation: the flow trigger.

The flow trigger defines which flow bundle to execute for which triggering event. Flow triggers are defined per test.guide project.

The central entry point to configure flow triggers is the triggers page in test.guide (Workflow automation > Triggers).

FlowTriggerOverviewPage

The page consists of three major elements:

  1. Flow bundle download buttons

  2. Add flow trigger button

  3. Table of all configured flow triggers in this project

Creating a flow trigger

The "Add flow trigger" button opens the following flow trigger dialog.

AddFlowTriggerDialog

Field Description

Name

  • Your name for the flow trigger.

The name should reflect the use case, e.g. automatic test review.

Description

  • User description of the flow trigger.

Execution user

  • The execution user who executes the flow bundle.

  • This user needs all permissions relevant to the flow bundle and event.

  • The execution user has to be a technical user, which is authenticated by an authentication key.

The option to use your own user is for test purposes only!
Why do I need to set up the execution user every time I change something in the trigger?
This is a security feature which ensures that you have access to the technical user at the time of changing the code that will be executed with these permissions. This security feature applies to changes of the flow source and event type.

Source

  • The source of the flow bundle.

  • The flow bundle contains the implementation of your specific workflow automation.

  • Further information can be found in the section: Flow bundle source.

Priority

  • Set on the tasks to determine execution order.

  • Available range: 1 (lowest) to 100 (highest).

Threshold value of failed flow tasks for FAULTY flag

  • The number of consecutively failing flow tasks for the flow trigger to get flagged as FAULTY.

FAULTY flag
The FAULTY flag is set automatically by test.guide if the threshold is reached. This feature notifies the user about problems with the flow trigger (e.g. a 3rd party system being unavailable) and prevents unnecessary execution attempts. A trigger flagged as FAULTY will still create flow tasks to cover all incoming events. The created flow tasks will have the RETAINED state.

Delete tasks older than (in days)

  • Time-based cleanup rule for this trigger's tasks that are in a final state.

  • The deciding factor for deletion is the task’s completion time.

  • When left empty, tasks will not be removed automatically.

Event type

  • Event type to which this trigger subscribes.

  • Further information can be found in the section: Events and filters.

When the trigger is created, it is active and will create flow tasks for matching events.

Events and filters

WorkflowAutomationOverview Event

Some event types have additional filter options. The following table contains all event types and their corresponding filter options.

Filter behaviour
Different filter items of an event are linked with a logical AND. Empty items are ignored.
Event type Description Filter options

Artifact uploads

An artifact is uploaded to a depository in the artifact management related to the project.

The execution user needs access to the depository which the flow is designed for. No tasks are created for depositories to which the execution user does not have access.
  • Attributes

    • artifact attributes as key-value filter

    • values support wildcards

  • Depository ID

  • File name*

  • File extension*

Execution task state has changed

An execution task changes its state.

  • Name*

  • Attributes

    • execution task attributes as key-value filter

    • values support wildcards

  • New state

External event via REST API

Event type to react on external events. This event type is designed to be used as webhook receiver addressed by other systems. Triggered by calling the API route: <test.guide url>/api/triggers/<triggerId>/start. For usage, refer to the Flow API doc.

Permission
To start a flow task via the REST API an authentication key is needed. The user attached to the authentication key needs the Flow task Create/Update permission.

PDF export

An export of test results as PDF has finished.

  • Report design*

  • Report parameters

    • Report parameters as key-value filter

    • Values support wildcards

Q-gate state has changed

A Q-gate state has changed.

  • New Q-gate state

  • Q-gate plan key*

  • Q-gate name*

Release state changes

A release state has changed.

  • New release state

  • Release ID

  • Release type

Report uploads

A report was uploaded.

  • Project filter ID to filter uploaded test case executions

Review creation

A review was created.

  • ID of a test case execution project filter to filter created reviews by the corresponding test case execution

  • Reevaluations to filter created reviews

  • Tags of the review to filter created reviews

Test case report export with flow trigger

The Export report filter to flow trigger button in the report filter GUI is clicked. The event contains a list of the selected test case execution IDs, which can be used to get more information.

Test resource machine went offline

A test resource machine sends no heartbeat.

Time-controlled

Event type for time-based triggering of flows.

Workflow in a Q-gate plan

Use this event type to specify a workflow that can be embedded in a Q-gate plan.

*supports wildcards

Flow bundle source

As the flow bundle source of a flow trigger, you define the flow bundle which runs if a matching event occurs.

You have two options to provide your flow bundle as flow bundle source:

  • SCM source: an SCM (source code management) repository storing the flow bundle files

  • Artifact source: an artifact in the artifact management storing the flow bundle

SCM (source code management)

The SCM source is configured on the flow trigger with the following dialog.

FlowTriggerScmSourceDialog

Field Description

SCM label

Label of the SCM configuration that can be found in the project settings.

SCM configuration
The SCM source uses the test.guide SCM configuration in the project settings. To use an SCM repository as source, it first needs to be created there.

SCM commit

Commit reference in your SCM tool.

Commit-ish reference
Git-specific commit hashes, branch names or tags can be used.

Sparse checkout path

(available for git) Sparse checkout selectively downloads parts of the repository.

Override SCM tool settings

Local override of settings made in the SCM configuration in the project settings.

Relative path

This relative path needs to point to your flow bundle if the flow bundle is not at the root of the repository.

Sparse checkout and relative path
When used in conjunction with sparse checkout, this path will be used relative to the checked out directory. For example: Sparse checkout path "folder" and relative path "subfolder" result in "folder/subfolder" in the application.
Artifact

To use an artifact as flow source, the flow bundle needs to be uploaded to a depository in the test.guide artifact management as a *.zip file.

The ResourceAdapter user needs the Read/download permission on the depository in order to download the flow bundle during execution.

Subsequently, you can configure an artifact source with this dialog:

FlowTriggerArtifactSourceDialog

Field Description

Artifact ID

ID of the artifact uploaded in the artifact management.

Relative Path

If the bundle is not in the root folder but in a subfolder within the *.zip archive, you need to set the relative path here.

2.7.4. Flow task

WorkflowAutomationOverview Task

For every event matching the event specification of a flow trigger, a flow task is created.

You can access the overview page for all flow tasks under Workflow Automation > Tasks. For every flow task, a detail page is available.

A flow task consists of:

  • a reference to the triggering flow trigger

  • a flow source which is initially copied from the triggering flow trigger

  • information about the triggering event

  • a flow task state

  • runtime meta information (ID of executing ResourceAdapter, start and end time of the execution)

  • a flow report (detailed information for the flow task run)

  • links related to the flow task run (trigger link and interaction links)

  • links to the log files (if flow task log archive was set)

Flow task states

Every flow task has a state which shows its position in the flow task execution process.

These are the states for a flow task as a whole. Flow blocks running as part of the flow task’s flow definition have different states. These are described in the section flow block states.

The following flow task states exist:

FlowTaskStates

Name Description

FlowTaskStateWAITING WAITING

Flow task is ready to be executed by a ResourceAdapter.

If a flow trigger goes into state FAULTY, all flow tasks which are in state WAITING and were created from that flow trigger go into state RETAINED.

FlowTaskStateRETAINED RETAINED

The flow task was not run and is withheld from execution. It will not be fetched and run by a ResourceAdapter while it is in this state.

A flow task can go into this state in two cases:

  • The flow task was triggered by a flow trigger that has a FAULTY flag.

  • The triggering flow trigger was flagged FAULTY while the flow task was in the state WAITING.

Once you have fixed the cause of the fault, you can manually release the flow task. This brings the flow task back into state WAITING.

FlowTaskStateRUNNING RUNNING

Flow task is being executed by a ResourceAdapter.

FlowTaskStateFINISHED FINISHED

Flow task was executed completely and without an error (all blocks of the flow definition have result FINISHED).

FlowTaskStateSKIPPED SKIPPED

The flow task was executed without an error in a flow block but at least one block of the flow definition has result SKIPPED.

FlowTaskStateABORTED ABORTED

An error occurred while the flow task was executed. This can be

  • an error in the flow definition (the result of at least one block is ABORTED).

  • an error outside the flow definition while trying to run it (e.g. error while fetching the flow (insufficient permissions, incorrect credentials for SCM repo)).

Execution by the ResourceAdapter

WorkflowAutomationOverview Execution

The flow bundles of flow tasks are executed by ResourceAdapters with a configured Flow Automation plugin (see section Requirements for details on the configuration). As the flow tasks run on systems with an active ResourceAdapter, all resources available on these machines, e.g. mounted network drives, can be accessed while running the flow bundle.

The available ResourceAdapters fetch the flow tasks in state WAITING for execution. Currently, it is not possible to limit which flow tasks a ResourceAdapter can fetch and execute (e.g. no limitation based on operating system possible).
If both the execution task plugin and the flow automation plugin are enabled on the same ResourceAdapter, runs of both plugins do not block each other.

For every flow task execution the ResourceAdapter creates logs. These logs are located in the subdirectory Flow_Automation_Resources\FlowKitLog of the ResourceAdapter's workspace directory with a separate directory for each flow task execution.

If the depository for log file upload is set, execution log files are automatically uploaded to test.guide and can be accessed via links on the flow task detail page.

Every flow task execution creates the following log and report files:

File name Description

flowKitValidate.log

Validation phase log of the flow bundle execution

Can be automatically uploaded to the test.guide depository for log file upload

flowKitExecute.log

Execution phase log of the flow bundle execution

Can be automatically uploaded to the test.guide depository for log file upload

report.json

Report of the last flow bundle execution

Flow report

WorkflowAutomationOverview Report

The flow report for a flow task can be viewed on the detail page of the flow task. It is available after the execution of a flow task. The following screenshot shows an example for the flow report.

FlowTaskDetailReport

The report shows information about the flow blocks of the flow definition for the flow bundle execution. The report includes a graphical visualisation of the executed flow definition and the following information for every flow block:

  • type of the flow block

  • state of the flow block (see section flow block states)

  • dependencies (dependency flow blocks of this flow block)

  • parameters (name and value of all parameters of this flow block)

  • result

  • errors from the execution of this flow block

  • links to related resources (see section links and interactions)

Flow block states

Reported states of the flow blocks can be the following:

Name Description

FlowBlockStateINIT INIT

Flow block was not executed yet. Present if only the validation was run.

FlowBlockStateFINISHED FINISHED

Flow block was executed successfully.

FlowBlockStateSKIPPED SKIPPED

A flow block can have this state in two cases:

  • Flow block was executed and explicitly set to SKIPPED (e.g. flow block ConditionalSkip).

  • A dependency block of this flow block is SKIPPED or ABORTED.

FlowBlockStateABORTED ABORTED

Flow block was executed and resulted in an error. This leads to the state ABORTED of the overall flow task.

Links and interactions

FlowTaskDetailLinkAndInteraction

The flow task detail page shows a links section. It contains links to resources connected to the flow task. There are four categories of links:

Category Description

Trigger event

Link to the test.guide resource connected with the triggering event of this flow task.

Interaction

Created

Links to resources created during the flow task run.

Updated

Links to resources updated during the flow task run.

Related

Links to resources that were used during the flow run (e.g. if a test.guide release was read) or generic links added as interactions in the flow definition.

In the Interaction categories, a link to a single resource is only shown once. If a resource falls into multiple categories, it is only shown in the highest category. The order of the categories is the following: Created > Updated > Related. If that resource triggered the flow task, it is also shown as the Trigger event.

Example: If a release was created and read during the flow task execution, it is only shown in the category Created and not in Related. If a change to this release triggered the flow task, the Trigger event category will show a link to the release as well.

Actions

For flow tasks, the following actions are available.

Rerun flow task

This action creates a new flow task, copying the flow trigger reference, flow source and event information of the original flow task. Regardless of the original flow task's state, the state of the newly created flow task is WAITING.

The rerun flow task is fetched and run independently of the original flow task. This means that it does not necessarily run on the same ResourceAdapter as the original flow task.

You can only rerun flow tasks whose triggering flow trigger still exists and was not deleted.

You need the permission Flow tasks Create/Update for this action.

Delete flow task

This deletes the selected flow task.

You need the permission Flow tasks Delete for this action.

Release retained flow task

(Only for tasks in state RETAINED)

Flow tasks in state RETAINED can be released. This changes their state to WAITING, thus marking them as executable. In a context menu, you can choose the flow source for the released flow task from two options:

  • keep flow source of flow task

  • use current flow source of the flow trigger

You need the permission Flow tasks Create/Update for this action.

2.7.5. Env variables

Env variables are key-value pairs that are available to all flow bundle executions of a test.guide project. You can set them on the page Workflow automation > Env variables.

In the flow bundle, the value can be accessed via the standard Python function os.getenv('MY_ENV_VAR').
To protect the variable content, the values cannot be displayed again in test.guide.

2.7.6. Settings

Flow task log archive configuration

You can select a depository for log file upload, where log files from flow task executions will be uploaded. If no depository is selected, the log files will not be uploaded.

The ResourceAdapter user needs the Upload permission on the depository to upload the log files.

The uploaded log files for each flow task are linked on the flow task detail page.

You can set up remove rules in the depository settings to automatically remove old log files. For fine-grained control you can use the attributes TT_FlowTaskId (for selecting flow task execution log files in general) or TT_FlowTriggerId (for selecting log files of a specific flow trigger).

3. Integrating

This chapter provides information about tools that can interact with test.guide. If it is necessary to change the configuration of the external tool to allow interaction, these changes are explained in this chapter. If the configuration has to be done in test.guide, have a look at Project settings.

3.1. Integrating in ecu.test

ecu.test and test.guide are well connected. To address your test.guide project within ecu.test, just set up a few options in ecu.test. A short guide is directly accessible in test.guide at the bottom of Info > Report management.

3.2. Integrating in Jenkins

tracetronic provides an ecu.test plugin for Jenkins. Within this plugin, test.guide can also be configured; see the plugin documentation on GitHub.

3.3. Integrating in test.guide

The following sections serve only as a short briefing on the possibilities of test.guide. For the configuration of the tools, see Project settings. For the usage of the features supported with the tools, see Using.

3.3.1. Integrating issue trackers

Issue trackers can be utilized in test.guide to access your issue/defect management. You can create reviews or request the status of the linked tickets.

test.guide provides access to:

Jira

When utilizing Jira, test.guide supports different authentication methods:

  • HTTP basic authentication using username and password,

  • HTTP basic authentication using username and an API token and

  • HTTP bearer authentication by Personal Access Token.

When configuring the Jira connection, please select Authentication Method: Basic Authentication if one of the first two methods should be used. In case you want to make use of a Personal Access Token previously created in Jira, set Authentication Method to Personal Access Token.

When using a Personal Access Token, test.guide will, for technical reasons, insist on a username being entered. You can enter any value in the username field; it will not be used.

3.3.2. Integrating test management tools

Test management tools (or Application Lifecycle Management (ALM) tools) can be used to synchronize test specification, test planning and, all in all, test management with the test report data of test.guide. Furthermore, it is possible to get access to your test plans and use them as a data basis for coverages.

test.guide provides access to:

The connection of one of the tools can be configured via Project settings > Management tools.

Jama

If you want to use the test run <> defect linking of test.guide, you have to provide the identification number of the relation type that you want to use. The easiest way to get this number is by using the Swagger UI of your Jama instance at <jama-url>/api-docs. Try out GET relationship types and choose the ID you need.

3.3.3. Integrating source control management

test.guide allows integration of source control management (SCM) tools. These can be used to manage your test bundles for distributed test execution.

test.guide provides access to:

3.3.4. Integrating metrics tools

test.guide provides three metrics endpoints for scraping with Prometheus:

  • /api/metrics provides operational metrics, for example JVM status, but also usage statistics

  • /api/metrics/performance provides performance-related metrics that give an insight into the number of times performance-critical operations are executed and their average execution time

  • /api/metrics/database provides metrics pertaining to the database state and configuration

To get started, a basic Grafana dashboard can be used: testguide_Dashboard.json

This section is intended for project managers. The project settings made here affect all members of the project.

For binary report data such as recordings, logs or plots to be available in test.guide, a file repository must be configured in the project.

The following types of file repositories are available:

  • File system (may also be used with mounted network drives)

  • FTP server

  • Artifactory Repository Manager

  • Server Message Block (SMB) / Common Internet File System (CIFS)

Assuming one can access the SMB resource by navigating to \\my-storage-server\my-share\path\in\share in Windows Explorer, this corresponds to the following SMB configuration in test.guide:

SmbFileRepositoryExample

  • Amazon S3 (Simple Storage Service)

  • A depository from the test.guide artifact management

4.1.1. Migration

If a file repository moves from one storage (e.g. NAS storage) to another and, as a result, the access path changes, test.guide provides the option to migrate the existing deposited files.

There is no downtime for new project uploads during the migration.

4.1.2. Auto Remove Rules

Sometimes files need to be retained for a limited time period only, for example to analyze them. In this case, the auto remove rules can be used to set which files are automatically removed from test.guide based on predefined criteria.

Using this feature may save a lot of storage space!

Each project manages its own users.

4.2.1. Let users register themselves instead of creating them beforehand

Users who want to join the project should not be created manually by the project manager, even though this is possible. Instead, they should register themselves at /register.

4.2.2. Manage users in project context

At Project settings > Project users, a project manager can maintain the users assigned to the project.

The following operations are available:

Action Description

Add new user

Create a new user, which is added to the project. This will send a notification if System configuration > Email is set up.

Add user to project

Add an existing user to the project.

Add technical user

Create a technical user account for this project. Technical users can only interact with test.guide using the API (upload reports, etc.). To learn how to get the authentication key of a technical user, please refer to "How can I upload my ecu.test reports to test.guide?".

Edit

Change the user’s profile or project permissions.

Remove from project

Remove the user from this project, revoking former access privileges.

Deactivate

Disables access to the project, keeping project permissions untouched. The user can be reactivated later.

Permissions can be granted per user by clicking the Edit button next to the respective user. The different context-specific permissions are described in the table below.

Context Permission Description API value

General

Project management

Allows the user to manage project settings such as

  • Config file storage, external tools

  • Config upload rules

  • Config filter settings, review settings

  • Export/import reports

  • Show project tasks in task manager

  • Send message to all users

  • Delete public report filter for other users

  • Delete public coverage filter for other users

  • Create and edit project filters

  • Create dashboards

PROJECT_MANAGEMENT

General

Manage project users

Allows the user to

  • create new users

  • add/remove existing users to/from project

  • edit project roles

  • edit and deactivate existing users

PROJECT_USER_MANAGEMENT

Test report management

Manage business report templates

Allows the user to access Project settings > Reporting

BUSINESSREPORT_MANAGEMENT

Release management

Lock release

Allows the user to lock releases in Release management > Release overview

LOCK_RELEASE

Release management

Unlock release

Allows the user to unlock releases in Release management > Release overview

UNLOCK_RELEASE

Test report management

Manage dashboards

Allows the user to access Project settings > Dashboard views

DASHBOARD_MANAGEMENT

Test report management

Manage project filters

Allows the user to manage project filters at Project settings > Project filters and create new ones using the Test report management > Report filter

PROJECT_FILTER_MANAGEMENT

Test report management

Category Test reports

Read

Allows the user to see test reports

TESTCASE_VIEWER

Create/Update

Allows the user to upload test reports to test.guide

TESTCASE_UPLOADER

Delete

Allows the user to delete test reports

UPLOAD_CLEANER

Test report management

Category Deposited files

Read

To see deposited files of test reports, the user needs the "Read" permission for test reports

TESTCASE_VIEWER

Create/Update

Allows the user to attach files to test reports

ARCHIVE_UPLOADER

Delete

Allows the user to delete deposited files of test reports

ARCHIVE_DELETE

Test report management

Category Reviews

Read

To see reviews of test reports, the user needs the "Read" permission for test reports

TESTCASE_VIEWER

Create/Update

Allows the user to create reviews for test reports

REVIEWER

Delete

Deleting reviews is not possible

-

Release management

Category Releases

Read

Allows the user to see releases

RELEASE_READ

Create/Update

Allows the user to create new or edit existing releases, as well as Q-Gates

RELEASE_CREATE

Delete

Allows the user to delete releases

RELEASE_DELETE

Monitoring (test infrastructure)

Monitoring management

Allows the user to manage test resources and test resource machines (create, delete) and related features (activity types, attributes, appointment categories, …) and enables access to Project settings > ResourceAdapters, where the configurations of all ResourceAdapters can be managed

HIL_ADMIN

Monitoring (test infrastructure)

Synchronize ResourceAdapter configuration

Enables access to APIs required by ResourceAdapters to synchronize their respective configuration files with the test.guide server

RA_CONFIG_SYNC

Monitoring (test infrastructure)

Category Monitoring data

Read

Allows the user to see test resource and test resource machine data within the "Test infrastructure" and "Monitoring" modules

HIL_USER

Create/Update

Allows the user to add data of test resources and to upload test resource machine data to test.guide (e.g. used for technical users of ResourceAdapters)

MONITORING_UPLOADER

Delete

Allows the user to delete data of test resources and test resource machines from the monitoring database of test.guide

MONITORING_DELETE

Test execution

Category Test executions

Read

Allows the user to see test execution tasks and playbooks within the Test execution module

EXECUTION_READ

Create/Update

Allows the user to

  • upload new test execution tasks and playbooks

  • add new playbooks

  • edit existing test execution tasks and playbooks

  • execute playbooks

  • rerun test execution tasks

EXECUTION_CREATE

Delete

Allows the user to delete execution tasks and playbooks

EXECUTION_DELETE

Workflow automation (ResourceAdapter flow automation plugin)

Execute flow tasks

Permission to access APIs required to execute flow tasks, necessary for user of ResourceAdapter when using flow automation plugin

FLOW_TASK_EXECUTE

Workflow automation

Category Flow trigger

Read

Allows the user to see flow triggers within the workflow automation module.

FLOW_TRIGGER_READ

Create/Update

Allows the user to

  • add new flow triggers

  • edit existing flow triggers

  • activate/deactivate flow triggers

  • remove faulty flag on flow triggers

  • see flow secret entries

  • add new flow secrets

  • edit flow secrets.

FLOW_TRIGGER_CREATE

Delete

Allows the user to delete flow triggers and flow secrets.

FLOW_TRIGGER_DELETE

Workflow automation

Category Flow task

Read

Allows the user to see flow tasks within the workflow automation module

FLOW_TASK_READ

Create/Update

Allows the user to

  • edit existing test execution tasks and playbooks

  • rerun flow tasks

  • release flow tasks in state RETAINED.

FLOW_TASK_CREATE

Delete

Allows the user to delete flow tasks.

FLOW_TASK_DELETE

Release management

Category Q-gate plans

Read

Allows the user to see Q-gate plans and their details

QGATE_PLAN_READ

Create/Update

Allows the user to create Q-gate plans and manage a Q-gate plan's properties (e.g. attributes) and contents (creating, editing and removing Q-gates and the dependencies between them)

QGATE_PLAN_CREATE

Delete

Allows the user to delete Q-gate plans.

QGATE_PLAN_DELETE

4.2.4. Project roles

Instead of assigning individual project permissions to users, you should use roles in your project and assign them to users.

Diagram
Figure 1. Relation between users, system groups and project roles

At Project settings > Project roles, existing roles can be maintained and new roles created. The following operations are available:

Action Description

Edit

Change the permissions assigned to the role

Manage members

Assign and remove users who are members of this role. The members inherit all permissions assigned to the role.

Associate system groups

Besides individual users, system groups can be associated with a role. All users who are members of the associated system group will inherit membership of the role and therefore all permissions linked to it. See also System groups.

Delete

Remove the role. The members will lose all permissions brought by this role.

The roles assigned to a project user are also visible on the project user page where users can be assigned roles, too.

Effective permissions

When working with project roles, individual permissions and system groups, it can be hard to find out which permissions a certain user has in the end. Therefore, when editing a project user and their permissions, you can see their roles and groups. Additionally, you can use the "Show effective permissions" button at the end of the form to get an overview of all their accumulated permissions.

4.2.5. Using project roles and system groups

  • If you are using LDAP authentication and your LDAP directory contains groups that you want to use for authorization:

    1. Enable the system group synchronization feature (see System groups)

    2. Create project roles

    3. For each role, specify the permissions that the role members should have

    4. Associate system groups with their project role counterparts

  • If your authentication provider does not contain usable groups but you still want to use roles for permission management:

    1. Create project roles

    2. For each role, specify the permissions that the role members should have

    3. Add users directly to the roles

  • As a best practice, individual permissions should rarely be used. Individual permissions make it hard to keep track of which user has which permission and why.

The report filter fields can be customized, for example to resemble the project's individual naming conventions. This helps project members find their way around.

New filter fields can also be defined for each project. More information can be found at Customizing the report filter.

To ensure that the review process is consistent, specific review requirements can be set in the project.

Besides the definition of

  • defect classes

  • defect priorities

  • tags

  • and custom evaluations

the mandatory review fields can also be set.

Another review option operates in conjunction with a configured issue tracker and releases. It automates linking between issues within the issue tracker. Currently supported system:

  • Atlassian Jira (supports interlinking projects)

Even issues from different projects will be linkable. Any issues assigned to a release will be linked with issues assigned to reviews of test case executions added to the release. The link will be created on two occasions (assuming the release and reviews have issues assigned):

  • Create a review on a test case execution that is part of a release (via browser or API)

  • Add test case executions with existing reviews to a release (via browser or API)

To create a link between two issues, the user credentials may be required, depending on the provided issue tracker configuration. When using the web frontend, a prompt will be displayed. Requests on the API that encounter missing credentials will be aborted with HTTP status 412 (precondition failed).

If you want to change content and design of test summary or test case coverage PDF reports, templates for PDF generation can be managed in this section.

Please refer to Customizing the PDF export for more information on this topic.

test.guide supports the connection to various issue trackers. This way, defects detected during the review process can be directly fed back into the corresponding issue tracker. Due to the seamless integration, the user does not have to leave test.guide to do so.

More information about supported issue trackers and their configuration can be found at Integrating issue trackers.

4.6.1. Issue tracker templates

To ease the process of creating defect tickets, individual templates can be defined for each issue tracker: By using templates, selected fields of a defect ticket can be prefilled with information available in test.guide.

It is possible to specify a template for each free text field (as specified by the issue tracker) of a ticket. A template can contain static text, but also placeholders that are replaced with information, for example, from the review or the underlying test case execution (e.g. a certain global constant). A description of supported placeholders can be found on the configuration page.

Currently, issue tracker templates are only supported for Jira.

This avoids mistakes when creating the ticket and minimizes the coordination effort.

IssueTrackerJireTemplateWorkflowExample

Specific template fields
Multi input fields

For fields that allow multiple values, more than one value can be defined in the template. To separate the values, use the separator sequence |||.

Jira Asset fields

Templates for custom fields that contain Jira Assets objects or references must be filled with the key (not the name) of an existing asset. If you want to reference, for example, a Jira Asset "my Tool" with the key "TOOL-1", please fill the template so that the value of the field resolves to "TOOL-1".

test.guide is the linking element between test execution and test management.

After the review of test case executions has been completed, it is a common task to transfer the current status back to the test management tool. To this end, different test management tools can be integrated into test.guide.

More information about the supported test management tools (or Application Lifecycle Management (ALM) tools) and possibilities can be found at Integrating test management tools.

One of the major challenges, especially for test case coverage calculation, is to ensure consistent metadata for the test case execution reports in the project. Therefore, upload rules can be defined, which ensure that specific required metadata (for example the requirement ID) must be present in each test case report - otherwise, the upload will be rejected.

In addition, the issue tracker Jira also offers the possibility to automatically update related tickets when uploading test reports.

Using test.guide over a period of time can leave you with a lot of reports, some of which may no longer be needed because they are not assigned to a locked release, originate from probe runs or are simply outdated. To keep your database clean of those reports, you can set up rules that are applied automatically every evening between 8 pm and midnight. A rule will search for all reports older than x days and filter out those not contained in any locked release. The resulting reports are then deleted together with the associated reviews and files.

Use the recycle bin to remove unnecessary attributes and constants from test case executions to reduce the database size and filtering workload. This is a two-step process: First, the unnecessary data must be moved to the recycle bin using the Move to Recycle Bin dialog. Then you can permanently delete or restore the items in the recycle bin using the appropriate buttons.

Recycle bin actions are processed as background tasks and can take some time depending on the amount of data. While a task is processed, it is not possible to start further operations for consistency reasons.

The ecu.test ATX Generator settings can be configured centrally via this page. If the configuration is activated, all ATX Generator settings except the connection settings are synchronized with the test.guide configuration and applied before the ATX report generation in ecu.test.

To allow ecu.test to be managed centrally, the ATX Generator setting useSettingsFromServer must be activated in the ecu.test workspace.

This page contains a list of all existing project report filters. For each filter, you can see the number of subscriptions that have been created. It is possible to create new subscriptions (also simultaneously for multiple users) and delete existing subscriptions.

When creating or deleting filter subscriptions, the affected users do not get notified.

For project announcements, for example when new upload rules are put into action, it is possible to notify all registered project users via email.

To be able to use this feature, the System configuration > Email settings must be configured. If this is not the case, please ask your admin for help.

If a second test.guide instance or project is to be set up and you want to transfer and re-use the current project settings, this is possible via Transfer setup. Using this feature, a backup of the current project settings can be made, which can then be imported into another test.guide project.

It is only possible to import the backed-up settings if the same test.guide version is used.

In the project team, you often want a quick overview of the current test case execution status of specific software variants or special test sections. With the help of the dashboard widgets, it is possible to create different dashboards for the desired overviews in the project.

The dashboard widgets in use are based on the project filters and are updated for the live view at specific time intervals.

On this page you can manage scripts and deploy new scripts to the Script-Server. This page is only available if the project has been granted permission to use the Script-Server at System configuration > Script-Server.

4.16.1. Deploy a script

To deploy a script you will need an SCM configuration for the repository your script is stored in and an authentication key that your script can use to access the test.guide API. SCM configurations can be created at Project settings > SCM tools, authentication keys can be created at User Profile > Auth Keys.

Select an SCM configuration and specify the branch and directory where your script is located. Enter your authentication key and press Add. If the deployment of your script was successful, you will find it in the list of known scripts.

4.16.2. Manage and execute scripts

Under Known scripts you will find all scripts that were deployed to the Script-Server from this project so far. You can execute a script by calling the script's URL. You can also create a webhook to trigger your script automatically whenever specified events occur in test.guide. This can be done at Project settings > Webhooks by setting your script's URL as the callback URL of a new webhook and selecting the events for which you want your script to trigger. To delete a script, you can use the button in the script's drop-down menu in the Actions column. To delete all of this project's scripts, you can use the Delete all button.

5. Administering the system

This section provides an overview of the system configurations that are important for operating test.guide.

General operating settings for test.guide are configured in this section.

  • Logging

  • Application base URL

  • Upload buffer

  • DoS API access protection

It is important that the application base URL is set up correctly. Otherwise, links in sent emails or in PDF reports created by test.guide, for example, will not work correctly.

The report database for all test case execution imports can be operated with different database systems. We recommend using the PostgreSQL database.

For additional information on setup, see the Operations Manual.

A database must be configured for the recording of test bench data by the ResourceAdapter.

The monitoring database for all test bench data (PC vital data, test executions, test bench configuration, …​) can be operated with different database systems. We recommend using the PostgreSQL database.

For additional information on setup, see the Operations Manual.

test.guide supports different methods for authenticating users, either via the internal user management or against the company's infrastructure.

5.4.1. Internal User Management

Users can be created with the internal user management.

User names must be unique.

Users may modify or update their own profiles, except for permissions and the user name.

5.4.2. LDAP Authentication

To determine the LDAP parameters used in your company’s environment, we recommend using the following Microsoft tool: Active Directory Explorer (AD Explorer).

To enable LDAP, the following fields must be filled in by the responsible IT staff member.

Host

Hostname of the LDAP server

Port

Port of the LDAP server

Use SSL

Connect via SSL (be aware that the certificate must be stored in the Java installation)

Use TLS

Connect via TLS

Manager DN
  • a user with read permissions on the LDAP server, which is used to look up the users to be used in test.guide

  • the full LDAP path must be entered here

  • cn=anonymous,…​ , dc=local may be used to log in without a Manager DN

Manager Password

password for Manager DN user (optional)

User Organization Unit

LDAP path in which users are looked up (optional)

User Domain Component

domain in which users are looked up

User ID Attribute
  • the LDAP attribute which should be used as login name

  • e.g. sAMAccountName, uid or even mail

  • this attribute will be mapped to the user name for authentication at login

For each LDAP account to be used in conjunction with test.guide, a corresponding test.guide user must exist. This user needs a user name matching the value of the User ID Attribute.

5.4.3. OpenID Connect

When this option is selected, test.guide acts as an OpenID Connect Relying Party and can authenticate users which are managed by an OpenID Provider.

  • test.guide must be statically registered at the OpenID Provider. The credentials obtained in the process (Client Identifier and Secret) must be entered in test.guide’s OpenID Configuration dialog.

  • test.guide uses OpenID Connect Discovery to obtain the OpenID Provider’s metadata

  • test.guide uses the Grant Type "Authorization Code"

  • The Redirection URI is <test.guide Base URL>/api/oidc/authn-response. It is also shown in the OpenID Configuration dialog.

  • test.guide supports the following authentication methods for the Token endpoint: client_secret_basic and client_secret_post

  • by default test.guide requests the following scope values: openid, profile and email

    • if additional scopes should be requested, please specify them in test.guide’s OpenID Configuration

  • test.guide uses the following claims from the UserInfo response: preferred_username, name and email

  • test.guide supports Front Channel Logout. The URL is <test.guide Base URL>/logout.

As a server manager, you can manage all registered users, including their project assignments, here.

5.5.1. The special user ServerAdmin

The user ServerAdmin is a reserved user that always has unrestricted access to the application.

The ServerAdmin password was set during the setup of test.guide.

5.5.2. Global permissions

Global permissions are permissions that are related to operations that affect the entire test.guide instance (as opposed to project permissions, which are related to a specific project and granted within the respective project setting section, see Project settings  Project users).

The following global permissions exist and may be assigned to any user:

Permission Description API value

Server manager

Allows the user to manage the system, giving access to most options of the System configuration section (see the details of the other global permissions for exceptions).

SERVER_MANAGER

User management

In combination with the Server manager permission, this allows managing users in a global context (adding and deleting users, configuring the employee directory, system groups and system group synchronization). Details can be found in the upcoming sections.

USER_MANAGEMENT

Metrics access

Allows the user to access usage metrics.

METRICS_ACCESS

5.5.3. Manage users in global context

A user with the User management permission can access the menu at System configuration  User management, where all users are listed. Both regular and technical users can be created on this page. After creation, a user account does not have any projects assigned.

The user list supports the following operations:

Action Description

Edit

Change the user’s profile and/or global permissions.

Projects

Assign a user to or remove a user from a project. This will open a new page listing the assigned projects.

Delete

Deletes the user permanently, also deleting all project assignments.

5.5.4. Wipe out username

If a colleague leaves the company or project, it is possible to anonymize their user name in test.guide.

The Wipe out username action can be found at System configuration  User management.

5.5.5. Configure employee directory

Clicking this menu button opens a new page where an employee directory can be configured. The directory is an external LDAP service that has knowledge about the existing employees.

If an employee directory is set up, the filter option Show only orphaned users will be visible at System configuration  User management. If one enables this filter option, only the test.guide users unknown to the employee directory are shown, which allows you to quickly deal with leftover test.guide users belonging to former employees.

Configuration is done by specifying the parameters required for read access on an LDAP server. Furthermore, one must provide the Domain Component and Organizational Unit under which employees are listed in the directory, as well as the LDAP attribute which maps to the test.guide user name.

5.5.6. Configure disclaimer

When this button is pressed, a new page opens on which disclaimers can be published and managed.

A disclaimer is (in the context of test.guide) a compliance-related tool intended to require users to take note of and accept the terms that are written down in the disclaimer text.

DisclaimerResult

The feature is characterized as follows:

It is not possible to edit or remove a disclaimer once it is published!
  • only one disclaimer is valid at a time

  • disclaimer history on Configure disclaimer page also shows old disclaimers, which are not valid anymore

  • users must accept the current disclaimer; this is checked when they try to log in

  • if a user accepts the disclaimer, they will not be prompted again until a new disclaimer is published

  • if a user does not accept the disclaimer, they will not be able to log in

  • when exporting the user list, the Confirmed disclaimer version column contains the disclaimer version confirmed by the user

The disclaimer can be formatted using HTML tags as needed.

A user can be a member of any number of system groups. A system group can be associated with different project roles for each project, giving the user the appropriate permissions in a project. If the roles include permissions for a project of which the user is not yet a member, the user is automatically added and enabled as a project member. In this case, the project managers are notified by email.

System groups are synchronized based on either LDAP user groups or OpenID Connect roles.

At System configuration  System groups, server managers can find an overview of all imported system groups and their members. When a new system group is found during synchronization, it is added to this view.

If server managers want to configure project roles for a system group that has not been synchronized yet, they can also add system groups in advance by using the Create new system group button on the page. Group members will be added later through synchronization. For synchronization to work, the group name must be written exactly as it is provided by either LDAP or OpenID Connect synchronization.

Existing system groups are not removed automatically, but they can be deleted by server managers. If the group still has members, it will be re-created the next time their data is synchronized; the previously assigned roles, however, cannot be restored.

The synchronization is configured on the System configuration  System groups page by hitting the Configure synchronization button. The following sections explain how the configuration must be carried out for the different mechanisms and highlight any specific requirements that apply:

5.6.1. Using LDAP system group synchronization

User groups of LDAP can be used to create system groups in test.guide. The name of a system group is the full LDAP path of the group.

This feature can be used regardless of whether LDAP is also used to authenticate users. For example, synchronization of system groups via LDAP is also possible if OpenID Connect is the authentication method in use. However, the prerequisite is that the users can be located in the LDAP directory via their test.guide user names.

On the Configure system group synchronization page, select LDAP as the synchronization mechanism. Next, specify the configuration that can be used to connect to the LDAP server and locate the users in the directory. The chapter on how to configure LDAP Authentication may provide helpful information on how to do this. Additionally, the following fields are important:

Attribute containing the system groups

the LDAP attribute whose values should be interpreted as user groups, e.g. "memberOf"

Path suffix for system group selection

if a value is entered, only groups whose LDAP paths end with the given DN are imported

When synchronization is enabled, the LDAP groups of a user are retrieved on login. Additionally, a background service updates the group information for all known system users hourly. If a user is added to a group in LDAP, they will become a member of the respective system group in test.guide on the next synchronization.

5.6.2. Using OpenID Connect system group synchronization

OpenID Connect can only be used to create system groups, based on a user's OpenID Connect access token claims, if it is also used to authenticate users in test.guide.

Please note that due to the characteristics of OpenID Connect, synchronization can only take place when a user logs in. Automatic synchronization of all users at regular intervals, as is possible with LDAP, is not possible with OpenID Connect.

If additional scopes should be requested from the OpenID Connect provider because they contain the necessary group information, please add the respective scopes to the authentication configuration at System configuration  Authentication.

This feature can be enabled at System configuration  System groups by hitting the Configure synchronization button. On the page that opens, select OpenID Connect as the synchronization mechanism and provide one or more valid JsonPath expressions to extract the system group names from the access token and/or the user info response that the OpenID Connect provider hands over to test.guide during user authentication.

The following information is only intended as a reference; the actual structure of the access token, and thus the JsonPath expressions required, depends heavily on the respective infrastructure and must be adapted accordingly.

The following snippet shows an excerpt of a typical access token:

{
  "realm_access": {
    "roles": [
      "offline_access"
    ]
  },
  "resource_access": {
    "account": {
      "roles": [
        "hil_user",
        "tg_admin"
      ]
    }
  }
}

Based on this excerpt, the following examples show valid JsonPath expressions to create system groups based on the roles of this access token:

  • $['realm_access']['roles'][*] - this will import only the offline_access role as a test.guide system group

  • $['resource_access']['account']['roles'][*] - this will import both the hil_user and the tg_admin role

  • $..['roles'][*] - this will import all array values of every roles object, in this example offline_access, hil_user and tg_admin

The same is applicable for the user info response, which is also a JSON object and may have an array containing the system group names.

For details on the JsonPath syntax, please consult this resource. There are also tools for interactively trying out JsonPath expressions, which can be a good way to find the correct expression for your own access token.
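
As a minimal local tryout (a sketch assuming the third-party Python package jsonpath-ng, which is not part of test.guide), the expressions above can be tested against the access token excerpt:

# -*- coding: utf-8 -*-
# Requires the third-party package jsonpath-ng: pip install jsonpath-ng
from jsonpath_ng import parse

# Access token excerpt from above
accessToken = {
    "realm_access": {"roles": ["offline_access"]},
    "resource_access": {"account": {"roles": ["hil_user", "tg_admin"]}}
}

# Import all array values of every "roles" object
expression = parse("$..roles[*]")
groups = [match.value for match in expression.find(accessToken)]
print(groups)  # e.g. ['offline_access', 'hil_user', 'tg_admin']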

With test.guide it is possible to manage multiple projects.

When creating a project, the project manager must be specified. If the project manager is not yet available as a user in test.guide, the server manager should initially be entered as project manager, and the actual user should be assigned after registration.

The project managers can then manage their project at Project settings.

We recommend to always have at least two project managers, so that a stand-in is guaranteed in case of, for example, illness or vacation.

5.7.1. Disable and delete projects

A Server Manager can disable a project at System configuration  Project management. When a project is disabled, it is not possible to view or change any data in the project, to change permissions within the project, or to apply for membership. A disabled project can be re-enabled at any time.

To delete a project, it has to be disabled first. Afterwards, a Server Manager can request its deletion. When requesting project deletion, one has the choice to specify that test.guide also removes the data stored in the project's file repositories and upload buffer. Use this option if you want to remove a project completely. However, if the data is still accessed from elsewhere (e.g. if the project was moved to another test.guide instance, but using the same storage for the repositories), this option should be left unchecked.

Deleting the data of the project might start with some delay, as it is performed by a background job.

Once project deletion has been requested, this operation cannot be cancelled. Deleting a project is irreversible.

test.guide offers several features which use email communication to notify users of new results or important events. This includes the following features:

  • User registration: Users with the User management permission are notified when a new user requests registration. The user who requested registration is notified when their request is accepted or declined.

  • License reminder: Before the test.guide license expires, a warning is sent to users with the Server manager permission.

  • Notify users: Users with the Server manager permission may send a notification to all users who have their email address set, for example to inform them about upcoming maintenance work or downtimes.

  • Filter subscription: On the Report filter page, a user may create a subscription, which automatically executes the given filter at four-hour intervals and sends a notification if new results were found.

  • Test case subscription: On the Test case detail page, a user may create a subscription, which checks at five-minute intervals whether new results exist for this test case. If so, a notification is sent.

  • Test case coverage subscription: On the Coverage filter page, a user may subscribe to a coverage filter, which will be executed automatically at 24-hour intervals. The results are sent to the user by email.

  • Test summary report: Because the creation of test summary reports can take some time, it is possible to let them run as a background task. The resulting PDF file will be sent by email.

5.8.1. Email configuration

To use these features, an SMTP server that delivers outgoing mails must be configured. This configuration is possible via the Settings  Email page. On this page, it is necessary to check Activate email and fill out the form according to the configuration of the SMTP server to be used.

Additionally, the value of the field Application base URL has to be set to the URL at which test.guide is currently accessible. For example, this may be https://localhost:8443 if the application is running locally and HTTPS is enabled. This value is used for linking to the web application: if it is not set correctly, the links embedded in mails sent by the application may not work.

By clicking the Save button, the changed configuration is enabled.

It is possible to send a test email to a certain recipient by filling in the address field and clicking the Send button at the bottom of the configuration page.

5.8.2. Setting email addresses

The above features will only work for users whose email addresses are set.

This can be done in two ways:

  • An administrative user who has the User management permission (see System configuration  User management) may add or change the email addresses of other users.

  • Each user may add an email address on their own by clicking the user name at the bottom left of the navigation section and selecting Edit profile. The page that opens allows adding an email address for the current user.

For announcements, e.g. about maintenance work, it is possible to inform all

  • registered test.guide users

  • or only selected project groups

via mail.

To be able to use this feature, the System configuration  Email settings must be configured.

The Script-Server integration is a feature with which scripts can be deployed to the FaaS (Function as a Service) framework OpenWhisk and then triggered by calling a URL. It is suitable for automating access to the test.guide API and for implementing business logic. To use the Script-Server integration, you need a Git repository in which your script is stored and an OpenWhisk instance to deploy the script to. The Script-Server integration is a preview feature and is still under development. If you have any questions or requests, feel free to contact tracetronic support.

5.10.1. Configuring the Script-Server

Enter the URL and credentials of your OpenWhisk instance and the associated CouchDB. test.guide will verify the entered data by attempting to connect to OpenWhisk.

5.10.2. Allow usage for a project

To allow your project to use the Script-Server, use the button to add your project to the list.

If a second test.guide instance is to be set up and you want to take over the settings and users, it is possible via Transfer setup to make a backup of the current settings and import it into the second, empty test.guide instance.

The transfer of the backed-up settings is only supported between test.guide instances with identical versions!

This page provides:

  • an up-to-date overview of the vital status of the server

  • the project setups

  • and server settings

This page allows enabling or disabling the maintenance mode, which is useful to inform users that test.guide is unavailable for a while (for example, when moving from one database server to another) and will not accept results.

While test.guide is in maintenance mode, all users without the Server manager permission are unable to use the application through the browser or the API. Instead, they are redirected to a page informing them about the maintenance in progress.

Furthermore, requests to the application's APIs are not processed in the usual way, but are responded to with the HTTP status code 503 Service Unavailable. This response contains a Retry-After header, which suggests that the client retry its original request after a certain time span; this time span can be specified when enabling the maintenance mode.

To return to normal operation, any user with the global Server manager permission may return to the System configuration  Maintenance mode page and disable the maintenance mode.
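
A client-side pattern for handling this 503 response, i.e. waiting for the duration indicated by the Retry-After header and then retrying, is shown in the Python readiness script in the section Checking server readiness of the Developing chapter.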

Configure the backup of all uploaded reports to an SMB storage:

In a worst-case scenario, the database can become corrupted or fail without a database backup (which is strongly recommended) being available to restore from. To reduce the impact of such an event, test.guide can copy every incoming report to a remote storage.

The Retention time option specifies how long these backed up reports are kept.

If your DBMS is configured to create snapshots of the test.guide database every two weeks, it is reasonable to choose a retention time that reflects this and provides an additional buffer. Hence, a value of 15 days would be appropriate to cover the period between automatic database backups.

5.14.1. Restoring backed up reports

Clicking Begin backup restore opens a dialog that allows restoring backups from a specified time period. The restore status may be supervised using the Task manager, where it is listed as a System task.

The dates entered are considered inclusive; for example, specifying Mar 20, 2021 to Mar 22, 2021 will restore three full days' worth of reports.

Restoring backups may also be done in Maintenance mode.

When updating test.guide, it is helpful to set test.guide to Shutdown mode beforehand. In this mode, no new processing requests are accepted; only current ones are processed.

If this mode is activated, test.guide has to be restarted to work again!

6. Customizing

Learn how to customize test.guide according to the needs of your project.

6.1. Customizing a project with dashboards

You can customize the home page of your project by using dashboards in test.guide.

To do so, you first need to create a new dashboard at Project settings  Dashboard views. At first creation, you have to specify some parameters such as the name of the dashboard and the refresh period. You can also add widgets and arrange them within the specified columns using drag and drop. Some widgets are based on a saved report filter; if you want to use one of these widgets and do not have any saved filter definitions yet, please add one beforehand.

Existing dashboards can be edited on this page as well.

To set a configured dashboard as the home page of your project, you need to activate the option "As home page" in the drop-down menu on the settings page (Project settings  Dashboard views). Only one dashboard can be set as the home page. This dashboard will also be shown first when entering the Dashboard page and when clicking on Dashboard  Home page. When no home page dashboard is defined, project statistics are displayed in its place.

After creating a dashboard, all available dashboards of a project become accessible in the menu bar under Dashboard  Dashboards on the left.

6.2. Customizing the report filter

The report filter page of a project can be customized at Project settings  Filter settings, allowing individual settings to be defined for each test.guide project.

Various customizations may be performed:

  • The default view of report filter page can be defined.

  • The filter field labels can be customized by replacing the default value.

  • Filter fields may be rearranged using the dialog opened by clicking the Sort filter button.

  • Filter fields are arranged in filter groups, which also are customizable:

    • It is possible to rename existing filter groups.

    • New filter groups can be created by using the Add group button in the drop-down menu.

    • Filter groups can be rearranged using the Sort filter button.

  • Furthermore, custom filter fields can be added via the drop-down menu. This makes it possible to provide a project-specific view on chosen attributes or constants, thus effectively simplifying the use of the report filter.

6.3. Customizing the PDF export

When creating PDF reports, e.g. by using the report filter's export view or the export menu of the coverage filter, a powerful template engine is used to create the resulting PDF document. This engine is part of the Business Intelligence and Reporting Tools (BIRT) developed by the Eclipse Foundation and uses so-called report designs as templates.

tracetronic provides default report designs, which are delivered alongside test.guide. Nevertheless, it may be desirable to adapt these templates to your own needs or to develop new report designs from scratch.

The default report design files are distributed as an integral part of tracetronic's test.guide software and may only be used in connection with and pursuant to the terms and conditions of a valid test.guide license. Copyright © by tracetronic GmbH, Dresden

BIRT is a powerful and complex framework; hence, thorough independent training on its usage is strongly recommended. The following description is intended to explain the specific aspects and recommendations of the interaction between test.guide and BIRT. For general questions about report designs, please refer to the BIRT documentation. Please note that implementing data access also requires knowledge of SQL.

6.3.1. Setting up the BIRT Report Designer IDE

Report designs are developed using the Eclipse BIRT Report Designer. We recommend using the Report Designer All In One package available here: eclipse.org (External link)

The recommended version of the Report Designer is 4.20.0-202506110821. Windows users may use the bundle with the filename birt-report-designer-all-in-one-4.20.0-202506110821-win32.win32.x86_64.zip on the above page. The all-in-one version includes the required Java SE 21 runtime (for more details, see the readme_eclipse.html contained in the mentioned download).

With this application report designs can be created, edited and (if a data source is connected) also tried out.

  • Download the Report Designer and unzip the archive to a location of your choice.

  • Start the executable contained (Windows users want to start birt.exe).

  • When prompted, create a new workspace.

Because test.guide hands over data to BIRT using a temporary H2 database, the corresponding database driver must be obtained as well.

For maximum compatibility, the required database driver version is 1.4.199. Please visit h2database.com (External link) and use the 1.4.199 Platform-Independent Zip link to download it.

  • Open the downloaded ZIP file and navigate to \h2\bin\ inside it.

  • Extract the file h2-1.4.199.jar, for example into the folder you extracted the BIRT Report Designer to.

Your setup is now ready to be used. In the following section you will learn what qualifies a valid report design and how to use the database driver for accessing a data source.

6.3.2. Customizing existing report designs

All existing report designs, including the default report designs provided by tracetronic, are available for download at Project settings  Reporting. These report designs can serve as a starting point for modifications.

If you want to make changes to an existing report design, the following procedure is recommended:

  • Start the BIRT Report Designer and create a new report project by clicking File  New  Other  Business Intelligence and Reporting Tools  Report Project.

  • Copy the existing base report design into this project. It is recommended to give the copied report design a meaningful name.

  • Open the report design within the Report Designer.

Next, the data source must be configured:

  • In the Data Explorer, open the TEST-GUIDE_DataSource.

  • It is important that the database driver (see Setting up the BIRT Report Designer IDE) of this data source is set up correctly:

    • To do so, open the data source configuration (e.g. by double-clicking it) and click the Manage drivers button.

    • In the Manage JDBC Drivers dialog click Add and choose the JAR file of the H2 database driver.

  • Modify the report design according to your needs. You may want to try it out while doing so to check if the modifications made are correct.

  • After finishing modification, save the report design file and upload it to test.guide at Project settings  Reporting.

If the desired structure and content of the PDF is very different from existing report designs, it may make sense to create a report design from scratch. When doing so, it is important not to forget about the prerequisites mentioned at Report design requirements.

6.3.3. Trying out report designs during development

During report design development, it is necessary to continuously check whether the latest changes work correctly. To avoid having to upload the report design to test.guide and execute it every single time, one can specify a fixed "example" data source in the report design, which is then used by the Report Designer. This way, developing report designs is possible even without access to test.guide.

The following procedure can be used to create a suitable "offline" data source and use it during report design development:

  • In test.guide, depending on the type of the report design, calculate either a suitable report filter or coverage filter.

The filter used should be similar to the one that will be used with the report design in production, but it is reasonable to limit the amount of data.

  • Open the dialog for exporting a PDF and select any existing report design.

  • Before starting the export, it is important to enable the option ☑ Include report data source in download.

  • Wait for the process to finish and download the ZIP file created.

  • Open the ZIP file and extract the file with the extension .mv.db. It is recommended to put this file next to the file of the report design under development.

  • In BIRT Report Designer, open the properties of the primary data source TEST-GUIDE_DataSource.

  • Modify the database URL so it points to the database file.

Please note that the URL must be a valid JDBC URL and that the file extension of the database file must not be included.

If not already done, refer to Setting up the BIRT Report Designer IDE on how to set up the JDBC driver.
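
For example (the path is hypothetical): if the extracted data source file is C:\designs\report-data.mv.db, the database URL would be jdbc:h2:C:/designs/report-data, i.e. the .mv.db extension is omitted.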

Now, when using the preview tab of a data set or when clicking the Run report button in BIRT Report Designer, data is fetched from this data source.

Regardless of the database URL setting in data source properties, test.guide will always use real data when using the report design in production.

6.3.4. Report design requirements

For a report design to be compatible with test.guide, it must meet a number of requirements. If you are creating a new report design from scratch, please make sure it meets the requirements described in this section. If instead you are modifying the default report design provided by tracetronic, these requirements are already met, but you may want to customize the report properties listed below.

Data source

Every report design must contain a JDBC Data Source named TEST-GUIDE_DataSource.

This primary data source is a fundamental element of the report design. It is the single point of data access and provides a link between the report and the rich data stock of test.guide.
Report properties

In the root element of the report design (select it by clicking the topmost entry in the Outline view of the Report Designer), some settings must be made using the Property Editor. These settings are used to manage the report designs in test.guide and ensure that the report design can be processed correctly.

  • Tab General, property Author: The author of the report design or, if applicable, a department or company should be entered here.

  • Tab User Properties: Three user properties of type String with the names DataSourceSpec, ReportDesignType and Version must be created here.

  • Tab Advanced: The previously mentioned user properties can be found in this list and their value should be set meaningfully:

    • DataSourceSpec tells test.guide which format the primary data source expects. Currently, the only valid value is: H2_1.0.0

    • ReportDesignType tells test.guide the type of the report and thus decides, in which context the report design can be used. Valid values: TEST_SUMMARY_REPORT (used at report filter), TEST_COVERAGE_REPORT (used at coverage filter)

    • Version, like the Author property, can be entered freely. Its value is displayed in test.guide when managing and using report designs. For keeping track of report designs, it is recommended to maintain semantic versioning, possibly combined with a date (e.g. 1.0.1 (2020-03-18)).

7. Developing

This section contains information for developers who want to use the API to automate processes or develop new applications based on the API.

7.1. REST-API

The API is a RESTful (Representational State Transfer) web service that provides a simple, user-friendly interface through standard HTTP request and response messages. The REST API documentation, including a tryout environment for the API, is available at /api-docs.

7.2. Checking server readiness

API calls can only be successful if the test.guide server is ready, i.e. it is fully started up and all database connections are established. Via the Health API, test.guide exposes a REST endpoint that allows clients to check for server readiness before using the API. The following Python script demonstrates how to query the endpoint and wait for test.guide's readiness:

# -*- coding: utf-8 -*-
import requests
import time
from datetime import datetime, timezone


baseUrl = r'https://your.test-guide.instance'  (1)

def sleep(retryAfter):
    '''
    Blocks until the specified retry-after point in time is reached.
    :param retryAfter: Value of the Retry-After HTTP header
    :type: str
    :return: None
    '''

    if not retryAfter:
        print("Server didn't send a Retry-After header, using default value")
        sleepSec = 60
    elif retryAfter.isdigit():
        sleepSec = int(retryAfter)
    else:
        # RFC 1123 format: Thu, 01 Dec 1994 16:00:00 GMT (always UTC)
        dateRetryAfter = datetime.strptime(retryAfter, '%a, %d %b %Y %H:%M:%S GMT')
        dateRetryAfter = dateRetryAfter.replace(tzinfo=timezone.utc)
        dateNow = datetime.now(timezone.utc)
        # Guard against negative values if the point in time has already passed
        sleepSec = max(0, int(dateRetryAfter.timestamp() - dateNow.timestamp()))

    print("Retrying after {0} seconds".format(sleepSec))
    time.sleep(sleepSec)


def waitForReadiness():
    '''
    Blocks until test.guide is ready to use.
    :return: None
    '''
    url = r'{0}/api/health/ready'.format(baseUrl)

    ready = False

    while not ready:
        response = requests.get(url, verify=False)

        if response.status_code == 200:
            ready = True
        elif response.status_code in [429, 503]:
            sleep(response.headers.get("Retry-After"))
        else:
            raise Exception("Unexpected response: {0}".format(response.status_code))

    print("test.guide is ready")


waitForReadiness()

# Proceed here with using the API
1 test.guide host URL

To use the API, it is necessary to authenticate yourself. This is realized with an authentication key and the project ID.

You can find out the current project ID by hovering over the project at the top of the navigation on the left. Authentication keys can be created and revoked in your profile at User Profile  Auth Keys.

With the following Python script you can easily check your settings:

# -*- coding: utf-8 -*-
import requests

headers = {'TestGuide-AuthKey':'nOPVJ3T6JwqUAR8skQ~yourAuthKey~'}  (1)

params = {"projectId": 1} (2)

host = r'https://your.test-guide.instance'  (3)

url = r'{0}/api/releases'.format(host)  (4)

r = requests.get(url, params=params, verify=False, headers=headers) (5)

if r.status_code < 300:
    print("API access successful.")
else:
    print("No API access! Check your settings.")
1 User authentication key is provided in the header
2 Project ID
3 test.guide host URL
4 API call: Retrieve all releases of a project
5 Parameter handling: requests takes care of proper encoding of the parameters

7.3.1. Authentication when using OpenID Connect

When test.guide is configured to use OpenID Connect, it is also possible to authenticate using a JWT Bearer Access Token instead of an authentication key. An access token must fulfill the following requirements to be accepted:

  • It must be signed by the OpenID Provider that the test.guide instance uses

  • Its audience claim (aud) must include the Client ID of the test.guide instance at the OpenID Provider

  • Its subject claim (sub) must identify a test.guide user that has before logged in to the Web interface at least once. Hence, it cannot be a technical user.

It is in the scope of the API client to obtain such an access token from the OpenID Provider. For example, the authorization code flow might be used for that.

For each API request, the API client must provide the access token in the Authorization HTTP header using the Bearer scheme.
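
As a minimal sketch (the token value and instance URL are placeholders; the /api/releases call mirrors the authentication key example above), providing the access token in a request could look like this:

# -*- coding: utf-8 -*-
import requests

# Placeholder: JWT access token obtained from the OpenID Provider beforehand
accessToken = 'eyJhbGciOi...'

host = r'https://your.test-guide.instance'
headers = {'Authorization': 'Bearer {0}'.format(accessToken)}  # Bearer scheme
params = {'projectId': 1}

r = requests.get(r'{0}/api/releases'.format(host), params=params, headers=headers)

if r.status_code < 300:
    print("API access with access token successful.")
else:
    print("No API access! Check your token.")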

7.4. X2ATX Upload

Via the X2ATX framework, it is easily possible to upload report data in formats other than ATX.

The following is an example of uploading a JUnit report and a JSON report to test.guide.

# -*- coding: utf-8 -*-
import os
import requests
import time


class TGReportUploader:

    def __init__(self, url, authKey, projectId=1):
        """
        Constructor
        :param url: test.guide URL e.g.: http://your.test-guide.instance:8085
        :type url: str
        :param authKey: test.guide upload authentication key
        :type authKey: str
        :param projectId: ID of the test.guide project into which the data is to be
                          imported
        :type projectId: int
        """
        self.__url = url.strip("/")
        self.__authKey = authKey
        self.__projectId = projectId
        self.__defaultAuthPayLoad = {'projectId':  self.__projectId}

    def UploadBundle(self, path, converter=""):
        """
        Uploads the specified test report bundle to test.guide.
        :param path: Path to the test report bundle to be uploaded.
        :type path: str
        :param converter: Id of the x2ATX converter to be used.
        :type converter: str
        """
        uploadUrl = f"{self.__url}/api/report/reports"

        payload = self.__defaultAuthPayLoad.copy()
        payload['converterId'] = converter

        print('Upload to test.guide:')
        print(f'\tUploading "{path}" to {uploadUrl}')

        headers={
            'Content-type':'application/zip',
            'Accept':'application/json',
            'TestGuide-AuthKey': self.__authKey
        }

        with open(path, 'rb') as file:
            response = requests.post(uploadUrl, params=payload, data=file, headers=headers, verify=False)

            print(f"\tUpload file status code from test.guide: {response.status_code}")
            response.raise_for_status()

            taskId = response.json().get("taskId")
            print(f'\tUpload is being processed under task id {taskId}.')
            return self.__WaitUntilUploadIsFinished(taskId)

    def __WaitUntilUploadIsFinished(self, taskId):
        headers={
            'Accept':'application/json',
            'TestGuide-AuthKey': self.__authKey
        }
        endpoint = f"{self.__url}/api/report/reports/uploadstatus/{taskId}"
        while True:
            response = requests.get(endpoint, headers=headers,
                                    params=self.__defaultAuthPayLoad)
            jsonResponse = response.json()
            uploadStatus = jsonResponse.get("status")
            if uploadStatus == 'finished':
                print(f'\tUpload task {taskId} is finished')
                uploadReturnCode = jsonResponse.get("uploadResult").get("uploadReturnCode")
                msg = jsonResponse.get("uploadResult").get("resultMessages")
                if uploadReturnCode >= 400:
                    print(f'\tUpload failed: {msg}')
                    return False
                else:
                    print(f'\tUpload successful: {msg}')
                    return True
            else:
                print(f'\tUpload task is not done yet... status: {uploadStatus}.')
                time.sleep(1)


if __name__ == '__main__':
    # test.guide Upload Parameter
    AUTHKEY = "EQmVm00=.RYk1mS21GYNhCQeqaQG0EZsFWENk5VaRsvXrVQGDako="  (1)
    URL = "http://localhost:8085"  (2)
    PROJECT_ID = 1   (3)

    # Upload File
    CUR_FOLDER = os.path.dirname(__file__)
    JSON2ATX_FILE = os.path.join(CUR_FOLDER,"JsonReportWithArtifacts.zip")  (4)
    JUNIT2ATX_FILE = os.path.join(CUR_FOLDER,"junitExample.zip")  (5)

    # x2ATX converter
    JSON_ATX_CONVERTER = "json2atx"   (6)
    JUNIT_ATX_CONVERTER = "JUnitSurefirePlugin"   (6)

    UPLOADER = TGReportUploader(URL, AUTHKEY, PROJECT_ID)
    UPLOADER.UploadBundle(JSON2ATX_FILE, JSON_ATX_CONVERTER)
    UPLOADER.UploadBundle(JUNIT2ATX_FILE, JUNIT_ATX_CONVERTER)
1 User authentication key
2 test.guide host URL
3 Project ID
4 JsonReport2ATX example or use Basic Json report example
5 JUnitReport2ATX example
6 X2ATX Converter ID specification

7.4.1. X2ATX Converter IDs

The following converter IDs are available:

DEFAULT

for ecu.test ATX report

CANoePlugin

for CANoe reports

cucumber2atx

for Cucumber reports

json2atx

A special JSON test report format. The current JSON schema is available for download at Test report management  Import ATX, if the Json2ATX converter is selected.

JUnitMatlab

for Matlab JUnit reports

JUnitSurefirePlugin

for JUnit Maven Surefire reports, e.g. from Jenkins

NUnit3Plugin

for NUnit reports

7.5. Artifact management

Artifact management in test.guide allows files for testing to be managed in an automation-friendly way. This is what the Artifact-API is for.

7.5.1. Upload Artifacts

The following code section shows the parameterization of an example artifact upload script:

# -*- coding: utf-8 -*-
import requests
import os

file = "./ConstructionSite"  (1)
projectId = 1  (2)
depositoryId = "Scenarios1"  (3)
authKey = 'rn~YourAuthKey~'  (4)
host = "https://localhost:8443"  (5)

payload = {}
payload['projectId'] = projectId
payload['depositoryId'] = depositoryId
payload['attributes'] = ['Key1=Value1', 'LaneCount=2', 'StreetType=Lane']  (6)

headers = {'TestGuide-AuthKey': authKey}  (7)

# Open the file in a context manager so it is closed after the upload
with open(file, 'rb') as fileContent:
    uploadFile = {'file': (os.path.basename(file), fileContent)}
    r = requests.post(u"{0}/api/artifact/artifacts".format(host),
                      params=payload, files=uploadFile, verify=False, headers=headers)

if r.status_code < 300:
    print("Upload successful.")
else:
    print("Upload error: {0} - {1}", r.status_code, r.content)
1 Path to the upload file
2 Project ID
3 Depository ID into which the file should be uploaded.
4 User authentication key
5 test.guide host URL
6 Additional file information that is important for finding the file later.
7 Header containing the authentication information.

The finished uploaded artifact in test.guide looks as follows: ArtifactUploadResult

7.6. Archiving project

In many projects, a large number of reports accumulate over the project duration. Not all reports are relevant for delivery or required for current development, but they take up a lot of database storage space over the years. For this reason, it is advisable to delete old reports that are no longer required or to export them out of the project.

7.6.1. Archiving Script

With the following script, it is possible to select a specific time range and export all reports in this range to a directory of your choice.

The exported reports are stored in a folder structure under <Year>/<Month>/<Day>/<executionTimestamp>-<reportId>-<testsuiteName>.zip so that they can be easily imported again, e.g. from a specific month or day.

After a successful export, the report is also deleted from test.guide, provided it is not assigned to a locked release.
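
The following minimal sketch illustrates how the archive path is composed, using the values from the example log further below (deriving the date folders from the execution timestamp in UTC is an assumption here):

# -*- coding: utf-8 -*-
import os
from datetime import datetime, timezone

# Example values taken from the log excerpt below
executionTimestamp = 1698434468.0
reportId = 280
testsuiteName = "ProjectD"

# Assumption: date folders are derived from the execution timestamp (UTC)
date = datetime.fromtimestamp(executionTimestamp, tz=timezone.utc)
archivePath = os.path.join("{0:04d}".format(date.year),
                           "{0:02d}".format(date.month),
                           "{0:02d}".format(date.day),
                           "{0}-{1}-{2}.zip".format(executionTimestamp, reportId, testsuiteName))
print(archivePath)  # e.g. 2023/10/27/1698434468.0-280-ProjectD.zip (separator depends on OS)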

The following code section shows the parameterization of the script:

    def __WaitUntilDeletionIsFinished(self, taskId: str):
        headers = {
            'Accept': 'application/json',
            'TestGuide-AuthKey': self.__authKey
        }
        endpoint = f"{self.__url}/api/report/reports/deletestatus/{taskId}"
        while True:
            response = requests.get(endpoint, headers=headers,
                                    params=self.__defaultAuthPayLoad,
                                    verify=self.REQUESTS_VERIFY)
            response.raise_for_status()

            jsonResponse = response.json()
            deleteStatus = jsonResponse.get("status")
            msg = jsonResponse.get("detailedMessage")
            if deleteStatus == 'success':
                logging.info(f'\tDeletion of task id {taskId} is finished: {msg}')
                return True
            if deleteStatus == 'error':
                if "test reports belonging to a locked release" in msg:
                    logging.warning(f'\tDeletion task aborted because: {msg}')
                    return True
                else:
                    raise RuntimeError(f'\tDeletion of task id {taskId} failed: {msg}')
            else:
                logging.debug(f'\tDeletion task is not done yet.... {deleteStatus}')
                time.sleep(5)


if __name__ == '__main__':
    # test.guide access parameter
    # Permissions to view and delete reports are required for archiving.
    AUTHKEY = "jRx1le8=.HanQ3VS7ZlqAgq1YI86P_7sS_5t5WxFKfUMyT1trZMQ="  (1)
    URL = "http://localhost:8085"  (2)
    PROJECT_ID = 1   (3)

    # Export Parameter
    ARCHIVE_FOLDER = os.path.dirname(__file__)  (4)
    ARCHIVE_START_DATE = datetime(2023, 1, 1, 0, 0)  (5)
    ARCHIVE_END_DATE = datetime(2024, 3, 31, 0, 0)  (6)
    USE_EXPORT_STORAGE = False  (7)

    ARCHIVAR = TGReportArchivist(URL, AUTHKEY, PROJECT_ID, ARCHIVE_FOLDER, USE_EXPORT_STORAGE)
    ARCHIVAR.StartArchiving(ARCHIVE_START_DATE, ARCHIVE_END_DATE)
1 User authentication key with permissions to view and delete reports
2 test.guide host URL
3 test.guide project ID
4 Directory in which the exported reports are to be archived, structured by date. Here, the directory containing the script is used.
5 Start date from which the reports are exported.
6 End date up to which day the reports are exported.
7 True if large resulting export files (> 1 GB) are to be expected. In that case, it is necessary to use the configured export/import storage.

The finished export looks as follows:

Extract from the script log
### Archive report id 280 ###
Start export of report id 280
Export of report id 280 is being processed under task id 9d2814a2-9f6c-4721-8a55-ad9384cf713d.
Export task 9d2814a2-9f6c-4721-8a55-ad9384cf713d is finished
Download report c:\...\ArchiveProject\2023\10\27\1698434468.0-280-ProjectD.zip...
Download report c:\...\ArchiveProject\2023\10\27\1698434468.0-280-ProjectD.zip finished
Start Deletion of report id 280
Deletion of report id 280 is being processed under task id 62733baa-d240-4ee0-9aee-9c031188b134.
Deletion task is not done yet....
Deletion of task id 62733baa-d240-4ee0-9aee-9c031188b134 is finished: Report was removed successfully: Report ID 280
#############################
### Archive report id 127 ###
Start export of report id 127
Export of report id 127 is being processed under task id 4fff1ffc-232f-494a-892d-71356f098924.
Export task 4fff1ffc-232f-494a-892d-71356f098924 is finished
Download report c:\...\ArchiveProject\2023\12\12\1702402808.0-127-ProjectB.zip...
Download report c:\...\ArchiveProject\2023\12\12\1702402808.0-127-ProjectB.zip finished
Start Deletion of report id 127
Deletion of report id 127 is being processed under task id 25cbb50e-d49e-4c52-8647-c153c292b626.
Deletion task aborted because: Could not remove 'TestSuiteExecution(id=127, projectId=1, name=ProjectB, start=2023-12-12T17:38:38Z, atxId=127)' (ATX report AtxId(127)) because it contains test reports belonging to a locked release.
#############################

Successful archiving then looks like this: PythonScriptToArchiveProjectDateStructure

7.7. X2Coverage (Just “cover” everything…​)

The coverage analysis in test.guide is a powerful and highly customizable feature. As the use cases and data sources differ for each customer and project, we recommend using a script to create coverage filter definitions. The script acts as an adapter between the data source against which test case executions are to be covered and the coverage filter XML representation that test.guide expects. After creating the coverage filter XML, the script can upload the coverage filter definition using test.guide's API.

7.7.1. Requirements-Excel2Coverage

Below you will find an example that creates a coverage filter definition XML from a list of requirements managed in an Excel file. The script is written in Python.

The following code section shows the parameterization of the script for testing:

if __name__ == '__main__':
    # Excel file converter specifications
    INPUT = "./Requirements.xlsx"  (1)
    OUTPUT = "./filterTree2.xml"  (2)

    # test.guide upload parameters
    AUTHKEY = "nOPVJ3T6JwqUAR8skQ~yourAuthKey~"  (3)
    URL = "https://your.test-guide.instance:8495"  (4)
    PROJECT_ID = 4  (5)
    FILTER_CATEGORY = "Requirement/Excel"  (6)
    FILTERNAME = "Excel-Export"  (7)
    IS_FILTER_PUBLIC = True  (8)
    FILTER_DESC = "List of all requirements"  (9)

    CON = Converter(INPUT, OUTPUT)
    if CON.CreateFilterTree():
        UPLOADER = TGUploader(URL, AUTHKEY, PROJECT_ID)
        UPLOADER.UploadXmlConfig(OUTPUT, FILTERNAME, FILTER_CATEGORY,
                                 IS_FILTER_PUBLIC, FILTER_DESC, True)
1 Path to the requirements Excel example file
2 Path of the coverage filter definition output file
3 User authentication key
4 test.guide host URL
5 Project ID
6 Optional category of the coverage filter.
7 Coverage filter name
8 True if the coverage filter should be publicly usable by everyone in the project, otherwise False.
9 Optional description for the coverage filter

The finished coverage in test.guide looks as follows: Excel2CoverageResult

7.8. Monitoring API

With the Monitoring API it is possible to get the current configuration of the test benches. A typical use case is, for example, selecting an available test bench for the next test execution. Furthermore, test bench data can also be added or updated via the API.

7.8.1. Upload test bench configuration

Below you will find an example of how to send test bench configurations to test.guide using the API. The script is written in Python.

The following code section shows the parameterization of the script:

# -*- coding: utf-8 -*-
from datetime import datetime
import requests


def SendTestbenchConfiguration():
    
    host = "http://localhost:8080"  (1)
    uploadUrl = "{0}/api/monitoring-upload".format(host)
    authKey = "rn~yourAuthKey~"  (2)

    # Set parameters
    payload = {}

    payload['projectId'] = 1  (3)
    payload['prio'] = 2  (4)

    # e.g. 2020-06-18T07:45:04.189+01:00
    timestamp = datetime.now().astimezone().isoformat(timespec='milliseconds')

    xmlContent = '''<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
                <CONFIGURATION>
                  <TIMESTAMP>{0}</TIMESTAMP>
                  <PATH-IDENTIFIER>TT/Level1/103/TraceTroniCar/PC1</PATH-IDENTIFIER>
                  <COMPUTER-NAME>tt-ddm560</COMPUTER-NAME>
                  <PLUGIN-REFERENCE>de.tracetronic.ttstm.monitoring.plugin.configuration.ConfigurationPlugin.1</PLUGIN-REFERENCE>
                  <CATEGORY>Electrical system</CATEGORY>
                  <LABEL>Battery</LABEL>
                  <VALUE>0356X2_Montevideo</VALUE>
                  <DESCRIPTION>An automotive battery is ...</DESCRIPTION>
                  <DETAILS>
                      <DETAIL>
                          <LABEL>Capacity</LABEL>
                          <CATEGORY></CATEGORY>
                          <VALUE>120Ah</VALUE>
                      </DETAIL>
                  </DETAILS>
                </CONFIGURATION>
                '''.format(timestamp)  (5)

    uploadFile = {'xml': (None, xmlContent, 'application/xml')}

    r = requests.post(uploadUrl, params=payload, files=uploadFile,
                      headers={'accept': 'application/json', 'TestGuide-AuthKey': authKey})

    print('Upload ResourceAdapter config to test.guide. Status code: {0}'.format(r.status_code))


SendTestbenchConfiguration()
1 test.guide host URL
2 User authentication key
3 Project ID
4 Priority with which the request should be processed. From high (1) to low (100).
5 Test bench configuration that is sent as XML datagram (see XSD schema)

After running the script and uploading the test bench configuration, the test.guide IT view will look as follows: TestbenchApiConfigUpdateResult

7.9. Report Management API

The Report Management API allows access to test reports and test case executions.

7.9.1. Exporting the test case executions of a release to an Excel sheet

This Python example script illustrates how the Release API and the Report Management API can be used together to export the contents of a release (that is, the test case executions it contains) to an Excel file.

The following excerpt from the script shows how to use it:

# pylint: disable=missing-docstring,invalid-name


class TGReleaseExport(object):

    def __init__(self, url, authKey): (1)
        '''
        Constructor.
        @param url: URL of test.guide instance
        @type url: unicode
        @param authKey: authentication key for accessing API
        @type authKey: unicode
        '''
        self.apiUrl = url + '/api/'
        self.authKey = authKey

    def exportReleaseAsExcel(self, releaseId, projectId, targetFile): (2)
        '''
        Export the release as XLSX file.
        @param releaseId: ID of the release
        @type releaseId: int
        @param projectId: ID of the project the release belongs to
        @type projectId: int
        @param targetFile: Path to target Excel file.
        '''
        tagSetId = self._getTagSetId(releaseId)
        taskId = self._startExport(tagSetId, projectId)

        self._waitUntilExportDone(taskId)
1 Create a new TGReleaseExport object by calling the constructor and supplying the URL of the test.guide host and the authentication key of a user (preferably a technical user).
2 Calling this method starts the export, waits for the export to finish and finally downloads the Excel file. To use it, supply a release ID, a matching project ID as well as the path of the target Excel file.

This can be done, for example, as follows:

from TGReleaseExport import TGReleaseExport

url = u"http://localhost:8085"
authKey = u"bkN0ZGtpdFA0bz..."

releaseId = 112
projectId = 2
targetFile = "myExcelFile.xlsx"

exporter = TGReleaseExport(url, authKey)
exporter.exportReleaseAsExcel(releaseId, projectId, targetFile)

8. Troubleshooting

8.1. Working with logs

Acquiring the log files for test.guide

  • Via test.guide: System configuration  General  Download Logfiles

  • Via the file system of the host that runs test.guide: ~TG_WORKSPACE~\TTS-TM\logs

You can set the logging level for troubleshooting at System configuration  General.

Log file Description

ops.log

Main log for operating test.guide, contains the internal logs depending on the log level plus logs of external frameworks and libraries on log level ERROR

debug.log

Log for debugging test.guide (intended mainly for developers), contains the internal logs depending on the log level

thirdparty.log

Logs from external frameworks and libraries

systeminfo.log

Contains general system information, e.g. which test.guide version is used, PostgreSQL configuration, …​

performance.log

Contains the performance logs that log critical parts of the application

db-migration.log

Logs from Liquibase and custom changes, useful for debugging problems while migrating the database

admin.log

Logs of relevant configuration changes in test.guide, e.g. deleting users

api-access.csv

Log of all REST API accesses to test.guide

usage.log

Contains the usage statistics as shown on the Usage statistics page. They are particularly useful for the operator to analyze which features are used and to size the system accordingly for the relevant use.

In addition, there may be other log files, e.g. to log special frameworks (EclipseLink, BIRT) separately or to create dedicated logs with a longer retention period for troubleshooting.

Acquiring the log files for the ResourceAdapter

  • Via the ResourceAdapter context menu: Open log

  • Via the file system of the client that runs the ResourceAdapter: ~RA_WORKSPACE~\TTS-TM-ResourcenAdapter\logs

You can set the logging level for troubleshooting in the ResourceAdapter configuration.

9. FAQs

Frequently asked questions and other interesting topics about test.guide and its connection to ecu.test can be found in the tracetronic Knowledge Base.

10. Glossary

The following glossary explains technical and professional terms that are used in the documentation.

Automotive Test Exchange Format (ATX)

ATX was created by the Association for Standardization of Automation and Measuring Systems (ASAM) as a standard for the uniform exchange of tests in the automotive area.

Issue Tracker

test.guide can interact with certain third party tools that can be summarized as "Issue Trackers". From a test.guide user perspective, these are tools for managing bug or defect tickets. Interaction with these tickets is possible in different ways as described here or here.

Management Tool / Test Management Tool

test.guide can interact with certain third party tools that can be summarized as "Test Management Tools". These tools are typically used to manage test designs as well as final test results. Many of these tools also span the entire life cycle of a product and are referred to as ALM (Application Lifecycle Management) tools. As such, they are also often used to manage requirements as well as tickets such as bug and defect tickets. test.guide can interact with these tools in different ways as described here.

ecu.test package

A package in ecu.test is an XML-based file that can contain a test case or just a library function used within a test case.

ResourceAdapter (RA)

A Java-based client application that is used as a sensor and agent for connecting test resource machines to test.guide. This application comes with test.guide.

Test bench

A test resource that can be used for executing (automated) test cases (e.g. hardware-in-the-loop test bench (HiL)).

Test infrastructure

Entirety of all test resources available for testing purposes. The test.guide module for managing all test resources is named after this.

Test resource (TR)

Generic term for a technical device used to perform tests or parts of tests like analysis (e.g. a test bench, a workstation, a virtual (SiL) test bench in the cloud, …​).

Test resource machine (TRM)

A test resource machine is always a child element of a test resource and means a system on which a ResourceAdapter can be executed (e.g. a control PC of a HiL test bench).

TRM activity

Activity of a test resource machine that can be reported by various plug-ins. Describes what is currently happening on the test resource machine.

TRM state

Aggregation of all status information for a test resource machine, which is made up of connection state, monitoring state and component state. Describes the condition or health of the test resource machine.

TRF report

TRF is the file extension of a test report generated by ecu.test. This is a proprietary format that can only be opened with ecu.test or the report viewer included with it.

Workstation

A test resource on which analysis or other CPU-intensive pre- or post-processing for tests can be carried out that do not necessarily have to run on a test bench.

x2atx (X2ATX)

A plug-in framework within test.guide to import report data from other formats than ATX.

X-in-the-loop (XiL)

An X-in-the-loop test bench means any kind of closed-loop test bench. The "X" can be replaced by "H" for hardware, "S" for software, "M" for model, "P" for processor or "V" for vehicle.

Something is missing? More documentation is coming soon!