Logging of the TAS and the SUT

Logging is very important in a TAS, both for the test automation itself and for the SUT. Test logs are a resource that is frequently used to analyze potential problems. The following section gives examples of test logging, categorized by TAS or SUT.

TAS logging (whether the TAF or the test case itself logs the information is less important and depends on the context) should include the following:

  • Which test case is currently executing, including start and end times.
  • The status of the test case execution. While errors can easily be identified in the log files, the framework itself should also have this information and report on it via a dashboard. The execution status can be pass, fail, or TAS error; TAS error is used for situations where the problem is not in the SUT but in the test automation itself (see the first sketch following this list).
  • High-level test log details (logging of significant steps), including timing information.
  • Dynamic information about the SUT (e.g., memory leaks) that the test case was able to identify using third-party tools. Actual results and errors of these dynamic measurements should be logged with the test case that was running at the time the incident was discovered.
  • In the case of reliability testing/stress testing (where numerous cycles are performed), a counter should be logged so that it can be easily determined how many times test cases were executed.
  • If test cases have random parts (e.g. random parameters or random steps in state machine tests), the random numbers or random selection should be logged.
  • All actions performed by a test case should be logged in such a way that the log file (or parts of it) can be replayed to run the test again with exactly the same steps and schedule. This is useful for verifying the reproducibility of a detected failure and for capturing additional information (see the second sketch following this list).
    The information about the test case actions can also be logged on the SUT itself for later reproduction
    (the customer runs the scenario, the log information is recorded, and the development team can then replay it during troubleshooting).
  • Screenshots and other visual recordings can be saved during test execution for further use in bug analysis.
  • If a test case fails, the TAS should ensure that all information needed to analyze the problem is available/saved, as well as any information on how to continue the test, if applicable. All associated crash dumps and stack traces should be stored by the TAS in a secure location. Also, any log files that could be overwritten (cyclic buffers are often used for log files on the SUT) should be copied to this location where they are available for later analysis.
  • The use of colors can help distinguish different types of logged information (e.g., errors in red, progress information in green).
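The syllabus describes these requirements without prescribing an implementation. As a first illustration, the following minimal sketch shows how a TAS might log the test case name, start and end times, execution status (pass, fail, or TAS error), a cycle counter, and the random seed; the logger name, the TestStatus values, and the run_test_case helper are illustrative assumptions, not part of the syllabus.

```python
import logging
import random
import time
from enum import Enum

# Status values as described above: pass, fail, or TAS error
# (the latter flags problems in the automation itself, not in the SUT).
class TestStatus(Enum):
    PASS = "PASS"
    FAIL = "FAIL"
    TAS_ERROR = "TAS ERROR"

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",  # timestamps ease log correlation
)
log = logging.getLogger("tas")

def run_test_case(name, test_func, cycle=1, seed=None):
    """Run one test case and log its name, cycle counter, seed, timing, and status."""
    seed = seed if seed is not None else random.randrange(2**32)
    random.seed(seed)
    # Log the seed so any random parameters or random steps can be reproduced exactly.
    log.info("START test case %s (cycle %d, seed %d)", name, cycle, seed)
    start = time.time()
    try:
        test_func()
        status = TestStatus.PASS
    except AssertionError:
        status = TestStatus.FAIL          # the SUT did not behave as expected
    except Exception:
        status = TestStatus.TAS_ERROR     # the automation itself failed
        log.exception("TAS error while executing %s", name)
    duration = time.time() - start
    log.info("END test case %s: %s (%.2f s)", name, status.value, duration)
    return status
```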
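The replayable action log mentioned above could, for example, be written as one JSON line per action, with a timestamp and the action's parameters, so that a run can be repeated with the same steps and schedule. This is only a sketch under those assumptions; the file name and the perform_action callback are hypothetical.

```python
import json
import time

ACTION_LOG = "actions.log"  # hypothetical file name

def log_action(action, **params):
    """Append one executed test action as a JSON line, with a timestamp."""
    record = {"t": time.time(), "action": action, "params": params}
    with open(ACTION_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

def replay(perform_action, keep_schedule=True):
    """Re-run the logged actions with the same parameters and (optionally) timing."""
    with open(ACTION_LOG) as f:
        records = [json.loads(line) for line in f]
    for prev, rec in zip([None] + records, records):
        if keep_schedule and prev is not None:
            time.sleep(rec["t"] - prev["t"])  # preserve the original schedule
        perform_action(rec["action"], **rec["params"])
```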

SUT logging should include the following:

  • When the SUT identifies a problem, all necessary information needed to analyze the problem should be logged, including date and time stamps, source location of the problem, error messages, etc.
  • The SUT can log all user interactions (directly through the available user interface, but also through network interfaces, etc.). In this way, problems identified by customers can be properly analyzed and development can attempt to reproduce the problem.
  • When the system is started, configuration information should be logged to a file, including the various software/firmware versions, the configuration of the SUT, the configuration of the operating system, etc. (a sketch follows this list).
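As an illustration of the last point, a SUT could write its start-up configuration to the log roughly as follows; the version strings, configuration keys, and logger name are placeholders, not values from the syllabus.

```python
import json
import logging
import platform
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("sut")

def log_startup_configuration(sut_config):
    """Log software versions, SUT configuration, and OS details at system start."""
    snapshot = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "software_version": "1.4.2",            # placeholder version string
        "firmware_version": "FW-0815",          # placeholder version string
        "sut_configuration": sut_config,        # e.g., feature flags, connected devices
        "operating_system": platform.platform(),
    }
    log.info("Startup configuration: %s", json.dumps(snapshot))

# Example call with a hypothetical configuration dictionary:
log_startup_configuration({"mode": "production", "interfaces": ["eth0", "can0"]})
```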

The various log information should be easily searchable. A problem identified by the TAS in its log file should also be easy to locate in the log file of the SUT, and vice versa (with or without additional tools).
Synchronizing the different logs on a common timestamp makes it easier to associate what happened in each component when an error was reported.
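Assuming both the TAS and the SUT prefix every log line with an ISO-8601 timestamp, a simple way to correlate them is to merge both files into one chronological timeline; the file names and line format below are assumptions.

```python
from datetime import datetime

def parse(line, source):
    """Split a log line of the form '<ISO timestamp> <message>' into its parts."""
    stamp, _, message = line.partition(" ")
    return datetime.fromisoformat(stamp), source, message.rstrip()

def merged_timeline(tas_log="tas.log", sut_log="sut.log"):
    """Merge TAS and SUT log lines into one chronologically ordered list."""
    entries = []
    for path, source in ((tas_log, "TAS"), (sut_log, "SUT")):
        with open(path) as f:
            entries.extend(parse(line, source) for line in f if line.strip())
    return sorted(entries)  # tuples sort by timestamp first

# Print what happened in both components around the time an error was reported:
for stamp, source, message in merged_timeline():
    print(f"{stamp.isoformat()} [{source}] {message}")
```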

Source: ISTQB®: CTAL Test Automation Engineer Syllabus Version 1.0
