So far in this chapter, we have seen the standard built-in reports that Test Studio offers. However, given how critical reporting is for making well-informed decisions, your organization may need more specialized reports. This section therefore takes the VS result file generated by Test Studio and suggests methods for building customized, advanced reports. These reports are the outcome of the following processes:
First, let's examine the content of these files. Open the result file of the preceding example in a text editor to view its content. As mentioned before, it contains XML data, and its most significant elements are the following:
- The TestRun element holds the id and runUser attributes.
- The TestRunConfiguration element holds the id attribute, which varies based on whether the test was run locally or, say, remotely, and the Description node, whose inner text contains the test list name.
- The ResultSummary element holds important attributes in its Counters child, which give the number of executed and non-executed tests falling under one or more criteria such as failing, passing, aborted, completed, and so on. Each of these criteria is represented by a corresponding attribute and will be referred to in the next section as a basic metric.
- The Times element holds, among the dates it records, the startTime and endTime attributes.
- The Results element contains the test ID and name, held in the testId and testName attributes respectively, along with the executionId, outcome, startTime, and endTime attributes of each result entry.

After getting familiar with the structure of the result file and its significance, we can draw a relation between its different components that allows us to build further customizable reports. In the following sections, we are going to create these reports by first submitting the data to SQL tables and then charting it via Excel pivot tables.
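As a concrete illustration of how these elements map onto rows, the following T-SQL sketch loads a result file into an XML variable and shreds the entries under Results into tabular form. It only illustrates the idea and is not the mechanism used later by the File Comparer application; the file path is hypothetical, OPENROWSET(BULK ...) assumes the SQL Server service account can read the file, and the *: wildcard is used because the TRX namespace differs between Visual Studio versions.

DECLARE @trx XML;

-- Load the raw .trx file and cast it to XML (the path is a placeholder).
SELECT @trx = CAST(BulkColumn AS XML)
FROM OPENROWSET(BULK 'C:\Results\SampleTestList.trx', SINGLE_BLOB) AS f;

-- One row per result entry found under the Results element
-- (any child element carrying a testId attribute).
SELECT
    r.value('@testId',      'nvarchar(64)')   AS TestID,
    r.value('@testName',    'nvarchar(256)')  AS TestName,
    r.value('@executionId', 'nvarchar(64)')   AS ExecutionID,
    r.value('@outcome',     'nvarchar(32)')   AS Outcome,
    r.value('@startTime',   'datetimeoffset') AS StartTime,
    r.value('@endTime',     'datetimeoffset') AS EndTime
FROM @trx.nodes('//*:Results/*[@testId]') AS R(r);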
The FC_DB database introduced in the previous chapters contains tables that suggest an underlying storage design for this XML data. The following diagram depicts the relationships between the various tables:
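The table and column names used by the report queries later in this section imply a schema along the following lines. This is a hedged reconstruction inferred from those queries; the data types, key choices, and constraints are assumptions rather than the actual FC_DB definition.

-- Hypothetical reconstruction of the FC_DB tables referenced by the report queries.
CREATE TABLE [Test] (
    TestID INT PRIMARY KEY,
    Name   NVARCHAR(256) NOT NULL
);

CREATE TABLE [TestList] (
    ListID        INT PRIMARY KEY,
    [Description] NVARCHAR(256) NOT NULL,
    TestTotal     INT NOT NULL               -- total number of tests in the list
);

-- Many-to-many link between test lists and the tests they contain.
CREATE TABLE [ListTests] (
    ListID INT NOT NULL REFERENCES [TestList](ListID),
    TestID INT NOT NULL REFERENCES [Test](TestID)
);

-- One row per test list run, mirroring the ResultSummary counters of the result file.
CREATE TABLE [ListExecution] (
    TestListID INT NOT NULL REFERENCES [TestList](ListID),
    Executed   INT, Passed  INT, [Error] INT, Failed INT,
    [TimeOut]  INT, Aborted INT, Warning INT,
    StartTime  DATETIME2
);

-- One row per individual test result, mirroring the entries under Results.
CREATE TABLE [TestExecution] (
    ExecutionID NVARCHAR(64),
    TestRunID   NVARCHAR(64),
    TestID      INT REFERENCES [Test](TestID),
    StartTime   DATETIME2,
    EndTime     DATETIME2,
    Outcome     NVARCHAR(32)
);

-- Features of the application under test, matched to test lists by description.
CREATE TABLE [FeatureList] (
    FeatureName NVARCHAR(128) NOT NULL
);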
The File Comparer application has a tab called Import Results, which serves as the tool for submitting the file results into the SQL tables. The database is already populated with some sample data coming from the execution of the test lists created in this chapter. To upload other result files, the import is performed from the Import Results tab of the File Comparer application.

Submitting the data to SQL has now granted us more flexibility in crafting reports based on the metrics extracted from any GQM (Goal-Question-Metric) activity. The forthcoming charts are examples of reports that could contribute to test reporting in your environment, either as they stand or after slight variations. The charts can be found in the File Comparer – Reports.xlsx workbook.
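Conceptually, what the import step produces is one summary row per test list run plus one row per individual test result. The following statements are only a hypothetical illustration of that outcome, using placeholder values and the column names that appear in the report queries below; they are not the actual statements issued by the Import Results tab, and they assume the referenced test list and test rows already exist.

-- Placeholder summary row for a run of test list 3 (values are illustrative only).
INSERT INTO [ListExecution]
    (TestListID, Executed, Passed, [Error], Failed, [TimeOut], Aborted, Warning, StartTime)
VALUES
    (3, 10, 9, 0, 1, 0, 0, 0, '2013-05-04T10:15:00');

-- Placeholder row for one individual test result of that run.
INSERT INTO [TestExecution]
    (ExecutionID, TestRunID, TestID, StartTime, EndTime, Outcome)
VALUES
    ('exec-0001', 'run-0001', 7, '2013-05-04T10:15:02', '2013-05-04T10:15:40', 'Passed');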
The following query retrieves, for each test list, the result counters of its most recent execution:

SELECT [Executed], [Passed], [Error], [Failed], [TimeOut], [Aborted], [Warning],
       [StartTime], tl.[Description]
FROM [ListExecution] AS le
INNER JOIN (SELECT TestListID, MAX(StartTime) AS LastExecution
            FROM [ListExecution]
            GROUP BY TestListID) AS lae
        ON le.TestListID = lae.TestListID
       AND le.StartTime = lae.LastExecution
INNER JOIN [TestList] AS tl
        ON le.TestListID = tl.ListID
The defect density chart in the File Comparer – Reports.xlsx workbook relates the number of tests that have at least one failed execution to the total number of executed tests. It is driven by the following query:

SELECT (COUNT(te.TestID) * 100) /
       (SELECT COUNT(DISTINCT TestID) FROM [TestExecution]) AS [Defect Density]
FROM (SELECT MAX(StartTime) AS startTime, TestID
      FROM [TestExecution]
      WHERE Outcome = 'Failed'
      GROUP BY TestID) AS te
To track a single test list over time, the next query returns all the recorded executions of the test list with ID 3:

SELECT [Executed], [Passed], [Error], [Failed], [TimeOut], [Aborted], [Warning],
       [StartTime], tl.[Description]
FROM [ListExecution] AS le
INNER JOIN [TestList] AS tl
        ON le.TestListID = tl.ListID
WHERE le.TestListID = 3
The test coverage chart in the File Comparer – Reports.xlsx workbook relates each feature to its covering test list, the tests it contains, and their latest outcome. It is built on the following query:

SELECT a.FeatureName, a.ListID, a.TestTotal, a.Outcome, a.Name,
       CASE WHEN a.Outcome = 'NotRun' THEN '0'
            WHEN a.Outcome IS NULL    THEN '0'
            ELSE '1'
       END AS Executed
FROM (SELECT t.FeatureName, t.ListID, t.TestTotal, lts.TestID, test.Name,
             -- latest recorded outcome for this test
             (SELECT TOP(1) Outcome
              FROM [TestExecution] AS te
              WHERE EXISTS (SELECT * FROM [TestExecution] WHERE te.TestID = lts.TestID)
              ORDER BY StartTime DESC) AS Outcome
      FROM (-- match each feature to the test list whose description ends with its name
            SELECT FeatureName, lt.ListID, lt.TestTotal
            FROM [FeatureList] AS fl
            LEFT JOIN [TestList] AS lt
                   ON lt.[Description] LIKE '%' + FeatureName) AS t
      LEFT JOIN [ListTests] AS lts ON t.ListID = lts.ListID
      LEFT JOIN [Test] AS test ON lts.TestID = test.TestID
      GROUP BY t.FeatureName, t.ListID, t.TestTotal, lts.TestID, test.Name) AS a
The resulting report draws its feature names from the [FeatureList] table. The ListID column has the ID of the test list that covers the feature appearing to its left. The TestTotal column has the total number of tests under this test list, and the Name column lists those tests' names. The Outcome column shows the last execution result for the test appearing in the Name column. The Planned Tests column holds the estimated number of tests, entered manually.

The test execution duration chart in the File Comparer – Reports.xlsx workbook is driven by the following query, which returns the duration of each passed run of the Func-1_FileCompare_Equal_Successful test:

SELECT [ExecutionID], [TestRunID], [TestID], [StartTime], [EndTime], [Outcome],
       DATEDIFF(SECOND, StartTime, EndTime) AS duration,
       'Func-1_FileCompare_Equal_Successful' AS Name
FROM [TestExecution]
WHERE TestID = (SELECT TestID FROM [Test]
                WHERE Name = 'Func-1_FileCompare_Equal_Successful')
  AND Outcome = 'Passed'
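One small note on the duration expression: DATEDIFF with SECOND counts whole-second boundaries, so very short runs are reported with coarse values. If finer granularity were ever needed, a possible variant (not the one used by the workbook) is:

-- Millisecond-based duration, expressed in seconds as a decimal value.
DATEDIFF(MILLISECOND, StartTime, EndTime) / 1000.0 AS duration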
The Row Labels column has the execution times for the Func-1_FileCompare_Equal_Successful test, and the Test Execution Duration column contains the amount of time taken by the test to finish execution. This metric uses the Func-1_FileCompare_Equal_Successful test because this chapter has executed it multiple times in its sections. The pivot table for this metric can be further customized using the field list panel contained in the Test Execution Duration sheet of the File Comparer – Reports.xlsx workbook.

The VS result file has further benefits that allow you to come up with a customized way of comparing test execution results. The File Compare tab of the File Comparer application, which we have previously automated, actually accepts these files generated by Test Studio and calculates comparison scores based on the following formula:
Test Execution Score = Total number of executed tests - (number of errors - number of failed - number of timed out - number of aborted - number of inconclusive - number of not executed - number of disconnected - number of warnings / 2).