Definition of terms
===================

This section defines the terms used in this document and correlates them with
what is currently used in QEMU.

Automated tests
---------------

An automated test is written with a test framework, using its generic test
functions/classes. The test framework can run the tests and report their
success or failure [1]_.

An automated test has essentially three parts, illustrated by the sketch
below:

1. The initialization, where the test parameters, such as inputs and expected
   results, are set up;
2. The call to the code that should be tested;
3. An assertion, comparing the result of the previous call with the expected
   result set up during the initialization. If the result matches the
   expected result, the test passes; otherwise, it fails.

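For illustration, here is a minimal sketch of such a test, written in C with
the GLib testing framework that QEMU's own unit tests are based on. The
function under test, 'sum', and the test path are hypothetical:

.. code-block:: c

   #include <glib.h>

   /* Hypothetical function under test. */
   static int sum(int a, int b)
   {
       return a + b;
   }

   static void test_sum(void)
   {
       /* 1. Initialization: set up the inputs and the expected result. */
       int a = 2, b = 3;
       int expected = 5;

       /* 2. Call the code that should be tested. */
       int result = sum(a, b);

       /* 3. Assertion: compare the actual result with the expected one. */
       g_assert_cmpint(result, ==, expected);
   }

   int main(int argc, char **argv)
   {
       g_test_init(&argc, &argv, NULL);
       g_test_add_func("/example/sum", test_sum);
       return g_test_run();
   }
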
Unit testing
------------

A unit test is responsible for exercising individual software components as a
unit, such as interfaces, data structures, and functionality, uncovering
errors within the boundaries of a component. The verification effort targets
the smallest software unit and focuses on the internal processing logic and
data structures. Unit test cases should be designed to uncover errors due to
erroneous computations, incorrect comparisons, or improper control flow [2]_.

In QEMU, unit testing is represented by the 'check-unit' target of 'make'.

Functional testing
------------------

A functional test focuses on the functional requirements of the software.
By deriving sets of input conditions, functional tests should fully exercise
all the functional requirements of a program. Functional testing is
complementary to other testing techniques, attempting to find errors like
incorrect or missing functions, interface errors, behavior errors, and
initialization and termination errors [3]_.

In QEMU, functional testing is represented by the 'check-qtest' target of
'make'.

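As a loose sketch, not a test taken from the QEMU tree, a qtest-based
functional test starts a QEMU instance and exercises a device through its
guest-visible interfaces. Here the test probes the PC RTC through its I/O
ports; the test path and the exact check are illustrative only, and the file
is assumed to be built inside the QEMU tree, where 'libqtest.h' lives:

.. code-block:: c

   #include "qemu/osdep.h"
   #include "libqtest.h"

   static void test_rtc_seconds(void)
   {
       /* Start a QEMU instance to test against. */
       QTestState *s = qtest_init("-machine pc");

       /* Exercise the device through its guest-visible interface:
        * select the RTC seconds register (index 0x00) and read it. */
       qtest_outb(s, 0x70, 0x00);
       uint8_t seconds = qtest_inb(s, 0x71);

       /* Check the observed behavior: the seconds register must stay
        * within its valid range (at most 0x59 in the default BCD mode). */
       g_assert_cmpuint(seconds, <=, 0x59);

       qtest_quit(s);
   }

   int main(int argc, char **argv)
   {
       g_test_init(&argc, &argv, NULL);
       qtest_add_func("/example/rtc-seconds", test_rtc_seconds);
       return g_test_run();
   }
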
System testing
--------------

System tests ensure that all application elements mesh properly and that the
overall functionality and performance are achieved [4]_. Some or all system
components are integrated to create a complete system to be tested as a
whole. System testing ensures that components are compatible, interact
correctly, and transfer the right data at the right time across their
interfaces. As system testing focuses on interactions, use-case-based testing
is a practical approach to system testing [5]_. Note that, in some cases,
system testing may require interaction with third-party software, like
operating system images, databases, networks, and so on.

In QEMU, system testing is represented by the 'check-avocado' target of
'make'.

Flaky tests
-----------

A flaky test is a test that exhibits both a passing and a failing result with
the same code on different runs. Common causes of intermittent/flaky tests
are asynchronous waits, concurrency, and test order dependency [6]_.

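As a contrived sketch of the first cause, the following test is flaky because
it relies on a fixed sleep (an asynchronous wait) instead of synchronizing
with the worker thread, so whether it passes depends on scheduling:

.. code-block:: c

   #include <glib.h>

   static int result;

   static gpointer worker(gpointer data)
   {
       /* Simulate work whose duration varies from run to run. */
       g_usleep(g_random_int_range(1, 20) * 1000);
       result = 42;
       return NULL;
   }

   static void test_async_flaky(void)
   {
       result = 0;
       GThread *t = g_thread_new("worker", worker, NULL);

       /* Flaky: a fixed sleep is not synchronization. If the worker
        * takes longer than 10 ms, the assertion below fails; joining
        * the thread before asserting would make the test reliable. */
       g_usleep(10 * 1000);
       g_assert_cmpint(result, ==, 42);

       g_thread_join(t);
   }

   int main(int argc, char **argv)
   {
       g_test_init(&argc, &argv, NULL);
       g_test_add_func("/example/async-flaky", test_async_flaky);
       return g_test_run();
   }
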
Gating
------

A gate restricts the movement of code from one stage of a test/deployment
pipeline to the next. The move to the next stage requires an approval, which
can be a manual intervention or a set of tests succeeding [7]_.

In QEMU, the gating process happens during the pull request: the project
leader approves it by running their own set of tests, and the pull request
gets merged when the tests succeed.

Continuous Integration (CI)
---------------------------

Continuous integration (CI) requires building the entire application and
executing a comprehensive set of automated tests every time a set of changes
needs to be committed [8]_. The automated tests can be composed of unit,
functional, system, and other tests.

Key points about continuous integration (CI) [9]_:

1. System tests may depend on external software (operating system images,
   firmware, databases, networks).
2. Building and testing may take a long time; it may be impractical to build
   the system being developed several times per day.
3. If the development platform is different from the target platform, it may
   not be possible to run system tests in the developer’s private workspace.
   There may be differences in hardware, operating system, or installed
   software. Therefore, more time is required for testing the system.

References
----------

.. [1] Sommerville, Ian (2016). Software Engineering. p. 233.
.. [2] Pressman, Roger S. & Maxim, Bruce R. (2020). Software Engineering,
       A Practitioner’s Approach. pp. 48, 376, 378, 381.
.. [3] Pressman, Roger S. & Maxim, Bruce R. (2020). Software Engineering,
       A Practitioner’s Approach. p. 388.
.. [4] Pressman, Roger S. & Maxim, Bruce R. (2020). Software Engineering,
       A Practitioner’s Approach. p. 377.
.. [5] Sommerville, Ian (2016). Software Engineering. pp. 59, 232, 240.
.. [6] Luo, Qingzhou, et al. (2014). An Empirical Analysis of Flaky Tests.
       Proceedings of the 22nd ACM SIGSOFT International Symposium on
       Foundations of Software Engineering.
.. [7] Humble, Jez & Farley, David (2010). Continuous Delivery: Reliable
       Software Releases Through Build, Test, and Deployment Automation.
       p. 122.
.. [8] Humble, Jez & Farley, David (2010). Continuous Delivery: Reliable
       Software Releases Through Build, Test, and Deployment Automation.
       p. 55.
.. [9] Sommerville, Ian (2016). Software Engineering. p. 743.