.. SPDX-License-Identifier: GPL-2.0

========================================
The Kernel Test Anything Protocol (KTAP)
========================================

TAP, or the Test Anything Protocol, is a format for specifying test results
used by a number of projects. Its website and specification are found at this
`link <https://testanything.org/>`_. The Linux Kernel largely uses TAP output
for test results. However, kernel testing frameworks have special needs for
test results which don't align with the original TAP specification. Thus, a
"Kernel TAP" (KTAP) format is specified to extend and alter TAP to support
these use-cases. This specification describes the generally accepted format of
KTAP as it is currently used in the kernel.

KTAP test results describe a series of tests (which may be nested: i.e., tests
can have subtests), each of which can contain both diagnostic data -- e.g., log
lines -- and a final result. The test structure and results are
machine-readable, whereas the diagnostic data is unstructured and is there to
aid human debugging.

KTAP output is built from four different types of lines:

- Version lines
- Plan lines
- Test case result lines
- Diagnostic lines

In general, valid KTAP output should also form valid TAP output, but some
information, in particular nested test results, may be lost. Also note that
there is a stagnant draft specification for TAP14; KTAP diverges from it in
a couple of places (notably the "Subtest" header), which are described where
relevant later in this document.

Version lines
-------------

All KTAP-formatted results begin with a "version line" which specifies which
version of the (K)TAP standard the result is compliant with.

For example:

- "KTAP version 1"
- "TAP version 13"
- "TAP version 14"

Note that, in KTAP, subtests also begin with a version line, which denotes the
start of the nested test results. This differs from TAP14, which uses a
separate "Subtest" line.

While, going forward, "KTAP version 1" should be used by compliant tests, it
is expected that most parsers and other tooling will accept the other versions
listed here for compatibility with existing tests and frameworks.

Plan lines
----------

A test plan provides the number of tests (or subtests) in the KTAP output.

Plan lines must follow the format of "1..N", where N is the number of tests or
subtests. Plan lines follow version lines and indicate the number of tests (or
nested subtests) at that nesting level.

While there are cases where the number of tests is not known in advance -- in
which case the test plan may be omitted -- it is strongly recommended one is
present where possible.
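As an illustration of the two line formats above, a consumer of KTAP output
might recognise version and plan lines with patterns along the following
lines. This is a minimal sketch, not a description of any in-tree tool; the
regular expressions and names used here are only examples.

.. code-block:: python

  import re

  # Version lines: "KTAP version 1", "TAP version 13", "TAP version 14".
  # Leading whitespace is allowed so that indented (nested) lines also match.
  VERSION_LINE = re.compile(r"^\s*(KTAP|TAP) version (\d+)$")
  # Plan lines: "1..N", where N is the number of tests or subtests.
  PLAN_LINE = re.compile(r"^\s*1\.\.(\d+)$")

  def expected_test_count(line):
      """Return N from a "1..N" plan line, or None if this is not a plan line."""
      match = PLAN_LINE.match(line)
      return int(match.group(1)) if match else None

  assert VERSION_LINE.match("KTAP version 1")
  assert expected_test_count("1..4") == 4
  assert expected_test_count("ok 1 test_case_name") is None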
Test case result lines
----------------------

Test case result lines indicate the final status of a test.
They are required and must have the format:

.. code-block::

  <result> <number> [<description>][ # [<directive>] [<diagnostic data>]]

The result can be either "ok", which indicates the test case passed,
or "not ok", which indicates that the test case failed.

<number> represents the number of the test being performed. The first test must
have the number 1 and the number then must increase by 1 for each additional
subtest within the same test at the same nesting level.

The description is a description of the test, generally the name of the test,
and can be any string of words (it cannot include the "#" character). The
description is optional, but recommended.

The directive and any diagnostic data are optional. If either is present, it
must follow a hash sign, "#".

A directive is a keyword that indicates a different outcome for a test other
than passed and failed. The directive is optional, and consists of a single
keyword preceding the diagnostic data. In the event that a parser encounters
a directive it doesn't support, it should fall back to the "ok" / "not ok"
result.

Currently accepted directives are:

- "SKIP", which indicates a test was skipped (note the result of the test case
  result line can be either "ok" or "not ok" if the SKIP directive is used)
- "TODO", which indicates that a test is not expected to pass at the moment,
  e.g. because the feature it is testing is known to be broken. While this
  directive is inherited from TAP, its use in the kernel is discouraged.
- "XFAIL", which indicates that a test is expected to fail. This is similar
  to "TODO", above, and is used by some kselftest tests.
- "TIMEOUT", which indicates a test has timed out (note the result of the test
  case result line should be "not ok" if the TIMEOUT directive is used)
- "ERROR", which indicates that the execution of a test has failed due to a
  specific error that is included in the diagnostic data (note the result of
  the test case result line should be "not ok" if the ERROR directive is used)

The diagnostic data is a plain-text field which contains any additional details
about why this result was produced. This is typically an error message for ERROR
or failed tests, or a description of missing dependencies for a SKIP result.

The diagnostic data field is optional, and results which have neither a
directive nor any diagnostic data do not need to include the "#" field
separator.

Example result lines include:

.. code-block::

  ok 1 test_case_name

The test "test_case_name" passed.

.. code-block::

  not ok 1 test_case_name

The test "test_case_name" failed.

.. code-block::

  ok 1 test # SKIP necessary dependency unavailable

The test "test" was SKIPPED with the diagnostic message "necessary dependency
unavailable".

.. code-block::

  not ok 1 test # TIMEOUT 30 seconds

The test "test" timed out, with diagnostic data "30 seconds".

.. code-block::

  ok 5 check return code # rcode=0

The test "check return code" passed, with additional diagnostic data "rcode=0".
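As an illustration of the result line format above, a parser could split such a
line into its fields roughly as follows. This is only a sketch, not a normative
grammar; the regular expression and names are illustrative assumptions.

.. code-block:: python

  import re

  # <result> <number> [<description>][ # [<directive>] [<diagnostic data>]]
  # Leading whitespace is allowed so that indented (nested) result lines match.
  RESULT_LINE = re.compile(
      r"^\s*(?P<result>ok|not ok) (?P<number>\d+)"
      r"(?: (?P<description>[^#]*?))?"
      r"(?: # (?P<directive>(?:SKIP|TODO|XFAIL|TIMEOUT|ERROR)\b)?"
      r"\s*(?P<diagnostic>.*))?$"
  )

  line = "ok 1 test # SKIP necessary dependency unavailable"
  fields = RESULT_LINE.match(line).groupdict()
  # fields == {"result": "ok", "number": "1", "description": "test",
  #            "directive": "SKIP",
  #            "diagnostic": "necessary dependency unavailable"}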
Diagnostic lines
----------------

If tests wish to output any further information, they should do so using
"diagnostic lines". Diagnostic lines are optional, freeform text, and are
often used to describe what is being tested and any intermediate results in
more detail than the final result line and its diagnostic data provide.

Diagnostic lines are formatted as "# <diagnostic_description>", where the
description can be any string. Diagnostic lines can be anywhere in the test
output. As a rule, diagnostic lines regarding a test are placed directly
before the test result line for that test.

Note that most tools will treat unknown lines (see below) as diagnostic lines,
even if they do not start with a "#": this is to capture any other useful
kernel output which may help debug the test. It is nevertheless recommended
that tests always prefix any diagnostic output they have with a "#" character.

Unknown lines
-------------

There may be lines within KTAP output that do not follow any of the four line
formats described above. This is allowed; however, such lines will not
influence the status of the tests.

Nested tests
------------

In KTAP, tests can be nested. This is done by having a test include within its
output an entire set of KTAP-formatted results. This can be used to categorize
and group related tests, or to split out different results from the same test.

The "parent" test's output should contain all of its subtests' results,
starting with another KTAP version line and test plan, and ending with the
parent's overall result line. If one of the subtests fails, for example, the
parent test should also fail.

Additionally, all result lines in a subtest should be indented. One level of
indentation is two spaces: "  ". The indentation should begin at the version
line and should end before the parent test's result line.

An example of a test with two nested subtests:

.. code-block::

  KTAP version 1
  1..1
    KTAP version 1
    1..2
    ok 1 test_1
    not ok 2 test_2
  # example failed
  not ok 1 example

An example format with multiple levels of nested testing:

.. code-block::

  KTAP version 1
  1..2
    KTAP version 1
    1..2
      KTAP version 1
      1..2
      not ok 1 test_1
      ok 2 test_2
    not ok 1 test_3
    ok 2 test_4 # SKIP
  not ok 1 example_test_1
  ok 2 example_test_2

Major differences between TAP and KTAP
--------------------------------------

Note the major differences between the TAP and KTAP specifications:

- YAML and JSON are not recommended in diagnostic messages
- the TODO directive is accepted, but its use in the kernel is discouraged
- KTAP allows for an arbitrary number of tests to be nested

The TAP14 specification does permit nested tests, but instead of using another
nested version line, uses a line of the form "Subtest: <name>" where <name> is
the name of the parent test.

Example KTAP output
--------------------

.. code-block::

  KTAP version 1
  1..1
    KTAP version 1
    1..3
      KTAP version 1
      1..1
      # test_1: initializing test_1
      ok 1 test_1
    ok 1 example_test_1
      KTAP version 1
      1..2
      ok 1 test_1 # SKIP test_1 skipped
      ok 2 test_2
    ok 2 example_test_2
      KTAP version 1
      1..3
      ok 1 test_1
      # test_2: FAIL
      not ok 2 test_2
      ok 3 test_3 # SKIP test_3 skipped
    not ok 3 example_test_3
  not ok 1 main_test

This output defines the following hierarchy:

A single test called "main_test", which fails, and has three subtests:

- "example_test_1", which passes, and has one subtest:

  - "test_1", which passes, and outputs the diagnostic message "test_1:
    initializing test_1"

- "example_test_2", which passes, and has two subtests:

  - "test_1", which is skipped, with the explanation "test_1 skipped"
  - "test_2", which passes

- "example_test_3", which fails, and has three subtests:

  - "test_1", which passes
  - "test_2", which outputs the diagnostic line "test_2: FAIL", and fails
  - "test_3", which is skipped with the explanation "test_3 skipped"

Note that the individual subtests with the same names do not conflict, as they
are found in different parent tests. This output also exhibits some sensible
rules for "bubbling up" test results: a test fails if any of its subtests fail.
Skipped tests do not affect the result of the parent test (though it often
makes sense for a test to be marked skipped if *all* of its subtests have been
skipped).
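The "bubbling up" behaviour described above can be summarised with a small
sketch. The simplified three-state status, the enum, and the function name are
illustrative assumptions, not part of the specification.

.. code-block:: python

  from enum import Enum

  class Status(Enum):
      PASS = "pass"   # "ok"
      FAIL = "fail"   # "not ok"
      SKIP = "skip"   # "ok"/"not ok" with the SKIP directive

  def parent_status(subtest_statuses):
      """Derive a parent test's status from its subtests' statuses.

      A parent fails if any of its subtests fail; skipped subtests do not
      affect the parent, though a parent whose subtests were all skipped may
      itself be reported as skipped.
      """
      if any(s is Status.FAIL for s in subtest_statuses):
          return Status.FAIL
      if subtest_statuses and all(s is Status.SKIP for s in subtest_statuses):
          return Status.SKIP
      return Status.PASS

  # "main_test" above: example_test_1 and example_test_2 pass, example_test_3 fails.
  assert parent_status([Status.PASS, Status.PASS, Status.FAIL]) is Status.FAIL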
See also:
---------

- The TAP specification:
  https://testanything.org/tap-version-13-specification.html
- The (stagnant) TAP version 14 specification:
  https://github.com/TestAnything/Specification/blob/tap-14-specification/specification.md
- The kselftest documentation:
  Documentation/dev-tools/kselftest.rst
- The KUnit documentation:
  Documentation/dev-tools/kunit/index.rst