History log of /openbmc/linux/tools/testing/kunit/kunit_parser.py (Results 1 – 25 of 54)
Revision    Date    Author    Comments
Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5
# 29482da8 07-Dec-2023 Rae Moar <rmoar@google.com>

kunit: tool: fix parsing of test attributes

[ Upstream commit 8ae27bc7fff4ef467a7964821a6cedb34a05d3b2 ]

Add parsing of attributes as diagnostic data. Fixes issue with test plan
being parsed incorrectly as diagnostic data when located after
suite-level attributes.

Note that if there does not exist a test plan line, the diagnostic lines
between the suite header and the first result will be saved in the suite
log rather than the first test case log.

Signed-off-by: Rae Moar <rmoar@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>



Revision tags: v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42
# 723c8258 25-Jul-2023 Rae Moar <rmoar@google.com>

kunit: tool: Add command line interface to filter and report attributes

Add ability to kunit.py to filter attributes and report a list of tests
including attributes without running tests.

Add flag "--filter" to input filters on test attributes. Tests will be
filtered out if they do not match all inputted filters.

Example: --filter speed=slow (This filter would run only the tests that are
marked as slow)

Filters have operations: <, >, <=, >=, !=, and =. But note that the
characters < and > are often interpreted by the shell, so they may need to
be quoted or escaped.

Example: --filter "speed>slow" or --filter speed\>slow (This filter would
run only the tests that have the speed faster than slow.

Additionally, multiple filters can be used.

Example: --filter "speed=slow, module!=example" (This filter would run
only the tests that have the speed slow and are not in the "example"
module)

Note if the user wants to skip filtered tests instead of not
running/showing them use the "--filter_action=skip" flag instead.

Expose the output of kunit.action=list option with flag "--list_tests" to
output a list of tests. Additionally, add flag "--list_tests_attr" to
output a list of tests and their attributes. These flags are useful to see
tests and test attributes without needing to run tests.

Example of the output of "--list_tests_attr":
example
example.test_1
example.test_2
# example.test_2.speed: slow

This output includes a suite, example, with two test cases, test_1 and
test_2. And in this instance test_2 has been marked as slow.

Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Rae Moar <rmoar@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
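
For illustration (not code from this patch), a filter expression such as
"speed>slow" can be split into an attribute, an operator, and a value; the
helper below is a hypothetical sketch of that split:

    import re

    # Hypothetical helper: split "speed>slow" into ("speed", ">", "slow").
    def parse_filter_expr(expr: str):
        m = re.match(r'^\s*([^<>!=]+?)\s*(<=|>=|!=|<|>|=)\s*(.+?)\s*$', expr)
        if m is None:
            raise ValueError(f'invalid filter expression: {expr!r}')
        return m.group(1), m.group(2), m.group(3)

    print(parse_filter_expr('module!=example'))   # ('module', '!=', 'example')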



Revision tags: v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23, v6.1.22, v6.1.21, v6.1.20
# 126901ba 16-Mar-2023 Daniel Latypov <dlatypov@google.com>

kunit: tool: remove unused imports and variables

We don't run a linter regularly over kunit.py code (the default settings
on most linters don't like kernel style, e.g. tabs), so some of these imports
didn't get removed when they stopped being used.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>



Revision tags: v6.1.19, v6.1.18, v6.1.17, v6.1.16, v6.1.15, v6.1.14, v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8, v6.1.7, v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1, v6.0.12, v6.0.11
# c2bb92bc 30-Nov-2022 Daniel Latypov <dlatypov@google.com>

kunit: tool: make parser preserve whitespace when printing test log

Currently, kunit_parser.py is stripping all leading whitespace to make
parsing easier. But this means we can't accurately show kernel output
for failing tests or when the kernel crashes.

Embarrassingly, this affects even KUnit's own output, e.g.
[13:40:46] Expected 2 + 1 == 2, but
[13:40:46] 2 + 1 == 3 (0x3)
[13:40:46] not ok 1 example_simple_test
[13:40:46] [FAILED] example_simple_test

After this change, here's what the output in context would look like
[13:40:46] =================== example (4 subtests) ===================
[13:40:46] # example_simple_test: initializing
[13:40:46] # example_simple_test: EXPECTATION FAILED at lib/kunit/kunit-example-test.c:29
[13:40:46] Expected 2 + 1 == 2, but
[13:40:46] 2 + 1 == 3 (0x3)
[13:40:46] [FAILED] example_simple_test
[13:40:46] [SKIPPED] example_skip_test
[13:40:46] [SKIPPED] example_mark_skipped_test
[13:40:46] [PASSED] example_all_expect_macros_test
[13:40:46] # example: initializing suite
[13:40:46] # example: pass:1 fail:1 skip:2 total:4
[13:40:46] # Totals: pass:1 fail:1 skip:2 total:4
[13:40:46] ===================== [FAILED] example =====================

This example shows one minor cosmetic defect this approach has.
The test-count lines prevent us from dedenting the suite-level output.
But at the same time, any form of non-KUnit output would do the same
unless it happened to be indented as well.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
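
To illustrate the idea (this is not the parser's code), one way to keep the
raw line for the log while matching on a stripped copy:

    # Hypothetical sketch: match against a stripped copy, but log the raw line.
    def handle_line(raw_line: str, log: list) -> None:
        log.append(raw_line.rstrip('\n'))   # preserve leading whitespace in the log
        stripped = raw_line.lstrip()        # anchored regexes match on this copy
        if stripped.startswith(('ok ', 'not ok ')):
            pass  # parse the result line here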



# 5937e0c0 28-Nov-2022 Daniel Latypov <dlatypov@google.com>

kunit: tool: don't include KTAP headers and the like in the test log

We print the "test log" on failure.
This is meant to be all the kernel output that happened during the test.

But we also include the special KTAP lines in it, which are often
redundant.

E.g. we include the "not ok" line in the log, right before we print
that the test case failed...
[13:51:48] Expected 2 + 1 == 2, but
[13:51:48] 2 + 1 == 3 (0x3)
[13:51:48] not ok 1 example_simple_test
[13:51:48] [FAILED] example_simple_test

A more complete example after this patch:
[13:51:48] =================== example (4 subtests) ===================
[13:51:48] # example_simple_test: initializing
[13:51:48] # example_simple_test: EXPECTATION FAILED at lib/kunit/kunit-example-test.c:29
[13:51:48] Expected 2 + 1 == 2, but
[13:51:48] 2 + 1 == 3 (0x3)
[13:51:48] [FAILED] example_simple_test

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>



Revision tags: v6.0.10, v5.15.80
# 434498a6 23-Nov-2022 Rae Moar <rmoar@google.com>

kunit: tool: parse KTAP compliant test output

Change the KUnit parser to be able to parse test output that complies with
the KTAP version 1 specification format found here:
https://kernel.org/doc/html/latest/dev-tools/ktap.html. Ensure the parser
is able to parse tests with the original KUnit test output format as
well.

KUnit parser now accepts any of the following test output formats:

Original KUnit test output format:

TAP version 14
1..1
# Subtest: kunit-test-suite
1..3
ok 1 - kunit_test_1
ok 2 - kunit_test_2
ok 3 - kunit_test_3
# kunit-test-suite: pass:3 fail:0 skip:0 total:3
# Totals: pass:3 fail:0 skip:0 total:3
ok 1 - kunit-test-suite

KTAP version 1 test output format:

KTAP version 1
1..1
KTAP version 1
1..3
ok 1 kunit_test_1
ok 2 kunit_test_2
ok 3 kunit_test_3
ok 1 kunit-test-suite

New KUnit test output format (changes made in the next patch of
this series):

KTAP version 1
1..1
KTAP version 1
# Subtest: kunit-test-suite
1..3
ok 1 kunit_test_1
ok 2 kunit_test_2
ok 3 kunit_test_3
# kunit-test-suite: pass:3 fail:0 skip:0 total:3
# Totals: pass:3 fail:0 skip:0 total:3
ok 1 kunit-test-suite

Signed-off-by: Rae Moar <rmoar@google.com>
Reviewed-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
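
As a hypothetical sketch (not the parser's actual pattern), a single regex
can accept both the "ok 1 - name" and "ok 1 name" forms shown above:

    import re

    # Illustrative only: matches "ok 1 - kunit_test_1" as well as "ok 1 kunit_test_1".
    RESULT_LINE = re.compile(r'^(ok|not ok) (\d+)(?: - )? ?(.*)$')

    for line in ('ok 1 - kunit_test_1', 'not ok 2 kunit_test_2'):
        status, number, name = RESULT_LINE.match(line).groups()
        print(status, number, name)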



Revision tags: v6.0.9, v5.15.79
# 0a7d5c30 10-Nov-2022 Daniel Latypov <dlatypov@google.com>

kunit: tool: tweak error message when no KTAP found

We currently tell people we "couldn't find any KTAP output" with no
indication as to what this might mean.

After this patch, we get:

$ ./tools/testing/kunit/kunit.py parse /dev/null
============================================================
[ERROR] Test: <missing>: Could not find any KTAP output. Did any KUnit tests run?
============================================================
Testing complete. Ran 0 tests: errors: 1

Note: we could try and generate a more verbose message like
> Please check .kunit/test.log to see the raw kernel output.
or the like, but we'd need to know what the build dir was to know where
test.log actually lives.

This patch tries to make a more minimal improvement.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>



Revision tags: v6.0.8, v5.15.78
# f473dd94 03-Nov-2022 Daniel Latypov <dlatypov@google.com>

kunit: tool: make TestCounts a dataclass

Since we're using Python 3.7+, we can use dataclasses to make the code
terser.

It also lets us create pre-populated TestCounts() objects and compare
them in our unit test. (Before, you could only create empty ones).

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
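
For illustration, a dataclass-based TestCounts might look roughly like this
(field names are assumptions, not necessarily the tool's exact ones):

    from dataclasses import dataclass

    @dataclass
    class TestCounts:
        passed: int = 0
        failed: int = 0
        crashed: int = 0
        skipped: int = 0
        errors: int = 0

    # Pre-populated objects can now be created and compared directly:
    assert TestCounts(passed=3) == TestCounts(passed=3)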



Revision tags: v6.0.7, v5.15.77, v5.15.76, v6.0.6
# f19dd011 28-Oct-2022 Daniel Latypov <dlatypov@google.com>

kunit: tool: print summary of failed tests if a few failed out of a lot

E.g. all the hw_breakpoint tests are failing right now.
So if I run `kunit.py run --alltests --arch=x86_64`, then I see
> Testing complete. Ran 408 tests: passed: 392, failed: 9, skipped: 7

Seeing which 9 tests failed out of the hundreds is annoying.
If my terminal doesn't have scrollback support, I have to resort to
looking at `.kunit/test.log` for the `not ok` lines.

Teach kunit.py to print a summarized list of failures if the # of tests
reaches an arbitrary threshold (>=100 tests).

To try and keep the output from being too long/noisy, this new logic
a) just reports "parent_test failed" if every child test failed
b) won't print anything if there are >10 failures (also arbitrary).

With this patch, we get an extra line of output showing:
> Testing complete. Ran 408 tests: passed: 392, failed: 9, skipped: 7
> Failures: hw_breakpoint

This also works with parameterized tests, e.g. if I add a fake failure
> Failures: kcsan.test_atomic_builtins_missing_barrier.threads=6

Note: we didn't have enough tests for this to be a problem before.
But with commit 980ac3ad0512 ("kunit: tool: rename all_test_uml.config,
use it for --alltests"), --alltests works and thus running >100 tests
will probably become more common.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
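
A rough sketch of the thresholding described above (the names and limits
here are illustrative, not the patch's code):

    MIN_TESTS_FOR_SUMMARY = 100   # only summarize when many tests ran
    MAX_FAILURES_TO_LIST = 10     # stay quiet if there are too many failures

    def failure_summary(failed_names, total_tests):
        if total_tests < MIN_TESTS_FOR_SUMMARY or not failed_names:
            return None
        if len(failed_names) > MAX_FAILURES_TO_LIST:
            return None
        return 'Failures: ' + ', '.join(failed_names)

    print(failure_summary(['hw_breakpoint'], 408))   # Failures: hw_breakpoint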



Revision tags: v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70, v5.15.69, v5.15.68, v5.15.67, v5.15.66, v5.15.65, v5.15.64, v5.15.63, v5.15.62, v5.15.61, v5.15.60
# a15cfa39 10-Aug-2022 Daniel Latypov <dlatypov@google.com>

kunit: tool: make --raw_output=kunit (aka --raw_output) preserve leading spaces

With
$ kunit.py run --raw_output=all ...
you get the raw output from the kernel, e.g. something like
> TAP version 14
> 1..26
>     # Subtest: time_test_cases
>     1..1
>     ok 1 - time64_to_tm_test_date_range
> ok 1 - time_test_cases

But with --raw_output=kunit, or equivalently --raw_output, you get
> TAP version 14
> 1..26
> # Subtest: time_test_cases
> 1..1
> ok 1 - time64_to_tm_test_date_range
> ok 1 - time_test_cases

It looks less readable in my opinion, and it also isn't "raw output."

This is due to sharing code with kunit_parser.py, which wants to strip
leading whitespace since it uses anchored regexes.
We could update the kunit_parser.py code to tolerate leading spaces,
but this patch takes the easier way out and adds a bool flag.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>



Revision tags: v5.15.59, v5.19, v5.15.58, v5.15.57, v5.15.56, v5.15.55, v5.15.54, v5.15.53, v5.15.52, v5.15.51, v5.15.50, v5.15.49, v5.15.48, v5.15.47, v5.15.46, v5.15.45, v5.15.44, v5.15.43, v5.15.42, v5.18, v5.15.41
# e756dbeb 16-May-2022 Daniel Latypov <dlatypov@google.com>

kunit: tool: refactoring printing logic into kunit_printer.py

Context:
* kunit_kernel.py is importing kunit_parser.py just to use the
print_with_timestamp() function
* the parser is directly printing to stdout, which will become an issue
if we ever try to run multiple kernels in parallel

This patch introduces a kunit_printer.py file and migrates callers of
kunit_parser.print_with_timestamp() to call
kunit_printer.stdout.print_with_timestamp() instead.

Future changes:
If we want to support showing results for parallel runs, we could then
create new Printer's that don't directly write to stdout and refactor
the code to pass around these Printer objects.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
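
A simplified stand-in for the Printer idea described here (not the real
kunit_printer.py):

    import datetime
    import sys

    class Printer:
        """Wraps an output stream so callers need not print to stdout directly."""
        def __init__(self, output=sys.stdout):
            self._output = output

        def print_with_timestamp(self, message: str) -> None:
            ts = datetime.datetime.now().strftime('%H:%M:%S')
            print(f'[{ts}] {message}', file=self._output)

    stdout = Printer()   # callers do kunit_printer.stdout.print_with_timestamp(...)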



Revision tags: v5.15.40, v5.15.39
# 0453f984 09-May-2022 Daniel Latypov <dlatypov@google.com>

kunit: tool: misc cleanups

This primarily comes from running pylint over kunit tool code and
ignoring some warnings we don't care about.
If we ever got a fully clean setup, we could add this to run_checks.py,
but we're not there yet.

Fix things like
* Drop unused imports
* check `is None`, not `== None` (see PEP 8)
* remove redundant parens around returns
* remove redundant `else` / convert `elif` to `if` where appropriate
* rename make_arch_qemuconfig() param to base_kunitconfig (this is the
name used in the subclass, and it's a better one)
* kunit_tool_test: check the exit code for SystemExit (could be 0)

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
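
For reference, the PEP 8 point about None checks in a tiny example:

    result = None

    if result == None:   # works, but PEP 8 discourages equality comparison with None
        pass
    if result is None:   # preferred identity check
        pass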



# 94507ee3 12-May-2022 Daniel Latypov <dlatypov@google.com>

kunit: tool: minor cosmetic cleanups in kunit_parser.py

There should be no behavioral changes from this patch.

This patch removes redundant comment text, inlines a function used in
only one place, and other such minor tweaks.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>



# dbf0b0d5 12-May-2022 Daniel Latypov <dlatypov@google.com>

kunit: tool: make parser stop overwriting status of suites w/ no_tests

Consider this invocation
$ ./tools/testing/kunit/kunit.py parse <<EOF
TAP version 14
1..2
ok 1 - suite
# Subtest: no_tests_suite
# catastrophic error!
not ok 1 - no_tests_suite
EOF

It will have a 0 exit code even though there's a "not ok".

Consider this one:
$ ./tools/testing/kunit/kunit.py parse <<EOF
TAP version 14
1..2
ok 1 - suite
not ok 1 - no_tests_suite
EOF

It will have a non-zero exit code.

Why?
We have this line in the kunit_parser.py
> parent_test = parse_test_header(lines, test)
where we have special handling when we see "# Subtest" and we ignore the
explicit reported "not ok 1" status!

Also, NO_TESTS at the suite level only results in a non-zero status code
when there's only one suite at the moment.

This change is the minimal one to make sure we don't overwrite it.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>



# 33d4a933 12-May-2022 Daniel Latypov <dlatypov@google.com>

kunit: tool: remove dead parse_crash_in_log() logic

This logic depends on the kernel logging a message containing
'kunit test case crashed', but there is no corresponding logic to do so.

This is likely a relic of the revision process KUnit initially went
through when being upstreamed.

Delete it given
1) it's been missing for years and likely won't get implemented
2) the parser has been moving to be a more general KTAP parser,
kunit-only magic like this isn't how we'd want to implement it.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>



Revision tags: v5.15.38, v5.15.37, v5.15.36, v5.15.35, v5.15.34, v5.15.33
# 9660209d 29-Mar-2022 Daniel Latypov <dlatypov@google.com>

kunit: tool: print clearer error message when there's no TAP output

Before:
$ ./tools/testing/kunit/kunit.py parse /dev/null
...
[ERROR] Test : invalid KTAP input!

After:
$ ./tools/testing/kunit/kunit.py parse /dev/null
...
[ERROR] Test <missing>: could not find any KTAP output!

This error message gets printed out when extract_tap_output() yielded no
lines. So while it could be because of malformed KTAP output from KUnit,
it could also be due to not having any KTAP output at all.

Try and make the error message here more clear.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>



# c2497643 08-Apr-2022 Daniel Latypov <dlatypov@google.com>

kunit: tool: update test counts summary line format

Before:
> Testing complete. Passed: 137, Failed: 0, Crashed: 0, Skipped: 36, Errors: 0

After:
> Testing complete. Ran 173 tests: passed: 137, skipped: 36

Even with our current set of statuses, the output is a bit verbose.
It could get worse in the future if we add more (e.g. timeout, kasan).
Let's only print the relevant ones.

I had previously been sympathetic to the argument that always
printing out all the statuses would make it easier to parse results.
But now that we have commit acd8e8407b8f ("kunit: Print test statistics on
failure"), there are test counts printed out in the raw output.
We don't currently print out an overall total across all suites, but it
would be easy to add, if we see a need for that.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Co-developed-by: David Gow <davidgow@google.com>
Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
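
A hypothetical sketch of building such a summary line from counts, printing
only the non-zero statuses:

    def summary_line(counts: dict) -> str:
        total = sum(counts.values())
        parts = [f'{name}: {n}' for name, n in counts.items() if n > 0]
        return f'Testing complete. Ran {total} tests: ' + ', '.join(parts)

    print(summary_line({'passed': 137, 'failed': 0, 'skipped': 36}))
    # -> Testing complete. Ran 173 tests: passed: 137, skipped: 36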



Revision tags: v5.15.32, v5.15.31, v5.17, v5.15.30, v5.15.29, v5.15.28, v5.15.27, v5.15.26
# d34f82d6 23-Feb-2022 Kees Cook <keescook@chromium.org>

kunit: tool: Do not colorize output when redirected

Filling log files with color codes makes diffs and other comparisons
difficult. Only emit vt100 codes when stdout is a TTY.

Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: linux-kselftest@vger.kernel.org
Cc: kunit-dev@googlegroups.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
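
For illustration, a minimal sketch of the TTY check (hypothetical helper,
not the tool's code):

    import sys

    def maybe_color(code: str, text: str) -> str:
        # Emit vt100 escape codes only when stdout is an interactive terminal.
        if sys.stdout.isatty():
            return code + text + '\033[0m'   # reset attributes after the colored text
        return text

    print(maybe_color('\033[31m', 'FAILED'))   # red on a TTY, plain when redirected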



Revision tags: v5.15.25, v5.15.24, v5.15.23, v5.15.22, v5.15.21, v5.15.20, v5.15.19, v5.15.18, v5.15.17, v5.4.173, v5.15.16, v5.15.15, v5.16, v5.15.10, v5.15.9
# 85310a62 15-Dec-2021 Daniel Latypov <dlatypov@google.com>

kunit: tool: fix newly introduced typechecker errors

After upgrading mypy and pytype from pip, we see 2 new errors when
running ./tools/testing/kunit/run_checks.py.

Error #1: mypy and pytype
They now deduce that importlib.util.spec_from_file_location() can return
None and note that we're not checking for this.

We validate that the arch is valid (i.e. the file exists) beforehand.
Add an `assert spec is not None` to appease the checkers.

Error #2: pytype bug https://github.com/google/pytype/issues/1057
It doesn't like `from datetime import datetime`, specifically that a
type shares a name with a module.

We can workaround this by either
* renaming the import or just using `import datetime`
* passing the new `--fix-module-collisions` flag to pytype.

We pick the first option for now because
* the flag is quite new, only in the 2021.11.29 release.
* I'd prefer if people can just run `pytype <file>`

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
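
A rough sketch of the two fixes as described (module and file names here are
placeholders):

    import importlib.util

    # Fix #1: spec_from_file_location() is typed as Optional, so check before use.
    spec = importlib.util.spec_from_file_location('example_mod', __file__)
    assert spec is not None   # the path was validated earlier; this appeases the checkers

    # Fix #2: avoid a type that shares a name with its module.
    import datetime
    start = datetime.datetime.now()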



# e0cc8c05 14-Dec-2021 Daniel Latypov <dlatypov@google.com>

kunit: tool: delete kunit_parser.TestResult type

The `log` field is unused, and the `status` field is accessible via
`test.status`.

So it's simpler to just return the main `Test` object directly.

And since we're no longer returning a namedtuple, which has no type
annotations, this hopefully means typecheckers are better equipped to
find any errors.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>



Revision tags: v5.15.8, v5.15.7, v5.15.6, v5.15.5, v5.15.4, v5.15.3, v5.15.2, v5.15.1, v5.15, v5.14.14, v5.14.13, v5.14.12, v5.14.11
# 142189f0 07-Oct-2021 Daniel Latypov <dlatypov@google.com>

kunit: tool: print parsed test results fully incrementally

With the parser rework [1] and run_kernel() rework [2], this allows the
parser to print out test results incrementally.

Currently, that's held up by the fact that the LineStream eagerly
pre-fetches the next line when you call pop().
This blocks parse_test_result() from returning until the line *after*
the "ok 1 - test name" line is also printed.

One can see this with the following example:
$ (echo -e 'TAP version 14\n1..3\nok 1 - fake test'; sleep 2; echo -e 'ok 2 - fake test 2'; sleep 3; echo -e 'ok 3 - fake test 3') | ./tools/testing/kunit/kunit.py parse

Before this patch [1]: there's a pause before 'fake test' is printed.
After this patch: 'fake test' is printed out immediately.

This patch also adds
* a unit test to verify LineStream's behavior directly
* a test case to ensure that it's lazily calling the generator
* an explicit exception for when users go beyond EOF

[1] https://lore.kernel.org/linux-kselftest/20211006170049.106852-1-dlatypov@google.com/
[2] https://lore.kernel.org/linux-kselftest/20211005011340.2826268-1-dlatypov@google.com/

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
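
To illustrate the lazy-fetch behaviour, a simplified stand-in for LineStream
(not the tool's implementation):

    class LineStream:
        """Simplified stand-in: fetch the next line only when it is actually needed."""
        def __init__(self, lines):
            self._lines = iter(lines)
            self._next = None
            self._done = False

        def _fetch(self):
            if self._next is None and not self._done:
                try:
                    self._next = next(self._lines)
                except StopIteration:
                    self._done = True

        def peek(self):
            self._fetch()
            if self._done:
                raise ValueError('beyond end of file')
            return self._next

        def pop(self):
            line = self.peek()
            self._next = None   # do not eagerly prefetch the following line here
            return line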



# e56e4828 02-Nov-2021 David Gow <davidgow@google.com>

kunit: tool: Report an error if any test has no subtests

It's possible for a test to have a subtest header, but zero valid
subtests. We used to error on this if the test plan had no subtests
listed, but it's possible to have subtests without a test plan (indeed,
this is how parameterised tests work).

Tests with 0 subtests now have the result NO_TESTS, and will report an
error (which does not halt test execution, but is printed in a scary red
colour and is noted in the results summary).

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>



# c68077b1 02-Nov-2021 David Gow <davidgow@google.com>

kunit: tool: Do not error on tests without test plans

The (K)TAP spec encourages test output to begin with a 'test plan': a
count of the number of tests being run of the form:
1..n

However, some test suites might not know the number of subtests in
advance (for example, KUnit's parameterised tests use a generator
function). In this case, it's not possible to print the test plan in
advance.

kunit_tool already parses test output which doesn't contain a plan, but
reports an error. Since we want to use nested subtests with KUnit
parameterised tests, remove this error.

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>



# d65d07cb 11-Oct-2021 Rae Moar <rmoar@google.com>

kunit: tool: improve compatibility of kunit_parser with KTAP specification

Update kunit_parser to improve compatibility with the KTAP
specification, including arbitrarily nested tests. The patch accomplishes
three major changes:

- Use a general Test object to represent all tests rather than TestCase
and TestSuite objects. This allows for easier implementation of arbitrary
levels of nested tests and promotes the idea that both test suites and test
cases are tests.

- Print errors incrementally rather than all at once after the
parsing finishes, to maximize the information given to the user when the
parser is given invalid input and to increase the helpfulness
of the timestamps given during printing. Note that kunit.py parse does
not print incrementally yet. However, this fix brings us closer to
this feature.

- Increase compatibility with different input formats. Arbitrary levels
of nested tests are supported. Also, test cases and test suites can now
appear at the same level of testing.

This patch now implements the draft KTAP specification here:
https://lore.kernel.org/linux-kselftest/CA+GJov6tdjvY9x12JsJT14qn6c7NViJxqaJk+r-K1YJzPggFDQ@mail.gmail.com/
We'll update the parser as the spec evolves.

This patch adjusts the kunit_tool_test.py file to check for
the correct outputs from the new parser and adds a new test to check
the parsing for a KTAP result log with correct format for multiple nested
subtests (test_is_test_passed-all_passed_nested.log).

This patch also alters the kunit_json.py file to allow for arbitrarily
nested tests.

Signed-off-by: Rae Moar <rmoar@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
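
A minimal sketch of the "everything is a Test" idea (field names are
assumptions, not necessarily the parser's attributes):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Test:
        name: str = ''
        status: str = 'SUCCESS'              # e.g. SUCCESS, FAILURE, SKIPPED, NO_TESTS
        subtests: List['Test'] = field(default_factory=list)
        log: List[str] = field(default_factory=list)

    # Suites and cases are both Tests, so nesting to any depth is uniform:
    suite = Test(name='example', subtests=[Test(name='example_simple_test')])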



Revision tags: v5.14.10, v5.14.9, v5.14.8, v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65, v5.14.3, v5.10.64, v5.14.2, v5.10.63, v5.14.1, v5.10.62, v5.14, v5.10.61, v5.10.60
# acd8e840 04-Aug-2021 David Gow <davidgow@google.com>

kunit: Print test statistics on failure

When a number of tests fail, it can be useful to get higher-level
statistics of how many tests are failing (or how many parameters are
failing in parameterised tests), and in what cases or suites. This is
already done by some non-KUnit tests, so add support for automatically
generating these for KUnit tests.

This change adds a 'kunit.stats_enabled' switch which has three values:
- 0: No stats are printed (current behaviour)
- 1: Stats are printed only for tests/suites with more than one
subtest (new default)
- 2: Always print test statistics

For parameterised tests, the summary line looks as follows:
" # inode_test_xtimestamp_decoding: pass:16 fail:0 skip:0 total:16"
For test suites, there are two lines looking like this:
"# ext4_inode_test: pass:1 fail:0 skip:0 total:1"
"# Totals: pass:16 fail:0 skip:0 total:16"

The first line gives the number of direct subtests, the second "Totals"
line is the accumulated sum of all tests and test parameters.

This format is based on the one used by kselftest[1].

[1]: https://elixir.bootlin.com/linux/latest/source/tools/testing/selftests/kselftest.h#L109

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
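
For illustration only (the real implementation is kernel C code), the
totals-line format can be reproduced from per-parameter results like so:

    def totals_line(results):
        # results: list of per-parameter statuses such as 'pass', 'fail', 'skip'
        counts = {'pass': 0, 'fail': 0, 'skip': 0}
        for status in results:
            counts[status] += 1
        return ('# Totals: pass:{pass} fail:{fail} skip:{skip} total:{total}'
                .format(total=len(results), **counts))

    print(totals_line(['pass'] * 16))   # -> # Totals: pass:16 fail:0 skip:0 total:16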


