Commit Graph

91 Commits

Author SHA1 Message Date
Daniel Latypov
52a5d80a22 kunit: tool: fix typecheck errors about loading qemu configs
Currently, we have these errors:
$ mypy ./tools/testing/kunit/*.py
tools/testing/kunit/kunit_kernel.py:213: error: Item "_Loader" of "Optional[_Loader]" has no attribute "exec_module"
tools/testing/kunit/kunit_kernel.py:213: error: Item "None" of "Optional[_Loader]" has no attribute "exec_module"
tools/testing/kunit/kunit_kernel.py:214: error: Module has no attribute "QEMU_ARCH"
tools/testing/kunit/kunit_kernel.py:215: error: Module has no attribute "QEMU_ARCH"

exec_module
===========

pytype currently reports no errors, but that's because there's a comment
disabling it on line 213.

This is due to https://github.com/python/typeshed/pull/2626.
The fix is to assert that the loaded module's loader implements the ABC
(abstract base class) we want, which has exec_module support.

QEMU_ARCH
=========

pytype is fine with this, but mypy is not:
https://github.com/python/mypy/issues/5059

Add a check that the loaded module does indeed have QEMU_ARCH.
Note: this is not enough to appease mypy, so we also add a comment to
squash the warning.
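
A rough sketch of the resulting pattern (names here are illustrative, not the
exact kunit_kernel.py code):

  import importlib.abc
  import importlib.util

  def load_qemu_config(path: str):
      spec = importlib.util.spec_from_file_location('qemu_config', path)
      assert spec is not None
      config = importlib.util.module_from_spec(spec)
      # Convince mypy the loader is a real Loader, which has exec_module().
      assert isinstance(spec.loader, importlib.abc.Loader)
      spec.loader.exec_module(config)
      # Runtime check so a config file missing QEMU_ARCH fails loudly.
      assert hasattr(config, 'QEMU_ARCH'), 'qemu config is missing QEMU_ARCH'
      return config.QEMU_ARCH  # type: ignore[attr-defined]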

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-29 13:05:47 -06:00
Daniel Latypov
2ab5d5e67f kunit: tool: continue past invalid utf-8 output
kunit.py currently crashes and fails to parse kernel output if it's not
fully valid utf-8.

This can come from memory corruption or just inadvertently printing
out binary data as strings.

E.g. adding this line into a kunit test
  pr_info("\x80")
will cause this exception
  UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position
  1961: invalid start byte

We can tell Python how to handle errors, see
https://docs.python.org/3/library/codecs.html#error-handlers

Unfortunately, it doesn't seem like there's a way to specify this in
just one location, so we need to repeat ourselves quite a bit.

Specify `errors='backslashreplace'` so we instead:
* print out the offending byte as '\x80'
* try and continue parsing the output.
  * as long as the TAP lines themselves are valid, we're fine.
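
For illustration, the handler can be exercised directly on bytes (kunit.py
itself passes errors='backslashreplace' at its various file/subprocess call
sites):

  raw = b'# example_simple_test: \x80 not valid utf-8\n'
  # raw.decode('utf-8')  # would raise UnicodeDecodeError: invalid start byte
  print(raw.decode('utf-8', errors='backslashreplace'))
  # -> # example_simple_test: \x80 not valid utf-8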

Fixed spelling/grammar in commit log:
Shuah Khan <skhan@linuxfoundation.org>

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Tested-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-25 13:06:45 -06:00
Rae Moar
d65d07cb5b kunit: tool: improve compatibility of kunit_parser with KTAP specification
Update kunit_parser to improve compatibility with the KTAP
specification, including arbitrarily nested tests. This patch accomplishes
three major changes:

- Use a general Test object to represent all tests rather than separate
TestCase and TestSuite objects (see the sketch after this list). This allows
for easier implementation of arbitrary levels of nested tests and promotes the
idea that both test suites and test cases are tests.

- Print errors incrementally rather than all at once after parsing
finishes, to maximize the information given to the user when the parser is
handed invalid input and to make the timestamps printed alongside more
helpful. Note that kunit.py parse does not print incrementally yet; however,
this change brings us closer to that feature.

- Increase compatibility with different input formats. Arbitrary levels of
nested tests are supported, and test cases and test suites may now appear at
the same level of testing.
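
One way to picture the single Test type (field names below are hypothetical,
not the parser's exact attributes):

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class Test:
      # A suite is just a Test with a non-empty subtests list, so nesting can
      # go arbitrarily deep and cases/suites can sit at the same level.
      name: str = ''
      status: str = 'SUCCESS'
      expected_count: int = 0
      subtests: List['Test'] = field(default_factory=list)
      log: List[str] = field(default_factory=list)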

This patch now implements the draft KTAP specification here:
https://lore.kernel.org/linux-kselftest/CA+GJov6tdjvY9x12JsJT14qn6c7NViJxqaJk+r-K1YJzPggFDQ@mail.gmail.com/
We'll update the parser as the spec evolves.

This patch adjusts the kunit_tool_test.py file to check for
the correct outputs from the new parser and adds a new test to check
the parsing for a KTAP result log with correct format for multiple nested
subtests (test_is_test_passed-all_passed_nested.log).

This patch also alters the kunit_json.py file to allow for arbitrarily
nested tests.

Signed-off-by: Rae Moar <rmoar@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-19 14:22:02 -06:00
Daniel Latypov
7d7c48df81 kunit: tool: yield output from run_kernel in real time
Currently, `run_kernel()` dumps all the kernel output to a file
(.kunit/test.log) and then opens the file and yields it to callers.
This made it easier to respect the requested timeout, if any.

But it means that we can't yield the results in real time, either to the
parser or to stdout (if --raw_output is set).

This change spins up a background thread to enforce the timeout, which
allows us to yield the kernel output in real time, while also copying it
to the .kunit/test.log file.
It's also careful to ensure that the .kunit/test.log file is complete,
even if the kunit_parser throws an exception or otherwise doesn't consume
every line; see the new `finally` block and unit test.
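
A hedged sketch of the idea, with hypothetical names (the real run_kernel()
has more plumbing around configs and QEMU/UML):

  import subprocess
  import threading
  from typing import Iterator

  def run_kernel_sketch(cmd, timeout: float, log_path: str) -> Iterator[str]:
      proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                              text=True, errors='backslashreplace')

      def enforce_timeout() -> None:
          # Background thread: stop the kernel if it runs too long, without
          # blocking the caller that is consuming output below.
          try:
              proc.wait(timeout=timeout)
          except subprocess.TimeoutExpired:
              proc.terminate()

      waiter = threading.Thread(target=enforce_timeout)
      waiter.start()

      with open(log_path, 'w') as log:
          try:
              for line in proc.stdout:
                  log.write(line)  # copy into .kunit/test.log...
                  yield line       # ...while streaming it out in real time
          finally:
              # If the consumer stops early (or raises), still finish the log.
              for line in proc.stdout:
                  log.write(line)
              waiter.join()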

For example:

$ ./tools/testing/kunit/kunit.py run --arch=x86_64 --raw_output
<configure + build steps>
...
<can now see output from QEMU in real time>

This does not currently have a visible effect when --raw_output is not
passed, as kunit_parser.py currently only outputs everything at the end.
But that could change, and this patch is a necessary step towards
showing parsed test results in real time.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-19 14:22:02 -06:00
Daniel Latypov
ff9e09a376 kunit: tool: support running each suite/test separately
The new --run_isolated flag makes the tool boot the kernel once per
suite or test, preventing leftover state from one suite from impacting the
next. This can be useful as a starting point for debugging test
hermeticity issues.

Note: it takes a lot longer, so people should not use it normally.

Consider the following very simplified example:

  bool disable_something_for_test = false;
  void function_being_tested() {
    ...
    if (disable_something_for_test) return;
    ...
  }

  static void test_before(struct kunit *test)
  {
    disable_something_for_test = true;
    function_being_tested();
    /* oops, we forgot to reset it back to false */
  }

  static void test_after(struct kunit *test)
  {
    /* oops, now "fixing" test_before can cause test_after to fail! */
    function_being_tested();
  }

Presented like this, the issues are obvious, but it gets a lot more
complicated to track down as the amount of test setup and helper
functions increases.

Another use case is memory corruption. It might not surface as a
failure/crash in the test case or suite that caused it. In KUnit's own unit
tests, I've noticed that the third suite afterwards might be the one to
finally crash after an out-of-bounds write, for example.

Example usage:

Per suite:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=suite
...
Starting KUnit Kernel (1/7)...
============================================================
======== [PASSED] kunit_executor_test ========
....
Testing complete. 5 tests run. 0 failed. 0 crashed. 0 skipped.
Starting KUnit Kernel (2/7)...
============================================================
======== [PASSED] kunit-try-catch-test ========
...

Per test:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=test
Starting KUnit Kernel (1/23)...
============================================================
======== [PASSED] kunit_executor_test ========
[PASSED] parse_filter_test
============================================================
Testing complete. 1 tests run. 0 failed. 0 crashed. 0 skipped.
Starting KUnit Kernel (2/23)...
============================================================
======== [PASSED] kunit_executor_test ========
[PASSED] filter_subsuite_test
...

It works with filters as well:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=suite example
...
Starting KUnit Kernel (1/1)...
============================================================
======== [PASSED] example ========
...

It also handles test filters, '*.*skip*' runs these 3 tests:
  kunit_status.kunit_status_mark_skipped_test
  example.example_skip_test
  example.example_mark_skipped_test

Fixed up merge conflict between:
  d8c23ead70 ("kunit: tool: better handling of quasi-bool args (--json, --raw_output)") and
  6710951ee039 ("kunit: tool: support running each suite/test separately")
    Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Shuah Khan <skhan@linuxfoundation.org>

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-19 14:21:08 -06:00
Daniel Latypov
5f6aa6d82e kunit: tool: actually track how long it took to run tests
This is a long-standing bug in the kunit tool.
Since these files were added, run_kernel() has always yielded lines.

That means the call to run_kernel() returns before the kernel finishes
executing tests, potentially before a single line of output is even
produced.

So code like this
  time_start = time.time()
  result = linux.run_kernel(...)
  time_end = time.time()

would only measure the time taken for python to give back the generator
object.

From a caller's perspective, the only way to know the kernel has exited
is for us to consume all the output from the `result` generator object.
Alternatively, we could change run_kernel() to do its own bookkeeping
and return the total time, but that doesn't seem worth it.

This change makes us record `time_end` after we're done parsing all the
output (which should mean we've consumed all of it, or errored out).
That means we're including the parsing time as well, but that should
be quite small, and it's better than claiming it took 0s to run tests.
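
The underlying generator behavior, as a self-contained illustration
(run_kernel() below is a stand-in, not the real method):

  import time
  from typing import Iterator

  def run_kernel() -> Iterator[str]:
      yield 'TAP version 14'
      time.sleep(1)          # pretend the tests take a second to run
      yield 'ok 1 - example'

  time_start = time.time()
  result = run_kernel()      # returns instantly; nothing has run yet
  lines = list(result)       # draining the generator is what actually waits
  time_end = time.time()
  print(f'{time_end - time_start:.3f}s running')  # ~1s, not ~0s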

Let's use this as an example:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit example

Before:
Elapsed time: 7.684s total, 0.001s configuring, 4.692s building, 0.000s running

After:
Elapsed time: 6.283s total, 0.001s configuring, 3.202s building, 3.079s running

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-19 14:18:50 -06:00
Daniel Latypov
7ef925ea81 kunit: tool: factor exec + parse steps into a function
Currently this code is copy-pasted between the normal "run" subcommand
and the "exec" subcommand.

Given we don't have any interest in just executing the tests without
giving the user any indication of what happened (i.e. parsing the output),
make a function that does both of these things and can be reused.

This will be useful when we allow more complicated ways of running
tests, e.g. invoking the kernel multiple times instead of just once,
etc.

We remove input_data from the ParseRequest so the callers don't have to
pass in a dummy value for this field. Named tuples are also immutable,
so if they did pass in a dummy, exec_tests() would need to make a copy
to call parse_tests().

Removing it also makes KunitParseRequest match the other *Request types,
as they only contain user arguments/flags, not data.
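
A small illustration of why the immutability matters (field names are
hypothetical):

  from typing import NamedTuple

  class KunitParseRequest(NamedTuple):
      raw_output: bool
      json: str

  req = KunitParseRequest(raw_output=False, json='stdout')
  # Fields can't be assigned after construction, so if input_data were still
  # a field, exec_tests() would have to build a whole new tuple:
  #   req.input_data = lines      # AttributeError: can't set attribute
  #   req = req._replace(...)     # works, but only by copying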

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Acked-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-19 14:18:50 -06:00
Daniel Latypov
fe678fed2c kunit: tool: show list of valid --arch options when invalid
Consider this attempt to run KUnit in QEMU:
$ ./tools/testing/kunit/kunit.py run --arch=x86

Before you'd get this error message:
kunit_kernel.ConfigError: x86 is not a valid arch

After:
kunit_kernel.ConfigError: x86 is not a valid arch, options are ['alpha', 'arm', 'arm64', 'i386', 'powerpc', 'riscv', 's390', 'sparc', 'x86_64']

This should make it a bit easier for people to notice when they make
typos, etc. Currently, one would have to dive into the python code to
figure out what the valid set is.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-19 14:18:50 -06:00
Daniel Latypov
a54ea2e057 kunit: tool: misc fixes (unused vars, imports, leaked files)
Drop some variables in the unit tests that were unused, and/or add
assertions based on them.

For ExitStack, it was imported, but the `es` variable wasn't used so it
didn't do anything, and we were leaking the file objects.
Refactor it to just use nested `with` statements to properly close them.

And drop the direct use of .close() on file objects in the kunit tool
unit test, as these can be leaked if test assertions fail.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-19 14:18:49 -06:00
Daniel Latypov
a127b154a8 kunit: tool: allow filtering test cases via glob
Commit 1d71307a6f ("kunit: add unit test for filtering suites by
names") introduced the ability to filter which suites we run via glob.

This change extends it so we can also filter individual test cases
inside of suites as well.

This is quite useful when, e.g.
* trying to run just the test cases you've just added or are working on
* trying to debug issues with test hermeticity

Examples:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit '*exec*.parse*'
...
============================================================
======== [PASSED] kunit_executor_test ========
[PASSED] parse_filter_test
============================================================
Testing complete. 1 tests run. 0 failed. 0 crashed.

$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit '*.no_matching_tests'
...
[ERROR] no tests run!
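
For illustration, the suite/test split can be mimicked with fnmatch (a hedged
sketch, not the tool's actual filter code):

  import fnmatch

  def matches(filter_glob: str, suite: str, test: str) -> bool:
      # 'suite_glob.test_glob'; a missing test part matches every test case.
      suite_glob, _, test_glob = filter_glob.partition('.')
      return (fnmatch.fnmatch(suite, suite_glob) and
              fnmatch.fnmatch(test, test_glob or '*'))

  print(matches('*exec*.parse*', 'kunit_executor_test', 'parse_filter_test'))
  # True
  print(matches('*.no_matching_tests', 'kunit_executor_test', 'parse_filter_test'))
  # False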

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-19 14:18:49 -06:00
Daniel Latypov
d8c23ead70 kunit: tool: better handling of quasi-bool args (--json, --raw_output)
Problem:

What does this do?
$ kunit.py run --json
Well, it runs all the tests and prints test results out as JSON.

And next is
$ kunit.py run my-test-suite --json
This runs just `my-test-suite` and prints results out as JSON.

But what about?
$ kunit.py run --json my-test-suite
This runs all the tests and stores the json results in a "my-test-suite"
file.

Why:
--json, and now --raw_output, are actually string flags. They just have a
default value. --json in particular takes the name of an output file.

It was intended that you'd do
$ kunit.py run --json=my_output_file my-test-suite
if you ever wanted to specify the value.

Workaround:
It doesn't seem like there's a way to make argparse
(https://docs.python.org/3/library/argparse.html) only accept arg values
after a '='.

I believe that `--json` should "just work" regardless of where it is.
So this patch automatically rewrites a bare `--json` to `--json=stdout`.

That makes the examples above work the same way.
Add a regression test that can catch this for --raw_output.
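
Conceptually, the rewrite is a tiny argv pre-pass (a hedged sketch; the real
code also knows about --raw_output and its default):

  def massage_argv(argv):
      # A bare '--json' can't be told apart from '--json <positional>' by
      # argparse, so give it an explicit value before parsing.
      return ['--json=stdout' if arg == '--json' else arg for arg in argv]

  print(massage_argv(['run', '--json', 'my-test-suite']))
  # ['run', '--json=stdout', 'my-test-suite']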

Fixes: 6a499c9c42 ("kunit: tool: make --raw_output support only showing kunit output")
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Tested-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-01 13:45:25 -06:00
David Gow
acd8e8407b kunit: Print test statistics on failure
When a number of tests fail, it can be useful to get higher-level
statistics of how many tests are failing (or how many parameters are
failing in parameterised tests), and in what cases or suites. This is
already done by some non-KUnit tests, so add support for automatically
generating these for KUnit tests.

This change adds a 'kunit.stats_enabled' switch which has three values:
- 0: No stats are printed (current behaviour)
- 1: Stats are printed only for tests/suites with more than one
     subtest (new default)
- 2: Always print test statistics

For parameterised tests, the summary line looks as follows:
"    # inode_test_xtimestamp_decoding: pass:16 fail:0 skip:0 total:16"
For test suites, there are two lines looking like this:
"# ext4_inode_test: pass:1 fail:0 skip:0 total:1"
"# Totals: pass:16 fail:0 skip:0 total:16"

The first line gives the number of direct subtests, the second "Totals"
line is the accumulated sum of all tests and test parameters.

This format is based on the one used by kselftest[1].

[1]: https://elixir.bootlin.com/linux/latest/source/tools/testing/selftests/kselftest.h#L109

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-08-13 13:38:31 -06:00
Daniel Latypov
6a499c9c42 kunit: tool: make --raw_output support only showing kunit output
--raw_output is nice, but it would be nicer if it could show only the output
after KUnit tests have started.

So change the flag to allow specifying a string ('kunit').
Make it so `--raw_output` alone will default to `--raw_output=all` and
have the same original behavior.

Drop the small kunit_parser.raw_output() function since it feels wrong
to put it in "kunit_parser.py" when the point of it is to not parse
anything.

E.g.

$ ./tools/testing/kunit/kunit.py run --raw_output=kunit
...
[15:24:07] Starting KUnit Kernel ...
TAP version 14
1..1
    # Subtest: example
    1..3
    # example_simple_test: initializing
    ok 1 - example_simple_test
    # example_skip_test: initializing
    # example_skip_test: You should not see a line below.
    ok 2 - example_skip_test # SKIP this test should be skipped
    # example_mark_skipped_test: initializing
    # example_mark_skipped_test: You should see a line below.
    # example_mark_skipped_test: You should see this line.
    ok 3 - example_mark_skipped_test # SKIP this test should be skipped
ok 1 - example
[15:24:10] Elapsed time: 6.487s total, 0.001s configuring, 3.510s building, 0.000s running

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-08-13 13:32:01 -06:00
Daniel Latypov
6cb51a1874 kunit: tool: add --kernel_args to allow setting module params
kunit.py currently does not make it possible for users to specify module
parameters (or kernel arguments more generally) unless one directly tweaks
the kunit.py code itself.

This hasn't mattered much so far, but this would make it easier to port
existing tests that expose module parameters over to KUnit and/or let
current KUnit tests take advantage of them.

Tested using a kunit-internal parameter:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit \
    --kernel_args=kunit.filter_glob=kunit_status
...
Testing complete. 2 tests run. 0 failed. 0 crashed. 0 skipped.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-08-13 13:28:47 -06:00
SeongJae Park
df4b0807ca kunit: tool: Assert the version requirement
Commit 87c9c16317 ("kunit: tool: add support for QEMU") on the 'next'
tree adds 'from __future__ import annotations' in 'kunit_kernel.py'.
Because it is only supported on Python >= 3.7, people using older Python
will get the error below:

    Traceback (most recent call last):
      File "./tools/testing/kunit/kunit.py", line 20, in <module>
        import kunit_kernel
      File "/home/sjpark/linux/tools/testing/kunit/kunit_kernel.py", line 9
        from __future__ import annotations
        ^
    SyntaxError: future feature annotations is not defined

This commit adds a version assertion in 'kunit.py', so that people get
a more explicit error message like the one below:

    Traceback (most recent call last):
      File "./tools/testing/kunit/kunit.py", line 15, in <module>
        assert sys.version_info >= (3, 7), "Python version is too old"
    AssertionError: Python version is too old

Signed-off-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-07-12 14:02:32 -06:00
Daniel Latypov
58c965d84e kunit: tool: remove unnecessary "annotations" import
The import was working around the fact that "tuple[T]" was used instead of
typing.Tuple[T].

Convert it to use typing.Tuple to be consistent with how the rest of the
code is annotated.
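
The difference, in short (the function below is made up for illustration):

  from typing import Tuple

  # 'def get_output() -> tuple[int, str]:' only parses on Python >= 3.9 (or
  # with 'from __future__ import annotations'); typing.Tuple also works on
  # the Python 3.7+ versions the tool supports.
  def get_output() -> Tuple[int, str]:
      return 0, 'ok'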

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Tested-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-07-12 13:54:29 -06:00
Rae Moar
ed01ad3a2f kunit: tool: Fix error messages for cases of no tests and wrong TAP header
This patch addresses misleading error messages reported by kunit_tool in
two cases. First, in the case of TAP output having an incorrect header
format or missing a header, the parser used to output an error message of
'no tests run!'. Now the parser outputs an error message of 'could not
parse test results!'.

As an example:

Before:
$ ./tools/testing/kunit/kunit.py parse /dev/null
[ERROR] no tests run!
...

After:
$ ./tools/testing/kunit/kunit.py parse /dev/null
[ERROR] could not parse test results!
...

Second, in the case of TAP output with the correct header but no
tests, the parser used to output an error message of 'could not parse
test results!'. Now the parser outputs an error message of 'no tests
run!'.

As an example:

Before:
$ echo -e 'TAP version 14\n1..0' | ./tools/testing/kunit/kunit.py parse
[ERROR] could not parse test results!

After:
$ echo -e 'TAP version 14\n1..0' | ./tools/testing/kunit/kunit.py parse
[ERROR] no tests run!

Additionally, this patch also corrects the tests in kunit_tool_test.py
and adds a test to check the error in the case of TAP output with the
correct header but no tests.

Signed-off-by: Rae Moar <rmoar@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-07-12 13:53:05 -06:00
David Gow
5acaf6031f kunit: tool: Support skipped tests in kunit_tool
Add support for the SKIP directive to kunit_tool's TAP parser.

Skipped tests now show up as such in the printed summary. The number of
skipped tests is counted, and if all tests in a suite are skipped, the
suite is also marked as skipped. Otherwise, skipped tests do not affect the
suite result.
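
The aggregation rule boils down to something like this (a sketch, not the
parser's exact code):

  def suite_status(case_statuses):
      if case_statuses and all(s == 'SKIPPED' for s in case_statuses):
          return 'SKIPPED'   # every test skipped => the suite is skipped
      if any(s in ('FAILED', 'CRASHED') for s in case_statuses):
          return 'FAILED'
      return 'PASSED'        # otherwise skipped tests don't drag the suite down

  print(suite_status(['SKIPPED', 'SKIPPED']))   # SKIPPED
  print(suite_status(['PASSED', 'SKIPPED']))    # PASSED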

Example output:
[00:22:34] ======== [SKIPPED] example_skip ========
[00:22:34] [SKIPPED] example_skip_test # SKIP this test should be skipped
[00:22:34] [SKIPPED] example_mark_skipped_test # SKIP this test should be skipped
[00:22:34] ============================================================
[00:22:34] Testing complete. 2 tests run. 0 failed. 0 crashed. 2 skipped.

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-06-25 11:31:03 -06:00
Daniel Latypov
b29b14f11d kunit: tool: internal refactor of parser input handling
Note: this does not change the parser behavior at all (except for making
one error message more useful). This is just an internal refactor.

The TAP output parser currently operates over a List[str].
This works, but we only ever need the ability to "peek" at the current
line and to "pop" it off.

Also, using a List means we need to wait for all the output before we
can start parsing. While this is not an issue for most tests which are
really lightweight, we do have some longer (~5 minutes) tests.

This patch introduces a LineStream wrapper class that
* Exposes a peek()/pop() interface instead of manipulating an array
  * this allows us to more easily add debugging code [1]
* Can consume an input from a generator
  * we can now parse results as tests are running (the parser code
  currently doesn't print until the end, so no impact yet).
* Tracks the current line number to print better error messages
* Would allow us to add additional features more easily, e.g. storing
  N previous lines so we can print out invalid lines in context, etc.
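
A minimal sketch of such a wrapper, where `_next` holds a (line_number, text)
pair as in the debug snippet below (the real class has more bookkeeping):

  from typing import Iterator, Tuple

  class LineStream:
      def __init__(self, lines: Iterator[Tuple[int, str]]):
          self._lines = lines
          self._done = False
          self._next = (0, '')
          self._get_next()

      def _get_next(self) -> None:
          try:
              self._next = next(self._lines)
          except StopIteration:
              self._done = True

      def peek(self) -> str:
          return self._next[1]

      def pop(self) -> str:
          n = self._next
          self._get_next()
          return n[1]

      def line_number(self) -> int:
          return self._next[0]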

[1] The parsing logic is currently quite fragile.
E.g. it'll often say the kernel "CRASHED" if there's something slightly
wrong with the output format. When debugging a test that had some memory
corruption issues, it resulted in very misleading errors from the parser.

Now we could easily add this to trace each line consumed and which parser
function consumed it:
+import inspect
...
        def pop(self) -> str:
                n = self._next
+               print(f'popping {n[0]}: {n[1].ljust(40, " ")}| caller={inspect.stack()[1].function}')

Example output:
popping 77: TAP version 14                          | caller=parse_tap_header
popping 78: 1..1                                    | caller=parse_test_plan
popping 79:     # Subtest: kunit_executor_test      | caller=parse_subtest_header
popping 80:     1..2                                | caller=parse_subtest_plan
popping 81:     ok 1 - parse_filter_test            | caller=parse_ok_not_ok_test_case
popping 82:     ok 2 - filter_subsuite_test         | caller=parse_ok_not_ok_test_case
popping 83: ok 1 - kunit_executor_test              | caller=parse_ok_not_ok_test_suite

If we introduce an invalid line, we can see the parser go down the wrong path:
popping 77: TAP version 14                          | caller=parse_tap_header
popping 78: 1..1                                    | caller=parse_test_plan
popping 79:     # Subtest: kunit_executor_test      | caller=parse_subtest_header
popping 80:     1..2                                | caller=parse_subtest_plan
popping 81:     1..2 # this is invalid!             | caller=parse_ok_not_ok_test_case
popping 82:     ok 1 - parse_filter_test            | caller=parse_ok_not_ok_test_case
popping 83:     ok 2 - filter_subsuite_test         | caller=parse_ok_not_ok_test_case
popping 84: ok 1 - kunit_executor_test              | caller=parse_ok_not_ok_test_case
[ERROR] ran out of lines before end token

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Acked-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-06-25 11:31:03 -06:00
David Gow
8a5124c0f3 kunit: Remove the unused all_tests.config
This isn't used anywhere. While it's possible that people were manually
referencing it, the new default config (in default.config in the same
path) provides equivalent functionality.

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-06-23 17:49:41 -06:00
David Gow
d9d6b8225e kunit: Move default config from arch/um -> tools/testing/kunit
The default .kunitconfig file is currently kept in
arch/um/configs/kunit_defconfig, but -- with the impending QEMU patch
-- will no longer be exclusively used for UML-based kernels.

Move it alongside the other KUnit configs in
tools/testing/kunit/configs, and give it a name which matches the
existing all_tests.config and broken_on_uml.config files.

Also update the Getting Started documentation to point to the new file.

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-06-23 17:49:17 -06:00
Brendan Higgins
87c9c16317 kunit: tool: add support for QEMU
Add basic support to run QEMU via kunit_tool. Add support for i386,
x86_64, arm, arm64, and a bunch more.

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
Tested-by: David Gow <davidgow@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-06-11 16:10:23 -06:00
David Gow
b6d5799b0b kunit: Add 'kunit_shutdown' option
Add a new kernel command-line option, 'kunit_shutdown', which allows the
user to specify that the kernel should power off, halt, or reboot after
completing all KUnit tests. This is very handy for running KUnit tests
on UML or a VM, as the UML/VM process then exits cleanly immediately
after running all tests, without needing a special initramfs.

Signed-off-by: David Gow <davidgow@google.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
Reviewed-by: Stephen Boyd <sboyd@kernel.org>
Tested-By: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Daniel Latypov <dlatypov@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-06-11 16:04:57 -06:00
Linus Torvalds
1e9599dfc4 Merge tag 'linux-kselftest-kunit-5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest
Pull KUnit updates from Shuah Khan:
 "Several fixes and a new feature to support failure from dynamic
  analysis tools such as UBSAN and fake ops for testing.

   - a fake ops struct for testing: e.g. a fake "free" function can complain
     if it was called with an invalid argument, or caught a double-free.
     Most such ops return void and have no normal means of signalling
     failure (e.g. super_operations, iommu_ops, etc.)"

* tag 'linux-kselftest-kunit-5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
  Documentation: kunit: add tips for using current->kunit_test
  kunit: fix -Wunused-function warning for __kunit_fail_current_test
  kunit: support failure from dynamic analysis tools
  kunit: tool: make --kunitconfig accept dirs, add lib/kunit fragment
  kunit: make KUNIT_EXPECT_STREQ() quote values, don't print literals
  kunit: Match parenthesis alignment to improve code readability
2021-04-27 18:56:29 -07:00
Daniel Latypov
9854781dba kunit: tool: make --kunitconfig accept dirs, add lib/kunit fragment
TL;DR
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit

Per suggestion from Ted [1], we can reduce the amount of typing by
assuming a convention that these files are named '.kunitconfig'.

In the case of [1], we now have
$ ./tools/testing/kunit/kunit.py run --kunitconfig=fs/ext4

Also add in such a fragment for kunit itself so we can give that as an
example closer to home (and thus less likely to be accidentally
broken).

[1] https://lore.kernel.org/linux-ext4/YCNF4yP1dB97zzwD@mit.edu/
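
The convention itself is just a bit of path handling (a sketch with a
hypothetical helper name):

  import os

  def resolve_kunitconfig(path: str) -> str:
      # '--kunitconfig=fs/ext4' really means 'fs/ext4/.kunitconfig'.
      if os.path.isdir(path):
          return os.path.join(path, '.kunitconfig')
      return path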

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-04-02 14:14:36 -06:00