Message ID: 20240229005920.2407409-1-kuba@kernel.org
Series: selftests: kselftest_harness: support using xfail
Hello:

This series was applied to netdev/net-next.git (main)
by David S. Miller <davem@davemloft.net>:

On Wed, 28 Feb 2024 16:59:07 -0800 you wrote:
> Hi!
>
> When running selftests for our subsystem in our CI we'd like all
> tests to pass. Currently some tests use SKIP for cases they
> expect to fail, because the kselftest_harness limits the return
> codes to pass/fail/skip. XFAIL which would be a great match
> here cannot be used.
>
> [...]

Here is the summary with links:
  - [v4,01/12] selftests/landlock: Redefine TEST_F() as TEST_F_FORK()
    https://git.kernel.org/netdev/net-next/c/e74048650eaf
  - [v4,02/12] selftests/harness: Merge TEST_F_FORK() into TEST_F()
    https://git.kernel.org/netdev/net-next/c/0710a1a73fb4
  - [v4,03/12] selftests: kselftest_harness: use KSFT_* exit codes
    https://git.kernel.org/netdev/net-next/c/a724707976b0
  - [v4,04/12] selftests: kselftest_harness: generate test name once
    https://git.kernel.org/netdev/net-next/c/38c957f07038
  - [v4,05/12] selftests: kselftest_harness: save full exit code in metadata
    https://git.kernel.org/netdev/net-next/c/69fe8ec4f673
  - [v4,06/12] selftests: kselftest_harness: use exit code to store skip
    https://git.kernel.org/netdev/net-next/c/796a344fa431
  - [v4,07/12] selftests: kselftest: add ksft_test_result_code(), handling all exit codes
    https://git.kernel.org/netdev/net-next/c/fa1a53d83674
  - [v4,08/12] selftests: kselftest_harness: print test name for SKIP
    https://git.kernel.org/netdev/net-next/c/732e2035280b
  - [v4,09/12] selftests: kselftest_harness: separate diagnostic message with # in ksft_test_result_code()
    https://git.kernel.org/netdev/net-next/c/42ab727eb95f
  - [v4,10/12] selftests: kselftest_harness: let PASS / FAIL provide diagnostic
    https://git.kernel.org/netdev/net-next/c/378193eff339
  - [v4,11/12] selftests: kselftest_harness: support using xfail
    https://git.kernel.org/netdev/net-next/c/2709473c9386
  - [v4,12/12] selftests: ip_local_port_range: use XFAIL instead of SKIP
    https://git.kernel.org/netdev/net-next/c/c05bf0e93312

You are awesome, thank you!
On Mon, 4 Mar 2024 15:14:04 -0800 Kees Cook wrote:
> > Ugh, I'm guessing vfork() "eats" the signal, IOW grandchild signals,
> > child exits? vfork() and signals.. I'd rather leave to Kees || Mickael.
>
> Oh no, that does seem bad. Since Mickaël is also seeing weird issues,
> can we drop the vfork changes for now?

Seems doable, but won't be a simple revert. "drop" means we'd need
to bring ->step back. More or less go back to v3.

No preference here, Mickael?
On 3/5/24 00:04, Jakub Kicinski wrote:
> On Mon, 4 Mar 2024 22:20:03 +0000 Mark Brown wrote:
>> On Wed, Feb 28, 2024 at 04:59:07PM -0800, Jakub Kicinski wrote:
>>
>>> When running selftests for our subsystem in our CI we'd like all
>>> tests to pass. Currently some tests use SKIP for cases they
>>> expect to fail, because the kselftest_harness limits the return
>>> codes to pass/fail/skip. XFAIL which would be a great match
>>> here cannot be used.
>>>
>>> Remove the no_print handling and use vfork() to run the test in
>>> a different process than the setup. This way we don't need to
>>> pass "failing step" via the exit code. Further clean up the exit
>>> codes so that we can use all KSFT_* values. Rewrite the result
>>> printing to make handling XFAIL/XPASS easier. Support tests
>>> declaring combinations of fixture + variant they expect to fail.
>>
>> This series landed in -next today and has caused breakage on all
>> platforms in the ALSA pcmtest-driver test. When run on systems that
>> don't have the driver it needs loaded the test skip but since this
>> series was merged skipped tests are logged but then reported back as
>> failures:
>>
>> # selftests: alsa: test-pcmtest-driver
>> # TAP version 13
>> # 1..5
>> # # Starting 5 tests from 1 test cases.
>> # # RUN pcmtest.playback ...
>> # # SKIP Can't read patterns. Probably, module isn't loaded
>> # # playback: Test failed
>> # # FAIL pcmtest.playback
>> # not ok 1 pcmtest.playback # Can't read patterns. Probably, module isn't loaded
>> # # RUN pcmtest.capture ...
>> # # SKIP Can't read patterns. Probably, module isn't loaded
>> # # capture: Test failed
>> # # FAIL pcmtest.capture
>> # not ok 2 pcmtest.capture # Can't read patterns. Probably, module isn't loaded
>> # # RUN pcmtest.ni_capture ...
>> # # SKIP Can't read patterns. Probably, module isn't loaded
>> # # ni_capture: Test failed
>> # # FAIL pcmtest.ni_capture
>> # not ok 3 pcmtest.ni_capture # Can't read patterns. Probably, module isn't loaded
>> # # RUN pcmtest.ni_playback ...
>> # # SKIP Can't read patterns. Probably, module isn't loaded
>> # # ni_playback: Test failed
>> # # FAIL pcmtest.ni_playback
>> # not ok 4 pcmtest.ni_playback # Can't read patterns. Probably, module isn't loaded
>> # # RUN pcmtest.reset_ioctl ...
>> # # SKIP Can't read patterns. Probably, module isn't loaded
>> # # reset_ioctl: Test failed
>> # # FAIL pcmtest.reset_ioctl
>> # not ok 5 pcmtest.reset_ioctl # Can't read patterns. Probably, module isn't loaded
>> # # FAILED: 0 / 5 tests passed.
>> # # Totals: pass:0 fail:5 xfail:0 xpass:0 skip:0 error:0
>>
>> I haven't completely isolated the issue due to some other breakage
>> that's making it harder that it should be to test.
>>
>> A sample full log can be seen at:
>>
>> https://lava.sirena.org.uk/scheduler/job/659576#L1349
>
> Thanks! the exit() inside the skip evaded my grep, I'm testing this:
>
> diff --git a/tools/testing/selftests/alsa/test-pcmtest-driver.c b/tools/testing/selftests/alsa/test-pcmtest-driver.c
> index a52ecd43dbe3..7ab81d6f9e05 100644
> --- a/tools/testing/selftests/alsa/test-pcmtest-driver.c
> +++ b/tools/testing/selftests/alsa/test-pcmtest-driver.c
> @@ -127,11 +127,11 @@ FIXTURE_SETUP(pcmtest) {
>  	int err;
>  
>  	if (geteuid())
> -		SKIP(exit(-1), "This test needs root to run!");
> +		SKIP(exit(KSFT_SKIP), "This test needs root to run!");
>  
>  	err = read_patterns();
>  	if (err)
> -		SKIP(exit(-1), "Can't read patterns. Probably, module isn't loaded");
> +		SKIP(exit(KSFT_SKIP), "Can't read patterns. Probably, module isn't loaded");
>  
>  	card_name = malloc(127);
>  	ASSERT_NE(card_name, NULL);
> diff --git a/tools/testing/selftests/mm/hmm-tests.c b/tools/testing/selftests/mm/hmm-tests.c
> index 20294553a5dd..356ba5f3b68c 100644
> --- a/tools/testing/selftests/mm/hmm-tests.c
> +++ b/tools/testing/selftests/mm/hmm-tests.c
> @@ -138,7 +138,7 @@ FIXTURE_SETUP(hmm)
>  
>  	self->fd = hmm_open(variant->device_number);
>  	if (self->fd < 0 && hmm_is_coherent_type(variant->device_number))
> -		SKIP(exit(0), "DEVICE_COHERENT not available");
> +		SKIP(exit(KSFT_SKIP), "DEVICE_COHERENT not available");
>  	ASSERT_GE(self->fd, 0);
>  }
>  
> @@ -149,7 +149,7 @@ FIXTURE_SETUP(hmm2)
>  
>  	self->fd0 = hmm_open(variant->device_number0);
>  	if (self->fd0 < 0 && hmm_is_coherent_type(variant->device_number0))
> -		SKIP(exit(0), "DEVICE_COHERENT not available");
> +		SKIP(exit(KSFT_SKIP), "DEVICE_COHERENT not available");
>  	ASSERT_GE(self->fd0, 0);
>  	self->fd1 = hmm_open(variant->device_number1);
>  	ASSERT_GE(self->fd1, 0);
>
>> but there's no more context. I'm also seeing some breakage in the
>> seccomp selftests which also use kselftest-harness:
>>
>> # # RUN TRAP.dfl ...
>> # # dfl: Test exited normally instead of by signal (code: 0)
>> # # FAIL TRAP.dfl
>> # not ok 56 TRAP.dfl
>> # # RUN TRAP.ign ...
>> # # ign: Test exited normally instead of by signal (code: 0)
>> # # FAIL TRAP.ign
>> # not ok 57 TRAP.ign
>
> Ugh, I'm guessing vfork() "eats" the signal, IOW grandchild signals,
> child exits? vfork() and signals.. I'd rather leave to Kees || Mickael.

Hi, sorry for not trying to reproduce it locally and still commenting,
but my vfork() man page says:

| The child must not return from the current function or call
| exit(3) (which would have the effect of calling exit handlers
| established by the parent process and flushing the parent's stdio(3)
| buffers), but may call _exit(2).

And you still have some exit(3) calls.
On Tue, Mar 05, 2024 at 01:43:14AM -0800, Kees Cook wrote:
> On Mon, Mar 04, 2024 at 03:39:02PM -0800, Jakub Kicinski wrote:
> > On Mon, 4 Mar 2024 15:14:04 -0800 Kees Cook wrote:
> > > > Ugh, I'm guessing vfork() "eats" the signal, IOW grandchild signals,
> > > > child exits? vfork() and signals.. I'd rather leave to Kees || Mickael.
> > >
> > > Oh no, that does seem bad. Since Mickaël is also seeing weird issues,
> > > can we drop the vfork changes for now?
> >
> > Seems doable, but won't be a simple revert. "drop" means we'd need
> > to bring ->step back. More or less go back to v3.
>
> I think we have to -- other CIs are now showing the most of seccomp
> failing now. (And I can confirm this now -- I had only tested seccomp
> on earlier versions of the series.)

Sorry for the trouble, I found and fixed the vfork issues. I tested
with seccomp and Landlock. You can find a dedicated branch here (with
some Reviewed-by and Acked-by removed because of the changes):
https://git.kernel.org/pub/scm/linux/kernel/git/mic/linux.git/log/?h=kselftest-xfail-fix

Jakub, please send a v5 series with this updated patch and your
exit/_exit fixes.
On Tue, Mar 05, 2024 at 05:00:13PM +0100, Mickaël Salaün wrote:
> On Tue, Mar 05, 2024 at 04:48:06PM +0100, Przemek Kitszel wrote:
> > On 3/5/24 00:04, Jakub Kicinski wrote:
> > > [...]
> >
> > Hi, sorry for not trying to reproduce it locally and still commenting,
> > but my vfork() man page says:
> >
> > | The child must not return from the current function or call
> > | exit(3) (which would have the effect of calling exit handlers
> > | established by the parent process and flushing the parent's stdio(3)
> > | buffers), but may call _exit(2).
> >
> > And you still have some exit(3) calls.
>
> Correct, exit(3) should be replaced with _exit(2).

Well, I think we should be good even if some exit(3) calls remain,
because the environment in which the vfork() call happens is already
dedicated to the running test (with flushed stdio and a setpgrp() call);
see __run_test() and the fork() call just before running the
fixture/test/teardown. Even if the test configures its own exit
handlers, they will not be run by its parent, because it never calls
exit(), and the vfork()ed child either ends with a call to _exit() or a
signal.
On Tue, Mar 05, 2024 at 10:06:39AM -0800, Jakub Kicinski wrote:
> On Tue, 5 Mar 2024 17:05:51 +0100 Mickaël Salaün wrote:
> > > I think we have to -- other CIs are now showing the most of seccomp
> > > failing now. (And I can confirm this now -- I had only tested seccomp
> > > on earlier versions of the series.)
> >
> > Sorry for the trouble, I found and fixed the vfork issues. I tested
> > with seccomp and Landlock. You can find a dedicated branch here (with
> > some Reviewed-by and Acked-by removed because of the changes):
> > https://git.kernel.org/pub/scm/linux/kernel/git/mic/linux.git/log/?h=kselftest-xfail-fix
> >
> > Jakub, please send a v5 series with this updated patch and your
> > exit/_exit fixes.
>
> DaveM merged this already, unfortunately. Could send your changes
> as incremental fixes on top of net-next?

Ok, I'll send that today.
+kvm

On Tue, Mar 05, 2024, Mickaël Salaün wrote:
> On Tue, Mar 05, 2024 at 01:43:14AM -0800, Kees Cook wrote:
> > On Mon, Mar 04, 2024 at 03:39:02PM -0800, Jakub Kicinski wrote:
> > > On Mon, 4 Mar 2024 15:14:04 -0800 Kees Cook wrote:
> > > > > Ugh, I'm guessing vfork() "eats" the signal, IOW grandchild signals,
> > > > > child exits? vfork() and signals.. I'd rather leave to Kees || Mickael.
> > > >
> > > > Oh no, that does seem bad. Since Mickaël is also seeing weird issues,
> > > > can we drop the vfork changes for now?
> > >
> > > Seems doable, but won't be a simple revert. "drop" means we'd need
> > > to bring ->step back. More or less go back to v3.
> >
> > I think we have to -- other CIs are now showing the most of seccomp
> > failing now. (And I can confirm this now -- I had only tested seccomp
> > on earlier versions of the series.)
>
> Sorry for the trouble, I found and fixed the vfork issues.

Heh, you found and fixed _some of_ the vfork issues. This whole mess completely
breaks existing tests that use TEST_F() and exit() with non-zero values to
indicate failure, including failures that occur during FIXTURE_SETUP().

E.g. all of the KVM selftests that use KVM_ONE_VCPU_TEST() are broken and will
always show all tests as passing.

The below gets things working for KVM selftests again, but (a) I have no idea if
it's a complete fix, (b) I don't know if it will break other users of the harness,
and (c) I don't understand why spawning a grandchild is the default behavior, i.e.
why usage that has zero need of separating teardown from setup+run is subjected to
the complexity of the handful of tests that do.

diff --git a/tools/testing/selftests/kselftest_harness.h b/tools/testing/selftests/kselftest_harness.h
index 4fd735e48ee7..24e95828976f 100644
--- a/tools/testing/selftests/kselftest_harness.h
+++ b/tools/testing/selftests/kselftest_harness.h
@@ -391,7 +391,7 @@
 		fixture_name##_setup(_metadata, &self, variant->data); \
 		/* Let setup failure terminate early. */ \
 		if (_metadata->exit_code) \
-			_exit(0); \
+			_exit(_metadata->exit_code); \
 		_metadata->setup_completed = true; \
 		fixture_name##_##test_name(_metadata, &self, variant->data); \
 	} else if (child < 0 || child != waitpid(child, &status, 0)) { \
@@ -406,8 +406,10 @@
 	} \
 	if (_metadata->setup_completed && _metadata->teardown_parent) \
 		fixture_name##_teardown(_metadata, &self, variant->data); \
-	if (!WIFEXITED(status) && WIFSIGNALED(status)) \
-		/* Forward signal to __wait_for_test(). */ \
+	/* Forward exit codes and signals to __wait_for_test(). */ \
+	if (WIFEXITED(status)) \
+		_exit(WEXITSTATUS(status)); \
+	else if (WIFSIGNALED(status)) \
 		kill(getpid(), WTERMSIG(status)); \
 	__test_check_assert(_metadata); \
 } \
On Thu, May 02, 2024 at 11:42:29AM GMT, Sean Christopherson wrote:
> +kvm
>
> [...]
>
> Heh, you found and fixed _some of_ the vfork issues. This whole mess completely
> breaks existing tests that use TEST_F() and exit() with non-zero values to
> indicate failure, including failures that occur during FIXTURE_SETUP().
>
> E.g. all of the KVM selftests that use KVM_ONE_VCPU_TEST() are broken and will
> always show all tests as passing.
>
> The below gets things working for KVM selftests again, but (a) I have no idea if
> it's a complete fix, (b) I don't know if it will break other users of the harness,
> and (c) I don't understand why spawning a grandchild is the default behavior, i.e.
> why usage that has zero need of separating teardown from setup+run is subjected to
> the complexity of the handful of tests that do.

Thanks for the fix. I think it covers almost all cases. I'd handle the
remaining _exit() the same way, though. The grandchild changes were a
long-overdue patch from the time I added kselftest_harness.h and forked
the TEST_F() macro. I'll send a new patch series with this fix.

> [...]