2017-06-15 17:57:23

by Sumit Semwal

[permalink] [raw]
Subject: LTS testing with latest kselftests - some failures

Hello Greg, Shuah,

While testing 4.4.y and 4.9.y LTS kernels with latest kselftest, we
found a couple more test failures due to test-kernel mismatch:

1. Firmware tests: Linux 4.5 [1] and 4.10 [2] added a few updates to the
tests, with related updates to lib/test_firmware.c to improve them.
Stable 4.4 misses these patches to lib/test_firmware.c; stable 4.9
misses the second update.

2. Bitmap test: the test got added in 4.5, and it fails if test_bitmap.ko isn't present.

3. 'seccomp ptrace hole closure' patches got added in 4.7 [3],
feature and test together.
- This one also seems to be closing a security hole, and the
'feature' could be a candidate for stable backports, but Arnd tried
that and it was quite non-trivial. So perhaps we'll need some help
from the subsystem developers here.

For all three listed above, we will try to update the tests to exit gracefully.
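
As a sketch of what we mean by exiting gracefully (helper names and the
module check here are illustrative, not the actual patch; newer kselftests
use exit code 4 to mean "skipped" rather than "failed"):

```shell
#!/bin/sh
# Illustrative sketch only -- helper names and the module check are
# hypothetical, not the actual patch.
ksft_skip=4   # kselftest convention: exit code 4 means "skipped"

have_module() {
    # On a real system this would probe the kernel, e.g.:
    #   /sbin/modprobe -q -n "$1"
    # Stubbed here so the sketch is self-contained.
    [ "$1" = "test_firmware" ]
}

run_or_skip() {
    if ! have_module "$1"; then
        echo "$1: SKIP (not supported by this kernel)"
        return $ksft_skip
    fi
    echo "$1: OK"
}

run_or_skip test_firmware          # prints "test_firmware: OK"
run_or_skip test_bitmap || true    # prints "test_bitmap: SKIP (not supported by this kernel)"
```

The point is that a missing module or feature reports a skip rather than
a failure, so test-kernel mismatches stop showing up as regressions.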


4. bpf tests: These seem to have build failures in mainline as well.
I also tried to build kselftest-next, but a simple 'make -C
tools/testing/selftests/bpf' errors out. Are there any special
instructions to build these? [I tried x86_64, and arm64 cross-compiled on x86_64.]


I will also individually ask the subsystem authors / mailing lists for
help in improving each of these tests where required, but wanted to
use this thread as a converging point.

Thanks and Best regards,
Sumit.


[1]: https://lkml.org/lkml/2015/12/8/816
Patches added via [1]:
eb910947c82f (test: firmware_class: add asynchronous request trigger)
be4a1326d12c (test: firmware_class: use kstrndup() where appropriate)
47e0bbb7fa98 (test: firmware_class: report errors properly on failure)

[2]: https://lkml.org/lkml/2017/1/23/440
Patch added via [2]:
061132d2b9c9 (test_firmware: add test custom fallback trigger)

[3]: https://lkml.org/lkml/2016/6/9/627


2017-06-15 18:30:15

by Shuah Khan

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

Hi Sumit,

On 06/15/2017 11:56 AM, Sumit Semwal wrote:
> Hello Greg, Shuah,
>
> While testing 4.4.y and 4.9.y LTS kernels with latest kselftest, we
> found a couple more test failures due to test-kernel mismatch:
>
> 1. firmware tests: - linux 4.5 [1] and 4.10 [2] added a few updates to
> tests, and related updates to lib/test_firmware.c to improve the
> tests. Stable-4.4 misses these patches to lib/test_firmware.c. Stable
> 4.9 misses the second update.

I will take a look at the commit list you provided and see if it makes sense
to back-port. As we discussed earlier in this thread, fixes are something
that can be back-ported for inclusion in the stable releases. Updates will
not be.

> 2. Bitmap test - test got added in 4.5, fails if test_bitmap.ko isn't present.

If you can send a patch to have it exit gracefully and change the message
to say unsupported feature, I can pull that into mainline and mark it for
stable inclusion if it qualifies.

>
> 3. 'seccomp ptrace hole closure' patches got added in 4.7 [3] -
> feature and test together.
> - This one also seems like a security hole being closed, and the
> 'feature' could be a candidate for stable backports, but Arnd tried
> that, and it was quite non-trivial. So perhaps we'll need some help
> from the subsystem developers here.

This is something I need to look at and consult with Security maintainers.

>
> For all the 3 listed above, we will try and update the tests to gracefully exit.

That is great.

>
>
> 4. bpf tests: These seem to have build failures in mainline as well -
> I also tried to build kselftest-next, but a simple 'make -C
> tools/testing/selftests/bpf' seems to error out. Are there any special
> instructions to build these? [I tried x86_64, arm64 cross-compile on x86_64]

Hmm. I didn't notice this one. I will check and see what's going on there.

>
>
> I will also individually request subsystem authors / mailing lists for
> each of these towards help in improving these tests if required, but
> wanted to use this thread as a converging point.

Please cc me and linux-kselftest mailing list.

Thanks for reporting the problems.

>
>
> [1]: https://lkml.org/lkml/2015/12/8/816
> Patches added via [1]:
> eb910947c82f (test: firmware_class: add asynchronous request trigger)
> be4a1326d12c (test: firmware_class: use kstrndup() where appropriate)
> 47e0bbb7fa98 (test: firmware_class: report errors properly on failure)
>
> [2]: https://lkml.org/lkml/2017/1/23/440
> Patch added via [2]:
> 061132d2b9c9 (test_firmware: add test custom fallback trigger)
>
> [3]: https://lkml.org/lkml/2016/6/9/627
>
>

thanks,
-- Shuah

2017-06-15 23:05:08

by Alexander Alemayhu

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

On Thu, Jun 15, 2017 at 11:26:53PM +0530, Sumit Semwal wrote:
>
> 4. bpf tests: These seem to have build failures in mainline as well -
> I also tried to build kselftest-next, but a simple 'make -C
> tools/testing/selftests/bpf' seems to error out. Are there any special
> instructions to build these? [I tried x86_64, arm64 cross-compile on x86_64]
>
Do you have the full failure output? If you haven't already you might
also want to run 'make headers_install' in the top level directory.

--
Mit freundlichen Grüßen

Alexander Alemayhu

2017-06-16 04:32:02

by Sumit Semwal

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

Hi Alexander,

On 16 June 2017 at 04:35, Alexander Alemayhu <[email protected]> wrote:
> On Thu, Jun 15, 2017 at 11:26:53PM +0530, Sumit Semwal wrote:
>>
>> 4. bpf tests: These seem to have build failures in mainline as well -
>> I also tried to build kselftest-next, but a simple 'make -C
>> tools/testing/selftests/bpf' seems to error out. Are there any special
>> instructions to build these? [I tried x86_64, arm64 cross-compile on x86_64]
>>
> Do you have the full failure output? If you haven't already you might
> also want to run 'make headers_install' in the top level directory.

'make headers_install' was missing, but running it still didn't improve the
build - here's the pastebin: https://paste.debian.net/971652/

>
> --
> Mit freundlichen Grüßen
>
> Alexander Alemayhu

Best,
Sumit.

2017-06-16 07:14:07

by Alexander Alemayhu

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

On Fri, Jun 16, 2017 at 10:01:37AM +0530, Sumit Semwal wrote:
>
> make headers_install was missing, but it still didn't improve the
> build - here's the pastebin: https://paste.debian.net/971652/
>
Last time I saw similar kinds of errors, gcc libraries were missing.
Can you try rerunning after:

apt-get install -y gcc-multilib g++-multilib

You probably don't need or have most of them, but if the above doesn't
help, try

apt-get install -y make gcc libssl-dev bc libelf-dev libcap-dev \
clang gcc-multilib llvm libncurses5-dev git


--
Mit freundlichen Grüßen

Alexander Alemayhu

2017-06-16 07:38:28

by Sumit Semwal

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

Hi Alexander,

On 16 June 2017 at 12:44, Alexander Alemayhu <[email protected]> wrote:
> Last time I saw similar kinds of errors gcc libraries were missing.
> Can you try rerunning after
>
> apt-get install -y gcc-multilib g++-multilib

Thanks, this was quite helpful: the bpf tests now build on x86_64
with current mainline for me. Perhaps we should document these
dependencies somewhere?
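
For reference, roughly this is what worked for me on x86_64, as a sketch
(Debian/Ubuntu package names per your suggestion; treat the exact list as
approximate, since what is actually needed may vary by distro and release):

```shell
# Build dependencies for the bpf selftests (Debian/Ubuntu; list is
# approximate and distro-specific).
sudo apt-get install -y make gcc clang llvm libelf-dev libcap-dev \
    gcc-multilib g++-multilib

# From the top of the kernel tree: install the uapi headers first,
# then build the bpf selftests.
make headers_install
make -C tools/testing/selftests/bpf
```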

Best,
Sumit.

2017-06-16 16:46:55

by Luis Chamberlain

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

Kees, please review 47e0bbb7fa98 below.
Brian, please review be4a1326d12c below.

On Thu, Jun 15, 2017 at 11:26:53PM +0530, Sumit Semwal wrote:
> Hello Greg, Shuah,
>
> While testing 4.4.y and 4.9.y LTS kernels with latest kselftest,

To be clear, it seems you are taking the latest upstream kselftest and running
it against older stable kernels. Furthermore, you seem to run only the shell
script tests but with the older kselftest drivers? Is this all correct?
Otherwise it is unclear how you are running into the issues below.

Does 0-day do the same? I thought 0-day takes just the kselftest from each tree
submitted. That *seemed* to me like the way it was designed. Shuah?

What's the name of *this* testing effort BTW? Is this part of the overall
kselftest? Or is this something Linaro does for LTS kernels? If your effort
has a name, can you document it here so that others are aware:

https://bottest.wiki.kernel.org/

Replying below only to the firmware stuff.

> we found a couple more test failures due to test-kernel mismatch:
>
> 1. firmware tests: - linux 4.5 [1] and 4.10 [2] added a few updates to
> tests, and related updates to lib/test_firmware.c to improve the
> tests. Stable-4.4 misses these patches to lib/test_firmware.c. Stable
> 4.9 misses the second update.

<-- snip, skipped 2. and 3. -->

> For all the 3 listed above, we will try and update the tests to gracefully exit.

Hmm, this actually raises a good kselftest question:

I *thought* kselftests were run on par with the kernels, so we would
*not* take the latest upstream kselftests to test against older kernels. Is
this incorrect?

If this is indeed incorrect then you do have a problem and I understand
this email; however, this manual approach seems rather fragile. If I had
understood this practice was expected, I would have tried to design test
cases a bit differently. But it *also* begs the question of what to do when
the latest kselftest shell script requires some new knob from a test driver
which *is* fine to backport to the respective kselftest C test driver for an
older kernel. What makes this hard is that C test drivers may depend on new
APIs, so you might have to do some manual work to backport a fancy new API
in old ways. This makes me question the value of this mismatch between shell
and C test drivers in kselftests. Your effort seems to be all manual and
empirical? Did we design kselftests with this in mind? Even though using the
latest kselftest shell tests against older stable kernels with older
kselftest C drivers seems like a good idea (provided the above is resolved),
your current suggestion to just drop some tests seems worrisome and seems to
*invalidate* the gains of such an effort and all the pains you are going
through.

If you are just dropping patches / tests loosely, your approach could be
missing out on valid tests which *may* have missed out on their respective
stable patches.

The test_firmware async knobs are a good example, and so is the firmware
custom fallback trigger. These patches just extend test coverage, so they
help test the existing old kernel API.

It's not worth Cc'ing stable on those, as they are not fixing a stable
issue; however, their tests may at times reveal an issue which a subsequent
patch *does fix* and which *is* Cc'd to stable.

An alternative to the way you are doing things, if I understood it
correctly, would be for us to consider pegging as stable candidates not only
kselftest shell tests but also kselftest C driver extensions. Then, instead
of using the latest kselftests against older kernels, you could just use the
kselftest on the respective old stable kernel, and the *backport* effort
becomes part of the stable pipeline. Note I think this is very debatable...
and I would not be surprised if Greg does not like it, but it's worth
*considering* if there is indeed value to your currently attempted approach.

The alternative, of course, is to only use the kselftest from each
respective kernel, under the assumption that each stable fix does make its
way through.

So -- what is the measured value of your current approach? Do we have stats?

> I will also individually request subsystem authors / mailing lists for
> each of these towards help in improving these tests if required, but
> wanted to use this thread as a converging point.
>
> Thanks and Best regards,
> Sumit.
>
>
> [1]: https://lkml.org/lkml/2015/12/8/816
> Patches added via [1]:
> eb910947c82f (test: firmware_class: add asynchronous request trigger)

This is an example extension to the C test driver which could be useful for
kselftest.

> be4a1326d12c (test: firmware_class: use kstrndup() where appropriate)

I can't see this being a stable candidate; it's unclear why this has come up
in this thread?

> 47e0bbb7fa98 (test: firmware_class: report errors properly on failure)

Hrm, come to think of it, this *might* have been a stable fix; however, the
fix did not mention any specifics about a real issue. Kees?

> [2]: https://lkml.org/lkml/2017/1/23/440
> Patch added via [2]:
> 061132d2b9c9 (test_firmware: add test custom fallback trigger)

This is another C test driver extension for kselftest which is useful to test
old kernels.

Also, just a heads up: there are other stable fixes for firmware in the
pipeline, though they are not merged yet. In this case no new C test driver
functionality is extended, just shell. But the test extensions do help test
an old issue, so the test cases are worth cherry-picking into kselftests, as
there is a fix tagged for stable which is pending stable integration. Of
course, since they are not upstream yet, they still have to go through final
review and integration.

[PATCH 0/4] firmware: fix fallback mechanism by ignoring SIGCHLD
http://lkml.kernel.org/r/[email protected]

[PATCH 1/4] test_firmware: add test case for SIGCHLD on sync fallback
http://lkml.kernel.org/r/[email protected]

[PATCH 2/4] swait: add the missing killable swaits
http://lkml.kernel.org/r/[email protected]

[PATCH 3/4] firmware: avoid invalid fallback aborts by using killable swait
http://lkml.kernel.org/r/[email protected]

[PATCH 4/4] firmware: send -EINTR on signal abort on fallback mechanism
http://lkml.kernel.org/r/[email protected]

Luis

2017-06-16 19:26:53

by Alexander Alemayhu

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

On Fri, Jun 16, 2017 at 01:08:04PM +0530, Sumit Semwal wrote:
>
> Thanks, this was quite helpful, and so now bpf tests build on x86_64
> with current mainline for me. Perhaps we should document these
> somewhere, as dependencies?
>
There is already some documentation available[0], but something in the kernel
tree would be nice. Please send the patch(es) to netdev.

Thanks.

[0]: http://docs.cilium.io/en/latest/bpf/#development-environment

--
Mit freundlichen Grüßen

Alexander Alemayhu

2017-06-16 19:30:00

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

On Fri, Jun 16, 2017 at 06:46:51PM +0200, Luis R. Rodriguez wrote:
> Kees, please review 47e0bbb7fa98 below.
> Brian, please review be4a1326d12c below.
>
> On Thu, Jun 15, 2017 at 11:26:53PM +0530, Sumit Semwal wrote:
> > Hello Greg, Shuah,
> >
> > While testing 4.4.y and 4.9.y LTS kernels with latest kselftest,
>
> To be clear, it seems you are taking the latest upstream kselftest and running
> it against older stable kernels. Furthermore, you seem to run only the shell
> script tests but with the older kselftest drivers? Is this all correct?
> Otherwise it is unclear how you are running into the issues below.
>
> Does 0-day do the same? I thought 0-day takes just the kselftest from each tree
> submitted. That *seemed* to me like the way it was designed. Shuah?
>
> What's the name of *this* testing effort BTW? Is this part of the overall
> kselftest? Or is this something Linaro does for LTS kernels? If there
> is a name to your effort can you document it here so that others are aware:

It's a "test LTS kernels to make sure Greg didn't break anything" type
of testing effort that Linaro is helping out with.

This could also be called, "it's about time someone did this..." :)

> > we found a couple more test failures due to test-kernel mismatch:
> >
> > 1. firmware tests: - linux 4.5 [1] and 4.10 [2] added a few updates to
> > tests, and related updates to lib/test_firmware.c to improve the
> > tests. Stable-4.4 misses these patches to lib/test_firmware.c. Stable
> > 4.9 misses the second update.
>
> <-- snip, skipped 2. and 3. -->
>
> > For all the 3 listed above, we will try and update the tests to gracefully exit.
>
> Hmm, this actually raises a good kselftest question:
>
> I *thought* kselftests were run on par with the kernels, so we would
> *not* take the latest upstream kselftests to test against older kernels. Is
> this incorrect?

That is incorrect. Your test should always degrade gracefully if the
feature is not present in the kernel under test. If the test is for a
bug that was fixed, then that fix should also go to a stable kernel
release.

thanks,

greg k-h

2017-06-16 19:47:30

by Luis Chamberlain

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

On Fri, Jun 16, 2017 at 09:29:52PM +0200, Greg Kroah-Hartman wrote:
> On Fri, Jun 16, 2017 at 06:46:51PM +0200, Luis R. Rodriguez wrote:
> > Kees, please review 47e0bbb7fa98 below.
> > Brian, please review be4a1326d12c below.
> >
> > On Thu, Jun 15, 2017 at 11:26:53PM +0530, Sumit Semwal wrote:
> > > Hello Greg, Shuah,
> > >
> > > While testing 4.4.y and 4.9.y LTS kernels with latest kselftest,
> >
> > To be clear, it seems you are taking the latest upstream kselftest and running
> > it against older stable kernels. Furthermore, you seem to run only the shell
> > script tests but with the older kselftest drivers? Is this all correct?
> > Otherwise it is unclear how you are running into the issues below.
> >
> > Does 0-day do the same? I thought 0-day takes just the kselftest from each tree
> > submitted. That *seemed* to me like the way it was designed. Shuah?
> >
> > What's the name of *this* testing effort BTW? Is this part of the overall
> > kselftest? Or is this something Linaro does for LTS kernels? If there
> > is a name to your effort can you document it here so that others are aware:
>
> It's a "test LTS kernels to make sure Greg didn't break anything" type
> of testing effort that Linaro is helping out with.

OK, so it's "standard" :)

> This could also be called, "it's about time someone did this..." :)

Good to know!

> > > we found a couple more test failures due to test-kernel mismatch:
> > >
> > > 1. firmware tests: - linux 4.5 [1] and 4.10 [2] added a few updates to
> > > tests, and related updates to lib/test_firmware.c to improve the
> > > tests. Stable-4.4 misses these patches to lib/test_firmware.c. Stable
> > > 4.9 misses the second update.
> >
> > <-- snip, skipped 2. and 3. -->
> >
> > > For all the 3 listed above, we will try and update the tests to gracefully exit.
> >
> > Hmm, this actually raises a good kselftest question:
> >
> > I *thought* kselftests were run on par with the kernels, so we would
> > *not* take the latest upstream kselftests to test against older kernels. Is
> > this incorrect?
>
> That is incorrect. Your test should always degrade gracefully if the
> feature is not present in the kernel under test.

OK perfect, now I know to look for knobs in the shell tests to ensure this
doesn't happen again.

Some of the knobs, however, are for extending tests for existing APIs in
older kernels; the async and custom fallback ones are an example. There is a
series of test cases added later which could help test LTS kernels. Would
Linaro pick up these test driver enhancements to help increase test
coverage? Or is it not worth it? If it is worth it, then what I was curious
about is how to make it easier for this process to bloom.

> If the test is for a
> bug that was fixed, then that fix should also go to a stable kernel
> release.

Indeed, that was perfectly clear.

Luis

2017-06-16 23:55:55

by Fengguang Wu

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

On Fri, Jun 16, 2017 at 06:46:51PM +0200, Luis R. Rodriguez wrote:
>Kees, please review 47e0bbb7fa98 below.
>Brian, please review be4a1326d12c below.
>
>On Thu, Jun 15, 2017 at 11:26:53PM +0530, Sumit Semwal wrote:
>> Hello Greg, Shuah,
>>
>> While testing 4.4.y and 4.9.y LTS kernels with latest kselftest,
>
>To be clear, it seems you are taking the latest upstream kselftest and running
>it against older stable kernels. Furthermore, you seem to run only the shell
>script tests but with the older kselftest drivers? Is this all correct?
>Otherwise it is unclear how you are running into the issues below.
>
>Does 0-day do the same? I thought 0-day takes just the kselftest from each tree
>submitted. That *seemed* to me like the way it was designed. Shuah?

Yes, in 0-day we run the kselftest code corresponding to the current kernel.

Thanks,
Fengguang

2017-06-17 04:16:43

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

On Fri, Jun 16, 2017 at 09:47:21PM +0200, Luis R. Rodriguez wrote:
> Some of the knobs, however, are for extending tests for existing APIs in
> older kernels; the async and custom fallback ones are an example. There is a
> series of test cases added later which could help test LTS kernels. Would
> Linaro pick up these test driver enhancements to help increase test
> coverage? Or is it not worth it? If it is worth it, then what I was curious
> about is how to make it easier for this process to bloom.

I don't understand; what do you mean by "pick these test driver
enhancements"? What kind of "knobs" are there in tests? Shouldn't the
tests "just work", with no special configuration of the tests needed?
No user is going to know to enable something special.

Make the tests "just work" please, because given the large number of
them, no one is going to know to look for special things.

thanks,

greg k-h

2017-06-19 14:48:10

by Luis Chamberlain

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

On Sat, Jun 17, 2017 at 06:16:35AM +0200, Greg Kroah-Hartman wrote:
> On Fri, Jun 16, 2017 at 09:47:21PM +0200, Luis R. Rodriguez wrote:
> > Some of the knobs, however, are for extending tests for existing APIs in
> > older kernels; the async and custom fallback ones are an example. There is a
> > series of test cases added later which could help test LTS kernels. Would
> > Linaro pick up these test driver enhancements to help increase test
> > coverage? Or is it not worth it? If it is worth it, then what I was curious
> > about is how to make it easier for this process to bloom.
>
> I don't understand; what do you mean by "pick these test driver
> enhancements"? What kind of "knobs" are there in tests? Shouldn't the
> tests "just work", with no special configuration of the tests needed?
> No user is going to know to enable something special.

Test driver knobs; for instance, the async and custom patches referenced
enable the shell script to use the async API and the custom fallback API.
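
Concretely, the shell test could probe for the knob before using it, so
that an older kernel skips that subtest instead of failing. A rough sketch
(the sysfs path matches what upstream lib/test_firmware.c exposes, but
treat the details as illustrative):

```shell
#!/bin/sh
# Sketch: only exercise the async trigger if the kernel under test
# exposes it. Path per upstream lib/test_firmware.c; illustrative only.
DIR=/sys/devices/virtual/misc/test_firmware

test_async_if_supported() {
    if [ ! -f "$DIR"/trigger_async_request ]; then
        echo "async trigger not present on this kernel, skipping"
        return 0
    fi
    # The newer-API subtest would go here, e.g.:
    #   echo -n "test-firmware.bin" > "$DIR"/trigger_async_request
    echo "async trigger exercised"
}

test_async_if_supported
```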

Luis

2017-06-19 14:55:09

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

On Mon, Jun 19, 2017 at 04:48:05PM +0200, Luis R. Rodriguez wrote:
> On Sat, Jun 17, 2017 at 06:16:35AM +0200, Greg Kroah-Hartman wrote:
> > On Fri, Jun 16, 2017 at 09:47:21PM +0200, Luis R. Rodriguez wrote:
> > > Some of the knobs, however, are for extending tests for existing APIs in
> > > older kernels; the async and custom fallback ones are an example. There is a
> > > series of test cases added later which could help test LTS kernels. Would
> > > Linaro pick up these test driver enhancements to help increase test
> > > coverage? Or is it not worth it? If it is worth it, then what I was curious
> > > about is how to make it easier for this process to bloom.
> >
> > I don't understand; what do you mean by "pick these test driver
> > enhancements"? What kind of "knobs" are there in tests? Shouldn't the
> > tests "just work", with no special configuration of the tests needed?
> > No user is going to know to enable something special.
>
> Test driver knobs; for instance, the async and custom patches referenced
> enable the shell script to use the async API and the custom fallback API.

Ah, testing kernel code, that makes more sense. I don't really know; if
the APIs are present in the older kernel trees, I don't have a problem with
having them backported to stable kernel releases, as this isn't code that
people are actually running on a "normal" system.

thanks,

greg k-h

2017-06-19 17:32:42

by Luis Chamberlain

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

On Mon, Jun 19, 2017 at 10:55:01PM +0800, Greg Kroah-Hartman wrote:
> On Mon, Jun 19, 2017 at 04:48:05PM +0200, Luis R. Rodriguez wrote:
> > On Sat, Jun 17, 2017 at 06:16:35AM +0200, Greg Kroah-Hartman wrote:
> > > On Fri, Jun 16, 2017 at 09:47:21PM +0200, Luis R. Rodriguez wrote:
> > > > Some of the knobs, however, are for extending tests for existing APIs in
> > > > older kernels; the async and custom fallback ones are an example. There is a
> > > > series of test cases added later which could help test LTS kernels. Would
> > > > Linaro pick up these test driver enhancements to help increase test
> > > > coverage? Or is it not worth it? If it is worth it, then what I was curious
> > > > about is how to make it easier for this process to bloom.
> > >
> > > I don't understand; what do you mean by "pick these test driver
> > > enhancements"? What kind of "knobs" are there in tests? Shouldn't the
> > > tests "just work", with no special configuration of the tests needed?
> > > No user is going to know to enable something special.
> >
> > Test driver knobs; for instance, the async and custom patches referenced
> > enable the shell script to use the async API and the custom fallback API.
>
> Ah, testing kernel code, that makes more sense. I don't really know; if
> the APIs are present in the older kernel trees, I don't have a problem with
> having them backported to stable kernel releases, as this isn't code that
> people are actually running on a "normal" system.

Wonderful, I will then tag test driver changes for stable when this fits. I
really do think this will make test coverage better.

Luis

2017-06-19 18:56:15

by Kees Cook

[permalink] [raw]
Subject: Re: LTS testing with latest kselftests - some failures

On Fri, Jun 16, 2017 at 9:46 AM, Luis R. Rodriguez <[email protected]> wrote:
>> 47e0bbb7fa98 (test: firmware_class: report errors properly on failure)
>
> Hrm, come to think of it, this *might* have been a stable fix, however the
> fix did not mention any specific about real issue with this. Kees?

This was mostly a cosmetic fix, though it does fix the return code. It
can certainly go to stable, but I try to push only more critical
things to -stable.

-Kees

--
Kees Cook
Pixel Security