On Thu, 20 Sep 2012, Matthew Garrett wrote:
> This is pretty much identical to the first patchset, but with the capability
> renamed (CAP_COMPROMISE_KERNEL) and the kexec patch dropped. If anyone wants
> to deploy these then they should disable kexec until support for signed
> kexec payloads has been merged.
Apparently your patchset currently doesn't handle device firmware loading,
nor do you seem to mention it in the comments.
I believe signed firmware loading should be put on the plate as well, right?
Thanks,
--
Jiri Kosina
SUSE Labs
On Mon, Oct 29, 2012 at 08:49:41AM +0100, Jiri Kosina wrote:
> On Thu, 20 Sep 2012, Matthew Garrett wrote:
>
> > This is pretty much identical to the first patchset, but with the capability
> > renamed (CAP_COMPROMISE_KERNEL) and the kexec patch dropped. If anyone wants
> > to deploy these then they should disable kexec until support for signed
> > kexec payloads has been merged.
>
> Apparently your patchset currently doesn't handle device firmware loading,
> nor do you seem to mention it in the comments.
Correct.
> I believe signed firmware loading should be put on the plate as well, right?
I think that's definitely something that should be covered. I hadn't
worried about it immediately as any attack would be limited to machines
with a specific piece of hardware, and the attacker would need to expend
a significant amount of reverse engineering work on the firmware - and
we'd probably benefit from them doing that in the long run...
Having said that, yes, we should worry about this. Firmware distribution
licenses often forbid any distribution of modified versions, so
signatures would probably need to be detached. udev could easily glue
together a signature and firmware when loading, but if we're moving
towards an in-kernel firmware loader for the common case then it'll need
to be implemented there as well.
--
Matthew Garrett | [email protected]
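The gluing step Matthew mentions can be sketched in userspace. This is a minimal illustration only: the appended-signature layout and `MARKER` string follow the kernel-module convention, and the file names and fake signature are made up.

```python
# Sketch of userspace "gluing": concatenate a vendor firmware file with its
# detached signature before handing the blob to the kernel. The appended-
# signature layout mimics the kernel module convention; names are illustrative.
import hashlib, os, tempfile

MARKER = b"~Module signature appended~\n"

def glue(firmware_path: str, sig_path: str) -> bytes:
    """Return firmware with the detached signature appended, plus a marker."""
    with open(firmware_path, "rb") as f:
        fw = f.read()
    with open(sig_path, "rb") as f:
        sig = f.read()
    return fw + sig + MARKER

# Demo: build a firmware file and a (fake) detached signature, then glue.
d = tempfile.mkdtemp()
fw_file = os.path.join(d, "radio.bin")
sig_file = fw_file + ".sig"
with open(fw_file, "wb") as f:
    f.write(b"vendor firmware bytes")
with open(sig_file, "wb") as f:
    f.write(hashlib.sha256(b"vendor firmware bytes").digest())

blob = glue(fw_file, sig_file)
assert blob.endswith(MARKER)
assert blob.startswith(b"vendor firmware bytes")
```

The vendor file on disk stays byte-identical; only the blob handed to the kernel carries the signature, which is the point of keeping the signature detached.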
On Mon, 29 Oct 2012, Matthew Garrett wrote:
> > > This is pretty much identical to the first patchset, but with the capability
> > > renamed (CAP_COMPROMISE_KERNEL) and the kexec patch dropped. If anyone wants
> > > to deploy these then they should disable kexec until support for signed
> > > kexec payloads has been merged.
> >
> > Apparently your patchset currently doesn't handle device firmware loading,
> > nor do you seem to mention it in the comments.
>
> Correct.
>
> > I believe signed firmware loading should be put on the plate as well, right?
>
> I think that's definitely something that should be covered. I hadn't
> worried about it immediately as any attack would be limited to machines
> with a specific piece of hardware, and the attacker would need to expend
> a significant amount of reverse engineering work on the firmware - and
> we'd probably benefit from them doing that in the long run...
Now -- how about resuming from S4?
Reading stored memory image (potentially tampered before reboot) from disk
is basically DMA-ing arbitrary data over the whole RAM. I am currently not
able to imagine a scenario in which this could be made "secure" (without
storing private keys to sign the hibernation image on the machine itself
which, well, doesn't sound secure either).
--
Jiri Kosina
SUSE Labs
On Wed, Oct 31, 2012 at 10:50 AM, Jiri Kosina <[email protected]> wrote:
> On Mon, 29 Oct 2012, Matthew Garrett wrote:
>
>> > > This is pretty much identical to the first patchset, but with the capability
>> > > renamed (CAP_COMPROMISE_KERNEL) and the kexec patch dropped. If anyone wants
>> > > to deploy these then they should disable kexec until support for signed
>> > > kexec payloads has been merged.
>> >
>> > Apparently your patchset currently doesn't handle device firmware loading,
>> > nor do you seem to mention it in the comments.
>>
>> Correct.
>>
>> > I believe signed firmware loading should be put on the plate as well, right?
>>
>> I think that's definitely something that should be covered. I hadn't
>> worried about it immediately as any attack would be limited to machines
>> with a specific piece of hardware, and the attacker would need to expend
>> a significant amount of reverse engineering work on the firmware - and
>> we'd probably benefit from them doing that in the long run...
>
> Now -- how about resuming from S4?
>
> Reading stored memory image (potentially tampered before reboot) from disk
> is basically DMA-ing arbitrary data over the whole RAM. I am currently not
> able to imagine a scenario in which this could be made "secure" (without
> storing private keys to sign the hibernation image on the machine itself
> which, well, doesn't sound secure either).
I have a patch that disables that. I imagine it will be included in the
next submission of the patchset.
You can find it here in the meantime:
http://jwboyer.fedorapeople.org/pub/0001-hibernate-Disable-in-a-Secure-Boot-environment.patch
josh
On 10/31/2012 10:54 AM, Josh Boyer wrote:
> On Wed, Oct 31, 2012 at 10:50 AM, Jiri Kosina <[email protected]> wrote:
>> On Mon, 29 Oct 2012, Matthew Garrett wrote:
>>
>>>>> This is pretty much identical to the first patchset, but with the capability
>>>>> renamed (CAP_COMPROMISE_KERNEL) and the kexec patch dropped. If anyone wants
>>>>> to deploy these then they should disable kexec until support for signed
>>>>> kexec payloads has been merged.
>>>> Apparently your patchset currently doesn't handle device firmware loading,
>>>> nor do you seem to mention it in the comments.
>>> Correct.
>>>
>>>> I believe signed firmware loading should be put on the plate as well, right?
>>> I think that's definitely something that should be covered. I hadn't
>>> worried about it immediately as any attack would be limited to machines
>>> with a specific piece of hardware, and the attacker would need to expend
>>> a significant amount of reverse engineering work on the firmware - and
>>> we'd probably benefit from them doing that in the long run...
>> Now -- how about resuming from S4?
>>
>> Reading stored memory image (potentially tampered before reboot) from disk
>> is basically DMA-ing arbitrary data over the whole RAM. I am currently not
>> able to imagine a scenario in which this could be made "secure" (without
>> storing private keys to sign the hibernation image on the machine itself
>> which, well, doesn't sound secure either).
> I have a patch that disables that. I imagine it will be included in the
> next submission of the patchset.
>
> You can find it here in the meantime:
>
> http://jwboyer.fedorapeople.org/pub/0001-hibernate-Disable-in-a-Secure-Boot-environment.patch
>
> josh
> --
> To unsubscribe from this list: send the line "unsubscribe linux-efi" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
Perhaps this is overkill and too EFI-specific, but on systems (like EFI)
where there is firmware-managed storage that is protected from unsigned
access (e.g. EFI vars), couldn't the kernel store a hash of the
hibernation image in that storage?
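The idea above can be sketched as follows. This is a toy model: a plain dict stands in for the firmware-managed store, and the variable name `HibernationImageHash` is made up.

```python
import hashlib

# Stand-in for an authenticated EFI variable: writable only by code that has
# passed the firmware's signature checks, readable by anyone.
protected_store = {}

def hibernate(image: bytes):
    """On suspend, record the image hash in the protected store."""
    protected_store["HibernationImageHash"] = hashlib.sha256(image).digest()

def resume(image: bytes) -> bool:
    """On resume, accept only an image whose hash matches the stored one."""
    expected = protected_store.get("HibernationImageHash")
    return expected is not None and hashlib.sha256(image).digest() == expected

hibernate(b"memory image as written by the kernel")
assert resume(b"memory image as written by the kernel")
assert not resume(b"memory image tampered with on disk")
```

Because only the trusted kernel can write the variable, an attacker who rewrites the on-disk image cannot update the stored hash to match.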
On Wed, Oct 31, 2012 at 03:50:00PM +0100, Jiri Kosina wrote:
> Reading stored memory image (potentially tampered before reboot) from disk
> is basically DMA-ing arbitrary data over the whole RAM. I am currently not
> able to imagine a scenario in which this could be made "secure" (without
> storing private keys to sign the hibernation image on the machine itself
> which, well, doesn't sound secure either).
shim generates a public and private key. It hands the kernel the private
key in a boot parameter and stores the public key in a boot variable. On
suspend, the kernel signs the suspend image with that private key and
discards it. On the next boot, shim generates a new key pair and hands
the new private key to the kernel along with the old public key. The
kernel verifies the suspend image before resuming it. The only way to
subvert this would be to be able to access kernel memory directly, which
means the attacker has already won.
Now someone just needs to write it.
--
Matthew Garrett | [email protected]
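The scheme described here can be modeled in a few lines. This is only a toy sketch: a deliberately weak textbook-RSA keypair with tiny fixed primes stands in for whatever real signature algorithm shim would use, and none of the shim/kernel interfaces are real.

```python
import hashlib

def make_keypair():
    # Tiny fixed primes: illustration only, cryptographically worthless.
    p, q, e = 10007, 10009, 65537
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)
    return (n, e), (n, d)               # (public key, private key)

def sign(priv, blob):
    n, d = priv
    h = int.from_bytes(hashlib.sha256(blob).digest(), "big") % n
    return pow(h, d, n)

def verify(pub, blob, sig):
    n, e = pub
    h = int.from_bytes(hashlib.sha256(blob).digest(), "big") % n
    return pow(sig, e, n) == h

# Boot 1: shim generates a keypair, stores the public half in a boot
# variable, and hands the private half to the kernel.
pub1, priv1 = make_keypair()

image = b"hibernation image contents"
sig = sign(priv1, image)   # the kernel signs the image on suspend...
del priv1                  # ...and discards the private key.

# Boot 2: shim passes the stored public key to the freshly booted kernel,
# which verifies the image before resuming it.
assert verify(pub1, image, sig)
assert not verify(pub1, b"tampered image", sig)
```

The essential property is that the private key exists only in kernel memory and only until the image is signed, so an attacker who can merely write to the swap partition never sees it.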
On 10/31/2012 11:02 AM, Matthew Garrett wrote:
> On Wed, Oct 31, 2012 at 03:50:00PM +0100, Jiri Kosina wrote:
>
>> Reading stored memory image (potentially tampered before reboot) from disk
>> is basically DMA-ing arbitrary data over the whole RAM. I am currently not
>> able to imagine a scenario in which this could be made "secure" (without
>> storing private keys to sign the hibernation image on the machine itself
>> which, well, doesn't sound secure either).
> shim generates a public and private key. It hands the kernel the private
> key in a boot parameter and stores the public key in a boot variable. On
> suspend, the kernel signs the suspend image with that private key and
> discards it. On the next boot, shim generates a new key pair and hands
> the new private key to the kernel along with the old public key. The
> kernel verifies the suspend image before resuming it. The only way to
> subvert this would be to be able to access kernel memory
Or the boot variable where you stored the key, but in that case I'd say
the attacker has won too.
> directly, which
> means the attacker has already won.
>
> Now someone just needs to write it.
>
On Wed, Oct 31, 2012 at 11:05:08AM -0400, Shea Levy wrote:
> Or the boot variable where you stored the key, but in that case I'd
> say the attacker has won too.
Right, in that case they can compromise MOK.
--
Matthew Garrett | [email protected]
> > is basically DMA-ing arbitrary data over the whole RAM. I am currently not
> > able to imagine a scenario in which this could be made "secure" (without
> > storing private keys to sign the hibernation image on the machine itself
> > which, well, doesn't sound secure either).
That's what the TPM is for (in fact all of this stuff can be done
properly with a TPM while the 'secure' boot stuff can do little if any of
it.
>
> I have a patch that disables that. I imagine it will be included in the
> next submission of the patchset.
>
> You can find it here in the meantime:
>
> http://jwboyer.fedorapeople.org/pub/0001-hibernate-Disable-in-a-Secure-Boot-environment.patch
All this depends on your threat model. If I have physical access to
suspend/resume your machine then you already lost. If I don't have
physical access then I can't boot my unsigned OS to patch your S4 image
so it doesn't matter.
In fact the more I think about this the more it seems disabling hibernate
is basically farting in the wind.
Alan
On Wed, 31 Oct 2012, Alan Cox wrote:
> All this depends on your threat model. If I have physical access to
> suspend/resume your machine then you already lost. If I don't have
> physical access then I can't boot my unsigned OS to patch your S4 image
> so it doesn't matter.
Prepare (as root) a hand-crafted image, reboot, let the kernel resume
from that artificial image.
It can be viewed as a very obscure way of rewriting the kernel through
/dev/mem (which is obviously not possible when in a 'secure boot'
environment).
--
Jiri Kosina
SUSE Labs
1) Gain root.
2) Modify swap partition directly.
3) Force reboot.
4) Win.
Root should not have the ability to elevate themselves to running
arbitrary kernel code. Therefore, the above attack needs to be
impossible.
--
Matthew Garrett | [email protected]
On Wed, 31 Oct 2012, Josh Boyer wrote:
> I have a patch that disables that. I imagine it will be included in the
> next submission of the patchset.
>
> You can find it here in the meantime:
>
> http://jwboyer.fedorapeople.org/pub/0001-hibernate-Disable-in-a-Secure-Boot-environment.patch
I don't see that patch touching kernel/power/user.c, so using 's2disk' to
suspend the machine still seems to be possible even with this patch applied,
right?
--
Jiri Kosina
SUSE Labs
On Wed, Oct 31, 2012 at 12:04 PM, Jiri Kosina <[email protected]> wrote:
> On Wed, 31 Oct 2012, Josh Boyer wrote:
>
>> I have a patch that disables that. I imagine it will be included in the
>> next submission of the patchset.
>>
>> You can find it here in the meantime:
>>
>> http://jwboyer.fedorapeople.org/pub/0001-hibernate-Disable-in-a-Secure-Boot-environment.patch
>
> I don't see that patch touching kernel/power/user.c, so using 's2disk' to
> suspend the machine still seems to be possible even with this patch applied,
> right?
Oh, yes. Good catch. I'll add similar checks there as well in the next
revision. Thanks!
josh
On Wed, 31 Oct 2012 16:55:04 +0100 (CET)
Jiri Kosina <[email protected]> wrote:
> On Wed, 31 Oct 2012, Alan Cox wrote:
>
> > All this depends on your threat model. If I have physical access to
> > suspend/resume your machine then you already lost. If I don't have
> > physical access then I can't boot my unsigned OS to patch your S4 image
> > so it doesn't matter.
>
> Prepare (as root) a hand-crafted image, reboot, let the kernel resume
> from that artificial image.
It's not signed. It won't reboot from that image.
Alan
On 10/31/2012 01:03 PM, Alan Cox wrote:
> On Wed, 31 Oct 2012 16:55:04 +0100 (CET)
> Jiri Kosina <[email protected]> wrote:
>
>> On Wed, 31 Oct 2012, Alan Cox wrote:
>>
>>> All this depends on your threat model. If I have physical access to
>>> suspend/resume your machine then you already lost. If I don't have
>>> physical access then I can't boot my unsigned OS to patch your S4 image
>>> so it doesn't matter.
>> Prepare (as root) a hand-crafted image, reboot, let the kernel resume
>> from that artificial image.
> It's not signed. It won't reboot from that image.
So then to hibernate the kernel must have a signing key?
> Alan
On Wed, 31 Oct 2012 15:56:35 +0000
Matthew Garrett <[email protected]> wrote:
> 1) Gain root.
> 2) Modify swap partition directly.
> 3) Force reboot.
> 4) Win.
>
> Root should not have the ability to elevate themselves to running
> arbitrary kernel code. Therefore, the above attack needs to be
> impossible.
To protect swap you need to basically disallow any unencrypted swap (as
the OS can't prove any given swap device is local and inside the case) and
disallow the use of most disk management tools (so you'll need to write a
few new management interfaces or implement the BPF based command filters
that have been discussed for years).
In addition of course there is no requirement that a device returns
the data you put on it so subverted removable media is a potential issue.
Or indeed just cheap memory sticks that do it anyway ;)
Oh and of course the file systems in default mode don't guarantee this so
you'll need to fix ext3, ext4 8)
On 10/31/2012 01:08 PM, Alan Cox wrote:
> On Wed, 31 Oct 2012 15:56:35 +0000
> Matthew Garrett <[email protected]> wrote:
>
>> 1) Gain root.
>> 2) Modify swap partition directly.
>> 3) Force reboot.
>> 4) Win.
>>
>> Root should not have the ability to elevate themselves to running
>> arbitrary kernel code. Therefore, the above attack needs to be
>> impossible.
> To protect swap you need to basically disallow any unencrypted swap (as
> the OS can't prove any given swap device is local and inside the case) and
> disallow the use of most disk management tools (so you'll need to write a
> few new management interfaces or implement the BPF based command filters
> that have been discussed for years).
Can any kernel memory get swapped? If not, all root can do is mess with
the memory of other userspace processes, which isn't a use-case that
secure boot cares about, from my understanding.
> In addition of course there is no requirement that a device returns
> the data you put on it so subverted removable media is a potential issue.
> Or indeed just cheap memory sticks that do it anyway ;)
>
> Oh and of course the file systems in default mode don't guarantee this so
> you'll need to fix ext3, ext4 8)
>
On Wed, Oct 31, 2012 at 05:03:34PM +0000, Alan Cox wrote:
> On Wed, 31 Oct 2012 16:55:04 +0100 (CET)
> Jiri Kosina <[email protected]> wrote:
> > Prepare (as root) a hand-crafted image, reboot, let the kernel resume
> > from that artificial image.
>
> It's not signed. It won't reboot from that image.
The kernel is signed. The kernel doesn't check the signature on the
suspend image.
--
Matthew Garrett | [email protected]
> >> Prepare (as root) a hand-crafted image, reboot, let the kernel resume
> >> from that artificial image.
> > It's not signed. It won't reboot from that image.
>
> So then to hibernate the kernel must have a signing key?
No.
If you break the kernel so you can patch swap, we already lost.
If you add a new bootable image and reboot, your image won't boot anyway.
If you've got physical access, you've already won.
So you can't break the swap image before hibernation. You can't boot
something else to tamper with it, and you've not got physical access.
On Wed, 31 Oct 2012 17:10:48 +0000
Matthew Garrett <[email protected]> wrote:
> On Wed, Oct 31, 2012 at 05:03:34PM +0000, Alan Cox wrote:
> > On Wed, 31 Oct 2012 16:55:04 +0100 (CET)
> > Jiri Kosina <[email protected]> wrote:
> > > Prepare (as root) a hand-crafted image, reboot, let the kernel resume
> > > from that artificial image.
> >
> > It's not signed. It won't reboot from that image.
>
> The kernel is signed. The kernel doesn't check the signature on the
> suspend image.
Which doesn't matter. How are you going to create the tampered image in
the first place ?
On Wed, Oct 31, 2012 at 05:21:21PM +0000, Alan Cox wrote:
> On Wed, 31 Oct 2012 17:10:48 +0000
> Matthew Garrett <[email protected]> wrote:
> > The kernel is signed. The kernel doesn't check the signature on the
> > suspend image.
>
> Which doesn't matter. How are you going to create the tampered image in
> the first place ?
By booting a signed kernel, not turning on swap and writing directly to
the swap partition.
--
Matthew Garrett | [email protected]
On Wed, 31 Oct 2012, Alan Cox wrote:
> > > > Prepare (as root) a hand-crafted image, reboot, let the kernel resume
> > > > from that artificial image.
> > >
> > > It's not signed. It won't reboot from that image.
> >
> > The kernel is signed. The kernel doesn't check the signature on the
> > suspend image.
>
> Which doesn't matter. How are you going to create the tampered image in
> the first place ?
Editing the suspend partition/file directly while the trusted kernel is booted.
--
Jiri Kosina
SUSE Labs
At Mon, 29 Oct 2012 17:41:31 +0000,
Matthew Garrett wrote:
>
> On Mon, Oct 29, 2012 at 08:49:41AM +0100, Jiri Kosina wrote:
> > On Thu, 20 Sep 2012, Matthew Garrett wrote:
> >
> > > This is pretty much identical to the first patchset, but with the capability
> > > renamed (CAP_COMPROMISE_KERNEL) and the kexec patch dropped. If anyone wants
> > > to deploy these then they should disable kexec until support for signed
> > > kexec payloads has been merged.
> >
> > Apparently your patchset currently doesn't handle device firmware loading,
> > nor do you seem to mention it in the comments.
>
> Correct.
>
> > I believe signed firmware loading should be put on the plate as well, right?
>
> I think that's definitely something that should be covered. I hadn't
> worried about it immediately as any attack would be limited to machines
> with a specific piece of hardware, and the attacker would need to expend
> a significant amount of reverse engineering work on the firmware - and
> we'd probably benefit from them doing that in the long run...
request_firmware() is used for microcode loading, too, so it's a fairly
core part to cover, I'm afraid.
I played a bit about this yesterday. The patch below is a proof of
concept to (ab)use the module signing mechanism for firmware loading
too. Sign firmware files via scripts/sign-file, and put to
/lib/firmware/signed directory.
It's just a rough cut, and the module options and other pieces there
should be polished better, of course. Also, a different signature string
would be better for firmware files :)
Takashi
---
diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index b34b5cd..2bc8415 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -145,6 +145,12 @@ config EXTRA_FIRMWARE_DIR
this option you can point it elsewhere, such as /lib/firmware/ or
some other directory containing the firmware files.
+config FIRMWARE_SIG
+ bool "Firmware signature check"
+ depends on FW_LOADER && MODULE_SIG
+ help
+ Check the embedded signature of firmware files like signed modules.
+
config DEBUG_DRIVER
bool "Driver Core verbose debug messages"
depends on DEBUG_KERNEL
diff --git a/drivers/base/firmware_class.c b/drivers/base/firmware_class.c
index 8945f4e..81fc8a4 100644
--- a/drivers/base/firmware_class.c
+++ b/drivers/base/firmware_class.c
@@ -268,6 +268,12 @@ static void fw_free_buf(struct firmware_buf *buf)
/* direct firmware loading support */
static const char *fw_path[] = {
+#ifdef CONFIG_FIRMWARE_SIG
+ "/lib/firmware/updates/" UTS_RELEASE "/signed",
+ "/lib/firmware/updates/signed",
+ "/lib/firmware/" UTS_RELEASE "/signed",
+ "/lib/firmware/signed",
+#endif
"/lib/firmware/updates/" UTS_RELEASE,
"/lib/firmware/updates",
"/lib/firmware/" UTS_RELEASE,
@@ -844,6 +850,41 @@ exit:
return fw_priv;
}
+#ifdef CONFIG_FIRMWARE_SIG
+/* XXX */
+extern int mod_verify_sig(const void *mod, unsigned long *_modlen);
+
+static bool sig_enforce;
+module_param(sig_enforce, bool, 0444);
+
+static int firmware_sig_check(struct firmware_buf *buf)
+{
+ unsigned long markerlen = sizeof(MODULE_SIG_STRING) - 1;
+ long len;
+ int err;
+
+ len = buf->size - markerlen;
+ if (len <= 0 ||
+ memcmp(buf->data + len, MODULE_SIG_STRING, markerlen)) {
+ pr_debug("%s: no signature found\n", buf->fw_id);
+ return sig_enforce ? -ENOKEY : 0;
+ }
+ err = mod_verify_sig(buf->data, &len);
+ if (err < 0) {
+ pr_debug("%s: signature error: %d\n", buf->fw_id, err);
+ return err;
+ }
+ buf->size = len;
+ pr_debug("%s: signature OK!\n", buf->fw_id);
+ return 0;
+}
+#else
+static inline int firmware_sig_check(struct firmware_buf *buf)
+{
+ return 0;
+}
+#endif
+
static int _request_firmware_load(struct firmware_priv *fw_priv, bool uevent,
long timeout)
{
@@ -909,6 +950,9 @@ handle_fw:
if (!buf->size || test_bit(FW_STATUS_ABORT, &buf->status))
retval = -ENOENT;
+ if (!retval)
+ retval = firmware_sig_check(buf);
+
/*
* add firmware name into devres list so that we can auto cache
* and uncache firmware for device.
diff --git a/kernel/module_signing.c b/kernel/module_signing.c
index ea1b1df..c39f49b 100644
--- a/kernel/module_signing.c
+++ b/kernel/module_signing.c
@@ -11,6 +11,7 @@
#include <linux/kernel.h>
#include <linux/err.h>
+#include <linux/export.h>
#include <crypto/public_key.h>
#include <crypto/hash.h>
#include <keys/asymmetric-type.h>
@@ -247,3 +248,4 @@ error_put_key:
pr_devel("<==%s() = %d\n", __func__, ret);
return ret;
}
+EXPORT_SYMBOL_GPL(mod_verify_sig);
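For reference, the tail-marker check in firmware_sig_check() above can be modeled in userspace like this. The fixed-length fake signature is a stand-in for the real signature block that mod_verify_sig() parses; only the marker-detection and tail-stripping logic mirrors the patch.

```python
# Userspace model of firmware_sig_check(): a signed blob ends with the
# kernel's MODULE_SIG_STRING marker, which is detected and stripped before
# the payload is handed on.
MARKER = b"~Module signature appended~\n"  # MODULE_SIG_STRING in the kernel

def split_signed(blob: bytes):
    """Return (payload, signature), or raise if the marker is missing."""
    if not blob.endswith(MARKER):
        raise ValueError("no signature found")
    body = blob[:-len(MARKER)]
    SIG_LEN = 8  # fixed length for illustration only
    return body[:-SIG_LEN], body[-SIG_LEN:]

fw = b"firmware contents"
signed = fw + b"SIGSIGSI" + MARKER
payload, sig = split_signed(signed)
assert payload == fw
```

As in the patch, an unsigned blob is distinguishable from a signed one purely by the trailing marker, which is what makes the opt-in `sig_enforce` behavior possible.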
On Wed, 31 Oct 2012 17:17:43 +0000
Matthew Garrett <[email protected]> wrote:
> On Wed, Oct 31, 2012 at 05:21:21PM +0000, Alan Cox wrote:
> > On Wed, 31 Oct 2012 17:10:48 +0000
> > Matthew Garrett <[email protected]> wrote:
> > > The kernel is signed. The kernel doesn't check the signature on the
> > > suspend image.
> >
> > Which doesn't matter. How are you going to create the tampered image in
> > the first place ?
>
> By booting a signed kernel, not turning on swap and writing directly to
> the swap partition.
Ok so the actual problem is that you are signing kernels that allow the
user to skip the S4 resume check ?
On Wed, Oct 31, 2012 at 06:28:16PM +0100, Takashi Iwai wrote:
> request_firmware() is used for microcode loading, too, so it's a fairly
> core part to cover, I'm afraid.
>
> I played a bit about this yesterday. The patch below is a proof of
> concept to (ab)use the module signing mechanism for firmware loading
> too. Sign firmware files via scripts/sign-file, and put to
> /lib/firmware/signed directory.
That does still leave me a little uneasy as far as the microcode
licenses go. I don't know that we can distribute signed copies of some
of them, and we obviously can't sign at the user end.
--
Matthew Garrett | [email protected]
On Wed, Oct 31, 2012 at 05:39:19PM +0000, Alan Cox wrote:
> On Wed, 31 Oct 2012 17:17:43 +0000
> Matthew Garrett <[email protected]> wrote:
> > By booting a signed kernel, not turning on swap and writing directly to
> > the swap partition.
>
> Ok so the actual problem is that you are signing kernels that allow the
> user to skip the S4 resume check ?
What S4 resume check?
--
Matthew Garrett | [email protected]
> That does still leave me a little uneasy as far as the microcode
> licenses go. I don't know that we can distribute signed copies of some
> of them, and we obviously can't sign at the user end.
You seem to put them in signed rpm packages ?
On Wed, 31 Oct 2012 17:37:50 +0000
Matthew Garrett <[email protected]> wrote:
> On Wed, Oct 31, 2012 at 05:39:19PM +0000, Alan Cox wrote:
> > On Wed, 31 Oct 2012 17:17:43 +0000
> > Matthew Garrett <[email protected]> wrote:
> > > By booting a signed kernel, not turning on swap and writing directly to
> > > the swap partition.
> >
> > Ok so the actual problem is that you are signing kernels that allow the
> > user to skip the S4 resume check ?
>
> What S4 resume check?
One you would add .. but no, I'm wrong there - it's a problem at the
suspend point, so you do need a signature for it. Oh well, yet another
reason it's not useful.
Alan
On Wed, Oct 31, 2012 at 05:44:40PM +0000, Alan Cox wrote:
> > That does still leave me a little uneasy as far as the microcode
> > licenses go. I don't know that we can distribute signed copies of some
> > of them, and we obviously can't sign at the user end.
>
> You seem to put them in signed rpm packages ?
That's not a modification of the files that say "You have permission to
distribute unmodified versions of this file". If a lawyer says this is
fine, I'm happy.
--
Matthew Garrett | [email protected]
On Wed, Oct 31, 2012 at 05:49:19PM +0000, Alan Cox wrote:
> On Wed, 31 Oct 2012 17:37:50 +0000
> Matthew Garrett <[email protected]> wrote:
> > What S4 resume check?
>
> One you would add .. but no, I'm wrong there - it's a problem at the
> suspend point, so you do need a signature for it. Oh well, yet another
> reason it's not useful.
Right. Hence the idea of a protected keystore at boot time. We can do
this securely, but it does involve writing code.
--
Matthew Garrett | [email protected]
At Wed, 31 Oct 2012 17:37:28 +0000,
Matthew Garrett wrote:
>
> On Wed, Oct 31, 2012 at 06:28:16PM +0100, Takashi Iwai wrote:
>
> > request_firmware() is used for microcode loading, too, so it's a fairly
> > core part to cover, I'm afraid.
> >
> > I played a bit about this yesterday. The patch below is a proof of
> > concept to (ab)use the module signing mechanism for firmware loading
> > too. Sign firmware files via scripts/sign-file, and put to
> > /lib/firmware/signed directory.
>
> That does still leave me a little uneasy as far as the microcode
> licenses go. I don't know that we can distribute signed copies of some
> of them, and we obviously can't sign at the user end.
Yeah, that's a concern. Although this is a sort of "container" that
keeps the original data as is, it might be regarded as a
modification.
Another approach would be to put the signature in a separate file, but I'm
not sure whether it makes sense.
Takashi
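The separate-file approach could look something like this from the loader's side. A sketch only: a SHA-256 digest stands in for a real asymmetric signature, and the `.sig` naming is purely hypothetical.

```python
import hashlib, os, tempfile

def load_firmware(path: str) -> bytes:
    """Load firmware, verifying it against a detached <name>.sig file."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".sig", "rb") as f:
        sig = f.read()
    # Stand-in check: a real implementation would verify an asymmetric
    # signature here; this model's .sig just holds the payload's SHA-256.
    if hashlib.sha256(data).digest() != sig:
        raise PermissionError("firmware signature mismatch")
    return data

# Demo layout: the vendor firmware file stays byte-identical, and the
# signature lives alongside it.
d = tempfile.mkdtemp()
fw = os.path.join(d, "foo.bin")
with open(fw, "wb") as f:
    f.write(b"vendor firmware")
with open(fw + ".sig", "wb") as f:
    f.write(hashlib.sha256(b"vendor firmware").digest())

assert load_firmware(fw) == b"vendor firmware"
```

Keeping the signature in a sidecar file sidesteps the licensing question entirely, since the distributed firmware bytes are never touched.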
On Wednesday 31 October 2012 17:39:19 Alan Cox wrote:
> On Wed, 31 Oct 2012 17:17:43 +0000
> Matthew Garrett <[email protected]> wrote:
>
> > On Wed, Oct 31, 2012 at 05:21:21PM +0000, Alan Cox wrote:
> > > On Wed, 31 Oct 2012 17:10:48 +0000
> > > Matthew Garrett <[email protected]> wrote:
> > > > The kernel is signed. The kernel doesn't check the signature on the
> > > > suspend image.
> > >
> > > Which doesn't matter. How are you going to create the tampered image in
> > > the first place ?
> >
> > By booting a signed kernel, not turning on swap and writing directly to
> > the swap partition.
>
> Ok so the actual problem is that you are signing kernels that allow the
> user to skip the S4 resume check ?
No. The problem is one of principle.
swapoff /dev/sdb6 ; dd if=/tmp/malicious_image of=/dev/sdb6 ; sync ; reboot
That would do it on my system.
Maybe in theory you could solve this by having the kernel invalidate images
it hasn't written itself and by forbidding changes to the resume partition via
the kernel command line, but that would break user space hibernation.
Regards
Oliver
On 10/31/2012 02:14 PM, Oliver Neukum wrote:
> On Wednesday 31 October 2012 17:39:19 Alan Cox wrote:
>> On Wed, 31 Oct 2012 17:17:43 +0000
>> Matthew Garrett<[email protected]> wrote:
>>
>>> On Wed, Oct 31, 2012 at 05:21:21PM +0000, Alan Cox wrote:
>>>> On Wed, 31 Oct 2012 17:10:48 +0000
>>>> Matthew Garrett<[email protected]> wrote:
>>>>> The kernel is signed. The kernel doesn't check the signature on the
>>>>> suspend image.
>>>>
>>>> Which doesn't matter. How are you going to create the tampered image in
>>>> the first place ?
>>>
>>> By booting a signed kernel, not turning on swap and writing directly to
>>> the swap partition.
>>
>> Ok so the actual problem is that you are signing kernels that allow the
>> user to skip the S4 resume check ?
>
> No. The problem is one of principle.
>
> swapoff /dev/sdb6 ; dd if=/tmp/malicious_image of=/dev/sdb6 ; sync ; reboot
>
> That would do it on my system.
> Maybe in theory you could solve this by having the kernel invalidate images
> it hasn't written itself and by forbidding changes to the resume partition
> via the kernel command line, but that would break user space hibernation.
If the resuming kernel refuses to resume from images it didn't create
itself, why do you need to forbid changing the resume partition from the
kernel command line?
Chris
On Wed, 31 Oct 2012, Chris Friesen wrote:
> > That would do it on my system.
> > Maybe in theory you could solve this by having the kernel invalidate images
> > it hasn't written itself and by forbidding changes to the resume partition
> > via the kernel command line, but that would break user space hibernation.
>
> If the resuming kernel refuses to resume from images it didn't create itself,
> why do you need to forbid changing the resume partition from the kernel
> command line?
Yeah, it can definitely be solved by pushing keys around from shim to the
kernel (with the kernel discarding the private keys at the right moments). It
just needs to be implemented.
--
Jiri Kosina
SUSE Labs
On Wednesday 31 October 2012 15:58:05 Chris Friesen wrote:
> On 10/31/2012 02:14 PM, Oliver Neukum wrote:
> > That would do it on my system.
> > Maybe in theory you could solve this by having the kernel invalidate images
> > it hasn't written itself and by forbidding changes to the resume partition
> > via the kernel command line, but that would break user space hibernation.
>
> If the resuming kernel refuses to resume from images it didn't create
> itself, why do you need to forbid changing the resume partition from the
> kernel command line?
You don't. Signed images solve the problem.
I was responding to Alan's assertion that the problem could be solved
without signing the images. It turns out that any such scheme would have
unacceptable limitations.
The key problem is actually safely storing the public key needed to verify
the signature. This problem is also solvable. It just needs help from the
UEFI infrastructure.
Regards
Oliver
On Wed, 2012-10-31 at 19:53 +0100, Takashi Iwai wrote:
> At Wed, 31 Oct 2012 17:37:28 +0000,
> Matthew Garrett wrote:
> >
> > On Wed, Oct 31, 2012 at 06:28:16PM +0100, Takashi Iwai wrote:
> >
> > > request_firmware() is used for microcode loading, too, so it's a fairly
> > > core part to cover, I'm afraid.
> > >
> > > I played a bit about this yesterday. The patch below is a proof of
> > > concept to (ab)use the module signing mechanism for firmware loading
> > > too. Sign firmware files via scripts/sign-file, and put to
> > > /lib/firmware/signed directory.
> >
> > That does still leave me a little uneasy as far as the microcode
> > licenses go. I don't know that we can distribute signed copies of some
> > of them, and we obviously can't sign at the user end.
>
> Yeah, that's a concern. Although this is a sort of "container" and
> keeping the original data as is, it might be regarded as a
> modification.
>
> Another approach would be to put the signature in a separate file, but I'm
> not sure whether it makes sense.
>
I think it makes sense because the private key is still protected by the
signer. Any attacker who modified the firmware would still need to use a
private key to generate a signature, and the attacker's private key cannot
match the public key that the kernel uses to verify the firmware.
And I'm afraid we have no choice but to put the firmware signature in a
separate file. Contacting those companies' legal departments would be very
time-consuming, and I am not sure every company would agree to our putting
the signature with the firmware and distributing it.
Thanks a lot!
Joey Lee
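The separate-signature-file approach Joey describes could look roughly like the sketch below. It is purely illustrative: the HMAC stands in for a real public-key signature check, and the file names and key are invented for the example.

```python
import hashlib
import hmac
from pathlib import Path

VERIFY_KEY = b"stand-in-for-kernel-trusted-key"  # hypothetical

def load_firmware(path: Path) -> bytes:
    """Load a vendor blob whose detached signature sits next to it
    (foo.bin + foo.bin.sig), so the blob is distributed unmodified."""
    blob = path.read_bytes()
    sig = path.with_suffix(path.suffix + ".sig").read_bytes()
    expected = hmac.new(VERIFY_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError(f"{path.name}: bad firmware signature")
    return blob

# demo: create a blob and its detached signature, then load it
fw = Path("demo-fw.bin")
fw.write_bytes(b"vendor blob")
fw.with_suffix(".bin.sig").write_bytes(
    hmac.new(VERIFY_KEY, b"vendor blob", hashlib.sha256).digest())
assert load_firmware(fw) == b"vendor blob"
```

The point is that the `.sig` file is the only thing the distributor adds; the firmware bytes themselves never change.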
On Wed, 2012-10-31 at 23:19 +0100, Oliver Neukum wrote:
> On Wednesday 31 October 2012 15:58:05 Chris Friesen wrote:
> > On 10/31/2012 02:14 PM, Oliver Neukum wrote:
>
> > > That would do it on my system.
> > > Maybe in theory you could solve this by the kernel invalidating images
> > > it hasn't written itself and forbidding to change the resume partition from the
> > > kernel command line, but that would break user space hibernation.
> >
> > If the resuming kernel refuses to resume from images it didn't create
> > itself, why do you need to forbid changing the resume partition from the
> > kernel command line?
>
> You don't. Signed images solve the problem.
I really don't think they do. The proposed attack vector is to try to
prevent a local root exploit from running arbitrary in-kernel code,
because that would compromise the secure boot part of the kernel.
I really think that's mythical: a local privilege elevation attack
usually exploits some bug (classically a buffer overflow) which executes
arbitrary code in kernel context. In that case, the same attack vector
can be used to compromise any in-kernel protection mechanism including
turning off the secure boot capability and reading the in-kernel private
signing key.
There have been one or two privilege elevation attacks that didn't
involve in-kernel code (usually by compromising a suid binary or another
cross-domain scripting attack) that would only compromise local root and
thus be confined to the secure boot prison, but they are, historically, a
minority.
The point I'm making is that given that the majority of exploits will
already be able to execute arbitrary code in-kernel, there's not much
point trying to consider features like this as attacker prevention. We
should really be focusing on discussing why we'd want to prevent a
legitimate local root from writing to the suspend partition in a secure
boot environment.
James
On Thu, 1 Nov 2012, James Bottomley wrote:
> The point I'm making is that given that the majority of exploits will
> already be able to execute arbitrary code in-kernel, there's not much
> point trying to consider features like this as attacker prevention. We
> should really be focusing on discussing why we'd want to prevent a
> legitimate local root from writing to the suspend partition in a secure
> boot environment.
Well, this is being repeated over and over again when talking about secure
boot, right?
My understanding is that we are not trying to protect against root
exploiting the kernel. We are trying to protect against root tampering
with the kernel code and data through legitimate use of kernel-provided
facilities (/dev/mem, ioperm, reprogramming devices to DMA to arbitrary
memory locations, resuming from a hibernation image that has been tampered
with, etc.).
Or perhaps I just misunderstood the point you were trying to make?
Thanks,
--
Jiri Kosina
SUSE Labs
On Thu, 2012-11-01 at 10:20 +0100, Jiri Kosina wrote:
> On Thu, 1 Nov 2012, James Bottomley wrote:
>
> > The point I'm making is that given that the majority of exploits will
> > already be able to execute arbitrary code in-kernel, there's not much
> > point trying to consider features like this as attacker prevention. We
> > should really be focusing on discussing why we'd want to prevent a
> > legitimate local root from writing to the suspend partition in a secure
> > boot environment.
>
> Well, this is being repeated over and over again when talking about secure
> boot, right?
>
> My understanding is that we are not trying to protect against root
> exploiting the kernel. We are trying to protect against root tampering
> with the kernel code and data through legitimate use of kernel-provided
> facilities (/dev/mem, ioperm, reprogramming devices to DMA to arbitrary
> memory locations, resuming from a hibernation image that has been tampered
> with, etc.).
>
> Or perhaps I just misunderstood the point you were trying to make?
I'm actually just struggling to understand the use case for these more
esoteric protections.
So the assumption is malice on the part of a legitimate local root? I
just don't see what such a user would gain by compromising resume in
this way (given that their scope for damage in the rest of the system
and data is huge) and I don't really see why a non-malicious local root
would be interested. A legitimate local root entails quite a measure of
trust, so what I don't really see is the use case for investing all that
trust in someone but not trusting them with the boot system. In a
proper capability separated limited trust environment, you simply don't
allow less trusted users raw access to all or some devices and that
solves the problem far more simply.
James
On Thu, 1 Nov 2012, James Bottomley wrote:
> I'm actually just struggling to understand the use case for these more
> esoteric protections.
I believe the real point is drawing a clear line between trusted and
untrusted (with root being userspace, hence implicitly untrusted), and
disallowing "legitimate crossing" of this line.
--
Jiri Kosina
SUSE Labs
On Thu, 2012-11-01 at 10:45 +0100, Jiri Kosina wrote:
> On Thu, 1 Nov 2012, James Bottomley wrote:
>
> > I'm actually just struggling to understand the use case for these more
> > esoteric protections.
>
> I believe the real point is drawing a clear line between trusted and
> untrusted (with root being userspace, hence implicitly untrusted), and
> disallowing "legitimate crossing" of this line.
But that doesn't really help me: untrusted root is an oxymoron. I get
capability separated systems, where you invest trust in layers and you
make each layer small and verifiable, so you have a granular trust
policy you build up. I really don't understand the use case for trying
to remove a small portion of trust from the huge trust domain of root
and then doing a massive amount of fixup around the edges because
there's leaks all over the place from the trust that root still has. It
all seems to be a bit backwards. If you just begin with the capability
separated granular system, I don't see why it doesn't all just work with
what we have today.
James
On Thu, 1 Nov 2012, James Bottomley wrote:
> > > I'm actually just struggling to understand the use case for these more
> > > esoteric protections.
> >
> > I believe the real point is drawing a clear line between trusted and
> > untrusted (with root being userspace, hence implicitly untrusted), and
> > disallowing "legitimate crossing" of this line.
>
> But that doesn't really help me: untrusted root is an oxymoron. I get
> capability separated systems, where you invest trust in layers and you
> make each layer small and verifiable, so you have a granular trust
> policy you build up. I really don't understand the use case for trying
> to remove a small portion of trust from the huge trust domain of root
> and then doing a massive amount of fixup around the edges because
> there's leaks all over the place from the trust that root still has. It
> all seems to be a bit backwards. If you just begin with the capability
> separated granular system, I don't see why it doesn't all just work with
> what we have today.
Please don't get me wrong -- I personally don't believe in the secure boot
stuff at all.
But if you take secure/trusted boot as the basic paradigm, then you
really need the separation.
In such a model, root is untrusted, exactly because the code running
under those privileges hasn't been signed, period. If you allow such code
to modify the trusted/signed code, you basically violate the complete
model, rendering it moot.
--
Jiri Kosina
SUSE Labs
On Thursday 01 November 2012 09:08:25 James Bottomley wrote:
> On Wed, 2012-10-31 at 23:19 +0100, Oliver Neukum wrote:
> > On Wednesday 31 October 2012 15:58:05 Chris Friesen wrote:
> > > On 10/31/2012 02:14 PM, Oliver Neukum wrote:
> >
> > > > That would do it on my system.
> > > > Maybe in theory you could solve this by the kernel invalidating images
> > > > it hasn't written itself and forbidding to change the resume partition from the
> > > > kernel command line, but that would break user space hibernation.
> > >
> > > If the resuming kernel refuses to resume from images it didn't create
> > > itself, why do you need to forbid changing the resume partition from the
> > > kernel command line?
> >
> > You don't. Signed images solve the problem.
>
> I really don't think they do. The proposed attack vector is to try to
> prevent a local root exploit from running arbitrary in-kernel code,
> because that would compromise the secure boot part of the kernel.
Well, it is an attempt to prevent unsigned code from altering signed
code or data structures private to signed code. That can be seen as
a technical question. What that is useful for is not strictly a technical
question.
We can of course discuss whether secure boot makes sense at all.
But that is a different discussion. Once it is decided that it is to be
implemented, some issues arise logically.
> The point I'm making is that given that the majority of exploits will
> already be able to execute arbitrary code in-kernel, there's not much
> point trying to consider features like this as attacker prevention. We
> should really be focusing on discussing why we'd want to prevent a
> legitimate local root from writing to the suspend partition in a secure
> boot environment.
That is strictly speaking what we are discussing.
First, it is not given that root is local.
Second, we don't want to stop root from writing to a partition.
We just want to prevent that from altering kernel memory.
Regards
Oliver
> I think it makes sense because the private key is still protected by the
> signer. Any attacker who modified the firmware would still need to use a
> private key to generate a signature, and the attacker's private key cannot
> match the public key that the kernel uses to verify the firmware.
>
> And I'm afraid we have no choice but to put the firmware signature in a
> separate file. Contacting those companies' legal departments would be very
> time-consuming, and I am not sure every company would agree to our putting
> the signature with the firmware and distributing it.
Then you'd better stop storing it on disk, because your disk drive is FEC
encoding it and adding a CRC 8)
It does want checking with a lawyer, but my understanding is that if you
have a file which is a package that contains the firmware and a signature
then there is not generally a problem, any more than putting it in an RPM
file - it's packaging/aggregation. This should perhaps be referred to the
Linux Foundation folks - no point designing something badly to work
around a non-existent issue.
Also the interface needs to consider that a lot of device firmware is
already signed. Nobody notices because they don't ever try to do their
own; in fact many drivers don't need extra signatures.
Alan
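Alan's packaging/aggregation point, shipping the bit-identical vendor blob and its signature together in one container file, can be sketched like so. The container layout here is invented purely for illustration and is not any real format.

```python
import struct

MAGIC = b"FWSG"  # invented container magic, not any real format

def pack(firmware: bytes, signature: bytes) -> bytes:
    """Aggregate the unmodified firmware and a detached signature
    into one file: magic, two little-endian lengths, two payloads."""
    header = MAGIC + struct.pack("<II", len(firmware), len(signature))
    return header + firmware + signature

def unpack(container: bytes):
    """Split the container back apart; the firmware comes out
    bit-identical, so the vendor blob itself was never modified."""
    if container[:4] != MAGIC:
        raise ValueError("not a firmware container")
    fw_len, sig_len = struct.unpack("<II", container[4:12])
    firmware = container[12:12 + fw_len]
    signature = container[12 + fw_len:12 + fw_len + sig_len]
    return firmware, signature

blob, sig = b"\x7fVENDOR-FW", b"fake-signature-bytes"
assert unpack(pack(blob, sig)) == (blob, sig)
```

Since `unpack` reproduces the original bytes exactly, the licensing argument is the same as for an RPM: aggregation, not modification.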
On Thu, Nov 1, 2012 at 5:59 AM, James Bottomley
<[email protected]> wrote:
> But that doesn't really help me: untrusted root is an oxymoron.
Imagine you run windows and you've never heard of Linux. You like
that only windows kernels can boot on your box and not those mean
nasty hacked up malware kernels. Now some attacker manages to take
over your box because you clicked on that executable for young models
in skimpy bathing suits. That executable rewrote your bootloader to
launch a very small carefully crafted Linux environment. This
environment does nothing but launch a perfectly valid signed Linux
kernel, which gets a Windows environment all ready to launch after
resume and goes to sleep. Now you have to hit the power button twice
every time you turn on your computer, weird, but Windows comes up, and
secureboot is still on, so you must be safe!
In this case we have a completely 'untrusted' root inside Linux. From
the user PoV root and Linux are both malware. Notice the EXACT same
attack would work launching rootkit'd Linux from Linux. So don't
pretend not to care about Windows. It's just that launching malware
Linux seems like a reason to get your key revoked. We don't want
signed code which can be used as an attack vector on ourselves or on
others.
That make sense?
-Eric
> Imagine you run windows and you've never heard of Linux.
To those people I think you mean "never heard of Ubuntu" ;-)
> In this case we have a completely 'untrusted' root inside Linux. From
> the user PoV root and Linux are both malware. Notice the EXACT same
> attack would work launching rootkit'd Linux from Linux. So don't
> pretend not to care about Windows. It's just that launching malware
> Linux seems like a reason to get your key revoked. We don't want
> signed code which can be used as an attack vector on ourselves or on
> others.
>
> That make sense?
Not really, but it keeps some of the Red Hat security people happy and out
of harm's way. With all the current posted RH patches I can still take over
the box as root trivially enough, and you seem to have so far abolished
suspend to disk, kexec and a pile of other useful stuff. To actually lock
it down you'll have to do a ton more of this. I suspect folks who know
Windows innards well are probably thinking the same about Windows 8 8)
Almost anyone attacking a secure boot box will do it via Windows or more
likely via EFI: EFI because it's large, new and doesn't have a great
history, Windows because it's the larger target. Actually, from what I've
seen on the security front there seems to be a distinct view that secure
boot is irrelevant because Windows 8 is so suspend/resume focussed that you
might as well just trojan the box until the next reboot, as it's likely to
be a couple of weeks away.
Alan
On Thu, 2012-11-01 at 10:29 -0400, Eric Paris wrote:
> On Thu, Nov 1, 2012 at 5:59 AM, James Bottomley
> <[email protected]> wrote:
>
> > But that doesn't really help me: untrusted root is an oxymoron.
>
> Imagine you run windows and you've never heard of Linux. You like
> that only windows kernels can boot on your box and not those mean
> nasty hacked up malware kernels. Now some attacker manages to take
> over your box because you clicked on that executable for young models
> in skimpy bathing suits. That executable rewrote your bootloader to
> launch a very small carefully crafted Linux environment. This
> environment does nothing but launch a perfectly valid signed Linux
> kernel, which gets a Windows environment all ready to launch after
> resume and goes to sleep. Now you have to hit the power button twice
> every time you turn on your computer, weird, but Windows comes up, and
> secureboot is still on, so you must be safe!
So you're going back to the root exploit problem? I thought that was
debunked a few emails ago in the thread?
Your attack vector isn't plausible because for the suspend attack to
work, the box actually has to be running Linux by default ... I think
the admin of that box might notice if it suddenly started running
windows ...
> In this case we have a completely 'untrusted' root inside Linux. From
> the user PoV root and Linux are both malware. Notice the EXACT same
> attack would work launching rootkit'd Linux from Linux. So don't
> pretend not to care about Windows. It's just that launching malware
> Linux seems like a reason to get your key revoked. We don't want
> signed code which can be used as an attack vector on ourselves or on
> others.
>
> That make sense?
Not really, no. A windows attack vector is a pointless abstraction
because we're talking about securing Linux and your vector requires a
Linux attack for the windows compromise ... let's try to keep on point
to how we're using this feature to secure Linux.
James
On Thu, Nov 01, 2012 at 02:42:15PM +0000, James Bottomley wrote:
> On Thu, 2012-11-01 at 10:29 -0400, Eric Paris wrote:
> > Imagine you run windows and you've never heard of Linux. You like
> > that only windows kernels can boot on your box and not those mean
> > nasty hacked up malware kernels. Now some attacker manages to take
> > over your box because you clicked on that executable for young models
> > in skimpy bathing suits. That executable rewrote your bootloader to
> > launch a very small carefully crafted Linux environment. This
> > environment does nothing but launch a perfectly valid signed Linux
> > kernel, which gets a Windows environment all ready to launch after
> > resume and goes to sleep. Now you have to hit the power button twice
> > every time you turn on your computer, weird, but Windows comes up, and
> > secureboot is still on, so you must be safe!
>
> So you're going back to the root exploit problem? I thought that was
> debunked a few emails ago in the thread?
The entire point of this feature is that it's no longer possible to turn
a privileged user exploit into a full system exploit. Gaining admin
access on Windows 8 doesn't permit you to install a persistent backdoor,
unless there's some way to circumvent that. Which there is, if you can
drop a small Linux distribution onto the ESP and use a signed, trusted
bootloader to boot a signed, trusted kernel that then resumes from an
unsigned, untrusted hibernate image. So we have to ensure that that's
impossible.
--
Matthew Garrett | [email protected]
On Thu, Nov 1, 2012 at 10:42 AM, James Bottomley
<[email protected]> wrote:
> On Thu, 2012-11-01 at 10:29 -0400, Eric Paris wrote:
>> On Thu, Nov 1, 2012 at 5:59 AM, James Bottomley
>> <[email protected]> wrote:
>>
>> > But that doesn't really help me: untrusted root is an oxymoron.
>>
>> Imagine you run windows and you've never heard of Linux. You like
>> that only windows kernels can boot on your box and not those mean
>> nasty hacked up malware kernels. Now some attacker manages to take
>> over your box because you clicked on that executable for young models
>> in skimpy bathing suits. That executable rewrote your bootloader to
>> launch a very small carefully crafted Linux environment. This
>> environment does nothing but launch a perfectly valid signed Linux
>> kernel, which gets a Windows environment all ready to launch after
>> resume and goes to sleep. Now you have to hit the power button twice
>> every time you turn on your computer, weird, but Windows comes up, and
>> secureboot is still on, so you must be safe!
>
> So you're going back to the root exploit problem? I thought that was
> debunked a few emails ago in the thread?
>
> Your attack vector isn't plausible because for the suspend attack to
> work, the box actually has to be running Linux by default ... I think
> the admin of that box might notice if it suddenly started running
> windows ...
Maybe you misread. The owner of the box would never know a shim Linux
was loaded. In any case, as I said, Windows really is irrelevant.
It's just that using Linux as an attack vector against Windows is what
would get keys revoked. If your key is revoked, Linux can't boot on a
large amount of new hardware without BIOS twiddling.
>> In this case we have a completely 'untrusted' root inside Linux. From
>> the user PoV root and Linux are both malware. Notice the EXACT same
>> attack would work launching rootkit'd Linux from Linux. So don't
>> pretend not to care about Windows. It's just that launching malware
>> Linux seems like a reason to get your key revoked. We don't want
>> signed code which can be used as an attack vector on ourselves or on
>> others.
>>
>> That make sense?
>
> Not really, no. A windows attack vector is a pointless abstraction
> because we're talking about securing Linux and your vector requires a
> Linux attack for the windows compromise ... let's try to keep on point
> to how we're using this feature to secure Linux.
I pointed out that the exact same attack exists with Linux on Linux.
To launch a malware linux kernel all you have to do is launch a shim
signed acceptable linux environment, have it set up the malware kernel
to launch after resume, and go to sleep. Agreed, it'd be very weird
that the first time you hit the power button your machine comes on and
then quickly goes right back to sleep, but certainly we can envision
that being ignored by many desktop users...
Do you see how 'root' in the first environment is untrusted? Now you
can pretend not to care because the 'original' root was trusted. But
people install bad crap all the time. There are hundreds of ways to
install bad software as root. Go to any site distributing rpms or
debs to get that new version of mod_perl and it could install the
malware kernel and shim environment.
The point of secure boot is that even if the admin did something which
allowed his kernel to be compromised, it won't persist. Sure,
secure boot moves the attack up the stack to userspace, but at least we
can do something about the kernel.
> The entire point of this feature is that it's no longer possible to turn
> a privileged user exploit into a full system exploit. Gaining admin
> access on Windows 8 doesn't permit you to install a persistent backdoor,
Really, that would be a first. Do you have detailed knowledge of
Windows 8's actual security?
> unless there's some way to circumvent that. Which there is, if you can
> drop a small Linux distribution onto the ESP and use a signed, trusted
> bootloader to boot a signed, trusted kernel that then resumes from an
> unsigned, untrusted hibernate image. So we have to ensure that that's
> impossible.
Well, if you want to make Linux entirely robust, Red Hat could start
helping with some of the 6000-odd Coverity matches, some of which will
most certainly turn out to be real flaws.
Alan
On Thu, Nov 1, 2012 at 10:46 AM, Alan Cox <[email protected]> wrote:
>> Imagine you run windows and you've never heard of Linux.
>
> To those people I think you mean "never heard of Ubuntu" ;-)
:-)
> With all the current posted RH patches I can still take over
> the box as root trivially enough and you seem to have so far abolished
> suspend to disk, kexec and a pile of other useful stuff. To actually lock
> it down you'll have to do a ton more of this.
I'm guessing those writing the patches would like to hear about these.
Suspend to disk and kexec can probably both be fixed up to work...
> Actually, from what I've seen on
> the security front there seems to be a distinct view that secure boot is
> irrelevant because Windows 8 is so suspend/resume focussed that you might
> as well just trojan the box until the next reboot, as it's likely to be a
> couple of weeks away.
Bit of a straw man isn't it? Hey, don't fix A, I can do B! I'm not
saying you're wrong, nor that maybe online attacks which don't persist
across reboot wouldn't be more likely, but they aren't attacking the
same problem. (I haven't heard any progress on what you point out,
but at least we have some progress on some small class of boot time
persistent attacks)
On Thu, 2012-11-01 at 14:49 +0000, Matthew Garrett wrote:
> On Thu, Nov 01, 2012 at 02:42:15PM +0000, James Bottomley wrote:
> > On Thu, 2012-11-01 at 10:29 -0400, Eric Paris wrote:
> > > Imagine you run windows and you've never heard of Linux. You like
> > > that only windows kernels can boot on your box and not those mean
> > > nasty hacked up malware kernels. Now some attacker manages to take
> > > over your box because you clicked on that executable for young models
> > > in skimpy bathing suits. That executable rewrote your bootloader to
> > > launch a very small carefully crafted Linux environment. This
> > > environment does nothing but launch a perfectly valid signed Linux
> > > kernel, which gets a Windows environment all ready to launch after
> > > resume and goes to sleep. Now you have to hit the power button twice
> > > every time you turn on your computer, weird, but Windows comes up, and
> > > secureboot is still on, so you must be safe!
> >
> > So you're going back to the root exploit problem? I thought that was
> > debunked a few emails ago in the thread?
>
> The entire point of this feature is that it's no longer possible to turn
> a privileged user exploit into a full system exploit. Gaining admin
> access on Windows 8 doesn't permit you to install a persistent backdoor,
> unless there's some way to circumvent that. Which there is, if you can
> drop a small Linux distribution onto the ESP and use a signed, trusted
> bootloader to boot a signed, trusted kernel that then resumes from an
> unsigned, untrusted hibernate image. So we have to ensure that that's
> impossible.
But surely that's fanciful ... you've already compromised windows to get
access to the ESP. If you've done it once, you can do it again until
the exploit is patched. There are likely many easier ways of ensuring
persistence than trying to install a full linux kernel with a
compromised resume system.
If this could be used to attack a windows system in the first place,
then Microsoft might be annoyed, but you have to compromise windows
*first* in this scenario.
James
> would get keys revoked. If your key is revoked, Linux can't boot on a
> large amount of new hardware without BIOS twiddling.
See "live free or die". If you want to live in a world where you can't
even fart before checking if the man from Microsoft will revoke your key,
you might as well go home now.
> The point of secureboot is even if the admin did something which
> allowed his kernel to be compromised, it won't persist. Sure,
> secureboot moves the attack up the stack to userspace, but at least we
> can do something about the kernel.
Nice theory.
At the end of the day I don't care if you want to produce this stuff and
sell it to people. Fine, the interface proposed is clean enough that it
doesn't pee on other work, but don't expect the rest of the world to
follow mindlessly into your slave pit driven by your fear.
Alan
On Thu, Nov 1, 2012 at 11:06 AM, James Bottomley
<[email protected]> wrote:
> But surely that's fanciful ... you've already compromised windows to get
> access to the ESP. If you've done it once, you can do it again until
> the exploit is patched.
You work under the assumption that any bad operation was done by means
of a compromised kernel. Admins install things all the time,
sometimes things which they shouldn't. (This statement is OS
agnostic.)
> There are likely many easier ways of ensuring
> persistence than trying to install a full linux kernel with a
> compromised resume system.
I'm sure lots of us would love to hear the ideas. And the attack is
on the suspend side, nothing about resume has to be malicious (not
really relevant I guess)...
On Thu, 2012-11-01 at 10:59 -0400, Eric Paris wrote:
> On Thu, Nov 1, 2012 at 10:42 AM, James Bottomley
> <[email protected]> wrote:
> > On Thu, 2012-11-01 at 10:29 -0400, Eric Paris wrote:
> >> On Thu, Nov 1, 2012 at 5:59 AM, James Bottomley
> >> <[email protected]> wrote:
> >>
> >> > But that doesn't really help me: untrusted root is an oxymoron.
> >>
> >> Imagine you run windows and you've never heard of Linux. You like
> >> that only windows kernels can boot on your box and not those mean
> >> nasty hacked up malware kernels. Now some attacker manages to take
> >> over your box because you clicked on that executable for young models
> >> in skimpy bathing suits. That executable rewrote your bootloader to
> >> launch a very small carefully crafted Linux environment. This
> >> environment does nothing but launch a perfectly valid signed Linux
> >> kernel, which gets a Windows environment all ready to launch after
> >> resume and goes to sleep. Now you have to hit the power button twice
> >> every time you turn on your computer, weird, but Windows comes up, and
> >> secureboot is still on, so you must be safe!
> >
> > So you're going back to the root exploit problem? I thought that was
> > debunked a few emails ago in the thread?
> >
> > Your attack vector isn't plausible because for the suspend attack to
> > work, the box actually has to be running Linux by default ... I think
> > the admin of that box might notice if it suddenly started running
> > windows ...
>
> Maybe you misread. The owner of the box would never know a shim Linux
> was loaded. In any case, as I said, Windows really is irrelevant.
> It's just that using Linux as an attack vector against Windows is what
> would get keys revoked. If your key is revoked, Linux can't boot on a
> large amount of new hardware without BIOS twiddling.
>
> >> In this case we have a completely 'untrusted' root inside Linux. From
> >> the user PoV root and Linux are both malware. Notice the EXACT same
> >> attack would work launching rootkit'd Linux from Linux. So don't
> >> pretend not to care about Windows. It's just that launching malware
> >> Linux seems like a reason to get your key revoked. We don't want
> >> signed code which can be used as an attack vector on ourselves or on
> >> others.
> >>
> >> That make sense?
> >
> > Not really, no. A windows attack vector is a pointless abstraction
> > because we're talking about securing Linux and your vector requires a
> > Linux attack for the windows compromise ... let's try to keep on point
> > to how we're using this feature to secure Linux.
>
> I pointed out that the exact same attack exists with Linux on Linux.
> To launch a malware linux kernel all you have to do is launch a shim
> signed acceptable linux environment, have it set up the malware kernel
> to launch after resume, and go to sleep. Agreed, it'd be very weird
> that the first time you hit the power button your machine comes on and
> then quickly goes right back to sleep, but certainly we can envision
> that being ignored by many desktop users...
>
> Do you see how 'root' in the first environment is untrusted? Now you
> can pretend not to care because the 'original' root was trusted. But
> people install bad crap all the time. There are hundreds of ways to
> install bad software as root. Go to any site distributing rpms or
> debs to get that new version of mod_perl and it could install the
> malware kernel and shim environment.
You're completely confusing two separate goals:
   1. Is it possible to use secure boot to implement a security policy
      on Linux?
   2. What do we have to do to prevent Linux being used to attack
      Windows, which may lead to a revocation of the distro signing
      key?
"untrusted root" is a silly answer to 1 because it's incredibly
difficult to remove sufficient trust from root and still have it be
trusted enough to be effective as root. The trust bound up in root is
incredibly intertwined. It would be far better to start by eliminating
the root user altogether and building up on the capabilities in a
granular fashion for this type of lockdown.
"untrusted root", if it can even be achieved, might be a sufficient
condition for 2 but it's way overkill for a necessary one.
James
On Thu, Nov 01, 2012 at 03:06:30PM +0000, James Bottomley wrote:
> But surely that's fanciful ... you've already compromised windows to get
> access to the ESP. If you've done it once, you can do it again until
> the exploit is patched. There are likely many easier ways of ensuring
> persistence than trying to install a full linux kernel with a
> compromised resume system.
There's a pretty strong distinction between "Machine is exploited until
exploit is patched" and "Machine is exploited until drive is replaced".
--
Matthew Garrett | [email protected]
On Thu, Nov 01, 2012 at 03:06:54PM +0000, Alan Cox wrote:
> > The entire point of this feature is that it's no longer possible to turn
> > a privileged user exploit into a full system exploit. Gaining admin
> > access on Windows 8 doesn't permit you to install a persistent backdoor,
>
> Really, that would be a first. Do you have a detailed knowledge of
> windows 8 actual security ?
http://msdn.microsoft.com/en-us/library/windows/desktop/hh848061%28v=vs.85%29.aspx
> > unless there's some way to circumvent that. Which there is, if you can
> > drop a small Linux distribution onto the ESP and use a signed, trusted
> > bootloader to boot a signed, trusted kernel that then resumes from an
> > unsigned, untrusted hibernate image. So we have to ensure that that's
> > impossible.
>
> Well if you want to make Linux entirely robust Red Hat could start
> helping with some of the 6000 odd coverity matches some of which will
> most certainly turn out to be real flaws.
Sure, bugs should be fixed.
--
Matthew Garrett | [email protected]
On Thu, 1 Nov 2012 16:29:01 +0000
Matthew Garrett <[email protected]> wrote:
> On Thu, Nov 01, 2012 at 03:06:54PM +0000, Alan Cox wrote:
> > > The entire point of this feature is that it's no longer possible to turn
> > > a privileged user exploit into a full system exploit. Gaining admin
> > > access on Windows 8 doesn't permit you to install a persistent backdoor,
> >
> > Really, that would be a first. Do you have a detailed knowledge of
> > windows 8 actual security ?
>
> http://msdn.microsoft.com/en-us/library/windows/desktop/hh848061%28v=vs.85%29.aspx
No, I said knowledge of, not web pages. The Red Hat pages say Linux is very
secure; the Apple ones say MacOS is.
The point being you don't want to evaluate apparent security by press
release of one system versus deep internal knowledge of the other.
Alan
On Thu, Nov 1, 2012 at 11:18 AM, James Bottomley
<[email protected]> wrote:
> You're completely confusing two separate goals:
>
> 1. Is it possible to use secure boot to implement a security policy
> on Linux
> 2. What do we have to do to prevent Linux being used to
> attack windows which may lead to a revocation of the distro
> signing key.
>
> "untrusted root" is a silly answer to 1 because it's incredibly
> difficult to remove sufficient trust from root and still have it be
> trusted enough to be effective as root. The trust bound up in root is
> incredibly intertwined. It would be far better to start by eliminating
> the root user altogether and building up on the capabilities in a
> granular fashion for this type of lockdown.
granular lockdown!? sounds like SELinux. But that certainly can't
solve the problems here...
I think your premise has a couple of problems. First is how you chose
to word #2. Let's reword it as:
What do we have to do to prevent Linux being used to attack Linux
which may lead to secure boot being useless.
If we accept that as #2, then we think, "What makes secure boot
useless" or "What is the security goal as envisioned by secure boot."
The goal of secure boot is to implement an operating system which
prevents uid==0 to ring 0 escalation. That is the security policy
secure boot needs to not be completely useless. And as it turns out,
that security policy is useful in other situations.
> "untrusted root", if it can even be achieved, might be a sufficient
> condition for 2 but it's way overkill for a necessary one.
But it is a condition, as specifically stated, that others have wanted
long before secure boot even came along. I've talked with and
worked with a public cloud operator who wants to prevent even a
malicious root user from being able to run code in ring 0 inside their
VM. The hope in that case was that in doing so they can indirectly
shrink the attack surface between virtual machine and hypervisor.
They hoped to limit the ways the guest could interact to only those
methods the linux kernel implemented.
They want to launch a vm running a kernel they chose and make sure
root inside the vm could not run some other kernel or run arbitrary
code in kernel space. It wasn't something they solved completely.
If it was, all of this secure boot work would be finished. Which is
why we are having these discussions: to understand all of the ways that
you and Alan seem to have to get around the secure boot restrictions,
and to look for solutions that retain functionality while meeting the
security goal of 'prevent uid=0 to ring 0 privilege escalation'.
-Eric
Hi!
> > But that doesn't really help me: untrusted root is an oxymoron.
>
> Imagine you run windows and you've never heard of Linux. You like
> that only windows kernels can boot on your box and not those mean
> nasty hacked up malware kernels. Now some attacker manages to take
> over your box because you clicked on that executable for young models
> in skimpy bathing suits. That executable rewrote your bootloader to
> launch a very small carefully crafted Linux environment. This
> environment does nothing but launch a perfectly valid signed Linux
> kernel, which gets a Windows environment all ready to launch after
> resume and goes to sleep. Now you have to hit the power button twice
> every time you turn on your computer, weird, but Windows comes up, and
> secureboot is still on, so you must be safe!
Ok, so you cripple kexec / suspend to disallow this, and then...
...the attacker launches a carefully crafted Linux environment that just launches
X and fullscreen wine.
Sure, the timing may be slightly different, but Windows came up and secureboot is still
on... so the user happily enters his bank account details.
Could someone write down exact requirements for Linux kernel to be signed by Microsoft?
Because that's apparently what you want, and I don't think crippling kexec/suspend is
enough.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On 11/01/2012 02:27 PM, Pavel Machek wrote:
> Could someone write down exact requirements for Linux kernel to be signed by Microsoft?
> Because that's apparently what you want, and I don't think crippling kexec/suspend is
> enough.
As I understand it, the kernel won't be signed by Microsoft.
Rather, the bootloader will be signed by Microsoft and the vendors will
be the ones that refuse to sign a kernel unless it is reasonably assured
that it won't be used as an attack vector.
If you want fully-open behaviour it's still possible, you just need to
turn off secure boot.
With secure boot enabled, then the kernel should refuse to let an
unsigned kexec load new images, and kexec itself should refuse to load
unsigned images. Also the kernel would need to sign its
"suspend-to-disk" images and refuse to resume unsigned images.
Presumably the signing key for the "suspend-to-disk" images would need
to be stored somewhere that is not accessible even by root. It's not
clear to me how we would do this, but maybe it's possible with hardware
support.
Chris
On Thu, 2012-11-01 at 13:50 -0400, Eric Paris wrote:
> On Thu, Nov 1, 2012 at 11:18 AM, James Bottomley
> <[email protected]> wrote:
>
> > You're completely confusing two separate goals:
> >
> > 1. Is it possible to use secure boot to implement a security policy
> > on Linux
> > 2. What do we have to do to prevent Linux being used to
> > attack windows which may lead to a revocation of the distro
> > signing key.
> >
> > "untrusted root" is a silly answer to 1 because it's incredibly
> > difficult to remove sufficient trust from root and still have it be
> > trusted enough to be effective as root. The trust bound up in root is
> > incredibly intertwined. It would be far better to start by eliminating
> > the root user altogether and building up on the capabilities in a
> > granular fashion for this type of lockdown.
>
> granular lockdown!? sounds like SELinux. But that certainly can't
> solve the problems here...
>
> I think your premise has a couple of problems. First is how you chose
> to word #2. Lets reword it as:
>
> What do we have to do to prevent Linux being used to attack Linux
> which may lead to secure boot being useless.
That's not really remotely related, is it? Microsoft doesn't really
care about Linux on Linux attacks, so preventing or allowing them isn't
going to get a distro key revoked.
> If we accept that as #2, then we think, "What makes secure boot
> useless" or "What is the security goal as envisioned by secure boot."
> The goal of secure boot is to implement an operating system which
> prevents uid==0 to ring 0 escalation. That is the security policy
> secure boot needs to not be completely useless. And as it turns out,
> that security policy is useful in other situations.
Um, so all that is a rewording of what I said in 1 (how do you take
advantage of secure boot), so you're re-conflating the issues.
Snip the rest because it doesn't really make sense in terms of getting
the key revoked.
James
On Thu, Nov 01, 2012 at 09:03:20PM +0000, James Bottomley wrote:
> On Thu, 2012-11-01 at 13:50 -0400, Eric Paris wrote:
> > What do we have to do to prevent Linux being used to attack Linux
> > which may lead to secure boot being useless.
>
> That's not really remotely related, is it? Microsoft doesn't really
> care about Linux on Linux attacks, so preventing or allowing them isn't
> going to get a distro key revoked.
Linux vendors may care about Linux on Linux attacks. It's all fun and
games until Oracle get Microsoft to revoke Red Hat's signature.
--
Matthew Garrett | [email protected]
On Thu, 2012-11-01 at 21:06 +0000, Matthew Garrett wrote:
> On Thu, Nov 01, 2012 at 09:03:20PM +0000, James Bottomley wrote:
> > On Thu, 2012-11-01 at 13:50 -0400, Eric Paris wrote:
> > > What do we have to do to prevent Linux being used to attack Linux
> > > which may lead to secure boot being useless.
> >
> > That's not really remotely related, is it? Microsoft doesn't really
> > care about Linux on Linux attacks, so preventing or allowing them isn't
> > going to get a distro key revoked.
>
> Linux vendors may care about Linux on Linux attacks. It's all fun and
> games until Oracle get Microsoft to revoke Red Hat's signature.
I agree that's a possibility. However, I think the court of public
opinion would pillory the first Commercial Linux Distribution that went
to Microsoft for the express purpose of revoking their competition's
right to boot. It would be commercial suicide.
James
On Thu, Nov 01, 2012 at 09:14:00PM +0000, James Bottomley wrote:
> I agree that's a possibility. However, I think the court of public
> opinion would pillory the first Commercial Linux Distribution that went
> to Microsoft for the express purpose of revoking their competition's
> right to boot. It would be commercial suicide.
Oracle are something of a vexatious litigant as far as the court of
public opinion is concerned, but even without that it could be a
customer who complains. If you're personally comfortable with a specific
level of security here, that's fine - but it's completely reasonable for
others to feel that there are valid technical and commercial concerns to
do this properly.
--
Matthew Garrett | [email protected]
> Linux vendors may care about Linux on Linux attacks. It's all fun and
> games until Oracle get Microsoft to revoke Red Hat's signature.
Fear uncertainty and doubt (and if you think Oracle are going to do that
I suspect your lawyers should deal with it)
Alan.
On Thu, Nov 01, 2012 at 09:31:27PM +0000, Alan Cox wrote:
> > Linux vendors may care about Linux on Linux attacks. It's all fun and
> > games until Oracle get Microsoft to revoke Red Hat's signature.
>
> Fear uncertainty and doubt (and if you think Oracle are going to do that
> I suspect your lawyers should deal with it)
Lawyers won't remove blacklist entries.
--
Matthew Garrett | [email protected]
On Thu, 1 Nov 2012 21:18:59 +0000
Matthew Garrett <[email protected]> wrote:
> On Thu, Nov 01, 2012 at 09:14:00PM +0000, James Bottomley wrote:
>
> > I agree that's a possibility. However, I think the court of public
> > opinion would pillory the first Commercial Linux Distribution that went
> > to Microsoft for the express purpose of revoking their competition's
> > right to boot. It would be commercial suicide.
>
> Oracle are something of a vexatious litigant as far as the court of
> public opinion is concerned, but even without that it could be a
> customer who complains. If you're personally comfortable with a specific
> level of security here, that's fine - but it's completely reasonable for
> others to feel that there are valid technical and commercial concerns to
> do this properly.
For the main people who really, really care about this, the MS key stuff
is mostly irrelevant, as they won't use the Microsoft keys
anyway. Microsoft will have to provide signing to all sorts of other law
enforcement bodies as a responsible provider. If the FBI have a key no
other government security installation will have that key in their
systems. If the Chinese state has it I doubt the US government will be
too keen either.
All those official government trojans end up creating a big problem in the
trust department.
Alan
On Thu, 1 Nov 2012 21:28:43 +0000
Matthew Garrett <[email protected]> wrote:
> On Thu, Nov 01, 2012 at 09:31:27PM +0000, Alan Cox wrote:
> > > Linux vendors may care about Linux on Linux attacks. It's all fun and
> > > games until Oracle get Microsoft to revoke Red Hat's signature.
> >
> > Fear uncertainty and doubt (and if you think Oracle are going to do that
> > I suspect your lawyers should deal with it)
>
> Lawyers won't remove blacklist entries.
Fear Uncertainty and Doubt
Courts do, injunctions do, the possibility of getting caught with their
hands in the till does.
But I suspect your lawyers should also deal with public comments about
Oracle such as the one you've made before you make them in public 8)
On Thu, Nov 01, 2012 at 09:37:51PM +0000, Alan Cox wrote:
> On Thu, 1 Nov 2012 21:28:43 +0000
> Matthew Garrett <[email protected]> wrote:
> > Lawyers won't remove blacklist entries.
>
> Fear Uncertainty and Doubt
>
> Courts do, injunctions do, the possibility of getting caught with their
> hands in the till does.
I think you've misunderstood. Blacklist updates are append only.
--
Matthew Garrett | [email protected]
On Thu, 1 Nov 2012 21:34:52 +0000
Matthew Garrett <[email protected]> wrote:
> On Thu, Nov 01, 2012 at 09:37:51PM +0000, Alan Cox wrote:
> > On Thu, 1 Nov 2012 21:28:43 +0000
> > Matthew Garrett <[email protected]> wrote:
> > > Lawyers won't remove blacklist entries.
> >
> > Fear Uncertainty and Doubt
> >
> > Courts do, injunctions do, the possibility of getting caught with their
> > hands in the till does.
>
> I think you've misunderstood. Blacklist updates are append only.
I think you've misunderstood - that's a technical detail that merely
alters the cost to the people who did something improper.
If Red Hat want to ship a kernel that is very very locked down - fine.
It's a business choice and maybe it'll sell to someone. The
implementation is non-offensive in its mechanism for everyone else so
technically I don't care, but the 'quiver before our new masters and lick
their boots' stuff isn't a technical (or sane business) approach so can
we cut the trying to FUD other people into doing what you believe your
new master requires.
Alan
On Thu, Nov 01, 2012 at 09:58:17PM +0000, Alan Cox wrote:
> On Thu, 1 Nov 2012 21:34:52 +0000
> Matthew Garrett <[email protected]> wrote:
> > I think you've misunderstood. Blacklist updates are append only.
>
> I think you've misunderstood - that's a technical detail that merely
> alters the cost to the people who did something improper.
Winning a case is cold comfort if your software has been uninstallable
for the years it took to get through the courts. If others want to take
that risk, fine.
--
Matthew Garrett | [email protected]
Matthew Garrett <[email protected]> writes:
> On Thu, Nov 01, 2012 at 09:58:17PM +0000, Alan Cox wrote:
>> On Thu, 1 Nov 2012 21:34:52 +0000
>> Matthew Garrett <[email protected]> wrote:
>> > I think you've misunderstood. Blacklist updates are append only.
>>
>> I think you've misunderstood - that's a technical detail that merely
>> alters the cost to the people who did something improper.
>
> Winning a case is cold comfort if your software has been uninstallable
> for the years it took to get through the courts. If others want to take
> that risk, fine.
When the goal is to secure Linux I don't see how any of this helps.
Windows 8 compromises are already available so if we turn most of these
arguments around I am certain clever attackers can go through windows to
run compromised kernel on a linux system, at least as easily as the
reverse.
Short of instructing UEFI to stop trusting the Microsoft signing key, I
don't see any of the secureboot dance gaining any security for computers
running linux, or any security from keys being revoked for nonsense reasons.
Eric
On Fri, Nov 02, 2012 at 01:49:25AM -0700, Eric W. Biederman wrote:
> When the goal is to secure Linux I don't see how any of this helps.
> Windows 8 compromises are already available so if we turn most of these
> arguments around I am certain clever attackers can go through windows to
> run compromised kernel on a linux system, at least as easily as the
> reverse.
And if any of them are used to attack Linux, we'd expect those versions
of Windows to be blacklisted.
--
Matthew Garrett | [email protected]
On Thu, Nov 01, 2012 at 10:29:17AM -0400, Eric Paris wrote:
> On Thu, Nov 1, 2012 at 5:59 AM, James Bottomley
> <[email protected]> wrote:
>
> > But that doesn't really help me: untrusted root is an oxymoron.
>
> Imagine you run windows and you've never heard of Linux. You like
> that only windows kernels can boot on your box and not those mean
> nasty hacked up malware kernels. Now some attacker manages to take
> over your box because you clicked on that executable for young models
> in skimpy bathing suits. That executable rewrote your bootloader to
> launch a very small carefully crafted Linux environment. This
Rewrote the bootloader on disk so that it gets executed next time? It
will not run, as the signature will not match.
Thanks
Vivek
On Wed, Oct 31, 2012 at 03:02:01PM +0000, Matthew Garrett wrote:
> On Wed, Oct 31, 2012 at 03:50:00PM +0100, Jiri Kosina wrote:
>
> > Reading stored memory image (potentially tampered before reboot) from disk
> > is basically DMA-ing arbitrary data over the whole RAM. I am currently not
> > able to imagine a scenario how this could be made "secure" (without
> > storing private keys to sign the hibernation image on the machine itself
> > which, well, doesn't sound secure either).
>
> shim generates a public and private key. It hands the kernel the private
> key in a boot parameter and stores the public key in a boot variable. On
> suspend, the kernel signs the suspend image with that private key and
> discards it. On the next boot, shim generates a new key pair and hands
> the new private key to the kernel along with the old public key. The
> kernel verifies the suspend image before resuming it. The only way to
> subvert this would be to be able to access kernel memory directly, which
> means the attacker has already won.
"crash" utility has module which allows reading kernel memory. So leaking
this private key will be easier than you are thinking it to be.
Thanks
Vivek
On Fri, Nov 02, 2012 at 11:30:48AM -0400, Vivek Goyal wrote:
> "crash" utility has module which allows reading kernel memory. So leaking
> this private key will be easier than you are thinking it to be.
That's not upstream, right?
--
Matthew Garrett | [email protected]
On Thu, Nov 01, 2012 at 03:02:25PM -0600, Chris Friesen wrote:
> On 11/01/2012 02:27 PM, Pavel Machek wrote:
>
> >Could someone write down exact requirements for Linux kernel to be signed by Microsoft?
> >Because that's apparently what you want, and I don't think crippling kexec/suspend is
> >enough.
>
> As I understand it, the kernel won't be signed by Microsoft.
>
> Rather, the bootloader will be signed by Microsoft and the vendors
> will be the ones that refuse to sign a kernel unless it is
> reasonably assured that it won't be used as an attack vector.
>
> If you want fully-open behaviour it's still possible, you just need
> to turn off secure boot.
>
> With secure boot enabled, then the kernel should refuse to let an
> unsigned kexec load new images, and kexec itself should refuse to
> load unsigned images.
Yep, good in theory. Now that basically means reimplementing kexec-tools
in kernel. That also means creating a new system call. It also
means cutting down on future flexibility (assuming the new system
call interface will be able to support the existing features). And it is a
lot of code in user space which would need to be reimplemented in the
kernel, bloating it.
Keeping most of the logic in kexec-tools provides flexibility and keeps
the kernel small. So now we re-architect kexec and completely reverse a
good design for secureboot. It is a huge pain.
Thanks
Vivek
On Fri, Nov 02, 2012 at 03:42:48PM +0000, Matthew Garrett wrote:
> On Fri, Nov 02, 2012 at 11:30:48AM -0400, Vivek Goyal wrote:
>
> > "crash" utility has module which allows reading kernel memory. So leaking
> > this private key will be easier than you are thinking it to be.
>
> That's not upstream, right?
Yes, checked with Dave, it is not upstream. Well, still it is a concern
for distro kernel.
So if we keep private key in kernel, looks like we shall have to disable
one more feature in secureboot mode.
Thanks
Vivek
On Fri, 2 Nov 2012, Vivek Goyal wrote:
> > > "crash" utility has module which allows reading kernel memory. So leaking
> > > this private key will be easier than you are thinking it to be.
> >
> > That's not upstream, right?
>
> Yes, checked with Dave, it is not upstream. Well, still it is a concern
> for distro kernel.
Well, that's about /dev/crash, right?
How about /proc/kcore?
--
Jiri Kosina
SUSE Labs
On Thu 2012-11-01 15:02:25, Chris Friesen wrote:
> On 11/01/2012 02:27 PM, Pavel Machek wrote:
>
> >Could someone write down exact requirements for Linux kernel to be signed by Microsoft?
> >Because that's apparently what you want, and I don't think crippling kexec/suspend is
> >enough.
>
> As I understand it, the kernel won't be signed by Microsoft.
> Rather, the bootloader will be signed by Microsoft and the vendors
> will be the ones that refuse to sign a kernel unless it is
> reasonably assured that it won't be used as an attack vector.
Yes. So can someone write down what "used as an attack vector" means?
Because, AFAICT, Linux kernel is _designed_ to work as an attack
vector. We intentionally support wine, and want to keep that support.
> With secure boot enabled, then the kernel should refuse to let an
> unsigned kexec load new images, and kexec itself should refuse to
> load unsigned images. Also the kernel would need to sign its
> "suspend-to-disk" images and refuse to resume unsigned images.
I believe that attacking Windows using wine is easier than using
suspend-to-disk.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Fri, Nov 2, 2012 at 9:52 AM, Vivek Goyal <[email protected]> wrote:
> On Fri, Nov 02, 2012 at 03:42:48PM +0000, Matthew Garrett wrote:
>> On Fri, Nov 02, 2012 at 11:30:48AM -0400, Vivek Goyal wrote:
>>
>> > "crash" utility has module which allows reading kernel memory. So leaking
>> > this private key will be easier then you are thinking it to be.
>>
>> That's not upstream, right?
>
> Yes, checked with Dave, it is not upstream. Well, still it is a concern
> for distro kernel.
>
> So if we keep private key in kernel, looks like we shall have to disable
> one more feature in secureboot mode.
>
I have been following parts of this thread and beginning to think,
"Are we over-engineering the solution for secureboot?" Do we have a
list of what is a must to meet the spec? At this point, the Linux
secureboot solution is sounding so pervasive that it will impact every
aspect of Linux users' and kernel developers' use patterns. So far I
picked up on the following:
The kernel needs to be signed.
Firmware the kernel loads needs to be signed.
What else?
Is there a list of what all needs to be signed? I am interested in
seeing a list of requirements. At some point, the OS will be so secure
that it will become too complex to run anything on it and continue to
do development as we are used to doing today. I don't pretend to know
much about secureboot, and I am asking as a concerned Linux user and
kernel developer.
-- Shuah
On Fri, 2012-11-02 at 17:33 +0100, Pavel Machek wrote:
> On Thu 2012-11-01 15:02:25, Chris Friesen wrote:
> > On 11/01/2012 02:27 PM, Pavel Machek wrote:
> >
> > >Could someone write down exact requirements for Linux kernel to be signed by Microsoft?
> > >Because that's apparently what you want, and I don't think crippling kexec/suspend is
> > >enough.
> >
> > As I understand it, the kernel won't be signed by Microsoft.
>
> > Rather, the bootloader will be signed by Microsoft and the vendors
> > will be the ones that refuse to sign a kernel unless it is
> > reasonably assured that it won't be used as an attack vector.
>
> Yes. So can someone write down what "used as an attack vector" means?
>
> Because, AFAICT, Linux kernel is _designed_ to work as an attack
> vector. We intentionally support wine, and want to keep that support.
I think there's a variety of opinions on this one.
My definition is that you can construct a signed boot system from the
components delivered with a Linux distribution that will fairly
invisibly chain load a hacked version of windows. Thus allowing the
windows user to think they have a chain of trust to the UEFI firmware
when, in fact, they haven't.
The first question is how many compromises do you need. Without
co-operation from windows, you don't get to install something in the
boot system, so if you're looking for a single compromise vector, the
only realistic attack is to trick the user into booting a hacked linux
system from USB or DVD.
There's also a lot of debate around "fairly invisibly". If your hack
involves shim->grub->linux->windows, that's a fairly long boot process
with time for the user to notice something.
Obviously, a boot loader that breaks the trust chain is ideal as a
windows attack vector, which is why most pre bootloaders on virgin
systems do a present user test (tell the user what they're doing and ask
permission to continue). I really think that if the shim+MOK system
always paused and asked to continue if the MOK Boot Services variables
aren't present (i.e. it's a first boot virgin system), we've solved the
windows attack vector problem, and we can move on from this rather
sterile debate to think of how we can use secure boot to enhance Linux
security for the machine owner.
James
On Fri, Nov 02, 2012 at 04:52:44PM +0000, James Bottomley wrote:
> The first question is how many compromises do you need. Without
> co-operation from windows, you don't get to install something in the
> boot system, so if you're looking for a single compromise vector, the
> only realistic attack is to trick the user into booting a hacked linux
> system from USB or DVD.
You run a binary. It pops up a box saying "Windows needs your permission
to continue", just like almost every other Windows binary that's any
use. Done.
--
Matthew Garrett | [email protected]
On 11/02/2012 09:48 AM, Vivek Goyal wrote:
> On Thu, Nov 01, 2012 at 03:02:25PM -0600, Chris Friesen wrote:
>> With secure boot enabled, then the kernel should refuse to let an
>> unsigned kexec load new images, and kexec itself should refuse to
>> load unsigned images.
>
> Yep, good in theory. Now that basically means reimplementing kexec-tools
> in kernel.
Maybe I'm missing something, but couldn't the vendors provide a signed
kexec? Why does extra stuff need to be pushed into the kernel?
Chris
On Fri, Nov 02, 2012 at 10:54:50AM -0600, Chris Friesen wrote:
> On 11/02/2012 09:48 AM, Vivek Goyal wrote:
> >On Thu, Nov 01, 2012 at 03:02:25PM -0600, Chris Friesen wrote:
>
> >>With secure boot enabled, then the kernel should refuse to let an
> >>unsigned kexec load new images, and kexec itself should refuse to
> >>load unsigned images.
> >
> >Yep, good in theory. Now that basically means reimplementing kexec-tools
> >in kernel.
>
> Maybe I'm missing something, but couldn't the vendors provide a
> signed kexec? Why does extra stuff need to be pushed into the
> kernel?
Bingo. Join us in the following mail thread for all the gory details and
extra work required to make signing of user space processes work.
https://lkml.org/lkml/2012/10/24/451
In a nutshell, there is currently no infrastructure for signing user
space processes and verifying them (like module signing). And if you
just sign select user processes and not the whole of user space, then
it brings extra complications with linking shared objects, being
able to modify the code of a process, etc.
So yes, being able to sign /sbin/kexec would be great. Looks like that
itself will require a lot of work and is not that straightforward.
Thanks
Vivek
On Thu, Nov 01, 2012 at 01:50:08PM -0400, Eric Paris wrote:
[..]
> I've talked with and
> worked with a public cloud operator who wants to prevent even a
> malicious root user from being able to run code in ring 0 inside their
> VM. The hope in that case was that in doing so they can indirectly
> shrink the attack surface between virtual machine and hypervisor.
> They hoped to limit the ways the guest could interact to only those
> methods the linux kernel implemented.
>
> They want to launch a vm running a kernel they chose and make sure
> root inside the vm could not run some other kernel or run arbitrary
> code in kernel space. It wasn't something they solved completely.
> If it was, all of this secure boot work would be finished. Which is
> why we are having these discussions: to understand all of the ways that
> you and Alan seem to have to get around the secure boot restrictions,
> and to look for solutions that retain functionality while meeting the
> security goal of 'prevent uid=0 to ring 0 privilege escalation'.
So will secure boot help with the above use case you mentioned? I think
that unless you lock down user space on the host too, it will not be
possible.
On the flip side, one might be able to launch windows in qemu (a compromised
one) and fool the user into thinking it is booted natively, stealing
login credentials and other stuff.
Thanks
Vivek
On Fri, 2012-11-02 at 16:54 +0000, Matthew Garrett wrote:
> On Fri, Nov 02, 2012 at 04:52:44PM +0000, James Bottomley wrote:
>
> > The first question is how many compromises do you need. Without
> > co-operation from windows, you don't get to install something in the
> > boot system, so if you're looking for a single compromise vector, the
> > only realistic attack is to trick the user into booting a hacked linux
> > system from USB or DVD.
>
> You run a binary. It pops up a box saying "Windows needs your permission
> to continue", just like almost every other Windows binary that's any
> use. Done.
And if all the loaders do some type of present user test on a virgin
system, how do you propose to get that message up there?
James
On Fri, Nov 02, 2012 at 05:48:31PM +0000, James Bottomley wrote:
> On Fri, 2012-11-02 at 16:54 +0000, Matthew Garrett wrote:
> > On Fri, Nov 02, 2012 at 04:52:44PM +0000, James Bottomley wrote:
> >
> > > The first question is how many compromises do you need. Without
> > > co-operation from windows, you don't get to install something in the
> > > boot system, so if you're looking for a single compromise vector, the
> > > only realistic attack is to trick the user into booting a hacked linux
> > > system from USB or DVD.
> >
> > You run a binary. It pops up a box saying "Windows needs your permission
> > to continue", just like almost every other Windows binary that's any
> > use. Done.
>
> And if all the loaders do some type of present user test on a virgin
> system, how do you propose to get that message up there?
? That's the message generated by the Windows access control mechanism
when you run a binary that requests elevated privileges.
--
Matthew Garrett | [email protected]
On Fri, 2012-11-02 at 17:54 +0000, Matthew Garrett wrote:
> On Fri, Nov 02, 2012 at 05:48:31PM +0000, James Bottomley wrote:
> > On Fri, 2012-11-02 at 16:54 +0000, Matthew Garrett wrote:
> > > On Fri, Nov 02, 2012 at 04:52:44PM +0000, James Bottomley wrote:
> > >
> > > > The first question is how many compromises do you need. Without
> > > > co-operation from windows, you don't get to install something in the
> > > > boot system, so if you're looking for a single compromise vector, the
> > > > only realistic attack is to trick the user into booting a hacked linux
> > > > system from USB or DVD.
> > >
> > > You run a binary. It pops up a box saying "Windows needs your permission
> > > to continue", just like almost every other Windows binary that's any
> > > use. Done.
> >
> > And if all the loaders do some type of present user test on a virgin
> > system, how do you propose to get that message up there?
>
> ? That's the message generated by the Windows access control mechanism
> when you run a binary that requests elevated privileges.
So that's a windows attack vector using a windows binary? I can't really
see how it's relevant to the secure boot discussion then.
James
On Fri, Nov 02, 2012 at 05:57:38PM +0000, James Bottomley wrote:
> On Fri, 2012-11-02 at 17:54 +0000, Matthew Garrett wrote:
> > ? That's the message generated by the Windows access control mechanism
> > when you run a binary that requests elevated privileges.
>
> So that's a windows attack vector using a windows binary? I can't really
> see how it's relevant to the secure boot discussion then.
A user runs a binary that elevates itself to admin. Absent any flaws in
Windows (cough), that should be all it can do in a Secure Boot world.
But if you can drop a small trusted Linux system in there and use that
to boot a compromised Windows kernel, it can make itself persistent.
--
Matthew Garrett | [email protected]
On Fri, Nov 02, 2012 at 05:22:41PM +0100, Jiri Kosina wrote:
> On Fri, 2 Nov 2012, Vivek Goyal wrote:
>
> > > > "crash" utility has module which allows reading kernel memory. So leaking
> > > > this private key will be easier then you are thinking it to be.
> > >
> > > That's not upstream, right?
> >
> > Yes, checked with Dave, it is not upstream. Well, still it is a concern
> > for distro kernel.
>
> Well, that's about /dev/crash, right?
Yes, I was talking about /dev/crash.
>
> How about /proc/kcore?
Yes, we will have to lock down /proc/kcore too if we go the private
key solution way.
Thanks
Vivek
I know I started it, but Windows really isn't necessary to see value,
even if it is what pushed the timing.
A user installs a package as root. Absent any flaws in the Linux
kernel (cough) that should be all it can do in a Secure Boot world.
But if you can drop a small trusted Linux system in there and use that
to boot a compromised Linux kernel, it can make itself persistent.
And like I said, I know there are cloud providers out there who want
EXACTLY this type of system. One in which root in the guest is
untrusted and they want to keep them out of ring 0.
Matthew Garrett <[email protected]> writes:
> On Fri, Nov 02, 2012 at 01:49:25AM -0700, Eric W. Biederman wrote:
>
>> When the goal is to secure Linux I don't see how any of this helps.
>> Windows 8 compromises are already available so if we turn most of these
>> arguments around I am certain clever attackers can go through windows to
>> run compromised kernel on a linux system, at least as easily as the
>> reverse.
>
> And if any of them are used to attack Linux, we'd expect those versions
> of Windows to be blacklisted.
I fail to see the logic here. It is ok to trust Microsoft's signing key
because after I have been p0wned they will blacklist the version of
windows that was used to compromise my system?
A key revocation will help me when my system is p0wned how?
I don't want my system p0wned in the first place and I don't want to run
windows. Why should I trust Microsoft's signing key?
Eric
On 11/02/2012 04:03 PM, Eric W. Biederman wrote:
> Matthew Garrett<[email protected]> writes:
>
>> On Fri, Nov 02, 2012 at 01:49:25AM -0700, Eric W. Biederman wrote:
>>
>>> When the goal is to secure Linux I don't see how any of this helps.
>>> Windows 8 compromises are already available so if we turn most of these
>>> arguments around I am certain clever attackers can go through windows to
>>> run compromised kernel on a linux system, at least as easily as the
>>> reverse.
>>
>> And if any of them are used to attack Linux, we'd expect those versions
>> of Windows to be blacklisted.
>
> I fail to see the logic here. It is ok to trust Microsofts signing key
> because after I have been p0wned they will blacklist the version of
> windows that has was used to compromise my system?
>
> A key revokation will help me when my system is p0wned how?
It won't help you, it will help everyone else that _hasn't_ been p0wned
already because the affected software will no longer be able to run on
their system.
And it will help you because if someone _else_ gets p0wned then your
system won't be able to run the blacklisted insecure software.
> I don't want my system p0wned in the first place and I don't want to run
> windows. Why should I trust Microsoft's signing key?
In any case, you don't need to trust Microsoft's signing key...at least
on x86 hardware you can install your own. But if you want consumer
hardware to be able to boot linux out-of-the-box without messing with
BIOS settings then we need a bootloader that has been signed by Microsoft.
Chris
On Fri, 2012-11-02 at 18:04 +0000, Matthew Garrett wrote:
> On Fri, Nov 02, 2012 at 05:57:38PM +0000, James Bottomley wrote:
> > On Fri, 2012-11-02 at 17:54 +0000, Matthew Garrett wrote:
> > > ? That's the message generated by the Windows access control mechanism
> > > when you run a binary that requests elevated privileges.
> >
> > So that's a windows attack vector using a windows binary? I can't really
> > see how it's relevant to the secure boot discussion then.
>
> A user runs a binary that elevates itself to admin. Absent any flaws in
> Windows (cough), that should be all it can do in a Secure Boot world.
> But if you can drop a small trusted Linux system in there and use that
> to boot a compromised Windows kernel, it can make itself persistent.
We seem to be talking past each other. Assume you managed to install a
Linux boot system on the windows machine. If the linux boot requires
present user on first boot (either because the key of the bootloader
isn't in db or because the MOK database isn't initialised), you still
don't have a compromise because the loader won't start automatically.
James
On Fri, 02 Nov 2012 16:19:39 -0600
Chris Friesen <[email protected]> wrote:
> On 11/02/2012 04:03 PM, Eric W. Biederman wrote:
> > Matthew Garrett<[email protected]> writes:
> >
> >> On Fri, Nov 02, 2012 at 01:49:25AM -0700, Eric W. Biederman wrote:
> >>
> >>> When the goal is to secure Linux I don't see how any of this helps.
> >>> Windows 8 compromises are already available so if we turn most of these
> >>> arguments around I am certain clever attackers can go through windows to
> >>> run compromised kernel on a linux system, at least as easily as the
> >>> reverse.
> >>
> >> And if any of them are used to attack Linux, we'd expect those versions
> >> of Windows to be blacklisted.
This is the first laugh. So they revoke the key. For that to be useful
they must propagate that into all the boxes in warehouses and all the new
boxes. If they do that then all the existing store stock of Windows 8 DVD
and CD media needs replacing.
> > I don't want my system p0wned in the first place and I don't want to run
> > windows. Why should I trust Microsoft's signing key?
>
> In any case, you don't need to trust Microsoft's signing key...at least
> on x86 hardware you can install your own. But if you want consumer
> hardware to be able to boot linux out-of-the-box without messing with
> BIOS settings then we need a bootloader that has been signed by Microsoft.
Or a machine that has other keys in it, isn't sold locked down or doesn't
have lunatic boot firmware.
Alan
On Fri, Nov 02, 2012 at 03:03:02PM -0700, Eric W. Biederman wrote:
> I don't want my system p0wned in the first place and I don't want to run
> windows. Why should I trust Microsoft's signing key?
There's no reason to. Systems that don't trust Microsoft's signing key
have no reason to be concerned about Microsoft revocation.
Unfortunately, that's not the only set of people we have to worry about.
--
Matthew Garrett | [email protected]
On Fri, Nov 02, 2012 at 11:38:23PM +0000, James Bottomley wrote:
> On Fri, 2012-11-02 at 18:04 +0000, Matthew Garrett wrote:
> > A user runs a binary that elevates itself to admin. Absent any flaws in
> > Windows (cough), that should be all it can do in a Secure Boot world.
> > But if you can drop a small trusted Linux system in there and use that
> > to boot a compromised Windows kernel, it can make itself persistent.
>
> We seem to be talking past each other. Assume you managed to install a
> Linux boot system on the windows machine. If the linux boot requires
> present user on first boot (either because the key of the bootloader
> isn't in db or because the MOK database isn't initialised), you still
> don't have a compromise because the loader won't start automatically.
Why would an attacker use one of those Linux systems? There's going to
be plenty available that don't have that restriction.
--
Matthew Garrett | [email protected]
On Fri, Nov 02, 2012 at 11:46:07PM +0000, Alan Cox wrote:
> On Fri, 02 Nov 2012 16:19:39 -0600
> Chris Friesen <[email protected]> wrote:
> > On 11/02/2012 04:03 PM, Eric W. Biederman wrote:
> > > Matthew Garrett<[email protected]> writes:
> > >> And if any of them are used to attack Linux, we'd expect those versions
> > >> of Windows to be blacklisted.
>
> This is the first laugh. So they revoke the key. For that to be useful
> they must propogate that into all the boxes in warehouses and all the new
> boxes. If they do that then all the existing store stock of Windows 8 DVD
> and CD media needs replacing.
Revocation is done via Windows Update. If they refuse to do that, well,
lawyers, right?
--
Matthew Garrett | [email protected]
Matthew Garrett <[email protected]> writes:
> On Fri, Nov 02, 2012 at 03:03:02PM -0700, Eric W. Biederman wrote:
>
>> I don't want my system p0wned in the first place and I don't want to run
>> windows. Why should I trust Microsoft's signing key?
>
> There's no reason to. Systems that don't trust Microsoft's signing key
> have no reason to be concerned about Microsoft revocation.
> Unfortunately, that's not the only set of people we have to worry
> about.
No reason to? How can I configure an off the shelf system originally
sold with windows 8 installed to boot in UEFI secure boot mode using
shim without trusting Microsoft's key?
Eric
On Sat, 3 Nov 2012 00:23:39 +0000
Matthew Garrett <[email protected]> wrote:
> On Fri, Nov 02, 2012 at 11:46:07PM +0000, Alan Cox wrote:
> > On Fri, 02 Nov 2012 16:19:39 -0600
> > Chris Friesen <[email protected]> wrote:
> > > On 11/02/2012 04:03 PM, Eric W. Biederman wrote:
> > > > Matthew Garrett<[email protected]> writes:
> > > >> And if any of them are used to attack Linux, we'd expect those versions
> > > >> of Windows to be blacklisted.
> >
> > This is the first laugh. So they revoke the key. For that to be useful
> > they must propogate that into all the boxes in warehouses and all the new
> > boxes. If they do that then all the existing store stock of Windows 8 DVD
> > and CD media needs replacing.
>
> Revocation is done via Windows Update. If they refuse to do that, well,
> lawyers, right?
Doesn't work. Microsoft themselves have been bouncing up and down in the
press about malware installed in the supply chain. They have to revoke
the key in new systems as supplied. That means they can't install the
Windows 8 DVD which means they can't access windows update which means
all the media has to be updated.
It also means all customers with rescue media and restore media would
lose the ability to restore that media so those would need reissuing or a
mechanism to replace them.
Can't really see it happening.
As any crypto systems and economics people will tell you, key revocation
is hard, and the digital bits of it, hard as they are, are usually just
the tip of the iceberg.
Alan
> No reason to? How can I configure an off the shelf system originally
> sold with windows 8 installed to boot in UEFI secure boot mode using
> shim without trusting Microsoft's key?
Assuming it's an x86 and a PC class platform, and thus should allow you to
disable secure boot mode, then you disable secure boot mode and boot in
sane PC mode. You then jump through a collection of hoops to sign all
your OS stuff, your ROMs and a few other things with a new key, remove
the MS key and then "secure" boot it.
That will also stop random people demonstrating how secure your "secure"
boot is by walking up to your box and installing Windows 8 over your
distribution by reformatting your hard drive and probably block a wide
range of interesting law enforcement and other tools some of which will
inevitably fall into the wrong hands.
A lot of the work there is the mechanising of all of the hoop jumping and
key management, but there isn't an intrinsic reason you can't turn this
into a nice clean click and point self-sign my PC UI.
There are some interesting uses for self-signed keys, or for having your
own corporate key included in your builds as a big company. One thing it
solves, if you do it with Linux and your own key, is being able to
remote-install securely over a network, which right now is a problem for
all OSes and PC-class devices as you have no way to verify the image.
Alan
On Fri, Nov 02, 2012 at 05:47:02PM -0700, Eric W. Biederman wrote:
> No reason to? How can I configure an off the shelf system originally
> sold with windows 8 installed to boot in UEFI secure boot mode using
> shim without trusting Microsoft's key?
Delete the installed keys, install your choice of keys, make sure your
bootloader is signed with a key you trust. You're guaranteed to be able
to do this on any Windows 8 certified hardware.
--
Matthew Garrett | [email protected]
On Sat, 2012-11-03 at 00:22 +0000, Matthew Garrett wrote:
> On Fri, Nov 02, 2012 at 11:38:23PM +0000, James Bottomley wrote:
> > On Fri, 2012-11-02 at 18:04 +0000, Matthew Garrett wrote:
> > > A user runs a binary that elevates itself to admin. Absent any flaws in
> > > Windows (cough), that should be all it can do in a Secure Boot world.
> > > But if you can drop a small trusted Linux system in there and use that
> > > to boot a compromised Windows kernel, it can make itself persistent.
> >
> > We seem to be talking past each other. Assume you managed to install a
> > Linux boot system on the windows machine. If the linux boot requires
> > present user on first boot (either because the key of the bootloader
> > isn't in db or because the MOK database isn't initialised), you still
> > don't have a compromise because the loader won't start automatically.
>
> Why would an attacker use one of those Linux systems? There's going to
> be plenty available that don't have that restriction.
It's called best practices. If someone else releases something that
doesn't conform to them, then it's their signing key in jeopardy, not
yours. You surely must see that the goal of securing "everything"
against "anything" isn't achievable: if someone releases a
bootloader not conforming to the best practices, why would they have
bothered to include your secure boot lockdowns in their kernel? In
other words, you lost ab initio, so it's pointless to cite this type of
thing as a rationale for a kernel lockdown patch.
James
On Sat, Nov 03, 2012 at 12:03:56PM +0000, James Bottomley wrote:
> On Sat, 2012-11-03 at 00:22 +0000, Matthew Garrett wrote:
> > Why would an attacker use one of those Linux systems? There's going to
> > be plenty available that don't have that restriction.
>
> It's called best practices. If someone else releases something that
> doesn't conform to them, then it's their signing key in jeopardy, not
> yours. You surely must see that the goal of securing "everything"
> against "anything" isn't achievable because if someone releases a
> bootloader not conforming to the best practices, why would they have
> bothered to include your secure boot lockdowns in their kernel. In
> other words, you lost ab initio, so it's pointless to cite this type of
> thing as a rationale for a kernel lockdown patch.
I... what? Our signed bootloader will boot our signed kernel without any
physically present end-user involvement. We therefore need to make it
as difficult as practically possible for an attacker to use our signed
bootloader and our signed kernel as an attack vector against other
operating systems, which includes worrying about hibernate and kexec. If
people want to support this use case then patches to deal with that need
to be present in the upstream kernel.
--
Matthew Garrett | [email protected]
> You're guaranteed to be able
> to do this on any Windows 8 certified hardware.
Thats not my understanding of the situation.
On Sat, Nov 3, 2012 at 12:31 PM, Alan Cox <[email protected]> wrote:
>> You're guaranteed to be able
>> to do this on any Windows 8 certified hardware.
>
> Thats not my understanding of the situation.
Windows 8 certification has this as a requirement for x86 hardware. I
believe the opposite is a requirement for ARM hardware. However it's
possible that it just doesn't specify at all for ARM.
So yes, you're guaranteed to be able to do this on any Windows 8
certified x86 hardware.
On Sat, Nov 03, 2012 at 04:31:52PM +0000, Alan Cox wrote:
> > You're guaranteed to be able
> > to do this on any Windows 8 certified hardware.
>
> Thats not my understanding of the situation.
"17. Mandatory. On non-ARM systems, the platform MUST implement the
ability for a physically present user to select between two Secure Boot
modes in firmware setup: "Custom" and "Standard". Custom Mode allows for
more flexibility as specified in the following:
a. It shall be possible for a physically present user to use the Custom
Mode firmware setup option to modify the contents of the Secure Boot
signature databases and the PK. This may be implemented by simply
providing the option to clear all Secure Boot databases (PK, KEK, db,
dbx), which puts the system into setup mode."
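The "setup mode" the spec ends on is visible from Linux via efivarfs: each variable file there starts with a 4-byte attributes header followed by the data, and for SecureBoot and SetupMode the data is a single byte. A minimal sketch of checking this state (the helper names and the `EFIVARS` path handling are illustrative, not anything from this thread):

```python
# Sketch: read Secure Boot / setup-mode state from efivarfs on Linux.
# efivarfs files begin with a 4-byte little-endian attributes field,
# followed by the variable data; SecureBoot and SetupMode are one byte.
import os

EFIVARS = "/sys/firmware/efi/efivars"
EFI_GLOBAL_GUID = "8be4df61-93ca-11d2-aa0d-00e098032b8c"

def parse_efivar_bool(raw: bytes) -> bool:
    """Interpret an efivarfs dump of a one-byte boolean variable."""
    if len(raw) < 5:
        raise ValueError("too short for an efivarfs boolean variable")
    return raw[4] == 1  # skip the 4-byte attributes header

def read_efivar_bool(name: str):
    """Return True/False, or None when not booted via UEFI."""
    path = os.path.join(EFIVARS, "%s-%s" % (name, EFI_GLOBAL_GUID))
    try:
        with open(path, "rb") as f:
            return parse_efivar_bool(f.read())
    except (FileNotFoundError, PermissionError):
        return None
```

A box the spec would call "setup mode" (PK cleared) shows `read_efivar_bool("SetupMode")` as True and typically `read_efivar_bool("SecureBoot")` as False.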
--
Matthew Garrett | [email protected]
On Sat, Nov 03, 2012 at 12:37:44PM -0400, Eric Paris wrote:
> On Sat, Nov 3, 2012 at 12:31 PM, Alan Cox <[email protected]> wrote:
> >> You're guaranteed to be able
> >> to do this on any Windows 8 certified hardware.
> >
> > Thats not my understanding of the situation.
>
> Windows 8 certification has this as a requirement for x86 hardware. I
> belied the opposite is a requirement for arm hardware. However it's
> possible that it just doesn't specifiy at all for arm.
Arm devices are Windows RT, not Windows 8.
--
Matthew Garrett | [email protected]
On Sat, 2012-11-03 at 13:46 +0000, Matthew Garrett wrote:
> On Sat, Nov 03, 2012 at 12:03:56PM +0000, James Bottomley wrote:
> > On Sat, 2012-11-03 at 00:22 +0000, Matthew Garrett wrote:
> > > Why would an attacker use one of those Linux systems? There's going to
> > > be plenty available that don't have that restriction.
> >
> > It's called best practices. If someone else releases something that
> > doesn't conform to them, then it's their signing key in jeopardy, not
> > yours. You surely must see that the goal of securing "everything"
> > against "anything" isn't achievable because if someone releases a
> > bootloader not conforming to the best practices, why would they have
> > bothered to include your secure boot lockdowns in their kernel. In
> > other words, you lost ab initio, so it's pointless to cite this type of
> > thing as a rationale for a kernel lockdown patch.
>
> I... what? Our signed bootloader will boot our signed kernel without any
> physically present end-user involvement. We therefore need to make it
> as difficult as practically possible for an attacker to use our signed
> bootloader and our signed kernel as an attack vector against other
> operating systems, which includes worrying about hibernate and kexec. If
> people want to support this use case then patches to deal with that need
> to be present in the upstream kernel.
Right, but what I'm telling you is that by deciding to allow automatic
first boot, you're causing the windows attack vector problem. You could
easily do a present user test only on first boot which would eliminate
it. Instead, we get all of this.
By analogy, it's like an architect trying to design a house to be secure
without a front door lock. If you just secure the front door, you don't
necessarily need all the internal security. There is certainly a market
for houses with good internal security, but not everyone wants the
hassle, so trying to make every house that way is counterproductive.
It's also not so useful to the people who want specialist internal
security because they're willing to use much more specialised systems
than you have to deploy generally.
In short, if you'd just separate the problem into
1. What do we have to do to prevent Linux being used to attack
Windows and thus getting our key revoked, and
2. What specialised systems can we put in place to enhance Linux
security with secure boot for those who want it,
it becomes a lot simpler than trying to do a one size fits all solution.
James
On Fri, 2 Nov 2012, Vivek Goyal wrote:
> > With secure boot enabled, then the kernel should refuse to let an
> > unsigned kexec load new images, and kexec itself should refuse to
> > load unsigned images.
>
> Yep, good in theory. Now that basically means reimplementing kexec-tools
> in kernel.
Why is "when kernel has been securely booted, the in-kernel kexec
mechanism has to verify the signature of the supplied image before
kexecing it" not enough? (basically the same thing we are doing for signed
modules already).
--
Jiri Kosina
SUSE Labs
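For reference, the module scheme Jiri is drawing the analogy to appends the signature to the module file, followed by a fixed magic string, so the verifier can split payload from signature without any detached file. A toy sketch of that split-and-verify shape (the digest allowlist here stands in for the real RSA/X.509 check and is purely illustrative):

```python
# Toy sketch shaped like the kernel's appended module signatures:
#   payload || signature || sig-length || magic marker
# Real module signing verifies an asymmetric signature against trusted
# keys; a digest allowlist stands in for that step here (illustration).
import hashlib
import struct

MAGIC = b"~Module signature appended~\n"

def sign(payload: bytes) -> bytes:
    """Append a stand-in 'signature' (a SHA-256 digest), a 4-byte
    big-endian length field, and the magic marker."""
    sig = hashlib.sha256(payload).digest()
    return payload + sig + struct.pack(">I", len(sig)) + MAGIC

def verify(image: bytes, trusted_digests: set) -> bytes:
    """Split off the appended signature and check it; return the bare
    payload on success, raise on failure (as a signed kexec would)."""
    if not image.endswith(MAGIC):
        raise ValueError("no appended signature")
    body = image[: -len(MAGIC)]
    (sig_len,) = struct.unpack(">I", body[-4:])
    payload, sig = body[: -4 - sig_len], body[-4 - sig_len : -4]
    if hashlib.sha256(payload).digest() != sig or sig not in trusted_digests:
        raise ValueError("signature rejected")
    return payload
```

The point of the layout is exactly Jiri's: an in-kernel loader can accept one opaque blob and decide for itself whether to run it, with no help from userspace tooling.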
On Sat, Nov 03, 2012 at 10:56:40PM +0000, James Bottomley wrote:
> On Sat, 2012-11-03 at 13:46 +0000, Matthew Garrett wrote:
> > I... what? Our signed bootloader will boot our signed kernel without any
> > physically present end-user involvement. We therefore need to make it
> > as difficult as practically possible for an attacker to use our signed
> > bootloader and our signed kernel as an attack vector against other
> > operating systems, which includes worrying about hibernate and kexec. If
> > people want to support this use case then patches to deal with that need
> > to be present in the upstream kernel.
>
> Right, but what I'm telling you is that by deciding to allow automatic
> first boot, you're causing the windows attack vector problem. You could
> easily do a present user test only on first boot which would eliminate
> it. Instead, we get all of this.
Your definition of "Best practices" is "Automated installs are
impossible"? Have you ever actually spoken to a user?
--
Matthew Garrett | [email protected]
On Sun, 2012-11-04 at 04:28 +0000, Matthew Garrett wrote:
> On Sat, Nov 03, 2012 at 10:56:40PM +0000, James Bottomley wrote:
> > On Sat, 2012-11-03 at 13:46 +0000, Matthew Garrett wrote:
> > > I... what? Our signed bootloader will boot our signed kernel without any
> > > physically present end-user involvement. We therefore need to make it
> > > as difficult as practically possible for an attacker to use our signed
> > > bootloader and our signed kernel as an attack vector against other
> > > operating systems, which includes worrying about hibernate and kexec. If
> > > people want to support this use case then patches to deal with that need
> > > to be present in the upstream kernel.
> >
> > Right, but what I'm telling you is that by deciding to allow automatic
> > first boot, you're causing the windows attack vector problem. You could
> > easily do a present user test only on first boot which would eliminate
> > it. Instead, we get all of this.
>
> Your definition of "Best practices" is "Automated installs are
> impossible"? Have you ever actually spoken to a user?
Are you sure you've spoken to the right users if you think they use a
distro boot system to do automated installs?
I've actually had more than enough experience with automated installs
over my career: they're either done by paying someone or using a
provisioning system. In either case, they provision a static image and
boot environment description, including EFI boot services variables, so
you can provision a default MOK database if you want the ignition image
not to pause on firstboot.
There is obviously the question of making the provisioning systems
secure, but it's a separate one from making boot secure.
James
On Sun 2012-11-04 04:28:02, Matthew Garrett wrote:
> On Sat, Nov 03, 2012 at 10:56:40PM +0000, James Bottomley wrote:
> > On Sat, 2012-11-03 at 13:46 +0000, Matthew Garrett wrote:
> > > I... what? Our signed bootloader will boot our signed kernel without any
> > > physically present end-user involvement. We therefore need to make it
> > > as difficult as practically possible for an attacker to use our signed
> > > bootloader and our signed kernel as an attack vector against other
> > > operating systems, which includes worrying about hibernate and kexec. If
> > > people want to support this use case then patches to deal with that need
> > > to be present in the upstream kernel.
> >
> > Right, but what I'm telling you is that by deciding to allow automatic
> > first boot, you're causing the windows attack vector problem. You could
> > easily do a present user test only on first boot which would eliminate
> > it. Instead, we get all of this.
>
> Your definition of "Best practices" is "Automated installs are
> impossible"? Have you ever actually spoken to a user?
Always polite Matthew...
Anyway, the problem with introducing random signatures all over the kernel
is that it does not _work_. You'll end up signing all of userspace,
too. So far you want to sign kexec; soon you'll discover you need to
sign s2disk, too, and then you realize X, wine and dosemu need the
same treatment. fvwm95 comes next.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Sun, Nov 04, 2012 at 09:14:47AM +0000, James Bottomley wrote:
> I've actually had more than enough experience with automated installs
> over my career: they're either done by paying someone or using a
> provisioning system. In either case, they provision a static image and
> boot environment description, including EFI boot services variables, so
> you can provision a default MOK database if you want the ignition image
> not to pause on firstboot.
And now you've moved the attack vector to a copy of your provisioning
system instead.
> There is obviously the question of making the provisioning systems
> secure, but it's a separate one from making boot secure.
You don't get to punt on making the kernel secure by simply asserting
that some other system can be secure instead. The chain of trust needs
to go all the way back - if your security model is based on all installs
needing a physically present end user, all installs need a physically
present end user. That's not acceptable, so we need a different security
model.
--
Matthew Garrett | [email protected]
Matthew Garrett <[email protected]> writes:
> On Sun, Nov 04, 2012 at 09:14:47AM +0000, James Bottomley wrote:
>
>> I've actually had more than enough experience with automated installs
>> over my career: they're either done by paying someone or using a
>> provisioning system. In either case, they provision a static image and
>> boot environment description, including EFI boot services variables, so
>> you can provision a default MOK database if you want the ignition image
>> not to pause on firstboot.
>
> And now you've moved the attack vector to a copy of your provisioning
> system instead.
>
>> There is obviously the question of making the provisioning systems
>> secure, but it's a separate one from making boot secure.
>
> You don't get to punt on making the kernel secure by simply asserting
> that some other system can be secure instead. The chain of trust needs
> to go all the way back - if your security model is based on all installs
> needing a physically present end user, all installs need a physically
> present end user. That's not acceptable, so we need a different security
> model.
Bzzzt. Theory and reality disagreeing.
I have done a lot of automatic installs. At the very least someone has
to be present to apply power to the hardware. So someone being present
is not a requirement you can remove.
Furthermore in most cases an automatic install requires kicking the
system into network boot mode or into inserting an install cd. Both are
actions that require a user to be present.
The goal is to reduce what a user must do to a minimum to remove the
possibility of human error, not to reduce what must happen into
absurdity.
The other side is that a general purpose configuration of firmware
almost never is suitable for a general install. So either some small
amount of time must be spent fixing the BIOS settings or have an
appropriate set of BIOS settings come from your supplier.
In practice what I would expect of a UEFI system that ships ready for
automatic installs is a system that initially boots up in "setup mode"
where it is possible to install your own platform signing key.
What I would expect to happen in that situation is that during the first
boot software would come over the network or from an install cd and
install my platform signing key. Then a bootloader signed with my key
would be installed, and then everything would chain from there.
In most cases where I would be setting up an automatic install I would
not install Microsoft's key, and I would definitely not sign my
bootloader with Microsoft's key. At most I would sign my own "key
install" with Microsoft's key.
Then in cases of automatic reinstallation my key would be in the
firmware and I could change my bootloader and my kernels at will,
with no risk that some third party could do anything to the machine
unless they managed to get physical access.
If I were a distro, the key I would install by default would be the
distro's signing key. Although honestly I would still prefer a
solution where I could lock things down a little further.
In any case the notion that unattended install with no user interaction
on any uefi machine in any state is complete and total rubbish. It
can't be done. You need power and you need boot media.
Eric
Jiri Kosina <[email protected]> writes:
> On Fri, 2 Nov 2012, Vivek Goyal wrote:
>
>> > With secure boot enabled, then the kernel should refuse to let an
>> > unsigned kexec load new images, and kexec itself should refuse to
>> > load unsigned images.
>>
>> Yep, good in theory. Now that basically means reimplementing kexec-tools
>> in kernel.
>
> Why is "when kernel has been securely booted, the in-kernel kexec
> mechanism has to verify the signature of the supplied image before
> kexecing it" not enough? (basically the same thing we are doing for signed
> modules already).
For modules the only untrusted part of their environment are the command
line parameters, and several of those have already been noted for
needing to be ignored in a non-trusted root scenario.
For kexec there is a bunch of glue code and data that takes care of
transitioning from the environment provided by kexec and the environment
that the linux kernel or memtest86 or whatever we are booting is
expecting.
Figuring out what glue code and data we need and supplying that glue
code and data is what kexec-tools does. The situation is a bit like
dealing with the modules before most of the work of insmod was moved
into the kernel.
For kexec-tools it is desirable to have glue layers outside of the
kernel because every boot system in existence has a different set of
parameter passing rules.
So signing in the kernel gets us into the question of how we sign the glue
code and how we verify that the glue code will jump to our signed and verified
kernel image.
I will be happy to review patches for this that don't throw the baby out
with the bathwater.
Eric
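Eric's "glue" point can be made concrete with a toy model. The segment names and contents below are invented for illustration (this is not the real kexec_load ABI): verifying only the kernel image leaves the purgatory glue and boot parameters, which also steer execution, entirely uncovered.

```python
# Toy model of what kexec-tools hands to the kernel: a list of segments,
# only one of which is the kernel image proper. Names and contents are
# illustrative only; the real ABI passes struct kexec_segment entries.
segments = [
    {"name": "purgatory",   "data": b"glue: set up registers, jump to kernel"},
    {"name": "boot_params", "data": b"command line, memory map, initrd ptr"},
    {"name": "bzImage",     "data": b"the only piece a kernel signature covers"},
]

def unsigned_segments(segs, signed_names=("bzImage",)):
    """Segments that influence execution but are not covered when only
    the kernel image itself is signature-checked."""
    return [s["name"] for s in segs if s["name"] not in signed_names]

print(unsigned_segments(segments))  # -> ['purgatory', 'boot_params']
```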
On 11/05/2012 07:14 AM, Eric W. Biederman wrote:
>
> In any case the notion of an unattended install with no user interaction
> on any UEFI machine in any state is complete and total rubbish. It
> can't be done. You need power and you need boot media.
>
That is a hugely different thing from needing a console.
-hpa
"H. Peter Anvin" <[email protected]> writes:
> On 11/05/2012 07:14 AM, Eric W. Biederman wrote:
>>
>> In any case the notion of an unattended install with no user interaction
>> on any UEFI machine in any state is complete and total rubbish. It
>> can't be done. You need power and you need boot media.
>>
>
> That is a hugely different thing from needing a console.
Not at all.
In the general case user interaction is required to tell the system to
boot off of your chosen boot media instead of the local hard drive.
Eric
This is not a good thing to assume. A vendor could have an external button, for example.
[email protected] wrote:
>"H. Peter Anvin" <[email protected]> writes:
>
>> On 11/05/2012 07:14 AM, Eric W. Biederman wrote:
>>>
>>> In any case the notion of an unattended install with no user
>>> interaction on any UEFI machine in any state is complete and total
>>> rubbish. It
>>> can't be done. You need power and you need boot media.
>>>
>>
>> That is a hugely different thing from needing a console.
>
>Not at all.
>
>In the general case user interaction is required to tell the system to
>boot off of your chosen boot media instead of the local hard drive.
>
>Eric
--
Sent from my mobile phone. Please excuse brevity and lack of formatting.
On Sun, 2012-11-04 at 13:52 +0000, Matthew Garrett wrote:
> On Sun, Nov 04, 2012 at 09:14:47AM +0000, James Bottomley wrote:
>
> > I've actually had more than enough experience with automated installs
> > over my career: they're either done by paying someone or using a
> > provisioning system. In either case, they provision a static image and
> > boot environment description, including EFI boot services variables, so
> > you can provision a default MOK database if you want the ignition image
> > not to pause on firstboot.
>
> And now you've moved the attack vector to a copy of your provisioning
> system instead.
Well, no, it always exists: a lot of provisioning systems install EFI
(or previously DOS) based agents, not Linux kernels. However, it's a
different vector, since the EFI agents tend to want to PXE boot and
contact the image server.
> > There is obviously the question of making the provisioning systems
> > secure, but it's a separate one from making boot secure.
>
> You don't get to punt on making the kernel secure by simply asserting
> that some other system can be secure instead. The chain of trust needs
> to go all the way back - if your security model is based on all installs
> needing a physically present end user, all installs need a physically
> present end user. That's not acceptable, so we need a different security
> model.
I didn't. I advocated a simple security model which you asserted
wouldn't allow unattended installs, so I explained how they could be
done.
James
"H. Peter Anvin" <[email protected]> writes:
> This is not a good thing to assume. A vendor could have an external
> button, for example.
Facts are always a good thing to assume.
The fact is the general case does not admit an install without user
interaction.
It makes a lot of sense to revisit the working assumptions when for lack
of 3 or 4 lines in the bootloader people are advocating turning gold
into lead at the cost of a national banking bailout.
Non-interactive installs are very interesting but they only make sense
in a very narrow range of cases, not in every BIOS state on
every machine. If the UEFI firmware will let me install a platform key
and set every other firmware setting in my installer, then it is a good
starting state. The rest of the time there will be some unpredictable
inconsistent mess of firmware settings that someone is going to have to
go in and fix. Or the install cd will have blown away my existing
partitions deleting data I forgot to back up that day.
The notion that a non-interactive install is possible in the general
case is complete and total hogwash.
Eric
On 11/05/2012 09:50 AM, Eric W. Biederman wrote:
>
> Facts are always a good thing to assume.
>
> The fact is the general case does not admit an install without user
> interaction.
>
In the general case, no. However, that is not a good reason to rule out
the cases where it *can* be done; especially as vendors are starting to
wake up to actual needs of users and of Linux in particular.
-hpa
On Mon, Nov 05, 2012 at 09:20:17AM +0100, James Bottomley wrote:
> On Sun, 2012-11-04 at 13:52 +0000, Matthew Garrett wrote:
> > You don't get to punt on making the kernel secure by simply asserting
> > that some other system can be secure instead. The chain of trust needs
> > to go all the way back - if your security model is based on all installs
> > needing a physically present end user, all installs need a physically
> > present end user. That's not acceptable, so we need a different security
> > model.
>
> I didn't. I advocated a simple security model which you asserted
> wouldn't allow unattended installs, so I explained how they could be
> done.
You've explained that a hypothetical piece of software could handle key
provisioning without providing any explanation for how it would be able
to do so in a secure manner.
--
Matthew Garrett | [email protected]
On Sun, Nov 04, 2012 at 11:24:17PM -0800, Eric W. Biederman wrote:
> "H. Peter Anvin" <[email protected]> writes:
> >
> > That is a hugely different thing from needing a console.
>
> Not at all.
>
> > In the general case user interaction is required to tell the system to
> > boot off of your chosen boot media instead of the local hard drive.
No, in the general case the system will do that once it fails to find a
bootable OS on the drive.
--
Matthew Garrett | [email protected]
On Mon, 5 Nov 2012 12:38:58 +0000
Matthew Garrett <[email protected]> wrote:
> On Sun, Nov 04, 2012 at 11:24:17PM -0800, Eric W. Biederman wrote:
> > "H. Peter Anvin" <[email protected]> writes:
> > >
> > > That is a hugely different thing from needing a console.
> >
> > Not at all.
> >
> > In the general case user interaction is required to tell the system to
> > boot off of your chosen boot media instead of the local hard drive.
>
> No, in the general case the system will do that once it fails to find a
> bootable OS on the drive.
So your "secure" system can be wiped by a random Windows 8 install media.
Ooh, that's good stuff 8)
On Mon, Nov 05, 2012 at 01:44:36PM +0000, Alan Cox wrote:
> On Mon, 5 Nov 2012 12:38:58 +0000
> Matthew Garrett <[email protected]> wrote:
> > No, in the general case the system will do that once it fails to find a
> > bootable OS on the drive.
>
> So your "secure" system can be wiped by a random Windows 8 install media.
> Ooh, that's good stuff 8)
Once you've booted trusted code you can change the boot order.
--
Matthew Garrett | [email protected]
On Sun, 4 Nov 2012, Eric W. Biederman wrote:
> > Why is "when kernel has been securely booted, the in-kernel kexec
> > mechanism has to verify the signature of the supplied image before
> > kexecing it" not enough? (basically the same thing we are doing for signed
> > modules already).
>
> For modules the only untrusted parts of their environment are the command
> line parameters, and several of those have already been noted for
> needing to be ignored in a non-trusted root scenario.
>
> For kexec there is a bunch of glue code and data that takes care of
> transitioning from the environment provided by kexec and the environment
> that the linux kernel or memtest86 or whatever we are booting is
> expecting.
>
> Figuring out what glue code and data we need and supplying that glue
> code and data is what kexec-tools does. The situation is a bit like
> dealing with the modules before most of the work of insmod was moved
> into the kernel.
>
> For kexec-tools it is desirable to have glue layers outside of the
> kernel because every boot system in existence has a different set of
> parameter passing rules.
>
> So signing in the kernel gets us into the question of how we sign the glue
> code and how we verify that the glue code will jump to our signed and verified
> kernel image.
Do I understand you correctly that by the 'glue' stuff you actually mean
the division of the kexec image into segments?
Of course, when we are dividing the image into segments and then passing
those individually (even more so if some transformations are performed on
those segments, which I don't know whether that's the case or not), then
we can't do any signature verification of the image any more.
But I still don't fully understand what is so magical about taking the
kernel image as is, and passing the whole lot to the running kernel as-is,
allowing for signature verification.
Yes, it couldn't be sys_kexec_load(), as that would be ABI breakage, so
it'd mean sys_kexec_raw_load(), or whatever ... but I fail to see why that
would be problem in principle.
If you can point me to the code where all the magic that prevents this
easy handling is happening, I'd appreciate it.
Thanks,
--
Jiri Kosina
SUSE Labs
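Jiri's whole-image idea can be sketched abstractly. This is illustrative Python, not kernel code: sys_kexec_raw_load() is hypothetical, and an HMAC over the blob stands in for the asymmetric signature a real implementation would verify against a public key built into the kernel.

```python
import hashlib
import hmac

# Stand-in signing key: a real scheme would use an asymmetric key pair,
# with only the public half compiled into the kernel. HMAC is used here
# solely so the sketch is self-contained and runnable.
SIGNING_KEY = b"distro-build-key"

def sign_image(image: bytes) -> bytes:
    """Build-time step: produce a detached signature over the whole image."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def verify_and_load(image: bytes, signature: bytes) -> bool:
    """Model of a hypothetical sys_kexec_raw_load(): accept the image only
    if the signature over the entire blob checks out."""
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

kernel_image = b"\x7fELF... pretend this is a complete kernel image ..."
sig = sign_image(kernel_image)
assert verify_and_load(kernel_image, sig)             # intact image: loaded
assert not verify_and_load(kernel_image + b"X", sig)  # tampered: refused
```

The limitation Eric raises applies directly here: the signature covers only the image blob itself, not the glue and parameter segments that kexec-tools would normally prepare around it.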
On Mon, 5 Nov 2012, Jiri Kosina wrote:
> Do I understand you correctly that by the 'glue' stuff you actually mean
> the division of the kexec image into segments?
>
> Of course, when we are dividing the image into segments and then passing
> those individually (even more so if some transformations are performed on
> those segments, which I don't know whether that's the case or not), then
> we can't do any signature verification of the image any more.
>
> But I still don't fully understand what is so magical about taking the
> kernel image as is, and passing the whole lot to the running kernel as-is,
> allowing for signature verification.
>
> Yes, it couldn't be sys_kexec_load(), as that would be ABI breakage, so
> it'd mean sys_kexec_raw_load(), or whatever ... but I fail to see why that
> would be problem in principle.
>
> If you can point me to the code where all the magic that prevents this
> easy handling is happening, I'd appreciate it.
OK, so after wandering through kexec-tools sources for a while, I am
starting to get your point. I wasn't actually aware of the fact that it
supports such a wide variety of binary formats (multiboot, nbi, etc.).
I had a naive idea of just putting in-kernel verification of a complete
ELF binary passed to kernel by userspace, and if the signature matches,
jumping to it.
Would work for elf-x86_64 nicely I guess, but we'd lose a lot of other
functionality currently being provided by kexec-tools.
Bah. This is a real pandora's box.
--
Jiri Kosina
SUSE Labs
On 11/05/2012 09:31 AM, Jiri Kosina wrote:
> I had a naive idea of just putting in-kernel verification of a complete
> ELF binary passed to kernel by userspace, and if the signature matches,
> jumping to it.
> Would work for elf-x86_64 nicely I guess, but we'd lose a lot of other
> functionality currently being provided by kexec-tools.
>
> Bah. This is a real pandora's box.
Would it be so bad to statically link kexec?
Chris
At Thu, 1 Nov 2012 13:18:49 +0000,
Alan Cox wrote:
>
> > I think it makes sense because the private key is still protected by the
> > signer. Any hacker who modifies the firmware still needs to use a private
> > key to generate a signature, but the hacker's private key cannot match
> > the public key that the kernel uses to verify firmware.
> >
> > And I'm afraid we have no choice but to put the firmware signature
> > in a separate file. Contacting those companies' legal departments
> > will be very time-consuming, and I am not sure every company will agree
> > to let us put the signature in the firmware and then distribute it.
>
> Then you'd better stop storing it on disk because your disk drive is FEC
> encoding it and adding a CRC 8)
>
> It does want checking with a lawyer but my understanding is that if you
> have a file which is a package that contains the firmware and a signature
> then there is not generally a problem, any more than putting it in an RPM
> file - it's packaging/aggregation. This should be referred to the Linux
> Foundation folks perhaps - no point designing something badly to work
> around a non-existent issue.
>
> Also the interface needs to consider that a lot of device firmware is
> already signed. Nobody notices because they don't ever try to do their
> own; thus many drivers in fact don't need extra signatures.
Besides the legal concern, embedding the signature into the firmware
makes the file incompatible with old kernels that have no support for
signed firmware. That's the reason I put the files into a new
location in my test patch, /lib/firmware/signed/. Having a separate
signature file would make this easier.
I quickly cooked up firmware loader code again for the separate signature
files. I'm going to send a series of test patches.
thanks,
Takashi
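The detached-signature arrangement Takashi describes can be sketched in userspace terms. The foo.bin / foo.bin.sig layout is an assumption for illustration, and a plain SHA-256 digest stands in for the real signature (which would be made with the distributor's private key and verified with a public key the loader trusts):

```python
import hashlib
import tempfile
from pathlib import Path

def write_detached_sig(fw_path: Path) -> Path:
    """Build-time step: write a detached 'signature' (here just a digest,
    standing in for a real asymmetric signature) next to the firmware."""
    sig_path = fw_path.with_name(fw_path.name + ".sig")
    sig_path.write_bytes(hashlib.sha256(fw_path.read_bytes()).digest())
    return sig_path

def load_verified_firmware(fw_path: Path) -> bytes:
    """Glue firmware and detached signature together at load time, the way
    udev or an in-kernel loader might, refusing a mismatched blob."""
    sig = fw_path.with_name(fw_path.name + ".sig").read_bytes()
    blob = fw_path.read_bytes()
    if hashlib.sha256(blob).digest() != sig:
        raise ValueError("firmware signature mismatch")
    return blob

with tempfile.TemporaryDirectory() as d:
    fw = Path(d) / "foo.bin"
    fw.write_bytes(b"\x00\x01device firmware blob")
    write_detached_sig(fw)
    assert load_verified_firmware(fw) == b"\x00\x01device firmware blob"
```

Keeping the signature detached sidesteps both the redistribution-license concern and the compatibility problem: old kernels simply ignore the .sig file.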
On Mon, Nov 05, 2012 at 09:37:18AM -0600, Chris Friesen wrote:
> On 11/05/2012 09:31 AM, Jiri Kosina wrote:
>
> >I had a naive idea of just putting in-kernel verification of a complete
> >ELF binary passed to kernel by userspace, and if the signature matches,
> >jumping to it.
> >Would work for elf-x86_64 nicely I guess, but we'd lose a lot of other
> >functionality currently being provided by kexec-tools.
> >
> >Bah. This is a real pandora's box.
>
> Would it be so bad to statically link kexec?
statically linking and signing /sbin/kexec is sounding most reasonable so
far, to me. Even if we do that, there are a few more issues and queries, though.
- Do we still need a new system call?
- Who does the kernel signature verification? Is it /sbin/kexec, or should
  the kernel do that?
- If the kernel is supposed to do the signature verification, how is the
  signed bzImage passed to the kernel with the existing system call?
  Are certificates passed in separate segments? How does the kernel
  differentiate between segments, etc.?
- Does a signed /sbin/kexec mean that it can load any other segments,
  like the ELF header and boot_params, with no signature verification needed?
If we move all the kernel signature verification into /sbin/kexec,
then we should possibly be able to use the existing system call. But I don't
know what kind of crypto support we would have to build into kexec-tools
statically, how much help we can get from static libraries, and how
much work it is.
Thanks
Vivek
Matthew Garrett <[email protected]> writes:
> On Sun, Nov 04, 2012 at 11:24:17PM -0800, Eric W. Biederman wrote:
>> "H. Peter Anvin" <[email protected]> writes:
>> >
>> > That is a hugely different thing from needing a console.
>>
>> Not at all.
>>
>> In the general case user interaction is required to tell the system to
>> boot off of your chosen boot media instead of the local hard drive.
>
> No, in the general case the system will do that once it fails to find a
> bootable OS on the drive.
In the general case there will be a bootable OS on the drive.
Eric
On Mon, Nov 05, 2012 at 11:16:12AM -0800, Eric W. Biederman wrote:
> Matthew Garrett <[email protected]> writes:
> > No, in the general case the system will do that once it fails to find a
> > bootable OS on the drive.
>
> In the general case there will be a bootable OS on the drive.
That's in no way a given.
--
Matthew Garrett | [email protected]
* James Bottomley:
> Right, but what I'm telling you is that by deciding to allow automatic
> first boot, you're causing the windows attack vector problem. You could
> easily do a present user test only on first boot which would eliminate
> it.
Apparently, the warning will look like this:
WARNING: This Binary is unsigned
Are you sure you wish to run an unsigned binary
in a secure environment?
To avoid this question in future place the platform into setup mode
See http://www.linuxfoundation.org/uefi-setup-mode
And reboot.
I'm not convinced this will work because users will confirm their
presence to get back into the system. We expect GNU/Linux users to do
it; why wouldn't Windows users? (And what harm can an unsigned binary
do to a "secure environment", anyway? If it's adversely affected, it
can't be that secure, can it?)
And what's the backup plan if users use this to boot into compromised
Windows systems?
Matthew Garrett <[email protected]> writes:
> On Mon, Nov 05, 2012 at 11:16:12AM -0800, Eric W. Biederman wrote:
>> Matthew Garrett <[email protected]> writes:
>> > No, in the general case the system will do that once it fails to find a
>> > bootable OS on the drive.
>>
>> In the general case there will be a bootable OS on the drive.
>
> That's in no way a given.
You have it backwards. The conclusion here is that having a case where
a non-interactive install is possible is not a given.
Therefore inflicting on the entire rest of the ecosystem requirements
that only exist in the intersection of non-interactive installs and
installs on a machine with an existing OS does not make sense.
In situations where a non-interactive install is interesting, aka an
empty boot disk, there is nothing interesting to guard against.
Situations where interaction happens are where windows may already exist,
so spoofing windows is a design consideration, and a user presence test
does not break the design.
Eric
On Mon, Nov 05, 2012 at 06:46:32PM -0800, Eric W. Biederman wrote:
> Matthew Garrett <[email protected]> writes:
>
> > On Mon, Nov 05, 2012 at 11:16:12AM -0800, Eric W. Biederman wrote:
> >> Matthew Garrett <[email protected]> writes:
> >> > No, in the general case the system will do that once it fails to find a
> >> > bootable OS on the drive.
> >>
> >> In the general case there will be a bootable OS on the drive.
> >
> > That's in no way a given.
>
> You have it backwards. The conclusion here is that having a case where
> a non-interactive install is possible is not a given.
I deal with customers who perform non-interactive installs. The fact
that you don't care about that use case is entirely irrelevant to me,
because you're not the person that I am obliged to satisfy.
--
Matthew Garrett | [email protected]
Matthew Garrett <[email protected]> writes:
> On Mon, Nov 05, 2012 at 06:46:32PM -0800, Eric W. Biederman wrote:
>> Matthew Garrett <[email protected]> writes:
>>
>> > On Mon, Nov 05, 2012 at 11:16:12AM -0800, Eric W. Biederman wrote:
>> >> Matthew Garrett <[email protected]> writes:
>> >> > No, in the general case the system will do that once it fails to find a
>> >> > bootable OS on the drive.
>> >>
>> >> In the general case there will be a bootable OS on the drive.
>> >
>> > That's in no way a given.
>>
>> You have it backwards. The conclusion here is that having a case where
>> a non-interactive install is possible is not a given.
>
> I deal with customers who perform non-interactive installs. The fact
> that you don't care about that use case is entirely irrelevant to me,
> because you're not the person that I am obliged to satisfy.
I have spent what feels like half my life doing automatic installs. I
care a lot and I understand the requirements. I also see through
misstatements about reality used to justify stupid design decisions.
For automated installs you don't have to satisfy me. Feel free to
deliver a lousy solution to your users. Just don't use your arbitrary
design decisions to justify your kernel patches.
Non-interactive installs do not justify removing all trust from the root
user of a system, disabling suspend to disk, and completely rewriting
kexec, for the simple expedient of removing a couple of lines of code from
your bootloader.
Eric
On Mon, Nov 05, 2012 at 07:36:32PM -0800, Eric W. Biederman wrote:
> For automated installs you don't have to satisfy me. Feel free to
> deliver a lousy solution to your users. Just don't use your arbitrary
> design decisions to justify your kernel patches.
My kernel patches are justified by genuine user requirements. If you
don't feel that there's any requirement for the kernel to satisfy the
people who use it, you're free to ignore those patches.
--
Matthew Garrett | [email protected]
Matthew Garrett <[email protected]> writes:
> On Mon, Nov 05, 2012 at 07:36:32PM -0800, Eric W. Biederman wrote:
>
>> For automated installs you don't have to satisfy me. Feel free to
>> deliver a lousy solution to your users. Just don't use your arbitrary
>> design decisions to justify your kernel patches.
>
> My kernel patches are justified by genuine user requirements.
Hogwash.
If windows is not present on a system linux can not be used to boot a
compromised version of windows without user knowledge because windows is
not present.
If windows is present on a system then to install linux a user must be
present and push buttons to get the system to boot off of install media.
If a user is present a user presence test may be used to prevent a
bootloader signed with Microsoft's key from booting linux without the
users consent, and thus prevent Linux from attacking windows users.
Therefore preventing the revocation of a signature with Microsoft's
key from your bootloader does not justify elaborate kernel
modifications to prevent the booting of a compromised version of windows.
Furthermore no matter how hard we try with current techniques there will
eventually be kernel bugs that allow attackers to inject code into the
kernel. So attempting to fully close that attack vector is
questionable.
> If you
> don't feel that there's any requirement for the kernel to satisfy the
> people who use it, you're free to ignore those patches.
I feel that allowing the kernel to be hacked to bits and descend into an
unmaintainable mess does not serve the users of the kernel,
and to prevent that, technically poor patches should be rejected.
This helps prevent non-technical considerations from justifying
technically poor decisions.
Eric
On Mon, Nov 05, 2012 at 09:19:46PM -0800, Eric W. Biederman wrote:
> Matthew Garrett <[email protected]> writes:
>
> > On Mon, Nov 05, 2012 at 07:36:32PM -0800, Eric W. Biederman wrote:
> >
> >> For automated installs you don't have to satisfy me. Feel free to
> >> deliver a lousy solution to your users. Just don't use your arbitrary
> >> design decisions to justify your kernel patches.
> >
> > My kernel patches are justified by genuine user requirements.
>
> Hogwash.
You keep using that word, which is unfortunate.
> If windows is not present on a system linux can not be used to boot a
> compromised version of windows without user knowledge because windows is
> not present.
Correct.
> If windows is present on a system then to install linux a user must be
> present and push buttons to get the system to boot off of install media.
Incorrect. UEFI boot priorities can be set without physical user
interaction.
> If a user is present a user presence test may be used to prevent a
> bootloader signed with Microsoft's key from booting linux without the
> users consent, and thus prevent Linux from attacking windows users.
Correct, but precludes the kind of automated installs that I know real
people do. The keys a machine carries don't vary depending on whether it
shipped with Windows or not, so it's not possible to differentiate
between the "shipped with Windows" and "shipped without Windows" cases
when determining security models.
> Therefore preventing the revocation of a signature with Microsoft's
> key from your bootloader does not justify elaborate kernel
> modifications to prevent the booting of a compromised version of windows.
That's a stretch.
Bored now. You're adding nothing new to anyone's understanding of the
problem, and I'm just saying the same thing I've been saying for months,
so I don't see any purpose in discussing this with you further.
--
Matthew Garrett | [email protected]
* Eric W. Biederman:
> If windows is not present on a system linux can not be used to boot a
> compromised version of windows without user knowledge because windows is
> not present.
Interesting idea. Unfortunately, it is very hard to detect reliably
that Windows is not present from the bootloader, so it's not possible
to use this approach to simplify matters.
> If windows is present on a system then to install linux a user must be
> present and push buttons to get the system to boot off of install media.
That's not necessarily true.
> If a user is present a user presence test may be used to prevent a
> bootloader signed with Microsoft's key from booting linux without the
> users consent, and thus prevent Linux from attacking windows users.
As already explained, I don't think that user presence accomplishes
anything. You need informed consent, and it's impossible to cram that
on a 80x25 screen. You also need to make sure that you aren't
unnecessarily alarmist. We don't want a "Linux may harm your
computer" warning.
> Therefore preventing the revocation of a signature with Microsoft's
> key from your bootloader does not justify elaborate kernel
> modifications to prevent the booting of a compromised version of windows.
I don't like this approach, either.
> Furthermore no matter how hard we try with current techniques there will
> eventually be kernel bugs that allow attackers to inject code into the
> kernel. So attempting to fully close that attack vector is
> questionable.
I suspect we'd need to revoke old binaries after a grace period. I
guess the Microsoft approach is to revoke only what's actually used
for attacks, but that leads to a lot of unpredictability for our
users.
It's also annoying if we figure out after release that we have to
disable additional kernel functionality because it can be used to
compromise the boot path. Users will not like that, especially if
they do not use Windows at all.
Personally, I think the only way out of this mess is to teach users
how to disable Secure Boot.
On Tue, 06 Nov 2012 03:12:19 +0000, Matthew Garrett said:
> On Mon, Nov 05, 2012 at 06:46:32PM -0800, Eric W. Biederman wrote:
> > You have it backwards. The conclusion here is that having a case where
> > a non-interactive install is possible is not a given.
>
> I deal with customers who perform non-interactive installs. The fact
> that you don't care about that use case is entirely irrelevant to me,
> because you're not the person that I am obliged to satisfy.
You *do* realize that the fact you have some set of customers who
perform non-interactive installs does *not* imply that being able to do
so is a given, right? The fact it is available and doable for your customers
does *not* mean it's available and doable in general.
There's a big difference between "the design has to deal with the fact that
some customers can do this on some subsets of hardware" and "the design
is free to assume that this is doable".
On Tue, 6 Nov 2012 03:53:52 +0000
Matthew Garrett <[email protected]> wrote:
> On Mon, Nov 05, 2012 at 07:36:32PM -0800, Eric W. Biederman wrote:
>
> > For automated installs you don't have to satisfy me. Feel free to
> > deliver a lousy solution to your users. Just don't use your arbitrary
> > design decisions to justify your kernel patches.
>
> My kernel patches are justified by genuine user requirements.
So are lots of patches that don't go in because they are too ugly, too
invasive, or too special-case for mainstream.
There are two discussions here
- is it worth Red Hat doing - that's up to Red Hat's business managers
- is it worth merging into the kernel - that's not
The capability bit is small and clean; the rest of it is beginning to look
far too ugly for upstream right now. Not to say it might not end up small
and clean in the end.
Alan
On Wed, 31 Oct 2012, Matthew Garrett wrote:
> > Reading stored memory image (potentially tampered before reboot) from disk
> > is basically DMA-ing arbitrary data over the whole RAM. I am currently not
> > able to imagine a scenario how this could be made "secure" (without
> > storing private keys to sign the hibernation image on the machine itself
> > which, well, doesn't sound secure either).
>
> shim generates a public and private key.
It seems to me that this brings quite a huge delay into the boot process
both for "regular" and resume cases (as shim has no way to know what is
going to happen next). Mostly because obtaining enough entropy is
generally very difficult when we have just shim running, right?
> It hands the kernel the private key in a boot parameter and stores the
> public key in a boot variable. On suspend, the kernel signs the suspend
> image with that private key and discards it. On the next boot, shim
> generates a new key pair and hands the new private key to the kernel
> along with the old public key. The kernel verifies the suspend image
> before resuming it. The only way to subvert this would be to be able to
> access kernel memory directly, which means the attacker has already won.
I like this protocol, but after some off-line discussions, I still have
doubts about it. Namely: how do we make sure that there is no one tampering
with the variable?
Obvious step towards solving this is making the variable inaccessible
after ExitBootServices() has been called (by not setting runtime access
flag on it).
Now how about this scenario:
- consider securely booted win8 (no Linux installed on that machine, so
the variable for storing public key doesn't exist yet), possibly being
taken over by a malicious user
- he/she creates this secure variable from within the win8 and stores
his/her own public key into it
- he/she supplies a signed shim (as provided by some Linux distro vendor),
signed kernel (as provided by some Linux distro vendor) and specially
crafted resume image, signed by his/her own private key
- he/she reboots the machine in a way that shim+distro kernel+hacker's S4
image is used to resume
- distro kernel verifies the signature of the S4 image against the
attacker's public key stored in the variable; the signature is OK
- he/she has won, as he has managed to run an arbitrary kernel code
(stored in the S4 image) in a trusted mode
No?
Basically, once the machine is already populated with the "secure" version
of Linux, this can't happen, as we will (as far as I understand) set the
variable for storing the public key in a way that it can't be accessed
from runtime environment. But how can we prevent it being *created* before
the machine is ever touched by Linux?
Thanks,
--
Jiri Kosina
SUSE Labs
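The shim/kernel handshake quoted above can be modelled as a sequence. Only the message flow is the point here: HMAC with a shared secret stands in for the real asymmetric sign/verify pair, so the "public" half below is the same bytes as the private one, which a real implementation would of course avoid.

```python
import hashlib
import hmac
import os

def shim_boot(previous_public):
    """shim: mint a fresh key, hand the kernel the new private key plus
    the public half left over from the previous boot."""
    new_private = os.urandom(32)  # pseudorandom suffices, per the thread
    return new_private, previous_public

def kernel_suspend(private_key, image):
    """kernel: sign the suspend image, then discard the private key."""
    return hmac.new(private_key, image, hashlib.sha256).digest()

def kernel_resume(previous_public, image, signature):
    """kernel on the next boot: verify the image before resuming it."""
    expected = hmac.new(previous_public, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# Boot 1: shim mints K1; the kernel later suspends, signing the image.
k1_private, _ = shim_boot(previous_public=None)
image = b"...hibernation image..."
sig = kernel_suspend(k1_private, image)

# Boot 2: shim mints K2 and passes along K1's "public" half (here the
# same bytes, since HMAC is symmetric); the kernel verifies, then resumes.
k2_private, k1_public = shim_boot(previous_public=k1_private)
assert kernel_resume(k1_public, image, sig)
assert not kernel_resume(k1_public, image + b"!", sig)
```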
On Tue, Nov 06, 2012 at 01:51:15PM +0100, Jiri Kosina wrote:
> On Wed, 31 Oct 2012, Matthew Garrett wrote:
> > shim generates a public and private key.
>
> It seems to me that this brings quite a huge delay into the boot process
> both for "regular" and resume cases (as shim has no way to know what is
> going to happen next). Mostly because obtaining enough entropy is
> generally very difficult when we have just shim running, right?
Pseudorandom keys should be sufficient here. It's intended to deal with
the case of an automated attack rather than a deliberate effort to break
into a given user's system.
> > It hands the kernel the private key in a boot parameter and stores the
> > public key in a boot variable. On suspend, the kernel signs the suspend
> > image with that private key and discards it. On the next boot, shim
> > generates a new key pair and hands the new private key to the kernel
> > along with the old public key. The kernel verifies the suspend image
> > before resuming it. The only way to subvert this would be to be able to
> > access kernel memory directly, which means the attacker has already won.
>
> I like this protocol, but after some off-line discussions, I still have
> doubts about it. Namely: how do we make sure that there is no one tampering
> with the variable?
The variable has the same level of security as MOK, so that would be a
more attractive target.
> - consider securely booted win8 (no Linux installed on that machine, so
> the variable for storing public key doesn't exist yet), possibly being
> taken over by a malicious user
> - he/she creates this secure variable from within Win8 and stores
> his/her own public key in it
You can't create a non-RT variable from the OS.
> - he/she supplies a signed shim (as provided by some Linux distro vendor),
> signed kernel (as provided by some Linux distro vendor) and specially
> crafted resume image, signed by his/her own private key
shim detects that the key has the RT bit set and deletes it.
> - he/she reboots the machine in a way that shim+distro kernel+hacker's S4
> image is used to resume
And so this step can't happen.
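The defense being described can be sketched with the standard UEFI variable attribute bits; the helper function name is hypothetical.

```python
# Sketch of the check Matthew describes: shim refuses to trust a
# verification key stored in a variable with the runtime-access bit,
# since such a variable could have been planted from a running OS.
EFI_VARIABLE_NON_VOLATILE = 0x00000001
EFI_VARIABLE_BOOTSERVICE_ACCESS = 0x00000002
EFI_VARIABLE_RUNTIME_ACCESS = 0x00000004

def key_variable_trustworthy(attributes: int) -> bool:
    # Shim's own variable is created with boot-services access only;
    # anything the OS created at runtime must carry the RT bit, so
    # shim deletes it rather than using the key inside.
    return not (attributes & EFI_VARIABLE_RUNTIME_ACCESS)

# A key planted from a running OS (the Win8 scenario above) is rejected:
assert not key_variable_trustworthy(EFI_VARIABLE_NON_VOLATILE |
                                    EFI_VARIABLE_BOOTSERVICE_ACCESS |
                                    EFI_VARIABLE_RUNTIME_ACCESS)
# A boot-services-only variable, as shim would create it, passes:
assert key_variable_trustworthy(EFI_VARIABLE_NON_VOLATILE |
                                EFI_VARIABLE_BOOTSERVICE_ACCESS)
```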
--
Matthew Garrett | [email protected]
On Tue, Nov 06, 2012 at 09:12:17AM +0000, Alan Cox wrote:
> - is it worth Red Hat doing - that's up to Red Hat's business managers
>
> - is it worth merging into the kernel - that's not
>
The capability bit is small and clean; the rest of it is beginning to look
far too ugly for upstream right now. Not to say it might not end up small
and clean in the end.
I absolutely agree - the code has to be good enough to be accepted
upstream and I've no objection to being told that better implementations
must be produced. I do object to being told that there's no point in
trying to find an acceptable implementation.
--
Matthew Garrett | [email protected]
On 11/06/2012 01:56 AM, Florian Weimer wrote:
> Personally, I think the only way out of this mess is to teach users
> how to disable Secure Boot.
If you're going to go that far, why not just get them to install a
RedHat (or SuSE, or Ubuntu, or whoever) key and use that instead?
Secure boot does arguably solve a class of problems, so it seems a bit
odd to recommend just throwing it out entirely.
Chris
On Tue, 6 Nov 2012, Chris Friesen wrote:
> > Personally, I think the only way out of this mess is to teach users
> > how to disable Secure Boot.
>
> If you're going to go that far, why not just get them to install a RedHat (or
> SuSE, or Ubuntu, or whoever) key and use that instead?
You always need to keep in mind the possibility of the key being revoked.
> Secure boot does arguably solve a class of problems, so it seems a bit odd to
> recommend just throwing it out entirely.
Not really. It doesn't address the most common attack vector, i.e.
exploiting a bug in the kernel (and that's independent of the OS we
are talking about).
Just because it has "secure" in its name doesn't make it a
proper security solution. It should rather be called "vendor lock-in
boot", or something like that.
--
Jiri Kosina
SUSE Labs
* Chris Friesen:
> On 11/06/2012 01:56 AM, Florian Weimer wrote:
>
>> Personally, I think the only way out of this mess is to teach users
>> how to disable Secure Boot.
>
> If you're going to go that far, why not just get them to install a
> RedHat (or SuSE, or Ubuntu, or whoever) key and use that instead?
Behind that key, considerable infrastructure is needed, and the
challenges are not purely technical. I don't expect many such keys as
a result.
> Secure boot does arguably solve a class of problems, so it seems a bit
> odd to recommend just throwing it out entirely.
I have never seen a Linux system with a compromised boot path. Surely
they exist out there, but they are rare. It's also relatively simple
to detect such a compromise on disk, from the outside. Secure Boot
doesn't even allow you to safely boot from PXE because Fedora's shim
will automatically load an initrd which wipes all your disks. (Safe
booting from the network would be a compelling feature, but it's not
the focus of Secure Boot; that's client-only technology at the
moment.)
Some side effects, such as the end of proprietary kernel modules, may
be desirable. But others are not, like missing hibernate support (or
perhaps even X).
I'm not sure why you think that Fedora PXE installs will automatically wipe disks - they'll do whatever Kickstart tells them to do. The only thing relevant to secure boot here is that you need a signed bootloader, just like when you boot off CD.
--
Matthew Garrett | [email protected]
* Matthew Garrett:
> I'm not sure why you think that Fedora PXE installs will
> automatically wipe disks - they'll do whatever Kickstart tells them
> to do.
Or what the referenced initrd contains (which is not signed, for
obvious reasons). The point is that "the bootloader is signed by
Fedora" does not translate to "I can run this without worries".
I'm not sure if anybody has made promises in this direction. But lack
of a "do no harm" rule (which would have to prevent certain forms of
unattended installation for sure) means that we do not get that many
benefits out of Secure Boot.
It protects against certain classes of compromise. It doesn't prevent rogue software from damaging your system - anyone who gets root (and so could reconfigure your boot order) could just rm -rf / anyway.
--
Matthew Garrett | [email protected]
On Tue, 06 Nov 2012 16:55:25 -0500
Matthew Garrett <[email protected]> wrote:
> I'm not sure why you think that Fedora PXE installs will automatically wipe disks - they'll do whatever Kickstart tells them to do. The only thing relevant to secure boot here is that you need a signed bootloader, just like when you boot off CD.
They'll do whatever the kickstart file says - which means that for any
untrusted distribution path like PXE, your kickstart file had better be
signed too.
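One way to sketch Alan's point: a trusted (signed) initrd could carry a pinned digest of the only kickstart file it is willing to execute, so an untrusted PXE path cannot substitute a different one. The helper name and sample kickstart content below are hypothetical.

```python
# Sketch: anchor trust in the fetched kickstart to a digest that was
# shipped inside the signed initrd (which Secure Boot has verified).
import hashlib

def kickstart_trusted(content: bytes, pinned_sha256: str) -> bool:
    """Accept a fetched kickstart only if it matches the digest baked
    into the signed initrd; otherwise refuse to run it."""
    return hashlib.sha256(content).hexdigest() == pinned_sha256

# The digest below stands in for one embedded at image-build time.
ks = b"clearpart --all\nautopart\n"
pinned = hashlib.sha256(ks).hexdigest()

assert kickstart_trusted(ks, pinned)
assert not kickstart_trusted(b"%post\nrm -rf /\n%end\n", pinned)
```

A full detached-signature check would serve the same purpose without rebuilding the initrd per kickstart; the pinned-hash variant is just the simplest thing to show.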
Sure, and scripts run as root can wipe your files too. That's really not what this is all about.
--
Matthew Garrett | [email protected]