Hi Alexander, Johannes,
[ Added linux-crypto to Cc: ]
Wow, this is _one_ *intrusive* patchset indeed :-)
But first: Have you checked the digsig project? It's been doing
(for some time) what your current patchset proposes -- and
it uses public key cryptosystems for the key management,
which is decidedly better than using secret-keyed hashes
(HMAC, XCBC). Also, digsig aims to protect executable
binaries in general, and not just suid / sgid ones.
Second: Can we have some discussion on the security model /
threat model / trust model / cryptographic key management
scheme of your signing mechanism? [I had read through the
[0/4] mail you had sent yesterday, but found no relevant
discussion on these aspects there.]
From the patchset, it appears you use a *common* secret key
for _all_ signed binaries, and it is set at kernel build-time itself:
On 6/22/07, Alexander Wuerstlein <[email protected]> wrote:
> sns_secret_key.dat contains the 'secret key' which is used for HMAC.
Where:
> --- /dev/null
> +++ b/security/sns_secret_key.dat
> +#define SNS_SECRET_KEY_SIZE 8
> +static char sns_secret_key[SNS_SECRET_KEY_SIZE] =
> + {
> + 'd', 'e', 'a', 'd', 'b', 'e', 'e', 'f'
> + };
[ Ok, I won't nitpick as to why does this file look like a header,
is #include-d in the C source as a header, but still has a .dat
extension :-) ]
Anyway, this is *totally* insecure and broken. Do you realize anybody
who lays hands on the kernel image can now _trivially_ extract the
should-have-been-a-secret key from it and use it to sign his own
binaries?
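To make this concrete: the key sits as eight contiguous printable bytes in
the kernel's data, so anybody with the image (or simply with this source
file) has it, and forging a "valid" signature for an arbitrary binary is then
a few lines of userspace. A rough sketch -- I'm assuming HMAC-SHA1 over the
whole file here, and the hex output format is made up, but the point stands
for whatever the exact parameters are:

/* forge.c -- illustrative only: compute the "secret"-keyed HMAC over any file */
#include <stdio.h>
#include <stdlib.h>
#include <openssl/hmac.h>

int main(int argc, char **argv)
{
	static const unsigned char key[] = "deadbeef";	/* lifted from the kernel image */
	unsigned char sig[EVP_MAX_MD_SIZE];
	unsigned int siglen, i;
	unsigned char *buf;
	long len;
	FILE *f;

	if (argc < 2 || !(f = fopen(argv[1], "rb")))
		return 1;
	fseek(f, 0, SEEK_END);
	len = ftell(f);
	rewind(f);
	buf = malloc(len);
	if (fread(buf, 1, len, f) != (size_t)len)
		return 1;
	fclose(f);

	/* HMAC-SHA1 over the whole file, keyed with the extracted "secret" */
	HMAC(EVP_sha1(), key, sizeof(key) - 1, buf, len, sig, &siglen);

	for (i = 0; i < siglen; i++)
		printf("%02x", sig[i]);
	printf("\n");
	return 0;
}

Whatever format the kernel expects the signature in, producing it offline is
exactly this easy once the key is known.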
Satyam
On 070622 21:40, Satyam Sharma <[email protected]> wrote:
> Hi Alexander, Johannes,
>
> But first: Have you checked the digsig project? It's been doing
> (for some time) what your current patchset proposes -- and
> it uses public key cryptosystems for the key management,
> which is decidedly better than using secret-keyed hashes
> (HMAC, XCBC). Also, digsig aims to protect executable
> binaries in general, and not just suid / sgid ones.
We had not heard about digsig before, thanks for pointing it out. After a
short look over the source (correct me if I'm wrong): The most important
difference between our project and digsig is that digsig relies on storing
signatures inside the ELF file structure. Therefore a hand-made binary loader or
simply COFF binaries could be used to circumvent digsig. We decided against
altering the file itself for that and some other reasons.
The limitation to suid/sgid was only due to a limited amount of time we had for
implementing our patch. For the future we are planning further uses like
setting capabilities only for signed binaries.
> Second: Can we have some discussion on the security model /
> threat model / trust model / cryptographic key management
> scheme of your signing mechanism? [I had read through the
> [0/4] mail you had sent yesterday, but found no relevant
> discussion on these aspects there.]
Our scenario was as follows: usually system administrators rely on cron jobs
checking their binaries for unwanted suid bits. Because of the obvious problems
with this (the time window between cron runs, performance) we wrote our patch
to replace it.

An admin would verify the to-be-installed binaries (e.g. by reading the source
or checking the distribution's package signatures) and sign them in a central
location. He then distributes those signatures along with the installation
packages onto his computers. There should be only one key in use at a site, the
public part of which is compiled into the kernel. Any kind of chain-of-trust
should be handled in userspace by signing or not signing with the site-wide
key, depending on the earlier signatures in the chain.
So much for the initial idea. Perhaps it would be useful to have more than one
key, or some more complex scheme for obtaining the keys and checking their
validity. But as all of this would need to be part of the kernel, we decided to
keep it as simple as possible; anything complex is better and more flexibly
done in userspace.
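To make the distribution step a bit more concrete, the tool that installs a
centrally-produced signature on a target machine could be as small as the
sketch below. (The extended-attribute name is only a placeholder here and
need not match what the patch actually uses; the same goes for treating the
signature as an opaque blob read from a file.)

/* sns_attach.c -- sketch: attach a centrally-made signature to a binary
 * as an extended attribute.  Names are placeholders, not from the patch. */
#include <stdio.h>
#include <sys/xattr.h>

#define SNS_XATTR "security.sns_signature"	/* hypothetical attribute name */

int main(int argc, char **argv)
{
	unsigned char sig[512];
	size_t n;
	FILE *f;

	if (argc < 3) {
		fprintf(stderr, "usage: %s <binary> <sigfile>\n", argv[0]);
		return 1;
	}
	if (!(f = fopen(argv[2], "rb")))
		return 1;
	n = fread(sig, 1, sizeof(sig), f);
	fclose(f);

	/* store the signature alongside the file contents, not inside them */
	if (setxattr(argv[1], SNS_XATTR, sig, n, 0) < 0) {
		perror("setxattr");
		return 1;
	}
	return 0;
}

The nice property is that nothing secret ever has to be present on the
individual machines; once we move to asymmetric crypto, only the central
signing host holds the private key.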
> From the patchset, it appears you use a *common* secret key
> for _all_ signed binaries, and it is set at kernel build-time itself:
> [...]
> Anyway, this is *totally* insecure and broken. Do you realize anybody
> who lays hands on the kernel image can now _trivially_ extract the
> should-have-been-a-secret key from it and use it to sign his own
> binaries?
We do realize that this is really really ugly, broken and nasty and nobody
would or should ever want to use it for anything but playing around as it is
atm. ;)
We only used HMAC because it was already available inside the kernel; there was
simply no time to implement real asymmetric cryptography. Of course our next
objective is to implement that.
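For reference, driving HMAC through the existing crypto API needs very little
code; roughly something like the sketch below (simplified, not a verbatim
excerpt from our patch, and the hmac(sha1) template name is only an example):

/* Sketch: keyed digest over a scatterlist via the 2.6 crypto API. */
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

static int sns_hmac(const u8 *key, unsigned int keylen,
		    struct scatterlist *sg, unsigned int nbytes, u8 *out)
{
	struct crypto_hash *tfm;
	struct hash_desc desc;
	int err;

	tfm = crypto_alloc_hash("hmac(sha1)", 0, CRYPTO_ALG_ASYNC);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_hash_setkey(tfm, key, keylen);
	if (!err) {
		desc.tfm = tfm;
		desc.flags = 0;
		err = crypto_hash_digest(&desc, sg, nbytes, out);
	}
	crypto_free_hash(tfm);
	return err;
}

Replacing this with a DSA/RSA verify step (and only the public key in the
kernel) is exactly the part we still have to do.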
Ciao,
Alexander Wuerstlein.
On 6/25/07, Alexander Wuerstlein
<[email protected]> wrote:
> On 070622 21:40, Satyam Sharma <[email protected]> wrote:
> > [...]
> > But first: Have you checked the digsig project? It's been doing
> > (for some time) what your current patchset proposes -- and
> > it uses public key cryptosystems for the key management,
> > which is decidedly better than using secret-keyed hashes
> > (HMAC, XCBC). Also, digsig aims to protect executable
> > binaries in general, and not just suid / sgid ones.
>
> We had not heard about digsig before, thanks for pointing it out. After a
> short look over the source (correct me if I'm wrong): The most important
> difference between our project and digsig is that digsig relies on storing
> signatures inside the ELF file structure. Therefore a hand-made binary loader or
> simply COFF binaries could be used to circumvent digsig.
Yes, that's correct.
> We decided against
> altering the file itself for that and some other reasons.
> The limitation to suid/sgid was only due to a limited amount of time we had for
> implementing our patch. For the future we are planning further uses like
> setting capabilities only for signed binaries.
Ok, effectively what you have there is a signature on an entire file stored in
one of its extended attributes, so I suspect you could think of a few other
applications for something like this too.
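For instance, I'd presume the exec-time check in your patch boils down to
something shaped like the sketch below -- an LSM bprm hook that fetches the
extended attribute and compares it against a freshly computed digest of the
file (the attribute name and the digest helper are made up here, this is only
to pin down *where* the verification sits):

/* Sketch of the verification hook -- hypothetical names throughout. */
#include <linux/binfmts.h>
#include <linux/fs.h>
#include <linux/xattr.h>
#include <linux/string.h>

#define SNS_XATTR   "security.sns_signature"	/* made-up attribute name */
#define SNS_SIG_LEN 20				/* e.g. HMAC-SHA1 digest size */

/* assumed helper: digest of the whole file with the built-in key */
extern int sns_digest_file(struct file *file, u8 *out);

static int sns_bprm_check_security(struct linux_binprm *bprm)
{
	struct dentry *dentry = bprm->file->f_path.dentry;
	u8 stored[SNS_SIG_LEN], computed[SNS_SIG_LEN];
	ssize_t len;

	/* only suid/sgid binaries are covered for now */
	if (!(dentry->d_inode->i_mode & (S_ISUID | S_ISGID)))
		return 0;

	len = vfs_getxattr(dentry, SNS_XATTR, stored, sizeof(stored));
	if (len != SNS_SIG_LEN)
		return -EPERM;		/* unsigned suid binary: refuse to exec */

	if (sns_digest_file(bprm->file, computed))
		return -EPERM;

	return memcmp(stored, computed, SNS_SIG_LEN) ? -EPERM : 0;
}

With the current build-time secret key, of course, anything that can write
both the file and the xattr defeats this, which is why the key management
points above matter so much.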
> > Second: Can we have some discussion on the security model /
> > threat model / trust model / cryptographic key management
> > scheme of your signing mechanism? [I had read through the
> > [0/4] mail you had sent yesterday, but found no relevant
> > discussion on these aspects there.]
> [...]
> An admin would verify the to-be-installed binaries (e.g. by reading the source,
> checking the distribution's package signatures), sign them in a central
> location. He then distributes those signatures along with the installation
> packages onto his computers. There should only be one key in use at a site the
> public part of which is compiled into the kernel. Any kind of chain-of-trust
> should be handled in userspace by signing or not signing with the site-wide
> key depending on the earlier signatures in the chain.
Ok, so:
1. Admin is trusted. [ This need not mean the same as: "superuser
_account_ is trusted", but let's stay in the real world for now. ]
2. Signing happens at some central, assumed-to-be-secure location (and say
the private key never leaves that central secure location). And let's say the
admin *repackages* the packages, this time such that the signed files get the
signature-carrying extended attributes with them, so the installation
automatically copies them correctly. => nothing wrong with this assumption.
3. Kernel verifies signatures at runtime. => kernel is trusted.
4. Public key needs to be *compiled into* the kernel ... so this is not getting
into mainline, but fair enough as something site administrators would patch in
and build.
5. Chain-of-trust handled in userspace. => userspace is trusted.
Let me know if I got the trust model / key management wrong.
> So much for the initial idea. Perhaps it would be useful to have more than one
> key, or some more complex scheme for obtaining the keys and checking their
> validity. But as all of this would need to be part of the kernel, we decided to
> keep it as simple as possible; anything complex is better and more flexibly
> done in userspace.
Well, if you're trusting (privileged) userspace already, I'm suddenly not so
sure what new things this patchset brings to the table in the first place ...
could you also describe the attack vectors / threats that you had in mind that
get blocked with the proposed scheme?
> > From the patchset, it appears you use a *common* secret key
> > for _all_ signed binaries, and it is set at kernel build-time itself:
> > [...]
> > Anyway, this is *totally* insecure and broken. Do you realize anybody
> > who lays hands on the kernel image can now _trivially_ extract the
> > should-have-been-a-secret key from it and use it to sign his own
> > binaries?
>
> We do realize that this is really really ugly, broken and nasty and nobody
> would or should ever want to use it for anything but playing around as it is
> atm. ;)
>
> We only used HMAC because it was already available inside the kernel; there was
> simply no time to implement real asymmetric cryptography. Of course our next
> objective is to implement that.
Have a look at the modsign (signed kernel modules) project too (just the key
management part, specifically the asymmetric crypto and DSA implementation
that they've already ported to the kernel). You could also go through the lkml
archives for whenever that was proposed for inclusion in mainline ...
Satyam
On 070626 01:56, Satyam Sharma <[email protected]> wrote:
> On 6/25/07, Alexander Wuerstlein
> <[email protected]> wrote:
>> On 070622 21:40, Satyam Sharma <[email protected]> wrote:
>> > [...]
>> We decided against
>> altering the file itself for that and some other reasons.
>> The limitation to suid/sgid was only due to a limited amount of time we
>> had for
>> implementing our patch. For the future we are planning further uses like
>> setting capabilities only for signed binaries.
>
> Ok, effectively what you have there is a signature on an entire file stored
> in one of its extended attributes, so I suspect you could think of a few other
> applications for something like this too.
Yes, for example one could sign Java's class files and employ a special trusted
Java VM which checks the signatures before execution. Also, this is a more
general case of signing kernel modules (as you mentioned below). There are
really numerous applications one could imagine; we just don't know yet which
ones are practical. We definitely appreciate further ideas on this.

Also, the signature-in-ELF approach can be used as a complement to ours: for
example, NFS is currently unable to handle real extended attributes (it only
supports POSIX ACLs), so for binaries delivered over NFS our approach wouldn't
work.
> Ok, so:
>
> 1. Admin is trusted. [ This need not mean the same as: "superuser
> _account_ is trusted", but let's stay in the real world for now. ]
> 2. Signing happens at some central, assumed-to-be-secure location (and say
> the private key never leaves that central secure location). And let's say the
> admin *repackages* the packages, this time such that the signed files get the
> signature-carrying extended attributes with them, so the installation
> automatically copies them correctly. => nothing wrong with this assumption.
> 3. Kernel verifies signatures at runtime. => kernel is trusted.
> 4. Public key needs to be *compiled into* the kernel ... so this is not
> getting into mainline, but fair enough as something site administrators would
> patch in and build.
Correct up to here.
> 5. Chain-of-trust handled in userspace. => userspace is trusted.
Nope. I unluckily wrote 'userspace' where I should have said something else:
chain-of-trust is handled in what I would label 'Adminspace' (where we do the
signing, as in points 1 and 2). There is a very small number of keys (in our
example, one) known to the kernel, and only signatures made with those keys are
trusted; they are applied to the binaries by the administrator as in your
point 2. The kernel does not and should never rely on userspace to tell it
which signatures are trustworthy. Only the administrator may do so, by means
of the keys directly compiled into the kernel.
So in short: Chain-of-trust is handled by the administrator in his secure
central location.
>> So much for the initial idea. Perhaps it would be useful to have more than
>> one key, or some more complex scheme for obtaining the keys and checking
>> their validity. But as all of this would need to be part of the kernel, we
>> decided to keep it as simple as possible; anything complex is better and
>> more flexibly done in userspace.
>
> Well, if you're trusting (privileged) userspace already, I'm suddenly not so
> sure what new things this patchset brings to the table in the first place
> ...
We do not trust any userspace application, see above.
> could you also describe the attack vectors / threats that you had in mind
> that get blocked with the proposed scheme?
We focus on attacks where an attacker may alter some executable file, for
example by altering a mounted NFS share, or by manipulating disk content by
simply pulling a disk, mounting it and writing to it, etc.

This relies on the kernel being trustworthy, of course, so one would need to
take special measures to protect the kernel image from being similarly
altered. One (somewhat not-so-secure) method would be supplying kernel images
via PXE and forbidding local booting; another measure would be using a TPM
and an appropriate bootloader to check the kernel for unwanted modifications.
> Have a look at the modsign (signed kernel modules) project too (just the key
> management part, specifically the asymmetric crypto and DSA implementation
> that they've already ported to the kernel). You could also go through the
> lkml archives for whenever that was proposed for inclusion in mainline ...
We already thought about that. Using some existing code is definitely preferable
to inventing DSA again :)
Ciao,
Alexander Wuerstlein.
On 6/26/07, Alexander Wuerstlein
<[email protected]> wrote:
> [...]
> Nope. I unluckily wrote 'userspace' where I should have said something else:
> chain-of-trust is handled in what I would label 'Adminspace' (where we do the
> signing, as in points 1 and 2). There is a very small number of keys (in our
> example, one) known to the kernel, and only signatures made with those keys are
> trusted; they are applied to the binaries by the administrator as in your
> point 2. The kernel does not and should never rely on userspace to tell it
> which signatures are trustworthy. Only the administrator may do so, by means
> of the keys directly compiled into the kernel.
>
> So in short: Chain-of-trust is handled by the administrator in his secure
> central location.
Ok, so the "trust chain" you're talking about is simply the decision of the
admin to compile-in the (verified and trusted) public keys of known trusted
entities into the kernel at build time. That is not really scalable, but I guess
you might just as well impose such a restriction for sake of simplicity.
[ I initially thought a scenario where a given binary is signed by an
entity whose
corresponding public key is _not_ present in the kernel, but who does possess
a signature -- over its name, id and public key -- by another entity whose
corresponding public key _is_ built into the kernel). Then at the time of
verification there's really no other alternative to *build* the entire
chain at the
_point of verification_ (in-kernel) itself ... but this obviously
introduces huge and
ugly complexities that you'd have a hard time bringing into the kernel :-) That
"signature over name, id and public key" could be a _certificate_ (if you care
about following standards), and building their chains in-kernel ... well. But if
you really want to differentiate between kernel and userspace from security
perspective, and want to give such functionality, I don't see any easy
way out. ]
> >> So much for the initial idea. Perhaps it would be useful to have more than
> >> one key, or some more complex scheme for obtaining the keys and checking
> >> their validity. But as all of this would need to be part of the kernel, we
> >> decided to keep it as simple as possible; anything complex is better and
> >> more flexibly done in userspace.
> >
> > Well, if you're trusting (privileged) userspace already, I'm suddenly not so
> > sure what new things this patchset brings to the table in the first place
> > ...
>
> We do not trust any userspace application, see above.
>
> > could you also describe the attack vectors / threats that you had in mind
> > that get blocked with the proposed scheme?
>
> We focus on attacks where an attacker may alter some executable file, for
> example by altering a mounted NFS share, or by manipulating disk content by
> simply pulling a disk, mounting it and writing to it, etc.
>
> This relies on the kernel being trustworthy, of course, so one would need to
> take special measures to protect the kernel image from being similarly
> altered. One (somewhat not-so-secure) method would be supplying kernel images
> via PXE and forbidding local booting; another measure would be using a TPM
> and an appropriate bootloader to check the kernel for unwanted modifications.
Kernel-userspace differentiation from a security perspective is always tricky
(which is why I pointed you to the discussions that come up whenever such
things as asymmetric crypto, modsign etc. are proposed to be merged). It's
definitely not impossible to compromise a _running_ kernel from privileged
userspace, if it really wants to do so ...
Satyam