This same discussion is taking place in a few forums. Are you opposed to
creating a security contact point for the kernel for people to contact
with potential security issues? This is standard operating procedure
for many projects and complies with RFPolicy.
http://www.wiretrip.net/rfp/policy.html
Right now most things come in via 1) lkml, 2) maintainers, 3) vendor-sec.
It would be nice to have a more centralized place for all of this
information to help track it, make sure things don't fall through
the cracks, and make sure of timely fix and disclosure.
In addition, I think it's worth considering keeping the current stable
kernel version moving forward (point releases a la 2.6.x.y) for critical
(mostly security) bugs. If nothing else, I can provide a subset of -ac
patches that are only that.
I volunteer to help with _all_ of the above. It's what I'm here for.
Use me, abuse me ;-)
thanks,
-chris
===== MAINTAINERS 1.269 vs edited =====
--- 1.269/MAINTAINERS 2005-01-10 17:29:35 -08:00
+++ edited/MAINTAINERS 2005-01-11 13:29:23 -08:00
@@ -1959,6 +1959,11 @@ M: [email protected]
W: http://www.weinigel.se
S: Supported
+SECURITY CONTACT
+P: Security Officers
+M: kernel-security@{vger.kernel.org, osdl.org, wherever}
+S: Supported
+
SELINUX SECURITY MODULE
P: Stephen Smalley
M: [email protected]
===== REPORTING-BUGS 1.2 vs edited =====
--- 1.2/REPORTING-BUGS 2002-02-04 23:39:13 -08:00
+++ edited/REPORTING-BUGS 2005-01-10 15:35:10 -08:00
@@ -16,6 +16,9 @@ code relevant to what you were doing. If
describe how to recreate it. That is worth even more than the oops itself.
The list of maintainers is in the MAINTAINERS file in this directory.
+ If it is a security bug, please copy the Security Contact listed
+in the MAINTAINERS file. They can help coordinate bugfix and disclosure.
+
If you are totally stumped as to whom to send the report, send it to
[email protected]. (For more information on the linux-kernel
mailing list see http://www.tux.org/lkml/).
On Wed, 12 Jan 2005, Chris Wright wrote:
>
> This same discussion is taking place in a few forums. Are you opposed to
> creating a security contact point for the kernel for people to contact
> with potential security issues? This is standard operating procedure
> for many projects and complies with RFPolicy.
I wouldn't mind, and it sounds like a good thing to have. The _only_
requirement that I have is that there be no stupid embargo on the list.
Any list with a time limit (vendor-sec) I will not have anything to do
with.
If that means that you can get on the list by invitation only, that's
fine.
Linus
Hi Chris!
On Wed, Jan 12, 2005 at 09:48:07AM -0800, Chris Wright wrote:
> This same discussion is taking place in a few forums. Are you opposed to
> creating a security contact point for the kernel for people to contact
> with potential security issues? This is standard operating procedure
> for many projects and complies with RFPolicy.
>
> http://www.wiretrip.net/rfp/policy.html
>
> Right now most things come in via 1) lkml, 2) maintainers, 3) vendor-sec.
> It would be nice to have a more centralized place for all of this
> information to help track it, make sure things don't fall through
> the cracks, and make sure of timely fix and disclosure.
I very much like the idea, and I also think an "official" list of kernel security
issues and their respective fixes is very much required, since not every Linux
distribution can be expected to have kernel developers working for them, going
through the whole changelogs looking for security issues, which is just silly.
Disclosing and keeping track of security issues is a job for the Linux kernel team.
Alan used to list the security fixes between each v2.2 release; v2.4 has never
had such an official list (I've been trying to add CAN numbers to the changelogs
lately), and neither has v2.6. It's not a practical thing for Linus/Andrew to do;
it's a lot of work.
It would be interesting to have all developers know about such an initiative
and have them send in their security fixes to be logged and disclosed - it's
obviously impossible for you to read all changes in the kernel. And to have
Linus/Andrew advocate in favour of it.
IMO such an initiative needs to be known by all contributors for
it to be effective.
> In addition, I think it's worth considering keeping the current stable
> kernel version moving forward (point releases a la 2.6.x.y) for critical
> (mostly security) bugs. If nothing else, I can provide a subset of -ac
> patches that are only that.
Yes, -ac has been playing that role. The general consensus is that
such point releases are required.
Linus doesn't do it because it is too much extra work for him (and he is focused
on other things), so I'm glad you have stepped up.
> I volunteer to help with _all_ of the above. It's what I'm here for.
> Use me, abuse me ;-)
You've been doing a lot of security work/auditing in the kernel for a long time,
which fits the job position nicely.
I'm willing to help.
* Linus Torvalds ([email protected]) wrote:
> On Wed, 12 Jan 2005, Chris Wright wrote:
> > This same discussion is taking place in a few forums. Are you opposed to
> > creating a security contact point for the kernel for people to contact
> > with potential security issues? This is standard operating procedure
> > for many projects and complies with RFPolicy.
>
> I wouldn't mind, and it sounds like a good thing to have. The _only_
> requirement that I have is that there be no stupid embargo on the list.
> Any list with a time limit (vendor-sec) I will not have anything to do
> with.
Right, I know you don't like the embargo stuff.
> If that means that you can get only the list by invitation-only, that's
> fine.
Opinions on where to set it up? vger, osdl, ...?
thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
* Marcelo Tosatti ([email protected]) wrote:
> On Wed, Jan 12, 2005 at 09:48:07AM -0800, Chris Wright wrote:
> > Right now most things come in via 1) lkml, 2) maintainers, 3) vendor-sec.
> > It would be nice to have a more centralized place for all of this
> > information to help track it, make sure things don't fall through
> > the cracks, and make sure of timely fix and disclosure.
>
> I very much like the idea, and I also think an "official" list of kernel security
> issues and their respective fixes is very much required, since not every Linux
> distribution can be expected to have kernel developers working for them, going
> through the whole changelogs looking for security issues, which is just silly.
>
> Disclosing and keeping track of security issues is a job for the Linux kernel team.
Yes, I agree.
> Alan used to list the security fixes between each v2.2 release; v2.4 has never
> had such an official list (I've been trying to add CAN numbers to the changelogs
> lately), and neither has v2.6. It's not a practical thing for Linus/Andrew to do;
> it's a lot of work.
>
> It would be interesting to have all developers know about such an initiative
> and have them send in their security fixes to be logged and disclosed - it's
> obviously impossible for you to read all changes in the kernel. And to have
> Linus/Andrew advocate in favour of it.
>
> IMO such an initiative needs to be known by all contributors for
> it to be effective.
Indeed, it would be most effective as a collective effort. Of course,
we'll never reach 100%, but we could do better than we do now.
> > In addition, I think it's worth considering keeping the current stable
> > kernel version moving forward (point releases a la 2.6.x.y) for critical
> > (mostly security) bugs. If nothing else, I can provide a subset of -ac
> > patches that are only that.
>
> Yes, -ac has been playing that role. The general consensus is that
> such point releases are required.
>
> Linus doesn't do it because it is too much extra work for him (and he is focused
> on other things), so I'm glad you have stepped up.
>
> > I volunteer to help with _all_ of the above. It's what I'm here for.
> > Use me, abuse me ;-)
>
> You've been doing a lot of security work/auditing in the kernel for a long time,
> which fits the job position nicely.
>
> I'm willing to help.
Great, thanks!
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
On Wed, Jan 12, 2005 at 10:05:34AM -0800, Linus Torvalds wrote:
>
>
> On Wed, 12 Jan 2005, Chris Wright wrote:
> >
> > This same discussion is taking place in a few forums. Are you opposed to
> > creating a security contact point for the kernel for people to contact
> > with potential security issues? This is standard operating procedure
> > for many projects and complies with RFPolicy.
>
> I wouldn't mind, and it sounds like a good thing to have. The _only_
> requirement that I have is that there be no stupid embargo on the list.
> Any list with a time limit (vendor-sec) I will not have anything to do
> with.
>
> If that means that you can get on the list by invitation only, that's
> fine.
So you would be for a closed list, but there would be no incentive at
all for anyone on the list to keep the contents of what was posted to
the list closed at any time? That goes against the above stated goal of
complying with RFPolicy.
I understand your dislike of having to wait once you know of a security
issue before making the fix public, but how should distros coordinate
fixes in any other way?
thanks,
greg k-h
On Wed, 12 Jan 2005, Chris Wright wrote:
>
> Right, I know you don't like the embargo stuff.
I'd be very happy with a "private" list in the sense that people wouldn't
feel pressured to fix it that day, and I think it makes sense to have some
policy where we don't necessarily make them public immediately in order to
give people the time to discuss them.
But it should be very clear that no entity (neither the reporter nor any
particular vendor/developer) can require silence, or ask for anything more
than "let's find the right solution". A purely _technical_ delay, in other
words, with no politics or other issues involved.
Otherwise it just becomes politics: you end up having security firms that
want a certain date because they want a PR blitz, and you end up having
vendors who want a certain date because they have release issues.
Does that mean that vendor-sec would end up being used for some things,
where people _want_ the politics and jockeying for position? Probably.
But having a purely technical alternative would be wonderful.
> > If that means that you can get on the list by invitation only, that's
> > fine.
>
> Opinions on where to set it up? vger, osdl, ...?
I don't personally think it matters. Especially if we make it very clear
that it's purely technical, and no vendor politics can enter into it.
Whatever ends up being easiest.
Linus
On Wed, 12 Jan 2005, Greg KH wrote:
>
> So you would be for a closed list, but there would be no incentive at
> all for anyone on the list to keep the contents of what was posted to
> the list closed at any time? That goes against the above stated goal of
> complying with RFPolicy.
There's already vendor-sec. I assume they follow RFPolicy already. If it's
just another vendor-sec, why would you put up a new list for it?
In other words, if you allow embargoes and vendor politics, what would the
new list buy that isn't already in vendor-sec?
When I saw how vendor-sec worked, I decided I will never be on an embargo
list. Ever. That's not to say that such a list can't work - I just
personally refuse to have anything to do with one. Whether that matters or
not is obviously an open question.
Linus
* Linus Torvalds ([email protected]) wrote:
> On Wed, 12 Jan 2005, Chris Wright wrote:
> >
> > Right, I know you don't like the embargo stuff.
>
> I'd be very happy with a "private" list in the sense that people wouldn't
> feel pressured to fix it that day, and I think it makes sense to have some
> policy where we don't necessarily make them public immediately in order to
> give people the time to discuss them.
That's what I figured you meant.
> But it should be very clear that no entity (neither the reporter nor any
> particular vendor/developer) can require silence, or ask for anything more
> than "let's find the right solution". A purely _technical_ delay, in other
> words, with no politics or other issues involved.
Agreed.
> Otherwise it just becomes politics: you end up having security firms that
> want a certain date because they want a PR blitz, and you end up having
> vendors who want a certain date because they have release issues.
There is value in coordinating with vendors, namely to keep them from
being caught with pants down. But vendor-sec already does this part
well enough.
> Does that mean that vendor-sec would end up being used for some things,
> where people _want_ the politics and jockeying for position? Probably.
> But having a purely technical alternative would be wonderful.
>
> > > If that means that you can get on the list by invitation only, that's
> > > fine.
> >
> > Opinions on where to set it up? vger, osdl, ...?
>
> I don't personally think it matters. Especially if we make it very clear
> that it's purely technical, and no vendor politics can enter into it.
> Whatever ends up being easiest.
Well, easiest for me is here ;-)
thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
On Wed, Jan 12, 2005 at 11:01:42AM -0800, Linus Torvalds wrote:
>
>
> On Wed, 12 Jan 2005, Greg KH wrote:
> >
> > So you would be for a closed list, but there would be no incentive at
> > all for anyone on the list to keep the contents of what was posted to
> > the list closed at any time? That goes against the above stated goal of
> > complying with RFPolicy.
>
> There's already vendor-sec. I assume they follow RFPolicy already. If it's
> just another vendor-sec, why would you put up a new list for it?
I think the issue is that there is no main "security" contact for the
kernel. If we want to make vendor-sec that contact, fine, but we better
warn the vendor-sec people :)
> In other words, if you allow embargoes and vendor politics, what would the
> new list buy that isn't already in vendor-sec?
vendor-sec handles a lot of other stuff that is not kernel related
(every package that is in a distro). This would only be for the kernel.
thanks,
greg k-h
* Greg KH ([email protected]) wrote:
> On Wed, Jan 12, 2005 at 11:01:42AM -0800, Linus Torvalds wrote:
> > On Wed, 12 Jan 2005, Greg KH wrote:
> > > So you would be for a closed list, but there would be no incentive at
> > > all for anyone on the list to keep the contents of what was posted to
> > > the list closed at any time? That goes against the above stated goal of
> > > complying with RFPolicy.
> >
> > There's already vendor-sec. I assume they follow RFPolicy already. If it's
> > just another vendor-sec, why would you put up a new list for it?
>
> I think the issue is that there is no main "security" contact for the
> kernel. If we want to make vendor-sec that contact, fine, but we better
> warn the vendor-sec people :)
Yes. And I think we should have our own contact.
> > In other words, if you allow embargoes and vendor politics, what would the
> > new list buy that isn't already in vendor-sec?
>
> vendor-sec handles a lot of other stuff that is not kernel related
> (every package that is in a distro). This would only be for the kernel.
Yes, and IMO, it could inform vendor-sec.
thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
On Wed, Jan 12, 2005 at 11:01:42AM -0800, Linus Torvalds wrote:
>
>
> On Wed, 12 Jan 2005, Greg KH wrote:
> >
> > So you would be for a closed list, but there would be no incentive at
> > all for anyone on the list to keep the contents of what was posted to
> > the list closed at any time? That goes against the above stated goal of
> > complying with RFPolicy.
>
> There's already vendor-sec. I assume they follow RFPolicy already. If it's
> just another vendor-sec, why would you put up a new list for it?
>
> In other words, if you allow embargoes and vendor politics, what would the
> new list buy that isn't already in vendor-sec?
>
> When I saw how vendor-sec worked, I decided I will never be on an embargo
> list. Ever. That's not to say that such a list can't work - I just
> personally refuse to have anything to do with one. Whether that matters or
> not is obviously an open question.
Of course it matters, Linus - vendors need time to prepare their updates. You
can't ignore that, and you can't "have nothing to do with it".
You seem to dislike the way embargoes have been done on vendor-sec; fine. They can
be done in a different way, but you have to understand that you and Andrew
need to follow and agree with the embargo.
How do you feel about having short fixed-time embargoes (let's say, 3 or 4 days)?
The only reason for this is to have "time for the vendors to catch up", which
can be defined by the kernel security office. Nothing more - no vendor politics
involved.
It is a simple matter of synchronization.
* Chris Wright:
> This same discussion is taking place in a few forums. Are you opposed to
> creating a security contact point for the kernel for people to contact
> with potential security issues?
Would this be anything but a secretary in front of vendor-sec?
> http://www.wiretrip.net/rfp/policy.html
>
> Right now most things come in via 1) lkml, 2) maintainers, 3) vendor-sec.
> It would be nice to have a more centralized place for all of this
> information to help track it, make sure things don't fall through
> the cracks, and make sure of timely fix and disclosure.
You mean, like issuing *security* *advisories*? *gasp*
I think this is an absolute must (and we are certainly not alone!),
but this project does not depend on the way the initial
contact is handled.
> + If it is a security bug, please copy the Security Contact listed
> +in the MAINTAINERS file. They can help coordinate bugfix and disclosure.
If this is about delayed disclosure, a few more details are required,
IMHO. Otherwise, submitters will continue to use their
well-established channels. Most people hesitate before posting stuff
they view as sensitive to a mailing list.
* Greg KH:
>> In other words, if you allow embargoes and vendor politics, what would the
>> new list buy that isn't already in vendor-sec.
>
> vendor-sec handles a lot of other stuff that is not kernel related
> (every package that is in a distro.) This would only be for the kernel.
I don't know that much about vendor-sec, but wouldn't the kernel list
contain roughly the same set of people? vendor-sec also has people
from the *BSDs, I believe, but they should probably be notified of Linux
issues as well (often, similar mistakes are made in different
implementations).
If the readership is the same, it doesn't make sense to run two lists,
especially because it's not a normal list and you have to be capable
of dealing with the vetting.
I agree that embargoed lists are nasty, but sometimes, you have to
make personal sacrifices to further the cause. 8-(
On Wed, 12 Jan 2005, Marcelo Tosatti wrote:
>
> How do you feel about having short fixed-time embargoes (let's say, 3 or 4 days)?
Please realize that I don't have any problem with a short-term embargo per
se, what I have problems with is the _politics_ that it causes. For
example, I do _not_ want this to become a
"vendor-sec got the information five weeks ago, and decided to embargo
until day X, and then because they knew of the 4-day policy of the
kernel security list, they released it to the kernel security list on
day X-4"
See? That is playing politics with a security list. That's the part I
don't want to have anything to do with. If somebody did that to me, I'd
feel pissed off like hell, and I'd say "screw them".
But in the absence of politics, I'd _happily_ have a self-imposed embargo
that is limited to some reasonable timeframe (and "reasonable" is
definitely counted in days, not weeks. And absolutely _not_ in months,
like apparently sometimes happens on vendor-sec).
So if the embargo time starts ticking from _first_ report, I'd personally
be perfectly happy with a policy of, say "5 working days" (aka one week),
or until it was made public somewhere else.
IOW, if it was released on vendor-sec first, vendor-sec could _not_ then
try to time the technical list (unless they do so in a very timely manner
indeed).
I'm not saying that we'd _have_ to go public after five days. I'm saying
that after that, there would be nothing holding it back (but maybe the
technical discussion on how to _fix_ it is still on-going, and that might
make people just not announce it until they're ready).
Linus
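[To make the clock above concrete, here is a minimal sketch in Python of the
policy as described: the countdown starts at the first report, runs for five
working days, and collapses as soon as the issue is public anywhere else. The
function name and interface are invented for illustration; nothing like this
was actually proposed as code.]

    from datetime import date, timedelta

    def disclosure_deadline(first_report, public_elsewhere=None, working_days=5):
        """Earliest date after which nothing holds the fix back.

        The clock starts at the _first_ report, so a list that sat on the
        bug for weeks cannot restart the countdown by re-reporting it.
        """
        deadline = first_report
        remaining = working_days
        while remaining > 0:
            deadline += timedelta(days=1)
            if deadline.weekday() < 5:      # Mon..Fri count as working days
                remaining -= 1
        if public_elsewhere is not None:
            # Once the issue is public anywhere, the embargo is moot.
            deadline = min(deadline, public_elsewhere)
        return deadline

    # Example: reported Wednesday 2005-01-12 -> embargo lapses 2005-01-19.
    print(disclosure_deadline(date(2005, 1, 12)))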
On Wed, 12 Jan 2005, Linus Torvalds wrote:
>
> So if the embargo time starts ticking from _first_ report, I'd personally
> be perfectly happy with a policy of, say "5 working days" (aka one week),
> or until it was made public somewhere else.
Btw, the only thing I care about is the embargo on the _fix_.
If a bug reporter is a security house, and wants to put a longer embargo
on announcing the bug itself, or on some other aspect of the issue (ie
known exploits etc), and wants to make sure that they get the credit and
they get to be the first ones to announce the problem, that's fine by me.
The only thing I really care about is that we can serve the people who
depend on us by giving them source code that is as bug-free and secure as
we can make it. If that means that we should make the changelogs be a bit
less verbose because we don't want to steal the thunder from the people
who found the problem, that's fine.
One of the problems with the embargo thing has been exactly the fact that
people couldn't even find bugs (or just uglinesses) in the fixes, because
they were kept under wraps until the "proper date".
Linus
* Linus Torvalds ([email protected]) wrote:
> On Wed, 12 Jan 2005, Marcelo Tosatti wrote:
> > How do you feel about having short fixed-time embargoes (let's say, 3 or 4 days)?
>
> Please realize that I don't have any problem with a short-term embargo per
> se, what I have problems with is the _politics_ that it causes. For
> example, I do _not_ want this to become a
>
> "vendor-sec got the information five weeks ago, and decided to embargo
> until day X, and then because they knew of the 4-day policy of the
> kernel security list, they released it to the kernel security list on
> day X-4"
I agree, and in most of these cases long delays are due to things
falling through the cracks or not getting adequate cycles. Not so much
politics.
> See? That is playing politics with a security list. That's the part I
> don't want to have anything to do with. If somebody did that to me, I'd
> feel pissed off like hell, and I'd say "screw them".
>
> But in the absence of politics, I'd _happily_ have a self-imposed embargo
> that is limited to some reasonable timeframe (and "reasonable" is
> definitely counted in days, not weeks. And absolutely _not_ in months,
> like apparently sometimes happens on vendor-sec).
>
> So if the embargo time starts ticking from _first_ report, I'd personally
> be perfectly happy with a policy of, say "5 working days" (aka one week),
> or until it was made public somewhere else.
That's more or less my take. Timely response to reporter, timely
debugging/bug fixing and timely disclosure.
> IOW, if it was released on vendor-sec first, vendor-sec could _not_ then
> try to time the technical list (unless they do so in a very timely manner
> indeed).
What about the reverse, and informing vendors? This is typical...project
security contact gets report, figures out bug, works with vendor-sec on
release date. In my experience, the long cycles rarely come from that
final negotiation. It's usually not much of a negotiation, rather a
"heads-up", "thanks".
The two goals - 1) timely response, fix, and disclosure; and 2) not leaving
vendors with their pants down - don't have to be mutually exclusive.
thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
On Wed, 12 Jan 2005, Linus Torvalds wrote:
>
> On Wed, 12 Jan 2005, Chris Wright wrote:
> >
> > Right, I know you don't like the embargo stuff.
>
> I'd be very happy with a "private" list in the sense that people wouldn't
> feel pressured to fix it that day, and I think it makes sense to have some
> policy where we don't necessarily make them public immediately in order to
> give people the time to discuss them.
>
> But it should be very clear that no entity (neither the reporter nor any
> particular vendor/developer) can require silence, or ask for anything more
> than "let's find the right solution". A purely _technical_ delay, in other
> words, with no politics or other issues involved.
>
Being firmly in the full disclosure camp, I hope you intend to stick to
that "no entity (neither the reporter nor any particular vendor/developer)
can require silence" bit. If you do, and if embargoes are kept to a short
number of days, then I think such a list would probably be a good idea. It
would be a good compromise between full disclosure from day one and things
being kept secret and out of view for months.
Just my 0.02euro.
--
Jesper Juhl
On Wed, Jan 12, 2005 at 12:27:11PM -0800, Chris Wright wrote:
> * Linus Torvalds ([email protected]) wrote:
> > But in the absence of politics, I'd _happily_ have a self-imposed embargo
> > that is limited to some reasonable timeframe (and "reasonable" is
> > definitely counted in days, not weeks. And absolutely _not_ in months,
> > like apparently sometimes happens on vendor-sec).
> >
> > So if the embargo time starts ticking from _first_ report, I'd personally
> > be perfectly happy with a policy of, say "5 working days" (aka one week),
> > or until it was made public somewhere else.
>
> That's more or less my take. Timely response to reporter, timely
> debugging/bug fixing and timely disclosure.
That sounds sane to me too.
> > IOW, if it was released on vendor-sec first, vendor-sec could _not_ then
> > try to time the technical list (unless they do so in a very timely manner
> > indeed).
>
> What about the reverse, and informing vendors? This is typical...project
> security contact gets report, figures out bug, works with vendor-sec on
> release date. In my experience, the long cycles rarely come from that
> final negotiation. It's usually not much of a negotiation, rather a
> "heads-up", "thanks".
Vendors should also cc: the kernel-security list/contact at the same
time they would normally contact vendor-sec. I don't see a problem with
that happening, and it would save the people on vendor-sec from having
to wade through a lot of Linux kernel specific stuff at times.
> The two goals - 1) timely response, fix, and disclosure; and 2) not leaving
> vendors with their pants down - don't have to be mutually exclusive.
I agree, having pants down when you don't want them to be isn't a good
thing :)
thanks,
greg k-h
On Wed, Jan 12, 2005 at 12:00:52PM -0800, Linus Torvalds wrote:
> > How do you feel about having short fixed-time embargoes (let's say, 3 or 4 days)?
> Please realize that I don't have any problem with a short-term embargo per
> se, what I have problems with is the _politics_ that it causes. For
> example, I do _not_ want this to become a
>
> "vendor-sec got the information five weeks ago, and decided to embargo
> until day X, and then because they knew of the 4-day policy of the
> kernel security list, they released it to the kernel security list on
> day X-4"
>
> See? That is playing politics with a security list. That's the part I
> don't want to have anything to do with. If somebody did that to me, I'd
> feel pissed off like hell, and I'd say "screw them".
Who would be on the kernel security list if it's to be invite-only?
Is this just going to be a handful of folks, or do you foresee it
being the same kernel folks that are currently on vendor-sec?
My first thought was 'Chris will forward the output of [email protected]
to vendor-sec, and we'll get a chance to get updates built'. But you
seem dead-set against any form of delayed disclosure, which has the
effect of catching us all with our pants down when you push out
a new kernel fixing a hole and we don't have updates ready.
At this time, those with bad intent rub their hands with glee,
0wning boxes at will whilst those of us responsible for vendor
kernels run like headless chickens trying to get updates out,
which can be a pain in the ass if $vendor is supporting some ancient
release which is afflicted by the same bug.
If you turned the current model upside down and vendor-sec learned
about issues from [email protected] a few days beforehand, it'd at
least give us *some* time, as opposed to just springing stuff
on us without warning.
Thoughts?
Dave
On Wed, Jan 12, 2005 at 12:00:52PM -0800, Linus Torvalds wrote:
>
>
> On Wed, 12 Jan 2005, Marcelo Tosatti wrote:
> >
> > How do you feel about having short fixed-time embargoes (let's say, 3 or 4 days)?
>
> Please realize that I don't have any problem with a short-term embargo per
> se, what I have problems with is the _politics_ that it causes. For
> example, I do _not_ want this to become a
>
> "vendor-sec got the information five weeks ago, and decided to embargo
> until day X, and then because they knew of the 4-day policy of the
> kernel security list, they released it to the kernel security list on
> day X-4"
>
> See? That is playing politics with a security list. That's the part I
> don't want to have anything to do with. If somebody did that to me, I'd
> feel pissed off like hell, and I'd say "screw them".
An important thing is that you, Mr. Torvalds, agree with the embargo, which you
never did before - you always applied corrections for security bugs without being
concerned about a disclosure date agreement (and you have your own reasons and
arguments for that, OK).
That makes vendor-sec etc. uncomfortable submitting the information to you.
Great to hear you think differently now and are willing to agree on a reasonable
embargo period.
The kernel security list must be higher in the hierarchy than vendor-sec.
Any information sent to vendor-sec must be sent immediately to the kernel
security list and discussed there.
I'm sure one week is enough for vendors to prepare updates, and I'm sure they
will be fine with it.
> But in the absence of politics, I'd _happily_ have a self-imposed embargo
> that is limited to some reasonable timeframe (and "reasonable" is
> definitely counted in days, not weeks. And absolutely _not_ in months,
> like apparently sometimes happens on vendor-sec).
We all agree there is no good reason to embargo a kernel bug for more than
one week, given that the fix is known and settled.
> So if the embargo time starts ticking from _first_ report, I'd personally
> be perfectly happy with a policy of, say "5 working days" (aka one week),
> or until it was made public somewhere else.
>
>
> IOW, if it was released on vendor-sec first, vendor-sec could _not_ then
> try to time the technical list (unless they do so in a very timely manner
> indeed).
>
> I'm not saying that we'd _have_ to go public after five days. I'm saying
> that after that, there would be nothing holding it back (but maybe the
> technical discussion on how to _fix_ it is still on-going, and that might
> make people just not announce it until they're ready).
Wonderful.
On Wed, Jan 12, 2005 at 10:57:25AM -0800, Linus Torvalds wrote:
> On Wed, 12 Jan 2005, Chris Wright wrote:
> > Opinions on where to set it up? vger, osdl, ...?
>
> I don't personally think it matters. Especially if we make it very clear
> that it's purely technical, and no vendor politics can enter into it.
I think vger fits that bill, if for no other reason than to keep the
"osdl is taking over the kernel" rumors at bay :)
That is, if the vger postmasters agree.
thanks,
greg k-h
On Wed, Jan 12, 2005 at 12:28:14PM -0800, Linus Torvalds wrote:
>
>
> On Wed, 12 Jan 2005, Linus Torvalds wrote:
> >
> > So if the embargo time starts ticking from _first_ report, I'd personally
> > be perfectly happy with a policy of, say "5 working days" (aka one week),
> > or until it was made public somewhere else.
>
> Btw, the only thing I care about is the embargo on the _fix_.
>
> If a bug reporter is a security house, and wants to put a longer embargo
> on announcing the bug itself, or on some other aspect of the issue (ie
> known exploits etc), and wants to make sure that they get the credit and
> they get to be the first ones to announce the problem, that's fine by me.
>
> The only thing I really care about is that we can serve the people who
> depend on us by giving them source code that is as bug-free and secure as
> we can make it. If that means that we should make the changelogs be a bit
> less verbose because we don't want to steal the thunder from the people
> who found the problem, that's fine.
I'm not a big fan of hiding security fixes - having a defined and clear
list of security issues is important. Moreover, the code itself is verbose
enough for some people.
If you release the code earlier than the embargo date - even with "non-verbose
changelogs" - in order to best serve the people who depend on us by giving them
source code that is as bug-free and secure as possible, you make the issue public.
IMO the best thing is to be very verbose about security problems - giving
credit to the people who deserve it accordingly (not stealing the thunder
from the discoverers, but rather making it more visible in the changelog who
they are).
The KSO (Kernel Security Officer, the new buzzword on the block) has to
control the embargo date and be strict about it.
> One of the problems with the embargo thing has been exactly the fact that
> people couldn't even find bugs (or just uglinesses) in the fixes, because
> they were kept under wraps until the "proper date".
Exactly, and keeping things under wraps means an "obscure, unclear list of
security issues". We want it the other way around.
* Florian Weimer ([email protected]) wrote:
> * Greg KH:
>
> >> In other words, if you allow embargoes and vendor politics, what would the
> >> new list buy that isn't already in vendor-sec?
> >
> > vendor-sec handles a lot of other stuff that is not kernel related
> > (every package that is in a distro). This would only be for the kernel.
>
> I don't know that much about vendor-sec, but wouldn't the kernel list
> contain roughly the same set of people?
No.
> vendor-sec also has people
> from the *BSDs, I believe, but they should probably be notified of Linux
> issues as well (often, similar mistakes are made in different
> implementations).
Take a look at <http://www.freebsd.org/security/index.html>. Pretty
good description. It's normal for projects to have their own security
contact to handle security issues. Once it's vetted, understood,
etc., it's normal to give vendors some heads-up.
> If the readership is the same, it doesn't make sense to run two lists,
> especially because it's not a normal list and you have to be capable
> of dealing with the vetting.
It's not the same readership.
thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
* Florian Weimer ([email protected]) wrote:
> * Chris Wright:
>
> > This same discussion is taking place in a few forums. Are you opposed to
> > creating a security contact point for the kernel for people to contact
> > with potential security issues?
>
> Would this be anything but a secretary in front of vendor-sec?
Yes, it'd be the primary contact for handling kernel security issues.
Handling vendor coordination is only one piece of handling a security
issue.
> > http://www.wiretrip.net/rfp/policy.html
> >
> > Right now most things come in via 1) lkml, 2) maintainers, 3) vendor-sec.
> > It would be nice to have a more centralized place for all of this
> > information to help track it, make sure things don't fall through
> > the cracks, and make sure of timely fix and disclosure.
>
> You mean, like issuing *security* *advisories*? *gasp*
Yes, although we're not even tracking things well, let alone issuing advisories.
> I think this is an absolute must (and we are certainly not alone!),
> but this project does not depend on the way the initial
> contact is handled.
>
> > + If it is a security bug, please copy the Security Contact listed
> > +in the MAINTAINERS file. They can help coordinate bugfix and disclosure.
>
> If this is about delayed disclosure, a few more details are required,
> IMHO. Otherwise, submitters will continue to use their
> well-established channels. Most people hesitate before posting stuff
> they view as sensitive to a mailing list.
Yes, that's the point of coordinating the fix _and_ the disclosure.
thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
Where is 2.6.10.1 with the security fix only?
I have not yet finished dealing with the TCP troubles that moving to 2.6.10
generated on my production server, and now I should apply another large set of
mainly untested patches just to fill the security hole. This just cannot be done
in a few days because in many organisations, the new kernel has to spend several
days on secondary servers before reaching the main ones.
Now assuming that I have other production servers still running older kernels,
I have no way to get the simple fix from kernel.org and backport it to 2.6.8
and 2.6.9, unless I'm a full-time kernel worker who reads all messages on the
kernel mailing list.
Basically, you are currently leaving non-distribution users out in the
cold, and this is really, really bad for the confidence we have in Linux,
so please publish a 2.6.10.1 with the short-term solution to fix the hole.
Of course this does not prevent publishing 2.6.10.2 when you find a better
solution, or using a different fix in 2.6.11, since they are not based on 2.6.10.1.
Regards,
Hubert Tonneau
PS: I believe that it would also be a very good idea, since Linux is now
expected to be a mature organisation, to automatically publish 2.6.x.y
fix-only patches for new holes for each stable kernel released less than a
year ago. This would enable smoother upgrades of highly important production
servers.
On Wed, Jan 12, 2005 at 12:27:11PM -0800, Chris Wright wrote:
> The two goals - 1) timely response, fix, and disclosure; and 2) not leaving
> vendors with their pants down - don't have to be mutually exclusive.
All vendors are normally ready way before the end of the embargo.
I would suggest that the slowest of all vendors enforce the date (i.e.
all vendors propose a date, and the latest one will be chosen, like a
reverse auction where the worst offer wins), with a maximum delay of 1 month
(or whatever else). To guarantee everyone will go as fast as possible,
the date proposed by each vendor can be published in the
final report. Just keep in mind that the more archs involved, the
more kernels have to be built and the slower a vendor will be. So a
difference of a few days just to build and test everything is very
reasonable and not significant, but this will avoid differences of
more than a week, and it'll avoid the unnecessary delay when everybody is
ready to publish but nobody can (which personally is the only thing that
would annoy me if I were a customer). This will also raise attention and
increase the pressure to get things done ASAP, since there'll be a
reward. Nothing gets done if there's no reward.
On Wed, Jan 12, 2005 at 03:53:50PM -0500, Dave Jones wrote:
>
> If you turned the current model upside down and vendor-sec learned
> about issues from [email protected] a few days beforehand, it'd at
> least give us *some* time, as opposed to just springing stuff
> on us without warning.
I think having security@ notify vendor-sec when it finds a real problem
would be a good idea, as a lot of the stuff is just sifting through to find
the root cause and fix. And if security@ still has its "5 day
countdown" type thing, that still gives you (and me) at least a few days
to run around like mad to update things, which is better than nothing :)
thanks,
greg k-h
On Wed, 12 Jan 2005, Dave Jones wrote:
>
> Who would be on the kernel security list if it's to be invite-only?
> Is this just going to be a handful of folks, or do you foresee it
> being the same kernel folks that are currently on vendor-sec?
I'd assume that it's as many as possible. The current vendor-sec kernel
people _plus_ anybody else who wants to.
> My first thought was 'Chris will forward the output of [email protected]
> to vendor-sec, and we'll get a chance to get updates built'. But you
> seem dead-set against any form of delayed disclosure, which has the
> effect of catching us all with our pants down when you push out
> a new kernel fixing a hole and we don't have updates ready.
Yes, I think delayed disclosure is broken. I think the whole notion of
"vendor update available when disclosure happens" is nothing but vendor
politics, and doesn't help _users_ one whit. The only thing it does is
allow the vendor to point fingers and say "hey, we have an update, now
it's your problem".
In reality, the user usually needs to have the update available _before_
the disclosure anyway. Preferably by _months_, not minutes.
So I think the whole vendor-sec thing is not helping users at all, it's
purely a "vendor embarassment" thing.
> If you turned the current model upside down and vendor-sec learned
> about issues from [email protected] a few days beforehand, it'd at
> least give us *some* time, as opposed to just springing stuff
> on us without warning.
I think kernel bugs should be fixed as soon as humanly possible, and _any_
delay is basically just about making excuses. And that means that as many
people as possible should know about the problem as early as possible,
because any closed list (or even just anybody sending a message to me
personally) just increases the risk of the thing getting lost and delayed
for the wrong reasons.
So I'd not personally mind some _totally_ open list. No embargo at all, no
limits on who reads it. The more, the merrier. However, I think my
personal preference is pretty extreme in one end, and I also think that
vendor-sec is extreme in the other. So there is probably some middle
ground.
Will it make everybody happy? Hell no. Nothing like that exists. Which is
why I'm willing to live with an embargo as long as I don't feel like I'm
being toyed with.
And hey, vendor-sec works. I feel like vendor-sec just toys with me, which
is why I refuse to have anything to do with it, but it's entirely possible
that the best solution is to just ignore my wishes. That's OK. I'm ok with
it, vendor-sec is ok with it, nobody is _happy_ with it, but it's another
compromise. Agreeing to disagree is fine too, after all.
So it's embarrassing to everybody if the kernel.org kernel has a security
hole for longer than vendor kernels, but at the same time, most _users_
run vendor kernels anyway, so maybe the current setup is the proper one,
and the kernel.org kernel _should_ be the last one to get the fix.
Whatever. I happen to believe in openness, and vendor-sec does not. It's
that simple.
But if we're seriously looking for a middle ground between my "it should
be open" and vendor-sec "it should be totally closed", that's where my
suggestions come in. Whether people _want_ to look for a middle ground is
the thing to ask first..
For example, having an arbitrarily long embargo on actual known exploit
code is fine with me. I don't care. If I have to promise to never ever
disclose an exploit code in order to see it, I'm fine with that - but I
refuse to delay the _fix_ by more than a few days, and even that "few
days" goes out the window if somebody else has knowingly delayed giving
the fix or problem to me in the first place.
This is not just about sw security, btw. I refuse to sign NDA's on hw
errata too. Same deal - it may mean that I get to know about the problem
later, but it also means that I don't have to feel guilty about knowing of
a problem and being unable to fix it. And it means that people can trust
_me_ personally.
Linus
Linus Torvalds <[email protected]> wrote:
>
> Yes, I think delayed disclosure is broken. I think the whole notion of
> "vendor update available when disclosure happens" is nothing but vendor
> politics, and doesn't help _users_ one whit.
> ...
>
> So I think the whole vendor-sec thing is not helping users at all, it's
> purely a "vendor embarassment" thing.
That sounds a bit over-the-top to me, sorry.
AFAIUI, the vendor requirement is that they have time to have an upgraded
kernel package on their servers when the bug becomes public knowledge.
If correct and reasonable, then what is the best way in which we can
support them in this while promptly upgrading the kernel.org kernel?
Also:
I think we need to be more explicit in separating _types_ of security
problems. This recent controversy over the RLIM_MEMLOCK DoS is plain
silliness.
Look through the kernel changelogs for the past year - we've fixed a huge
number of "fix oops in foo" and "fix deadlock in bar" and "fix memory leak
in zot". All of these are of exactly the same severity as the rlimit bug,
and nobody cares, nobody is hurt.
The fuss over the rlimit problem occurred simply because some external
organisation chose to make a fuss over it.
IMO, local DoS holes are important mainly because buggy userspace
applications allow remote users to get in and exploit them, and for that
reason we of course need to fix them up. Even though such an attacker
could cripple the machine without exploiting such a hole.
For the above reasons I see no need to delay publication of local DoS holes
at all. The only thing for which we need to provide special processing is
privilege escalation bugs.
Or am I missing something?
On Wed, 12 Jan 2005, Andrew Morton wrote:
>
> That sounds a bit over-the-top to me, sorry.
Maybe a bit pointed, but the question is: would a user perhaps want to
know about a security fix a month earlier (knowing that bad people might
guess at it too), or want the security fix a month later (knowing that the
bad guys may well have known about the problem all the time _anyway_)?
Being public is different from being known about. If vendor-sec knows
about it, I don't find it at all unbelievable that some spam-virus writer
might know about it too.
> All of these are of exactly the same severity as the rlimit bug,
> and nobody cares, nobody is hurt.
The fact is, 99% of the time, nobody really does care.
> The fuss over the rlimit problem occurred simply because some external
> organisation chose to make a fuss over it.
I agree. And if it had been out in the open all the time, the fuss simply
would not have been there.
I'm a big believer in _total_ openness. Accept the fact that bugs will
happen. Be open about them, and fix them as soon as possible. None of this
cloak-and-dagger stuff.
Linus
On Wed, Jan 12, 2005 at 06:28:38PM -0800, Andrew Morton wrote:
>
> IMO, local DoS holes are important mainly because buggy userspace
> applications allow remote users to get in and exploit them, and for that
> reason we of course need to fix them up. Even though such an attacker
> could cripple the machine without exploiting such a hole.
>
> For the above reasons I see no need to delay publication of local DoS holes
> at all. The only thing for which we need to provide special processing is
> privilege escalation bugs.
>
> Or am I missing something?
So, a "classification" of the severity of the bug would cause different
type of disclosures? That's a good idea in theory, but trying to nail
down specific for bug classifications tends to be difficult.
Although I think both Red Hat and SuSE have a classification system in
place already that might help out here.
Anyway, if so, I like it. I think that would be a good thing to have,
if for no other reason than that I don't want to see security announcements
for every single patched driver bug that had caused a user-created
oops.
thanks,
greg k-h
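[One way to picture the classification being discussed, as a rough Python
sketch: a mapping from bug class to how its disclosure might be handled. The
class names and embargo windows are invented for illustration; they are not
Red Hat's or SuSE's actual severity taxonomy.]

    # Illustrative bug classes from the discussion, with made-up windows.
    EMBARGO_DAYS = {
        "local-dos": 0,               # fix in the open; no special processing
        "info-leak": 5,               # short, coordinated embargo
        "local-priv-escalation": 5,   # treated as seriously as remote, given
        "remote-priv-escalation": 5,  # how often buggy userspace bridges them
    }

    def handling(bug_class):
        days = EMBARGO_DAYS.get(bug_class)
        if days is None:
            return "unknown class: triage via the security contact first"
        if days == 0:
            return "publish immediately with the fix"
        return "coordinate the fix, disclose within %d working days" % days

    print(handling("local-dos"))              # -> publish immediately with the fix
    print(handling("local-priv-escalation"))  # -> disclose within 5 working days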
* Andrew Morton ([email protected]) wrote:
> AFAIUI, the vendor requirement is that they have time to have an upgraded
> kernel package on their servers when the bug becomes public knowledge.
Yup.
> If correct and reasonable, then what is the best way in which we can
> support them in this while promptly upgrading the kernel.org kernel?
Most projects inform vendors with enough heads-up time to let them get
their stuff together and out the door.
> IMO, local DoS holes are important mainly because buggy userspace
> applications allow remote users to get in and exploit them, and for that
> reason we of course need to fix them up. Even though such an attacker
> could cripple the machine without exploiting such a hole.
>
> For the above reasons I see no need to delay publication of local DoS holes
> at all. The only thing for which we need to provide special processing is
> privilege escalation bugs.
>
> Or am I missing something?
No, that's pretty similar to CVE allocation. At one time, there was
little effort even put into allocating CVE entries for local DoS holes.
It's not that they aren't important, but they are less critical than remote DoS
issues, and way less so than anything privilege escalation related.
thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
Linus Torvalds said:
>
>
> On Wed, 12 Jan 2005, Andrew Morton wrote:
>>
>> That sounds a bit over-the-top to me, sorry.
>
> Maybe a bit pointed, but the question is: would a user perhaps want to
> know about a security fix a month earlier (knowing that bad people might
> guess at it too), or want the security fix a month later (knowing that the
> bad guys may well have known about the problem all the time _anyway_)?
>
> Being public is different from being known about. If vendor-sec knows
> about it, I don't find it at all unbelievable that some spam-virus writer
> might know about it too.
>
>> All of these are of exactly the same severity as the rlimit bug,
>> and nobody cares, nobody is hurt.
>
> The fact is, 99% of the time, nobody really does care.
>
>> The fuss over the rlimit problem occurred simply because some external
>> organisation chose to make a fuss over it.
>
> I agree. And if it had been out in the open all the time, the fuss simply
> would not have been there.
>
> I'm a big believer in _total_ openness. Accept the fact that bugs will
> happen. Be open about them, and fix them as soon as possible. None of this
> cloak-and-dagger stuff.
>
> Linus
>
Devil's advocate: who is on the vendor-sec list? As I have started
developing a roll-your-own Linux distro (as hundreds of others have as well),
who decides who is "approved" to hear about the fixes beforehand? What makes
SuSE and Red Hat more deserving than Bonzai? User base?
In-house developers? I agree with Linus-san: openness is best all around.
The rest is mostly politics.
--
David Blomberg
[email protected]
AIS, APS, ASE, CCNA, Linux+, LCA, LCP, LPI I, MCP, MCSA, MCSE, RHCE, Server+
Linus Torvalds wrote:
>
> we can make it. If that means that we should make the changelogs be a bit
> less verbose because we don't want to steal the thunder from the people
> who found the problem, that's fine.
What the... no!!
Changelogs have to be verbose; I'm still often missing hints in the
current changelogs noting that patch_a and update_b got in because
of a security issue. Some boxes need only be updated for the sake of
security, so one would be happy just watching for <security patch> lines
in the kernel changelogs. Giving credit to the people who found the
problem is still possible by mentioning the (source of the) original
advisory.
Christian.
--
BOFH excuse #30:
positron router malfunction
On Wed, Jan 12, 2005 at 06:09:31PM -0800, Linus Torvalds wrote:
> Yes, I think delayed disclosure is broken. I think the whole notion of
> "vendor update available when disclosure happens" is nothing but vendor
> politics, and doesn't help _users_ one whit.
The volume of traffic we as a vendor get every time an issue
makes news (and sadly even the insignificant issues seem to be
making news these days) from users wanting to know where our
updates are is a good indication that your thinking is clearly bogus.
> The only thing it does is allow the vendor to point fingers and say "hey, we
> have an update, now it's your problem".
I fail to see the point you're trying to make here.
> So it's embarrassing to everybody if the kernel.org kernel has a security
> hole for longer than vendor kernels, but at the same time, most _users_
> run vendor kernels anyway, so maybe the current setup is the proper one,
> and the kernel.org kernel _should_ be the last one to get the fix.
I think timeliness isn't the issue; the issue is making sure that
the kernel.org kernel actually does end up getting the fixes.
That 2.6.10 got out of -rc with known vulnerabilities which were
known to be fixed in 2.6.9-ac is mind-boggling. That a 2.6.10.1
didn't follow up yet is equally so.
Part of the premise of the 'new' development model was that vendor kernels
were where people go for the 'super-stable kernel', and the kernel.org
kernel may not be quite so polished around the edges. This seems to
go against what you're saying in this thread, which reads:
'kernel.org kernels might not be as stable as vendor kernels, but you're
going to need to run them if you want security holes fixed asap'
> Whatever. I happen to believe in openness, and vendor-sec does not. It's
> that simple.
That openness comes at a price. I don't need to bore you with
analogies, as you know as well as I do how wide and far Linux
is deployed these days, but doing this openly is just irresponsible.
Someone malicious, on getting the announcement of a new kernel.org release,
gets told exactly where the hole is and how to exploit it.
All they'll need to do is find a target running a vendor kernel before
updates get deployed. Whilst this is true to a certain degree
today, as not everyone deploys security updates in a timely manner
(some not at all), things can only get worse.
Dave
On Wed, Jan 12, 2005 at 06:28:38PM -0800, Andrew Morton wrote:
> IMO, local DoS holes are important mainly because buggy userspace
> applications allow remote users to get in and exploit them, and for that
> reason we of course need to fix them up. Even though such an attacker
> could cripple the machine without exploiting such a hole.
>
> For the above reasons I see no need to delay publication of local DoS holes
> at all. The only thing for which we need to provide special processing is
> privilege escalation bugs.
>
> Or am I missing something?
The problem is that how much these things affect you depends on who you
are and what you're doing with Linux.
A local DoS doesn't bother me one squat personally, as I'm the only
user of the computers I use each day. An admin of a shell server or
the like, however, would likely see this in a different light.
(though it can be argued a mallet to the kneecaps of the user
responsible is more effective than any software update)
An information leak from kernel space may be equally as mundane to some,
though terrifying to some admins. Would you want some process to be
leaking your root password, credit card #, etc. to some other user's process?
Privilege escalation is clearly the number one threat. Whilst some
class a 'remote root hole' as higher risk than a 'local root hole', far
too often we've had instances where execution of shellcode by
overflowing some buffer in $crappyapp has led to a shell,
turning a local root into a remote root.
For us thankfully, exec-shield has trapped quite a few remotely
exploitable holes, preventing the above.
Dave
On Wed, 12 Jan 2005, Marcelo Tosatti wrote:
> The only reason for this is to have "time for the vendors to catch up",
> which can be defined by the kernel security office. Nothing more - no
> vendor politics involved.
There are other good reasons, too. One could be:
"Lets not make this security bug public on christmas eve,
because many system administrators won't get around to
applying patches, while the script kiddies have lots of
time over their christmas holidays."
IMHO it will be good to coordinate things like this, based on
common sense, and trying to minimise the impact on users of
the software. I do agree with Linus' "no politics" point,
though ;)
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan
Dave Jones <[email protected]> wrote:
>
> On Wed, Jan 12, 2005 at 06:28:38PM -0800, Andrew Morton wrote:
>
> > IMO, local DoS holes are important mainly because buggy userspace
> > applications allow remote users to get in and exploit them, and for that
> > reason we of course need to fix them up. Even though such an attacker
> > could cripple the machine without exploiting such a hole.
> >
> > For the above reasons I see no need to delay publication of local DoS holes
> > at all. The only thing for which we need to provide special processing is
> > privilege escalation bugs.
> >
> > Or am I missing something?
>
> The problem is that how much these things affect you depends on who you
> are and what you're doing with Linux.
>
> A local DoS doesn't bother me one squat personally, as I'm the only
> user of the computers I use each day. An admin of a shell server or
> the like, however, would likely see this in a different light.
> (though it can be argued a mallet to the kneecaps of the user
> responsible is more effective than any software update)
yup. But there are so many ways to cripple a Linux box once you have local
access. Another means which happens to be bug-induced doesn't seem
important.
> An information leak from kernel space may be equally mundane to some,
> though terrifying to some admins. Would you want some process to be
> leaking your root password, credit card #, etc. to some other user's process?
>
> Privilege escalation is clearly the number one threat. Whilst some
> class a 'remote root hole' as higher risk than a 'local root hole', far
> too often we've had instances where execution of shellcode by
> overflowing some buffer in $crappyapp has led to a shell,
> turning a local root into a remote root.
I'd place information leaks and privilege escalations into their own class,
way above "yet another local DoS".
A local privilege escalation hole should be viewed as seriously as a remote
privilege escalation hole, given the bugginess of userspace servers, yes?
On Wed, Jan 12, 2005 at 10:25:06PM -0500, Dave Jones scribbled:
[snip]
> > Whatever. I happen to believe in openness, and vendor-sec does not. It's
> > that simple.
>
> That openness comes at a price. I don't need to bore you with
> analogies, as you know as well as I do how wide and far Linux
> is deployed these days, but doing this openly is just irresponsible.
>
> Someone malicious, on getting the announcement of a new kernel.org release,
> gets told exactly where the hole is and how to exploit it.
> All they'll need to do is find a target running a vendor kernel before
> updates get deployed. Whilst this is true to a certain degree
> today, as not everyone deploys security updates in a timely manner
> (some not at all), things can only get worse.
That might be, but note one thing: not everybody runs vendor kernels (for various
reasons). Now see what happens when the super-secret vulnerability (with
vendor fixes) is described in an advisory. A person managing a fleet of machines
(let's say 100) with custom, non-vendor kernels suddenly finds out that they
have a buggy kernel and 100 machines to upgrade while the exploit and the
description of the vuln are out in the wild. They have to port their
custom stuff to the new kernel, compile it, test it (at least a bit), deploy
on 100 machines and pray it doesn't break. During all that time (and the
whole process won't take a day or even two) the evil guys are far ahead of
the poor bastard managing the 100 machines (since all they need is one
exploit which will work on any of our admin's machines). One other factor
that makes it hard for such a person to apply the patches is simply that there
is no single place in which to find the security patches. He goes to securityfocus.com,
for instance, and what does he find? A nice description of the vulnerability, a
discussion, a list of affected kernel versions and credits which usually
list vendor advisories and kernel versions and very rarely a link to an
archived mail message or a webpage with the patch. Hoping he'll find the
fixes in the vendor kernels, he goes to download source packages from SuSe,
RedHat or Trustix, Debian, Ubuntu, whatever and discovers that it is as easy
to find the patch there as it is to fish it out of the vanilla kernel patch
for the new version. Frustrating, isn't it? Not to mention that he might
need to backport the fix, if he runs an earlier version of the kernel.
And now assume that everything is as extremely open as Linus says - the
admin has the same access to the exact information the vendors on vendor-sec
have, together with the same fix they have (in the form of a simple patch
available without fishing for it all over the place). He starts the race
with the bad guys at exactly the same time they start scanning for
vulnerable machines on the 'Net. Priceless, IMHO.
I guess that, contrary to what you've just said above, hiding the
information is irresponsible.
Having said that, I don't think everything should be as extremely open as
Linus would want it to be, but rather handled the way he proposed (and which many
folks agreed to) with the 5-day (or so) embargo for the advisory release and
with the patch(es)/discussion openly available to anyone interested (based
on the premise that most people learn about vulnerabilities not from
security lists but from security bulletins, tech news sites, user forums etc.)
best regards,
marek
* Andrew Morton ([email protected]) wrote:
> yup. But there are so many ways to cripple a Linux box once you have local
> access. Another means which happens to be bug-induced doesn't seem
> important.
That depends on the environment. If it's already locked down via MAC
and rlimits, etc., and the bug now creates a DoS that wasn't there before,
it may be important. But, as a general rule of thumb, local DoS
is much less severe than other bugs, I fully agree.
> > An information leak from kernel space may be equally mundane to some,
> > though terrifying to some admins. Would you want some process to be
> > leaking your root password, credit card #, etc. to some other user's process?
> >
> > Privilege escalation is clearly the number one threat. Whilst some
> > class a 'remote root hole' as higher risk than a 'local root hole', far
> > too often we've had instances where execution of shellcode by
> > overflowing some buffer in $crappyapp has led to a shell,
> > turning a local root into a remote root.
>
> I'd place information leaks and privilege escalations into their own class,
> way above "yet another local DoS".
Yes, me too.
> A local privilege escalation hole should be viewed as seriously as a remote
> privilege escalation hole, given the bugginess of userspace servers, yes?
Absolutely, yes. Local root hole all too often == remote root hole.
thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
On Wed, 12 Jan 2005, Dave Jones wrote:
>
> For us thankfully, exec-shield has trapped quite a few remotely
> exploitable holes, preventing the above.
One thing worth considering, but may be a bit _too_ draconian, is a
capability that says "can execute ELF binaries that you can write to".
Without that capability set, you can only execute binaries that you cannot
write to, and that you cannot _get_ write permission to (ie you can't be
the owner of them either - possibly only binaries where the owner is
root).
Sure, that's clearly not viable for a developer or even somebody who
maintains his own machine, but it _is_ probably viable for pretty much any
user that is afraid of compiling stuff him/herself and just gets signed
rpm's that install as root anyway. And it should certainly be viable for
somebody like "nobody" or "ftp" or "apache".
And I suspect there is almost zero overlap between the "developer
workstation" kind of setup (where the above is just not workable) and
"server or end-user desktop" setup where it might work.
A lot of the local root exploits depend on being able to run code that
doesn't come pre-installed on the system. A hole in a user-level server
may get you local shell access, but you generally need another stage to
get elevated privileges and _really_ mess with the machine.
Quite frankly, nobody should ever depend on the kernel having zero holes.
We do our best, but if you want real security, you should have other
shields in place. exec-shield is one. So is using a compiler that puts
guard values on the stack frame (immunix, I think). But so is saying "you
can't just compile or download your own binaries, nyaah, nyaah, nyaah".
As I've already made clear, I don't believe one whit in the "secrecy"
approach to security. I believe that "security through obscurity" can
actually be one valid level of security (after all, in the extreme case,
that's all a password ever really is).
So I believe that in the case of hiding vulnerabilities, any "security
gain" from the obscurity is more than made up for by all the security you
lose through delaying action and not giving people information about the
problem.
I realize people disagree with me, which is also why I don't in any way
take vendor-sec as a personal affront or anything like that: I just think
it's a mistake, and am very happy to be vocal about it, but hey, the
fundamental strength of open source is exactly the fact that people don't
have to agree about everything.
Linus
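A minimal user-space sketch of the rule Linus describes - deny exec when the
invoking user owns the binary or holds write permission on it. The real
proposal is a per-process capability enforced by the kernel at execve() time,
so this wrapper only illustrates the check, not the mechanism:

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

/* Sketch: refuse to run a binary the calling user owns or can write to. */
static int may_exec(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0)
        return 0;
    if (st.st_uid == getuid())      /* owner could chmod it writable at will */
        return 0;
    if (access(path, W_OK) == 0)    /* already writable by us */
        return 0;
    return 1;
}

int main(int argc, char *argv[])
{
    if (argc < 2 || !may_exec(argv[1])) {
        fprintf(stderr, "refusing to exec\n");
        return 1;
    }
    execv(argv[1], &argv[1]);
    perror("execv");
    return 1;
}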
Dave Jones <[email protected]> wrote:
>> The problem is that how much these things affect you depends on who you
>> are and what you're doing with Linux.
>> A local DoS doesn't bother me one bit personally, as I'm the only
>> user of the computers I use each day. An admin of a shell server or
>> the like, however, would likely see this in a different light.
>> (though it can be argued a mallet to the kneecaps of the user
>> responsible is more effective than any software update)
On Wed, Jan 12, 2005 at 07:42:39PM -0800, Andrew Morton wrote:
> yup. But there are so many ways to cripple a Linux box once you have local
> access. Another means which happens to be bug-induced doesn't seem
> important.
This is too broad and sweeping a statement, and can be used to
excuse almost any bug triggerable only by local execution.
Most of the local DoS's I'm aware of are memory-management-related,
i.e. user-triggerable proliferation of pinned kernel data structures.
Beancounter patches were meant to address at least part of that. Paging
the larger kernel data structures users can trigger proliferation of
would also be a large help.
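A tame illustration of the kind of user-triggerable pinned allocation wli
means: every successful open() below pins another struct file in kernel
memory until the process exits. Here RLIMIT_NOFILE caps the damage; the
worrying cases are the structures with no such per-user bound:

#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    long n = 0;

    /* Each open() allocates and pins a struct file in the kernel. */
    while (open("/dev/null", O_RDONLY) >= 0)
        n++;
    printf("pinned %ld file structures before hitting RLIMIT_NOFILE\n", n);
    return 0;
}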
Dave Jones <[email protected]> wrote:
>> An information leak from kernel space may be equally mundane to some,
>> though terrifying to some admins. Would you want some process to be
>> leaking your root password, credit card #, etc. to some other user's process?
>> Privilege escalation is clearly the number one threat. Whilst some
>> class a 'remote root hole' as higher risk than a 'local root hole', far
>> too often we've had instances where execution of shellcode by
>> overflowing some buffer in $crappyapp has led to a shell,
>> turning a local root into a remote root.
On Wed, Jan 12, 2005 at 07:42:39PM -0800, Andrew Morton wrote:
> I'd place information leaks and privilege escalations into their own class,
> way above "yet another local DoS".
> A local privilege escalation hole should be viewed as seriously as a remote
> privilege escalation hole, given the bugginess of userspace servers, yes?
I agree on the latter count. On the first, I have to dissent with the
assessment of local DoS's as "unimportant".
-- wli
On Wed, Jan 12, 2005 at 06:28:38PM -0800, Andrew Morton wrote:
>> IMO, local DoS holes are important mainly because buggy userspace
>> applications allow remote users to get in and exploit them, and for that
>> reason we of course need to fix them up. Even though such an attacker
>> could cripple the machine without exploiting such a hole.
>> For the above reasons I see no need to delay publication of local DoS holes
>> at all. The only thing for which we need to provide special processing is
>> privilege escalation bugs.
>> Or am I missing something?
On Wed, Jan 12, 2005 at 10:35:42PM -0500, Dave Jones wrote:
> The problem is that how much these things affect you depends on who you
> are and what you're doing with Linux.
> A local DoS doesn't bother me one bit personally, as I'm the only
> user of the computers I use each day. An admin of a shell server or
> the like, however, would likely see this in a different light.
> (though it can be argued a mallet to the kneecaps of the user
> responsible is more effective than any software update)
It deeply disturbs me to hear this kind of talk. If we're pretending to
be a single-user operating system, why on earth did we use UNIX as a
precedent in the first place?
On Wed, Jan 12, 2005 at 10:35:42PM -0500, Dave Jones wrote:
> An information leak from kernel space may be equally mundane to some,
> though terrifying to some admins. Would you want some process to be
> leaking your root password, credit card #, etc. to some other user's process?
> Privilege escalation is clearly the number one threat. Whilst some
> class a 'remote root hole' as higher risk than a 'local root hole', far
> too often we've had instances where execution of shellcode by
> overflowing some buffer in $crappyapp has led to a shell,
> turning a local root into a remote root.
> For us thankfully, exec-shield has trapped quite a few remotely
> exploitable holes, preventing the above.
If we give up and say we're never going to make multiuser use secure,
where is our distinction from other inherently insecure single-user OS's?
-- wli
On Wed, Jan 12, 2005 at 08:49:19PM -0800, William Lee Irwin III wrote:
> On Wed, Jan 12, 2005 at 10:35:42PM -0500, Dave Jones wrote:
> > The problem is that how much these things affect you depends on who you
> > are and what you're doing with Linux.
> > A local DoS doesn't bother me one bit personally, as I'm the only
> > user of the computers I use each day. An admin of a shell server or
> > the like, however, would likely see this in a different light.
> > (though it can be argued a mallet to the kneecaps of the user
> > responsible is more effective than any software update)
>
> It deeply disturbs me to hear this kind of talk. If we're pretending to
> be a single-user operating system, why on earth did we use UNIX as a
> precedent in the first place?
You completely missed my point. What's classed as a threat to one
user just isn't relevant to another.
> On Wed, Jan 12, 2005 at 10:35:42PM -0500, Dave Jones wrote:
> > An information leak from kernel space may be equally mundane to some,
> > though terrifying to some admins. Would you want some process to be
> > leaking your root password, credit card #, etc. to some other user's process?
> > Privilege escalation is clearly the number one threat. Whilst some
> > class a 'remote root hole' as higher risk than a 'local root hole', far
> > too often we've had instances where execution of shellcode by
> > overflowing some buffer in $crappyapp has led to a shell,
> > turning a local root into a remote root.
> > For us thankfully, exec-shield has trapped quite a few remotely
> > exploitable holes, preventing the above.
>
> If we give up and say we're never going to make multiuser use secure,
> where is our distinction from other inherently insecure single-user OS's?
Nowhere did I make that claim. If you parsed the comment about
exec-shield incorrectly, I should point out that we also issued
security updates to various applications even though (due to exec-shield)
our users weren't vulnerable. The comment was an indication that
the extra barrier has bought us some time in preparing updates
when 0-day exploits have been sprung on us unexpectedly on more
than one occasion.
Dave
On Thu, Jan 13, 2005 at 04:53:31AM +0100, Marek Habersack wrote:
> archived mail message or a webpage with the patch. Hoping he'll find the
> fixes in the vendor kernels, he goes to download source packages from SuSe,
> RedHat or Trustix, Debian, Ubuntu, whatever and discovers that it is as easy
> to find the patch there as it is to fish it out of the vanilla kernel patch
> for the new version. Frustrating, isn't it? Not to mention that he might
http://linux.bkbits.net is your friend.
Each patch (including security fixes) in the mainline kernels (2.4 and
2.6) appears there as an individual, clickable link with a description
(e.g. "1.1551 Paul Starzetz: sys_uselib() race vulnerability
(CAN-2004-1235)").
If other patches have gone in since then, you may have to scroll through
a (short-form) changelog. However, it's still less frustrating than the
scenario you portray.
-Barry K. Nathan <[email protected]>
On Wed, Jan 12, 2005 at 08:48:57PM -0800, Linus Torvalds wrote:
> Quite frankly, nobody should ever depend on the kernel having zero holes.
> We do our best, but if you want real security, you should have other
> shields in place. exec-shield is one. So is using a compiler that puts
That reminds me...
What are the chances of exec-shield making it into mainline anytime
in the near future? It's the *big* feature that has me preferring
Red Hat/Fedora vendor kernels over mainline kernels, even on non-Red
Hat/Fedora distributions. (I know that parts of exec-shield are already in
mainline, but I'm wondering about the parts that haven't been merged yet.)
-Barry K. Nathan <[email protected]>
William Lee Irwin III <[email protected]> wrote:
>
> Most of the local DoS's I'm aware of are memory-management-related,
> i.e. user-triggerable proliferation of pinned kernel data structures.
Well. A heck of a lot of the DoS opportunities we've historically seen
involved memory leaks, deadlocks or making the kernel go oops or BUG with
locks held or with kernel memory allocated.
William Lee Irwin III <[email protected]> wrote:
>> Most of the local DoS's I'm aware of are memory-management-related,
>> i.e. user-triggerable proliferation of pinned kernel data structures.
On Wed, Jan 12, 2005 at 10:54:12PM -0800, Andrew Morton wrote:
> Well. A heck of a lot of the DoS opportunities we've historically seen
> involved memory leaks, deadlocks or making the kernel go oops or BUG with
> locks held or with kernel memory allocated.
I'd consider those even more severe.
-- wli
On Wed, Jan 12, 2005 at 10:54:12PM -0800, Andrew Morton wrote:
> William Lee Irwin III <[email protected]> wrote:
> >
> > Most of the local DoS's I'm aware of are memory-management-related,
> > i.e. user-triggerable proliferation of pinned kernel data structures.
>
> Well. A heck of a lot of the DoS opportunities we've historically seen
> involved memory leaks, deadlocks or making the kernel go oops or BUG with
> locks held or with kernel memory allocated.
I think we can probably exclude root-only local DoS from the full
embargo treatment for starters. The recent /dev/random sysctl one was
in that category.
I can imagine some local DoS bugs that are worth keeping a lid on for
a bit. The classic F00F bug may have been a good example. But a hole in an
arbitrary driver may not be.
--
Mathematics is the supreme nostalgia of our time.
On Wed, Jan 12, 2005 at 08:48:57PM -0800, Linus Torvalds wrote:
>
>
> On Wed, 12 Jan 2005, Dave Jones wrote:
> >
> > For us thankfully, exec-shield has trapped quite a few remotely
> > exploitable holes, preventing the above.
>
> One thing worth considering, but may be a bit _too_ draconian, is a
> capability that says "can execute ELF binaries that you can write to".
>
> Without that capability set, you can only execute binaries that you cannot
> write to, and that you cannot _get_ write permission to (ie you can't be
> the owner of them either - possibly only binaries where the owner is
> root).
We can do that now with a combination of read-only and no-exec mounts.
--
Mathematics is the supreme nostalgia of our time.
On Wed, Jan 12, 2005 at 11:28:51PM -0800, Matt Mackall wrote:
> On Wed, Jan 12, 2005 at 08:48:57PM -0800, Linus Torvalds wrote:
> >
> >
> > On Wed, 12 Jan 2005, Dave Jones wrote:
> > >
> > > For us thankfully, exec-shield has trapped quite a few remotely
> > > exploitable holes, preventing the above.
> >
> > One thing worth considering, but may be a bit _too_ draconian, is a
> > capability that says "can execute ELF binaries that you can write to".
> >
> > Without that capability set, you can only execute binaries that you cannot
> > write to, and that you cannot _get_ write permission to (ie you can't be
> > the owner of them either - possibly only binaries where the owner is
> > root).
>
> We can do that now with a combination of read-only and no-exec mounts.
That's why some hardened distros ship with everything R/O (except var) and
/var non-exec.
Willy
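For concreteness, that policy is just mount flags; the C equivalent of
"mount -o remount,noexec,nodev /var" looks like the sketch below.
(Illustration only: real systems put the flags in fstab, and MS_REMOUNT
replaces the mount's existing flag set, so a careful tool would re-specify
all of it.)

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* Needs root; afterwards execve() of anything under /var fails
     * with EACCES. */
    if (mount(NULL, "/var", NULL, MS_REMOUNT | MS_NOEXEC | MS_NODEV, NULL)) {
        perror("mount");
        return 1;
    }
    return 0;
}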
On Thu, 13 Jan 2005, Willy Tarreau wrote:
> On Wed, Jan 12, 2005 at 11:28:51PM -0800, Matt Mackall wrote:
>> On Wed, Jan 12, 2005 at 08:48:57PM -0800, Linus Torvalds wrote:
>>>
>>>
>>> On Wed, 12 Jan 2005, Dave Jones wrote:
>>>>
>>>> For us thankfully, exec-shield has trapped quite a few remotely
>>>> exploitable holes, preventing the above.
>>>
>>> One thing worth considering, but may be a bit _too_ draconian, is a
>>> capability that says "can execute ELF binaries that you can write to".
>>>
>>> Without that capability set, you can only execute binaries that you cannot
>>> write to, and that you cannot _get_ write permission to (ie you can't be
>>> the owner of them either - possibly only binaries where the owner is
>>> root).
>>
>> We can do that now with a combination of read-only and no-exec mounts.
>
> That's why some hardened distros ship with everything R/O (except var) and
> /var non-exec.
This only works if you have no reason to mix the non-exec and R/O stuff in
the same directory (there is some software that has hard-coded paths for
stuff that will not work without them being together).
Also, it gives you no ability to maintain the protection for normal users
at the same time that an admin updates the system. Linus's proposal would
let you give this cap to the normal users, but still let the admin manage
the box normally.
David Lang
--
There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies.
-- C.A.R. Hoare
* Linus Torvalds:
> So I think the whole vendor-sec thing is not helping users at all, it's
> purely a "vendor embarassment" thing.
At least vendor-sec serves as a candidate naming authority for CVE,
and makes sure that the distributors use the same set of CANs in their
advisories. For users, this is an important step forward, because
there is no other way to tell if vendor A is fixing the same problem
as vendor B, at least for end users.
In the past, the kernel developers (including you) supported the
vendor-sec process by not addressing security issues in official
kernels in a timely manner, and (what's far worse from a user point of
view) silently fixing security bugs in new releases, probably because
some vendor kernels weren't fixed yet. Especially the last point
doesn't help users.
On Wed, Jan 12, 2005 at 08:48:57PM -0800, Linus Torvalds wrote:
> Without that capability set, you can only execute binaries that you cannot
> write to, and that you cannot _get_ write permission to (ie you can't be
> the owner of them either - possibly only binaries where the owner is
> root).
I think this is called "mount user-writeable filesystems with -noexec" ;-)
* Barry K. Nathan:
> On Thu, Jan 13, 2005 at 04:53:31AM +0100, Marek Habersack wrote:
>> archived mail message or a webpage with the patch. Hoping he'll find the
>> fixes in the vendor kernels, he goes to download source packages from SuSe,
>> RedHat or Trustix, Debian, Ubuntu, whatever and discovers that it is as easy
>> to find the patch there as it is to fish it out of the vanilla kernel patch
>> for the new version. Frustrating, isn't it? Not to mention that he might
>
> http://linux.bkbits.net is your friend.
>
> Each patch (including security fixes) in the mainline kernels (2.4 and
> 2.6) appears there as an individual, clickable link with a description
> (e.g. "1.1551 Paul Starzetz: sys_uselib() race vulnerability
> (CAN-2004-1235)").
This is the exception. Usually, changelogs are cryptic, often
deliberately so. Do you still remember Alan's DMCA protest
changelogs?
On Thu, Jan 13, 2005 at 12:02:01AM -0800, David Lang wrote:
> >That's why some hardened distros ship with everything R/O (except var)
> >and
> >/var non-exec.
>
> This only works if you have no reason to mix the non-exec and R/O stuff
> in the same directory (there is some software that has hard-coded paths
> for stuff that will not work without them being together).
Symlinks are the solution to this breakage. And if your software comes
from the DOS world, where temporary files are stored in the same directory
as the binaries (remember SET TEMP=C:\DOS ?), then you have no option at
all, but the application design by itself should be frightening enough to
keep you away from it.
> Also, it gives you no ability to maintain the protection for normal users
> at the same time that an admin updates the system. Linus's proposal would
> let you give this cap to the normal users, but still let the admin manage
> the box normally.
That's perfectly true. What I explained was not meant to be a universal
solution, but an easy step forward.
Willy
Linus writes:
>So I'd not personally mind some _totally_ open list. No embargo at all, no
>limits on who reads it. The more, the merrier. However, I think my
>personal preference is pretty extreme in one end
>
I'm tipping my security hat to Linus (and somewhat away from RFPolicy)
on this one. Keeping a large organization free from viruses and malware
becomes increasingly entertaining the more "day zero" variants there
are. And recently, we've seen a lot for the windoze platform here; at
least one major anti-virus player thanks us for sending them infected
executables to analyze. Waiting for some embargo to allow a researcher
to claim credit just does not work. We spend all of our time swatting
flies, waiting for a vendor fix; yet a disclose-without-delay
quick-and-dirty fix would have saved so many staff hours.
>So it's embarrassing to everybody if the kernel.org kernel has a security
>hole for longer than vendor kernels, but at the same time, most _users_
>run vendor kernels anyway
>
Not here! :-) All of my security infrastructure runs kernel.org
kernels. (I don't want any vendor "goodies" hidden in places I don't
know about.) I punch a button on my heavily-hacked Slackware boxen, and
the latest kernel, the latest internet-facing servers, the latest
critical libraries are automatically downloaded, compiled and installed
whenever newer version numbers exist. Time to a patched system from
when the author creates a patch is measured in hours; I compare that to
the day(s) or weeks I can wait for a vendor to get around to doing the
same thing.
Kris
On Thu, Jan 13, 2005 at 09:59:00AM +0100, Florian Weimer wrote:
> This is the exception. Usually, changelogs are cryptic, often
> deliberately so. Do you still remember Alan's DMCA protest
> changelogs?
Yes, I remember. However, if I saw a BK changeset called "Security fixes"
or "Security fixes -- details censored in accordance with the US DMCA"
then it would obviously be a security patch worth looking at. So looking
at linux.bkbits.net would still be an improvement over looking at a raw
patch with everything combined (which is what was the complaint was
about).
-Barry K. Nathan <[email protected]>
On Thu, 13 Jan 2005, Christoph Hellwig wrote:
> On Wed, Jan 12, 2005 at 08:48:57PM -0800, Linus Torvalds wrote:
> > Without that capability set, you can only execute binaries that you cannot
> > write to, and that you cannot _get_ write permission to (ie you can't be
> > the owner of them either - possibly only binaries where the owner is
> > root).
>
> I think this is called "mount user-writeable filesystems with -noexec" ;-)
You miss the point.
It wouldn't be a global flag. It's a per-process flag. For example, many
people _do_ need to execute binaries in their home directory. I do it all
the time. I know what a compiler is.
Others do not necessarily do that. Sure, you could mount each user's home
directory separately with a bind mount, but that's not only inconvenient,
it also misses the point - it's not about _where_ the binary is, it's
about _who_ runs it.
What is the real issue with MS security? Is it that NT is fundamentally a
weak kernel? Hey, maybe. Or maybe not. More likely it's the mindset that
you trust everything, regardless of where it comes from. Most users are
admins, and you run any code you see (or don't see) by default, whether
it's in an email attachment or whatever.
Containment is what real security is about. Everybody knows bugs happen,
and that people do stupid things. Developers, users, whatever. We all do.
For example, in many environments it could possibly be a good idea to make
even _root_ have the "can run non-root binaries flag" clear by default.
Imagine a system that booted up that way, and used PAM to enable non-root
binaries on a per-user basis (for developers who need it or otherwise
people who are trusted to have their own binaries). Think about what that
means...
Every single daemon in the system would have the flag clear by default.
You take over the web-server, and the most you have to play with are the
binaries that are already installed on the system (and the code you can
inject directly into the web server process from outside - that's likely
to be the _real_ security hazard).
It's just another easy containment. It's not real security in itself, but
_no_ single thing is "real security". You just add containment, to the
point where it gets increasingly difficult to get to some state where you
can do lots of damage (in a perfect world, exponentially more so, but
these containments are seldom independent of each other).
NOTE! I'd personally hate some of the security things. For example, I
think the "randomize code addresses" is absolutely horrible, just because
of the startup overhead it implies (specifically no pre-linking). I also
immensely dislike exec-shield because of the segment games it plays - I
think it makes sense in the short run but not in the long run, so I much
prefer that one as a "vendor feature", not as a "core feature".
So when I talk about security, I have this double-standard where I end up
convinced that many features are things that _I_ should not do, but
others likely should ;)
Linus
On Mer, 2005-01-12 at 17:42, Marcelo Tosatti wrote:
> The kernel security list must be higher in the hierarchy than vendorsec.
>
> Any information sent to vendorsec must be sent immediately to the kernel
> security list and discussed there.
We cannot do this without the reporter's permission. Often we get
material that even the list isn't allowed to see directly; it comes
only by contacting the relevant bodies directly as well. The list then
just serves as a "foo should have told you about issue X" notification.
If you are setting up the list, also make sure it's entirely encrypted,
after the previous sniffing incident.
Alan
> Vendors should also cc: the kernel-security list/contact at the same
> time they would normally contact vendor-sec. I don't see a problem with
> that happening, and it would save the people on vendor-sec from having
> to wade through a lot of linux kernel specific stuff at times.
vendor-sec has no control over dates or who else gets to know. We can
ask people to also notify others, we can suggest dates to people, but
that is all. So if you think 7 days is sensible, then when reporting a
hole specify that you will be making it public in 7 days.
If vendor-sec ignores a request, for example that the bug doesn't go
public until date X, then we just don't get told in future, and we get
more 0-day crap.
On Iau, 2005-01-13 at 02:28, Andrew Morton wrote:
> For the above reasons I see no need to delay publication of local DoS holes
> at all. The only thing for which we need to provide special processing is
> privilege escalation bugs.
>
> Or am I missing something?
Universities and web hosting companies see the DoS issue rather
differently sometimes. (Once we have Xen in the tree we'll have a good
answer)
On Thu, 2005-01-13 at 08:38 -0800, Linus Torvalds wrote:
>
> NOTE! I'd personally hate some of the security things. For example, I
> think the "randomize code addresses" is absolutely horrible, just because
> of the startup overhead it implies (specifically no pre-linking). I also
> immensely dislike exec-shield because of the segment games it plays - I
> think it makes sense in the short run but not in the long run, so I much
> prefer that one as a "vendor feature", not as a "core feature".
I think you are somewhat misguided on these: the randomisation done in
FC does NOT prohibit prelink from working, with the exception of special
PIE binaries. Does this destroy the randomisation? No: prelink *itself*
randomizes the addresses when creating its prelink database (which in
Fedora happens once every two weeks, with a daily incremental run in
between; the bi-weekly run is needed anyway to properly deal with new and
updated software, the daily runs are stopgaps only). This makes all
*systems* different, even though runs of the same app on the same machine
right after each other are the same for the library addresses only.
That does not destroy the value of randomisation; it limits it slightly,
since this ONLY matters for libraries, not for the stack or heap and the
other things that get randomized.
As for the segment limits (you call them execshield, but execshield is
actually a whole bunch of stuff that happens to include segment limits;
a bit like tree and forest ;) yes, they probably should remain a vendor
feature, no argument about that.
On Iau, 2005-01-13 at 08:59, Florian Weimer wrote:
> This is the exception. Usually, changelogs are cryptic, often
> deliberately so. Do you still remember Alan's DMCA protest
> changelogs?
They were not cryptic, just following the law to the point it claimed
necessary....
That aside, right now, because Linus doesn't give us a heads-up, we
vendors spend our time scanning all Linus' diffs and playing spot the
security fix, because we know the bad guys do the same, and they are
rather good at it. It's useful anyway - e.g. it's how we found that base
kernels have broken AX.25, and several other patches got tagged for
immediate revert in the -ac tree (and of course reported back upstream
to l/k) - but it's a pain to have to do it this way.
Having a list that fed such notices on to vendor-sec with a date fixed
by them is a real possible improvement - that's how we work with many
other projects. I also don't see any reason that Linus or Andrew
wouldn't be able to become a CAN issuing authority for security
advisories.
Alan
On Iau, 2005-01-13 at 16:38, Linus Torvalds wrote:
> It wouldn't be a global flag. It's a per-process flag. For example, many
> people _do_ need to execute binaries in their home directory. I do it all
> the time. I know what a compiler is.
noexec has never been worth anything because of scripts. Kernel won't
load that binary, I can write a script to do it.
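The bypass Alan is pointing at: the noexec check applies only to the file
handed to execve(), not to files an interpreter merely reads. A sketch (the
script path is hypothetical):

#include <unistd.h>

int main(void)
{
    /* execve() of the script itself would fail with EACCES on a noexec
     * mount, but here the kernel only execs /bin/sh, which lives on an
     * exec-allowed filesystem; the script is just data read by sh. */
    char *argv[] = { "sh", "/mnt/noexec/script.sh", NULL };

    execve("/bin/sh", argv, NULL);
    return 1;
}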
On Thu, 13 Jan 2005, Arjan van de Ven wrote:
>
> I think you are somewhat misguided on these: the randomisation done in
> FC does NOT prohibit prelink from working, with the exception of special
> PIE binaries. Does this destroy the randomisation? No: prelink *itself*
> randomizes the addresses when creating its prelink database
There was a kernel-based randomization patch floating around at some
point, though. I think it's part of PaX. That's the one I hated.
Although I haven't seen it in a long time, so you may well be right that
that one too is fine.
My point was really more about the generic issue of me being two-faced:
I'll encourage people to do things that I don't actually like myself in
the standard kernel.
I just think that forking at some levels is _good_. I like the fact that
different vendors have different objectives, and that there are things
like Immunix and PaX etc around. Of course, the problem that sometimes
results in is the very fact that because I encourage others to have
special patches, they end up not even trying to feed back _parts_ of them.
In this case I really believe that was the case. There are fixes in PaX
that make sense for the standard kernel. But because not _all_ of PaX
makes sense for the standard kernel, and because I will _not_ take their
patch whole-sale, they apparently believe (incorrectly) that I wouldn't
even take the non-intrusive fixes, and haven't really even tried to feed
them back.
(Yes, Brad Spengler has talked to me about PaX, but never sent me
individual patches, for example. People seem to expect me to take all or
nothing - and there's a _lot_ of pretty extreme people out there that
expect everybody else to be as extreme as they are..)
Linus
* Hubert Tonneau ([email protected]) wrote:
> Basically, you are currently leaving non-distribution-related users out in the
> cold, and this is really, really bad for the confidence we have in Linux,
> so please publish a 2.6.10.1 with the short term solution to fix the hole.
> Of course this does not prevent publishing 2.6.10.2 when you find a better
> solution, or using a different fix in 2.6.11, since they are not based on 2.6.10.1
I agree (it was part of my original mail), and would like to remedy this.
For now, you can pick up fixes from -ac tree.
> Regards,
> Hubert Tonneau
>
>
> PS: I believe that it would also be a very good idea, since Linux is now
> expected to be a mature organisation, to automatically publish a 2.6.x.y
> new-holes-only fix patch for each stable kernel that has been released
> less than a year ago. This would enable smoother upgrade of highly
> important production servers.
Not sure about that (it's quite some work), but at least the _current_
stable release version.
thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
On Thu, 13 Jan 2005, Alan Cox wrote:
>
> On Iau, 2005-01-13 at 16:38, Linus Torvalds wrote:
> > It wouldn't be a global flag. It's a per-process flag. For example, many
> > people _do_ need to execute binaries in their home directory. I do it all
> > the time. I know what a compiler is.
>
> noexec has never been worth anything because of scripts. Kernel won't
> load that binary, I can write a script to do it.
Scripts can only do what the interpreter does. And it's often a lot harder
to get the interpreter to do certain things. For example, you simply
_cannot_ get any thread race conditions with most scripts out there, nor
can you generally use magic mmap patterns.
Am I claiming that disallowing self-written ELF binaries gets rid of all
security holes? Obviously not. I'm claiming that there are things that
people can do that make it harder, and that _real_ security is not about
trusting one subsystem, but about making it hard enough in many independent
ways that it's just too effort-intensive to attack.
It's the same thing with passwords. Clearly any password protected system
can be broken into: you just have to guess the password. It then becomes a
matter of how hard it is to "guess" - at some point you say a password is
secure not because it is a password, but because it's too _expensive_ to
guess/break.
So all security issues are about balancing cost vs gain. I'm convinced
that the gain from openness is higher than the cost. Others will disagree.
Linus
* Alan Cox:
> We cannot do this without the reporters permission. Often we get
> material that even the list isn't allowed to directly see only by
> contacting the relevant bodies directly as well. The list then just
> serves as a "foo should have told you about issue X" notification.
>
> If you are setting up the list also make sure its entirely encrypted
> after the previous sniffing incident.
Others have made good use of symmetric encryption with OpenPGP
(the CAST5 cipher seems most interoperable). New symmetric keys are
distributed twice per year, using the participants' OpenPGP public
keys.
(There are also various implementations of reencrypting mailing lists,
but they cannot ensure end-to-end encryption.)
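(For what it's worth, the symmetric scheme Florian describes can be driven
with stock GnuPG - e.g. "gpg --symmetric --cipher-algo CAST5 report.txt"
encrypts to the shared passphrase, and the twice-yearly key distribution is
then just an ordinary pubkey-encrypted mail to each participant. That is an
illustration of the mechanism, not a claim about what any particular list
actually runs.)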
* Linus Torvalds ([email protected]) wrote:
> On Thu, 13 Jan 2005, Alan Cox wrote:
> >
> > On Iau, 2005-01-13 at 16:38, Linus Torvalds wrote:
> > > It wouldn't be a global flag. It's a per-process flag. For example, many
> > > people _do_ need to execute binaries in their home directory. I do it all
> > > the time. I know what a compiler is.
> >
> > noexec has never been worth anything because of scripts. Kernel won't
> > load that binary, I can write a script to do it.
>
> Scripts can only do what the interpreter does. And it's often a lot harder
> to get the interpreter to do certain things. For example, you simply
> _cannot_ get any thread race conditions with most scripts out there, nor
> can you generally use magic mmap patterns.
I think perl has threads and some type of free-form syscall ability.
Heck, with a legit ELF binary and gdb you can get a long way. But I
agree in two things. 1) It's all about layers, since there is no silver
bullet, and 2) Containment goes a long ways to mitigate damage.
thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
Linus Torvalds wrote:
>
> On Thu, 13 Jan 2005, Arjan van de Ven wrote:
>
>>I think you are somewhat misguided on these: the randomisation done in
>>FC does NOT prohibit prelink from working, with the exception of special
>>PIE binaries. Does this destroy the randomisation? No: prelink *itself*
>>randomizes the addresses when creating its prelink database
>
>
> There was a kernel-based randomization patch floating around at some
> point, though. I think it's part of PaX. That's the one I hated.
>
PaX and Exec Shield both have them; personally I believe PaX is a more
mature technology, since it's 1) still actively developed, and 2) been
around since late 2000. The rest of the community disagrees with me of
course, but whatever; let's not get into PMS matches on whose junk is
better than whose.
> Although I haven't seen it in a long time, so you may well be right that
> that one too is fine.
>
> My point was really more about the generic issue of me being two-faced:
> I'll encourage people to do things that I don't actually like myself in
> the standard kernel.
>
> I just think that forking at some levels is _good_. I like the fact that
> different vendors have different objectives, and that there are things
> like Immunix and PaX etc around.
I use the argument that the 2.6 development model being used as 'stable'
hurts this all the time, and people (not you, Linus) have fed back to me
that "they should submit their patches to mainline then."
> Of course, the problem that sometimes
> results in is the very fact that because I encourage others to have
> special patches, they end up not even trying to feed back _parts_ of them.
>
> In this case I really believe that was the case. There are fixes in PaX
> that make sense for the standard kernel.
Yes, there's fixes that should go in to mainline often, aside from the
added functionality. I think these should be split out and distributed
*shrug*
> But because not _all_ of PaX
> makes sense for the standard kernel,
Personally I believe it does, for social engineering reasons (encourage
software developers to be mindful of the more secure setting). That
being said, every part of PaX is an option, so even if it went mainline,
it'd be disabled where inappropriate anyway.
> and because I will _not_ take their
> patch whole-sale, they apparently believe (incorrectly) that I wouldn't
> even take the non-intrusive fixes, and haven't really even tried to feed
> them back.
>
> (Yes, Brad Spengler has talked to me about PaX, but never sent me
> individual patches, for example. People seem to expect me to take all or
> nothing - and there's a _lot_ of pretty extreme people out there that
> expect everybody else to be as extreme as they are..)
>
Things like PaX actually have to be taken all or nothing for a reason.
This doesn't mean they have to come with all the GrSecurity
enhancements, although those help as well.
PaX supplies two major components: enhanced memory protections,
particularly using the PROT_EXEC marking (hardware or otherwise); and
address space layout randomization.
For now I'll set aside the emulations on x86, but I'll cover that later.
First, let's look at ASLR. ASLR can be defeated if you can inject code
to read (if I understand correctly) %ebp and locate the global offset
table. Thus, on its own, ASLR is pretty much useless.
If we look at executable space protections on their own, a mapping's
missing PROT_EXEC can be added back by returning into mprotect(); since
PaX restricts mprotect(), you instead have to return into open() and
write() and mmap(), but it's the same deal. Either way, the memory space
protections can be defeated by ret2libc, so on their own these are also
pretty much useless.
Examining further, you should consider deploying ASLR in conjunction
with proper memory space protections. In this situation, ASLR must be
defeated before the memory protections can be defeated; and the memory
protections must be defeated before you can defeat ASLR - a continuous
circle: *->ASLR->NX->*.
This makes defeating the ASLR/NX combination a paradox; you can't defeat
both at the same time, and you can't defeat one without first defeating
the other. The only logical possibility is to defeat neither. (It's
actually possible, but only by completely random guessing and one hell
of a stroke of luck.)
Going back to the emulation, there are no NX protections without an NX
bit; so for any of this to have any point at all on x86--the most
popular desktop platform ATM--you need to emulate an NX bit.
I can see where you wouldn't want to put in a superpatch like PaX, and
I'm not saying you should jump up right now and go merge it with
mainline; but I feel it's important that you understand that each part
of PaX complements the others to form a network of protections that
reciprocate upon each other. Each piece would fail without the others to
cover its shortfalls; but together they've got everything pretty
well covered.
> Linus
--
All content of all messages exchanged herein are left in the
Public Domain, unless otherwise explicitly stated.
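To make the mprotect() point concrete, the transition a PaX-style MPROTECT
policy refuses is the classic write-then-execute sequence sketched below:
on a stock kernel the mprotect() succeeds, under the restriction it fails
with EACCES. (Illustration only, and x86-specific in its single byte of
injected code.)

#include <string.h>
#include <sys/mman.h>

int main(void)
{
    unsigned char insn = 0xc3;  /* x86 'ret' instruction */
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (buf == MAP_FAILED)
        return 1;
    memcpy(buf, &insn, 1);      /* "inject" code as plain data */

    /* The W -> X flip that MPROTECT-style policies deny. */
    if (mprotect(buf, 4096, PROT_READ | PROT_EXEC) != 0)
        return 2;

    ((void (*)(void))buf)();    /* run the injected instruction */
    return 0;
}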
On Thu, 2005-01-13 at 09:19 -0800, Linus Torvalds wrote:
>
> On Thu, 13 Jan 2005, Arjan van de Ven wrote:
> >
> > I think you are somewhat misguided on these: the randomisation done in
> > FC does NOT prohibit prelink from working, with the exception of special
> > PIE binaries. Does this destroy the randomisation? No: prelink *itself*
> > randomizes the addresses when creating its prelink database
>
> There was a kernel-based randomization patch floating around at some
> point, though. I think it's part of PaX. That's the one I hated.
>
> Although I haven't seen it in a long time, so you may well be right that
> that one too is fine.
I don't know about the PaX one; we were careful with the FC one not to
break prelink, for obvious reasons ;)
> I just think that forking at some levels is _good_. I like the fact that
> different vendors have different objectives, and that there are things
> like Immunix and PaX etc around. Of course, the problem that sometimes
> results in is the very fact that because I encourage others to have
> special patches, they end up not even trying to feed back _parts_ of them.
Actually I was hoping to feed some bits of exec-shield (e.g. the
randomisation) to you sometime in the next weeks/months, after a
thorough cleaning of the code, and defaulting to off.
The code can be made quite reasonable, I suspect, if I manage to find a
few hours to clean it up some.
(The pre-cleanup patch is at
http://www.kernel.org/pub/linux/kernel/people/arjan/execshield/00-
randomize-A0
in case you want to see for yourself.)
That patch randomizes:
- the stack (already done via an x86-specific hack in the existing
  kernel; this pulls that out to be more generic)
- the brk start
- the start of mmap space (but leaves mmaps alone where the app gives a
  hint for the address, like ld.so does for prelinked libs)
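An easy way to see which of those three areas a running kernel actually
randomizes - run this twice and compare; any address that repeats is not
being randomized (illustration only):

#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    int on_stack;
    void *cur_brk = sbrk(0);    /* current break; heap start for a fresh process */
    void *map = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* Stack, brk and mmap base: the three areas the patch randomizes. */
    printf("stack=%p brk=%p mmap=%p\n", (void *)&on_stack, cur_brk, map);
    return 0;
}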
Linus Torvalds wrote:
>
> On Thu, 13 Jan 2005, Alan Cox wrote:
>
>>On Iau, 2005-01-13 at 16:38, Linus Torvalds wrote:
>>
[...]
> Am I claiming that disallowing self-written ELF binaries gets rid of all
> security holes? Obviously not. I'm claiming that there are things that
> people can do that make it harder, and that _real_ security is not about
> trusting one subsystem, but about making it hard enough in many independent
> ways that it's just too effort-intensive to attack.
>
I think you can make it non-guaranteeable.
> It's the same thing with passwords. Clearly any password protected system
> can be broken into: you just have to guess the password. It then becomes a
> matter of how hard it is to "guess" - at some point you say a password is
> secure not because it is a password, but because it's too _expensive_ to
> guess/break.
>
You can't guarantee you can guess a password. You could for example
write a PAM module that mandates a 3-second delay on failed
authentication for a user (the console does this currently; use 3
separate consoles and you can run the attack 3 times faster). Now you
have to guess the password with one try every 3 seconds.
aA1#: 96 possible values per character, 8 characters: 7.2139x10^15
combinations. It takes 686253404.7 years to go through all those at one
every 3 seconds. You've got a good chance at half that.
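Checking the arithmetic: 96^8 is about 7.2139 x 10^15 combinations, and at
one guess every 3 seconds that is (7.2139 x 10^15 x 3 s) / (31,557,600
s/year), i.e. roughly 6.86 x 10^8 years - which matches the 686,253,404.7
figure, with the expected time to a hit being half that.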
This isn't "hard," it's "infeasible." I think the idea is to make it so
an attacker doesn't have to put lavish amounts of work into creating an
exploit that reliably re-exploits a hole over and over again; but to
make it so he can't make an exploit that actually works, unless it works
only by rediculously remote chance.
> So all security issues are about balancing cost vs gain. I'm convinced
> that the gain from openness is higher than the cost. Others will disagree.
>
Yes. Nobody code audits your binaries. You need source code to do
source code auditing. :)
> Linus
--
All content of all messages exchanged herein are left in the
Public Domain, unless otherwise explicitly stated.
On Iau, 2005-01-13 at 03:53, Marek Habersack wrote:
> That might be, but note one thing: not everybody runs vendor kernels (for various
> reasons). Now see what happens when the super-secret vulnerability (with
> vendor fixes) is described in an advisory. A person managing a fleet of machines
> (let's say 100) with custom, non-vendor kernels suddenly finds out that they
> have a buggy kernel and 100 machines to upgrade while the exploit and the
Those running 2.4 non-vendor kernels are just fine because Marcelo
chooses to work with vendor-sec while Linus chooses not to. I choose to
work with vendor-sec so generally the -ac tree is also fairly prompt on
fixing things.
Given that base 2.6 kernels are shipped by Linus with known unfixed
security holes, anyone trying to use them really should be doing some
careful thinking. In truth no 2.6 released kernel is suitable for
anything but beta testing until you add a few patches anyway.
2.6.9, for example, went out with known holes and broken AX.25 (known).
2.6.10 went out with the known holes mostly fixed, but with memory-corrupting
bugs, AX.25 still broken, and the wrong fix applied for the smb holes, so
SMB doesn't work on it.
I still think the 2.6 model works well because it's making very good
progress, and then others are doing testing and quality management on it.
Linus is doing the stuff he is good at and other people are doing the
stuff he isn't.
That change of model changes the security model too, however.
On Thursday 13 January 2005 19:59, you wrote:
> Linus Torvalds wrote:
> > On Thu, 13 Jan 2005, Alan Cox wrote:
> >>On Iau, 2005-01-13 at 16:38, Linus Torvalds wrote:
>
> [...]
>
> > Am I claiming that disallowing self-written ELF binaries gets rid of all
> > security holes? Obviously not. I'm claiming that there are things that
> > people can do that make it harder, and that _real_ security is not about
> > trusting one subsystem, but about making it hard enough in many independent
> > ways that it's just too effort-intensive to attack.
>
> I think you can make it non-guaranteeable.
>
> > It's the same thing with passwords. Clearly any password protected system
> > can be broken into: you just have to guess the password. It then becomes
> > a matter of how hard it is to "guess" - at some point you say a password
> > is secure not because it is a password, but because it's too _expensive_
> > to guess/break.
>
> You can't guarantee you can guess a password. You could for example
> write a PAM module that mandates a 3-second delay on failed
> authentication for a user (the console does this currently; use 3
> separate consoles and you can run the attack 3 times faster). Now you
> have to guess the password with one try every 3 seconds.
Already done; actually, it's standard practice. This does not actually mean
that you cannot guess a password, just that it will take longer (on average).
Luck and some knowledge about the system and its people speed up the process,
so the standard procedure, if you really want to get into a system with a
password, is to gather information.
>
> aA1#: 96 possible values per character, 8 characters: 7.2139x10^15
> combinations. It takes 686253404.7 years to go through all those at one
> every 3 seconds. You've got a good chance at half that.
>
> This isn't "hard," it's "infeasible." I think the idea is to make it so
> an attacker doesn't have to put lavish amounts of work into creating an
> exploit that reliably re-exploits a hole over and over again; but to
> make it so he can't make an exploit that actually works, unless it works
> only by rediculously remote chance.
>
> > So all security issues are about balancing cost vs gain. I'm convinced
> > that the gain from openness is higher than the cost. Others will
> > disagree.
>
> Yes. Nobody code audits your binaries. You need source code to do
> source code auditing. :)
>
> > Linus
--
<a href="http://www.edusupport.nl">EduSupport: Linux Desktop for schools and
small to medium business in The Netherlands and Belgium</a>
On Wed, Jan 12, 2005 at 09:38:07PM -0800, Barry K. Nathan scribbled:
> On Thu, Jan 13, 2005 at 04:53:31AM +0100, Marek Habersack wrote:
> > archived mail message or a webpage with the patch. Hoping he'll find the
> > fixes in the vendor kernels, he goes to download source packages from SuSe,
> > RedHat or Trustix, Debian, Ubuntu, whatever and discovers that it is as easy
> > to find the patch there as it is to fish it out of the vanilla kernel patch
> > for the new version. Frustrating, isn't it? Not to mention that he might
>
> http://linux.bkbits.net is your friend.
I know about that, but many people don't.
> Each patch (including security fixes) in the mainline kernels (2.4 and
> 2.6) appears there as an individual, clickable link with a description
> (e.g. "1.1551 Paul Starzetz: sys_uselib() race vulnerability
> (CAN-2004-1235)").
>
> If other patches have gone in since then, you may have to scroll through
> a (short-form) changelog. However, it's still less frustrating than the
> scenario you portray.
Less frustrating, yes; safer, not even slightly. You are still left on
thin ice precisely from the moment you are notified about the vulnerability
(when it goes public). Those who are not members of vendor-sec still don't
have the privilege of knowing about the vulnerability ahead of time, before
it goes "officially" public. Besides, I know a few people who administer
Linux machines who don't know what bkbits.net is, and they shouldn't have
to. There should be a single place, a webpage you can visit (or get an RSS
feed of), where you can be sure you will be among the first to know about a
vulnerability (yes, I know about the CIA feeds, but that is still not the
real thing, IMHO).
regards,
marek
On Thu, Jan 13, 2005 at 03:36:33PM +0000, Alan Cox wrote:
> 2.6.9, for example, went out with known holes and broken AX.25 (known).
> 2.6.10 went out with the known holes mostly fixed, but with memory-corrupting
> bugs, AX.25 still broken, and the wrong fix applied for the smb holes, so
> SMB doesn't work on it.
XFS on 2.6.10 does work. The patches you had in earlier -ac made it
not work.
On Thu, Jan 13, 2005 at 02:33:56PM -0500, Dave Jones wrote:
> > On Thu, Jan 13, 2005 at 03:36:33PM +0000, Alan Cox wrote:
> > > 2.6.9 for example went out with known holes and broken AX.25 (known)
> > > 2.6.10 went out with the known holes mostly fixed but memory corrupting
> > > bugs, AX.25 still broken and the wrong fix applied for the smb holes so
> > > SMB doesn't work on it
> >
> > XFS on 2.6.10 does work.
Freudian typo; it should have been smbfs, as should be obvious from the
context I replied to.
> Depends on your definition of 'work'.
> It oopses under load with NFS very easily,
Do you have a bugreport?
On Thu, Jan 13, 2005 at 03:36:27PM +0000, Alan Cox scribbled:
> On Mer, 2005-01-12 at 17:42, Marcelo Tosatti wrote:
> > The kernel security list must be higher in hierarchy than vendorsec.
> >
> > Any information sent to vendorsec must be sent immediately for the kernel
> > security list and discussed there.
>
> We cannot do this without the reporters permission. Often we get
I think I don't understand that. A reporter doesn't "own" the bug - not the
copyright, not the code, so how come they can own the fix/report?
> material that even the list isn't allowed to directly see only by
> contacting the relevant bodies directly as well. The list then just
> serves as a "foo should have told you about issue X" notification.
This sounds crazy. I understand that this may happen with proprietary
software, or software that is made/supported by a company but otherwise open source
(like OpenOffice, for instance), but the kernel?
regards,
marek
Norbert van Nobelen wrote:
> On Thursday 13 January 2005 19:59, you wrote:
>
[...]
>>You can't guarantee you can guess a password. You could for example
>>write a pam module that mandates a 3 second delay on failed
>>authentication for a user (it does it for the console currently; use 3
>>separate consoles and you can do the attack 3 times faster). Now you
>>have to guess the password with one try every 3 seconds.
>
>
> Already done, actually standard practice. This does not actually mean that you
> cannot guess a password, just that it will take longer (on average).
> Luck and some knowledge about the system and its people speed up the process, so
> the standard procedure, if you really want to get into a system with a
> password, is to gather information.
>
>
I'm pretty sure that you only get a 3 second delay on the specific
console. I've mistyped my root password on tty1, and switched to tty2
to log in before the delay was up.
As a test, switch to vc/0 and enter 'root', then press enter. Type a
bogus password.
Switch to vc/1, and enter 'root', then press enter. Type your real root
password.
Go back to vc/0 and hit enter so you submit your false password, then
immediately switch to vc/1 and hit enter.
You should get a bash shell and have enough time to switch to vc/0 and
see it still waiting for a second or two, before returning "login
incorrect."
Automating an attack on about 10 different ssh connections shouldn't be
a problem. Just keep creating them.
>
>>aA1# 96 possible values per character, 8 characters. 7.2139x10^15
>>combinations. It takes 686253404.7 years to go through all those at one
>>every 3 seconds. You've got a good chance at half that.
>>
>>This isn't "hard," it's "infeasible." I think the idea is to make it so
>>an attacker doesn't have to put lavish amounts of work into creating an
>>exploit that reliably re-exploits a hole over and over again; but to
>>make it so he can't make an exploit that actually works, unless it works
>>only by ridiculously remote chance.
>>
>>
>>>So all security issues are about balancing cost vs gain. I'm convinced
>>>that the gain from openness is higher than the cost. Others will
>>>disagree.
>>
>>Yes. Nobody code audits your binaries. You need source code to do
>>source code auditing. :)
>>
>>
>>> Linus
>
>
--
All content of all messages exchanged herein are left in the
Public Domain, unless otherwise explicitly stated.
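
The multi-console trick described in the message above is easy to quantify:
if the failure delay is enforced per console or per connection rather than
per account, N parallel sessions divide the search time by N. A toy model in
Python, reusing the numbers from earlier in the thread:

# Per-session delays don't serialize guessing globally: N parallel
# sessions (virtual consoles, ssh connections) cut the search time by N.
KEYSPACE = 96 ** 8            # same alphabet/length as above
DELAY = 3                     # seconds per guess within one session
SECONDS_PER_YEAR = 365 * 24 * 3600

for sessions in (1, 3, 10, 100):
    years = KEYSPACE * DELAY / sessions / SECONDS_PER_YEAR
    print("%4d sessions: %.3e years to exhaust" % (sessions, years))

The absolute numbers stay astronomically large for a random 8-character
password; the point is only that a per-tty delay is a much weaker throttle
than it looks, which is why the delay would need to be tracked per account
to be effective.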
On Thu, 13 Jan 2005, John Richard Moser wrote:
>
> > So all security issues are about balancing cost vs gain. I'm convinced
> > that the gain from openness is higher than the cost. Others will disagree.
>
> Yes. Nobody code audits your binaries. You need source code to do
> source code auditing. :)
Oh, it's very clear that some exploits have definitely been written by
looking at the source code with automated tools or by instrumenting
things, and that the exploits would likely have never been found without
source code. That's fine. We just have higher requirements in the open
source community.
And I do think that the same is true for being open about security
advisories: I think that to offset an open security list, we'd have to
then have more "best practices" than a vendor-sec-type closed security
list might need. I think it would be worth it.
Linus
* Marek Habersack ([email protected]) wrote:
> On Thu, Jan 13, 2005 at 03:36:27PM +0000, Alan Cox scribbled:
> > On Mer, 2005-01-12 at 17:42, Marcelo Tosatti wrote:
> > > The kernel security list must be higher in hierarchy than vendorsec.
> > >
> > > Any information sent to vendorsec must be sent immediately for the kernel
> > > security list and discussed there.
> >
> > We cannot do this without the reporters permission. Often we get
> I think I don't understand that. A reporter doesn't "own" the bug - not the
> copyright, not the code, so how come they can own the fix/report?
It's not about ownership. It's about disclosure and common sense.
If someone reports something to you in private, and you disclose it
publicly (or even privately to someone else) without first discussing
that with them, you'll lose their confidence. Consequently they won't
be so kind to give you forewarning next time.
> > material that even the list isn't allowed to directly see only by
> > contacting the relevant bodies directly as well. The list then just
> > serves as a "foo should have told you about issue X" notification.
> This sounds crazy. I understand that this may happen with proprietary
> software, or software that is made/supported by a company but otherwise open source
> (like OpenOffice, for instance), but the kernel?
Licensing is irrelevant. Like it or not, the person who is discovering
the bugs has some say in how you deal with the information. It's in our
best interest to work nicely with these folks, not marginalize them.
thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
Linus Torvalds wrote:
>
> On Thu, 13 Jan 2005, John Richard Moser wrote:
>
>>>So all security issues are about balancing cost vs gain. I'm convinced
>>>that the gain from openness is higher than the cost. Others will disagree.
>>
>>Yes. Nobody code audits your binaries. You need source code to do
>>source code auditing. :)
>
>
> Oh, it's very clear that some exploits have definitely been written by
> looking at the source code with automated tools or by instrumenting
> things, and that the exploits would likely have never been found without
> source code. That's fine. We just have higher requirements in the open
> source community.
Yeah but malicious people are more determined than whitehats and
greyhats. If I'm trying to find bugs to help you fix them, I'm not
going to waste my time on running your binaries through a debugger. If
I want to use your machine as a sock puppet to attack SCO, then maybe.
In contrast, if I've got a good background in programming and want to
help you find and fix security bugs, it's not that big a deal for me to
brush over your source code. If I'm just in there to improve it or add
new features, I might even ACCIDENTALLY stumble over something. This is
where OSS becomes more secure :)
I think we're on the same page, Linus :)
>
> And I do think that the same is true for being open about security
> advisories: I think that to offset an open security list, we'd have to
> then have more "best practices" than a vendor-sec-type closed security
> list might need. I think it would be worth it.
>
It'd need control. You can start an open security advisory list if you
like, but don't just flip off the vendors who want to keep their
security advisories quiet until they have a fix.
Aside from that, go for it.
> Linus
>
--
All content of all messages exchanged herein are left in the
Public Domain, unless otherwise explicitly stated.
On Iau, 2005-01-13 at 19:35, Christoph Hellwig wrote:
> Freudian typo; it should have been smbfs, as should be obvious from the
> context I replied to.
It works in some situations but not others. Chuck Ebbert fixed this but
it's never gotten upstream, although I think Andrew is now looking at
it.
On Thu, Jan 13, 2005 at 08:42:46PM +0100, Marek Habersack wrote:
> On Thu, Jan 13, 2005 at 03:36:27PM +0000, Alan Cox scribbled:
> > On Mer, 2005-01-12 at 17:42, Marcelo Tosatti wrote:
> > > The kernel security list must be higher in hierarchy than vendorsec.
> > >
> > > Any information sent to vendorsec must be sent immediately for the kernel
> > > security list and discussed there.
> >
> > We cannot do this without the reporters permission. Often we get
> I think I don't understand that. A reporter doesn't "own" the bug - not the
> copyright, not the code, so how come they can own the fix/report?
Security researchers are an odd bunch. They're very attached to their
bugs in the sense they want to be the ones who get the glory for
having reported it.
As soon as bugs start getting forwarded around between lists, the
potential for leaks increases greatly. The recent fiasco surrounding
one of the isec.pl holes was believed to have been caused by
someone 'sniffing upstream', for example.
When issues get leaked, the incentive for a researcher to use the
same process again goes away, which hurts us. Basically, trying
to keep them happy is in our best interests.
Dave
On Iau, 2005-01-13 at 17:33, Linus Torvalds wrote:
> Scripts can only do what the interpreter does. And it's often a lot harder
> to get the interpreter to do certain things. For example, you simply
> _cannot_ get any thread race conditions with most scripts out there, nor
> can you generally use magic mmap patterns.
And then perl was invented.
> Am I claiming that disallowing self-written ELF binaries gets rid of all
> security holes? Obviously not. I'm claiming that there are things that
> people can do that make it harder, and that _real_ security is not about
> trusting one subsystem, but in making it hard enough in many independent
> ways that it's just too effort-intensive to attack.
It lasts until someone publishes the first perl ELF loader/executor on
bugtraq, or ruby, or python, or java. Then everyone has it.
> It's the same thing with passwords. Clearly any password protected system
> can be broken into: you just have to guess the password. It then becomes a
> matter of how hard it is to "guess" - at some point you say a password is
> secure not because it is a password, but because it's too _expensive_ to
> guess/break.
It's more like breaking a password algorithm or everyone having the same
password unfortunately. One perl ELF loader, game over. You can do this
stuff with SELinux but even then it is very hard and you have to whack
the interpreters.
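
Alan's point about interpreters and "magic mmap patterns" can be illustrated
without writing any exploit. A minimal sketch in Python, assuming x86-64
Linux and a policy that still permits writable+executable anonymous
mappings; the six code bytes are just mov eax, 42 ; ret:

# A script interpreter that exposes mmap() can run native code without
# ever exec()ing an ELF binary - the essence of a "perl ELF loader".
import ctypes
import mmap

CODE = b"\xb8\x2a\x00\x00\x00\xc3"  # x86-64: mov eax, 42 ; ret

mem = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
mem.write(CODE)

addr = ctypes.addressof(ctypes.c_char.from_buffer(mem))
func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
print(func())  # prints 42: native code executed from a "mere script"

Once any widely installed interpreter offers this, blocking self-written ELF
binaries stops being a meaningful barrier, which is exactly the "one perl
ELF loader, game over" argument.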
On Thu, 13 Jan 2005, Dave Jones wrote:
>
> When issues get leaked, the incentive for a researcher to use the
> same process again goes away, which hurts us. Basically, trying
> to keep them happy is in our best interests.
Not so.
_balancing_ their happiness with our needs is what's in our best
interests. Yes, we should encourage them to tell us, but totally bending
over backwards is definitely the wrong thing to do.
In fact, right now we seem to encourage even people who do _not_
necessarily want the delay and secrecy to go over to vendor-sec, just
because the vendor-sec people are clearly arguing even against
alternatives.
Which is something I do not understand. The _apologia_ for vendor-sec is
absolutely stunning. Even if there are people who want to only interface
with a fascist vendor-sec-style absolute secrecy list, THAT IS NOT AN
EXCUSE TO NOT HAVE OPEN LISTS IN _ADDITION_!
In other words, I really don't understand this total subjugation by people
to the vendor-sec mentality. It's a disease, I tell you.
Linus
On Thu, Jan 13, 2005 at 03:36:33PM +0000, Alan Cox scribbled:
> On Iau, 2005-01-13 at 03:53, Marek Habersack wrote:
> > That might be, but note one thing: not everybody runs vendor kernels (for various
> > reasons). Now see what happens when the super-secret vulnerability (with
> > vendor fixes) is described in an advisory. A person managing a fleet of machines
> > (let's say 100) with custom, non-vendor kernels suddenly finds out that they
> > have a buggy kernel and 100 machines to upgrade while the exploit and the
>
> Those running 2.4 non-vendor kernels are just fine because Marcelo
> chooses to work with vendor-sec while Linus chooses not to. I choose to
> work with vendor-sec so generally the -ac tree is also fairly prompt on
> fixing things.
That's fine, but if one isn't on vendor-sec, they are still out in the cold
until the vulnerability with an embargo is announced - at which point all
the vendors are ready, but those with non-vendor kernels are in for an
unpleasant surprise. And as for 2.4, yes, Marcelo does a good job applying
the fixes asap, but that's not helping. If one runs (as I wrote) a kernel
with custom code inside, tux and, say, grsecurity - and it's not the latest
2.4 kernel - he still needs to backport the fixes and make sure they work
fine with his custom code, all that in a great hurry. Somebody suggested
here that perhaps there could be a version of a security fix released for
X past kernel versions (2? 3?) if it doesn't apply cleanly to them. That
would be a great help along with earlier notification of a problem - not in
the way it is done with vendor-sec where you have to wear a pointy hat and
a beard to be accepted as a member. It's not that I'm whining or bitching,
hell no, I just think it would be more fair if everybody was treated the
same - vendors, non-vendors, bad guys, all alike.
> Given that base 2.6 kernels are shipped by Linus with known unfixed
> security holes anyone trying to use them really should be doing some
> careful thinking. In truth no 2.6 released kernel is suitable for
> anything but beta testing until you add a few patches anyway.
>
> 2.6.9 for example went out with known holes and broken AX.25 (known)
> 2.6.10 went out with the known holes mostly fixed but memory corrupting
> bugs, AX.25 still broken and the wrong fix applied for the smb holes so
> SMB doesn't work on it
>
> I still think the 2.6 model works well because it's making very good
> progress and then others are doing testing and quality management on it.
> Linus is doing the stuff he is good at and other people are doing the
> stuff he doesn't.
>
> That change of model changes the security model too however.
Yes, definitely. IMHO, it enforces prompt and open security advisory/patch
releases, just as Linus proposed (with the limited embargo). Of course, one
can just take a released vendor kernel, patch it with their custom code and
compile it the way they see fit, but it's not in any way faster or better
than backporting the fixes to your own kernel.
regards,
marek
On Iau, 2005-01-13 at 19:42, Marek Habersack wrote:
> On Thu, Jan 13, 2005 at 03:36:27PM +0000, Alan Cox scribbled:
> > We cannot do this without the reporters permission. Often we get
> I think I don't understand that. A reporter doesn't "own" the bug - not the
> copyright, not the code, so how come they can own the fix/report?
They own the report. Who owns it is kind of irrelevant. If we publish it
when they don't want it published then next time they'll send it to
full-disclosure or worse still just share an exploit with the bad guys.
So unless we get really stoopid requests we try not to annoy people -
hole reporting is a voluntary activity.
> > material that even the list isn't allowed to directly see only by
> > contacting the relevant bodies directly as well. The list then just
> > serves as a "foo should have told you about issue X" notification.
> This sounds crazy. I understand that this may happen with proprietary
> > software, or software that is made/supported by a company but otherwise open source
> (like OpenOffice, for instance), but the kernel?
It's not uncommon. Not all security bodies (especially government
security agencies) trust vendor-sec directly, only some members on the
basis of their own private auditing/background checks.
Alan
On Thu, Jan 13, 2005 at 11:50:04AM -0800, Chris Wright scribbled:
> * Marek Habersack ([email protected]) wrote:
> > On Thu, Jan 13, 2005 at 03:36:27PM +0000, Alan Cox scribbled:
> > > On Mer, 2005-01-12 at 17:42, Marcelo Tosatti wrote:
> > > > The kernel security list must be higher in hierarchy than vendorsec.
> > > >
> > > > Any information sent to vendorsec must be sent immediately for the kernel
> > > > security list and discussed there.
> > >
> > > We cannot do this without the reporters permission. Often we get
> > I think I don't understand that. A reporter doesn't "own" the bug - not the
> > copyright, not the code, so how come they can own the fix/report?
>
> It's not about ownership. It's about disclosure and common sense.
> If someone reports something to you in private, and you disclose it
> publicly (or even privately to someone else) without first discussing
> that with them, you'll lose their confidence. Consequently they won't
> be so kind to give you forewarning next time.
I understand that, but I don't see a point in holding the fixes back for the
majority of people (since the vendors on vendor-sec are a minority and I
suspect that more people run self-compiled kernels on their servers than the
vendor kernels, I might be wrong on that). If there is a list that's at
least half-open (i.e. invitation required, but no CV required :) then there
is no issue of confidence, is there? And with such list, everybody has
equal chances - bad, good and the ugly too. Maybe my logic is flawed, but
that's how I see it - the linux kernel is a piece of open code, accessible
to all, with all its features, bugs, flaws. So, if the code is open, the
reports about the code security/bugs should be as open, together with fixes,
from day one of finding the bug. Otherwise, if we have the scenario where
the vendor-sec members are informed about a bug+fix 2 months earlier and the
vulnerability+fix are disclosed 2 months later, then this is creating a
situation where not everybody has equal chances of reacting to the bug. As I
wrote earlier, that puts the folks using non-vendor kernels way behind both
the vendors _and_ the bad guys - since the latter have both the
vulnerability, the fix _and_ (usually) the exploit (or they can come up with
it in a matter of hours). For me it's all about equal chances in reacting to
the security issues. Again, I might be totally wrong in my reasoning, feel
free to correct me.
> > > material that even the list isn't allowed to directly see only by
> > > contacting the relevant bodies directly as well. The list then just
> > > serves as a "foo should have told you about issue X" notification.
> > This sounds crazy. I understand that this may happen with proprietary
> > software, or software that is made/supported by a company but otherwise open source
> > (like OpenOffice, for instance), but the kernel?
>
> Licensing is irrelevant. Like it or not, the person who is discovering
> the bugs has some say in how you deal with the information. It's in our
> best interest to work nicely with these folks, not marginalize them.
It's not about marginalizing, because by requesting that their report is
kept secret for a while and known only to a small bunch of people, you could
say they are marginalizing us, the majority of people who use the linux
kernel (us - those who aren't on the vendor-sec list). It's, again IMHO,
about equal chances. More and more often it seems that security advisories
and releases are treated as an asset for security companies, not a common
good/knowledge. And that's pretty sad...
regards,
marek
On Thu, Jan 13, 2005 at 03:03:08PM -0500, Dave Jones scribbled:
> On Thu, Jan 13, 2005 at 08:42:46PM +0100, Marek Habersack wrote:
> > On Thu, Jan 13, 2005 at 03:36:27PM +0000, Alan Cox scribbled:
> > > On Mer, 2005-01-12 at 17:42, Marcelo Tosatti wrote:
> > > > The kernel security list must be higher in hierarchy than vendorsec.
> > > >
> > > > Any information sent to vendorsec must be sent immediately for the kernel
> > > > security list and discussed there.
> > >
> > > We cannot do this without the reporters permission. Often we get
> > I think I don't understand that. A reporter doesn't "own" the bug - not the
> > copyright, not the code, so how come they can own the fix/report?
>
> Security researchers are an odd bunch. They're very attached to their
> bugs in the sense they want to be the ones who get the glory for
> having reported it.
Let them have it! We can even chip in to run banner ads on freshmeat with
their name on it, I'm all game for that. Or create an RFC-like archive of
the vulnerabilities, governed by the same rules - no changes after publishing.
Their names will be circulating around the internet forever.
> As soon as bugs start getting forwarded around between lists, the
> potential for leaks increases greatly. The recent fiasco surrounding
> one of the isec.pl holes was believed to have been caused by
> someone 'sniffing upstream', for example.
I think it would be a non-issue if there was no drive towards keeping it
secret at all cost. It would be out in the open, nothing else, nothing more.
> When issues get leaked, the incentive for a researcher to use the
> same process again goes away, which hurts us. Basically, trying
> to keep them happy is in our best interests.
You've said they want glory, we can give it to them in many ways without
keeping their discoveries secret.
best regards,
marek
On Iau, 2005-01-13 at 20:10, Linus Torvalds wrote:
> In fact, right now we seem to encourage even people who do _not_
> necessarily want the delay and secrecy to go over to vendor-sec, just
> because the vendor-sec people are clearly arguing even against
> alternatives.
If someone posts something to vendor-sec that says "please tell Linus",
we would. If someone posts to vendor-sec saying "I posted this to
linux-kernel, here's a heads up", it's useful. If you are an uber-cool
elite 0-day disclosure weenie, you post to full-disclosure or bugtraq.
There are alternatives 8)
> Which is something I do not understand. The _apologia_ for vendor-sec is
> absolutely stunning. Even if there are people who want to only interface
> with a fascist vendor-sec-style absolute secrecy list, THAT IS NOT AN
> EXCUSE TO NOT HAVE OPEN LISTS IN _ADDITION_!
I'm all for an open list too. It's currently called linux-kernel. It's
full of such reports, and most of them are about new code or trivial
holes where secrecy is pointless. Having an open linux-security list so
they don't get missed as the grsecurity stuff did (and until I got fed
up of waiting the coverity stuff did) would help because it would make
sure that it didn't get buried in the noise.
Similarly, it would help if, when you sneak security fixes in (as you do
regularly), you actually told the vendors about them.
Alan
> > > > 2.6.9 for example went out with known holes and broken AX.25 (known)
> > > > 2.6.10 went out with the known holes mostly fixed but memory corrupting
> > > > bugs, AX.25 still broken and the wrong fix applied for the smb holes so
> > > > SMB doesn't work on it
> > > XFS on 2.6.10 does work.
>
> Freudian typo; it should have been smbfs, as should be obvious from the
> context I replied to.
The smbfs breakage seemed to depend on what server you were using.
For a lot of folks it broke horribly.
> > Depends on your definition of 'work'.
> > It oopses under load with NFS very easily,
> Do you have a bugreport?
There are a number of XFS-related issues in Red Hat bugzilla.
As it's not something we actively support, they've not got a lot
of attention. Some of them are quite old (dating back to 2.6.6 or so),
so they may already have been fixed.
We've also seen a few reports on Fedora mailing lists.
Dave
On Thu, Jan 13, 2005 at 07:25:12PM +0000, Christoph Hellwig wrote:
> On Thu, Jan 13, 2005 at 03:36:33PM +0000, Alan Cox wrote:
> > 2.6.9 for example went out with known holes and broken AX.25 (known)
> > 2.6.10 went out with the known holes mostly fixed but memory corrupting
> > bugs, AX.25 still broken and the wrong fix applied for the smb holes so
> > SMB doesn't work on it
>
> XFS on 2.6.10 does work.
Depends on your definition of 'work'.
It oopses under load with NFS very easily, though that's not helped
by 4K stacks.
Dave
On Iau, 2005-01-13 at 20:29, Marek Habersack wrote:
> I understand that, but I don't see a point in holding the fixes back for the
> majority of people (since the vendors on vendor-sec are a minority and I
vendor-sec probably covers the majority of users. It covers:
2.4
2.6-ac
Red Hat
SuSE
Debian
Gentoo
Mandrake
and many more including some of the BSD folk (a lot of user space bugs
are common)
2.6 base isn't covered because Linus has differing views.
> suspect that more people run self-compiled kernels on their servers than the
> vendor kernels, I might be wrong on that). If there is a list that's at
I'd say you are very, very wrong from the data I have access to,
probably of the order of 1000:1 wrong or more.
> > Licensing is irrelevant. Like it or not, the person who is discovering
> > the bugs has some say in how you deal with the information. It's in our
> > best interest to work nicely with these folks, not marginalize them.
> It's not about marginalizing, because by requesting that their report is
> kept secret for a while and known only to a small bunch of people, you could
> say they are marginalizing us, the majority of people who use the linux
> kernel (us - those who aren't on the vendor-sec list). It's, again IMHO,
They chose to. A lot of people report bugs directly to Linus too or to
the lists or to full-disclosure depending upon their view. The folks who
report bugs in private either to Linus or to vendor-sec or maintainers
or whoever generally believe that the bad guys can move faster and cause
a lot of damage if a bug isn't fixed before announce.
That's based on the observation that
- the bad guys have to move a small exploit versus a large binary
- the exploit doesn't have to pass quality assurance, you just write more
- they can automate the attack tools very effectively
So the non-disclosure argument is perhaps put as "equality of access at
the point of discovery means everyone gets rooted". And if you want a
lot more detail on this, read papers on the models of security economics
- it's a well-studied field.
Alan
On Thu, Jan 13, 2005 at 07:19:45PM +0000, Alan Cox scribbled:
> On Iau, 2005-01-13 at 19:42, Marek Habersack wrote:
> > On Thu, Jan 13, 2005 at 03:36:27PM +0000, Alan Cox scribbled:
> > > We cannot do this without the reporters permission. Often we get
> > I think I don't understand that. A reporter doesn't "own" the bug - not the
> > copyright, not the code, so how come they can own the fix/report?
>
> They own the report. Who owns it is kind of irrelevant. If we publish it
> when they don't want it published then next time they'll send it to
> full-disclosure or worse still just share an exploit with the bad guys.
> So unless we get really stoopid requests we try not to annoy people -
> hole reporting is a voluntary activity.
Sounds a bit backwards to me. It's like surrendering to a guy who attacks you
on the street "because he's got a knife and I don't". There is some sense in
it, but that way you're putting yourself in the position of a victim. The
reporters... OK, they own the report, but do they own the information?
> > > material that even the list isn't allowed to directly see only by
> > > contacting the relevant bodies directly as well. The list then just
> > > serves as a "foo should have told you about issue X" notification.
> > This sounds crazy. I understand that this may happen with proprietary
> > software, or software that is made/supported by a company but otherwise open source
> > (like OpenOffice, for instance), but the kernel?
>
> It's not uncommon. Not all security bodies (especially government
> security agencies) trust vendor-sec directly, only some members on the
> basis of their own private auditing/background checks.
So it sounds like we, the men in the crowd, are really left out in the cold -
the people who are affected the most by the issues. The vendors are not
affected by the bugs (playing devil's advocate here), since they fix them
for their machines as the bugs appear, way before they go public.
best regards,
marek
On Thu, Jan 13, 2005 at 03:36:27PM +0000, Alan Cox wrote:
> On Mer, 2005-01-12 at 17:42, Marcelo Tosatti wrote:
> > The kernel security list must be higher in hierarchy than vendorsec.
> >
> > Any information sent to vendorsec must be sent immediately for the kernel
> > security list and discussed there.
>
> We cannot do this without the reporters permission. Often we get
> material that even the list isn't allowed to directly see only by
> contacting the relevant bodies directly as well. The list then just
> serves as a "foo should have told you about issue X" notification.
Well the reporters, and vendorsec, have to be aware that the
"kernel security list" is the main discussion point of kernel security issues.
If the embargo period is reasonable for vendors to prepare their updates and
do necessary QA, there should be no need for kernel issues to be coordinated
(and embargoed) on vendorsec anymore. Does it make sense?
Of course vendorsec gets informed of what is happening at "kernel security list".
The main reason reporters require "permission" to spread the information
is that they want to make PR out of their discovery, yes?
In that case they should be aware that submitting to vendorsec means submitting
to kernel security, and that means X days of embargo period.
> If you are setting up the list also make sure it's entirely encrypted
> after the previous sniffing incident.
Definitely, I asked Chris about it...
On Thu, 13 Jan 2005, Alan Cox wrote:
>
> I'm all for an open list too. It's currently called linux-kernel. It's
> full of such reports, and most of them are about new code or trivial
> holes where secrecy is pointless. Having an open linux-security list so
> they don't get missed as the grsecurity stuff did (and until I got fed
> up of waiting the coverity stuff did) would help because it would make
> sure that it didn't get buried in the noise.
Yes. But I know people send private emails because they don't want to
create a scare, so I think we actually have several levels of lists:
- totally open: linux-kernel, or an alternative with lower noise
We've kind of got this, but things get lost in the noise, and "white
hat" people don't like feeling guilty about announcing things.
- no embargo, no rules, but "private" in the sense that it's supposed to
be for kernel developers only or at least people who won't take
advantage of it.
_I_ think this is the one that makes sense. No hard rules, but private
enough that people won't feel _guilty_ about reporting problems. Right
now I sometimes get private email from people who don't want to point
out some local DoS or similar, and that can certainly get lost in the
flow.
- _short_ embargo, for kernel-only. I obviously believe that vendor-sec
is whoring itself for security firms and vendors. I believe there would
be a place for something with stricter rules on disclosure.
- vendor-sec. The place where you can play any kind of games you want.
It's not a black-and-white thing. I refuse to believe that most security
problems are found by people without any morals. I believe that somewhere
in the middle is where most people feel most comfortable.
Linus
On Thu, Jan 13, 2005 at 07:41:10PM +0000, Alan Cox scribbled:
[snip]
> > suspect that more people run self-compiled kernels on their servers than the
> > vendor kernels, I might be wrong on that). If there is a list that's at
>
> I'd say you are very very wrong from the data I have access too,
> probably of the order of 1000:1 wrong or more.
I stand corrected then; you have access to much better sources than I do,
no doubt.
> > > Licensing is irrelevant. Like it or not, the person who is discovering
> > > the bugs has some say in how you deal with the information. It's in our
> > > best interest to work nicely with these folks, not marginalize them.
>
> > It's not about marginalizing, because by requesting that their report is
> > kept secret for a while and known only to a small bunch of people, you could
> > say they are marginalizing us, the majority of people who use the linux
> > kernel (us - those who aren't on the vendor-sec list). It's, again IMHO,
>
> They chose to. A lot of people report bugs directly to Linus too or to
> the lists or to full-disclosure depending upon their view. The folks who
> report bugs in private either to Linus or to vendor-sec or maintainers
> or whoever generally believe that the bad guys can move faster and cause
They can still move faster when the vulnerability (and the fixed vendor
kernels) are released. The people who are to install the kernels usually
cannot act immediately, so if the bad guys have somebody on target, they
will root them anyway. I see no difference here from a model of a totally
open disclosure list.
> a lot of damage if a bug isn't fixed before announce.
Again, it works for vendors, not for end users, IMO.
> That's based on the observation that
> - the bad guys have to move a small exploit versus a large binary
Delayed release doesn't change that. One still needs to download and deploy
the kernels (possibly compiling them first).
> - the exploit doesn't have to pass quality assurance, you just write more
Again, closed mailing lists don't change that.
> - they can automate the attack tools very effectively
Ditto.
> So the non-disclosure argument is perhaps put as "equality of access at
> the point of discovery means everyone gets rooted". And if you want a
> lot more detail on this, read papers on the models of security economics
> - it's a well-studied field.
Theory is fine; in practice the closed disclosure list changes matters
only for a tiny minority of people - those who have to install the fixed
kernels are in exactly the same situation they would be in if there were a
fully open disclosure list.
All of this is IMHO, of course - I can't stress that enough :)
best regards,
marek
On Thu, 13 Jan 2005, Arjan van de Ven wrote:
>
> On Thu, 2005-01-13 at 19:41 +0000, Alan Cox wrote:
>
> > So the non-disclosure argument is perhaps put as "equality of access at
> > the point of discovery means everyone gets rooted". And if you want a
> > lot more detail on this, read papers on the models of security economics
> > - it's a well-studied field.
>
> or in other words: you can write an exploit faster than you can write
> the fix, so the thing needs delaying until a fix is available to make it
> more equal.
That's a bogus argument, and anybody who looks at MS practices and
is at all honest with himself should see it as a bogus argument.
I think MS _still_ to this day will stand up and say that they have had no
zero-day exploits. Exactly because they count "zero-day" as the day things
get publicly released. Never mind that exploits were (and are)
privately available on cracking networks for months before. They just
haven't been publicly released BECAUSE EVERYBODY IS PARTICIPATING IN THE
GAME.
The written rule in this community is "no honest person will report a bug
before its time is through". Which automatically means that you get
branded as being "bad" if you ever rock the boat. That's a piece of
bullshit, and anybody who doesn't admit it is being totally dishonest with
himself.
Me, I consider that to be dirty.
Does Linux have a better track record than MS? Damn right it does. We've
had fewer problems, and I think there are more people out there standing
up for what's right anyway. Fewer PR people deathly afraid of rocking the
boat. Better technology, and fewer horrid design mistakes.
But that doesn't mean that all the same things aren't true for vendor-sec
that are true for MS. They are just bad to a (much, I hope) smaller
degree.
So instead, let's look at FACTS:
- fixing a security bug is almost always much easier than writing an
exploit. Arjan, your argument simply isn't true except for the worst
possible fundamental design issues. You should know that. In the case
of "uselib()", it was literally four lines of obvious code - all the
rest was just to make sure that there weren't any other cases like that
lurking around.
- There are more white-hats around than black-hats, but they are often
less "driven" and motivated. Now _that_, I would argue, is the real
problem with early disclosure - motivation. The people really
motivated to find the bugs are the people who are also motivated to
mis-use them. However, vendor-sec and "the game" just makes it more
worth-while for security firms to participate in it - it gives them the
"good PR" thing. And how much can you trust the "gray hats"?
And this is why I believe vendor-sec is part of the problem. If you don't
see that, then you're blinding yourself to the downsides, and trying to
only look at the upsides.
Are there advantages and upsides? Yes. Are there disadvantages?
Indubitably. And anybody who disregards the disadvantages as "inevitable"
is not really interested in fixing the game.
Linus
On Thu, Jan 13, 2005 at 10:02:29PM +0100, Marek Habersack wrote:
> Theory is fine; in practice the closed disclosure list changes matters
> only for a tiny minority of people - those who have to install the fixed
> kernels are in exactly the same situation they would be in if there were a
> fully open disclosure list.
No, it's not the same. They're in a _worse_ situation if anything.
With open disclosure, the bad guys get even more lead time.
If admins don't install updates in a timely manner, there's
not a lot we can do about it. For those that _do_ however,
we can make their lives a lot more stress free.
Dave
> But that doesn't mean that all the same things aren't true for vendor-sec
> that are true for MS. They are just bad to a (much, I hope) smaller
> degree.
(for the record, I'm no great fan of vendor-sec, and haven't been on it
for quite some time and am glad for that. I also try to avoid it for
things I find myself for a lot of the reasons you stated earlier.
However I do still think that it is nice if people who find security
issues give the upstream author (or a select subset thereof) SOME time
to come up with a fix, and audit for similar bugs elsewhere in the code.
1 week should in nearly all cases be more than plenty for that though)
> So instead, let's look at FACTS:
>
> - fixing a security bug is almost always much easier than writing an
> exploit. Arjan, your argument simply isn't true except for the worst
> possible fundamental design issues. You should know that. In the case
> of "uselib()", it was literally four lines of obvious code - all the
> rest was just to make sure that there weren't any other cases like that
> lurking around.
I've seen it both ways; some of the worst issues fix-wise (remember the
seek thing) took a while to fix, and especially to audit the rest of the
code for the same bug. OK, it's also the best proof against the v-s
approach at the same time, since the fix that came out of that wasn't a
really nice/good/maintainable fix.
Also I'm thinking "hours", not "days", here. Getting a fix done and
released will generally take a few hours; for example, you sleep a few
hours a day already, so a fix just cannot make your bk tree "within 2
hours" at any time of the day. Oh of course there is lkml, and patches
go out that way as well; but that's not quite the same. Sometimes
someone gives a "here this fixes it" patch that is worse than the
original problem.
> - There are more white-hats around than black-hats, but they are often
> less "driven" and motivated. Now _that_, I would argue, is the real
> problem with early disclosure - motivation. The people really
> motivated to find the bugs are the people who are also motivated to
> mis-use them. However, vendor-sec and "the game" just makes it more
> worth-while for security firms to participate in it - it gives them the
> "good PR" thing. And how much can you trust the "gray hats"?
I've seen enough of this go wrong/get abused to agree with you to a large
extent. To the point where I have a natural initial distrust of people
who come with an embargoed hole and want to make a really big splash
about it (as opposed to mere developers finding something by accident
who want to do the right thing). Both sides happen. The latter ones we
should have something for, that is both easy for such developers to
report, and that gets the patch out at least close in time to the hole
becoming widespread knowledge.
> And this is why I believe vendor-sec is part of the problem. If you don't
> see that, then you're blinding yourself to the downsides, and trying to
> only look at the upsides.
I think v-s has gone too far and is not really useful anymore. In the
last four years I've been in a job where I both got to like the few days
of advance notice and got to hate the downsides. (fwiw I'm no longer in
that position so by now I probably have a different perspective than
Dave has;). I would absolutely agree with the statement that the
downsides of the current v-s outweigh the advantages by far.
That does not mean that I think that ANY system that makes it easy for
people to report things such that the upstream developers get a
reasonable advance notice and can come up with a fix before the hole
goes public is flawed in design. It can be done right, and some of the
things on this list (putting a time cap on the notice period for
example) will go a long way to making a sensible system that serves the
users of the software best. After all, it's a balance between remaining
vulnerable a bit longer than perhaps strictly needed versus having a lot
of active exploits out there without a fix. And yes, you can't guarantee
that the latter isn't the case while you think you're doing the former.
That's the gamble you take, and that is what a reasonable cap on the
disclosure time will at least limit in extent.
Also, to be fair, I suspect half the things that come into v-s and
similar don't have a reporter who is interested in more than "I let them
know and please let me forget about it now I want to move on with other
things in life". Getting that case right is important.
On Thu, Jan 13, 2005 at 10:48:14PM +0100, Marek Habersack wrote:
> > If admins don't install updates in a timely manner, there's
> > not a lot we can do about it. For those that _do_ however,
> > we can make their lives a lot more stress free.
> Indeed, but what does it have to do with a closed disclosure list?
For the N'th time, it gives vendors a chance to have packages
ready at the time of disclosure.
> With open
> disclosure list you provide a set of fixes right away, the admins take them
> and apply. With closed list you do the same, but with a delay (which gives
> an opportunity for a "race condition" with the bad guys, one could argue).
> So, what's the advantage of the delayed disclosure?
Not having to panic and rush out releases on day of disclosure.
Not having users vulnerable whilst packages build/get QA/get pushed to mirrors.
Users of kernel.org kernels get to build and boot in under an hour.
Vendor kernels take a lot longer to build.
1- More architectures.
(And trust me, there's nothing I'd like more than to be able
to increase the speed of kernel builds on some of the architectures
we support).
2- More generic, ie more modules to build.
In the case of public disclosure of issues that we weren't aware of,
it's a miracle that we get update kernels out on day of disclosure
in some cases. (In others, we don't, and the same applies to other vendors too)
Dave
On Iau, 2005-01-13 at 17:22, Marcelo Tosatti wrote:
> Well the reporters, and vendorsec, have to be aware that the
> "kernel security list" is the main discussion point of kernel security issues.
As it should be - I'd rather Linus was fixing these bugs than some of
the other stuff that comes out. The fix quality would go up markedly.
> If the embargo period is reasonable for vendors to prepare their updates and
> do necessary QA, there should be no need for kernel issues to be coordinated
> (and embargoed) on vendorsec anymore. Does it make sense?
vendor-sec was never intended to be a kernel security list; it became
one by necessity. It's mostly actually talking about crap like gaim,
xpdf, gaim, gaim again. It's a contact point for any security problem
related to Linux, and it normally works with the authors unless they
don't want to work with us.
> The main reason for reporters to require "permission" to spread the information
> is because they want make a PR of their discovery, yes?
Sometimes. Others like CERT have set disclosure dates across many
vendors already and aren't in the PR business so much as the "this is a
linux and windows and apple bug" business. Most of those cross platform
bugs are user space but far from all.
> In that case they should be aware that submitting to vendorsec means submitting
> to kernel security, and that means X days of embargo period.
Then if the dates don't suit them they won't submit to vendor-sec and
we'll have to set up vendor-sec-two for them or build individual
relationships which are bound to mean the small vendors suffer. We can
push them, we can ask them to report to linux-security but we can't make
them jump.
Alan
On Thu, Jan 13, 2005 at 05:06:52PM -0500, Dave Jones scribbled:
> On Thu, Jan 13, 2005 at 10:48:14PM +0100, Marek Habersack wrote:
>
> > > If admins don't install updates in a timely manner, there's
> > > not a lot we can do about it. For those that _do_ however,
> > > we can make their lives a lot more stress free.
> > Indeed, but what does it have to do with a closed disclosure list?
>
> For the N'th time, it gives vendors a chance to have packages
> ready at the time of disclosure.
I've heard that N times, too. I'm just failing to see that this justifies
the existence of that list and that's why I'm asking for other arguments.
> > With open
> > disclosure list you provide a set of fixes right away, the admins take them
> > and apply. With closed list you do the same, but with a delay (which gives
> > an opportunity for a "race condition" with the bad guys, one could argue).
> > So, what's the advantage of the delayed disclosure?
>
> Not having to panic and rush out releases on day of disclosure.
Releasing it a month later doesn't remove the rush and panic of the _users_,
who panic and rush to install the freshly released kernels.
> Not having users vulnerable whilst packages build/get QA/get pushed to mirrors.
They are vulnerable the whole time between the "internal" disclosure and the
public one. What makes you think that keeping things secret on the list
won't allow the bad guys to discover the vulnerability? As Linus said, they
are probably _more_ motivated than the good guys.
> Users of kernel.org kernels get to build and boot in under an hour.
> Vendor kernels take a lot longer to build.
Come on, it's just a matter of throwing more hardware at the process.
> 1- More architectures.
> (And trust me, there's nothing I'd like more than to be able
> to increase the speed of kernel builds on some of the architectures
> we support).
I realize that. In Debian we support a few very slow architectures. But,
let's not fool ourselves, the slow architectures aren't nearly as popular as
x86, x86_64 or ppc.
> 2- More generic, ie more modules to build.
If speed is an issue, I guess a vendor can afford to get 5 ARM/m68k/8086
(:) machines and distribute the build across them.
> In the case of public disclosure of issues that we weren't aware of,
> it's a miracle that we get update kernels out on day of disclosure
> in some cases. (In others, we don't, and the same applies to other vendors too)
Looking from the vendor point of view, you are perfectly right. Looking from
the user point of view, the user is as exposed with the closed list and new
vendor kernels released as with the open list and immediate disclosures. In
both cases it's sort of a miracle for a user not to be hacked. Remember
the recent php problem? Before we knew, many, many, many sites were hacked -
even though the release with fixes was out. I don't know whether those
vulnerabilities were known to vendor-sec before that but if they were, the
delay didn't do a thing to make the situation better. Simply, no difference
- as far as I am concerned, the only argument so far for the existence of
the totally closed list is that "vendors must have time to build
kernels/software", as you wrote above. Not nearly enough of a reason, IMHO.
regards,
marek
On Iau, 2005-01-13 at 21:22, Linus Torvalds wrote:
> Are there advantages and upsides? Yes. Are there disadvantages?
> Indubitably. And anybody who disregards the disadvantages as "inevitable"
> is not really interested in fixing the game.
So the next time I find a remote root hole I should post an exploit
example targeting kernel.org to the linux-kernel list? Now where are
you going to publish the fix - bk is down, kernel.org is down ...
Disclosure isn't quite as simple as you'd like.
On Iau, 2005-01-13 at 21:03, Linus Torvalds wrote:
> On Thu, 13 Jan 2005, Alan Cox wrote:
> - no embargo, no rules, but "private" in the sense that it's supposed to
> be for kernel developers only or at least people who won't take
> advantage of it.
>
> _I_ think this is the one that makes sense. No hard rules, but private
> enough that people won't feel _guilty_ about reporting problems. Right
> now I sometimes get private email from people who don't want to point
> out some local DoS or similar, and that can certainly get lost in the
> flow.
And also not passed on to vendors and other folks, which is a pita and
which this would fix.
>
> - _short_ embargo, for kernel-only. I obviously believe that vendor-sec
> is whoring itself for security firms and vendors. I believe there would
> be a place for something with stricter rules on disclosure.
Seems these two could be the same list, with a bit of respect for users'
wishes and common sense.
> It's not a black-and-white thing. I refuse to believe that most security
> problems are found by people without any morals. I believe that somewhere
> in the middle is where most people feel most comfortable.
Seems sane
On Thu, 13 Jan 2005, Alan Cox wrote:
>
> On Iau, 2005-01-13 at 21:22, Linus Torvalds wrote:
> > Are there advantages and upsides? Yes. Are there disadvantages?
> > Indubitably. And anybody who disregards the disadvantages as "inevitable"
> > is not really interested in fixing the game.
>
> So the next time I find a remote root hole I should post an exploit
> example targeting kernel.org to the linux-kernel list? Now where are
> you going to publish the fix - bk is down, kernel.org is down ...
>
> Disclosure isn't quite as simple as you'd like.
This is like saying "somebody will do the bad thing, it might as well be
me". I don't believe that is a basis for doing things right.
First off, I've tried to make it clear that while I believe in openness,
my beliefs are not exclusive to anybody elses beliefs. I'd rather see
shades of gray than absolute black-and-white.
Secondly, I'd much rather have the mindset where we try to minimize the
likelihood of a catastrophic failure. That includes having many
_different_ ways of getting things out: Bk, tar-balls, email. Diversity is
a _fundamental_ security strength. It also includes having diversity in
other areas, ie multiple architectures.
I see vendor-sec as trying to treat the symptoms. It's a "take two
aspirins, call me in the morning". And you seem to not even want to
discuss treating the disease - and vendor-sec is PART of the disease. It's
the drug that people get addicted to when they decided to treat the
symptoms.
I think Linux - just by the source being open - has one real treatment for
one fundamental -cause- of insecurity, namely "we don't care, and we'll
put our heads in the sand". Open source just doesn't allow that mentality.
And similarly, I think truly open disclosure is another fundamental
-treatment-, in that it doesn't _allow_ the mentality that vendor-sec
tends to instill in people. Well, maybe not "treatment" per se: it's more
like admitting you have a problem.
It's like alcoholism. Admitting you have a problem is the first step.
vendor-sec is the band-aid that allows you to try to ignore the problem
("I can handle it - I could stop any day").
Linus
On Thu, 13 Jan 2005, Alan Cox wrote:
>
> > - _short_ embargo, for kernel-only. I obviously believe that vendor-sec
> > is whoring itself for security firms and vendors. I believe there would
> > be a place for something with stricter rules on disclosure.
>
> Seems these two could be the same list, with a bit of respect for users'
> wishes and common sense.
Possibly. On the other hand, I can well imagine that the list of
subscribers is different for the two cases. The same way I refuse to have
anything to do with vendor-sec, maybe somebody else refuses to honor even
a five-day rule, but would want to be on the "no rules, but let's be clear
that we're all good guys, not gray or black-hats" list.
Also, especially with a hard rule, there's just less confusion, I think,
if the two are separate. Otherwise you'd have to have strict Subject: line
rules or something - which basically means that they are separate lists
anyway.
But hey, it's not even clear that both are needed. With a short enough
disclosure requirement, maybe people feel like the "five-day rule,
possibly explicitly _relaxed_ by the original submitter" is sufficient.
Linus
* Alan Cox ([email protected]) wrote:
> On Iau, 2005-01-13 at 21:03, Linus Torvalds wrote:
> > On Thu, 13 Jan 2005, Alan Cox wrote:
> > - no embargo, no rules, but "private" in the sense that it's supposed to
> > be for kernel developers only or at least people who won't take
> > advantage of it.
> >
> > _I_ think this is the one that makes sense. No hard rules, but private
> > enough that people won't feel _guilty_ about reporting problems. Right
> > now I sometimes get private email from people who don't want to point
> > out some local DoS or similar, and that can certainly get lost in the
> > flow.
>
> And also not passed on to vendors and other folks which is a pita and
> this would fix
> >
> > - _short_ embargo, for kernel-only. I obviously believe that vendor-sec
> > is whoring itself for security firms and vendors. I believe there would
> > be a place for something with stricter rules on disclosure.
>
> Seems these two could be the same list, with a bit of respect for users'
> wishes and common sense.
I think they should be the same. I hope the draft security contact bits
reflect that.
thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
On Thu, 13 Jan 2005, Dave Jones wrote:
> On Thu, Jan 13, 2005 at 10:48:14PM +0100, Marek Habersack wrote:
>
> > > If admins don't install updates in a timely manner, there's
> > > not a lot we can do about it. For those that _do_ however,
> > > we can make their lives a lot more stress free.
> > Indeed, but what does it have to do with a closed disclosure list?
>
> For the N'th time, it gives vendors a chance to have packages
> ready at the time of disclosure.
>
> > With open
> > disclosure list you provide a set of fixes right away, the admins take them
> > and apply. With closed list you do the same, but with a delay (which gives
> > an opportunity for a "race condition" with the bad guys, one could argue).
> > So, what's the advantage of the delayed disclosure?
>
> Not having to panic and rush out releases on day of disclosure.
> Not having users vulnerable whilst packages build/get QA/get pushed to mirrors.
>
The users are still vulnerable during the time you are preparing your
kernel packages.
Personally I'd very much prefer to know of the bug even before a fix is
ready since that would allow me to protect my systems in alternative ways
until the fixes are ready. Depending on the nature of the bug, I could
perhaps tweak firewall rulesets temporarily, disable certain services,
mount a few filesystems read-only for a few days, rebuild my current
vulnerable kernel with an option disabled as a workaround and live with
less functionality until the fix is ready, or even take vulnerable
systems offline until a fix is ready.
Having the info that the bug exists and can be targeted in this or
that way gives me a chance to respond and protect my systems while a
proper fix is being developed. I can't do that if I'm in the dark until
vendors feel comfortable and ready to release packaged bug free kernels -
and all the time I'm waiting some black hat idiot may have found the same
bug and cracked my system.
--
Jesper Juhl
On Thu, 2005-01-13 at 19:41 +0000, Alan Cox wrote:
> So the non-disclosure argument is perhaps put as "equality of access at
> the point of discovery means everyone gets rooted.". And if you want a
> lot more detail on this read papers on the models of security economics
> - its a well studied field.
or in other words: you can write an exploit faster than you can write
the fix, so the thing needs delaying until a fix is available to make it
more equal.
On Thu, Jan 13, 2005 at 04:30:02PM -0500, Dave Jones scribbled:
> On Thu, Jan 13, 2005 at 10:02:29PM +0100, Marek Habersack wrote:
> > Theory is fine, practice is that the closed disclosure list changes matters
> > for a vast minority of people - those who are to install the fixed kernels
> > are in perfectly the same situation they would be in if there was a fully
> > open disclosure list.
>
> No, it's not the same. They're in a _worse_ situation if anything.
> With open disclosure, the bad guys get even more lead time.
I guess it depends on how you look at it. In fact, thinking again, I think
it gives the same time to the bad and good guys in each case. So it seems
there is no benefit to having a closed list or an open list in this regard
after all. And if this is not an issue, what might be the reason for having
the closed list? The lust for glory as you've said earlier?
> If admins don't install updates in a timely manner, there's
> not a lot we can do about it. For those that _do_ however,
> we can make their lives a lot more stress free.
Indeed, but what does it have to do with a closed disclosure list? With an open
disclosure list you provide a set of fixes right away, the admins take them
and apply. With a closed list you do the same, but with a delay (which gives
an opportunity for a "race condition" with the bad guys, one could argue).
So, what's the advantage of the delayed disclosure?
best regards,
marek
Previously Marek Habersack wrote:
> So it sounds like we, the men-in-the-crowd, are really left out in the cold -
> the people who are affected the most by the issues - since the vendors are not
> affected by the bugs (playing a devil's advocate here), as they fix them
> for their machines as they appear, way before they become public.
Vendors suffer from that as well. Suppose vendors learn of a problem in
a product they visibly use, such as apache or rsync. If all vendors
suddenly update their versions or disable things, that will be noticed as
well, so vendors can't do that.
Wichert.
--
Wichert Akkerman <[email protected]> It is simple to make things.
http://www.wiggy.net/ It is hard to make things simple.
In article <[email protected]>,
Wichert Akkerman <[email protected]> wrote:
>Previously Marek Habersack wrote:
>> So it sounds like we, the men-in-the-crowd, are really left out in the cold -
>> the people who are affected the most by the issues - since the vendors are not
>> affected by the bugs (playing a devil's advocate here), as they fix them
>> for their machines as they appear, way before they become public.
>
>Vendors suffer from that as well. Suppose vendors learn of a problem in
>a product they visibly use, such as apache or rsync. If all vendors
>suddenly update their versions or disable things, that will be noticed as
>well, so vendors can't do that.
I don't buy that at all. There are numerous reasons for updating
programs or disabling things, of which fixing security holes is but
one. Furthermore, even if fixing security holes was the only reason,
updating a service would indicate only that a bug had been found. It
doesn't tell the observer what the bug is, or how to exploit it, so it
doesn't increase the risk to the end users. The observant black hat
now knows that there is a bug in, say, apache, and can set about
reading the source code to try to find it, but he was probably looking
there anyway, so I don't think that need worry you much.
So, the reason for not updating the software isn't "letting the black
hats know", which leaves "not being seen to break the embargo" as the
only possible explanation for such action. But the embargo is there
to protect the end users, not to protect the vendors, so what the hell
does it matter if the information that there is a (non-disclosed) bug
in $CRITICAL_SERVER leaks, so that the vendors can ensure that their
users are not put in danger of downloading binaries from a compromised
machine?
If instead the vendors have drunk so heavily from the kool-aid that
they believe they must leave their machines vulnerable in order either
not to break the (apparently flawed) rules of vendor-sec, or, even
worse, to avoid annoying some dime-a-dozen "security researcher" who's
desperate to make a big name for himself, then things have reached a
very sorry state indeed.
Julian
--
Julian T. J. Midgley http://www.xenoclast.org/
Cambridge, England.
PGP: BCC7863F FP: 52D9 1750 5721 7E58 C9E1 A7D5 3027 2F2E BCC7 863F
Linus Torvalds <[email protected]> said:
> On Thu, 13 Jan 2005, Alan Cox wrote:
> > On Iau, 2005-01-13 at 16:38, Linus Torvalds wrote:
> > > It wouldn't be a global flag. It's a per-process flag. For example,
> > > many people _do_ need to execute binaries in their home directory. I
> > > do it all the time. I know what a compiler is.
> > noexec has never been worth anything because of scripts. Kernel won't
> > load that binary, I can write a script to do it.
> Scripts can only do what the interpreter does. And it's often a lot harder
> to get the interpreter to do certain things. For example, you simply
> _cannot_ get any thread race conditions with most scripts out there, nor
> can you generally use magic mmap patterns.
But you can trivially run an executable via e.g.:
/lib/ld-2.3.4.so some-nice-proggie
and the execute permissions (and noexec, etc) on some-nice-proggie don't
matter.
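For concreteness, the loader trick amounts to a couple of lines. A minimal
sketch (the loader path and target name are illustrative; as noted further
down in the thread, modern loaders refuse this when the target cannot be
mapped executably):

	/* Run a binary through the dynamic loader directly, sidestepping
	 * the execute-permission check execve() would apply to the binary
	 * itself. Paths are illustrative. */
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		char *argv[] = { "/lib/ld-linux.so.2", "./some-nice-proggie", NULL };
		char *envp[] = { NULL };

		execve(argv[0], argv, envp);	/* the loader mmap()s the target */
		perror("execve");		/* reached only if the loader failed */
		return 1;
	}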
> Am I claiming that disallowing self-written ELF binaries gets rid of all
> security holes? Obviously not.
It makes their running a bit harder, but not much.
> I'm claiming that there are things that
> people can do that make it harder, and that _real_ security is not about
> trusting one subsystem, but in making it hard enough in many independent
> ways that it's just too effort-intensive to attack.
Right. But this is a broken idea, IMVHO.
Besides, something that has been overlooked in all this discussion so far:
It does routinely happen that fixing some "just an ordinary bug" really
does correct a security problem. Plus backporting "only security fixes"
gets harder and harder as they start depending on other random changes.
--
Dr. Horst H. von Brand User #22616 counter.li.org
Departamento de Informatica Fono: +56 32 654431
Universidad Tecnica Federico Santa Maria +56 32 654239
Casilla 110-V, Valparaiso, Chile Fax: +56 32 797513
On Fri, Jan 14, 2005 at 11:22:49AM +0100, Wichert Akkerman scribbled:
> Previously Marek Habersack wrote:
> > So it sounds that we, the men-in-the-crowd are really left out in the crowd,
> > people who are affected the most by the issues. Since the vendors are not
> > affected by the bugs (playing a devil's advocate here), since they fix them
> > for their machines as they appear, way before they get public.
>
> vendor suffer from that as well. Suppose vendors learn of a problem in
> a product they visibly use such as apache or rsync. If all vendors
> suddenly update their versions or disable things that will be noticed as
> well, so vendors can't do that.
So that's yet another reason why such a closed list does more harm than
good - it hurts security, if what you said above does happen.
regards,
marek
* Julian T. J. Midgley:
>>Vendors suffer from that as well. Suppose vendors learn of a problem in
>>a product they visibly use, such as apache or rsync. If all vendors
>>suddenly update their versions or disable things, that will be noticed as
>>well, so vendors can't do that.
>
> I don't buy that at all. There are numerous reasons for updating
> programs or disabling things, of which fixing security holes is but
> one.
People used to monitor large name servers run by the in-crowd for
synchronous updates, to get advance notice of the existence of BIND
security holes. AFAIK, it was a reliable indicator.
In article <[email protected]>,
Florian Weimer <[email protected]> wrote:
>* Julian T. J. Midgley:
>
>>>Vendors suffer from that as well. Suppose vendors learn of a problem in
>>>a product they visibly use, such as apache or rsync. If all vendors
>>>suddenly update their versions or disable things, that will be noticed as
>>>well, so vendors can't do that.
>>
>> I don't buy that at all. There are numerous reasons for updating
>> programs or disabling things, of which fixing security holes is but
>> one.
>
>People used to monitor large name servers run by the in-crowd for
>synchronous updates, to get advance notice of the existence of BIND
>security holes. AFAIK, it was a reliable indicator.
It might well have been - did these people then trawl through the BIND
sources to try to find the bug itself, and frequently develop an
exploit before the official patches were released? If so, why didn't
they just assume that there was a bug in BIND and go looking for it,
instead of waiting for circumstantial evidence that there might be one
before they started looking?
You'll have to explain why leaking the information "that there is a
bug in $PROGRAM", by fixing it (without disclosing either the bug or
the fix), is a problem. After all, you can assume that for every
black hat foolish enough to sit around waiting for some evidence that
a bug exists before trying to find it, there'll be another that just
went looking anyway and has already found it. It's better for all
concerned that the vendors protect themselves against the latter bunch
as soon as they reasonably can.
Julian
--
Julian T. J. Midgley http://www.xenoclast.org/
Cambridge, England.
PGP: BCC7863F FP: 52D9 1750 5721 7E58 C9E1 A7D5 3027 2F2E BCC7 863F
On Fri, 14 Jan 2005, Horst von Brand wrote:
>
> But you can trivially run an executable via e.g.:
>
> /lib/ld-2.3.4.so some-nice-proggie
I thought we fixed this, and modern ld-so's will fail on this if
"some-nice-proggie" cannot be mapped executably. Which is exactly what
we'd do.
[ scrounge scrounge ]
Yup, just checked - it's exactly the same case as MNT_NOEXEC, which indeed
used to have exactly that bug.
So the implementation of what I suggested (and no, I'm not at all
guaranteeing that this is a wonderful idea, I'm sure others have tried it
and it probably sucks) would be something like
--- 1.161/mm/mmap.c 2005-01-12 08:26:28 -08:00
+++ edited/mm/mmap.c 2005-01-14 07:37:51 -08:00
@@ -882,9 +882,12 @@
 		if (!file->f_op || !file->f_op->mmap)
 			return -ENODEV;
-		if ((prot & PROT_EXEC) &&
-		    (file->f_vfsmnt->mnt_flags & MNT_NOEXEC))
-			return -EPERM;
+		if (prot & PROT_EXEC) {
+			if (file->f_vfsmnt->mnt_flags & MNT_NOEXEC)
+				return -EPERM;
+			if (!capable(CAP_CAN_RUN_NONROOT) && file->f_dentry->d_inode->i_uid)
+				return -EPERM;
+		}
 	}
 
 	/*
 	 * Does the application expect PROT_READ to imply PROT_EXEC?
(or just add a security hook there - it's not like this couldn't be a
SELinux thing..)
And no, this doesn't trap mprotect(), but that's not the point. The point
of this is not to make it impossible to execute code on purpose by some
existing binary - it's to make it impossible for some people to compile or
download their own binaries.
(Side note: this is probably useful for MIS kind of things - if you don't
want your users to download games etc, you'd want something like that. Of
course, MNT_NOEXEC in that case is fairly easy, and the "run programs
capability" is more a "this also works for arbitrary servers etc" thing).
Alan's point about perl is well taken, though. Perl is a pretty damn
generic interpreter, and unlike most interpreters exposes everything. And
I doubt it uses "mmap(.. PROT_EXEC)" to map in the file ;)
Linus
On Fri, 2005-01-14 at 07:45 -0800, Linus Torvalds wrote:
>
> Alan's point about perl is well taken, though. Perl is a pretty damn
> generic interpreter, and unlike most interpreters exposes everything.
> And
> I doubt it uses "mmap(.. PROT_EXEC)" to map in the file ;)
you can feed it via stdin, I doubt it mmaps stdin that way for sure ;)
On Fri, 2005-01-14 at 10:45, Linus Torvalds wrote:
> (or just add a security hook there - it's not like this couldn't be a
> SELinux thing..)
>
> And no, this doesn't trap mprotect(), but that's not the point. The point
> of this is not to make it impossible to execute code on purpose by some
> existing binary - it's to make it impossible for some people to compile or
> download their own binaries.
Just FYI, SELinux does apply checking via the security hooks in mmap and
mprotect, and can be used to prevent a process from executing anything
it can write via policy.
The TPE security module recently posted to lkml by Lorenzo also tries to
prevent untrusted users/groups from executing anything outside of
'trusted paths', likewise using the security hooks in mmap and mprotect.
--
Stephen Smalley <[email protected]>
National Security Agency
On Fri, 2005-01-14 at 10:57, Stephen Smalley wrote:
> Just FYI, SELinux does apply checking via the security hooks in mmap and
> mprotect, and can be used to prevent a process from executing anything
> it can write via policy.
>
> The TPE security module recently posted to lkml by Lorenzo also tries to
> prevent untrusted users/groups from executing anything outside of
> 'trusted paths', likewise using the security hooks in mmap and mprotect.
More generally, you should be able to easily implement the checking you
describe as a new LSM or even as part of the capability security module,
without requiring any change to the core kernel code.
--
Stephen Smalley <[email protected]>
National Security Agency
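The check being discussed is small enough to model in plain C. A minimal
userspace sketch of the policy (the struct and the capability flag are
invented stand-ins for illustration, not the real LSM or kernel interfaces):

	/* Model of the proposed check: refuse PROT_EXEC mappings of files
	 * that are not root-owned, unless the caller holds a hypothetical
	 * "run non-root binaries" capability. */
	#include <stdbool.h>
	#include <stdio.h>

	struct mapped_file {
		unsigned owner_uid;	/* inode owner; 0 means root */
		bool mount_noexec;	/* file lives on a MNT_NOEXEC mount */
	};

	static bool deny_exec_mapping(const struct mapped_file *f, bool has_run_cap)
	{
		if (f->mount_noexec)
			return true;	/* existing MNT_NOEXEC behaviour */
		if (!has_run_cap && f->owner_uid != 0)
			return true;	/* the proposed extra restriction */
		return false;
	}

	int main(void)
	{
		struct mapped_file home_binary = { .owner_uid = 1000, .mount_noexec = false };
		struct mapped_file system_binary = { .owner_uid = 0, .mount_noexec = false };

		printf("home-dir binary denied: %d\n", deny_exec_mapping(&home_binary, false));
		printf("root-owned binary denied: %d\n", deny_exec_mapping(&system_binary, false));
		return 0;
	}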
On Thu, Jan 13, 2005 at 01:03:22PM -0800, Linus Torvalds wrote:
>
> - _short_ embargo, for kernel-only. I obviously believe that vendor-sec
> is whoring itself for security firms and vendors. I believe there would
> be a place for something with stricter rules on disclosure.
>
> - vendor-sec. The place where you can play any kind of games you want.
>
Linus,
I think you're being a bit unfair here. I've been on
vendor-sec since almost its very beginnings, and it's not nearly as
politically driven as you seem to make it out to be.
Now, that may be because I've seen the early days of CERT,
where I saw an lpr/lpd hole get covered up for some 9 months before I
gave up tracking to see when they would ever bother to issue a CERT
advisory. *That's* whoring itself to vendors, where CERT would delay
advisories to keep pace with the slowest vendor, because CERT was a
heavy-weight organization that was beholden to vendors in order to
meet its payroll. Vendor-sec, because it's a mailing list that is
quite frankly, extremely informally organized, especially compared
many of the other security fora that exist out there, doesn't have
many of the shortcomings of CERT, thanks be to $DEITY.
That being said, part of the problem here is that everybody
has different standards for when they would like to be informed. Some
people *like* wearing a pager and, even if a security vulnerability is
found at 4am on the Tuesday before Thanksgiving, will rush out, download
a kernel.org kernel, and install it on 500 production-critical
machines all over their corporate network within the next eight hours.
Other people would prefer that public releases be delayed until after
public holidays --- so they can get back from visiting their family in
Vermont, where cell phones and pagers don't work so well.
Similarly people who find and disclose security holes do so
for a very large variety of reasons. Some of them do it for the
glory; some of them do it because they are trying to drive business to
their security firms; and some of them, believe it or not, do it
because they are trying to make the world a better place. (You know,
like some open source developers. :-)
The people who find and report security holes have a very wide
range of opinions about full versus partial disclosure. Some of them
take very seriously the possibility of what might happen if they were
to perform full disclosure on some vulnerability, and if an attacker
were to be able to use it to rewire the gates to the Grand Coulee Dam,
and cause huge loss of life, they would consider themselves as least
partially morally culpable. Other people might say that the upside of
getting the news out faster outweighs any delay at all, even if there are
some security breaches that cause loss of life or property. I don't
hold with that position, but it is an intellectually defensible one.
As a result, it is a highly religious, extremely charged
emotional issue, and the arguments on *both* sides of the fence tend
to go over the top; I've seen high levels of rhetorical arguing for
both full and delayed disclosure. I also don't think we're going to
settle this issue any time soon. We will probably come to some grand
consensus on the emacs versus vi issue first.
It's probably not going to be helpful to say that vendor-sec
is "whoring itself" because some people who report vulnerabilities say
that they will only report them under certain conditions. Maybe you
would prefer it if some group of vendors would say "no thank you" if
someone were to attach those conditions to a security report. That's
a choice you've made, and that's fine.
But please reflect that the glory-hounds would in fact get
more attention if they were to announce their findings right away,
along with the exploit that does something public and visible, such as
taking down kernel.org ---- and that sometimes the security
researchers who insist on delayed disclosure are doing so out of the
best of intentions, and will only work with organizations that respect
their requests. You may not agree with them, but name calling is not
going to help matters.
I think this is much like your position about licensing.
People can choose whatever license they want on their code. But if
they choose a particular license, then you better damn well respect
it. Similarly, if someone tells you that they will only tell you
about a security vulnerability if you agree not to release it until 1
week later, then you are honor bound either to (a) respect their
request of confidentiality, or (b) refuse to accept the information.
Either is an honorable choice.
However, there have been some on this thread (not you) that
have claimed that vendor-sec should ignore the requests for delayed
disclosure and make public information where the reporter has said,
"I'll only give you this information if you promise to use it in the
following restricted way". That's just dealing in bad faith, and I
reject that. People of good will can, and have, and will no doubt
continue, to disagree about whether some level of delayed disclosure
is a good thing.
I believe that delays of less than 7-10 days are a good thing,
and that scheduling public releases to avoid making a security problem
public at 5pm on Christmas Eve is a good thing. I would not agree
with six month delays, and I think I've heard you say that a few days
to perhaps a week on the outside you might think would be acceptable.
The point is that there is a huge middle ground here, and in fact I
think most people participating on this thread are somewhere within
this vast middle ground --- I haven't heard anyone saying that the
(historical) CERT-style "delay for six months" is a good thing.
- Ted
P.S. As other people have noted, if the reporter says that they plan
to do immediate full/public disclosure, vendor-sec will work with
people who feel that way, and immediately get word out. Vendor-sec
does *not* only work with delayed disclosure issues, and does not
insist that people hold back on reports, contrary to what some people
have claimed. It has been, and always will be, up to the person who
discovered the vulnerability to decide how to release it.
On Fri, 14 Jan 2005, Theodore Ts'o wrote:
>
> But please reflect that the glory-hounds would in fact get
> more attention if they were to announce their findings right away,
> along with the exploit that does something public and visible, such as
> taking down kernel.org ---- and that sometimes the security
> researchers who insist on delayed disclosure are doing so out of the
> best of intentions, and will only work with organizations that respect
> their requests. You may not agree with them, but name calling is not
> going to help matters.
I disagree violently.
What vendor-sec does is to make it "socially acceptable" to be a parasite.
I personally think that such behaviour simply should not be encouraged. If
you have a security "researcher" that has some reason to delay his
disclosure, you should see him for what he is: someone looking for cheap PR. You
shouldn't make excuses for it. Any research organization that sees PR as a
primary objective is just misguided.
Also, there's a more fundamental issue: the "glorification" of bugs. Bugs
aren't news. We have various small bugs all the time, and many of them are
at least potential local DoS issues. Suppression of information is what
_makes_ these bugs news.
So I dislike the _culture_ that vendor-sec encourages. THAT is the real
problem. And hey, it may be better than some other places. Goodie. But
dammit, it needs somebody to be critical about it too.
What's the alternative? I'd like to foster a culture of
(a) accepting that bugs happen, and that they aren't news, but making
sure that the very openness of the process means that people know
what's going on exactly because it is _open_, not because some news
organization had to make a big stink about it just to make a vendor
take notice.
Right now, people seem to think that big news media warnings on
cnet.com about SP2 fixing 15 vulnerabilities or similar is the proper
way to get people to upgrade. That just -cannot- be right.
(b) reporting a bug openly is _good_, and not frowned upon (right now
people clearly try to steer even white-hat people who have no real
incentive to use vendor-sec into that mentality - because it's the
"proper channel")
And yes, for this to work people need to get away from the notion of
"let's apply vendor patch X to fix problem Y". What we should strive for
(and what the whole system should be _geared_ for) is to have redundant
enough security that people hopefully don't know of <n> outstanding bugs
at the same time that allows for a combination attack.
I'm convinced most security firms are like the virus firms: they react.
They react to things they see on the cracker lists etc. They use a lot of
the same tools that the really bad people use to find problems. That means
that the problems they "discover" are often discovered by the bad guys
first. Sure, they have their own people too, but that's like saying that
_we_ have our own people too.
And that makes the whole "nondisclosure to avoid bad people" argument
pretty much totally bogus, something that nobody who argues for vendor-sec
seems to be willing to accept.
And let's not kid ourselves: the security firms may have resources that
they put into it, but the worst-case scenario is actual criminal intent.
People who really have resources to study security problems, and who have
_no_ advantage of using vendor-sec at all. And in that case, vendor-sec is
_REALLY_ a huge mistake.
Linus
On Thu, Jan 13, 2005 at 09:19:16AM -0800, Linus Torvalds wrote:
> (Yes, Brad Spengler has talked to me about PaX, but never sent me
> individual patches, for example. People seem to expect me to take all or
> nothing
The same is true elsewhere as well, unfortunately. I do wish people
would realise that there's a huge difference between "new feature"
patches and "small bugfix" patches.
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
On Fri, Jan 14, 2005 at 11:15:19AM -0800, Linus Torvalds wrote:
>
> What vendor-sec does is to make it "socially acceptable" to be a parasite.
>
> I personally think that such behaviour simply should not be encouraged. If
> you have a security "researcher" that has some reason to delay his
> disclosure, you should see him for what he is: someone looking for cheap PR.
I disagree. First of all, we can't know what motivates someone, and
presuming that we know their motivation is something that should only
be done with the greatest of care. Secondly, someone who does want
cheap PR can do so without delaying their disclosure; they can issue a
breathless press release or "security advisory" about a DOS attack
just as easily with a zero-day disclosure as they can with a two-week
delayed disclosure.
Sure, there are less-than-savory security firms that are only after PR
to drive business. But that's completely orthogonal to whether or not
the disclosure is delayed. Yes, "glorification" of relatively
trivial bugs is a problem; but whether or not the bugs are delayed
doesn't change whether someone issues an "iOFFENSE SECURITY ADVISORY"
which overstates the bug; it only changes whether they send the
advisory right away or a week or two later. (After all, the security
groups that subscribe to a zero-day "full disclosure" policy use
advisories/press releases that glorify their findings just as much.)
> (a) accepting that bugs happen, and that they aren't news, but making
> sure that the very openness of the process means that people know
> what's going on exactly because it is _open_, not because some news
> organization had to make a big stink about it just to make a vendor
> take notice.
A one or two week delay is hardly cause for "a news organization to
make a big stink so vendors will take notice". Besides, which is it?
Is it about security researchers that are after cheap PR, or "news
organizations forcing vendors to take notice"? Certainly I've never
seen that kind of dynamic with Linux vendors, where reporters try to
get vendors to take notice; it doesn't matter whether vendors take
notice if they are going to be releasing two weeks later come hell or
high water.
> And yes, for this to work people need to get away from the notion of
> "let's apply vendor patch X to fix problem Y". What we should strive for
> (and what the whole system should be _geared_ for) is to have redundant
> enough security that people hopefully don't know of <n> outstanding bugs
> at the same time that allows for a combination attack.
Here, we certainly agree.
> And let's not kid ourselves: the security firms may have resources that
> they put into it, but the worst-case scenario is actual criminal intent.
> People who really have resources to study security problems, and who have
> _no_ advantage of using vendor-sec at all. And in that case, vendor-sec is
> _REALLY_ a huge mistake.
Nah. If you have criminal intent, generally there are far easier ways
to target a specific company. For example, many companies that have
multiple layers of firewalls, intrusion detection systems, etc., will
keep critical financial information in boxes left unlocked in the
hallways, and not bother to do any kind of background checks before
hiring temporary employees/contractors. Sad but true.
I'm quite certain that far more economic damage is being done by
script kiddies and by "insiders" using officially granted privileges
to access financial applications than by the hypothetical Dr. Evil
that hires computer experts to find vulnerabilities that could be used
to carry out criminal intent. And it's the script kiddies that we can
prevent by delaying disclosures by only a week or two, to give a
chance to get the fixes out to the people who need them.
- Ted
On Fri, 14 Jan 2005, Theodore Ts'o wrote:
>
> I disagree. First of all, we can't know what motivates someone, and
> presuming that we know their motivation is something that should only
> be done with the greatest of care. Secondly, someone who does want
> cheap PR can do so without delaying their disclosure; they can issue a
> breathless press release or "security advisory" about a DOS attack
> just as easily with a zero-day disclosure as they can with a two-week
> delayed disclosure.
Your "secondly" is bogus.
Sure, you can do that, and if you do that, then the world recognizes you
for what you are - nothing but a publicity-seeking creep.
THAT is why vendor-sec is bad. It allows publicity-seeking creeps to take
on the mantle of being "good".
I'm arguing for exposing them for what they are. If that hurts some
feelings, tough ;)
> > (a) accepting that bugs happen, and that they aren't news, but making
> > sure that the very openness of the process means that people know
> > what's going on exactly because it is _open_, not because some news
> > organization had to make a big stink about it just to make a vendor
> > take notice.
>
> A one or two week delay is hardly cause for "a news organization to
> make a big stink so vendors will take notice".
You ignore reality.
It's not a one- or two-week delay. Once the vendor-sec mentality takes
effect, the delay inevitably grows. You _always_ have excuses for
delaying, and as shown by this thread, a _lot_ of people believe them.
Also, even a one- or two-week delay _is_ actually detrimental. It means
that you can't handle the problem when it happens, so it gets queued up.
People cannot inform unrelated third parties about their patches (because
they are embargoed), which means that they get out of sync, and suddenly
the thing that open source is so good at - namely making communication
work well - becomes a problem.
> > And let's not kid ourselves: the security firms may have resources that
> > they put into it, but the worst-case scenario is actual criminal intent.
> > People who really have resources to study security problems, and who have
> > _no_ advantage of using vendor-sec at all. And in that case, vendor-sec is
> > _REALLY_ a huge mistake.
>
> Nah. If you have criminal intent, generally there are far easier ways
> to target a specific company.
The spam-viruses clearly show that that isn't always true, though. The
advantage to targeting the whole infrastructure _is_ clearly there.
Linus
On Gwe, 2005-01-14 at 15:12, Julian T. J. Midgley wrote:
> You'll have to explain why leaking the information "that there is a
> bug in $PROGRAM", by fixing it (without disclosing either the bug or
> the fix), is a problem. After all, you can assume that for every
Because the bad guys do keep watch and they do go looking and some of
them are very very bright people. Knowing application A has a bug
generally means you know the kind of bug, because it'll be a "flavour of
the month" bug. In other words, most bugs are variants of the latest
exploit because everyone is now looking at every other app for the same
problem.
We had the network buffer overflow period, the multiplication/addition
overflow period, the 2D maths overflow in image viewers period, and so on..
Alan
On Gwe, 2005-01-14 at 22:51, Linus Torvalds wrote:
> Sure, you can do that, and if you do that, then the world recognizes you
> for what you are - nothing but a publicity-seeking creep.
And what does that make writing your own operating system? Some of the
security folks are publicity seekers, others see it as an investment
against getting a job by becoming known. Quite a few we deal with are
large professional organisations who don't need publicity, and actually
the more interesting bodies often don't *want* publicity; they just want
to ensure that all their vendors have fixes before things go public and
that their government infrastructure and nation state will be best served.
> THAT is why vendor-sec is bad. It allows publicity-seeking creeps to take
> on the mantle of being "good".
They don't agree with you, nor do the economists, I'm afraid.
> I'm arguing for exposing them for what they are. If that hurts some
> feelings, tough ;)
It's OK, I'm sure they think you are an arrogant clueless jerk 8)
> It's not a one- or two-week delay. Once the vendor-sec mentality takes
> effect, the delay inevitably grows. You _always_ have excuses for
> delaying, and as shown by this thread, a _lot_ of people believe them.
The "vendor-sec" mentality - from someone who has never been involved in
it. Ah yes, Linus, you might want to consider writing articles for a local
newspaper; you appear to have the right qualifications for it 8)
vendor-sec has a lot of people on it who don't like long non-disclosure
periods and get quite annoyed when they happen out of our control (eg
CERT-originated notifications).
Alan
On Iau, 2005-01-13 at 23:30, Jesper Juhl wrote:
> The users are still vulnerable during the time you are preparing your
> kernel packages.
Vulnerable to what - a bug that has probably taken months to be located
and isn't known to the bad guys right now?
> proper fix is being developed. I can't do that if I'm in the dark until
> vendors feel comfortable and ready to release packaged bug free kernels -
> and all the time I'm waiting some black hat idiot may have found the same
> bug and cracked my system.
Let me save you some hassle. On current models, anything you are running
that is more than 5000 lines long probably has serious flaws in it. Your
processor probably has flaws too, and even if you put up your firewall
someone might break into your house with a sledgehammer and take your
computer away (eg the music industry ;))
It's also about -risk- levels and the sum of risk to all parties
involved.
On Gwe, 2005-01-14 at 15:45, Linus Torvalds wrote:
> On Fri, 14 Jan 2005, Horst von Brand wrote:
> >
> > But you can trivially run an executable via e.g.:
> >
> > /lib/ld-2.3.4.so some-nice-proggie
>
> I thought we fixed this, and modern ld-so's will fail on this if
> "some-nice-proggie" cannot be mapped executably. Which is exactly what
> we'd do.
And I can still write it in perl, forget MAP_EXEC, and it will work on
almost every processor in use today because NX is very recent. Rewriting
qemu in perl might be a bit extreme, but it's possible 8)
On 2005-01-15, at 01:34, Alan Cox wrote:
> It's also about -risk- levels and the sum of risk to all parties
> involved.
Rather "Its also about price levels and the sum of costs to all parties
involved."
For example if you share the costs of 5000 lines of code with millions
of people
you can afford to pay the costs of developing them in a way which
really assures safety.
Think about the software controlling a servo motor in your car...
You can't neglect economics when thinking about security issues, because
costs are the "metric" of this "space". If you don't like dollars just
think about an even more
precise currency you have to pay with anyway: developer time.
Its simply expensive to develop well working code. And on the other
hand buggy code is not bad in itself. Its just that cheap...
On Sat, 15 Jan 2005, Alan Cox wrote:
>
> vendor-sec has a lot of people on it who don't like long non-disclosure
> periods and get quite annoyed when they happen out of our control (eg
> CERT originated notifications).
Hey, fair enough. I tend to see the problem cases, which may be why I
absolutely detest it. Statistical self-selection and all that.
Linus
On Fri, 14 Jan 2005, Linus Torvalds wrote:
> Sure, you can do that, and if you do that, then the world recognizes you
> for what you are - nothing but a publicity-seeking creep.
>
> THAT is why vendor-sec is bad. It allows publicity-seeking creeps to
> take on the mantle of being "good".
Hey, if that motivates them to disclose the security
bugs they found in a way that makes it easier for
sysadmins and vendors to keep up - sounds like a fair
trade-off.
Everybody has their reasons for doing whatever they do.
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan
With no disrespect, I don't believe you have ever been a full-time
employee system administrator for any commercial or government
organization, and I don't believe you have any experience trying to do
security when change must be reviewed by technically naive management to
justify cost, time, and policy implications. The people on the list who
disagree may view the security information issue in a very different
context.
Linus Torvalds wrote:
> What vendor-sec does is to make it "socially acceptable" to be a parasite.
>
> I personally think that such behaviour simply should not be encouraged. If
> you have a security "researcher" that has some reason to delay his
> disclosure, you should see him for what he is: someone looking for cheap PR. You
> shouldn't make excuses for it. Any research organization that sees PR as a
> primary objective is just misguided.
There are damn fine reasons for not having immediate public disclosure:
it allows vendors and administrators to close the hole before the script
kiddies get hold of it. And they are the real problem, because there
are so MANY of them, and they tend to do slash-and-burn stuff, wipe out
your files, steal your identity, and other things you have to notice.
They aren't smart enough to find holes themselves in most cases, they
are too lazy in many cases to read the high-level hacker boards, and a
few weeks of delay in many cases lets the careful avoid damage.
Security through obscurity doesn't work, but a small delay for a fix to
be developed can prevent a lot of problems. And of course the
information should be released; it encourages the creation and
installation of fixes.
Oh, and many of the problem reports result in "cheap PR" consisting of a
single line mention in a CERT report or similar. Most people are not
doing it for the glory.
> What's the alternative? I'd like to foster a culture of
>
> (a) accepting that bugs happen, and that they aren't news, but making
> sure that the very openness of the process means that people know
> what's going on exactly because it is _open_, not because some news
> organization had to make a big stink about it just to make a vendor
> take notice.
Linux vendors aside, many vendors react in direct proportion to the bad
publicity engendered. I'd like the world to work that way, but in many
places it doesn't.
>
> Right now, people seem to think that big news media warnings on
> cnet.com about SP2 fixing 15 vulnerabilities or similar is the proper
> way to get people to upgrade. That just -cannot- be right.
Unfortunately reality doesn't agree with you. Many organizations have no
other effective way to convince management of the need for a fix except
newspaper articles and magazine articles. And sometimes that has to get to
the horror story stage before action is possible.
> And let's not kid ourselves: the security firms may have resources that
> they put into it, but the worst-case scenario is actual criminal intent.
> People who really have resources to study security problems, and who have
> _no_ advantage of using vendor-sec at all. And in that case, vendor-sec is
> _REALLY_ a huge mistake.
I think you are still missing the point. I don't care if a security firm
reads mailing lists or tea leaves, does research or just knows where to
find it; they are paid to do it, and if they do it well and report the
problems which apply to me, along with the source of the fixes, they keep
me from missing something and at the same time save me time. Even reading only
good mailing lists and newsgroups it takes a lot of time to keep
current, and you see a lot of stuff you don't need.
--
-bill davidsen ([email protected])
"The secret to procrastination is to put things off until the
last possible moment - but no longer" -me
Bill Davidsen <davidsen <at> tmr.com> writes:
>
> With no disrespect, I don't believe you have ever been a full-time
> employee system administrator for any commercial or government
> organization, and I don't believe you have any experience trying to do
> security when change must be reviewed by technically naive management to
> justify cost, time, and policy implications. The people on the list who
> disagree may view the security information issue in a very different
> context.
Basically you are saying that if I disagree, my view is irrelevant. What do you
expect with this kind of start?
> Linus Torvalds wrote:
>
> > What vendor-sec does is to make it "socially acceptable" to be a parasite.
> >
> > I personally think that such behaviour simply should not be encouraged. If
> > you have a security "researcher" that has some reason to delay his
> > disclosure, you should see him for what he is: someone looking for cheap PR. You
> > shouldn't make excuses for it. Any research organization that sees PR as a
> > primary objective is just misguided.
>
> There are damn fine reasons for not having immediate public disclosure:
> it allows vendors and administrators to close the hole before the script
> kiddies get hold of it. And they are the real problem, because there
> are so MANY of them, and they tend to do slash-and-burn stuff, wipe out
> your files, steal your identity, and other things you have to notice.
> They aren't smart enough to find holes themselves in most cases, they
> are too lazy in many cases to read the high-level hacker boards, and a
> few weeks of delay in many cases lets the careful avoid damage.
>
> Security through obscurity doesn't work, but a small delay for a fix to
> be developed can prevent a lot of problems. And of course the
> information should be released; it encourages the creation and
> installation of fixes.
>
> Oh, and many of the problem reports result in "cheap PR" consisting of a
> single line mention in a CERT report or similar. Most people are not
> doing it for the glory.
Nobody argued against a small delay; in most cases that is already what
happens today.
There is a little problem in this rhetoric. You want the fix fast and the
disclosure later. As soon as the fix (we are talking about available source)
is out, the hole is out too. Wondering which came first, the chicken or the
egg, is a great flame subject.
>
> > What's the alternative? I'd like to foster a culture of
> >
> > (a) accepting that bugs happen, and that they aren't news, but making
> > sure that the very openness of the process means that people know
> > what's going on exactly because it is _open_, not because some news
> > organization had to make a big stink about it just to make a vendor
> > take notice.
>
> Linux vendors aside, many vendors react in direct proportion to the bad
> publicity engendered. I'd like the world to work that way, but in many
> places it doesn't.
> >
> > Right now, people seem to think that big news media warnings on
> > cnet.com about SP2 fixing 15 vulnerabilities or similar is the proper
> > way to get people to upgrade. That just -cannot- be right.
>
> Unfortunately reality doesn't agree with you. Many organizations have no
> other effective way to convince management of the need for a fix except
> newspaper articles and magazine articles. And sometimes that has to get to
> the horror story stage before action is possible.
All those lines to say one thing: managing security requires social skills.
> > And let's not kid ourselves: the security firms may have resources that
> > they put into it, but the worst-case scenario is actual criminal intent.
> > People who really have resources to study security problems, and who have
> > _no_ advantage of using vendor-sec at all. And in that case, vendor-sec is
> > _REALLY_ a huge mistake.
>
> I think you are still missing the point. I don't care if a security firm
> reads mailing lists or tea leaves, does research or just knows where to
> find it; they are paid to do it, and if they do it well and report the
> problems which apply to me, along with the source of the fixes, they keep
> me from missing something and at the same time save me time. Even reading only
> good mailing lists and newsgroups it takes a lot of time to keep
> current, and you see a lot of stuff you don't need.
>
Does this boil down to:
I want my company to be in control, and nobody else please, because I do not
trust them?
Who would you want on this security board? No hackers, I believe; they have no
incentive to shut the *** up, they do not care about money or their business or
who knows why.
So you want:
a/ everybody is wrong, we cannot understand;
b/ crackers "are too lazy in many cases to read the high-level hacker boards";
c/ "How can I have a fix without ever having a hole?".
Close your eyes and believe, that's the only way to achieve absolute safety.
I am not kidding, billions of people do this, and it seems efficient (only a
few die by accident);
d/ the world is mad, nobody cares about security except we who are in charge
(and have no power in the politics);
e/ I don't care who does the job, but I want my goddamn system to have no holes.
Sorry for this rude analysis. I assume you want:
1/ a way to be alerted to the security holes in your application stack, and
those only;
2/ fixes before the script kiddies.
For one, the fix is quite easy: it is a matter of getting security alerts in an
easy way (maybe newsletters are getting old; what about a web interface like
Amazon has for its stuff?) and a filter on your application stack.
For two, nobody can help. Script kiddies do not even read tech lists. They do
not make the scripts. Those who made them usually don't just read mailing
lists; they read source, even binaries.
And those who make a living off cracking usually do not tell anybody. No CERT
alert. The only hope is easy-to-read code, and audits.
>And they are the real problem, because there
> are so MANY of them, and they tend to do slash-and-burn stuff, wipe out
> your files, steal your identity, and other things you have to notice.
I bet you don't know what script kiddies are all about; those who want to steal
your identity do not fit there (neither XSS, phishing nor script kiddies
relate to kernel devel in most cases).
AFAIK the problem discussed in this thread was not about customers like you and
me being alerted, but about upstream having the patch at the same time. And
maybe about the distributions sharing patches faster.
About politics:
> With no disrespect, I don't believe you have ever been a full-time
> employee system administrator for any commercial or government
> organization, and I don't believe you have any experience trying to do
> security
followed by :
> I think you are still missing the point. I don't care if a security firm
> reads mailing lists or tea leaves, does research or just knows where to
> find it; they are paid to do it, and if they do it well and report the
> problems which apply to me, along with the source of the fixes, they keep
> me from missing something and at the same time save me time.
Been there, done that. You cannot master security, be the sole administrator,
and deal with internal company politics (and maybe provide scripts, macros and
documentation if time permits). This has nothing to do with the way the kernel
deals with security alerts. We are not only paid to secure everything and
administer it, but to make choices, and the right ones, as resources are limited.
You could look at the Debian kernel: you have fixes backported, so you do not
have to worry about a new feature breaking an app or an ABI change.
All server distributions do it. Most distributions also alert their users
about those security holes you should be warned about. Just subscribe to their
security mailing list.
For us as users of binaries, those problems are already solved by
distribution security teams.
Cheers
Alban
* John Richard Moser <[email protected]> wrote:
> > There was a kernel-based randomization patch floating around at some
> > point, though. I think it's part of PaX. That's the one I hated.
>
> PaX and Exec Shield both have them; personally I believe PaX is a more
> mature technology, since it's 1) still actively developed, and 2) been
> around since late 2000. The rest of the community disagrees with me
> of course, [...]
might this disagreement be based on the fact that exec-shield _is_ being
actively developed and is in active use in Fedora/RHEL, and that split
out portions of exec-shield (e.g. flexmmap, PT_GNU_STACK, NX) are
already in the upstream kernel?
(but no doubt PaX is fine and protects against exploits at least as
effectively as (and in some cases more effectively than) exec-shield, so
you've definitely not made a bad choice.)
Ingo
Hi!
> > For us thankfully, exec-shield has trapped quite a few remotely
> > exploitable holes, preventing the above.
>
> One thing worth considering, but may be abit _too_ draconian, is a
> capability that says "can execute ELF binaries that you can write to".
>
> Without that capability set, you can only execute binaries that you cannot
> write to, and that you cannot _get_ write permission to (ie you can't be
> the owner of them either - possibly only binaries where the owner is
> root).
Well, if there's gdb installed on such a machine, you can probably circumvent this.
Hmm, you can probably do /lib/ld-linux.so.2 your binary, no?
Pavel
--
64 bytes from 195.113.31.123: icmp_seq=28 ttl=51 time=448769.1 ms
Ingo Molnar wrote:
> * John Richard Moser <[email protected]> wrote:
>
>
>>>There was a kernel-based randomization patch floating around at some
>>>point, though. I think it's part of PaX. That's the one I hated.
>>
>>PaX and Exec Shield both have them; personally I believe PaX is a more
>>mature technology, since it's 1) still actively developed, and 2) been
>>around since late 2000. The rest of the community disagrees with me
>>of course, [...]
>
>
> might this disagreement be based on the fact that exec-shield _is_ being
> actively developed and is in active use in Fedora/RHEL, and that split
> out portions of exec-shield (e.g. flexmmap, PT_GNU_STACK, NX) are
> already in the upstream kernel?
>
ES has been actively developed since it was poorly implemented in 2003.
PaX has been actively developed since it was poorly implemented in
2000. PaX has had about 4 times longer to go from a poor
proof-of-concept NX emulation patch based on the plex86 announcement to
a full-featured security system, and is written by a competent security
developer rather than a competent scheduler developer.
Split-out portions of PaX (and of ES) don't make sense. ASLR can be
evaded pretty easily: inject code, read %efp, find the GOT, read
addresses. The NX protections can be evaded by using ret2libc. on x86,
you need emulation to make an NX bit or the NX protections are useless.
So every part prevents every other part from being pushed gently aside.
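To make the ret2libc point concrete: once an attacker controls where
execution resumes, redirecting it into code that is already mapped
executable needs no injected code at all, so NX never triggers; knowing
such an address is exactly what randomization tries to deny. A
deliberately contrived illustration (not an exploit):

	/* Contrived stand-in for a hijacked return address: control flow
	 * is redirected into pre-existing executable code (libc's
	 * system()), so no new code is injected and no executable stack
	 * is needed. */
	#include <stdlib.h>

	int main(void)
	{
		int (*resume_here)(const char *) = system;	/* the address ASLR would hide */
		return resume_here("echo control transferred to existing code");
	}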
PT_GNU_STACK annoys me :P I'm more interested in 1) PaX' full set of
markings (-ps for NX, -m for mprotect(), r for randmmap, x for
randexec), 2) getting rid of the need for anything but -m, and 3)
eliminating relocations. Sometimes they don't patch GLIBC here and
Firefox won't load flash or Java because they're PT_GNU_STACK and don't
really need it (the java executables are marked, but the java plug-in
doesn't need PT_GNU_STACK).
I guess it works on Exec Shield, but it frightens me that I have to
audit every library an executable uses for a PT_GNU_STACK marking to see
if it has an executable stack.
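For what it's worth, the marking itself is easy to inspect mechanically
(readelf -l shows it too); a minimal sketch for 64-bit ELF files, with
error handling pared down:

	/* Report whether an ELF binary's PT_GNU_STACK header requests an
	 * executable stack. 64-bit ELF assumed for brevity. */
	#include <elf.h>
	#include <stdio.h>

	int main(int argc, char **argv)
	{
		Elf64_Ehdr eh;
		Elf64_Phdr ph;
		FILE *f;
		int i;

		if (argc != 2 || !(f = fopen(argv[1], "rb"))) {
			fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
			return 1;
		}
		if (fread(&eh, sizeof(eh), 1, f) != 1)
			return 1;
		for (i = 0; i < eh.e_phnum; i++) {
			fseek(f, (long)(eh.e_phoff + (Elf64_Off)i * eh.e_phentsize), SEEK_SET);
			if (fread(&ph, sizeof(ph), 1, f) != 1)
				break;
			if (ph.p_type == PT_GNU_STACK) {
				printf("PT_GNU_STACK: stack %s executable\n",
				       (ph.p_flags & PF_X) ? "IS" : "is not");
				return 0;
			}
		}
		printf("no PT_GNU_STACK header (loader default applies)\n");
		return 0;
	}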
> (but no doubt PaX is fine and protects against exploits at least as
> effectively as (and in some cases more effectively than) exec-shield, so
> you've definitely not made a bad choice.)
>
Either one, if it stops an exploit; there's no "stopping an exploit
better," just stopping more of them and having fewer loopholes. As I
understand, ES' NX approximation fails if you relieve protections on a
higher mapping-- which confuses me, isn't vsyscall() a high-address
executable mapping, which would disable NX protection for the full
address space?
PaX disables vsyscall when using PAGEEXEC on x86 because (since 2.6.6 or
so) pipacs uses the same method as ExecShield as a best-effort, falling
back to kernel-assisted MMU walking if that fails. Wasted effort with
vsyscall.
PaX though gives me powerful, flexible administrative control over
executable space protections as a privileged resource.
mprotect(PROT_EXEC|PROT_WRITE) isn't something normal programs need, so
it's not something I allow everyone to do.
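As an illustration of what that privileged-resource model refuses: the
writable-to-executable transition below succeeds on a stock kernel, and
is the kind of call a PaX MPROTECT kernel is described as denying:

	/* Attempt the writable->executable transition that MPROTECT-style
	 * restrictions treat as a privileged operation. */
	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		if (mprotect(p, 4096, PROT_READ | PROT_WRITE | PROT_EXEC) == -1)
			perror("mprotect");	/* expected under a PaX MPROTECT kernel */
		else
			puts("mprotect succeeded: no W^X restriction in effect");
		return 0;
	}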
Aside from that, I just trust the PaX developer more. He's already got
a more developed product; he's a security developer instead of a
scheduler developer; and he reads CPU manuals for breakfast. I think a
lot of PaX is developed without real hardware-- I know he at least
doesn't have an AMD64 (which is what I use PaX on-- and yes I use the
regression tests), and he does a fine job anyway. This indicates to me
that this is a serious project with someone who knows what he's doing,
so I trust it more.
> Ingo
>
--
All content of all messages exchanged herein are left in the
Public Domain, unless otherwise explicitly stated.
* John Richard Moser <[email protected]> wrote:
> Split-out portions of PaX (and of ES) don't make sense. [...]
which shows that you don't know the exec-shield patch at all, nor those
split-out portions. At which point it becomes pretty pointless to
discuss any technical details, don't you think?
> PT_GNU_STACK annoys me :P I'm more interested in 1) PaX' full set of
> markings (-ps for NX, -m for mprotect(), r for randmmap, x for
> randexec), [...]
>
> I guess it works on Exec Shield, but it frightens me that I have to
> audit every library an executable uses for a PT_GNU_STACK marking to
> see if it has an executable stack.
here there are two misconceptions:
1) you claim that the manual setting of markings is better than the
_automatic_ setting of markings in Fedora. Manual setting is a support
and maintenance nightmare; there can be false positives and false
negatives as well. Also, manual setting of markings assumes code review
or 'does this application break' type of feedback - neither is as
reliable as automatic detection done by the compiler.
2) you claim that you have to audit everything. You don't have to. It's
all automatic. _Fedora developers_ (not you) then check the markings and
reduce the number of executable stacks as much as possible.
> [...] ES' NX approximation fails if you relieve protections on a
> higher mapping-- which confuses me, isn't vsyscall() a high-address
> executable mapping, which would disable NX protection for the full
> address space?
another misconception. Read the patch and you'll see how it's solved.
> Aside from that, I just trust the PaX developer more. He's already
> got a more developed product; he's a security developer instead of a
> scheduler developer; and he reads CPU manuals for breakfast.
This is your choice, and I respect it. Please show the same amount of
respect for the choice of others as well, without badmouthing anything
just because their choice is different from yours.
Ingo
> ES has been actively developed since it was poorly implemented in 2003.
> PaX has been actively developed since it was poorly implemented in
> 2000. PaX has had about 4 times longer to go from a poor
> proof-of-concept NX emulation patch based on the plex86 announcement to
> a full-featured security system, and is written by a competent security
> developer rather than a competent scheduler developer.
I would call that an insult to Ingo.
> Split-out portions of PaX (and of ES) don't make sense.
They do. Somewhat.
> ASLR can be
> evaded pretty easily: inject code, read %ebp, find the GOT, read
> addresses. The NX protections can be evaded by using ret2libc. On x86,
> you need emulation to make an NX bit or the NX protections are useless.
actually modern x86 cpus have hardware NX.
> PT_GNU_STACK annoys me :P I'm more interested in 1) PaX' full set of
> markings (-ps for NX, -m for mprotect(), r for randmmap, x for
> randexec), 2) getting rid of the need for anything but -m, and 3)
> eliminating relocations. Sometimes they don't patch GLIBC here and
> Firefox won't load flash or Java because they're PT_GNU_STACK and don't
> really need it (the java executables are marked, but the java plug-in
> doesn't need PT_GNU_STACK).
so remark them.
>
> I guess it works on Exec Shield, but it frightens me that I have to
> audit every library an executable uses for a PT_GNU_STACK marking to see
> if it has an executable stack.
there is lsexec, which does this automatically for you based on running
processes.
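lsexec itself isn't reproduced here, but the flavor of that audit can be
sketched in a few lines of C by scanning /proc/<pid>/maps for mappings
that are both writable and executable (a rough approximation of the
idea, not the actual tool):

#include <stdio.h>
#include <string.h>

/* Rough sketch: list mappings of a process that are both writable and
   executable. /proc/<pid>/maps lines look like:
   "08048000-0804c000 rwxp 00000000 03:01 12345 /path"  */
static void scan_maps(const char *pid)
{
    char path[64], line[512], perms[8];
    FILE *f;

    snprintf(path, sizeof(path), "/proc/%s/maps", pid);
    f = fopen(path, "r");
    if (!f)
        return;
    while (fgets(line, sizeof(line), f))
        if (sscanf(line, "%*x-%*x %7s", perms) == 1 &&
            strchr(perms, 'w') && strchr(perms, 'x'))
            printf("%s: %s", pid, line);
    fclose(f);
}

int main(void)
{
    scan_maps("self");  /* audit the current process as a demo */
    return 0;
}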
>
> Either or if it stops an exploit; there's no "stopping an exploit
> better," just stopping more of them and having fewer loopholes. As I
> understand, ES' NX approximation fails if you relieve protections on a
> higher mapping
which is REALLY rare for programs to do
> -- which confuses me, isn't vsyscall() a high-address
> executable mapping, which would disable NX protection for the full
> address space?
just like PaX, execshield has to disable the vsyscall page.
Exec-Shield actually has the code to 1) move the vsyscall page down in
the address space and 2) randomize it per process, but that is inactive
right now since it needs a bit of help from the VM that isn't provided
anymore since 2.6.8 or so.
> PaX though gives me powerful, flexible administrative control over
> executable space protections as a privileged resource.
> mprotect(PROT_EXEC|PROT_WRITE) isn't something normal programs need; so
> it's not something I allow everyone to do.
it's a balance between compatibility and security. PaX strikes a
somewhat different balance from E-S. E-S goes a long way to avoid
breaking things that POSIX requires, even if they are silly and rare.
Apps don't DO PROT_EXEC|PROT_WRITE normally, after all... so this added
"protection" is to a point artificial.
Ingo Molnar wrote:
> * John Richard Moser <[email protected]> wrote:
>
>
>>Split-out portions of PaX (and of ES) don't make sense. [...]
>
>
> which shows that you dont know the exec-shield patch at all, nor those
> split-out portions. At which point it becomes pretty pointless to
> discuss any technical details, dont you think?
>
I'm shaky on ES details, but I was more remarking on the overlapping
functionality between PaX and ES.
>
>>PT_GNU_STACK annoys me :P I'm more interested in 1) PaX' full set of
>>markings (-ps for NX, -m for mprotect(), r for randmmap, x for
>>randexec), [...]
>>
>>I guess it works on Exec Shield, but it frightens me that I have to
>>audit every library an executable uses for a PT_GNU_STACK marking to
>>see if it has an executable stack.
>
>
> here there are two misconceptions:
>
> 1) you claim that the manual setting of markings is better than the
> _automatic_ setting of markings in Fedora. Manual setting is a support
> and maintenance nightmare; there can be false positives and false
> negatives as well. Also, manual setting of markings assumes code review
> or 'does this application break' type of feedback - neither is as
> reliable as automatic detection done by the compiler.
PaX has trampoline detection/emulation. I think the toolchain spits out
libraries marked -E when one is present. The marking isn't inherited
from libraries to the executable, though; but a quick hack to trace down
everything from `ldd` or `readelf` and pick up the -E would do the same
thing without granting a fully executable stack.
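A sketch of the checking half of such a hack, assuming readelf-style
logic over the program headers (32-bit ELF only, minimal error
handling):

#define _XOPEN_SOURCE 500
#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Report whether a 32-bit ELF file requests an executable stack:
   PT_GNU_STACK carries PF_X in p_flags when the stack must be
   executable. */
int main(int argc, char **argv)
{
    Elf32_Ehdr eh;
    Elf32_Phdr ph;
    int fd, i;

    if (argc < 2)
        return 1;
    fd = open(argv[1], O_RDONLY);
    if (fd < 0 || read(fd, &eh, sizeof(eh)) != sizeof(eh))
        return 1;
    for (i = 0; i < eh.e_phnum; i++) {
        if (pread(fd, &ph, sizeof(ph),
                  eh.e_phoff + (off_t)i * eh.e_phentsize) != sizeof(ph))
            break;
        if (ph.p_type == PT_GNU_STACK) {
            printf("%s: stack %s\n", argv[1],
                   (ph.p_flags & PF_X) ? "EXECUTABLE" : "non-executable");
            return 0;
        }
    }
    printf("%s: no PT_GNU_STACK header\n", argv[1]);
    return 0;
}

Walking the `ldd` output and running a check like this over each library
would then give exactly the audit described above.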
>
> 2) you claim that you have to audit everything. You don't have to. It's
> all automatic. _Fedora developers_ (not you) then check the markings and
> reduce the number of executable stacks as much as possible.
>
And a distribution maintainer could do the same with PaX. Once it's
done it's fairly low maintenance. I know because I've done it myself.
I can determine minimal pax markings on a given binary in about 15
seconds in most cases.
>
>>[...] ES' NX approximation fails if you relieve protections on a
>>higher mapping-- which confuses me, isn't vsyscall() a high-address
>>executable mapping, which would disable NX protection for the full
>>address space?
>
>
> another misconception. Read the patch and you'll see how it's solved.
>
I've been told it maps vsyscall at a lower address, though didn't
remember until after I hit send. Is this true?
>
>>Aside from that, I just trust the PaX developer more. He's already
>>got a more developed product; he's a security developer instead of a
>>scheduler developer; and he reads CPU manuals for breakfast.
>
>
> this is your choice, and i respect it. Please show the same amount of
> respect for the choice of others as well, without badmouthing anything
> just because their choice is different from yours.
>
I respect you as a kernel developer as long as you're doing preemption
and schedulers; but I honestly think PaX is the better technology, and I
think it's important that the best security technology be in place. My
concerns are only with real security, and in that respect I feel that if
I didn't stand up and assert my understanding of the matter, people
might hurt themselves. I can't put a slave collar on people and use
force feedback to control their minds, but I don't have to keep quiet either.
It doesn't much matter at this point. If everything goes well, PaX
should show up in a fairly popular distribution soon, so we'll get to
finally see something added that this conversation lacks: a genuine
large-scale demonstration of the deployment of PaX. ES has that
already; but until I can see the excellence and the failings of PaX
deployed with a target on the average user as well, I can't make
assessments about which deploys better in what cases and why.
On a final note, isn't PaX the only technology trying to apply NX
protections to kernel space? Granted, most kernel exploits aren't RCE;
but it's still a basic protection that should be in place. Wouldn't it
be embarrassing to say one day that RCE is so rare we don't need to waste
effort on kernel-level W/X separation, then the next day see an RCE
exploit? :P (do it murphy, do it! >:P) This is just a generic
observation; as I said, RCE in kernel is rare enough not to be a major
selling point, but it's still a consideration.
> Ingo
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>
Arjan van de Ven wrote:
>>ES has been actively developed since it was poorly implemented in 2003.
>> PaX has been actively developed since it was poorly implemented in
>>2000. PaX has had about 4 times longer to go from a poor
>>proof-of-concept NX emulation patch based on the plex86 announcement to
>>a full-featured security system, and is written by a competent security
>>developer rather than a competent scheduler developer.
>
>
> I would call that an insult to Ingo.
>
You're reading too deeply then.
>
>
>>Split-out portions of PaX (and of ES) don't make sense.
>
>
> they do. Somewhat.
They do to "break all existing exploits" until someone takes 5 minutes
to make a slight alteration. Only the protections combined, each
reinforcing the others, can keep any one of them from being bypassed and
create a truly secure environment.
Ingo said there's other stuff in ES that this doesn't apply to, but
*shrug*, again, that's beyond what I intended when I said that.
>
>>ASLR can be
>>evaded pretty easily: inject code, read %ebp, find the GOT, read
>>addresses. The NX protections can be evaded by using ret2libc. On x86,
>>you need emulation to make an NX bit or the NX protections are useless.
>
>
> actually modern x86 cpus have hardware NX.
not my point. . .
>
>
>>PT_GNU_STACK annoys me :P I'm more interested in 1) PaX' full set of
>>markings (-ps for NX, -m for mprotect(), r for randmmap, x for
>>randexec), 2) getting rid of the need for anything but -m, and 3)
>>eliminating relocations. Sometimes they don't patch GLIBC here and
>>Firefox won't load flash or Java because they're PT_GNU_STACK and don't
>>really need it (the java executables are marked, but the java plug-in
>>doesn't need PT_GNU_STACK).
>
>
> so remark them.
Manually. Annoying, because now I'm doing PaX AND Exec Shield markings,
but I do remark them anyway. This wasn't meant to sound like a
major problem, just a side comment.
>
>
>>I guess it works on Exec Shield, but it frightens me that I have to
>>audit every library an executable uses for a PT_GNU_STACK marking to see
>>if it has an executable stack.
>
>
> there is lsexec, which does this automatically for you based on running
> processes.
>
I don't want to run something potentially dangerous. Think secret
military installation with no name and blank checks made out to nobody.
The security has to scale up and down; it has to be useful for the home
user, for the business, and for those that don't officially exist.
>
>>Either or if it stops an exploit; there's no "stopping an exploit
>>better," just stopping more of them and having fewer loopholes. As I
>>understand, ES' NX approximation fails if you relieve protections on a
>>higher mapping
>
>
> which is REALLY rare for programs to do
>
True, but PaX has a failsafe in PAGEEXEC, and doesn't suffer this in
SEGMEXEC.
>
>>-- which confuses me, isn't vsyscall() a high-address
>>executable mapping, which would disable NX protection for the full
>>address space?
>
>
> just like PaX, execshield has to disable the vsyscall page.
> Exec-Shield actually has the code to 1) move the vsyscall page down in
> the address space and 2) randomize it per process, but that is inactive
> right now since it needs a bit of help from the VM that isn't provided
> anymore since 2.6.8 or so.
>
>
ah.
>
>>PaX though gives me powerful, flexible administrative control over
>>executable space protections as a privileged resource.
>>mprotect(PROT_EXEC|PROT_WRITE) isn't something normal programs need; so
>>it's not something I allow everyone to do.
>
>
> it's a balance between compatibility and security. PaX strikes a
> somewhat different balance from E-S. E-S goes a long way to avoid
> breaking things that POSIX requires, even if they are silly and rare.
> Apps don't DO PROT_EXEC|PROT_WRITE normally, after all... so this added
> "protection" is to a point artificial.
>
>
The actual threat this mitigates is that the app may be ret2libc'd to
mprotect() (possible with an unrandomized ET_EXEC?), but in reality a
more complex attack can accomplish the same thing. I prefer it more as a
speed bump to expose broken code to me, or at least give me an idea of
what to audit. If something HAS to mprotect() the stack, then I HAVE to
make sure that program is audited, or I'm just being a dumbass
waiting to be infected with a cheap worm some script kiddie wrote using
a build-your-own-virus program.
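To make the speed bump concrete: this is, in miniature, the operation
that PaX's MPROTECT restriction refuses (on a stock kernel the second
call succeeds; under PaX it is expected to fail):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    /* start with an ordinary writable, non-executable page */
    void *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* asking for writable AND executable is what gets denied under PaX */
    if (mprotect(p, pagesz, PROT_READ | PROT_WRITE | PROT_EXEC) != 0)
        perror("mprotect denied (as PaX MPROTECT would do)");
    else
        printf("page is now writable and executable\n");
    return 0;
}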
> I respect you as a kernel developer as long as you're doing preemption
> and schedulers; but I honestly think PaX is the better technology, and I
> think it's important that the best security technology be in place.
the difference is not that big, and only in tradeoffs. E.g. PaX trades
virtual address space against protecting a rare occurrence (e.g. where
exec-shield wouldn't work because of a high executable mapping; that
really doesn't happen in normal programs).
> On a final note, isn't PaX the only technology trying to apply NX
> protections to kernel space?
Exec Shield does that too, but only if your CPU has hardware assist for
NX (which all current AMD and most current Intel CPUs do).
Alban Browaeys wrote:
> Bill Davidsen <davidsen <at> tmr.com> writes:
>
>
>>With no disrespect, I don't believe you have ever been a full-time
>>employee system administrator for any commercial or government
>>organization, and I don't believe you have any experience trying to do
>>security when change must be reviewed by technically naive management to
>>justify cost, time, and policy implications. The people on the list who
>>disagree may view the security information issue in a very different
>>context.
>
>
> Basically you are saying that if I disagree, my view is irrelevant. What do
> you expect with this kind of opening?
What I am saying is what I said, no reinterpretation needed. Linus has
not had the experience of being an administrator working where policies
and resources are controlled by someone else. I think experience is
valuable.
>
>
>>Linus Torvalds wrote:
>>Unfortunately reality doesn't agree with you. Many organizations have no
>>other effective way to convince management of the need for a fix except
>>newspaper articles and magazine articles. And sometimes that has to get to
>>the horror story stage before action is possible.
>
>
>
> All those lines to say one thing: managing security requires social skills.
And people who haven't been there would not appreciate the reality if I
said it in one line.
>
>
>
>>>And let's not kid ourselves: the security firms may have resources that
>>>they put into it, but the worst-case scenario is actual criminal intent.
>>>People who really have resources to study security problems, and who have
>>>_no_ advantage of using vendor-sec at all. And in that case, vendor-sec is
>>>_REALLY_ a huge mistake.
>>
>>I think you are still missing the point, I don't care if a security firm
>>reads mailing lists or tea leaves, does research or just knows where to
>>find it, they are paid to do it and if they do it well and report the
>>problems which apply to me and the source of the fixes they keep me from
>>missing something and at the same time save me time. Even reading only
>>good mailing lists and newsgroups it takes a lot of time to keep
>>current, and you see a lot of stuff you don't need.
>>
>
>
> Does this come down to:
Again, it means what it says: security services are cost-effective. They
control the resources used in providing security, because in some
companies there simply aren't the human resources available to do a good
job.
> I want my company to be in control. And nobody else, please, because I do not
> trust them.
> Who would you want on this security board? No hackers, I believe; they have no
> incentive to shut the *** up; they do not care about money or their business or
> who knows why.
>
> So you want:
> a/ everybody is wrong, we cannot understand,
> b/ crackers "are too lazy in many cases to read the high-level hacker boards"
> c/ "How can I have a fix without ever having a hole?".
> Close your eyes and believe; that is the only way to achieve absolute safety.
> I am not kidding, billions of people do this, it seems efficient (only a few
> die by accident).
> d/ the world is mad, nobody cares about security except we who are in charge
> (and have no power in the politics).
> e/ I don't care who does the job, but I want my goddamn system to have no holes.
I don't know how you got to this list from what I posted...
> Sorry for this rude analysis. I assume you want:
> 1/ a way to be alerted of the security holes in your application stack, and
> those only.
> 2/ fixes before the script kiddies.
And that is a reasonable summary of the goals.
>
> For one, the fix is quite easy: it is a matter of getting security alerts in
> an easy way (maybe newsletters are getting old; what about a web interface,
> the way Amazon does it) and a filter on your application stack.
I actually like newsletters and email alerts; they can be set to
generate interrupts at the appropriate level, from "you have mail" to
pager alerts, as desired.
>
> For two, nobody can help. Script kiddies do not even read tech lists. They do
> not make the scripts. Those who make them usually don't just read mailing
> lists; they read source, even binaries.
> And those who make a living from cracking usually do not tell anybody. No
> CERT alert. The only hope is easy-to-read code, and audits.
I don't regard any solution less than perfect as "nobody can help."
Timely fixes significantly reduce the exposure; the world isn't perfect.
As my wife says, "Life's a bitch. Then you die."
--
-bill davidsen ([email protected])
"The secret to procrastination is to put things off until the
last possible moment - but no longer" -me
Arjan van de Ven wrote:
>>I respect you as a kernel developer as long as you're doing preemption
>>and schedulers; but I honestly think PaX is the better technology, and I
>>think it's important that the best security technology be in place.
>
>
> the difference is not that big, and only in tradeoffs. E.g. PaX trades
> virtual address space against protecting a rare occurrence (e.g. where
> exec-shield wouldn't work because of a high executable mapping; that
> really doesn't happen in normal programs).
>
PAGEEXEC uses the same method as Exec Shield, but falls back to
kernel-assisted MMU walking when that fails. This does not split the
address space in half. Stop pretending SEGMEXEC is the only emulation
PaX has. PaX also allows more fine-grained administrative control.
>
>>On a final note, isn't PaX the only technology trying to apply NX
>>protections to kernel space?
>
>
> Exec Shield does that too but only if your CPU has hardware assist for
> NX (which all current AMD and most current intel cpus do).
>
Uh, ok. You've read the code right? *would rather hear from Ingo*
>
>
On Wed, 19 Jan 2005 13:50:23 EST, John Richard Moser said:
> Arjan van de Ven wrote:
> >>Split-out portions of PaX (and of ES) don't make sense.
> > they do. Somewhat.
> They do to "break all existing exploits" until someone takes 5 minutes
> to make a slight alteration. Only the reciprocating combinations of
> each protection can protect the others from being exploited and create a
> truly secure environment.
OK, for those who tuned in late to the telecast of "Kernel Development Process
for Newbies":
It *DOES NOT MATTER* that PaX and ES "don't make sense" split out into 5 or
6 pieces. We merge in stuff *ALL THE TIME* in 20 or 30 chunks, where it
doesn't make any real sense unless all 20 or 30 go in. Just today, there was
a 29-patch monster replacing kexec, and another 12-patcher replacing something
else. And I don't think anybody claims that many of those 29 patches stand
totally by themselves. You install 25 of them, you probably don't have a working
kexec, which is the goal of the patch series.
The point is that *each* of those 29 patches is small and self-contained enough
to review for breakage of current stuff, elegance of coding, and so on. Now
let's look at grsecurity:
% wc grsecurity-2.1.0-2.6.10-200501071049.patch
23539 89686 700414 grsecurity-2.1.0-2.6.10-200501071049.patch
700K. In one patch. If PAX is available for 2.6.10 by itself, it certainly
hasn't been posted to http://pax.grsecurity.net - that's still showing a 2.6.7
patch. But even there, that's a single monolithic 280K patch. That's never
going to get merged, simply because *nobody* can review a single patch that big.
Now look at http://www.kernel.org/pub/linux/kernel/people/arjan/execshield/.
4 separate hunks, the biggest is under 7K. Other chunks of similar size
for non-exec stack and NX support are already merged.
And why were they merged? Because they showed up in 4-8K chunks.
> Split-out portions of PaX (and of ES) don't make sense. ASLR can be
> evaded pretty easily: inject code, read %ebp, find the GOT, read
> addresses. The NX protections can be evaded by using ret2libc. On x86,
> you need emulation to make an NX bit or the NX protections are useless.
> So every part prevents every other part from being pushed gently aside.
Right. But if you *submit* them as "a chunk to add x86 emulation of an NX
bit", "a chunk to add ASLR", "a chunk to add NX", "a chunk to do FOO with the
vsyscall page", and so on, they might actually have a snowball's chance of
being included.
If nothing else, the fact they're posted as different patches means each can be
used as the anchor for a thread discussing the merits of *that* patch. Adrian
Bunk has been submitting patches for the last several weeks which probably
total *well* over the size of the PAX patch. And since they show up as
separate patches, the non-controversial ones can sail by, the ALSA crew can
comment when he hits an ALSA module, the filesystem people can comment when he
hits one of their files, and so on.
> >
> > Exec Shield does that too but only if your CPU has hardware assist for
> > NX (which all current AMD and most current intel cpus do).
> >
>
> Uh, ok. You've read the code right? *would rather hear from Ingo*
I co-developed a bunch of it together with Ingo, in fact, and did lots
and lots of review on it as a whole, and worked on it for over a year in
connection with putting it into the Fedora kernel, etc.
So yes, I have read the code.
> 700K. In one patch. If PAX is available for 2.6.10 by itself, it certainly
> hasn't been posted to http://pax.grsecurity.net - that's still showing a 2.6.7
> patch. But even there, that's a single monolithic 280K patch. That's never
> going to get merged, simply because *nobody* can review a single patch that big.
>
> Now look at http://www.kernel.org/pub/linux/kernel/people/arjan/execshield/.
> 4 separate hunks, the biggest is under 7K. Other chunks of similar size
> for non-exec stack and NX support are already merged.
>
> And why were they merged? Because they showed up in 4-8K chunks.
>
note to readers: I'm still not happy about the split-up and want to
split this up even further into smaller pieces; the split-up there is
only a first-order split.
Linus Torvalds wrote:
>
> On Wed, 12 Jan 2005, Dave Jones wrote:
>
>>For us thankfully, exec-shield has trapped quite a few remotely
>>exploitable holes, preventing the above.
>
>
> One thing worth considering, but maybe a bit _too_ draconian, is a
> capability that says "can execute ELF binaries that you can write to".
>
> Without that capability set, you can only execute binaries that you cannot
> write to, and that you cannot _get_ write permission to (ie you can't be
> the owner of them either - possibly only binaries where the owner is
> root).
How would you map that to interpreted languages? Bash may not be an
issue (in general), but perl, java, SQL, etc, would be. People other
than software developers do write in some of those.
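As a userspace approximation, just to make the proposed semantics
concrete (checked_exec is a hypothetical helper; a real implementation
would live in the kernel's exec path, also check ownership via stat(),
and be gated on the capability bit):

#include <stdio.h>
#include <unistd.h>

/* Refuse to exec any file the invoking user could modify - a rough
   userspace model of the rule; ownership checks are omitted here. */
static int checked_exec(const char *path, char *const argv[],
                        char *const envp[])
{
    if (access(path, W_OK) == 0) {
        fprintf(stderr, "%s: refusing to exec a writable binary\n", path);
        return -1;
    }
    return execve(path, argv, envp);
}

int main(int argc, char **argv)
{
    char *envp[] = { NULL };
    if (argc < 2)
        return 1;
    return checked_exec(argv[1], &argv[1], envp);
}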
> I realize people disagree with me, which is also why I don't in any way
> take vendor-sec as a personal affront or anything like that: I just think
> it's a mistake, and am very happy to be vocal about it, but hey, the
> fundamental strength of open source is exactly the fact that people don't
> have to agree about everything.
That's true, but in practice an administrator who disagrees with a
developer gets to maintain their own application or O/S, and most users
have no recourse but to go to another O/S or app. That makes it far
more practical to explain a point than to storm off, do it yourself,
and have to maintain it forever.
--
-bill davidsen ([email protected])
"The secret to procrastination is to put things off until the
last possible moment - but no longer" -me
[email protected] wrote:
> On Wed, 19 Jan 2005 13:50:23 EST, John Richard Moser said:
>
>>Arjan van de Ven wrote:
>>
>>>>Split-out portions of PaX (and of ES) don't make sense.
>>>
>>>they do. Somewhat.
>
>
>>They do to "break all existing exploits" until someone takes 5 minutes
>>to make a slight alteration. Only the reciprocating combinations of
>>each protection can protect the others from being exploited and create a
>>truly secure environment.
>
>
> OK, for those who tuned in late to the telecast of "Kernel Development Process
> for Newbies":
>
> It *DOES NOT MATTER* that PaX and ES "don't make sense" split out into 5 or
> 6 pieces. We merge in stuff *ALL THE TIME* in 20 or 30 chunks, where it
> doesn't make any real sense unless all 20 or 30 go in. Just today, there was
> a 29-patch monster replacing kexec, and another 12-patcher replacing something
> else. And I don't think anybody claims that many of those 29 patches stand
> totally by themselves. You install 25 of them, you probably don't have a working
> kexec, which is the goal of the patch series.
>
> The point is that *each* of those 29 patches is small and self-contained enough
> to review for breakage of current stuff, elegance of coding, and so on. Now
> let's look at grsecurity:
>
> % wc grsecurity-2.1.0-2.6.10-200501071049.patch
> 23539 89686 700414 grsecurity-2.1.0-2.6.10-200501071049.patch
>
> 700K. In one patch. If PAX is available for 2.6.10 by itself, it certainly
> hasn't been posted to http://pax.grsecurity.net - that's still showing a 2.6.7
> patch. But even there, that's a single monolithic 280K patch. That's never
> going to get merged, simply because *nobody* can review a single patch that big.
>
PaX is available for 2.6.10, but it's in the testing phase. I've had
good results.
> Now look at http://www.kernel.org/pub/linux/kernel/people/arjan/execshield/.
> 4 separate hunks, the biggest is under 7K. Other chunks of similar size
> for non-exec stack and NX support are already merged.
>
> And why were they merged? Because they showed up in 4-8K chunks.
>
so you want 90-200 split out patches for GrSecurity?
>
>>Split-out portions of PaX (and of ES) don't make sense. ASLR can be
>>evaded pretty easily: inject code, read %ebp, find the GOT, read
>>addresses. The NX protections can be evaded by using ret2libc. On x86,
>>you need emulation to make an NX bit or the NX protections are useless.
>>So every part prevents every other part from being pushed gently aside.
>
>
> Right. But if you *submit* them as "a chunk to add x86 emulation of an NX
> bit", "a chunk to add ASLR", "a chunk to add NX", "a chunk to do FOO with the
> vsyscall page", and so on, they might actually have a snowball's chance of
> being included.
>
> If nothing else, the fact they're posted as different patches means each can be
> used as the anchor for a thread discussing the merits of *that* patch. Adrian
> Bunk has been submitting patches for the last several weeks which probably
> total *well* over the size of the PAX patch. And since they show up as
> separate patches, the non-controversial ones can sail by, the ALSA crew can
> comment when he hits an ALSA module, the filesystem people can comment when he
> hits one of their files, and so on.
ok ok ok. I get the point: I'm the only person in the world who can
swallow a twinkie whole, the rest of you need to chew.
On Wed, 19 Jan 2005 15:12:05 EST, John Richard Moser said:
> > And why were they merged? Because they showed up in 4-8K chunks.
> so you want 90-200 split out patches for GrSecurity?
Even better would be a 30-40 patch train for PaX, a 10-15 patch train
for the other randomization stuff in grsecurity (pid, port number, all
the rest of those), a 50-60 patch train for the RBAC stuff, and so on.
Keep in mind that properly segmented, *parts* of grsecurity have at least
a fighting chance - the fact that (for instance) mainline may reject the
way RBAC is implemented because it's not LSM-based doesn't mean that you
shouldn't at least try to get the PaX stuff in, and the randomization stuff,
and so on.
On Wed, 19 Jan 2005 20:53:51 +0100, Arjan van de Ven said:
> > Now look at http://www.kernel.org/pub/linux/kernel/people/arjan/execshield/.
> > 4 separate hunks, the biggest is under 7K. Other chunks of similar size
> > for non-exec stack and NX support are already merged.
> >
> > And why were they merged? Because they showed up in 4-8K chunks.
> >
> note to readers: I'm still not happy about the split up and want to
> split this up even further in smaller pieces; the split up there is only
> a first order split.
Right - the point is that even an idiot like me can get my head wrapped around
that biggest 7K chunk and figure out what's going on. On the other hand, even
the Alan Cox gnome-cluster isn't able to digest a 280K patch...
[trimming the Cc list since this has nothing to do with the thread]
On Wed, 19 Jan 2005 15:12:05 -0500, John Richard Moser <[email protected]> wrote:
> so you want 90-200 split out patches for GrSecurity?
Documentation/SubmittingPatches.txt is all you need to read.
There have been a lot of good projects that failed just because they sat
around saying "my stuff is better" without caring about how to merge it or
without listening to other kernel developers. Then someone reimplemented it
better, submitted it in a way it could be handled, and listened to other
developers, and it got in the kernel and everybody helped to make it better
than the first alternative. Kbuild is a good example of this.
So, if you want to have PAX or grsecurity in the kernel, you probably
should submit patches (in the way described in SubmittingPatches.txt), and if
everybody agrees that it's better, and you listen to other developers and make
changes accordingly, and you don't say "$SOMEPERSON is just a scheduler
developer", perhaps it'll be merged. Of course that's more difficult now,
since people have already done all that work with ES and it's already
working OK for thousands of people.
[email protected] wrote:
> On Wed, 19 Jan 2005 15:12:05 EST, John Richard Moser said:
>
>
>>>And why were they merged? Because they showed up in 4-8K chunks.
>
>
>>so you want 90-200 split out patches for GrSecurity?
>
>
> Even better would be a 30-40 patch train for PaX, a 10-15 patch train
> for the other randomization stuff in grsecurity (pid, port number, all
> the rest of those), a 50-60 patch train for the RBAC stuff, and so on.
>
RBAC first. Some of the other stuff relies on the RBAC system, I'm
told. Not sure what.
> Keep in mind that properly segmented, *parts* of grsecurity have at least
> a fighting chance - the fact that (for instance) mainline may reject the
> way RBAC is implemented because it's not LSM-based doesn't mean that you
> shouldn't at least try to get the PaX stuff in, and the randomization stuff,
> and so on.
>
I think GrSecurity's RBAC is a bit bigger than LSM can accommodate.
Anyway, I wasn't originally trying to get PaX into mainline in this
discussion; I think this started out with me trying to point out why
things like PaX have to be all-or-nothing.
On Wed, 19 Jan 2005 16:03:06 EST, John Richard Moser said:
(New Subject: line to split this thread out...)
> > Even better would be a 30-40 patch train for PaX, a 10-15 patch train
> > for the other randomization stuff in grsecurity (pid, port number, all
> > the rest of those), a 50-60 patch train for the RBAC stuff, and so on.
> >
>
> RBAC first. Some of the other stuff relies on the RBAC system, I'm
> told. Not sure what.
Well, there are 3 classes of stuff:
1) Stuff that's basically independent of RBAC (a lot of randomization stuff,
for instance). These can go as a separate stream.
2) Stuff that is mostly independent of RBAC, but can use it for configuration
and control. So for instance, the PAX stuff (which by itself is close to half
the whole thing) could go in, and possibly with a "stub" patch that adds
control via /proc/kernel/something or a /sys entry. And it's *OK* if your
code has a "shim" in it to make patch 3 work until the new infrastructure
that patch 27 adds shows up, meaning that patch 26 removes a big chunk of
patch 3 (especially if your /sys shim stands on its own even without patch 27).
3) The stuff that literally makes *no* sense if you don't have RBAC.
It may very well make sense to attack the stuff in group (1) *first*, because
then (a) all the kernel users get the benefits (similar to the "non-exec-stack"
patch from execshield - everybody wins from that piece even though it's not all
of the package), and (b) it's an easy way to pile up street cred by demonstrating
with small patches that you are with the program - when some of the later, more
contentious patches show up, it helps if you're recognized as the guy who
already sent in 10-15 patches...
> I think GrSecurity's RBAC is a bit bigger than LSM can accomodate.
Well - what parts of RBAC *can* be done inside the LSM framework?
What parts could be done inside LSM if LSM gained another hook or two (there
*is* precedent for adding a hook for things that can reasonably use it)?
What parts can't be done inside LSM, and why?
> Anyway, I wasn't originally trying to get PaX into mainline in this
> discussion; I think this started out with me trying to point out why
> things like PaX have to be all-or-nothing.
I agree that the sum set of features eventually included needs to cover
all the bases - the big hurdle is factoring it down into patches that stand
a chance.
* John Richard Moser <[email protected]> wrote:
> On a final note, isn't PaX the only technology trying to apply NX
> protections to kernel space? [...]
NX protection for kernel-space overflows on x86 has been part of the
mainline kernel as of June 2004 (released in 2.6.8), on CPUs that
support the NX bit - i.e. latest AMD and Intel CPUs. Let me quote from
the commit log:
http://linux.bkbits.net:8080/linux-2.5/[email protected]
[...]
furthermore, the patch also implements 'NX protection' for kernelspace
code: only the kernel code and modules are executable - so even
kernel-space overflows are harder (in some cases, impossible) to
exploit. Here is how kernel code that tries to execute off the stack is
stopped:
kernel tried to access NX-protected page - exploit attempt? (uid: 500)
Unable to handle kernel paging request at virtual address f78d0f40
printing eip:
...
implemented, split out and brought to you by yours truly, as part
of the exec-shield project. (You know, the one not developed by that
'scheduler developer' ;-)
Ingo
* John Richard Moser <[email protected]> wrote:
> > Exec Shield does that too but only if your CPU has hardware assist for
> > NX (which all current AMD and most current intel cpus do).
>
> Uh, ok. You've read the code right? *would rather hear from Ingo*
FYI, Arjan is one of the exec-shield developers. So he has not only read
the code but has written portions of it.
Ingo
* John Richard Moser <[email protected]> wrote:
> I respect you as a kernel developer as long as you're doing preemption
> and schedulers; [...]
actually, 'preemption and schedulers' ignores 80% of my contributions to
Linux so i'm not sure what to make of your comment :-| Here's a list of
bigger stuff i worked on in the past 3-4 years:
http://redhat.com/~mingo/
as you can readily notice from the directory names alone, 'preemption
and schedulers' is pretty much in the minority :-|
and that list i think sums up the main difference in mindset: to me,
exec-shield is 'just' another kernel feature (amongst dozens), which
solves a problem. I'm not attached to the concept/patch emotionally, i
only want to see a solution for a problem in a pragmatic way. Playing
with lowlevel segment details is not nice and not always fun and results
in tradeoffs i don't like, but it's pretty much the only technique that
works on older x86 CPUs (as PaX has proven it too). If something better
comes along, then more power to it.
> [...] but I honestly think PaX is the better technology, and I think
> it's important that the best security technology be in place. [...]
i like PaX's completeness, and it has different tradeoffs. There is one
major and two medium tradeoffs that PaX has, from a distributor's POV:
1) the halving of the per-process VM space from 3GB to 1.5GB.
[ 2) the technique PaX uses (mirrored vmas) is pretty complex in terms
of MM code. ]
[ 3) requires manual tagging of applications. ]
The technique exec-shield uses (to track the per-process 'highest
executable address') is pretty simple and non-intrusive on the
implementational level, but it also results in exec-shield's main
tradeoff:
certain VM allocation patterns (e.g. doing mprotect() on an area that
was allocated not via PROT_EXEC and was thus mapped high) can reduce
exec-shield to 'only protects the stack against execution', or if
the application needs an executable stack then reduces exec-shield to
'no protection'.
it turns out these cases where exec-shield gets reduced are quite rare
and don't happen in critical applications. (partly because we fixed
affected critical applications - such fixes made sense even when not
considering the exec-shield impact.)
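The allocation pattern in question, reduced to its essentials
(illustrative only - where the mapping actually lands depends on the
kernel's mmap layout):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map a region without PROT_EXEC (so it may be placed high in the
   address space), then flip it executable afterwards. Under the
   exec-shield scheme this can raise the per-process "highest
   executable address" and weaken protection for everything below. */
int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    void *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;
    printf("mapped at %p\n", p);
    if (mprotect(p, pagesz, PROT_READ | PROT_EXEC) != 0)
        return 1;
    return 0;
}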
If a 'generic' distribution (i.e. one that has a significant userbase,
has thousands of packages that do get used) deviates from mainline it
wants to do it as simply as possible. (otherwise it would have the
overhead of porting/testing those deviations all the time.) In fact,
most of the extra patches that distribution kernels apply are patches
that they think will go mainline soon. If they apply any patch they know
won't be merged anytime soon, they only do it if it is really needed, and
even then they try to choose the variant that is smaller and easier to
maintain. Another important aspect is that extra patches should
obviously _widen_ the utility of the system, not narrow it.
On x86, VM space is scarce so PaX's halving of the VM space is a
'reduced utility' problem. (yes, you can turn it off per application and
get processes that have 3GB of address space, but that removes security
for those processes. Also, you cannot know in advance whether an
application will use more than 1.5GB of VM - different systems have
different usage patterns.)
[ PaX's #2 tradeoff is a maintenance overhead issue. Not an
insurmountable issue, because it is well-written kernel code, but
combined with #1 it can tip the scale. PaX's #3 tradeoff is fixable -
it could very well use the PT_GNU_STACK code now upstream. ]
you seem to be arguing for a 'take no prisoners' approach to security,
and that is a perfectly fine approach if you maintain your own variant
of a distribution.
the other approach to security (which Fedora follows) is to 'make it as
seamless and automatic as possible, so that people actually end up using
our stuff.'
so while exec-shield is not "complete" in the sense of PaX, in practice
it is like 99% complete. E.g. on my Fedora desktop box:
$ lsexec --all | grep 'execshield enabled' | wc -l
86
$ lsexec --all | grep 'execshield disabled' | wc -l
0
and that's what really matters at the end of the day. (Anyway, you don't
have to believe and/or follow any of this, you are free to run your own
distribution, and if it's good then people will inevitably end up using
it.)
Ingo
Ingo Molnar wrote:
> * John Richard Moser <[email protected]> wrote:
>
>
>>I respect you as a kernel developer as long as you're doing preemption
>>and schedulers; [...]
>
>
> actually, 'preemption and schedulers' ignores 80% of my contributions to
> Linux so i'm not sure what to make of your comment :-| Here's a list of
> bigger stuff i worked on in the past 3-4 years:
>
> http://redhat.com/~mingo/
>
> as you can readily notice from the directory names alone, 'preemption
> and schedulers' is pretty much in the minority :-|
>
> and that list i think sums up the main difference in mindset: to me,
> exec-shield is 'just' another kernel feature (amongst dozens), which
> solves a problem. I'm not attached to the concept/patch emotionally, i
> only want to see a solution for a problem in a pragmatic way. Playing
> with lowlevel segment details is not nice and not always fun and results
> in tradeoffs i don't like, but it's pretty much the only technique that
> works on older x86 CPUs (as PaX has proven it too). If something better
> comes along, then more power to it.
>
>
Granted, you're somewhat more diverse than I pointed out; but I don't
keep up on what you're doing. The point was more that you're not a
major security figure and/or haven't donated your life to security and
forsaken all lovers before it like Joshua Brindle or Brad Spengler or
whoever the anonymous guy who develops PaX is. I guess less focus on
the developer and more focus on the development.
>>[...] but I honestly think PaX is the better technology, and I think
>>it's important that the best security technology be in place. [...]
>
>
> i like PaX's completeness, and it has different tradeoffs. There is one
> major and two medium tradeoffs that PaX has, from a distributor's POV:
>
> 1) the halving of the per-process VM space from 3GB to 1.5GB.
>
Which has *never* caused a problem in anything I've ever used, and can
be disabled on a per-process basis.
> [ 2) the technique PaX uses (mirrored vmas) is pretty complex in terms
> of MM code. ]
>
*shrug* The kernel's basic initialization code is in assembly. On 40
different platforms. That's pretty complex in terms of kernel code,
which is 99.998% C.
> [ 3) requires manual tagging of applications. ]
>
Good. Maybe distributors will actually know what they're talking about
when flapping their mouths, rather than saying "Oh look, PaX, it's magic,
so we just need to turn it on!" Even I (at user level) examine everything
I'm using and try to understand it; I don't expect all users to do this,
but the distribution has to.
Once I was on the SELinux toy box (a honeypot-type thing) that Gentoo
set up, with root. The first thing I did was run a 2-line shell command
to scan for and inform me of any areas I could write to. I was only
supposed to be able to write to /tmp, but I found 2 or 3 more. Holes in
the Gentoo SELinux policy, which PeBenito fixed in about 2 minutes.
He had to write that policy by hand. How bad do you think it'd have
been--not just what I caught, but what I wouldn't have been able to
catch with just a cursory look, maybe serious flaws--if the policy were
automatically generated? I can only imagine an auto-generated policy:
drop it in, and you think you're secure for a good long while. . . .
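The scan itself is nothing exotic; a rough C equivalent of that
two-liner (walks the tree and reports every directory the current,
unprivileged user can write to):

#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Report every directory writable by the invoking user. */
static int check(const char *path, const struct stat *sb,
                 int type, struct FTW *ftwbuf)
{
    (void)sb; (void)ftwbuf;
    if (type == FTW_D && access(path, W_OK) == 0)
        printf("writable: %s\n", path);
    return 0;  /* keep walking */
}

int main(void)
{
    return nftw("/", check, 32, FTW_PHYS);
}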
Even when the tagging is all automatic, to really deploy a competently
formed system you have to review the results of the automated tagging.
It's a bit easier in most cases to automate-and-review, but it still has
to be done. I think in the case of PaX markings, the maintenance
overhead of manually marking binaries is minimal enough that looking for
mistakes would be more work than working from an already known and
familiar base.
Also, a modified toolchain spits out ELF binaries marked -E when you
need emutramp (I've seen this, but I don't know if it's in SOME or ALL
cases), which is normally what causes you to need an executable stack.
An automated tool could read the ELF header (ldd, readelf) and trace
down all libraries for each program, looking for any with -E. If it
finds one, it marks the program -E too. That only leaves a few points of
breakage, particularly things like zsnes, which needs to mprotect() a
huge hunk of assembly code to make it writable. A sketch of such a tool
follows.
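The walking half, assuming ldd output parsing (fragile, but enough to
show the shape; the marking check itself would be an ELF program-header
scan like the readelf-style one described earlier in the thread):

#include <stdio.h>
#include <string.h>

/* Print each library path from ldd output; lines look like
   "libc.so.6 => /lib/libc.so.6 (0x4001e000)". A real tool would run
   its marking check on each path and propagate -E to the program. */
int main(int argc, char **argv)
{
    char cmd[512], line[512], lib[256];
    FILE *p;

    if (argc < 2)
        return 1;
    snprintf(cmd, sizeof(cmd), "ldd %s", argv[1]);
    p = popen(cmd, "r");
    if (!p)
        return 1;
    while (fgets(line, sizeof(line), p)) {
        char *arrow = strstr(line, "=> ");
        if (arrow && sscanf(arrow + 3, "%255s", lib) == 1)
            printf("would check markings on %s\n", lib);
    }
    pclose(p);
    return 0;
}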
> The technique exec-shield uses (to track the per-process 'highest
> executable address') is pretty simple and non-intrusive on the
> implementational level, but it also results in exec-shield's main
> tradeoff:
>
> certain VM allocation patterns (e.g. doing mprotect() on an area that
> was allocated not via PROT_EXEC and was thus mapped high) can reduce
> exec-shield to 'only protects the stack against execution', or if
> the application needs an executable stack then reduces exec-shield to
> 'no protection'.
>
Which brings us to a point on (1) and (2). You and others continue to
pretend that SEGMEXEC is the only NX emulation in PaX. I should remind
you (again) that PAGEEXEC has used the same method that Exec Shield uses
since, I believe, kernel 2.6.6. In the cases where this method fails, it
falls back to kernel-assisted MMU walking, which can produce potentially
high overhead.
This combination is more suitable for enterprise production environments
that chose the system for its security. The program will still "work"
with the fallback method. It will probably be a little slower, and may
be a lot slower. But, per your own arguments, this situation is
"extremely rare" and it should normally use the highest executable
address method. Even when this rare case occurs and it falls back to
KAMMUW, at least it's not sacrificing security to do it.
> it turns out these cases where exec-shield gets reduced are quite rare
> and don't happen in critical applications. (partly because we fixed
> affected critical applications - such fixes made sense even when not
> considering the exec-shield impact.)
>
Applause to you for fixing things. That's what we need.
> If a 'generic' distribution (i.e. one that has a significant userbase,
> has thousands of packages that do get used) deviates from mainline it
> wants to do it as simply as possible.
"Things may suck and we may have awesome ideas and may be able to add X,
Y, and Z that combined mitigate 60-80% of security problems, but we
don't want to deviate from mainline that much."
I'm one of the type that aims for progress. My idea of progress is
outlined below.
If it breaks a handful of things, you set up a temporary work-around
while you fix those.
If it breaks a LOT, you examine why things break. If it's that they're
coded badly (i.e. every program on initialization mprotect()s everything
PROT_EXEC for no reason whatsoever), you fix it, you smack people for
it, then you go ahead. If it's that your implementation is retarded,
you find a better way.
If it breaks EVERYTHING, you find another way. If your implementation
breaks EVERYTHING, yet can be easily adjusted for, and solves a HUGE
chunk of problems without creating more (i.e. solving security while
imposing 99% overhead is wrong), then you may need to take it up with
the community. If it's examined, there's no other way, and people
realize that this is actually an important step forward, then maybe
they'll bite the bullet and do it.
If you're just an idiot and you broke things for no reason when other
solutions are perfectly fine and work just as well, then screw you.
Somebody else will do it better, and then we'll use it.
THAT is my idea of progress. We all spend too much time sitting around
whining about how much work it is to do this and that. Then somebody
does the work, and we all sit around whining that we don't want to spend
the 5 minutes to actually put it in place. This is not progress, this
is ass.
> (otherwise it would have the
> overhead of porting/testing those deviations all the time.) In fact,
> most of the extra patches that distribution kernels apply are patches
> that they think will go mainline soon. If they apply any patch they know
> won't be merged anytime soon, they only do it if it is really needed, and
> even then they try to choose the variant that is smaller and easier to
> maintain. Another important aspect is that extra patches should
> obviously _widen_ the utility of the system, not narrow it.
This is contrary and yet not contrary to security. Remember, all
security implements a policy to allow the targeted restriction of the
system, yet to give the administrator power over that restriction. For
example, SELinux doesn't make the system capable of doing more stuff; it
makes the administrator capable of making it so you can't run an IRC
server from your home directory.
>
> On x86, VM space is scarce so PaX's halving of the VM space is a
> 'reduced utility' problem. (yes, you can turn it off per application and
> get processes that have 3GB of address space, but that removes security
> for those processes. Also, you cannot know in advance whether an
> application will use more than 1.5GB of VM - different systems have
> different usage patterns.)
PAGEEXEC
And I've yet to see this 1.5GB split problem. If you have a specialized
application, you need to mark it. Generic apps don't do this.
>
> [ PaX's #2 tradeoff is a maintenance overhead issue. Not an
> insurmountable issue because it is well-written kernel code, but
> combined with #1 it can tip the scale. PaX's #3 tradeoff is fixable -
> it could very well use the PT_GNU_STACK code now upstream. ]
>
PT_GNU_STACK is actually explicitly disabled -- apparently this is hard
work, as my distribution can't seem to always keep up with it or get it
quite right.
> you seem to be arguing for a 'take no prisoners' approach to security,
> and that is a perfectly fine approach if you maintain your own variant
> of a distribution.
>
> the other approach to security (which Fedora follows) is to 'make it as
> seamless and automatic as possible, so that people actually end up using
> our stuff.'
>
I'd like to point out that I split "users" and "upstream developers,"
although you may have a combined view of the two as "users." I don't
mind hurting a few people's feelings (SUN, BLACKDOWN, IBM -->
http://www.kaffe.org/pipermail/kaffe/2004-October/099938.html) and
causing a *slight* maintenance increase if it means castrating 80% of
anything an attacker can hope to do
(https://www.ubuntulinux.org/wiki/USNAnalysis).
> so while exec-shield is not "complete" in the sense of PaX, in practice
> it is like 99% complete. E.g. on my Fedora desktop box:
>
> $ lsexec --all | grep 'execshield enabled' | wc -l
> 86
> $ lsexec --all | grep 'execshield disabled' | wc -l
> 0
>
> and that's what really matters at the end of the day. (Anyway, you don't
> have to believe and/or follow any of this, you are free to run your own
> distribution, and if it's good then people will inevitably end up using
> it.)
>
I intend to, if I can ever figure out how to store BLOB data in an
SQLite database. I'd like very much to get my own package manager up
and running and start up my own distribution, because I realize that if
I have that, I have a large amount of control over what goes into it.
Although I'm the type that likes to try radical changes and
enhancements, I'm also the type that tries to design robustly enough
that these enhancements won't be forced down your throat if they break
things. I think I'll do pretty well IF I ever get it off the ground :)
This remains to be seen, of course; my ego's not THAT big.
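For what it's worth, the SQLite C API takes BLOBs directly through
sqlite3_bind_blob(); a minimal sketch, with a made-up pkg table, might
look like:

#include <sqlite3.h>

/* Insert a BLOB using a prepared statement. Assumes a table created
   with: CREATE TABLE pkg (name TEXT, data BLOB); - the table and
   column names are invented for this example. */
int store_blob(sqlite3 *db, const char *name, const void *data, int len)
{
    sqlite3_stmt *stmt;
    int rc = sqlite3_prepare(db,
            "INSERT INTO pkg (name, data) VALUES (?, ?)",
            -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;
    sqlite3_bind_text(stmt, 1, name, -1, SQLITE_STATIC);
    sqlite3_bind_blob(stmt, 2, data, len, SQLITE_STATIC);
    rc = sqlite3_step(stmt);         /* SQLITE_DONE on success */
    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}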
> Ingo
>
On Thu, 2005-01-20 at 13:16 -0500, John Richard Moser wrote:
> Even when the tagging is all automatic, to really deploy a competently
> formed system you have to review the results of the automated tagging.
> It's a bit easier in most cases to automate-and-review, but it still has
> to be done. I think in the case of PaX markings, the maintenance
> overhead of manually marking binaries is minimal enough that looking for
> mistakes would be more work than working from an already known and
> familiar base.
well, marking with PT_GNU_STACK is similar; the execstack tool (part of
the prelink package) both shows and can change the existing marking of
binaries/libs.
How is that much different from what PaX provides?
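For reference, a typical execstack session looks like this (the library
path is hypothetical):

$ execstack -q /usr/lib/libfoo.so
X /usr/lib/libfoo.so
$ execstack -c /usr/lib/libfoo.so
$ execstack -q /usr/lib/libfoo.so
- /usr/lib/libfoo.so

where "X" means the executable-stack flag is set and "-" means it is
clear.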
On Thu, 20 Jan 2005 13:16:33 EST, John Richard Moser said:
> > 1) the halving of the per-process VM space from 3GB to 1.5GB.
> Which has *never* caused a problem in anything I've ever used, and can
> be disabled on a per-process basis.
Just because something has never caused *you* a problem doesn't mean that
it's suitable for inclusion in something like RedHat where it's almost
certain to cause a problem for *some* user.
> > [ 3) requires manual tagging of applications. ]
> >
>
> Good. Maybe distributors will actually know what they're talking about
> when flapping their mouths, rather than say "Oh look PaX it's magic so
> we just need to turn it on!" Even I (at user level) examine everything
> I'm using and try to understand it; I don't expect all users to do this,
> but the distribution has to.
OK.. but then you say...
> PT_GNU_STACK is actually explicitly disabled -- apparently this is hard
> work, as my distribution can't seem to always keep up with it or get it
> quite right.
Can you explain why your distro has difficulty getting PT_GNU_STACK 100%
right, but you expect them to get tagging of apps with a flag that has
almost identical semantics to PT_GNU_STACK correct?
Arjan van de Ven wrote:
> On Thu, 2005-01-20 at 13:16 -0500, John Richard Moser wrote:
>
>>Even when the tagging is all automatic, to really deploy a competently
>>formed system you have to review the results of the automated tagging.
>>It's a bit easier in most cases to automate-and-review, but it still has
>>to be done. I think in the case of PaX markings, the maintenance
>>overhead of manually marking binaries is minimal enough that looking for
>>mistakes would be more work than working from an already known and
>>familiar base.
>
>
>
> well, marking with PT_GNU_STACK is similar; the execstack tool (part of
> the prelink package) both shows and can change the existing marking of
> binaries/libs.
>
> How is that much different from what PaX provides?
>
>
The point was more that it's easier to avoid embarrassments like "What?
Plug-ins are marked PT_GNU_STACK but don't need it? Firefox is a high
risk application and we're giving it an executable stack needlessly?!
SOMEBODY TOLD WIRED THIS?! *IT'S ON SLASHDOT?!!?!!?*" when you do ALL
of the marking manually, so that you know who has what.
The reason for this is that rather than check every marking on every
program (and library, in the ES case), you just run each program. You do
run each program, right? Or is your distribution's QA shit? I'd hope
you test each program carefully to make sure it actually works, so this
should be normal anyway. When you run into an ES or PaX problem, you
know to track it down and mark it. No accidental mismarking setting
things less secure than they have to be.
I usually encourage deploying a new security system like SSP, PaX, or
the use of PIE binaries across everything on the development boxes, and
then cleaning up the breakage. The reason for this is that you
quickly--without having to second-guess an automatic marking system or
specifically examine each program in testing separated from your normal
QA--locate ALL breakage in your normal QA testing routine AND come out
with the tightest security settings possible. (On the same note, never
ever make a release with protections you haven't actually tested.)
On Thu, Jan 20, 2005 at 01:16:33PM -0500, John Richard Moser wrote:
> Granted, you're somewhat more diverse than I pointed out; but I don't
> keep up on what you're doing. The point was more that you're not a
> major security figure and/or haven't donated your life to security and
> forsaken all lovers before it like Joshua Brindle or Brad Spengler or
> whoever the anonymous guy who develops PaX is. I guess less focus on
> the developer and more focus on the development.
But Ingo is someone who
- is a known all-round kernel hacker
- has a track record of getting things done that actually get used
- has lowlevel CPU knowledge
- is able to communicate with other developers very well
- is able to make good tradeoffs
- has taste
most of that can't be said for your personal heroes.
> *shrug* The kernel's basic initialization code is in assembly. On 40
> different platforms. That's pretty complex in terms of kernel code,
> which is 99.998% C.
No, the kernel initialization is not complex at all. Complexity != code size.
> Which brings us to a point on (1) and (2). You and others continue to
> pretend that SEGMEXEC is the only NX emulation in PaX. I should remind
> you (again) that PAGEEXEC uses the same method that Exec Shield uses
> since I believe kernel 2.6.6. In the cases where this method fails, it
> falls back to kernel-assisted MMU walking, which can produce potentially
> high overhead.
You stated that a few times. Now let's welcome you to reality:
- Linus doesn't want to make the tradeoffs for segment based NX-bit
emulation in mainline at all
- Ingo and his colleagues at Red Hat want to have it, but they don't
  want to break applications nor introduce the additional complexity
  of the PaX code.
Is it that hard to understand?
Christoph Hellwig wrote:
> On Thu, Jan 20, 2005 at 01:16:33PM -0500, John Richard Moser wrote:
>
>>Granted, you're somewhat more diverse than I pointed out; but I don't
>>keep up on what you're doing. The point was more that you're not a
>>major security figure and/or haven't donated your life to security and
>>forsaken all lovers before it like Joshua Brindle or Brad Spengler or
>>whoever the anonymous guy who develops PaX is. I guess less focus on
>>the developer and more focus on the development.
>
>
> But Ingo is someone who
>
> - is a known all-round kernel hacker
> - has a track record of getting things done that actually get used
> - has low-level CPU knowledge
> - is able to communicate with other developers very well
> - is able to make good tradeoffs
> - has taste
>
> most of that can't be said for your personal heroes.
>
That's all good, but I notice that deep competence in the needs of a
properly secured environment is missing from your list. Is he
exceedingly knowledgeable about security? If not, who is he working
with who will fill in his gaps in understanding?
The PaX developer may not be well known, or have anything in the kernel,
but I've talked to the guy. He has CPU manuals for practically
everything, and *gasp* he READS them! He maintains PaX himself, and it
works well on my AMD64; he has the manual, but not the CPU.
The trade-off of SEGMEXEC is 50% of VM space and 0.7% CPU. The
trade-off of PAGEEXEC (on x86; a real NX bit is used on other CPUs) is
identical to Exec Shield's until that method fails, at which point it
falls back to a potentially very painful CPU trade-off that is
necessary to continue offering a supported NX bit.
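For what it's worth, the property every one of these schemes tries to
enforce, whether by segment limits, kernel-assisted page table walking,
or a hardware NX bit, is that writable data must not be executable. A
hedged little probe in the spirit of the paxtest suite (illustrative
only; what it reports depends on the kernel, the CPU, and the binary's
markings):

#include <signal.h>
#include <setjmp.h>
#include <stdio.h>

static sigjmp_buf env;

static void on_fault(int sig)
{
    siglongjmp(env, 1);
}

int main(void)
{
    /* A single x86 "ret" instruction placed in writable .data. */
    static unsigned char code[] = { 0xc3 };
    void (*fn)(void) = (void (*)(void)) (void *) code;

    signal(SIGSEGV, on_fault);
    if (sigsetjmp(env, 1) == 0) {
        fn();   /* executing data: should fault when NX is enforced */
        puts("data was executable (no NX enforcement)");
    } else {
        puts("executing data faulted (NX enforced)");
    }
    return 0;
}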
Explain "Taste."
>
>>*shrug* The kernel's basic initialization code is in assembly. On 40
>>different platforms. That's pretty complex in terms of kernel code,
>>which is 99.998% C.
>
>
> No, the kernel initialization is not complex at all. Complexity != code size.
>
I was more pointing out that it was assembler code. Clean and simple as
it may be, you come back in 10 years and try to maintain it.
>
>>Which brings us to a point on (1) and (2). You and others continue to
>>pretend that SEGMEXEC is the only NX emulation in PaX. I should remind
>>you (again) that PAGEEXEC uses the same method that Exec Shield uses
>>since I believe kernel 2.6.6. In the cases where this method fails, it
>>falls back to kernel-assisted MMU walking, which can produce potentially
>>high overhead.
>
>
> You stated that a few times. Now let's welcome you to reality:
>
> - Linus doesn't want to make the tradeoffs for segment based NX-bit
> emulation in mainline at all
It's an option, set in menuconfig. It's not a forced trade-off, so
integrating it (btw I wasn't and am not currently arguing to get PaX
integrated) wouldn't really force a trade-off on the user.
Back to the above, this argument doesn't cover page-based NX-bit emulation.
> - Ingo and his colleagues at Red Hat want to have it, but they don't
>   want to break applications nor introduce the additional complexity
>   of the PaX code.
>
Nor guarantee that the NX emulation is actually durable.
> Is it that hard to understand?
>
>
What's hard to understand is the constant banter about SEGMEXEC when
there's a second mode AND when it's completely optional. Are you trying
to make it sound as though PaX isn't innovative and the trade-offs are
obscene and infeasible in an everyday environment? Why is PAGEEXEC
ignored in every argument, and SEGMEXEC focused on, when one or both
can be disabled so that the VM split goes away?
Could it be that you can't argue against PAGEEXEC because it uses the
exact same method that Exec Shield uses, and falls back to a heavier one
when that fails; yet you argue that Exec Shield shouldn't fail except in
extremely rare cases, so you can't hold the possibly heavy-overhead case
in PAGEEXEC to question without invalidating your own arguments?
What's wrong with PAGEEXEC? Why focus on SEGMEXEC?
The only thing I ever complain about concerning Exec Shield's principal
implementation is that it can fail in certain conditions. The
deployment side (PT_GNU_STACK and automated marking) I don't even know
why I touched on; perhaps I should try to separate ES from Red Hat's
overall smoke-and-mirrors approach to security, since ES at least
supplies a partially functional and reciprocating NX-ASLR protection.
[email protected] wrote:
> On Wed, 19 Jan 2005 13:50:23 EST, John Richard Moser said:
>
>>Arjan van de Ven wrote:
>>
>>>>Split-out portions of PaX (and of ES) don't make sense.
>>>
>>>they do. Somewhat.
>
>
>>They do to "break all existing exploits" until someone takes 5 minutes
>>to make a slight alteration. Only the reciprocating combinations of
>>each protection can protect the others from being exploited and create a
>>truly secure environment.
>
>
> OK, for those who tuned in late to the telecast of "Kernel Development Process
> for Newbies":
>
> It *DOES NOT MATTER* that PaX and ES "don't make sense" split out into 5 or
> 6 pieces. We merge in stuff *ALL THE TIME* in 20 or 30 chunks, where it
> doesn't make any real sense unless all 20 or 30 go in. Just today, there was
> a 29-patch monster replacing kexec, and another 12-patcher replacing something
> else. And I don't think anybody claims that many of those 29 patches stand
> totally by themselves. You install 25 of them, you probably don't have a working
> kexec, which is the goal of the patch series.
>
> The point is that *each* of those 29 patches is small and self-contained enough
> to review for breakage of current stuff, elegance of coding, and so on. Now
> let's look at grsecurity:
>
> % wc grsecurity-2.1.0-2.6.10-200501071049.patch
> 23539 89686 700414 grsecurity-2.1.0-2.6.10-200501071049.patch
>
> 700K. In one patch. If PAX is available for 2.6.10 by itself, it certainly
> hasn't been posted to http://pax.grsecurity.net - that's still showing a 2.6.7
> patch. But even there, that's a single monolithic 280K patch. That's never
> going to get merged, simply because *nobody* can review a single patch that big.
>
> Now look at http://www.kernel.org/pub/linux/kernel/people/arjan/execshield/.
> 4 separate hunks, the biggest is under 7K. Other chunks of similar size
> for non-exec stack and NX support are already merged.
>
> And why were they merged? Because they showed up in 4-8K chunks.
>
>
>>Split-out portions of PaX (and of ES) don't make sense. ASLR can be
>>evaded pretty easily: inject code, read %efp, find the GOT, read
>>addresses. The NX protections can be evaded by using ret2libc. on x86,
>>you need emulation to make an NX bit or the NX protections are useless.
>>So every part prevents every other part from being pushed gently aside.
>
>
> Right. But if you *submit* them as "a chunk to add x86 emulation of an NX
> bit", "a chunk to add ASLR", "a chunk to add NX", "a chunk to do FOO with the
> vsyscall page", and so on, they might actually have a snowball's chance of
> being included.
>
> If nothing else, the fact they're posted as different patches means each can be
> used as the anchor for a thread discussing the merits of *that* patch. Adrian
> Bunk has been submitting patches for the last several weeks which probably
> total *well* over the size of the PAX patch. And since they show up as
> separate patches, the non-controversial ones can sail by, the ALSA crew can
> comment when he hits an ALSA module, the filesystem people can comment when he
> hits one of their files, and so on.
Unfortunately if A depends on B to work at all, you have to put A and B
in as a package. There is no really good way (AFAIK) to submit a bunch
of patches and say "if any one of these is rejected the whole thing
should be ignored." While akpm and others do a great job of noting
related parts, that's not the ideal solution. Ideally the monolithic
patch should be checked in parts by the people you mention, or there
should be an "all or nothing" protocol better than dropping the
responsibility on the maintainer.
Adding and vetting things in stages works only when the parts work
independently, and that's not always the case. You don't leap vast
chasms in small cautious steps.
--
bill davidsen <[email protected]>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
On Tue, 25 Jan 2005, Bill Davidsen wrote:
>
> Unfortunately if A depends on B to work at all, you have to put A and B
> in as a package.
No. That's totally bogus. You can put in B on its own. You do not have to
make A+B be one patch.
> There is no really good way (AFAIK) to submit a bunch of patches and
> say "if any one of these is rejected the whole thing should be ignored."
But that's done ALL THE TIME. Claiming that there is no good way is not
only disingenuous (we call them "numbers", and they start at 1, go to 2,
then 3. Then there's usually a 0-patch which only contains explanations
of the series), but it's clearly not true, since we have patches like that
weekly.
In the last seven days the kernel mailing list has seen at least four
such series where patches depend at least partly on each other:
- Kay Sievers: driver core: export MAJOR/MINOR to the hotplug (0-7)
- Andreas Gruenbacher: NFSACL protocol extension for NFSv3 (0-13)
- Roland Dreier: InfiniBand updates for 0-12
- Roland McGrath: per-process timers (1-7)
and that was from just a quick look. It seems to be almost a daily
occurrence.
In short: listen to Arjan, because he is wise. And stop making totally
idiotic excuses that are clearly not true.
Linus
Linus Torvalds wrote:
>
> On Tue, 25 Jan 2005, Bill Davidsen wrote:
>
>>Unfortunately if A depends on B to work at all, you have to put A and B
>>in as a package.
>
>
> No. That's totally bogus. You can put in B on its own. You do not have to
> make A+B be one patch.
No, perhaps it isn't clear. If A changes the way a lock is used (for
example), then all the places which were using the lock the old way have
to use it the new way, or lockups or similar bad behaviour occur.
Did I say it more clearly? Some things, like locks, have to have all the
players using the same rules.
>
>
>>There is no really good way (AFAIK) to submit a bunch of patches and
>>say "if any one of these is rejected the whole thing should be ignored."
>
>
> But that's done ALL THE TIME. Claiming that there is no good way is not
> only disingenuous (we call them "numbers", and they start at 1, go to 2,
> then 3. Then there's usually a 0-patch which only contains explanations
> of the series), but it's clearly not true, since we have patches like that
> weekly.
Again, I said later that it depends on the maintainer not to apply one
part which won't work without the others. Not that it wasn't happening,
but that there's nothing more formal than human talent. I don't regard
that as a really good way, since it makes more work for maintainers.
I really think the original post was reasonably clear that I was
suggesting a more formal means of designating things which should be
accepted as a unit, not whatever you read into it.
--
bill davidsen <[email protected]>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
Bill Davidsen wrote:
> Linus Torvalds wrote:
>
>>
>> On Tue, 25 Jan 2005, Bill Davidsen wrote:
>>
>>> Unfortunately if A depends on B to work at all, you have to put A and
>>> B in as a package.
>>
>>
>>
>> No. That's totally bogus. You can put in B on its own. You do not have
>> to make A+B be one patch.
>
>
> No, perhaps it isn't clear. If A changes the way a lock is used (for
> example), then all the places which were using the lock the old way have
> to use it the new way, or lockups or similar bad behaviour occur.
>
Actually, the issue I was looking at was more focused on security
patches which implement multiple security countermeasures, each of
which does precisely dick on its own, except that they cover each
other's flaws so that together they create a real solution.
It's kind of like locking your front door, or your back door. If one is
locked and the other is still wide open, then you might as well
not even have doors. If you lock both, then you (finally) create a
problem for an intruder.
That is to say, patch A will apply and work without B; patch B will
apply and work without patch A; but there's no real gain from using
either without the other.
> Did I say it more clearly? Some things, like locks, have to have all the
> players using the same rules.
>
>>
>>
>>> There is no really good way (AFAIK) to submit a bunch of patches and
>>> say "if any one of these is rejected the whole thing should be ignored."
>>
>>
>>
>> But that's done ALL THE TIME. Claiming that there is no good way is
>> not only disingenuous (we call them "numbers", and they start at 1, go
>> to 2, then 3. Then there's usually a 0-patch which only contains
>> explanations of the series), but it's clearly not true, since we have
>> patches like that weekly.
>
>
> Again, I said later that it depends on the maintainer not to apply one
> part which won't work without the others. Not that it wasn't happening,
> but that there's nothing more formal than human talent. I don't regard
> that as a really good way, since it makes more work for maintainers.
>
> I really think the original post was reasonably clear that I was
> suggesting a more formal means of designating things which should be
> accepted as a unit, not whatever you read into it.
>
On Tue, 25 Jan 2005, Bill Davidsen wrote:
>
> No,perhaps it isn't clear. If A changes the way a lock is used (for
> example), then all the places which were using the lock the old way have
> to use it the new way, or lockups or similar bad behaviour occur.
Sure. Some patches are like that, but even then you can split it out so
that one patch does _only_ that part, and is verifiable as doing only that
part.
It's also pretty rare. We've had a few big ones like that, notably when
moving a BKL around (moving it from the VFS layer down into each
individual filesystem). And I can't see that really happening in a
security-only patch.
Linus
On Tue, 25 Jan 2005, John Richard Moser wrote:
>
> It's kind of like locking your front door, or your back door. If one is
> locked and the other is still wide open, then you might as well
> not even have doors. If you lock both, then you (finally) create a
> problem for an intruder.
>
> That is to say, patch A will apply and work without B; patch B will
> apply and work without patch A; but there's no real gain from using
> either without the other.
Sure there is. There's the gain that if you lock the front door but not
the back door, somebody who goes door-to-door, opportunistically knocking
on them and testing them, _will_ be discouraged by locking the front door.
Never mind that he still could have gotten in. After all, if you locked
the back door too, he might still have a crow-bar.
It is a logical fallacy to think that "perfect" is good. It's not.
Anybody who strives for perfection will FAIL.
What's good is "incremental changes". Something that everybody and his dog
can look at for five seconds and say "oh, that's obviously fine", and then
can get more testing (because "everybody and his dog" saying "that's fine"
doesn't actually prove much of anything).
This has nothing to do with security, btw. It's universally true. You get
absolutely nowhere by trying to redesign the world.
Linus
Linus Torvalds wrote:
>
> On Tue, 25 Jan 2005, John Richard Moser wrote:
>
>>It's kind of like locking your front door, or your back door. If one is
>>locked and the other is still wide open, then you might as well
>>not even have doors. If you lock both, then you (finally) create a
>>problem for an intruder.
>>
>>That is to say, patch A will apply and work without B; patch B will
>>apply and work without patch A; but there's no real gain from using
>>either without the other.
>
>
> Sure there is. There's the gain that if you lock the front door but not
> the back door, somebody who goes door-to-door, opportunistically knocking
> on them and testing them, _will_ be discouraged by locking the front door.
>
In the real world, yes. On the computer, the front and back doors are
half-consumed by a short-path wormhole that places them right next to
each other, so not really. :)
> Never mind that he still could have gotten in. After all, if you locked
> the back door too, he might still have a crow-bar.
>
Crowbars don't work in computer security. The most you can do is slow
the machine down with endless network requests or CPU hogging (web
server requests take CPU, even to reject) if *everything* else is
perfect; but the goal is to keep them out, since we live in reality and
not in a fairyland where we could stop DDoSes from eating network
bandwidth.
> It is a logically fallacy to think that "perfect" is good. It's not.
> Anybody who strives for perfection will FAIL.
>
No, you aim close. You won't hit it, but you'll get close.
> What's good is "incremental changes". Something that everybody and his dog
> can look at for five seconds and say "oh, that's obviously fine", and then
> can get more testing (because "everybody and his dog" saying "that's fine"
> doesn't actually prove much of anything).
>
> This has nothing to do with security, btw. It's universally true. You get
> absolutely nowhere by trying to redesign the world.
>
Yeah, I'm just very security-minded. Don't mind me much.
> Linus
>
On Tue, 25 Jan 2005 13:37:10 -0500, John Richard Moser
<[email protected]> wrote:
> Linus Torvalds wrote:
> >
> > On Tue, 25 Jan 2005, John Richard Moser wrote:
> >
> >>It's kind of like locking your front door, or your back door. If one is
> >>locked and the other is still wide open, then you might as well
> >>not even have doors. If you lock both, then you (finally) create a
> >>problem for an intruder.
> >>
> >>That is to say, patch A will apply and work without B; patch B will
> >>apply and work without patch A; but there's no real gain from using
> >>either without the other.
> >
> >
> > Sure there is. There's the gain that if you lock the front door but not
> > the back door, somebody who goes door-to-door, opportunistically knocking
> > on them and testing them, _will_ be discouraged by locking the front door.
> >
>
> In the real world, yes. On the computer, the front and back doors are
> half-consumed by a short-path wormhole that places them right next to
> each other, so not really. :)
>
Then one might argue that doing any security patches is meaningless
because, as with bugs, there will always be some other hole not
covered by both A and B, so why bother?
--
Dmitry
On Tue, 25 Jan 2005, John Richard Moser wrote:
> >
> > Sure there is. There's the gain that if you lock the front door but not
> > the back door, somebody who goes door-to-door, opportunistically knocking
> > on them and testing them, _will_ be discouraged by locking the front door.
>
> In the real world, yes. On the computer, the front and back doors are
> half-consumed by a short-path wormhole that places them right next to
> each other, so not really. :)
No, the same is true even when the doors are right next to each other.
A lot of worms etc depend on automation, and do one or a few very specific
things. If one of them doesn't work (not because the computer is _secure_
mind you, just some random thing), it's already a more secure setup.
And even if two independent security bugs cause _exactly_ the same
symptoms (ie the exact same exploit works even if either of the bugs still
remain), having two independent patches that fix them is STILL better.
Just because the explanation of them is simpler, and the verification of
them is simpler.
> > Never mind that he still could have gotten in. After all, if you locked
> > the back door too, he might still have a crow-bar.
>
> Crowbars don't work in computer security.
Sure they do. They're the brute-force password-cracking. They're the
physical security of the machine. They are any number of things.
The point being that you will always have holes. Arguing for "there's
another hole" is _never_ an argument against a small patch fixing one
problem.
Take it from me - I've been reviewing patches for _way_ too long. And it's
a damn lot easier to review 100 small patches that do simple things and
that have been split up and explained individually by the submitter than
it is to review 10 big ones.
It's also a lot easier to find the (inevitable) bugs. Either you already
have a clue ("try reverting that one patch") or you can do things like
binary searching. The bugs introduced by a patch often have very little to do
with the thing a patch fixes - exactly because the patch _fixes_
something, it's been tested with that particular problem, and the new
problem it introduces is usually orthogonal.
Which is why lots of small patches usually have _different_ bug behaviour
than the patch they fix. To go back to the A+B fix: the bug they fix may
be fixed only by the _combination_ of the patches, but the bug they cause is
often an artifact of _one_ of the patches.
IOW, splitting the patches up makes them
- easier to merge
- easier to verify
- easier to debug
and combining them has _zero_ advantages (whatever bug the combined patch
fix _will_ be fixed by the series of individual patches too - even if the
splitting was buggy in some respect, you are pretty much guaranteed of
this, since the bug you were trying to fix is the _one_ thing you are
really testing for).
See?
Linus
Dmitry Torokhov wrote:
> On Tue, 25 Jan 2005 13:37:10 -0500, John Richard Moser
> <[email protected]> wrote:
>
>>Linus Torvalds wrote:
>>
>>>On Tue, 25 Jan 2005, John Richard Moser wrote:
>>>
>>>
>>>>It's kind of like locking your front door, or your back door. If one is
>>>>locked and the other is still wide open, then you might as well
>>>>not even have doors. If you lock both, then you (finally) create a
>>>>problem for an intruder.
>>>>
>>>>That is to say, patch A will apply and work without B; patch B will
>>>>apply and work without patch A; but there's no real gain from using
>>>>either without the other.
>>>
>>>
>>>Sure there is. There's the gain that if you lock the front door but not
>>>the back door, somebody who goes door-to-door, opportunistically knocking
>>>on them and testing them, _will_ be discouraged by locking the front door.
>>>
>>
>>In the real world, yes. On the computer, the front and back doors are
>>half-consumed by a short-path wormhole that places them right next to
>>each other, so not really. :)
>>
>
>
> Then one might argue that doing any security patches is meaningless
> because, as with bugs, there will always be some other hole not
> covered by both A and B so why bother?
>
I'm not talking about bugs, I'm talking about mitigation of unknown bugs.
You have to remember that I think mostly in terms of proactive security.
If there's a buffer overflow, temp file race condition, code injection
or ret2libc in a userspace program, it can be stopped. This narrows
down what exploits an attacker can actually use.
This puts pressure on the attacker; he has to find a bug, write an
exploit, and find an opportunity to use it before a patch is written and
applied to fix the exploit. If say 80% of exploits are suddenly
non-exploitable, then he's left with mostly very short windows that are
few and far between, and thus may be beyond his level of UNION(task->skill,
task->luck) in many cases.
Thus, by having fewer exploits available, fewer successful attacks
should happen due to the laws of probability. So the goal becomes to
fix as many bugs as possible, but also to mitigate the ones we don't
know about. To truly mitigate any security flaw, we must make a
non-circumventable protection.
If you can circumvent protection A by simply using attack B* to disable
protection A to do more interesting attack A*, then protection A is
smoke and mirrors. If you have protection B that stops B*, but can be
circumvented by A*, then deploying A and B will reciprocate and prevent
both A* and B*, creating a protection scheme that can't be circumvented.
In this context, it doesn't make sense to deploy a protection A or B
without the companion protection, which is what I meant. You're
thinking of fixing specific bugs; this is good and very important (as
effective proactive security BREAKS things that are buggy), but there is
a better way to create a more secure environment. Fixing the bugs
increases the quality of the product, while adding protections makes
them durable enough to withstand attacks targeting their own flaws.
Try reading through (shameless plug)
http://www.ubuntulinux.org/wiki/USNAnalysis and then try to understand
where I'm coming from.
Linus Torvalds wrote:
>
> On Tue, 25 Jan 2005, John Richard Moser wrote:
>
>>>Sure there is. There's the gain that if you lock the front door but not
>>>the back door, somebody who goes door-to-door, opportunistically knocking
>>>on them and testing them, _will_ be discouraged by locking the front door.
[...]
>
>>>Never mind that he still could have gotten in. After all, if you locked
>>>the back door too, he might still have a crow-bar.
>>
>>Crowbars don't work in computer security.
>
>
> Sure they do. They're the brute-force password-cracking. They're the
> physical security of the machine. They are any number of things.
>
> The point being that you will always have holes. Arguing for "there's
> another hole" is _never_ an argument against a small patch fixing one
> problem.
>
Not what I meant.
http://www.ubuntulinux.org/wiki/USNAnalysis
I'm more focused on this sort of security. Finding and fixing bugs is
important, but protecting against the exploitation of certain classes of
bugs is also a major step forward.
> Take it from me - I've been reviewing patches for _way_ too long. And it's
> a damn lot easier to review 100 small patches that do simple things and
> that have been split up and explained individually by the submitter than
> it is to review 10 big ones.
>
Yeah, I noticed. I'm trying to grep through the grsecurity megapatch and
write an LSM clone (stackable already) based on those hooks to
reimplement GrSecurity, as an academic learning experience. I try to
make something functional at each step (I did linking restrictions
first), but it's hard to find everything related to a specific feature
in that gargantuan thing :)
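For the curious, the skeleton of such a clone on a 2.6-era kernel is
small. This is a hedged sketch with made-up names, not the actual code
in question; the real policy (e.g. the sticky-directory link checks)
would go in the hook body:

#include <linux/module.h>
#include <linux/security.h>
#include <linux/namei.h>

/* Called when a symlink is about to be followed. Returning 0 permits
 * the traversal; -EACCES would deny it. */
static int clone_inode_follow_link(struct dentry *dentry,
                                   struct nameidata *nd)
{
    /* Linking-restriction policy would be implemented here. */
    return 0;
}

static struct security_operations clone_ops = {
    .inode_follow_link = clone_inode_follow_link,
};

static int __init clone_init(void)
{
    /* register_security() fills unset hooks with defaults, and fails
     * if another security module already holds the slot. */
    if (register_security(&clone_ops))
        return -EINVAL;
    return 0;
}

module_init(clone_init);
MODULE_LICENSE("GPL");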
That being said, you should also consider (unless somebody forgot to
tell me something) that it takes two source trees to make a split-out
patch. The author also has to chew down everything but the feature he
wants to split out. I could probably log 10,000 man-hours splitting up
GrSecurity. :)
> It's also a lot easier to find the (inevitable) bugs. Either you already
> have a clue ("try reverting that one patch") or you can do things like
> binary searching. The bugs introduced by a patch often have very little to do
> with the thing a patch fixes - exactly because the patch _fixes_
> something, it's been tested with that particular problem, and the new
> problem it introduces is usually orthogonal.
True. Very, very true.
With things like Gr, there's like a million features. Normally the
first step I take is "Disable it all". If it still breaks, THEN THERE'S
A PROBLEM. If it works, then the binary searching begins.
>
> Which is why lots of small patches usually have _different_ bug behaviour
> than the patch they fix. To go back to the A+B fix: the bug they fix may
> be fixed only by the _combination_ of the patches, but the bug they cause is
> often an artifact of _one_ of the patches.
>
Wasn't talking about bugfixes, see above.
> IOW, splitting the patches up makes them
> - easier to merge
> - easier to verify
> - easier to debug
>
> and combining them has _zero_ advantages (whatever bug the combined patch
> fix _will_ be fixed by the series of individual patches too - even if the
> splitting was buggy in some respect, you are pretty much guaranteed of
> this, since the bug you were trying to fix is the _one_ thing you are
> really testing for).
Lots of work to split up a patch though.
>
> See?
>
> Linus
>
On Tue, Jan 25, 2005 at 02:56:13PM -0500, John Richard Moser wrote:
> In this context, it doesn't make sense to deploy a protection A or B
> without the companion protection, which is what I meant.
But breaking up the introduction of new code into logical steps is still
helpful for people trying to understand the new code.
Even if it's true that it's no use locking any door until they are all
locked, there's still some value to allowing people to watch you lock
each door individually. It's easier for them to understand what you're
doing that way.
--Bruce Fields
J. Bruce Fields wrote:
> On Tue, Jan 25, 2005 at 02:56:13PM -0500, John Richard Moser wrote:
>
>>In this context, it doesn't make sense to deploy a protection A or B
>>without the companion protection, which is what I meant.
>
>
> But breaking up the introduction of new code into logical steps is still
> helpful for people trying to understand the new code.
>
> Even if it's true that it's no use locking any door until they are all
> locked, there's still some value to allowing people to watch you lock
> each door individually. It's easier for them to understand what you're
> doing that way.
>
I guess so.
This still doesn't give me any way to take a big patch and make little
patches without hours of work and (N+2) kernel trees for N patches.
> --Bruce Fields
>
On Tue, Jan 25, 2005 at 03:29:44PM -0500, John Richard Moser wrote:
> This still doesn't give me any way to take a big patch and make little
> patches without hours of work and (N+2) kernel trees for N patches
Any path to getting a big complicated patch reviewed and into the kernel
is going to involve many hours of work, by more people than just the
submitter.
I highly recommend Andrew Morton's patch scripts, or something similar.
http://www.zip.com.au/~akpm/linux/patches/
--b.
On Tue, 25 Jan 2005 14:56:13 EST, John Richard Moser said:
> This puts pressure on the attacker; he has to find a bug, write an
> exploit, and find an opportunity to use it before a patch is written and
> applied to fix the exploit. If say 80% of exploits are suddenly
> non-exploitable, then he's left with mostly very short windows that are
> few and far between, and thus may be beyond his level of UNION(task->skill,
> task->luck) in many cases.
Correct.
> If you can circumvent protection A by simply using attack B* to disable
> protection A to do more interesting attack A*, then protection A is
> smoke and mirrors.
You however missed an important case here. If attack B is outside
UNION(task->skill, task->luck), protection A is *NOT* smoke-and-mirrors.
And for the *vast* majority of attackers, if they have a canned exploit for
A and it doesn't work, they'll be stuck because B is outside their ability.
[email protected] wrote:
> On Tue, 25 Jan 2005 14:56:13 EST, John Richard Moser said:
>
>
>>This puts pressure on the attacker; he has to find a bug, write an
>>exploit, and find an opportunity to use it before a patch is written and
>>applied to fix the exploit. If say 80% of exploits are suddenly
>>non-exploitable, then he's left with mostly very short windows that are
>>few and far between, and thus may be beyond his level of UNION(task->skill,
>>task->luck) in many cases.
>
>
> Correct.
>
>
>
>>If you can circumvent protection A by simply using attack B* to disable
>>protection A to do more interesting attack A*, then protection A is
>>smoke and mirrors.
>
>
> You however missed an important case here. If attack B is outside
> UNION(task->skill, task->luck), protection A is *NOT* smoke-and-mirrors.
>
> And for the *vast* majority of attackers, if they have a canned exploit for
> A and it doesn't work, they'll be stuck because B is outside their ability.
Yes, true; but someone wrote that canned exploit for them, and the
actual exploit writers will just adapt. I don't think those attackers
normally write their own exploits :)
On Tue, 25 Jan 2005, John Richard Moser wrote:
>
>
> Dmitry Torokhov wrote:
>> On Tue, 25 Jan 2005 13:37:10 -0500, John Richard Moser
>> <[email protected]> wrote:
>>
>>> Linus Torvalds wrote:
>>>
>>>> On Tue, 25 Jan 2005, John Richard Moser wrote:
>>>>
>>>>
>>>>> It's kind of like locking your front door, or your back door. If one is
>>>>> locked and the other is still wide open, then you might as well
>>>>> not even have doors. If you lock both, then you (finally) create a
>>>>> problem for an intruder.
>>>>>
>>>>> That is to say, patch A will apply and work without B; patch B will
>>>>> apply and work without patch A; but there's no real gain from using
>>>>> either without the other.
>>>>
>>>>
>>>> Sure there is. There's the gain that if you lock the front door but not
>>>> the back door, somebody who goes door-to-door, opportunistically knocking
>>>> on them and testing them, _will_ be discouraged by locking the front door.
>>>>
>>>
>>> In the real world, yes. On the computer, the front and back doors are
>>> half-consumed by a short-path wormhole that places them right next to
>>> each other, so not really. :)
>>>
>>
>>
>> Then one might argue that doing any security patches is meaningless
>> because, as with bugs, there will always be some other hole not
>> covered by both A and B so why bother?
>>
>
> I'm not talking about bugs, I'm talking about mitigation of unknown bugs.
>
> You have to remember that I think mostly in terms of proactive security.
> If there's a buffer overflow, temp file race condition, code injection
> or ret2libc in a userspace program, it can be stopped. This narrows
> down what exploits an attacker can actually use.
>
> This puts pressure on the attacker; he has to find a bug, write an
> exploit, and find an opportunity to use it before a patch is written and
> applied to fix the exploit. If say 80% of exploits are suddenly
> non-exploitable, then he's left with mostly very short windows that are
> few and far between, and thus may be beyond his level of UNION(task->skill,
> task->luck) in many cases.
>
> Thus, by having fewer exploits available, fewer successful attacks
> should happen due to the laws of probability. So the goal becomes to
> fix as many bugs as possible, but also to mitigate the ones we don't
> know about. To truly mitigate any security flaw, we must make a
> non-circumventable protection.
>
So you intend to make so many changes to the kernel that a
previously thought-out exploit may no longer be workable?
A preemptive strike, so to speak? No thanks. To quote Frank
Lanza of L3 Communications: "Better is the enemy of good enough."
> If you can circumvent protection A by simply using attack B* to disable
> protection A to do more interesting attack A*, then protection A is
> smoke and mirrors. If you have protection B that stops B*, but can be
> circumvented by A*, then deploying A and B will reciprocate and prevent
> both A* and B*, creating a protection scheme that can't be circumvented.
>
It makes sense to add incremental improvements to security as
part of the normal maturation of a product. It does not make
sense to dump a new pile of snakes in the front yard because
that might keep the burglars away.
> In this context, it doesn't make sense to deploy a protection A or B
> without the companion protection, which is what I meant. You're
> thinking of fixing specific bugs; this is good and very important (as
> effective proactive security BREAKS things that are buggy), but there is
> a better way to create a more secure environment. Fixing the bugs
> increases the quality of the product, while adding protections makes
> them durable enough to withstand attacks targeting their own flaws.
>
Adding protections for which no known threat exists is a waste of
time, effort, and adds to the kernel size. If you connect a machine
to a network, it can always get hit with so many broadcast packets
that it has little available CPU time to do useful work. Do we
add a network throttle to avoid this? If so, then you will hurt
somebody's performance on a quiet network. Everything done in
the name of "security" has its cost. The cost is almost always
much more than advertised or anticipated.
> Try reading through (shameless plug)
> http://www.ubuntulinux.org/wiki/USNAnalysis and then try to understand
> where I'm coming from.
>
This isn't relevant at all. The Navy doesn't have any secure
systems connected to a network to which any hackers could connect.
The TDRS communications satellites provide secure channels
that are disassembled on-board. Some ATM slot, after decryption,
is fed to a LAN so the sailors can have an Internet connection
for their laptops. The data takes the same paths, but it's
completely independent and can't get mixed up no matter how
hard a hacker tries.
Cheers,
Dick Johnson
Penguin : Linux version 2.6.10 on an i686 machine (5537.79 BogoMips).
Notice : All mail here is now cached for review by Dictator Bush.
98.36% of all statistics are fiction.
On Tue, Jan 25, 2005 at 03:03:04PM -0500, John Richard Moser wrote:
> > and combining them has _zero_ advantages (whatever bug the combined patch
> > fix _will_ be fixed by the series of individual patches too - even if the
> > splitting was buggy in some respect, you are pretty much guaranteed of
> > this, since the bug you were trying to fix is the _one_ thing you are
> > really testing for).
>
> Lots of work to split up a patch though.
Exactly. And since that's a prerequisite for any meaningful review,
some equivalent of that work will have to be done at some point.
The only question is who will be doing that work - the proponents of
the patch or the reviewers?
Look at it this way: when you are submitting a paper for publication,
it's your responsibility to get it into a form that allows review.
Sending a lump of something that might, given considerable effort, be
massaged into readable and understandable text is not going to fly.
And doing that with "it's a lot of work [so could reviewers please do
that work themselves and spare me the effort]" as the rationale...
linux-os wrote:
> On Tue, 25 Jan 2005, John Richard Moser wrote:
>
>>
>> Dmitry Torokhov wrote:
>>
>>> On Tue, 25 Jan 2005 13:37:10 -0500, John Richard Moser
>>> <[email protected]> wrote:
>>>
>>>> Linus Torvalds wrote:
>>>>
>>>>> On Tue, 25 Jan 2005, John Richard Moser wrote:
>>>>>
>>>>>
>>>>>> It's kind of like locking your front door, or your back door. If
>>>>>> one is
>>>>>> locked and the other is still wide open, then you might as well
>>>>>> not even have doors. If you lock both, then you (finally) create a
>>>>>> problem for an intruder.
>>>>>>
>>>>>> That is to say, patch A will apply and work without B; patch B will
>>>>>> apply and work without patch A; but there's no real gain from using
>>>>>> either without the other.
>>>>>
>>>>>
>>>>>
>>>>> Sure there is. There's the gain that if you lock the front door but
>>>>> not
>>>>> the back door, somebody who goes door-to-door, opportunistically
>>>>> knocking
>>>>> on them and testing them, _will_ be discouraged by locking the
>>>>> front door.
>>>>>
>>>>
>>>> In the real world, yes. On the computer, the front and back doors are
>>>> half-consumed by a short-path wormhole that places them right next to
>>>> each other, so not really. :)
>>>>
>>>
>>>
>>> Then one might argue that doing any security patches is meaningless
>>> because, as with bugs, there will always be some other hole not
>>> covered by both A and B so why bother?
>>>
>>
>> I'm not talking about bugs, I'm talking about mitigation of unknown bugs.
>>
>> You have to remember that I think mostly in terms of proactive security.
>> If there's a buffer overflow, temp file race condition, code injection
>> or ret2libc in a userspace program, it can be stopped. This narrows
>> down what exploits an attacker can actually use.
>>
>> This puts pressure on the attacker; he has to find a bug, write an
>> exploit, and find an opportunity to use it before a patch is written and
>> applied to fix the exploit. If say 80% of exploits are suddenly
>> non-exploitable, then he's left with mostly very short windows that are
>> few and far between, and thus may be beyond his level of UNION(task->skill,
>> task->luck) in many cases.
>>
>> Thus, by having fewer exploits available, fewer successful attacks
>> should happen due to the laws of probability. So the goal becomes to
>> fix as many bugs as possible, but also to mitigate the ones we don't
>> know about. To truly mitigate any security flaw, we must make a
>> non-circumventable protection.
>>
>
> So you intend to make so many changes to the kernel that a
> previously thought-out exploit may no longer be workable?
>
> A preemptive strike, so to speak? No thanks. To quote Frank
> Lanza of L3 Communications: "Better is the enemy of good enough."
>
No, like this.
You have a race condition, let's say. This is fairly common. Race
conditions work like this: you generate a unique tempfile directory,
create it, and check whether your tempfile already exists inside it; it
doesn't, so you create it. Problem is, in the meantime someone has
symlinked or hardlinked another file into that temp directory, a file
you can write to but they can't; and you wind up opening that file and
trashing it, or erasing it by creating over it.
So, simple fix.
1) If the directory is +t,o+w, and the symlink is not owned by you, and
the symlink is not owned by the owner of the directory, you can't follow
the symlink.
2) If you try to make a hardlink (ln) to a file you don't own,
permission is denied, unless you've got CAP_FOWNER and uid==0.
Now, root tries to traverse /tmp/root/tmp4938.193a -> /etc/fstab, and
gets permission denied.
This is a real solution to race conditions (it's in GrSecurity). It's
not "so many changes that previously thought-out exploits are no longer
workable," it's a change in policy to remove conditions necessary for
any future exploit of this class to be workable.
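As a userspace rendering of rule 1, here is a hedged sketch of the
ownership test (the helper name is made up, and the real enforcement
has to live in the kernel's link-following path, where it happens
atomically with the lookup; a userspace check like this is itself racy):

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>
#include <libgen.h>

/* Hypothetical helper: apply rule 1 to a path before following it. */
static int safe_to_follow(const char *path)
{
    char buf[4096];
    struct stat dir_st, lnk_st;

    if (lstat(path, &lnk_st) < 0 || !S_ISLNK(lnk_st.st_mode))
        return 1;               /* not a symlink: nothing to check */

    snprintf(buf, sizeof buf, "%s", path);
    if (stat(dirname(buf), &dir_st) < 0)
        return 0;

    /* Only restrict sticky (+t), other-writable directories. */
    if (!(dir_st.st_mode & S_ISVTX) || !(dir_st.st_mode & S_IWOTH))
        return 1;

    /* The link must belong to us or to the directory's owner. */
    return lnk_st.st_uid == getuid() || lnk_st.st_uid == dir_st.st_uid;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <path>\n", argv[0]);
        return 2;
    }
    printf("%s: follow %s\n", argv[1],
           safe_to_follow(argv[1]) ? "permitted" : "denied");
    return 0;
}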
>> If you can circumvent protection A by simply using attack B* to disable
>> protection A to do more interesting attack A*, then protection A is
>> smoke and mirrors. If you have protection B that stops B*, but can be
>> circumvented by A*, then deploying A and B will reciprocate and prevent
>> both A* and B*, creating a protection scheme that can't be circumvented.
>>
>
> It makes sense to add incremental improvements to security as
> part of the normal maturation of a product. It does not make
> sense to dump a new pile of snakes in the front yard because
> that might keep the burglars away.
Snakes like passwords, or like a DAC system, or like SELinux MAC
policies, or like preventing tasks from reading or altering each other's
memory space?
>
>> In this context, it doesn't make sense to deploy a protection A or B
>> without the companion protection, which is what I meant. You're
>> thinking of fixing specific bugs; this is good and very important (as
>> effective proactive security BREAKS things that are buggy), but there is
>> a better way to create a more secure environment. Fixing the bugs
>> increases the quality of the product, while adding protections makes
>> them durable enough to withstand attacks targeting their own flaws.
>>
>
> Adding protections for which no known threat exists is a waste of
> time, effort, and adds to the kernel size. If you connect a machine
> to a network, it can always get hit with so many broadcast packets
> that it has little available CPU time to do useful work. Do we
> add a network throttle to avoid this? If so, then you will hurt
> somebody's performance on a quiet network. Everything done in
> the name of "security" has its cost. The cost is almost always
> much more than advertised or anticipated.
>
>> Try reading through (shameless plug)
>> http://www.ubuntulinux.org/wiki/USNAnalysis and then try to understand
>> where I'm coming from.
>>
>
> This isn't relevant at all. The Navy doesn't have any secure
> systems connected to a network to which any hackers could connect.
BUT MY HOME COMPUTER IS CONNECTED TO A NETWORK FROM WHICH I COULD GET A
WORM :D :D :D
And there could be an insider in the navy trying to get higher access
than he has.
> The TDRS communications satellites provide secure channels
> that are disassembled on-board. Some ATM slot, after decryption,
> is fed to a LAN so the sailors can have an Internet connection
> for their laptops. The data takes the same paths, but it's
> completely independent and can't get mixed up no matter how
> hard a hacker tries.
>
So, what?
Your solution is that it doesn't matter that exploits exist because
wherever it matters, other things in the environment will stop them?
I.e. you're just an ass?
> Cheers,
> Dick Johnson
> Penguin : Linux version 2.6.10 on an i686 machine (5537.79 BogoMips).
> Notice : All mail here is now cached for review by Dictator Bush.
> 98.36% of all statistics are fiction.
>
On Tue, 25 Jan 2005, John Richard Moser wrote:
> Thus, by having fewer exploits available, fewer successful attacks
> should happen due to the laws of probability. So the goal becomes to
> fix as many bugs as possible, but also to mitigate the ones we don't
> know about. To truly mitigate any security flaw, we must make a
> non-circumventable protection.
To the extent that this means "if you see a bug, fix the bug, even if it's
unrelated" I agree completely.
--
bill davidsen <[email protected]>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.
Bill Davidsen wrote:
> On Tue, 25 Jan 2005, John Richard Moser wrote:
>
>
>
>>Thus, by having fewer exploits available, fewer successful attacks
>>should happen due to the laws of probability. So the goal becomes to
>>fix as many bugs as possible, but also to mitigate the ones we don't
>>know about. To truly mitigate any security flaw, we must make a
>>non-circumventable protection.
>
>
> To the extent that this means "if you see a bug, fix the bug, even if it's
> unrelated" I agree completely.
>
That's the old, old, OLD method. :) It's a fundamental principle of
good programming, one that some people (*cough*Microsoft*cough*) have
forgotten, and we see the results and know not to forget it ourselves.
But I also like to go beyond that, to the extent that if you toss a
wrench in it, it'll seize up, but won't break. Some of this is
userspace, and some is kernelspace. It's possible to fix userspace
problems like code injection and tempfile races with kernel-level
policies on memory protections and filesystem intrinsics
(symlink/hardlink rules).
I believe that these and similar concepts should be explored, so that we
can truly progress rather than simply continue in the archaic manner
that we use today. Eventually we will evolve from "look for security
vulns and fix them before they're exploited" to "fix unhandled security
vulns first, and treat handled vulns as normal bugs." That is, we'll
still fix the bugs; but we'll have a much smaller range of bugs that are
actually exploitable, and thus a better, smaller, more refined set of
high-priority focus issues.
We already do this with everything else. The kernel developers, both
on LKML and in external projects, have explored and are still exploring
new schedulers for disk, I/O, and networking; new memory management and
threading models; and new security concepts. We have everything from
genetic algorithms to binary signing on the outside, as well as an O(1)
CPU scheduler and a security hook framework in vanilla. I want things
to just continue moving.
It's interesting to me mainly that something like 80% of the USNs Ubuntu
puts out cover exploits that could only have been used as DoS attacks
if the right systems had been in place, counting only the ones I
actually know and understand myself. Not all of those protections are
kernel-based, but the kernel-based ones 'should' touch on each exploit
in some way. I believe these are suitable for widespread deployment, so
of course my idea of progress includes widespread deployment of these :)
It's not entirely relevant to argue this here, but it gives me something
to do while I'm extremely bored (hell, I've even done an LSM clone that's
simpler and implements full stacking, just to occupy myself). Hopefully
the Ubuntu developers deploy and run this stuff, so after it's been
around 4-6 years, the merits of some often overlooked systems will
finally be widely demonstrated and assessable.
On Tuesday 25 January 2005 15:05, linux-os wrote:
> On Tue, 25 Jan 2005, John Richard Moser wrote:
[snip]
> > In this context, it doesn't make sense to deploy a protection A or B
> > without the companion protection, which is what I meant. You're
> > thinking of fixing specific bugs; this is good and very important (as
> > effective proactive security BREAKS things that are buggy), but there is
> > a better way to create a more secure environment. Fixing the bugs
> > increases the quality of the product, while adding protections makes
> > them durable enough to withstand attacks targeting their own flaws.
>
> Adding protections for which no known threat exists is a waste of
> time, effort, and adds to the kernel size. If you connect a machine
> to a network, it can always get hit with so many broadcast packets
> that it has little available CPU time to do useful work. Do we
> add a network throttle to avoid this? If so, then you will hurt
> somebody's performance on a quiet network. Everything done in
> the name of "security" has its cost. The cost is almost always
> much more than advertised or anticipated.
>
> > Try reading through (shameless plug)
> > http://www.ubuntulinux.org/wiki/USNAnalysis and then try to understand
> > where I'm coming from.
>
> This isn't relevant at all. The Navy doesn't have any secure
> systems connected to a network to which any hackers could connect.
> The TDRS communications satellites provide secure channels
> that are disassembled on-board. Some ATM slot, after decryption,
> is fed to a LAN so the sailors can have an Internet connection
> for their laptops. The data takes the same paths, but it's
> completely independent and can't get mixed up no matter how
> hard a hacker tries.
Obviously you didn't hear about the secure network being hit by the "I love
you" virus.
The Navy doesn't INTEND to have any secure systems connected to a network to
which any hackers could connect.
Unfortunately, there will ALWAYS be a path, either direct or indirect,
between the secure net and the internet.
The problem exists. The only way to protect is to apply layers of protection.
And covering the possible unknown errors is a good way to add protection.
On Tue, Jan 25, 2005 at 03:03:04PM -0500, John Richard Moser wrote:
> That being said, you should also consider (unless somebody forgot to
> tell me something) that it takes two source trees to make a split-out
> patch. The author also has to chew down everything but the feature he
> wants to split out. I could probably log 10,000 man-hours splitting up
> GrSecurity. :)
I'd try out Andrew's patch scripts if I were you. If you're making a patch to
the kernel, you'd best keep it in separate patches from the beginning, and
that's exactly what those scripts are very useful for.
> > It's also a lot easier to find the (inevitable) bugs. Either you already
> > have a clue ("try reverting that one patch") or you can do things like
> > binary searching. The bugs introduced by a patch often have very little to do
> > with the thing a patch fixes - exactly because the patch _fixes_
> > something, it's been tested with that particular problem, and the new
> > problem it introduces is usually orthogonal.
>
> true. Very very true.
>
> With things like Gr, there's like a million features. Normally the
> first step I take is "Disable it all". If it still breaks, THEN THERE'S
> A PROBLEM. If it works, then the binary searching begins.
So how do you think you would do a binary search within big patches, if it
would take you 10,000 man-hours to split up the patch? Disabling a lot of
small patches is easy, disabling a part of a big one takes a lot more work.
> > Which is why lots of small patches usually have _different_ bug behaviour
> > than the patch they fix. To go back to the A+B fix: the bug they fix may
> > be fixed only by the _combination_ of the patch, but the bug they cause is
> > often an artifact of _one_ of the patches.
> >
>
> Wasn't talking about bugfixes, see above.
Oh, so you're saying that security fixes don't cause bugs? Great world you live
in, then...
> > IOW, splitting the patches up makes them
> > - easier to merge
> > - easier to verify
> > - easier to debug
> >
> > and combining them has _zero_ advantages (whatever bug the combined patch
> > fix _will_ be fixed by the series of individual patches too - even if the
> > splitting was buggy in some respect, you are pretty much guaranteed of
> > this, since the bug you were trying to fix is the _one_ thing you are
> > really testing for).
>
> Lots of work to split up a patch though.
See above.
Sytse
On Wed, 26 Jan 2005, Jesse Pollard wrote:
>
> And covering the possible unknown errors is a good way to add protection.
I heartily agree. The more we can do to make the inevitable bugs less
likely to be security problems, the better off we are. Most of that ends
up being design - trying to avoid design decisions that just drive every
bug to be an inevitable security problem.
The biggest part of that is having nice interfaces. If you have good
interfaces, bugs are less likely to be problematic. For example, the
"seq_file" interfaces for /proc were written to clean up a lot of common
mistakes, so that the actual low-level code would be much simpler and not
have to worry about things like buffer sizes and page boundaries. I don't
know/remember if it actually fixed any security issues, but I'm confident
it made them less likely, just by making it _easier_ to write code that
doesn't have silly bounds problems.
Linus
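To make the seq_file point concrete, here is a minimal sketch of what such
a /proc reader looks like (illustrative only: the foo_stats name and the
value printed are invented, and registration of the /proc entry is omitted,
so this won't build outside a kernel tree):

#include <linux/fs.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

/* The driver only says *what* to emit ... */
static int foo_stats_show(struct seq_file *m, void *v)
{
	/* ... no raw page buffer, no offset arithmetic, no bounds
	 * checks: the seq_file core handles buffer sizes and page
	 * boundaries for us. */
	seq_printf(m, "widgets: %d\n", 42);
	return 0;
}

static int foo_stats_open(struct inode *inode, struct file *file)
{
	return single_open(file, foo_stats_show, NULL);
}

static const struct file_operations foo_stats_fops = {
	.open		= foo_stats_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= single_release,
};

The whole class of "forgot to check how much of the page is left" bugs
simply has nowhere to live in code shaped like this.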
On Wed, 26 Jan 2005, Olaf Hering wrote:
> >
> > Details, please?
>
> You did it this way:
> http://linux.bkbits.net:8080/linux-2.5/cset@4115cba3UCrZo9SnkQp0apTO3SghJQ
Oh, that's a separate issue. We want to have multiple levels of security.
We not only try to make sure that there are easy interfaces (but yeah, I
don't force people to rewrite - I sadly don't have a cadre of slaves at my
beck and call ;p), but it's also always a good idea to have interfaces
that are bug-resistant even in the face of people actively not using the
better interfaces.
So having good interfaces that are harder to have bugs in does _not_ mean
that we still shouldn't have defensive programming practices anyway. The
combination of the two means that a bug in one layer hopefully gets caught
by the other layer.
Linus
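As a sketch of those two layers in plain C (the names here are invented,
not kernel code): the helper below is both the easy interface and a
defensive one, so a bug in the calling layer gets caught by the check in
the lower layer instead of turning into an overflow.

#include <stddef.h>
#include <string.h>

struct msg_buf {
	char	data[128];
	size_t	len;
};

/* Layer 1: the easy interface -- callers never index data[] themselves.
 * Layer 2: the defensive check -- even a confused caller can't overflow. */
static int msg_buf_append(struct msg_buf *b, const char *src, size_t n)
{
	if (n > sizeof(b->data) - b->len)
		return -1;	/* refuse rather than silently corrupt */
	memcpy(b->data + b->len, src, n);
	b->len += n;
	return 0;
}

int main(void)
{
	struct msg_buf b = { .len = 0 };

	return msg_buf_append(&b, "hello", 5);
}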
[email protected] wrote:
> On Wed, 26 Jan 2005 14:31:00 EST, John Richard Moser said:
>
>
>>[*] Grsecurity
>> Security Level (Custom) --->
>> Address Space Protection --->
>> Role Based Access Control Options --->
>> Filesystem Protections --->
>> Kernel Auditing --->
>> Executable Protections --->
>> Network Protections --->
>> Sysctl support --->
>> Logging Options --->
>>
>>?? Address Space Protection ??
>> [ ] Deny writing to /dev/kmem, /dev/mem, and /dev/port
>> [ ] Disable privileged I/O
>> [*] Remove addresses from /proc/<pid>/[maps|stat]
>> [*] Deter exploit bruteforcing
>> [*] Hide kernel symbols
>>
>>Need I continue? There's some 30 or 40 more options I could show. If
>>you can't use your enter, left, right, up, y, n, and ? keys, you're
>>crippled and won't be able to patch and unpatch crap either.
>
>
> Just because I can use my arrow keys doesn't mean I can find which part of
> a 250,000 line patch broke something.
>
I can.
Read Kconfig. Find the CONFIG_* for the option. Find what that
disables in the code. Get to work.
> If it's done as 30 or 40 patches, each of which implements ONE OPTION, then
> it's pretty easy to play binary search to find what broke something.
>
Yes, and those patches would implement what's inside #ifdef CONFIG_*'s,
so if turning an option off fixes something, it's fairly equivalent.
I'll grant that those patches would likely make "some" changes that
aren't in #ifdef blocks, making them a bit harder to track down, since
those changes can also cause breakage themselves and be even tougher
to isolate (though maybe not; in some cases you can just read the
patch for the non-blocked-off stuff).
> And don't give me "it doesn't break anything" - in the past, I've fed at least
> 2 bug fixes on things I found broken back to the grsecurity crew (one was a
> borkage in the process-ID-randomization code, another was a bad parenthesis
> matching breaking the intent of an 'if' in one of the filesystem protection
> checks (symlink or fifo or something like that)).
Hmm? I found the PID rand breakage in 2.6.7's gr to be quite annoying
and disabled it. It took me all of 2 minutes to determine that PID
randomization was causing the breakage-- as I enabled it during boot
with an init script, the machine oopsed several times and then panic'd. :)
Heh, divide that 2 minutes by the thousands of people who look at the
code, and you find bugs before they're created :D (j/k)
--
All content of all messages exchanged herein are left in the
Public Domain, unless otherwise explicitly stated.
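For readers following the argument, here is the pattern being described,
as a compilable sketch (CONFIG_FOO_BRUTE_DETER is an invented stand-in,
not grsecurity's real Kconfig symbol): when a feature lives entirely
inside one CONFIG_* conditional, switching the option off removes exactly
that code, which is why bisecting by config option is roughly equivalent
to reverting a split-out patch.

#include <stdio.h>

/* Stand-in for a Kconfig-generated symbol; flip to 0 to "revert"
 * the feature without touching the rest of the patch. */
#define CONFIG_FOO_BRUTE_DETER 1

static int check_exec(const char *path)
{
#if CONFIG_FOO_BRUTE_DETER
	/* Feature code fenced off by the option... */
	printf("deterring exploit bruteforcing on %s\n", path);
#endif
	/* ...while the surrounding function is untouched upstream code. */
	return 0;
}

int main(void)
{
	return check_exec("/bin/true");
}

The caveat raised above still stands: any change a patch makes *outside*
such blocks is not disabled by the option, and that is exactly the part a
config-level bisection cannot see.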
On Wed, 26 Jan 2005 14:31:00 EST, John Richard Moser said:
> [*] Grsecurity
> Security Level (Custom) --->
> Address Space Protection --->
> Role Based Access Control Options --->
> Filesystem Protections --->
> Kernel Auditing --->
> Executable Protections --->
> Network Protections --->
> Sysctl support --->
> Logging Options --->
>
> ?? Address Space Protection ??
> [ ] Deny writing to /dev/kmem, /dev/mem, and /dev/port
> [ ] Disable privileged I/O
> [*] Remove addresses from /proc/<pid>/[maps|stat]
> [*] Deter exploit bruteforcing
> [*] Hide kernel symbols
>
> Need I continue? There's some 30 or 40 more options I could show. If
> you can't use your enter, left, right, up, y, n, and ? keys, you're
> crippled and won't be able to patch and unpatch crap either.
Just because I can use my arrow keys doesn't mean I can find which part of
a 250,000 line patch broke something.
If it's done as 30 or 40 patches, each of which implements ONE OPTION, then
it's pretty easy to play binary search to find what broke something.
And don't give me "it doesn't break anything" - in the past, I've fed at least
2 bug fixes on things I found broken back to the grsecurity crew (one was a
borkage in the process-ID-randomization code, another was a bad parenthesis
matching breaking the intent of an 'if' in one of the filesystem protection
checks (symlink or fifo or something like that)).
Sytse Wielinga wrote:
[...]
>>If you people ever bothered to read what I say, you wouldn't continually
>>say stupid shit like <me> You get milk from cows <you> wtf idiot
>>chocolate milk doesn't come from chocolate cows
>
>
> I'm sorry about the rant. Besides, your comment ('Wasn't talking about
> bugfixes') makes some sense, too. You were actually talking about two patches
> though, which close two closely related holes. Linus was talking about the
> possible bugs caused by either one of these two patches, which may be totally
> unrelated to the thing they try to fix.
>
Sorry, I just woke up, this thread has me under a lot of stress. I
should go back to arguing things that have some end goal to them, rather
than arguing simply because I have nothing better to do.
> Sytse
>
--
All content of all messages exchanged herein are left in the
Public Domain, unless otherwise explicitly stated.
On Wed, Jan 26, 2005 at 03:39:08PM -0500, John Richard Moser wrote:
> > I'm sorry about the rant. Besides, your comment ('Wasn't talking about
> > bugfixes') makes some sense, too. You were actually talking about two patches
> > though, which close two closely related holes. Linus was talking about the
> > possible bugs caused by either one of these two patches, which may be totally
> > unrelated to the thing they try to fix.
> Sorry, I just woke up, this thread has me under a lot of stress. I
> should go back to arguing things that have some end goal to them, rather
> than arguing simply because I have nothing better to do.
Yes... it seems the thread has gone in a rather pointless direction; no one
seems to know exactly what it was about anymore, but everyone keeps carrying
huge emotions all over the place. Let's just forget it and move on :-)
Sytse
On Wed, 26 Jan 2005, Jesse Pollard wrote:
> On Tuesday 25 January 2005 15:05, linux-os wrote:
> > This isn't relevant at all. The Navy doesn't have any secure
> > systems connected to a network to which any hackers could connect.
> > The TDRS communications satellites provide secure channels
> > that are disassembled on-board. Some ATM-slot, after decryption,
> > is fed to a LAN so the sailors can have an Internet connection
> > for their laptops. The data takes the same paths, but it's
> > completely independent and can't get mixed up no matter how
> > hard a hacker tries.
>
> Obviously you didn't hear about the secure network being hit by the "I love
> you" virus.
>
> The Navy doesn't INTEND to have any secure systems connected to a network to
> which any hackers could connect.
What's hard about that? Matter of physical network topology, absolutely no
physical connection, no machines with a 2nd NIC, no access to/from I'net.
Yes, it's a PITA, add logging to a physical printer which can't be erased
if you want to make your CSO happy (corporate security officer).
>
> Unfortunately, there will ALWAYS be a path, either direct or indirect,
> between the secure net and the internet.
Other than letting people use secure computers after they have seen the
Internet, a good setup has no indirect paths.
>
> The problem exists. The only way to protect is to apply layers of protection.
>
> And covering the possible unknown errors is a good way to add protection.
>
--
bill davidsen <[email protected]>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.
On Wed, Jan 26, 2005 at 02:31:00PM -0500, John Richard Moser wrote:
> Sytse Wielinga wrote:
> > On Tue, Jan 25, 2005 at 03:03:04PM -0500, John Richard Moser wrote:
> >[...]
> >>true. Very very true.
> >>
> >>With things like Gr, there's like a million features. Normally the
> >>first step I take is "Disable it all". If it still breaks, THEN THERE'S
> >>A PROBLEM. If it works, then the binary searching begins.
> >
> >
> > So how do you think you would do a binary search within big patches, if it
> > would take you 10,000 man-hours to split up the patch? Disabling a lot of
> > small patches is easy, disabling a part of a big one takes a lot more work.
>
> 'make menuconfig' is not a lot more work wtf
>
>
> [*] Grsecurity
> Security Level (Custom) --->
> Address Space Protection --->
> Role Based Access Control Options --->
> Filesystem Protections --->
> Kernel Auditing --->
> Executable Protections --->
> Network Protections --->
> Sysctl support --->
> Logging Options --->
>
> ?? Address Space Protection ??
> [ ] Deny writing to /dev/kmem, /dev/mem, and /dev/port
> [ ] Disable privileged I/O
> [*] Remove addresses from /proc/<pid>/[maps|stat]
> [*] Deter exploit bruteforcing
> [*] Hide kernel symbols
>
> Need I continue? There's some 30 or 40 more options I could show. If
> you can't use your enter, left, right, up, y, n, and ? keys, you're
> crippled and won't be able to patch and unpatch crap either.
Granted, in some patches you can disable certain features by turning off
config options. It's much less convenient, though: bugs may be introduced
by code that no option disables, and even when you do find the option that
triggers the bug, that option may enable a large piece of code, so you're
still digging through the whole patch rather than one small one. Even with
a well-written patch it's mostly very inconvenient. It still is a good
habit to split the work you do into small parts, though.
> >>>Which is why lots of small patches usually have _different_ bug behaviour
> >>>than the patch they fix. To go back to the A+B fix: the bug they fix may
> >>>be fixed only by the _combination_ of the patch, but the bug they cause is
> >>>often an artifact of _one_ of the patches.
> >>>
> >>
> >>Wasn't talking about bugfixes, see above.
> >
> >
> > Oh, so you're saying that security fixes don't cause bugs? Great world you live
> > in, then...
> >
>
> I didn't say that. I said I wasn't talking about bugfix patches. I
> wasn't talking about "mremap(0,0) gives you root," I was talking about
> "preventing following links under X conditions breaks nothing legitimate
> but deadstops /tmp races" or "properly setting CPU protections for
> PROT_EXEC stops code injection" or "ASLR stops ret2libc attacks."
>
> If you people ever bothered to read what I say, you wouldn't continually
> say stupid shit like <me> You get milk from cows <you> wtf idiot
> chocolate milk doesn't come from chocolate cows
I'm sorry about the rant. Besides, your comment ('Wasn't talking about
bugfixes') makes some sense, too. You were actually talking about two patches
though, which close two closely related holes. Linus was talking about the
possible bugs caused by either one of these two patches, which may be totally
unrelated to the thing they try to fix.
Sytse
On Wed, Jan 26, Linus Torvalds wrote:
>
>
> On Wed, 26 Jan 2005, Olaf Hering wrote:
> >
> > And, did that nice interface help at all? No, it did not.
> > No one made seq_file mandatory in 2.6.
>
> Sure it helped. We didn't make it mandatory, but new stuff ends up being
> written with it, and old stuff _does_ end up being converted to it.
2.5 was the right time to enforce it.
> > Now we have a few nice big patches to carry around because every driver
> > author had their own proc implementation. Well done...
>
> Details, please?
You did it this way:
http://linux.bkbits.net:8080/linux-2.5/cset@4115cba3UCrZo9SnkQp0apTO3SghJQ
Sytse Wielinga wrote:
> On Tue, Jan 25, 2005 at 03:03:04PM -0500, John Richard Moser wrote:
>
>>That being said, you should also consider (unless somebody forgot to
>>tell me something) that it takes two source trees to make a split-out
>>patch. The author also has to chew down everything but the feature he
>>wants to split out. I could probably log 10,000 man-hours splitting up
>>GrSecurity. :)
>
>
> I'd try out Andrew's patch scripts if I were you. If you're making a patch to
> the kernel, you'd best keep it in separate patches from the beginning, and
> that's exactly what those scripts are very useful for.
>
>
>>>It's also a lot easier to find the (inevitable) bugs. Either you already
>>>have a clue ("try reverting that one patch") or you can do things like
>>>binary searching. The bugs introduced by a patch often have very little to do
>>>with the thing a patch fixes - exactly because the patch _fixes_
>>>something, it's been tested with that particular problem, and the new
>>>problem it introduces is usually orthogonal.
>>
>>true. Very very true.
>>
>>With things like Gr, there's like a million features. Normally the
>>first step I take is "Disable it all". If it still breaks, THEN THERE'S
>>A PROBLEM. If it works, then the binary searching begins.
>
>
> So how do you think you would do a binary search within big patches, if it
> would take you 10,000 man-hours to split up the patch? Disabling a lot of
> small patches is easy, disabling a part of a big one takes a lot more work.
>
'make menuconfig' is not a lot more work wtf
[*] Grsecurity
Security Level (Custom) --->
Address Space Protection --->
Role Based Access Control Options --->
Filesystem Protections --->
Kernel Auditing --->
Executable Protections --->
Network Protections --->
Sysctl support --->
Logging Options --->
?? Address Space Protection ??
[ ] Deny writing to /dev/kmem, /dev/mem, and /dev/port
[ ] Disable privileged I/O
[*] Remove addresses from /proc/<pid>/[maps|stat]
[*] Deter exploit bruteforcing
[*] Hide kernel symbols
Need I continue? There's some 30 or 40 more options I could show. If
you can't use your enter, left, right, up, y, n, and ? keys, you're
crippled and won't be able to patch and unpatch crap either.
>
>>>Which is why lots of small patches usually have _different_ bug behaviour
>>>than the patch they fix. To go back to the A+B fix: the bug they fix may
>>>be fixed only by the _combination_ of the patch, but the bug they cause is
>>>often an artifact of _one_ of the patches.
>>>
>>
>>Wasn't talking about bugfixes, see above.
>
>
> Oh, so you're saying that security fixes don't cause bugs? Great world you live
> in, then...
>
I didn't say that. I said I wasn't talking about bugfix patches. I
wasn't talking about "mremap(0,0) gives you root," I was talking about
"preventing following links under X conditions breaks nothing legitimate
but deadstops /tmp races" or "properly setting CPU protections for
PROT_EXEC stops code injection" or "ASLR stops ret2libc attacks."
If you people ever bothered to read what I say, you wouldn't continually
say stupid shit like <me> You get milk from cows <you> wtf idiot
chocolate milk doesn't come from chocolate cows
>
>>>IOW, splitting the patches up makes them
>>> - easier to merge
>>> - easier to verify
>>> - easier to debug
>>>
>>>and combining them has _zero_ advantages (whatever bug the combined patch
>>>fix _will_ be fixed by the series of individual patches too - even if the
>>>splitting was buggy in some respect, you are pretty much guaranteed of
>>>this, since the bug you were trying to fix is the _one_ thing you are
>>>really testing for).
>>
>>Lots of work to split up a patch though.
>
>
> See above.
>
> Sytse
>
--
All content of all messages exchanged herein are left in the
Public Domain, unless otherwise explicitly stated.
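A userland sketch of the PROT_EXEC point (illustrative only, and
x86-flavored): bytes copied into a mapping that was never granted
PROT_EXEC stand in for injected shellcode. On a kernel/CPU combination
that actually enforces execute permission, the indirect call faults
instead of running the data.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	/* 0xc3 is the x86 "ret" instruction -- a stand-in for
	 * injected shellcode. */
	static const unsigned char payload[] = { 0xc3 };
	unsigned char *buf;

	buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	memcpy(buf, payload, sizeof(payload));

	/* With real PROT_EXEC enforcement this call gets SIGSEGV;
	 * without it, the "injected" code runs and we fall through. */
	((void (*)(void))buf)();
	printf("writable page executed: no enforcement here\n");
	return 0;
}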
On Wed, 26 Jan 2005, Olaf Hering wrote:
>
> And, did that nice interface help at all? No, it did not.
> No one made seq_file mandatory in 2.6.
Sure it helped. We didn't make it mandatory, but new stuff ends up being
written with it, and old stuff _does_ end up being converted to it.
> Now we have a few nice big patches to carry around because every driver
> author had their own proc implementation. Well done...
Details, please?
Linus
[....]
Did any of you actually READ the link I posted? How the heck did we get
the navy into this?
--
All content of all messages exchanged herein are left in the
Public Domain, unless otherwise explicitly stated.
On Wed, Jan 26, Linus Torvalds wrote:
> The biggest part of that is having nice interfaces. If you have good
> interfaces, bugs are less likely to be problematic. For example, the
> "seq_file" interfaces for /proc were written to clean up a lot of common
> mistakes, so that the actual low-level code would be much simpler and not
> have to worry about things like buffer sizes and page boundaries. I don't
> know/remember if it actually fixed any security issues, but I'm confident
> it made them less likely, just by making it _easier_ to write code that
> doesn't have silly bounds problems.
And, did that nice interface help at all? No, it did not.
No one made seq_file mandatory in 2.6.
Now we have a few nice big patches to carry around because every driver
author had their own proc implementation. Well done...
On Wednesday 26 January 2005 13:56, Bill Davidsen wrote:
> On Wed, 26 Jan 2005, Jesse Pollard wrote:
> > On Tuesday 25 January 2005 15:05, linux-os wrote:
> > > This isn't relevant at all. The Navy doesn't have any secure
> > > systems connected to a network to which any hackers could connect.
> > > The TDRS communications satellites provide secure channels
> > > that are disassembled on-board. Some ATM-slot, after decryption,
> > > is fed to a LAN so the sailors can have an Internet connection
> > > for their laptops. The data takes the same paths, but it's
> > > completely independent and can't get mixed up no matter how
> > > hard a hacker tries.
> >
> > Obviously you didn't hear about the secure network being hit by the "I
> > love you" virus.
> >
> > The Navy doesn't INTEND to have any secure systems connected to a network
> > to which any hackers could connect.
>
> What's hard about that? Matter of physical network topology, absolutely no
> physical connection, no machines with a 2nd NIC, no access to/from I'net.
> Yes, it's a PITA, add logging to a physical printer which can't be erased
> if you want to make your CSO happy (corporate security officer).
And you are ASSUMING the connection was authorized. I can assure you that
there are about 200 (more or less) connections from the secure net to the
internet expressly for the purpose of transferring data from the internet
to the secure net for analysis. And not ALL of these connections are
authorized. Some are done via sneakernet, others by running a cable ("I need
the data NOW... I'll just disconnect afterward..."), and are not visible
for very long. Other connections are by picking up a system and carrying it
from one connection to another (a version of sneakernet, though here it
sometimes needs a hand cart).
> > Unfortunately, there will ALWAYS be a path, either direct or indirect,
> > between the secure net and the internet.
>
> Other than letting people use secure computers after they have seen the
> Internet, a good setup has no indirect paths.
Ha. Hahaha...
Reality bites.
> > The problem exists. The only way to protect is to apply layers of protection.
> >
> > And covering the possible unknown errors is a good way to add protection.
On Thu, 2005-01-27 at 10:37 -0600, Jesse Pollard wrote:
> On Wednesday 26 January 2005 13:56, Bill Davidsen wrote:
> > On Wed, 26 Jan 2005, Jesse Pollard wrote:
> > > On Tuesday 25 January 2005 15:05, linux-os wrote:
> > > > This isn't relevant at all. The Navy doesn't have any secure
> > > > systems connected to a network to which any hackers could connect.
> > > > The TDRS communications satellites provide secure channels
> > > > that are disassembled on-board. Some ATM-slot, after decryption,
> > > > is fed to a LAN so the sailors can have an Internet connection
> > > > for their laptops. The data takes the same paths, but it's
> > > > completely independent and can't get mixed up no matter how
> > > > hard a hacker tries.
> > >
> > > Obviously you didn't hear about the secure network being hit by the "I
> > > love you" virus.
> > >
> > > The Navy doesn't INTEND to have any secure systems connected to a network
> > > to which any hackers could connect.
> >
> > What's hard about that? Matter of physical network topology, absolutely no
> > physical connection, no machines with a 2nd NIC, no access to/from I'net.
> > Yes, it's a PITA, add logging to a physical printer which can't be erased
> > if you want to make your CSO happy (corporate security officer).
>
> And you are ASSUMING the connection was authorized. I can assure you that
> there are about 200 (more or less) connections from the secure net to the
> internet expressly for the purpose of transferring data from the internet
> to the secure net for analysis. And not ALL of these connections are
> authorized. Some are done via sneakernet, others by running a cable ("I need
> the data NOW... I'll just disconnect afterward..."), and are not visible
> for very long. Other connections are by picking up a system and carrying it
> from one connection to another (a version of sneakernet, though here it
> sometimes needs a hand cart).
>
> > > Unfortunately, there will ALWAYS be a path, either direct or indirect,
> > > between the secure net and the internet.
> >
> > Other than letting people use secure computers after they have seen the
> > Internet, a good setup has no indirect paths.
>
> Ha. Hahaha...
>
> Reality bites.
In the reality I'm familiar with, the defense contractor's secure
projects building had one entrance, guarded by security guards who were
not cheap $10/hr guys, with strict instructions. No computers or
computer media were allowed to leave the building except with written
authorization of a corporate officer. The building was shielded against
Tempest attacks and verified by the NSA. Any computer hardware or media
brought into the building for the project was physically destroyed at
the end.
Secure nets _are_ possible.
--
Zan Lynx <[email protected]>
On Thursday 27 January 2005 11:18, Zan Lynx wrote:
> On Thu, 2005-01-27 at 10:37 -0600, Jesse Pollard wrote:
>
> >
> > > > Unfortunately, there will ALWAYS be a path, either direct or
> > > > indirect, between the secure net and the internet.
> > >
> > > Other than letting people use secure computers after they have seen the
> > > Internet, a good setup has no indirect paths.
> >
> > Ha. Hahaha...
> >
> > Reality bites.
>
> In the reality I'm familiar with, the defense contractor's secure
> projects building had one entrance, guarded by security guards who were
> not cheap $10/hr guys, with strict instructions. No computers or
> computer media were allowed to leave the building except with written
> authorization of a corporate officer. The building was shielded against
> Tempest attacks and verified by the NSA. Any computer hardware or media
> brought into the building for the project was physically destroyed at
> the end.
>
And you are assuming that everybody follows the rules.
When a PHB, whether military or not (and not a contractor), comes in and
says "... I don't care what it takes... get that data over there NOW..."
guess what - it gets done. Even if it is "less secure" in the process.
Oh - and about that "physically destroyed" - that used to be true.
Until it was pointed out to them that destruction of 300TB of data
media would cost them about $2 million.
Suddenly, erasing became popular. And sufficient. Then it was reused
in a non-secure facility, operated by the same CO.
> Secure nets _are_ possible.
Yes they are. But they are NOT reliable.
Don't ever assume a "secure" network really is.
All it means is: "as secure as we can manage"
Bill Davidsen wrote:
> On Thu, 27 Jan 2005, Zan Lynx wrote:
>
>
>>On Thu, 2005-01-27 at 10:37 -0600, Jesse Pollard wrote:
>>
>>>On Wednesday 26 January 2005 13:56, Bill Davidsen wrote:
>>>
>>>>On Wed, 26 Jan 2005, Jesse Pollard wrote:
>>>>
>>>>>On Tuesday 25 January 2005 15:05, linux-os wrote:
>>>>>
>>>>>>This isn't relevant [Stuff about the navy][...]
>>>>>
>>>>>The Navy [...]
>>>>
>>>>[...]Physical network topology[...]
>>>
>>>[...]sneakernet[...]
>>>
>>>
>>>>>[...]path[...]
>>>>
>>>>[...]internet[...]
>>>
>>>[...]hahaha[...]
>>
>>[...]NSA[...]
>
>
> [...]security clearance[...]
>
I'll ask again
How the f!@k did the navy get involved in this discussion?
--
All content of all messages exchanged herein are left in the
Public Domain, unless otherwise explicitly stated.
On Thu, 27 Jan 2005, Zan Lynx wrote:
> On Thu, 2005-01-27 at 10:37 -0600, Jesse Pollard wrote:
> > On Wednesday 26 January 2005 13:56, Bill Davidsen wrote:
> > > On Wed, 26 Jan 2005, Jesse Pollard wrote:
> > > > On Tuesday 25 January 2005 15:05, linux-os wrote:
> > > > > This isn't relevant at all. The Navy doesn't have any secure
> > > > > systems connected to a network to which any hackers could connect.
> > > > > The TDRS communications satellites provide secure channels
> > > > > that are disassembled on-board. Some ATM-slot, after decryption,
> > > > > is fed to a LAN so the sailors can have an Internet connection
> > > > > for their laptops. The data takes the same paths, but it's
> > > > > completely independent and can't get mixed up no matter how
> > > > > hard a hacker tries.
> > > >
> > > > Obviously you didn't hear about the secure network being hit by the "I
> > > > love you" virus.
> > > >
> > > > The Navy doesn't INTEND to have any secure systems connected to a network
> > > > to which any hackers could connect.
> > >
> > > What's hard about that? Matter of physical network topology, absolutely no
> > > physical connection, no machines with a 2nd NIC, no access to/from I'net.
> > > Yes, it's a PITA, add logging to a physical printer which can't be erased
> > > if you want to make your CSO happy (corporate security officer).
> >
> > And you are ASSUMING the connection was authorized. I can assure you that
> > there are about 200 (more or less) connections from the secure net to the
> > internet expressly for the purpose of transferring data from the internet
> > to the secure net for analysis. And not ALL of these connections are
> > authorized. Some are done via sneakernet, others by running a cable ("I need
> > the data NOW... I'll just disconnect afterward..."), and are not visible
> > for very long. Other connections are by picking up a system and carrying it
> > from one connection to another (a version of sneakernet, though here it
> > sometimes needs a hand cart).
> >
> > > > Unfortunately, there will ALWAYS be a path, either direct or indirect,
> > > > between the secure net and the internet.
> > >
> > > Other than letting people use secure computers after they have seen the
> > > Internet, a good setup has no indirect paths.
> >
> > Ha. Hahaha...
> >
> > Reality bites.
>
> In the reality I'm familiar with, the defense contractor's secure
> projects building had one entrance, guarded by security guards who were
> not cheap $10/hr guys, with strict instructions. No computers or
> computer media were allowed to leave the building except with written
> authorization of a corporate officer. The building was shielded against
> Tempest attacks and verified by the NSA. Any computer hardware or media
> brought into the building for the project was physically destroyed at
> the end.
That sounds familiar... Doing any of the things mentioned above would (if
detected) result in firing on the spot, loss of security clearance, and a
stunningly bad reference if anyone did an employment check.
Not to mention possible civil or criminal prosecution in some cases.
--
bill davidsen <[email protected]>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.
Zan Lynx <[email protected]> writes:
> In the reality I'm familiar with, the defense contractor's secure
> projects building had one entrance, guarded by security guards who were
> not cheap $10/hr guys, with strict instructions. No computers or
> computer media were allowed to leave the building except with written
> authorization of a corporate officer.
Wow, nice. How do they check for, say, CompactFlash cards or even
smaller xD-Picture Cards?
--
Krzysztof Halasa
On Thu, 27 Jan 2005, John Richard Moser wrote:
>
> Bill Davidsen wrote:
>> On Thu, 27 Jan 2005, Zan Lynx wrote:
>>
>>
>>> On Thu, 2005-01-27 at 10:37 -0600, Jesse Pollard wrote:
>>>
>>>> On Wednesday 26 January 2005 13:56, Bill Davidsen wrote:
>>>>
>>>>> On Wed, 26 Jan 2005, Jesse Pollard wrote:
>>>>>
>>>>>> On Tuesday 25 January 2005 15:05, linux-os wrote:
>>>>>>
>>>>>>> This isn't relevant [Stuff about the navy][...]
>>>>>>
>>>>>> The Navy [...]
>>>>>
>>>>> [...]Physical network topology[...]
>>>>
>>>> [...]sneakernet[...]
>>>>
>>>>
>>>>>> [...]path[...]
>>>>>
>>>>> [...]internet[...]
>>>>
>>>> [...]hahaha[...]
>>>
>>> [...]NSA[...]
>>
>>
>> [...]security clearance[...]
>>
>
> I'll ask again
>
> How the f!@k did the navy get involved in this discussion?
>
That's where the love-you virus was (supposed to have been)
introduced into a secure system. It's probably, I would guess with
89-90 percent probability, some BS.
You spelled f!@k wrong. The 'u' is ahead of the 'c'. You
dyslexic or something?
> --
> All content of all messages exchanged herein are left in the
> Public Domain, unless otherwise explicitly stated.
>
Cheers,
Dick Johnson
Penguin : Linux version 2.6.10 on an i686 machine (5537.79 BogoMips).
Notice : All mail here is now cached for review by Dictator Bush.
98.36% of all statistics are fiction.
On Mer, 2005-01-26 at 19:15, Olaf Hering wrote:
> And, did that nice interface help at all? No, it did not.
> No one made seq_file mandatory in 2.6.
> Now we have a few nice big patches to carry around because every driver
> author had their own proc implementation. Well done...
seqfile has helped immensely from what I can see. And gradually it takes
over the kernel because each time someone has a broken proc driver it is
easier to rewrite it in seq_file than fix it any other way.
All good APIs work that way, and they really do work. You only have to
look at the statistics for string-handling security errors in Gnome
applications versus those in generic C apps to see the huge effect
something like the g_string class has had on reliability.
We need *more* APIs like this - we are lacking some nice helpers for
simple block/char devices, and also a "call under lock" construct
which avoids forgetting to drop locks, for example.
Alan
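A sketch of the "call under lock" construct Alan is asking for, in plain
pthreads C (names invented; a kernel variant would use spinlocks or
mutexes instead): the helper owns both the lock and the unlock, so no
callback can forget to drop the lock on an early return path.

#include <pthread.h>

static pthread_mutex_t foo_lock = PTHREAD_MUTEX_INITIALIZER;

/* Take the lock, run the callback, always release -- the unlock can't
 * be forgotten because the caller never handles the locked state. */
static int with_lock(pthread_mutex_t *lock, int (*fn)(void *), void *arg)
{
	int ret;

	pthread_mutex_lock(lock);
	ret = fn(arg);			/* runs with the lock held */
	pthread_mutex_unlock(lock);	/* dropped on every path */
	return ret;
}

static int bump(void *arg)
{
	return ++*(int *)arg;
}

int main(void)
{
	int counter = 0;

	return with_lock(&foo_lock, bump, &counter) == 1 ? 0 : 1;
}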
I followed the start of this thread when it was about security mailing
lists and bug-disclosure rules, and then lost interest.
I just looked in again, and I seem to be seeing discussion of merging
grsecurity patches into mainline. I haven't yet found a message
where this is proposed explicitly, so if I am inferring incorrectly,
I apologize. (And you can ignore the rest of this missive.)
However, I did look carefully at an earlier patch that claimed to be a
Linux port of some OpenBSD networking randomization code, ostensibly to
make packet-guessing attacks more difficult.
http://marc.theaimsgroup.com/?l=linux-kernel&m=110693283511865
It was further claimed that this code came via grsecurity. I did verify
that the code looked a lot like pieces of OpenBSD, but didn't look at
grsecurity at all.
However, I did look in some detail at the code itself.
http://marc.theaimsgroup.com/?l=linux-netdev&m=110736479712671
What I concluded was that it was broken beyond belief. The effect on the
networking code varied from wasting a lot of time randomizing a number
(the IP ID, generated in the wrong place) that could be a constant zero
if not for working around a bug in Microsoft's PPP stack, to severe
protocol violation (in the RPC XID generation).
Not to mention race conditions out the wazoo due to porting
single-threaded code.
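(To make that last point concrete, a generic illustration -- not the
actual patch code, and the names are invented: single-threaded BSD-style
code can keep generator state in a bare global, but called concurrently
on SMP the same pattern can hand two callers identical "random" IDs
unless the port adds its own serialization.)

#include <pthread.h>
#include <stdint.h>

static uint32_t id_state = 0x12345678;
static pthread_mutex_t id_lock = PTHREAD_MUTEX_INITIALIZER;

/* Fine single-threaded; racy on SMP: two callers can load the same
 * state and return the same ID. */
static uint32_t next_id_racy(void)
{
	id_state = id_state * 1664525u + 1013904223u;	/* toy LCG */
	return id_state;
}

/* What a correct port has to add: */
static uint32_t next_id(void)
{
	uint32_t id;

	pthread_mutex_lock(&id_lock);
	id = next_id_racy();
	pthread_mutex_unlock(&id_lock);
	return id;
}

int main(void)
{
	return next_id() ? 0 : 1;
}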
After careful review, I couldn't find a single redeeming feature, or
even a good idea that was merely implemented badly. See the posting
for details and more colorful criticism.
Now, as I said, I have *not* gone to the trouble of seeing if this patch
really did come from grsecurity, or if it was horribly damaged in the
process of splitting it out. So I may be unfairly blaming grsecurity,
but I didn't feel like seeking out more horrible code to torture my
sanity with.
My personal, judgemental opinion was that if that was typical of
grsecurity, it's a festering pile of pus that I'm not going to let
anywhere near my kernel, thank you very much.
But to the extent that this excerpt constitutes reasonable grounds for
suspicion, I would recommend a particularly careful review of any
grsecurity patches, quite apart from Linus' dislike of monolithic patches.
Just my $0.02.