Subject: Re: [GIT PULL] kdbus for 4.1-rc1
From: cee1
Date: Fri, 3 Jul 2015 17:13:37 +0800
To: Łukasz Stelmach
Cc: Richard Weinberger, Austin S Hemmelgarn, "Theodore Ts'o",
    Harald Hoyer, "linux-kernel@vger.kernel.org", Greg KH

2015-04-30 22:52 GMT+08:00 Łukasz Stelmach:
> It was <2015-04-30 Thu 14:45>, when Richard Weinberger wrote:
>> On 30.04.2015 at 14:40, Łukasz Stelmach wrote:
>>> It was <2015-04-30 Thu 14:23>, when Richard Weinberger wrote:
>>>> On 30.04.2015 at 14:16, Łukasz Stelmach wrote:
>>>>> It was <2015-04-30 Thu 12:40>, when Richard Weinberger wrote:
>>>>>> On 30.04.2015 at 12:19, Łukasz Stelmach wrote:
>>>>>>> It was <2015-04-30 Thu 11:12>, when Richard Weinberger wrote:
>>>>>>>> On 30.04.2015 at 11:05, Łukasz Stelmach wrote:
>>>>>>>>> Regardless of initrd issues, I feel there is a need for a local
>>>>>>>>> IPC that is more capable than UDS.
>>> [...]
>>>>>>> For example, a service can't acquire the credentials of the client
>>>>>>> process that actually sent a request (it can, but it can't trust
>>>>>>> them). A service can't be protected by an LSM on a bus that is
>>>>>>> driven by dbus-daemon. Yes, dbus-daemon can check the client's and
>>>>>>> service's labels and enforce a policy, but then it is the daemon
>>>>>>> doing the enforcement, not the LSM code in the kernel.
>>>>>>
>>>>>> That's why I said we can think about new kernel features if they
>>>>>> are needed. But the current sink-or-swim approach of the kdbus
>>>>>> folks is not the solution either. As I said, if dbus-daemon
>>>>>> utilizes the kernel interface as much as possible, we can think
>>>>>> about new features.
>>>>>
>>>>> Which kernel interfaces do you suggest using to solve the issues
>>>>> I mentioned in the second paragraph: race conditions and LSM
>>>>> support (for example)?
>>>>
>>>> The question is whether it makes sense to collect this kind of
>>>> metadata. I really like Andy's and Alan's idea to improve AF_UNIX
>>>> or revive AF_BUS.
>>>
>>> Race conditions have nothing to do with metadata. Neither has LSM
>>> support.
>>
>> Sorry, I thought you meant the races while collecting metadata in
>> userspace...
>
> My bad, some race conditions *are* associated with collecting
> metadata, but not all. It is impossible (correct me if I am wrong) to
> implement reliable die-on-idle with dbus-daemon.
>
>>> AF_UNIX with multicast support wouldn't be AF_UNIX anymore.
>>>
>>> AF_BUS? I haven't followed the discussion back then. Why do you
>>> think it is better than kdbus?
>>
>> Please see https://lwn.net/Articles/641278/
>
> Thanks. If I understand correctly, the author suggests running eBPF on
> the receiving socket side to filter incoming multicast messages. This
> is nice if you care about not introducing too much new code. However,
> AFAICT it may be more computationally complex than Bloom filters,
> because you need to run eBPF on every receiving socket instead of
> getting a short list of sockets to copy the data to. Of course, for a
> small number of receivers the "constant" cost of running the Bloom
> filter may be higher.

I am still thinking about the idea of implementing kdbus in the form of
a socket. What about using a __multicast group__ address instead of
eBPF to send and receive multicast messages? This could implement the
Bloom filter as follows:

Sender:    send to multi_address
Receivers: if ((multi_address & joined_address) == joined_address) {
               /* a message for us */
           }

We could then further apply eBPF to remove the false positives; without
that step, every false positive wakes up userspace code, which has to
filter it out itself.

-- 
Regards,

- cee1