Date: Tue, 4 Aug 2015 10:58:03 +0200
Subject: Re: kdbus: to merge or not to merge?
From: David Herrmann
To: Andy Lutomirski
Cc: Linus Torvalds, "linux-kernel@vger.kernel.org", Djalal Harouni, Greg KH,
    Havoc Pennington, "Eric W. Biederman", One Thousand Gnomes, Tom Gundersen,
    Daniel Mack, "Kalle A. Sandstrom", Borislav Petkov, cee1
X-Mailing-List: linux-kernel@vger.kernel.org

Hi

On Tue, Aug 4, 2015 at 1:02 AM, Andy Lutomirski wrote:
> I got Fedora Rawhide working under kdbus (thanks, everyone!), and I ran
> this little program:
>
> #include <systemd/sd-bus.h>
> #include <err.h>
>
> int main(int argc, char *argv[])
> {
>         while (1) {
>                 sd_bus *bus;
>                 if (sd_bus_open_system(&bus) < 0) {
>                         /* warn("sd_bus_open_system"); */
>                         continue;
>                 }
>                 sd_bus_close(bus);

You lack a call to sd_bus_unref() here. Without it, your loop is
equivalent to:

        while (1)
                malloc(1024);

This simple malloc-loop already hogs your system. If I add the
required call to _unref(), your tool runs smoothly on my machine.

>         }
> }
>
> under both userspace dbus and under kdbus. Userspace dbus burns some
> CPU -- no big deal. I expected kdbus to fail to scale and burn a
> disproportionate amount of CPU (because I don't see how it /can/
> scale). Instead it fell over completely. I didn't bother debugging
> it, but offhand I'd guess that the system OOMed and didn't come back.

I cannot see the relation to kdbus.
> On very brief inspection, Rawhide seems to have a lot of kdbus
> connections with 16MiB of mapped tmpfs stuff each. (53 of them
> mapped, and I don't know how many exist with tmpfs backing but aren't
> mapped.) Presumably the number only goes up as the degree of reliance
> on the userspace proxy goes down.

What does this have to do with the proxy? Why would resource
consumption go *up* as proxy use declines? Please elaborate.

> I don't know of any deployed systems that solve it by broadcasting the
> lifetime of everything to everyone and relying on those broadcasts
> going through, though.

Luckily, kdbus does not do this.

Thanks
David