Date: Fri, 29 Jun 2018 11:52:00 +0200
From: Michal Hocko
To: Shakeel Butt
Cc: Jan Kara, Andrew Morton, Johannes Weiner, Vladimir Davydov, Greg Thelen,
    Amir Goldstein, Roman Gushchin, Alexander Viro, LKML, Cgroups,
    linux-fsdevel, Linux MM
Subject: Re: [PATCH 1/2] fs: fsnotify: account fsnotify metadata to kmemcg
Message-ID: <20180629095200.GF13860@dhcp22.suse.cz>
References: <20180627191250.209150-1-shakeelb@google.com>
 <20180627191250.209150-2-shakeelb@google.com>
 <20180628100253.jscxkw2d6vfhnbo5@quack2.suse.cz>
User-Agent: Mutt/1.10.0 (2018-05-17)

On Thu 28-06-18 12:21:26, Shakeel Butt wrote:
> On Thu, Jun 28, 2018 at 12:03 PM Jan Kara wrote:
> >
> > On Wed 27-06-18 12:12:49, Shakeel Butt wrote:
> > > A lot of memory can be consumed by the events generated for huge or
> > > unlimited queues if there is either no listener or a slow one. This
> > > can cause system-level memory pressure or OOMs. So it is better to
> > > account the fsnotify kmem caches to the memcg of the listener.
> > >
> > > However, the listener can be in a different memcg than the memcg of
> > > the producer, and these allocations happen in the context of the
> > > event producer. This patch introduces a remote memcg charging API
> > > which the producer can use to charge the allocations to the memcg of
> > > the listener.
> > >
> > > There are seven fsnotify kmem caches, and among them the allocations
> > > from dnotify_struct_cache, dnotify_mark_cache, fanotify_mark_cache
> > > and inotify_inode_mark_cachep happen in the context of a syscall from
> > > the listener. So SLAB_ACCOUNT is enough for these caches.
> > >
> > > The objects from fsnotify_mark_connector_cachep are not accounted, as
> > > they are small compared to the notification marks or events, and it
> > > is unclear whom to account the connector to since it is shared by all
> > > events attached to the inode.
> > >
> > > The allocations from the event caches happen in the context of the
> > > event producer. For such caches we will need to remote-charge the
> > > allocations to the listener's memcg. Thus we save the memcg reference
> > > in the fsnotify_group structure of the listener.
> > >
> > > This patch also moves the members of fsnotify_group around, filling
> > > the holes, to keep the structure size the same even with the
> > > additional member, at least for a 64-bit build.
> >
> > ...
> >
> > >  static int __init fanotify_user_setup(void)
> > >  {
> > > -	fanotify_mark_cache = KMEM_CACHE(fsnotify_mark, SLAB_PANIC);
> > > +	fanotify_mark_cache = KMEM_CACHE(fsnotify_mark,
> > > +					 SLAB_PANIC|SLAB_ACCOUNT);
> > >  	fanotify_event_cachep = KMEM_CACHE(fanotify_event_info, SLAB_PANIC);
> > >  	if (IS_ENABLED(CONFIG_FANOTIFY_ACCESS_PERMISSIONS)) {
> > >  		fanotify_perm_event_cachep =
> >
> > Why don't you also set up the fanotify_event_cachep and
> > fanotify_perm_event_cachep caches with SLAB_ACCOUNT, instead of
> > specifying __GFP_ACCOUNT manually? Otherwise the patch looks good to
> > me.
>
> Hi Jan, IMHO having a visible __GFP_ACCOUNT along with
> memalloc_use_memcg() makes it explicit and readable in the code that we
> want targeted/remote memcg charging.

Agreed. If you had an implicit SLAB_ACCOUNT then you could get
inconsistencies where some allocations would get charged to the current
task while others would not.

-- 
Michal Hocko
SUSE Labs
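
For reference, the remote-charging pattern discussed above boils down to
roughly the following minimal sketch. It assumes the
memalloc_use_memcg()/memalloc_unuse_memcg() helpers this series
introduces and the memcg reference the patch saves in fsnotify_group;
the helper name alloc_event_charged() and the surrounding code are
illustrative, not the patch verbatim.

/*
 * Sketch: charge an fanotify event allocation to the listener's memcg.
 * group->memcg is the mem_cgroup reference saved in fsnotify_group at
 * listener setup time, as described in the patch; alloc_event_charged()
 * is a hypothetical helper for illustration only.
 */
#include <linux/sched/mm.h>
#include <linux/slab.h>

static struct fanotify_event_info *
alloc_event_charged(struct fsnotify_group *group)
{
	struct fanotify_event_info *event;

	/*
	 * Override current->active_memcg so that accounted allocations
	 * made in the event producer's context are charged to the
	 * listener's memcg instead of the producer's.
	 */
	memalloc_use_memcg(group->memcg);

	/*
	 * The explicit __GFP_ACCOUNT at the call site (rather than
	 * SLAB_ACCOUNT on the cache) is what makes the targeted
	 * charging visible to the reader.
	 */
	event = kmem_cache_alloc(fanotify_event_cachep,
				 GFP_KERNEL | __GFP_ACCOUNT);

	memalloc_unuse_memcg();
	return event;
}

Marking the event caches with SLAB_ACCOUNT instead would make every
allocation accounted implicitly, which is the inconsistency noted above:
allocations made outside a memalloc_use_memcg() section would silently be
charged to whichever task happens to be current rather than to the
listener.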