From: Yang Shi
Date: Fri, 28 Jun 2019 10:16:13 -0700
Subject: Re: [PATCH 2/2] mm, slab: Extend vm/drop_caches to shrink kmem slabs
To: Christopher Lameter
Cc: Roman Gushchin, Waiman Long, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Alexander Viro, Jonathan Corbet, Luis Chamberlain, Kees Cook, Johannes Weiner, Michal Hocko, Vladimir Davydov, linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Shakeel Butt, Andrea Arcangeli
List-ID: <linux-kernel.vger.kernel.org>

On Fri, Jun 28, 2019 at 8:32 AM Christopher Lameter wrote:
>
> On Thu, 27 Jun 2019, Roman Gushchin wrote:
>
> > so that objects belonging to different memory cgroups can share the same page
> > and kmem_caches.
> >
> > It's a fairly big change though.
>
> Could this be done at another level?
> Put a cgroup pointer into the
> corresponding structures and then go back to just a single kmem_cache for
> the system as a whole? You can still account them per cgroup and there
> will be no cleanup problem anymore. You could scan through a slab cache
> to remove the objects of a certain cgroup, and then the fragmentation
> problem that cgroups create here will be handled by the slab allocators in
> the traditional way. The duplication of the kmem_cache was not designed
> into the allocators but bolted on later.

I'm afraid this may introduce another problem for memcg page reclaim. When shrinking slabs, the shrinker may end up scanning a very long shared list to find the objects belonging to a specific memcg. The count operation in particular may have to scan the list from beginning to end, which can take unbounded time. When I worked on the THP deferred split shrinker problem, I tried this approach, and it turned out that counting the objects on the list could take milliseconds even when only a few of them actually needed to be reclaimed.
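To make the concern concrete, here is a minimal userspace sketch (all names are illustrative, not the actual slab or shrinker code) of what a per-memcg count looks like once objects from all cgroups share one list: the walk is O(total objects) regardless of how few objects the target memcg owns.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for a memcg and a slab object that carries
 * an owner pointer, per the single-kmem_cache proposal. */
struct memcg { int id; };

struct object {
    struct memcg *owner;    /* cgroup pointer bolted onto each object */
    struct object *next;
};

/* Counting one memcg's objects must walk the WHOLE shared list --
 * the cost scales with the total list length, not with how many
 * objects this memcg actually owns. */
static unsigned long count_memcg_objects(struct object *head,
                                         struct memcg *cg)
{
    unsigned long n = 0;

    for (struct object *o = head; o; o = o->next)
        if (o->owner == cg)
            n++;
    return n;
}
```

With per-memcg kmem_caches the count is confined to that memcg's own lists; with a shared cache every shrinker count pass pays for every other cgroup's objects too, which is the unbounded-time behavior described above.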