From: Nicolas Boichat
Date: Wed, 5 Dec 2018 15:39:51 +0800
Subject: Re: [PATCH v4 2/3] mm: Add support for kmem caches in DMA32 zone
To: richard.weiyang@gmail.com
Cc: Will Deacon, Michal Hocko, Levin Alexander, linux-mm@kvack.org,
	Christoph Lameter, Huaisheng Ye, Matthew Wilcox,
	linux-arm Mailing List, David Rientjes, yingjoe.chen@mediatek.com,
	Vlastimil Babka, Tomasz Figa, Mike Rapoport, Matthias Brugger,
	Joonsoo Kim, Robin Murphy, lkml, Pekka Enberg,
	iommu@lists.linux-foundation.org, Andrew Morton, Mel Gorman
References: <20181205054828.183476-1-drinkcat@chromium.org>
	<20181205054828.183476-3-drinkcat@chromium.org>
	<20181205072528.l7blg6y24ggblh4m@master>
In-Reply-To: <20181205072528.l7blg6y24ggblh4m@master>

On Wed, Dec 5, 2018 at 3:25 PM Wei Yang wrote:
>
> On Wed, Dec 05, 2018 at 01:48:27PM +0800, Nicolas Boichat wrote:
> >In some cases (e.g. IOMMU ARMv7s page allocator), we need to allocate
> >data structures smaller than a page with GFP_DMA32 flag.
> >
> >This change makes it possible to create a custom cache in DMA32 zone
> >using kmem_cache_create, then allocate memory using kmem_cache_alloc.
> >
> >We do not create a DMA32 kmalloc cache array, as there are currently
> >no users of kmalloc(..., GFP_DMA32). The new test in check_slab_flags
> >ensures that such calls still fail (as they do before this change).
> >
> >Fixes: ad67f5a6545f ("arm64: replace ZONE_DMA with ZONE_DMA32")
> >Signed-off-by: Nicolas Boichat
> >---
> >
> >Changes since v2:
> > - Clarified commit message
> > - Add entry in sysfs-kernel-slab to document the new sysfs file
> >
> >(v3 used the page_frag approach)
> >
> > Documentation/ABI/testing/sysfs-kernel-slab |  9 +++++++++
> > include/linux/slab.h                        |  2 ++
> > mm/internal.h                               |  8 ++++++--
> > mm/slab.c                                   |  4 +++-
> > mm/slab.h                                   |  3 ++-
> > mm/slab_common.c                            |  2 +-
> > mm/slub.c                                   | 18 +++++++++++++++++-
> > 7 files changed, 40 insertions(+), 6 deletions(-)
> >
> >diff --git a/Documentation/ABI/testing/sysfs-kernel-slab b/Documentation/ABI/testing/sysfs-kernel-slab
> >index 29601d93a1c2ea..d742c6cfdffbe9 100644
> >--- a/Documentation/ABI/testing/sysfs-kernel-slab
> >+++ b/Documentation/ABI/testing/sysfs-kernel-slab
> >@@ -106,6 +106,15 @@ Description:
> > 		are from ZONE_DMA.
> > 		Available when CONFIG_ZONE_DMA is enabled.
> >
> >+What:		/sys/kernel/slab/cache/cache_dma32
> >+Date:		December 2018
> >+KernelVersion:	4.21
> >+Contact:	Nicolas Boichat
> >+Description:
> >+		The cache_dma32 file is read-only and specifies whether objects
> >+		are from ZONE_DMA32.
> >+		Available when CONFIG_ZONE_DMA32 is enabled.
> >+
> > What:		/sys/kernel/slab/cache/cpu_slabs
> > Date:		May 2007
> > KernelVersion:	2.6.22
> >diff --git a/include/linux/slab.h b/include/linux/slab.h
> >index 11b45f7ae4057c..9449b19c5f107a 100644
> >--- a/include/linux/slab.h
> >+++ b/include/linux/slab.h
> >@@ -32,6 +32,8 @@
> > #define SLAB_HWCACHE_ALIGN	((slab_flags_t __force)0x00002000U)
> > /* Use GFP_DMA memory */
> > #define SLAB_CACHE_DMA		((slab_flags_t __force)0x00004000U)
> >+/* Use GFP_DMA32 memory */
> >+#define SLAB_CACHE_DMA32	((slab_flags_t __force)0x00008000U)
> > /* DEBUG: Store the last owner for bug hunting */
> > #define SLAB_STORE_USER		((slab_flags_t __force)0x00010000U)
> > /* Panic if kmem_cache_create() fails */
> >diff --git a/mm/internal.h b/mm/internal.h
> >index a2ee82a0cd44ae..fd244ad716eaf8 100644
> >--- a/mm/internal.h
> >+++ b/mm/internal.h
> >@@ -14,6 +14,7 @@
> > #include
> > #include
> > #include
> >+#include
> > #include
> >
> > /*
> >@@ -34,9 +35,12 @@
> > #define GFP_CONSTRAINT_MASK (__GFP_HARDWALL|__GFP_THISNODE)
> >
> > /* Check for flags that must not be used with a slab allocator */
> >-static inline gfp_t check_slab_flags(gfp_t flags)
> >+static inline gfp_t check_slab_flags(gfp_t flags, slab_flags_t slab_flags)
> > {
> >-	gfp_t bug_mask = __GFP_DMA32 | __GFP_HIGHMEM | ~__GFP_BITS_MASK;
> >+	gfp_t bug_mask = __GFP_HIGHMEM | ~__GFP_BITS_MASK;
> >+
> >+	if (!IS_ENABLED(CONFIG_ZONE_DMA32) || !(slab_flags & SLAB_CACHE_DMA32))
> >+		bug_mask |= __GFP_DMA32;
>
> The original version doesn't check CONFIG_ZONE_DMA32.
>
> Do we need to add this condition here?
> Could we just decide the bug_mask based on slab_flags?

We can. The reason I did it this way is that when we don't have
CONFIG_ZONE_DMA32, the compiler should be able to simplify to:

bug_mask = __GFP_HIGHMEM | ~__GFP_BITS_MASK;
if (true || ..) => if (true)
	bug_mask |= __GFP_DMA32;

Then just:
bug_mask = __GFP_HIGHMEM | ~__GFP_BITS_MASK | __GFP_DMA32;

And since the function is inline, slab_flags would not even need to be
accessed at all.

> >
> > 	if (unlikely(flags & bug_mask)) {
> > 		gfp_t invalid_mask = flags & bug_mask;
> >diff --git a/mm/slab.c b/mm/slab.c
> >index 65a774f05e7836..2fd3b9a996cbe6 100644
> >--- a/mm/slab.c
> >+++ b/mm/slab.c
> >@@ -2109,6 +2109,8 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
> > 	cachep->allocflags = __GFP_COMP;
> > 	if (flags & SLAB_CACHE_DMA)
> > 		cachep->allocflags |= GFP_DMA;
> >+	if (flags & SLAB_CACHE_DMA32)
> >+		cachep->allocflags |= GFP_DMA32;
> > 	if (flags & SLAB_RECLAIM_ACCOUNT)
> > 		cachep->allocflags |= __GFP_RECLAIMABLE;
> > 	cachep->size = size;
> >@@ -2643,7 +2645,7 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep,
> > 	 * Be lazy and only check for valid flags here, keeping it out of the
> > 	 * critical path in kmem_cache_alloc().
> > 	 */
> >-	flags = check_slab_flags(flags);
> >+	flags = check_slab_flags(flags, cachep->flags);
> > 	WARN_ON_ONCE(cachep->ctor && (flags & __GFP_ZERO));
> > 	local_flags = flags & (GFP_CONSTRAINT_MASK|GFP_RECLAIM_MASK);
> >
> >diff --git a/mm/slab.h b/mm/slab.h
> >index 4190c24ef0e9df..fcf717e12f0a86 100644
> >--- a/mm/slab.h
> >+++ b/mm/slab.h
> >@@ -127,7 +127,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
> >
> >
> > /* Legal flag mask for kmem_cache_create(), for various configurations */
> >-#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | SLAB_PANIC | \
> >+#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
> >+			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
> > 			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
> >
> > #if defined(CONFIG_DEBUG_SLAB)
> >diff --git a/mm/slab_common.c b/mm/slab_common.c
> >index 70b0cc85db67f8..18b7b809c8d064 100644
> >--- a/mm/slab_common.c
> >+++ b/mm/slab_common.c
> >@@ -53,7 +53,7 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
> > 			SLAB_FAILSLAB | SLAB_KASAN)
> >
> > #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
> >-			SLAB_ACCOUNT)
> >+			SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
> >
> > /*
> >  * Merge control. If this is set then no merging of slab caches will occur.
> >diff --git a/mm/slub.c b/mm/slub.c
> >index 21a3f6866da472..6d47765a82d150 100644
> >--- a/mm/slub.c
> >+++ b/mm/slub.c
> >@@ -1685,7 +1685,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
> >
> > static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
> > {
> >-	flags = check_slab_flags(flags);
> >+	flags = check_slab_flags(flags, s->flags);
> >
> > 	return allocate_slab(s,
> > 		flags & (GFP_RECLAIM_MASK | GFP_CONSTRAINT_MASK), node);
> >@@ -3577,6 +3577,9 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> > 	if (s->flags & SLAB_CACHE_DMA)
> > 		s->allocflags |= GFP_DMA;
> >
> >+	if (s->flags & SLAB_CACHE_DMA32)
> >+		s->allocflags |= GFP_DMA32;
> >+
> > 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
> > 		s->allocflags |= __GFP_RECLAIMABLE;
> >
> >@@ -5095,6 +5098,14 @@ static ssize_t cache_dma_show(struct kmem_cache *s, char *buf)
> > SLAB_ATTR_RO(cache_dma);
> > #endif
> >
> >+#ifdef CONFIG_ZONE_DMA32
> >+static ssize_t cache_dma32_show(struct kmem_cache *s, char *buf)
> >+{
> >+	return sprintf(buf, "%d\n", !!(s->flags & SLAB_CACHE_DMA32));
> >+}
> >+SLAB_ATTR_RO(cache_dma32);
> >+#endif
> >+
> > static ssize_t usersize_show(struct kmem_cache *s, char *buf)
> > {
> > 	return sprintf(buf, "%u\n", s->usersize);
> >@@ -5435,6 +5446,9 @@ static struct attribute *slab_attrs[] = {
> > #ifdef CONFIG_ZONE_DMA
> > 	&cache_dma_attr.attr,
> > #endif
> >+#ifdef CONFIG_ZONE_DMA32
> >+	&cache_dma32_attr.attr,
> >+#endif
> > #ifdef CONFIG_NUMA
> > 	&remote_node_defrag_ratio_attr.attr,
> > #endif
> >@@ -5665,6 +5679,8 @@ static char *create_unique_id(struct kmem_cache *s)
> > 	 */
> > 	if (s->flags & SLAB_CACHE_DMA)
> > 		*p++ = 'd';
> >+	if (s->flags & SLAB_CACHE_DMA32)
> >+		*p++ = 'D';
> > 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
> > 		*p++ = 'a';
> > 	if (s->flags & SLAB_CONSISTENCY_CHECKS)
> >--
> >2.20.0.rc1.387.gf8505762e3-goog
> >
> >_______________________________________________
> >iommu mailing list
> >iommu@lists.linux-foundation.org
> >https://lists.linuxfoundation.org/mailman/listinfo/iommu
>
> --
> Wei Yang
> Help you, Help me