Date: Sat, 17 Apr 2021 21:32:06 +0300
From: Ilias Apalodimas
To: Matthew Wilcox
Cc: brouer@redhat.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
    mcroce@linux.microsoft.com, grygorii.strashko@ti.com, arnd@kernel.org,
    hch@lst.de, linux-snps-arc@lists.infradead.org, mhocko@kernel.org,
    mgorman@suse.de
Subject: Re: [PATCH 1/2] mm: Fix struct page layout on 32-bit systems
References: <20210416230724.2519198-1-willy@infradead.org>
 <20210416230724.2519198-2-willy@infradead.org>
 <20210417024522.GP2531743@casper.infradead.org>
In-Reply-To: <20210417024522.GP2531743@casper.infradead.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Matthew,

On Sat, Apr 17, 2021 at 03:45:22AM +0100, Matthew Wilcox wrote:
> 
> Replacement patch to fix compiler warning.
> 
> From: "Matthew Wilcox (Oracle)"
> Date: Fri, 16 Apr 2021 16:34:55 -0400
> Subject: [PATCH 1/2] mm: Fix struct page layout on 32-bit systems
> To: brouer@redhat.com
> Cc: linux-kernel@vger.kernel.org,
>     linux-mm@kvack.org,
>     netdev@vger.kernel.org,
>     linuxppc-dev@lists.ozlabs.org,
>     linux-arm-kernel@lists.infradead.org,
>     linux-mips@vger.kernel.org,
>     ilias.apalodimas@linaro.org,
>     mcroce@linux.microsoft.com,
>     grygorii.strashko@ti.com,
>     arnd@kernel.org,
>     hch@lst.de,
>     linux-snps-arc@lists.infradead.org,
>     mhocko@kernel.org,
>     mgorman@suse.de
> 
> 32-bit architectures which expect 8-byte alignment for 8-byte integers
> and need 64-bit DMA addresses (arc, arm, mips, ppc) had their struct
> page inadvertently expanded in 2019.  When the dma_addr_t was added,
> it forced the alignment of the union to 8 bytes, which inserted a 4 byte
> gap between 'flags' and the union.
> 
> Fix this by storing the dma_addr_t in one or two adjacent unsigned longs.
> This restores the alignment to that of an unsigned long, and also fixes a
> potential problem where (on a big endian platform), the bit used to denote
> PageTail could inadvertently get set, and a racing get_user_pages_fast()
> could dereference a bogus compound_head().
> 
> Fixes: c25fff7171be ("mm: add dma_addr_t to struct page")
> Signed-off-by: Matthew Wilcox (Oracle)
> ---
>  include/linux/mm_types.h | 4 ++--
>  include/net/page_pool.h  | 12 +++++++++++-
>  net/core/page_pool.c     | 12 +++++++-----
>  3 files changed, 20 insertions(+), 8 deletions(-)
> 
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 6613b26a8894..5aacc1c10a45 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -97,10 +97,10 @@ struct page {
>  		};
>  		struct {	/* page_pool used by netstack */
>  			/**
> -			 * @dma_addr: might require a 64-bit value even on
> +			 * @dma_addr: might require a 64-bit value on
>  			 * 32-bit architectures.
>  			 */
> -			dma_addr_t dma_addr;
> +			unsigned long dma_addr[2];
>  		};
>  		struct {	/* slab, slob and slub */
>  			union {
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index b5b195305346..ad6154dc206c 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -198,7 +198,17 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
>  
>  static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
>  {
> -	return page->dma_addr;
> +	dma_addr_t ret = page->dma_addr[0];
> +	if (sizeof(dma_addr_t) > sizeof(unsigned long))
> +		ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16;
> +	return ret;
> +}
> +
> +static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
> +{
> +	page->dma_addr[0] = addr;
> +	if (sizeof(dma_addr_t) > sizeof(unsigned long))
> +		page->dma_addr[1] = addr >> 16 >> 16;

The 'error' that was reported will never trigger, right? I assume this was
compiled with dma_addr_t as 32 bits (so it triggered the compilation error),
but the if check will never allow this code path to run. If so, can we add a
comment explaining this, since none of us will remember why 6 months from
now?
>  }
>  
>  static inline bool is_page_pool_compiled_in(void)
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index ad8b0707af04..f014fd8c19a6 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -174,8 +174,10 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
>  					  struct page *page,
>  					  unsigned int dma_sync_size)
>  {
> +	dma_addr_t dma_addr = page_pool_get_dma_addr(page);
> +
>  	dma_sync_size = min(dma_sync_size, pool->p.max_len);
> -	dma_sync_single_range_for_device(pool->p.dev, page->dma_addr,
> +	dma_sync_single_range_for_device(pool->p.dev, dma_addr,
>  					 pool->p.offset, dma_sync_size,
>  					 pool->p.dma_dir);
>  }
> @@ -226,7 +228,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>  		put_page(page);
>  		return NULL;
>  	}
> -	page->dma_addr = dma;
> +	page_pool_set_dma_addr(page, dma);
>  
>  	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
>  		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
> @@ -294,13 +296,13 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
>  	 */
>  		goto skip_dma_unmap;
>  
> -	dma = page->dma_addr;
> +	dma = page_pool_get_dma_addr(page);
>  
> -	/* When page is unmapped, it cannot be returned our pool */
> +	/* When page is unmapped, it cannot be returned to our pool */
>  	dma_unmap_page_attrs(pool->p.dev, dma,
>  			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
>  			     DMA_ATTR_SKIP_CPU_SYNC);
> -	page->dma_addr = 0;
> +	page_pool_set_dma_addr(page, 0);
>  skip_dma_unmap:
>  	/* This may be the last page returned, releasing the pool, so
>  	 * it is not safe to reference pool afterwards.
> --
> 2.30.2
> 

Thanks
/Ilias