Date: Sat, 17 Apr 2021 09:34:15 +0200
From: Jesper Dangaard Brouer
To: "Matthew Wilcox (Oracle)"
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 netdev@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
 ilias.apalodimas@linaro.org, mcroce@linux.microsoft.com,
 grygorii.strashko@ti.com, arnd@kernel.org, hch@lst.de,
 linux-snps-arc@lists.infradead.org, mhocko@kernel.org, mgorman@suse.de,
 brouer@redhat.com
Subject: Re: [PATCH 1/2] mm: Fix struct page layout on 32-bit systems
Message-ID: <20210417093415.41bcfde7@carbon>
In-Reply-To: <20210416230724.2519198-2-willy@infradead.org>
References: <20210416230724.2519198-1-willy@infradead.org>
 <20210416230724.2519198-2-willy@infradead.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, 17 Apr 2021 00:07:23 +0100 "Matthew Wilcox (Oracle)" wrote:

> 32-bit architectures which expect 8-byte alignment for 8-byte integers
> and need 64-bit DMA addresses (arc, arm, mips, ppc) had their struct
> page inadvertently expanded in 2019. When the dma_addr_t was added,
> it forced the alignment of the union to 8 bytes, which inserted a 4 byte
> gap between 'flags' and the union.
>
> Fix this by storing the dma_addr_t in one or two adjacent unsigned longs.
> This restores the alignment to that of an unsigned long, and also fixes a
> potential problem where (on a big endian platform), the bit used to denote
> PageTail could inadvertently get set, and a racing get_user_pages_fast()
> could dereference a bogus compound_head().
>
> Fixes: c25fff7171be ("mm: add dma_addr_t to struct page")
> Signed-off-by: Matthew Wilcox (Oracle)
> ---

Acked-by: Jesper Dangaard Brouer

Thank you, Matthew, for working on a fix for this. It's been a pleasure
working with you and exchanging crazy ideas with you for solving this.
Most of them didn't work out, especially those that came to me during
restless nights ;-).

Having worked through the other solutions, some of them very intrusive
and some that could even be considered ugly, I think we have a good and
non-intrusive solution/workaround in this patch. Thanks!

>  include/linux/mm_types.h |  4 ++--
>  include/net/page_pool.h  | 12 +++++++++++-
>  net/core/page_pool.c     | 12 +++++++-----
>  3 files changed, 20 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 6613b26a8894..5aacc1c10a45 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -97,10 +97,10 @@ struct page {
>  		};
>  		struct {	/* page_pool used by netstack */
>  			/**
> -			 * @dma_addr: might require a 64-bit value even on
> +			 * @dma_addr: might require a 64-bit value on
>  			 * 32-bit architectures.
>  			 */
> -			dma_addr_t dma_addr;
> +			unsigned long dma_addr[2];
>  		};
>  		struct {	/* slab, slob and slub */
>  			union {
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index b5b195305346..db7c7020746a 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -198,7 +198,17 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
>
>  static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
>  {
> -	return page->dma_addr;
> +	dma_addr_t ret = page->dma_addr[0];
> +	if (sizeof(dma_addr_t) > sizeof(unsigned long))
> +		ret |= (dma_addr_t)page->dma_addr[1] << 32;
> +	return ret;
> +}
> +
> +static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
> +{
> +	page->dma_addr[0] = addr;
> +	if (sizeof(dma_addr_t) > sizeof(unsigned long))
> +		page->dma_addr[1] = addr >> 32;
>  }
>
>  static inline bool is_page_pool_compiled_in(void)
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index ad8b0707af04..f014fd8c19a6 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -174,8 +174,10 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
>  					  struct page *page,
>  					  unsigned int dma_sync_size)
>  {
> +	dma_addr_t dma_addr = page_pool_get_dma_addr(page);
> +
>  	dma_sync_size = min(dma_sync_size, pool->p.max_len);
> -	dma_sync_single_range_for_device(pool->p.dev, page->dma_addr,
> +	dma_sync_single_range_for_device(pool->p.dev, dma_addr,
>  					 pool->p.offset, dma_sync_size,
>  					 pool->p.dma_dir);
>  }
> @@ -226,7 +228,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>  		put_page(page);
>  		return NULL;
>  	}
> -	page->dma_addr = dma;
> +	page_pool_set_dma_addr(page, dma);
>
>  	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
>  		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
> @@ -294,13 +296,13 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
>  		 */
>  		goto skip_dma_unmap;
>
> -	dma = page->dma_addr;
> +	dma = page_pool_get_dma_addr(page);
>
> -	/* When page is unmapped, it cannot be returned our pool */
> +	/* When page is unmapped, it cannot be returned to our pool */
>  	dma_unmap_page_attrs(pool->p.dev, dma,
>  			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
>  			     DMA_ATTR_SKIP_CPU_SYNC);
> -	page->dma_addr = 0;
> +	page_pool_set_dma_addr(page, 0);
>  skip_dma_unmap:
>  	/* This may be the last page returned, releasing the pool, so
>  	 * it is not safe to reference pool afterwards.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer