From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Matthew Wilcox (Oracle)",
    Ilias Apalodimas, Jesper Dangaard Brouer, Vlastimil Babka, Matteo Croce,
    Andrew Morton, Linus Torvalds
Subject: [PATCH 5.11 303/329] mm: fix struct page layout on 32-bit systems
Date: Mon, 17 May 2021 16:03:34 +0200
Message-Id: <20210517140312.343115841@linuxfoundation.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210517140302.043055203@linuxfoundation.org>
References:
<20210517140302.043055203@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Matthew Wilcox (Oracle)

commit 9ddb3c14afba8bc5950ed297f02d4ae05ff35cd1 upstream.

32-bit architectures which expect 8-byte alignment for 8-byte integers
and need 64-bit DMA addresses (arm, mips, ppc) had their struct page
inadvertently expanded in 2019.  When the dma_addr_t was added, it
forced the alignment of the union to 8 bytes, which inserted a 4 byte
gap between 'flags' and the union.

Fix this by storing the dma_addr_t in one or two adjacent unsigned
longs.  This restores the alignment to that of an unsigned long.  We
always store the low bits in the first word to prevent the PageTail bit
from being inadvertently set on a big endian platform.  If that
happened, get_user_pages_fast() racing against a page which was freed
and reallocated to the page_pool could dereference a bogus
compound_head(), which would be hard to trace back to this cause.

Link: https://lkml.kernel.org/r/20210510153211.1504886-1-willy@infradead.org
Fixes: c25fff7171be ("mm: add dma_addr_t to struct page")
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Ilias Apalodimas
Acked-by: Jesper Dangaard Brouer
Acked-by: Vlastimil Babka
Tested-by: Matteo Croce
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/mm_types.h |  4 ++--
 include/net/page_pool.h  | 12 +++++++++++-
 net/core/page_pool.c     | 12 +++++++-----
 3 files changed, 20 insertions(+), 8 deletions(-)

--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -97,10 +97,10 @@ struct page {
 		};
 		struct {	/* page_pool used by netstack */
 			/**
-			 * @dma_addr: might require a 64-bit value even on
+			 * @dma_addr: might require a 64-bit value on
 			 * 32-bit architectures.
 			 */
-			dma_addr_t dma_addr;
+			unsigned long dma_addr[2];
 		};
 		struct {	/* slab, slob and slub */
 			union {
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -198,7 +198,17 @@ static inline void page_pool_recycle_dir

 static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
 {
-	return page->dma_addr;
+	dma_addr_t ret = page->dma_addr[0];
+
+	if (sizeof(dma_addr_t) > sizeof(unsigned long))
+		ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16;
+	return ret;
+}
+
+static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
+{
+	page->dma_addr[0] = addr;
+	if (sizeof(dma_addr_t) > sizeof(unsigned long))
+		page->dma_addr[1] = upper_32_bits(addr);
 }

 static inline bool is_page_pool_compiled_in(void)
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -174,8 +174,10 @@ static void page_pool_dma_sync_for_devic
 					  struct page *page,
 					  unsigned int dma_sync_size)
 {
+	dma_addr_t dma_addr = page_pool_get_dma_addr(page);
+
 	dma_sync_size = min(dma_sync_size, pool->p.max_len);
-	dma_sync_single_range_for_device(pool->p.dev, page->dma_addr,
+	dma_sync_single_range_for_device(pool->p.dev, dma_addr,
 					 pool->p.offset, dma_sync_size,
 					 pool->p.dma_dir);
 }
@@ -226,7 +228,7 @@ static struct page *__page_pool_alloc_pa
 		put_page(page);
 		return NULL;
 	}
-	page->dma_addr = dma;
+	page_pool_set_dma_addr(page, dma);

 	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
 		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
@@ -294,13 +296,13 @@ void page_pool_release_page(struct page_
 		 */
 		goto skip_dma_unmap;

-	dma = page->dma_addr;
+	dma = page_pool_get_dma_addr(page);

-	/* When page is unmapped, it cannot be returned our pool */
+	/* When page is unmapped, it cannot be returned to our pool */
 	dma_unmap_page_attrs(pool->p.dev, dma, PAGE_SIZE << pool->p.order,
 			     pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
-	page->dma_addr = 0;
+	page_pool_set_dma_addr(page, 0);
 skip_dma_unmap:
 	/* This may be the last page returned, releasing the pool, so
 	 * it is not safe to reference pool afterwards.