Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755631AbdIGRhN (ORCPT );
	Thu, 7 Sep 2017 13:37:13 -0400
Received: from mail-io0-f173.google.com ([209.85.223.173]:38845 "EHLO
	mail-io0-f173.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1755540AbdIGRhI (ORCPT );
	Thu, 7 Sep 2017 13:37:08 -0400
X-Google-Smtp-Source: AOwi7QC4ew5XI0bdyhxNewtgvQt3hNVrHFiTcuJZ1UIqQpJr/6NxOIoDAe+LTo2d4BSWdP/4GZ956w==
From: Tycho Andersen
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, kernel-hardening@lists.openwall.com,
	Marco Benatto, Juerg Haefliger, Konrad Rzeszutek Wilk,
	Tycho Andersen
Subject: [PATCH v6 04/11] swiotlb: Map the buffer if it was unmapped by XPFO
Date: Thu, 7 Sep 2017 11:36:02 -0600
Message-Id: <20170907173609.22696-5-tycho@docker.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170907173609.22696-1-tycho@docker.com>
References: <20170907173609.22696-1-tycho@docker.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 2440
Lines: 75

From: Juerg Haefliger

If XPFO has unmapped a page from the kernel's direct map, swiotlb_bounce()
cannot copy through phys_to_virt(); map the buffer in temporarily, the same
way the highmem case is already handled.

v6: * guard against lookup_xpfo() returning NULL

CC: Konrad Rzeszutek Wilk
Signed-off-by: Juerg Haefliger
Signed-off-by: Tycho Andersen
---
 include/linux/xpfo.h |  4 ++++
 lib/swiotlb.c        |  3 ++-
 mm/xpfo.c            | 15 +++++++++++++++
 3 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index 442c58ee930e..04590d1dcefa 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -30,6 +30,8 @@ void xpfo_kunmap(void *kaddr, struct page *page);
 void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp);
 void xpfo_free_pages(struct page *page, int order);
 
+bool xpfo_page_is_unmapped(struct page *page);
+
 #else /* !CONFIG_XPFO */
 
 static inline void xpfo_kmap(void *kaddr, struct page *page) { }
@@ -37,6 +39,8 @@ static inline void xpfo_kunmap(void *kaddr, struct page *page) { }
 static inline void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp) { }
 static inline void xpfo_free_pages(struct page *page, int order) { }
 
+static inline bool xpfo_page_is_unmapped(struct page *page) { return false; }
+
 #endif /* CONFIG_XPFO */
 
 #endif /* _LINUX_XPFO_H */
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index a8d74a733a38..d4fee5ca2d9e 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -420,8 +420,9 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
 {
 	unsigned long pfn = PFN_DOWN(orig_addr);
 	unsigned char *vaddr = phys_to_virt(tlb_addr);
+	struct page *page = pfn_to_page(pfn);
 
-	if (PageHighMem(pfn_to_page(pfn))) {
+	if (PageHighMem(page) || xpfo_page_is_unmapped(page)) {
 		/* The buffer does not have a mapping. Map it in and copy */
 		unsigned int offset = orig_addr & ~PAGE_MASK;
 		char *buffer;
diff --git a/mm/xpfo.c b/mm/xpfo.c
index bff24afcaa2e..cdbcbac582d5 100644
--- a/mm/xpfo.c
+++ b/mm/xpfo.c
@@ -220,3 +220,18 @@ void xpfo_kunmap(void *kaddr, struct page *page)
 	spin_unlock(&xpfo->maplock);
 }
 EXPORT_SYMBOL(xpfo_kunmap);
+
+bool xpfo_page_is_unmapped(struct page *page)
+{
+	struct xpfo *xpfo;
+
+	if (!static_branch_unlikely(&xpfo_inited))
+		return false;
+
+	xpfo = lookup_xpfo(page);
+	if (unlikely(!xpfo) || !xpfo->inited)
+		return false;
+
+	return test_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags);
+}
+EXPORT_SYMBOL(xpfo_page_is_unmapped);
-- 
2.11.0