Subject: Re: [PATCH v6 03/11] mm, x86: Add support for eXclusive Page Frame Ownership (XPFO)
From: Yisheng Xie
To: Tycho Andersen
CC: Marco Benatto, Juerg Haefliger
Date: Mon, 11 Sep 2017 15:24:09 +0800
Message-ID: <302be94d-7e44-001d-286c-2b0cd6098f7b@huawei.com>
In-Reply-To: <20170907173609.22696-4-tycho@docker.com>
References: <20170907173609.22696-1-tycho@docker.com> <20170907173609.22696-4-tycho@docker.com>

Hi Tycho,

On 2017/9/8 1:36, Tycho Andersen wrote:
> From: Juerg Haefliger
>
> This patch adds support for XPFO which protects against 'ret2dir' kernel
> attacks. The basic idea is to enforce exclusive ownership of page frames
> by either the kernel or userspace, unless explicitly requested by the
> kernel. Whenever a page destined for userspace is allocated, it is
> unmapped from physmap (the kernel's page table). When such a page is
> reclaimed from userspace, it is mapped back to physmap.
>
> Additional fields in the page_ext struct are used for XPFO housekeeping,
> specifically:
>   - two flags to distinguish user vs. kernel pages and to tag unmapped
>     pages.
>   - a reference counter to balance kmap/kunmap operations.
>   - a lock to serialize access to the XPFO fields.
>
> This patch is based on the work of Vasileios P. Kemerlis et al. who
> published their work in this paper:
>   http://www.cs.columbia.edu/~vpk/papers/ret2dir.sec14.pdf
>
> [...]
> +void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp)
> +{
> +	int i, flush_tlb = 0;
> +	struct xpfo *xpfo;
> +
> +	if (!static_branch_unlikely(&xpfo_inited))
> +		return;
> +
> +	for (i = 0; i < (1 << order); i++) {
> +		xpfo = lookup_xpfo(page + i);
> +		if (!xpfo)
> +			continue;
> +
> +		WARN(test_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags),
> +		     "xpfo: unmapped page being allocated\n");
> +
> +		/* Initialize the map lock and map counter */
> +		if (unlikely(!xpfo->inited)) {
> +			spin_lock_init(&xpfo->maplock);
> +			atomic_set(&xpfo->mapcount, 0);
> +			xpfo->inited = true;
> +		}
> +		WARN(atomic_read(&xpfo->mapcount),
> +		     "xpfo: already mapped page being allocated\n");
> +
> +		if ((gfp & GFP_HIGHUSER) == GFP_HIGHUSER) {
> +			/*
> +			 * Tag the page as a user page and flush the TLB if it
> +			 * was previously allocated to the kernel.
> +			 */
> +			if (!test_and_set_bit(XPFO_PAGE_USER, &xpfo->flags))
> +				flush_tlb = 1;

I'm not sure whether I'm missing something, but when the page was
previously allocated to the kernel, shouldn't we also unmap it from the
physmap (the kernel's page table) here, now that the page is being
handed to userspace?

Thanks,
Yisheng Xie
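P.S. Just to illustrate what I mean, here is a rough, untested sketch of
how that branch could look. It assumes the set_kpte() helper introduced
elsewhere in this series and the existing flush_tlb handling at the end
of xpfo_alloc_pages(); it is only meant to show where I think the unmap
could happen, not a finished change:

	if ((gfp & GFP_HIGHUSER) == GFP_HIGHUSER) {
		/*
		 * Tag the page as a user page. If it was previously a
		 * kernel page, drop it from the physmap right away and
		 * remember to flush the TLB for this range.
		 */
		if (!test_and_set_bit(XPFO_PAGE_USER, &xpfo->flags)) {
			set_kpte(page_address(page + i), page + i,
				 __pgprot(0));
			flush_tlb = 1;
		}
	} else {
		/* Tag the page as a non-user (kernel) page */
		clear_bit(XPFO_PAGE_USER, &xpfo->flags);
	}

The XPFO_PAGE_UNMAPPED bookkeeping and the later kmap/kunmap paths would
of course have to be kept consistent with such a change as well.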