Date: Mon, 11 Sep 2017 09:59:36 -0700
From: Tycho Andersen
To: Juerg Haefliger
Cc: Yisheng Xie, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    kernel-hardening@lists.openwall.com, Marco Benatto, x86@kernel.org
Subject: Re: [PATCH v6 03/11] mm, x86: Add support for eXclusive Page Frame Ownership (XPFO)
Message-ID: <20170911165936.eeqdwzir3kxkhvza@docker>
References: <20170907173609.22696-1-tycho@docker.com>
 <20170907173609.22696-4-tycho@docker.com>
 <302be94d-7e44-001d-286c-2b0cd6098f7b@huawei.com>
 <20170911145020.fat456njvyagcomu@docker>
 <57e95ad2-81d8-bf83-3e78-1313daa1bb80@canonical.com>
In-Reply-To: <57e95ad2-81d8-bf83-3e78-1313daa1bb80@canonical.com>

On Mon, Sep 11, 2017 at 06:03:55PM +0200, Juerg Haefliger wrote:
> 
> 
> On 09/11/2017 04:50 PM, Tycho Andersen wrote:
> > Hi Yisheng,
> > 
> > On Mon, Sep 11, 2017 at 03:24:09PM +0800, Yisheng Xie wrote:
> >>> +void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp)
> >>> +{
> >>> +	int i, flush_tlb = 0;
> >>> +	struct xpfo *xpfo;
> >>> +
> >>> +	if (!static_branch_unlikely(&xpfo_inited))
> >>> +		return;
> >>> +
> >>> +	for (i = 0; i < (1 << order); i++) {
> >>> +		xpfo = lookup_xpfo(page + i);
> >>> +		if (!xpfo)
> >>> +			continue;
> >>> +
> >>> +		WARN(test_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags),
> >>> +		     "xpfo: unmapped page being allocated\n");
> >>> +
> >>> +		/* Initialize the map lock and map counter */
> >>> +		if (unlikely(!xpfo->inited)) {
> >>> +			spin_lock_init(&xpfo->maplock);
> >>> +			atomic_set(&xpfo->mapcount, 0);
> >>> +			xpfo->inited = true;
> >>> +		}
> >>> +		WARN(atomic_read(&xpfo->mapcount),
> >>> +		     "xpfo: already mapped page being allocated\n");
> >>> +
> >>> +		if ((gfp & GFP_HIGHUSER) == GFP_HIGHUSER) {
> >>> +			/*
> >>> +			 * Tag the page as a user page and flush the TLB if it
> >>> +			 * was previously allocated to the kernel.
> >>> +			 */
> >>> +			if (!test_and_set_bit(XPFO_PAGE_USER, &xpfo->flags))
> >>> +				flush_tlb = 1;
> >>
> >> I'm not sure whether I am missing anything, however, when the page was
> >> previously allocated to the kernel, should we unmap the physmap (the
> >> kernel's page table) here? Since we are allocating the page to the user now.
> >>
> > Yes, I think you're right. Oddly, the XPFO_READ_USER test works
> > correctly for me, but I think (?) it should not because of this bug...
> 
> IIRC, this is an optimization carried forward from the initial
> implementation. The assumption is that the kernel will map the user
> buffer so it's not unmapped on allocation but only on the first (and

Does the kernel always map it, though? e.g. in the case of
XPFO_READ_USER, I'm not sure where the kernel would do a kmap() of the
test's user buffer.

Tycho

> subsequent) call of kunmap. I.e.:
> 
>  - alloc  -> noop
>  - kmap   -> noop
>  - kunmap -> unmapped from the kernel
>  - kmap   -> mapped into the kernel
>  - kunmap -> unmapped from the kernel
>  and so on until:
>  - free   -> mapped back into the kernel
> 
> I'm not sure if that makes sense though since it leaves a window.
> 
> ...Juerg
> 
> 
> > Tycho
> >
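
[Editor's note: for readers following the thread, the change Yisheng is suggesting
-- dropping the physmap entry at allocation time when a kernel page is handed to
userspace, rather than deferring to the first kunmap() -- might look roughly like
the hunk below. This is purely an illustrative sketch, not part of the posted
patch; it reuses the names from the quoted code and assumes a helper along the
lines of set_kpte() exists elsewhere in the series to rewrite the kernel mapping.]

		if ((gfp & GFP_HIGHUSER) == GFP_HIGHUSER) {
			/*
			 * Tag the page as a user page. If it was previously a
			 * kernel page, also drop it from the physmap right away
			 * instead of waiting for the first kunmap(), and
			 * remember to flush the stale TLB entries below.
			 */
			if (!test_and_set_bit(XPFO_PAGE_USER, &xpfo->flags)) {
				/* set_kpte() is assumed from the rest of the series */
				set_kpte(page_address(page + i), page + i,
					 __pgprot(0));
				set_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags);
				flush_tlb = 1;
			}
		}
	}

	/* Flush once for the whole allocation if any page changed ownership. */
	if (flush_tlb)
		flush_tlb_kernel_range((unsigned long)page_address(page),
				       (unsigned long)page_address(page) +
				       (PAGE_SIZE << order));

[This would give up the defer-to-kunmap optimization Juerg describes, in exchange
for closing the window he mentions where a user-owned page is still reachable
through the physmap.]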