Subject: Re: [PATCH] kvm: arm: Skip stage2 huge mappings for unaligned ipa backed by THP
From: Suzuki K Poulose
To: yuzenghui@huawei.com, linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, eric.auger@redhat.com,
    marc.zyngier@arm.com, christoffer.dall@arm.com,
    zhengxiang9@huawei.com, andrew.murray@arm.com,
    wanghaibin.wang@huawei.com
Date: Mon, 8 Apr 2019 11:35:30 +0100
In-Reply-To: <2ea55b9c-09da-c3d0-3616-aa6be85b5a46@huawei.com>
References: <1554203176-3958-1-git-send-email-suzuki.poulose@arm.com>
 <2ea55b9c-09da-c3d0-3616-aa6be85b5a46@huawei.com>

On 04/08/2019 04:50 AM, Zenghui Yu wrote:
> On 2019/4/2 19:06, Suzuki K Poulose wrote:
>> With commit a80868f398554842b14, we no longer ensure that the
>> THP page is properly aligned in the guest IPA. Skip the stage2
>> huge mapping for unaligned IPA backed by transparent hugepages.
>>
>> Fixes: a80868f398554842b14 ("KVM: arm/arm64: Enforce PTE mappings at
>> stage2 when needed")
>> Reported-by: Eric Auger
>> Cc: Marc Zyngier
>> Cc: Christoffer Dall
>> Cc: Zenghui Yu
>> Cc: Zheng Xiang
>> Tested-by: Eric Auger
>> Signed-off-by: Suzuki K Poulose
>
> Hi Suzuki,
>
> Why not make use of fault_supports_stage2_huge_mapping()?  Let it do
> some checks for us.
>
> fault_supports_stage2_huge_mapping() was intended to do a *two-step*
> check to tell us whether we can create stage2 huge block mappings, and
> this check covers both hugetlbfs and THP.  With commit
> a80868f398554842b14, we pass PAGE_SIZE as "map_size" for normal size
> pages (which turned out to be almost meaningless), and unfortunately
> the THP check no longer works.

That's correct.

> So we want to rework the *THP* check process.  Your patch fixes the
> first checking step, but the second is still missed, am I wrong?

It fixes the step explicitly for THP by making sure that the GPA and
the HVA are aligned to the map size.
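Concretely, the rule is just "same offset within a PMD-sized block on
both sides". A minimal sketch of that rule (the helper name is invented
for illustration; the real change is the hunk quoted below):

	/*
	 * Sketch only, not the exact kernel code: a PMD-sized stage2
	 * mapping is only safe when the guest frame number and the
	 * host page frame number share the same offset within a
	 * PMD-sized block (512 pages, i.e. 2M, with 4K pages).
	 */
	static bool thp_mapping_is_aligned(unsigned long gfn, unsigned long pfn)
	{
		unsigned long mask = PTRS_PER_PMD - 1;

		return (gfn & mask) == (pfn & mask);
	}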
> Can you please give a look at the below diff?
>
> thanks,
>
> zenghui
>
>> ---
>>   virt/kvm/arm/mmu.c | 4 +++-
>>   1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index 27c9583..4a22f5b 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c
>> @@ -1412,7 +1412,9 @@ static bool transparent_hugepage_adjust(kvm_pfn_t *pfnp, phys_addr_t *ipap)
>>            * page accordingly.
>>            */
>>           mask = PTRS_PER_PMD - 1;
>> -        VM_BUG_ON((gfn & mask) != (pfn & mask));

> Somehow, I'd prefer keeping the VM_BUG_ON() here, let it report some
> potential issues in the future (of course I hope none :) )

I don't think this calls for a VM_BUG_ON(). It is simply a case where
the GPA is not aligned to the HVA, but for a normal VMA that could be
made THP. We had this VM_BUG_ON(), which would never have hit, because
we would have set force_pte if they were not aligned.

>> +        /* Skip memslots with unaligned IPA and user address */
>> +        if ((gfn & mask) != (pfn & mask))
>> +            return false;
>>           if (pfn & mask) {
>>               *ipap &= PMD_MASK;
>>               kvm_release_pfn_clean(pfn);
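To see how easily userspace can set up the unaligned case, here is a
standalone sketch with invented numbers (plain userspace C, nothing
KVM-specific):

	#include <stdio.h>

	int main(void)
	{
		/* PTRS_PER_PMD - 1 with 4K pages: a 2M block is 512 pages */
		unsigned long mask = 512 - 1;
		unsigned long gfn = 0x100200;	/* IPA >> 12, 2M-aligned         */
		unsigned long pfn = 0x080010;	/* HVA page, 64K into a 2M block */

		if ((gfn & mask) != (pfn & mask))
			printf("unaligned: map with pages, not a huge PMD\n");

		return 0;
	}

Nothing stops a user from registering a memslot with such a GPA/HVA
pair, even when the VMA behind it is THP-backed, which is why the check
has to fall back to page mappings instead of BUG-ing.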
> ---8>---
>
> Rework fault_supports_stage2_huge_mapping(), let it check THP again.
>
> Signed-off-by: Zenghui Yu
> ---
>  virt/kvm/arm/mmu.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 27c9583..5e1b258 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1632,6 +1632,15 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
>      uaddr_end = uaddr_start + size;
>
>      /*
> +     * If the memslot is _not_ backed by hugetlbfs, then check if it
> +     * can be backed by transparent hugepages.
> +     *
> +     * Currently only PMD_SIZE THPs are supported, revisit it later.
> +     */
> +    if (map_size == PAGE_SIZE)
> +        map_size = PMD_SIZE;
> +

This looks hackish. What if we support PUD_SIZE huge pages in the
future?

> +    /*
>       * Pages belonging to memslots that don't have the same alignment
>       * within a PMD/PUD for userspace and IPA cannot be mapped with
>       * stage-2 PMD/PUD entries, because we'll end up mapping the wrong
>       * pages.
> @@ -1643,7 +1652,7 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
>       *    |abcde|fgh  Stage-1 block  |    Stage-1 block tv|xyz|
>       *    +-----+--------------------+--------------------+---+
>       *
> -     *    memslot->base_gfn << PAGE_SIZE:
> +     *    memslot->base_gfn << PAGE_SHIFT:

That could be fixed.

>       *      +---+--------------------+--------------------+-----+
>       *      |abc|def  Stage-2 block  |    Stage-2 block   |tvxyz|
>       *      +---+--------------------+--------------------+-----+

But personally I don't think this is the right way to fix it. And as
mentioned, the VM_BUG_ON() is unnecessary and could easily be triggered
by the user by using an unaligned GPA/HVA. All we need to do is use
page mappings in such cases, which is what my patch does.

Cheers
Suzuki
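P.S. For completeness, a rough sketch of the resulting
transparent_hugepage_adjust() flow with the fix applied (simplified:
the THP-backing check and the pfn refcount handling are elided, so
this is not the exact kernel code):

	static bool transparent_hugepage_adjust_sketch(kvm_pfn_t *pfnp,
						       phys_addr_t *ipap)
	{
		kvm_pfn_t pfn = *pfnp;
		gfn_t gfn = *ipap >> PAGE_SHIFT;
		unsigned long mask = PTRS_PER_PMD - 1;

		/* Unaligned IPA vs user address: use page mappings instead */
		if ((gfn & mask) != (pfn & mask))
			return false;

		if (pfn & mask) {
			/* Round the IPA and the pfn down to the 2M boundary */
			*ipap &= PMD_MASK;
			pfn &= ~(kvm_pfn_t)mask;
			*pfnp = pfn;
		}

		/* Safe to install a PMD-sized stage2 mapping */
		return true;
	}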