Subject: Re: [PATCH] kvm: arm: Skip stage2 huge mappings for unaligned ipa backed by THP
To: Suzuki K Poulose
CC: Marc Zyngier, Christoffer Dall, Zheng Xiang, "Wanghaibin (D)"
From: Zenghui Yu
Message-ID: <2ea55b9c-09da-c3d0-3616-aa6be85b5a46@huawei.com>
Date: Mon, 8 Apr 2019 11:50:58 +0800
In-Reply-To: <1554203176-3958-1-git-send-email-suzuki.poulose@arm.com>
References: <1554203176-3958-1-git-send-email-suzuki.poulose@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2019/4/2 19:06, Suzuki K Poulose wrote:
> With commit a80868f398554842b14, we no longer ensure that the
> THP page is properly aligned in the guest IPA. Skip the stage2
> huge mapping for unaligned IPA backed by transparent hugepages.
>
> Fixes: a80868f398554842b14 ("KVM: arm/arm64: Enforce PTE mappings at stage2 when needed")
> Reported-by: Eric Auger
> Cc: Marc Zyngier
> Cc: Christoffer Dall
> Cc: Zenghui Yu
> Cc: Zheng Xiang
> Tested-by: Eric Auger
> Signed-off-by: Suzuki K Poulose

Hi Suzuki,

Why not make use of fault_supports_stage2_huge_mapping()? Let it do
some checks for us.

fault_supports_stage2_huge_mapping() was intended to do a *two-step*
check to tell us whether we can create stage2 huge block mappings, and
this check covers both hugetlbfs and THP. With commit a80868f398554842b14,
we pass PAGE_SIZE as "map_size" for normal size pages (which turned out
to be almost meaningless), and unfortunately the THP check no longer
works. So we want to rework the *THP* check process. Your patch fixes
the first checking step, but the second is still missed, am I wrong?

Could you please take a look at the diff below?

thanks,
zenghui

> ---
>  virt/kvm/arm/mmu.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 27c9583..4a22f5b 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1412,7 +1412,9 @@ static bool transparent_hugepage_adjust(kvm_pfn_t *pfnp, phys_addr_t *ipap)
>  	 * page accordingly.
>  	 */
>  	mask = PTRS_PER_PMD - 1;
> -	VM_BUG_ON((gfn & mask) != (pfn & mask));

Somehow, I'd prefer keeping the VM_BUG_ON() here, to let it report some
potential issues in the future (of course I hope none :) )

> +	/* Skip memslots with unaligned IPA and user address */
> +	if ((gfn & mask) != (pfn & mask))
> +		return false;
>  	if (pfn & mask) {
>  		*ipap &= PMD_MASK;
>  		kvm_release_pfn_clean(pfn);

---8<---

Rework fault_supports_stage2_huge_mapping(), let it check THP again.

Signed-off-by: Zenghui Yu
---
 virt/kvm/arm/mmu.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 27c9583..5e1b258 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1632,6 +1632,15 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
 	uaddr_end = uaddr_start + size;
 
 	/*
+	 * If the memslot is _not_ backed by hugetlbfs, then check if it
+	 * can be backed by transparent hugepages.
+	 *
+	 * Currently only PMD_SIZE THPs are supported, revisit it later.
+	 */
+	if (map_size == PAGE_SIZE)
+		map_size = PMD_SIZE;
+
+	/*
 	 * Pages belonging to memslots that don't have the same alignment
 	 * within a PMD/PUD for userspace and IPA cannot be mapped with stage-2
 	 * PMD/PUD entries, because we'll end up mapping the wrong pages.
@@ -1643,7 +1652,7 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
 	 * |abcde|fgh  Stage-1 block  |    Stage-1 block tv|xyz|
 	 * +-----+--------------------+--------------------+---+
 	 *
-	 * memslot->base_gfn << PAGE_SIZE:
+	 * memslot->base_gfn << PAGE_SHIFT:
 	 * +---+--------------------+--------------------+-----+
 	 * |abc|def  Stage-2 block  |    Stage-2 block   |tvxyz|
 	 * +---+--------------------+--------------------+-----+

--
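[Editorial aside: the condition that both the VM_BUG_ON() and the proposed
check express is that the guest frame number and the host page frame number
must sit at the same offset within their respective PMD-sized regions;
otherwise a stage-2 block mapping would map the wrong pages. A minimal
standalone sketch of that predicate, NOT kernel code: the helper name and
the 4K-page/2M-PMD constants below are assumptions for illustration.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical constants for a 4K-page, 2M-PMD configuration. */
#define PAGE_SHIFT   12
#define PMD_SHIFT    21
#define PTRS_PER_PMD (1UL << (PMD_SHIFT - PAGE_SHIFT))	/* 512 pages per PMD */

/*
 * A PMD block mapping is only possible when gfn and pfn are congruent
 * modulo the number of pages per PMD, i.e. they occupy the same offset
 * within their 2M regions on the IPA side and the userspace side.
 */
static bool stage2_pmd_offsets_match(uint64_t gfn, uint64_t pfn)
{
	uint64_t mask = PTRS_PER_PMD - 1;

	return (gfn & mask) == (pfn & mask);
}
```

With this shape of check, an unaligned memslot simply falls back to PTE
mappings instead of tripping a BUG.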