From: Punit Agrawal <punit.agrawal@arm.com>
To: kvmarm@lists.cs.columbia.edu
Cc: Punit Agrawal <punit.agrawal@arm.com>, linux-arm-kernel@lists.infradead.org,
	marc.zyngier@arm.com, christoffer.dall@arm.com,
	linux-kernel@vger.kernel.org, suzuki.poulose@arm.com,
	will.deacon@arm.com, Russell King, Catalin Marinas
Subject: [PATCH v6 4/8] KVM: arm64: Support dirty page tracking for PUD hugepages
Date: Mon, 16 Jul 2018 12:08:53 +0100
Message-Id: <20180716110857.19310-5-punit.agrawal@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180716110857.19310-1-punit.agrawal@arm.com>
References: <20180716110857.19310-1-punit.agrawal@arm.com>

In preparation for creating PUD hugepages at stage 2, add support for
write protecting PUD hugepages when they are encountered. Write
protecting guest page tables is used to track dirty pages when
migrating VMs.

Also, provide trivial implementations of the required kvm_s2pud_*()
helpers to allow sharing code with arm32.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King
Cc: Catalin Marinas
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   | 16 ++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h | 10 ++++++++++
 virt/kvm/arm/mmu.c               | 11 +++++++----
 3 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index d095c2d0b284..c3ac7a76fb69 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -80,6 +80,22 @@ void kvm_clear_hyp_idmap(void);
 
 #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
 
+/*
+ * The following kvm_*pud*() functions are provided strictly to allow
+ * sharing code with arm64. They should never be called in practice.
+ */
+static inline void kvm_set_s2pud_readonly(pud_t *pud)
+{
+	BUG();
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pud)
+{
+	BUG();
+	return false;
+}
+
+
 static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
 {
 	*pmd = new_pmd;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 689def9bb9d5..84051930ddfe 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -239,6 +239,16 @@ static inline bool kvm_s2pmd_exec(pmd_t *pmdp)
 	return !(READ_ONCE(pmd_val(*pmdp)) & PMD_S2_XN);
 }
 
+static inline void kvm_set_s2pud_readonly(pud_t *pudp)
+{
+	kvm_set_s2pte_readonly((pte_t *)pudp);
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pudp)
+{
+	return kvm_s2pte_readonly((pte_t *)pudp);
+}
+
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index e131b7f9b7d7..ed8f8271c389 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1288,9 +1288,12 @@ static void stage2_wp_puds(pgd_t *pgd, phys_addr_t addr, phys_addr_t end)
 	do {
 		next = stage2_pud_addr_end(addr, end);
 		if (!stage2_pud_none(*pud)) {
-			/* TODO:PUD not supported, revisit later if supported */
-			BUG_ON(stage2_pud_huge(*pud));
-			stage2_wp_pmds(pud, addr, next);
+			if (stage2_pud_huge(*pud)) {
+				if (!kvm_s2pud_readonly(pud))
+					kvm_set_s2pud_readonly(pud);
+			} else {
+				stage2_wp_pmds(pud, addr, next);
+			}
 		}
 	} while (pud++, addr = next, addr != end);
 }
@@ -1333,7 +1336,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
  *
  * Called to start logging dirty pages after memory region
  * KVM_MEM_LOG_DIRTY_PAGES operation is called. After this function returns
- * all present PMD and PTEs are write protected in the memory region.
+ * all present PUD, PMD and PTEs are write protected in the memory region.
  * Afterwards read of dirty page log can be called.
  *
  * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
-- 
2.17.1
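
As background for the arm64 hunk above: stage 2 page, PMD block and PUD
block descriptors all encode the S2AP permission field at bits [7:6],
which is what makes the (pte_t *) cast in the new PUD helpers safe. For
reference, here is a minimal sketch of the kvm_set_s2pte_readonly()
helper that kvm_set_s2pud_readonly() falls through to, paraphrased from
arch/arm64/include/asm/kvm_mmu.h around the time of this series (the
exact code in a given tree may differ):

	static inline void kvm_set_s2pte_readonly(pte_t *ptep)
	{
		pteval_t old_pteval, pteval;

		pteval = READ_ONCE(pte_val(*ptep));
		do {
			old_pteval = pteval;
			/* Replace the S2AP field with the read-only encoding */
			pteval &= ~PTE_S2_RDWR;
			pteval |= PTE_S2_RDONLY;
			/* Retry if the descriptor changed under us */
			pteval = cmpxchg_relaxed(&pte_val(*ptep),
						 old_pteval, pteval);
		} while (pteval != old_pteval);
	}

Since kvm_s2pud_readonly() reads the same field, the stage2_wp_puds()
change above can skip the cmpxchg loop entirely when a PUD hugepage is
already write protected.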