From: Punit Agrawal
To: kvmarm@lists.cs.columbia.edu
Cc: Punit Agrawal, marc.zyngier@arm.com, will.deacon@arm.com,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	suzuki.poulose@arm.com, Russell King, Catalin Marinas
Subject: [PATCH v8 5/9] KVM: arm64: Support dirty page tracking for PUD hugepages
Date: Mon, 1 Oct 2018 16:54:39 +0100
Message-Id: <20181001155443.23032-6-punit.agrawal@arm.com>
In-Reply-To: <20181001155443.23032-1-punit.agrawal@arm.com>
References: <20181001155443.23032-1-punit.agrawal@arm.com>

In preparation for creating PUD hugepages at stage 2, add support for write
protecting PUD hugepages when they are encountered. Write protecting guest
tables is used to track dirty pages when migrating VMs.

Also, provide trivial implementations of required kvm_s2pud_* helpers to
allow sharing of code with arm32.

Signed-off-by: Punit Agrawal
Reviewed-by: Christoffer Dall
Reviewed-by: Suzuki K Poulose
Cc: Marc Zyngier
Cc: Russell King
Cc: Catalin Marinas
Cc: Will Deacon
---
 arch/arm/include/asm/kvm_mmu.h   | 15 +++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h | 10 ++++++++++
 virt/kvm/arm/mmu.c               | 11 +++++++----
 3 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index e77212e53e77..9ec09f4cc284 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -87,6 +87,21 @@ void kvm_clear_hyp_idmap(void);
 
 #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
 
+/*
+ * The following kvm_*pud*() functions are provided strictly to allow
+ * sharing code with arm64. They should never be called in practice.
+ */
+static inline void kvm_set_s2pud_readonly(pud_t *pud)
+{
+	BUG();
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pud)
+{
+	BUG();
+	return false;
+}
+
 static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
 {
 	pte_val(pte) |= L_PTE_S2_RDWR;

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index baabea0cbb66..3cc342177474 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -251,6 +251,16 @@ static inline bool kvm_s2pmd_exec(pmd_t *pmdp)
 	return !(READ_ONCE(pmd_val(*pmdp)) & PMD_S2_XN);
 }
 
+static inline void kvm_set_s2pud_readonly(pud_t *pudp)
+{
+	kvm_set_s2pte_readonly((pte_t *)pudp);
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pudp)
+{
+	return kvm_s2pte_readonly((pte_t *)pudp);
+}
+
 #define hyp_pte_table_empty(ptep) kvm_page_empty(ptep)
 
 #ifdef __PAGETABLE_PMD_FOLDED

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 21079eb5bc15..9c48f2ca6583 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1347,9 +1347,12 @@ static void stage2_wp_puds(struct kvm *kvm, pgd_t *pgd,
 	do {
 		next = stage2_pud_addr_end(kvm, addr, end);
 		if (!stage2_pud_none(kvm, *pud)) {
-			/* TODO:PUD not supported, revisit later if supported */
-			BUG_ON(stage2_pud_huge(kvm, *pud));
-			stage2_wp_pmds(kvm, pud, addr, next);
+			if (stage2_pud_huge(kvm, *pud)) {
+				if (!kvm_s2pud_readonly(pud))
+					kvm_set_s2pud_readonly(pud);
+			} else {
+				stage2_wp_pmds(kvm, pud, addr, next);
+			}
 		}
 	} while (pud++, addr = next, addr != end);
 }
@@ -1392,7 +1395,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
  *
  * Called to start logging dirty pages after memory region
  * KVM_MEM_LOG_DIRTY_PAGES operation is called. After this function returns
- * all present PMD and PTEs are write protected in the memory region.
+ * all present PUD, PMD and PTEs are write protected in the memory region.
  * Afterwards read of dirty page log can be called.
  *
  * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
-- 
2.18.0