From: Suzuki K Poulose
To: linux-arm-kernel@lists.infradead.org
Cc: suzuki.poulose@arm.com, linux-kernel@vger.kernel.org,
    kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
    marc.zyngier@arm.com, christoffer.dall@arm.com,
    will.deacon@arm.com, catalin.marinas@arm.com,
    anshuman.khandual@arm.com
Subject: [PATCH v10 4/8] KVM: arm64: Support dirty page tracking for PUD hugepages
Date: Tue, 11 Dec 2018 17:10:37 +0000
Message-Id: <1544548241-6417-5-git-send-email-suzuki.poulose@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1544548241-6417-1-git-send-email-suzuki.poulose@arm.com>
References: <1544548241-6417-1-git-send-email-suzuki.poulose@arm.com>
X-Mailing-List:
linux-kernel@vger.kernel.org

From: Punit Agrawal

In preparation for creating PUD hugepages at stage 2, add support for
write protecting PUD hugepages when they are encountered. Write
protecting guest tables is used to track dirty pages when migrating
VMs.

Also, provide trivial implementations of required kvm_s2pud_* helpers
to allow sharing of code with arm32.

Signed-off-by: Punit Agrawal
Reviewed-by: Christoffer Dall
Cc: Marc Zyngier
Cc: Russell King
Cc: Catalin Marinas
Cc: Will Deacon
[ Replaced BUG() => WARN_ON() in arm32 pud helpers ]
Signed-off-by: Suzuki K Poulose
---
 arch/arm/include/asm/kvm_mmu.h   | 15 +++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h | 10 ++++++++++
 virt/kvm/arm/mmu.c               | 11 +++++++----
 3 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index e6eff8b..9fe6c30 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -87,6 +87,21 @@ void kvm_clear_hyp_idmap(void);
 
 #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
 
+/*
+ * The following kvm_*pud*() functions are provided strictly to allow
+ * sharing code with arm64. They should never be called in practice.
+ */
+static inline void kvm_set_s2pud_readonly(pud_t *pud)
+{
+	WARN_ON(1);
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pud)
+{
+	WARN_ON(1);
+	return false;
+}
+
 static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
 {
 	pte_val(pte) |= L_PTE_S2_RDWR;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 13d4827..8da6d1b 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -251,6 +251,16 @@ static inline bool kvm_s2pmd_exec(pmd_t *pmdp)
 	return !(READ_ONCE(pmd_val(*pmdp)) & PMD_S2_XN);
 }
 
+static inline void kvm_set_s2pud_readonly(pud_t *pudp)
+{
+	kvm_set_s2pte_readonly((pte_t *)pudp);
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pudp)
+{
+	return kvm_s2pte_readonly((pte_t *)pudp);
+}
+
 #define hyp_pte_table_empty(ptep)	kvm_page_empty(ptep)
 
 #ifdef __PAGETABLE_PMD_FOLDED
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index fb5325f..1c669c3 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1347,9 +1347,12 @@ static void stage2_wp_puds(struct kvm *kvm, pgd_t *pgd,
 	do {
 		next = stage2_pud_addr_end(kvm, addr, end);
 		if (!stage2_pud_none(kvm, *pud)) {
-			/* TODO:PUD not supported, revisit later if supported */
-			BUG_ON(stage2_pud_huge(kvm, *pud));
-			stage2_wp_pmds(kvm, pud, addr, next);
+			if (stage2_pud_huge(kvm, *pud)) {
+				if (!kvm_s2pud_readonly(pud))
+					kvm_set_s2pud_readonly(pud);
+			} else {
+				stage2_wp_pmds(kvm, pud, addr, next);
+			}
 		}
 	} while (pud++, addr = next, addr != end);
 }
@@ -1392,7 +1395,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
  *
  * Called to start logging dirty pages after memory region
  * KVM_MEM_LOG_DIRTY_PAGES operation is called. After this function returns
- * all present PMD and PTEs are write protected in the memory region.
+ * all present PUD, PMD and PTEs are write protected in the memory region.
  * Afterwards read of dirty page log can be called.
  *
  * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
-- 
2.7.4