From: Keqian Zhu <zhukeqian1@huawei.com>
To: , , ,
Cc: Catalin Marinas, Marc Zyngier, James Morse, Will Deacon,
    Suzuki K Poulose, Sean Christopherson, Julien Thierry, Mark Brown,
    Thomas Gleixner, Andrew Morton, Alexios Zavras, , , , Keqian Zhu
Subject: [PATCH 04/12] KVM: arm64: Support clear DBM bit for PTEs
Date: Tue, 16 Jun 2020 17:35:45 +0800
Message-ID: <20200616093553.27512-5-zhukeqian1@huawei.com>
In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com>
References: <20200616093553.27512-1-zhukeqian1@huawei.com>
This adds support for clearing the DBM bit of stage 2 page table
entries, which is needed to dynamically enable hardware dirty bit
management (DBM).

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
 arch/arm64/include/asm/kvm_host.h |   2 +
 arch/arm64/kvm/mmu.c              | 151 ++++++++++++++++++++++++++++++
 2 files changed, 153 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c3e6fcc664b1..9ea2dcfd609c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -480,6 +480,8 @@ u64 __kvm_call_hyp(void *hypfn, ...);
 void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
+void kvm_mmu_clear_dbm(struct kvm *kvm, struct kvm_memory_slot *memslot);
+void kvm_mmu_clear_dbm_all(struct kvm *kvm);
 
 int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		int exception_index);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 27407153121b..f08b0fbca0a0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -2446,6 +2446,157 @@ int kvm_mmu_init(void)
 	return err;
 }
 
+#ifdef CONFIG_ARM64_HW_AFDBM
+/**
+ * stage2_clear_dbm_ptes() - clear DBM bit from PMD range
+ * @pmd: pointer to pmd entry
+ * @addr: range start address
+ * @end: range end address
+ */
+static void stage2_clear_dbm_ptes(pmd_t *pmd, phys_addr_t addr,
+				  phys_addr_t end)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+	do {
+		if (!pte_none(*pte) && kvm_s2pte_dbm(pte))
+			kvm_clear_s2pte_dbm(pte);
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+}
+
+/**
+ * stage2_clear_dbm_pmds() - clear DBM bit from PUD range
+ * @kvm: The KVM pointer
+ * @pud: pointer to pud entry
+ * @addr: range start address
+ * @end: range end address
+ */
+static void stage2_clear_dbm_pmds(struct kvm *kvm, pud_t *pud,
+				  phys_addr_t addr, phys_addr_t end)
+{
+	pmd_t *pmd;
+	phys_addr_t next;
+
+	pmd = stage2_pmd_offset(kvm, pud, addr);
+	do {
+		next = stage2_pmd_addr_end(kvm, addr, end);
+		if (!pmd_none(*pmd) && !pmd_thp_or_huge(*pmd))
+			stage2_clear_dbm_ptes(pmd, addr, next);
+	} while (pmd++, addr = next, addr != end);
+}
+
+/**
+ * stage2_clear_dbm_puds() - clear DBM bit from P4D range
+ * @kvm: The KVM pointer
+ * @p4d: pointer to p4d entry
+ * @addr: range start address
+ * @end: range end address
+ */
+static void stage2_clear_dbm_puds(struct kvm *kvm, p4d_t *p4d,
+				  phys_addr_t addr, phys_addr_t end)
+{
+	pud_t *pud;
+	phys_addr_t next;
+
+	pud = stage2_pud_offset(kvm, p4d, addr);
+	do {
+		next = stage2_pud_addr_end(kvm, addr, end);
+		if (!stage2_pud_none(kvm, *pud) && !stage2_pud_huge(kvm, *pud))
+			stage2_clear_dbm_pmds(kvm, pud, addr, next);
+	} while (pud++, addr = next, addr != end);
+}
+
+/**
+ * stage2_clear_dbm_p4ds() - clear DBM bit from PGD range
+ * @kvm: The KVM pointer
+ * @pgd: pointer to pgd entry
+ * @addr: range start address
+ * @end: range end address
+ */
+static void stage2_clear_dbm_p4ds(struct kvm *kvm, pgd_t *pgd,
+				  phys_addr_t addr, phys_addr_t end)
+{
+	p4d_t *p4d;
+	phys_addr_t next;
+
+	p4d = stage2_p4d_offset(kvm, pgd, addr);
+	do {
+		next = stage2_p4d_addr_end(kvm, addr, end);
+		if (!stage2_p4d_none(kvm, *p4d))
+			stage2_clear_dbm_puds(kvm, p4d, addr, next);
+	} while (p4d++, addr = next, addr != end);
+}
+
+/**
+ * stage2_clear_dbm_range() - clear DBM bit from stage2 memory
+ * region range
+ * @kvm: The KVM pointer
+ * @addr: Start address of range
+ * @end: End address of range
+ */
+static void stage2_clear_dbm_range(struct kvm *kvm, phys_addr_t addr,
+				   phys_addr_t end)
+{
+	pgd_t *pgd;
+	phys_addr_t next;
+
+	pgd = kvm->arch.pgd + stage2_pgd_index(kvm, addr);
+	do {
+		cond_resched_lock(&kvm->mmu_lock);
+		if (!READ_ONCE(kvm->arch.pgd))
+			break;
+		next = stage2_pgd_addr_end(kvm, addr, end);
+		if (stage2_pgd_present(kvm, *pgd))
+			stage2_clear_dbm_p4ds(kvm, pgd, addr, next);
+	} while (pgd++, addr = next, addr != end);
+}
+
+/**
+ * kvm_mmu_clear_dbm() - clear DBM bit from stage2 PTEs for memory slot
+ * @kvm: The KVM pointer
+ * @memslot: The memory slot to clear DBM bit for
+ *
+ * After this function returns, the DBM bit of all block or page
+ * descriptors in the slot is cleared.
+ *
+ * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
+ * serializing operations for VM memory regions.
+ */
+void kvm_mmu_clear_dbm(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+	phys_addr_t start = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+
+	spin_lock(&kvm->mmu_lock);
+	stage2_clear_dbm_range(kvm, start, end);
+	spin_unlock(&kvm->mmu_lock);
+	kvm_flush_remote_tlbs(kvm);
+}
+
+/**
+ * kvm_mmu_clear_dbm_all() - clear DBM bit from stage2 PTEs for whole VM
+ * @kvm: The KVM pointer
+ *
+ * Called with kvm->slots_lock mutex acquired.
+ */
+void kvm_mmu_clear_dbm_all(struct kvm *kvm)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+	struct kvm_memory_slot *memslots = slots->memslots;
+	struct kvm_memory_slot *memslot;
+	int slot;
+
+	if (unlikely(!slots->used_slots))
+		return;
+
+	for (slot = 0; slot < slots->used_slots; slot++) {
+		memslot = &memslots[slot];
+		kvm_mmu_clear_dbm(kvm, memslot);
+	}
+}
+#endif /* CONFIG_ARM64_HW_AFDBM */
+
 void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_userspace_memory_region *mem,
 				   struct kvm_memory_slot *old,
--
2.19.1
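
Note: kvm_s2pte_dbm() and kvm_clear_s2pte_dbm() are not defined in this
patch; they come from an earlier patch in the series. As a minimal,
illustrative sketch only, assuming the helpers simply test and clear the
architectural Dirty Bit Modifier (PTE_DBM, bit [51] of the descriptor,
see arch/arm64/include/asm/pgtable-hwdef.h), they might look like this;
the series' actual definitions may differ:

	static inline bool kvm_s2pte_dbm(pte_t *ptep)
	{
		/* Test the DBM bit of the stage 2 descriptor */
		return !!(pte_val(*ptep) & PTE_DBM);
	}

	static inline void kvm_clear_s2pte_dbm(pte_t *ptep)
	{
		pte_t pte = READ_ONCE(*ptep);

		/*
		 * Clear DBM so that hardware stops updating the write
		 * permission bit behind our back; subsequent guest
		 * writes to clean (read-only) pages fault again.
		 */
		pte_val(pte) &= ~PTE_DBM;
		WRITE_ONCE(*ptep, pte);
	}

With DBM set, a guest write makes the hardware set the write permission
bit (S2AP[1]) of the descriptor instead of taking a permission fault,
which is why kvm_mmu_clear_dbm() ends with kvm_flush_remote_tlbs():
cached copies of descriptors that still have DBM set must be
invalidated before the clearing can be relied upon.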