From: Punit Agrawal
To: kvmarm@lists.cs.columbia.edu
Cc: Punit Agrawal, linux-arm-kernel@lists.infradead.org, marc.zyngier@arm.com,
	christoffer.dall@arm.com, linux-kernel@vger.kernel.org,
	suzuki.poulose@arm.com, Russell King, Catalin Marinas, Will Deacon
Subject: [PATCH v4 3/7] KVM: arm64: Support dirty page tracking for PUD hugepages
Date: Thu, 5 Jul 2018 15:08:46 +0100
Message-Id: <20180705140850.5801-4-punit.agrawal@arm.com>
In-Reply-To: <20180705140850.5801-1-punit.agrawal@arm.com>
References: <20180705140850.5801-1-punit.agrawal@arm.com>
In preparation for creating PUD hugepages at stage 2, add support for
write protecting PUD hugepages when they are encountered. Write
protecting guest memory is used to track dirty pages when migrating
VMs.

Also, provide trivial implementations of the required kvm_s2pud_*()
helpers to allow sharing of code with arm32.

Signed-off-by: Punit Agrawal
Reviewed-by: Christoffer Dall
Cc: Marc Zyngier
Cc: Russell King
Cc: Catalin Marinas
Cc: Will Deacon
---
 arch/arm/include/asm/kvm_mmu.h   | 16 ++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h | 10 ++++++++++
 virt/kvm/arm/mmu.c               | 11 +++++++----
 3 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index d095c2d0b284..c23722f75d5c 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -80,6 +80,22 @@ void kvm_clear_hyp_idmap(void);
 
 #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
 
+/*
+ * The following kvm_*pud*() functions are provided strictly to allow
+ * sharing code with arm64. They should never be called in practice.
+ */
+static inline void kvm_set_s2pud_readonly(pud_t *pud)
+{
+	BUG();
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pud)
+{
+	BUG();
+	return false;
+}
+
+
 static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
 {
 	*pmd = new_pmd;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 689def9bb9d5..84051930ddfe 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -239,6 +239,16 @@ static inline bool kvm_s2pmd_exec(pmd_t *pmdp)
 	return !(READ_ONCE(pmd_val(*pmdp)) & PMD_S2_XN);
 }
 
+static inline void kvm_set_s2pud_readonly(pud_t *pudp)
+{
+	kvm_set_s2pte_readonly((pte_t *)pudp);
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pudp)
+{
+	return kvm_s2pte_readonly((pte_t *)pudp);
+}
+
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 040cd0bce5e1..db04b18218c1 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1288,9 +1288,12 @@ static void stage2_wp_puds(pgd_t *pgd, phys_addr_t addr, phys_addr_t end)
 	do {
 		next = stage2_pud_addr_end(addr, end);
 		if (!stage2_pud_none(*pud)) {
-			/* TODO:PUD not supported, revisit later if supported */
-			BUG_ON(stage2_pud_huge(*pud));
-			stage2_wp_pmds(pud, addr, next);
+			if (stage2_pud_huge(*pud)) {
+				if (!kvm_s2pud_readonly(pud))
+					kvm_set_s2pud_readonly(pud);
+			} else {
+				stage2_wp_pmds(pud, addr, next);
+			}
 		}
 	} while (pud++, addr = next, addr != end);
 }
@@ -1333,7 +1336,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
  *
  * Called to start logging dirty pages after memory region
  * KVM_MEM_LOG_DIRTY_PAGES operation is called. After this function returns
- * all present PMD and PTEs are write protected in the memory region.
+ * all present PUD, PMD and PTEs are write protected in the memory region.
  * Afterwards read of dirty page log can be called.
  *
  * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
-- 
2.17.1
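
For readers wanting context on why the arm64 helpers can simply cast the
pud_t pointer to pte_t: in the stage 2 descriptor format, the S2AP
access-permission bits sit at the same positions in block and page
descriptors at every level, so the existing pte-level accessor operates
correctly on a PUD entry. As a rough sketch only (not part of this patch,
and modulo the exact macro names; the in-tree definitions are
authoritative), the reused arm64 pte-level helpers behave like:

	/*
	 * Sketch of the pte-level accessors reused by the new PUD helpers.
	 * Atomically downgrade a stage 2 entry to read-only so that a
	 * subsequent guest write faults and can be recorded as a dirty page.
	 */
	static inline void kvm_set_s2pte_readonly(pte_t *ptep)
	{
		pteval_t old_pteval, pteval;

		pteval = READ_ONCE(pte_val(*ptep));
		do {
			old_pteval = pteval;
			pteval &= ~PTE_S2_RDWR;		/* drop write permission */
			pteval |= PTE_S2_RDONLY;	/* keep read permission */
			pteval = cmpxchg_relaxed(&pte_val(*ptep),
						 old_pteval, pteval);
		} while (pteval != old_pteval);
	}

	static inline bool kvm_s2pte_readonly(pte_t *ptep)
	{
		return (READ_ONCE(pte_val(*ptep)) & PTE_S2_RDWR) == PTE_S2_RDONLY;
	}

With stage2_wp_puds() above routed through these, a guest write to a
write-protected PUD hugepage takes a permission fault, which the dirty
logging path uses to mark the affected pages dirty before restoring
write access.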