From: Punit Agrawal
To: Christoffer Dall
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, suzuki.poulose@arm.com, Marc Zyngier
Subject: Re: [RFC 2/4] KVM: arm64: Support dirty page tracking for PUD hugepages
References: <20180110190729.18383-1-punit.agrawal@arm.com>
	<20180110190729.18383-3-punit.agrawal@arm.com>
	<20180206145508.GC23160@cbox>
Date: Tue, 06 Feb 2018 18:13:50 +0000
In-Reply-To: <20180206145508.GC23160@cbox> (Christoffer Dall's message of
	"Tue, 6 Feb 2018 15:55:08 +0100")
Message-ID: <87a7wmhtpd.fsf@e105922-lin.cambridge.arm.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux)

Christoffer Dall writes:

> On Wed, Jan 10, 2018 at 07:07:27PM +0000, Punit Agrawal wrote:
>> In preparation for creating PUD hugepages at stage 2, add support for
>> write protecting PUD hugepages when they are encountered. Write
>> protecting guest tables is used to track dirty pages when migrating
>> VMs.
>>
>> Also, provide trivial implementations of required kvm_s2pud_* helpers
>> to allow code to compile on arm32.
>>
>> Signed-off-by: Punit Agrawal
>> Cc: Christoffer Dall
>> Cc: Marc Zyngier
>> ---
>>  arch/arm/include/asm/kvm_mmu.h   |  9 +++++++++
>>  arch/arm64/include/asm/kvm_mmu.h | 10 ++++++++++
>>  virt/kvm/arm/mmu.c               |  9 ++++++---
>>  3 files changed, 25 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
>> index fa6f2174276b..3fbe919b9181 100644
>> --- a/arch/arm/include/asm/kvm_mmu.h
>> +++ b/arch/arm/include/asm/kvm_mmu.h
>> @@ -103,6 +103,15 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
>>  	return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
>>  }
>>
>> +static inline void kvm_set_s2pud_readonly(pud_t *pud)
>> +{
>> +}
>> +
>> +static inline bool kvm_s2pud_readonly(pud_t *pud)
>> +{
>> +	return true;
>
> why true? Shouldn't this return the pgd's readonly value, strictly
> speaking, or if we rely on this never being called, have VM_BUG_ON() ?

It returns true as it prevents a call to kvm_set_s2pud_readonly(), but
both of the above functions should never be called on ARM due to
stage2_pud_huge() returning 0.

I'll add a VM_BUG_ON(pud) to indicate that these functions should never
be called and...

> In any case, a comment explaining why we unconditionally return true
> would be nice.

... add a comment to explain what's going on.

>
>> +}
>> +
>>  static inline bool kvm_page_empty(void *ptr)
>>  {
>>  	struct page *ptr_page = virt_to_page(ptr);
>> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
>> index 672c8684d5c2..dbfd18e08cfb 100644
>> --- a/arch/arm64/include/asm/kvm_mmu.h
>> +++ b/arch/arm64/include/asm/kvm_mmu.h
>> @@ -201,6 +201,16 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
>>  	return kvm_s2pte_readonly((pte_t *)pmd);
>>  }
>>
>> +static inline void kvm_set_s2pud_readonly(pud_t *pud)
>> +{
>> +	kvm_set_s2pte_readonly((pte_t *)pud);
>> +}
>> +
>> +static inline bool kvm_s2pud_readonly(pud_t *pud)
>> +{
>> +	return kvm_s2pte_readonly((pte_t *)pud);
>> +}
>> +
>>  static inline bool kvm_page_empty(void *ptr)
>>  {
>>  	struct page *ptr_page = virt_to_page(ptr);
>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index 9dea96380339..02eefda5d71e 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c
>> @@ -1155,9 +1155,12 @@ static void stage2_wp_puds(pgd_t *pgd, phys_addr_t addr, phys_addr_t end)
>>  	do {
>>  		next = stage2_pud_addr_end(addr, end);
>>  		if (!stage2_pud_none(*pud)) {
>> -			/* TODO:PUD not supported, revisit later if supported */
>> -			BUG_ON(stage2_pud_huge(*pud));
>> -			stage2_wp_pmds(pud, addr, next);
>> +			if (stage2_pud_huge(*pud)) {
>> +				if (!kvm_s2pud_readonly(pud))
>> +					kvm_set_s2pud_readonly(pud);
>> +			} else {
>> +				stage2_wp_pmds(pud, addr, next);
>> +			}
>>  		}
>>  	} while (pud++, addr = next, addr != end);
>>  }
>> --
>> 2.15.1
>>
>
> Otherwise:
>
> Reviewed-by: Christoffer Dall

Thanks for taking a look.

Punit
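
For illustration, here is a minimal sketch of the direction described
above for the arm32 stubs in arch/arm/include/asm/kvm_mmu.h, i.e. adding
the VM_BUG_ON(pud) and an explanatory comment. This is a hypothetical
sketch, not the actual follow-up patch; the final comment wording and
assertion may differ.

static inline void kvm_set_s2pud_readonly(pud_t *pud)
{
	/*
	 * Huge PUDs are never created at stage 2 on arm32
	 * (stage2_pud_huge() always returns 0), so this should be
	 * unreachable.
	 */
	VM_BUG_ON(pud);
}

static inline bool kvm_s2pud_readonly(pud_t *pud)
{
	/*
	 * Unreachable for the same reason; returning true also keeps
	 * stage2_wp_puds() from calling kvm_set_s2pud_readonly().
	 */
	VM_BUG_ON(pud);
	return true;
}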