Date: Fri, 4 Feb 2022 18:41:59 +0000
From: David Matlack
To: Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, seanjc@google.com,
	vkuznets@redhat.com
Subject: Re: [PATCH 04/23] KVM: MMU: constify uses of struct kvm_mmu_role_regs
Message-ID: 
References: <20220204115718.14934-1-pbonzini@redhat.com>
 <20220204115718.14934-5-pbonzini@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220204115718.14934-5-pbonzini@redhat.com>
Precedence: bulk
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Feb 04, 2022 at 06:56:59AM -0500, Paolo Bonzini wrote:
> struct kvm_mmu_role_regs is computed just once and then accessed. Use
> const to enforce this.
> 
> Signed-off-by: Paolo Bonzini

Reviewed-by: David Matlack

> ---
>  arch/x86/kvm/mmu/mmu.c | 19 +++++++++++--------
>  1 file changed, 11 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 0039b2f21286..3add9d8b0630 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -208,7 +208,7 @@ struct kvm_mmu_role_regs {
>   * the single source of truth for the MMU's state.
>   */
>  #define BUILD_MMU_ROLE_REGS_ACCESSOR(reg, name, flag)			\
> -static inline bool __maybe_unused ____is_##reg##_##name(struct kvm_mmu_role_regs *regs)\
> +static inline bool __maybe_unused ____is_##reg##_##name(const struct kvm_mmu_role_regs *regs)\
>  {									\
>  	return !!(regs->reg & flag);					\
>  }
> @@ -255,7 +255,7 @@ static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
>  	return regs;
>  }
>  
> -static int role_regs_to_root_level(struct kvm_mmu_role_regs *regs)
> +static int role_regs_to_root_level(const struct kvm_mmu_role_regs *regs)
>  {
>  	if (!____is_cr0_pg(regs))
>  		return 0;
> @@ -4666,7 +4666,7 @@ static void paging32_init_context(struct kvm_mmu *context)
>  }
>  
>  static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu,
> -							  struct kvm_mmu_role_regs *regs)
> +							  const struct kvm_mmu_role_regs *regs)
>  {
>  	union kvm_mmu_extended_role ext = {0};
>  
> @@ -4687,7 +4687,7 @@ static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu,
>  }
>  
>  static union kvm_mmu_role kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu,
> -						   struct kvm_mmu_role_regs *regs,
> +						   const struct kvm_mmu_role_regs *regs,
>  						   bool base_only)
>  {
>  	union kvm_mmu_role role = {0};
>  
> @@ -4723,7 +4723,8 @@ static inline int kvm_mmu_get_tdp_level(struct kvm_vcpu *vcpu)
>  
>  static union kvm_mmu_role
>  kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
> -				struct kvm_mmu_role_regs *regs, bool base_only)
> +				const struct kvm_mmu_role_regs *regs,
> +				bool base_only)
>  {
>  	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, regs, base_only);
>  
> @@ -4769,7 +4770,8 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
>  
>  static union kvm_mmu_role
>  kvm_calc_shadow_root_page_role_common(struct kvm_vcpu *vcpu,
> -				      struct kvm_mmu_role_regs *regs, bool base_only)
> +				      const struct kvm_mmu_role_regs *regs,
> +				      bool base_only)
>  {
>  	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, regs, base_only);
>  
> @@ -4782,7 +4784,8 @@ kvm_calc_shadow_root_page_role_common(struct kvm_vcpu *vcpu,
>  
>  static union kvm_mmu_role
>  kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu,
> -				   struct kvm_mmu_role_regs *regs, bool base_only)
> +				   const struct kvm_mmu_role_regs *regs,
> +				   bool base_only)
>  {
>  	union kvm_mmu_role role =
>  		kvm_calc_shadow_root_page_role_common(vcpu, regs, base_only);
>  
> @@ -4940,7 +4943,7 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
>  }
>  
>  static union kvm_mmu_role
> -kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu, struct kvm_mmu_role_regs *regs)
> +kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs)
> {
>  	union kvm_mmu_role role;
>  
> -- 
> 2.31.1
> 
> 
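As a concrete illustration of what the constification buys, here is a minimal
standalone sketch; it is not code from mmu.c, and struct role_regs, is_cr0_pg()
and CR0_PG_BIT are made-up stand-ins for the kernel names. Marking the regs
parameter pointer-to-const means the compiler rejects any write through it,
which encodes the "computed just once and then accessed" contract from the
commit message:

  #include <stdbool.h>
  #include <stdio.h>

  /*
   * Stand-in for struct kvm_mmu_role_regs: a register snapshot that is
   * filled in once and then only read.
   */
  struct role_regs {
          unsigned long cr0;
          unsigned long cr4;
  };

  #define CR0_PG_BIT 0x80000000UL

  /*
   * Pointer-to-const parameter: an accidental write such as
   * "regs->cr0 = 0;" inside this accessor fails to compile.
   */
  static bool is_cr0_pg(const struct role_regs *regs)
  {
          return !!(regs->cr0 & CR0_PG_BIT);
  }

  int main(void)
  {
          struct role_regs regs = { .cr0 = CR0_PG_BIT | 0x1UL, .cr4 = 0 };

          printf("paging enabled: %d\n", is_cr0_pg(&regs));
          return 0;
  }

With the const qualifier in place, any later patch that tried to modify the
snapshot through one of these accessors would be caught at build time rather
than at review time.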