Date: Sun, 09 Jul 2023 17:37:18 +0800
From: xuanzhenggang001@208suo.com
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com
Cc: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH] KVM: x86/mmu: prefer 'unsigned int' to bare use of 'unsigned'
In-Reply-To: <20230709093359.30916-1-denghuilong@cdjrlc.com>
References: <20230709093359.30916-1-denghuilong@cdjrlc.com>
User-Agent: Roundcube Webmail

Fix the following warnings reported by checkpatch:

arch/x86/kvm/mmu/mmu.c:161: WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
arch/x86/kvm/mmu/mmu.c:320: WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
arch/x86/kvm/mmu/mmu.c:1761: WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
arch/x86/kvm/mmu/mmu.c:2051: WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
arch/x86/kvm/mmu/mmu.c:2483: WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
arch/x86/kvm/mmu/mmu.c:2540: WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
arch/x86/kvm/mmu/mmu.c:3707: WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
arch/x86/kvm/mmu/mmu.c:4986: WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
arch/x86/kvm/mmu/mmu.c:4998: WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
arch/x86/kvm/mmu/mmu.c:5082: WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
arch/x86/kvm/mmu/mmu.c:5093: WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
arch/x86/kvm/mmu/mmu.c:5632: WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
arch/x86/kvm/mmu/mmu.c:5655: WARNING: Prefer 'unsigned int' to bare use of 'unsigned'

Signed-off-by: Zhenggang Xuan
---
 arch/x86/kvm/mmu/mmu.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 262b84763f35..b58da7845783 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -158,7 +158,7 @@ struct kvm_shadow_walk_iterator {
 	hpa_t shadow_addr;
 	u64 *sptep;
 	int level;
-	unsigned index;
+	unsigned int index;
 };
 
 #define for_each_shadow_entry_using_root(_vcpu, _root, _addr, _walker)	\
@@ -317,7 +317,7 @@ static gfn_t get_mmio_spte_gfn(u64 spte)
 	return gpa >> PAGE_SHIFT;
 }
 
-static unsigned get_mmio_spte_access(u64 spte)
+static unsigned int get_mmio_spte_access(u64 spte)
 {
 	return spte & shadow_mmio_access_mask;
 }
@@ -1758,7 +1758,7 @@ static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
 	kmem_cache_free(mmu_page_header_cache, sp);
 }
 
-static unsigned kvm_page_table_hashfn(gfn_t gfn)
+static unsigned int kvm_page_table_hashfn(gfn_t gfn)
 {
 	return hash_64(gfn, KVM_MMU_HASH_SHIFT);
 }
@@ -2048,7 +2048,7 @@ static int mmu_pages_next(struct kvm_mmu_pages *pvec,
 
 	for (n = i+1; n < pvec->nr; n++) {
 		struct kvm_mmu_page *sp = pvec->page[n].sp;
-		unsigned idx = pvec->page[n].idx;
+		unsigned int idx = pvec->page[n].idx;
 		int level = sp->role.level;
 
 		parents->idx[level-1] = idx;
@@ -2480,7 +2480,7 @@ static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
 }
 
 static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-				 unsigned direct_access)
+				 unsigned int direct_access)
 {
 	if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep)) {
 		struct kvm_mmu_page *child;
@@ -2537,7 +2537,7 @@ static int kvm_mmu_page_unlink_children(struct kvm *kvm,
 					struct list_head *invalid_list)
 {
 	int zapped = 0;
-	unsigned i;
+	unsigned int i;
 
 	for (i = 0; i < SPTE_ENT_PER_PAGE; ++i)
 		zapped += mmu_page_zap_pte(kvm, sp, sp->spt + i, invalid_list);
@@ -3704,7 +3704,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	u8 shadow_root_level = mmu->root_role.level;
 	hpa_t root;
-	unsigned i;
+	unsigned int i;
 	int r;
 
 	write_lock(&vcpu->kvm->mmu_lock);
@@ -4983,7 +4983,7 @@ reset_ept_shadow_zero_bits_mask(struct kvm_mmu *context, bool execonly)
 
 static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
 {
-	unsigned byte;
+	unsigned int byte;
 
 	const u8 x = BYTE_MASK(ACC_EXEC_MASK);
 	const u8 w = BYTE_MASK(ACC_WRITE_MASK);
@@ -4995,7 +4995,7 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
 	bool efer_nx = is_efer_nx(mmu);
 
 	for (byte = 0; byte < ARRAY_SIZE(mmu->permissions); ++byte) {
-		unsigned pfec = byte << 1;
+		unsigned int pfec = byte << 1;
 
 		/*
 		 * Each "*f" variable has a 1 bit for each UWX value
@@ -5079,7 +5079,7 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
  */
 static void update_pkru_bitmask(struct kvm_mmu *mmu)
 {
-	unsigned bit;
+	unsigned int bit;
 	bool wp;
 
 	mmu->pkru_mask = 0;
@@ -5090,7 +5090,7 @@ static void update_pkru_bitmask(struct kvm_mmu *mmu)
 	wp = is_cr0_wp(mmu);
 
 	for (bit = 0; bit < ARRAY_SIZE(mmu->permissions); ++bit) {
-		unsigned pfec, pkey_bits;
+		unsigned int pfec, pkey_bits;
 		bool check_pkey, check_write, ff, uf, wf, pte_user;
 
 		pfec = bit << 1;
@@ -5629,7 +5629,7 @@ static bool detect_write_flooding(struct kvm_mmu_page *sp)
 static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
 				    int bytes)
 {
-	unsigned offset, pte_size, misaligned;
+	unsigned int offset, pte_size, misaligned;
 
 	pgprintk("misaligned: gpa %llx bytes %d role %x\n",
 		 gpa, bytes, sp->role.word);
@@ -5652,7 +5652,7 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
 
 static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte)
 {
-	unsigned page_offset, quadrant;
+	unsigned int page_offset, quadrant;
 	u64 *spte;
 	int level;
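
Note (not part of the patch itself): this is purely a coding-style cleanup. In C, a bare "unsigned" names exactly the same type as "unsigned int", so none of the hunks above change generated code; the warnings should be reproducible with checkpatch (scripts/checkpatch.pl -f arch/x86/kvm/mmu/mmu.c). Below is a minimal stand-alone sketch of the type equivalence; it uses only hypothetical names and is not taken from mmu.c.

/*
 * Stand-alone illustration (hypothetical example, not kernel code):
 * a bare "unsigned" declares exactly the same type as "unsigned int",
 * so swapping one spelling for the other cannot change behaviour.
 */
#include <stdio.h>

int main(void)
{
	unsigned     a = 5;	/* bare "unsigned": the spelling checkpatch warns about */
	unsigned int b = 5;	/* the spelling checkpatch prefers */

	/* Compiles only because "unsigned" really is "unsigned int". */
	_Static_assert(_Generic((unsigned)0, unsigned int: 1), "same type");

	printf("%u %u\n", a + b, b);
	return 0;
}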