Date: Wed, 19 May 2021 21:00:52 +0000
From: Sean Christopherson
To: "Maciej S. Szmigiero"
Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Igor Mammedov,
    Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose, Huacai Chen,
    Aleksandar Markovic, Paul Mackerras, Christian Borntraeger, Janosch Frank,
    David Hildenbrand, Cornelia Huck, Claudio Imbrenda, Joerg Roedel,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 1/8] KVM: x86: Cache total page count to avoid traversing the memslot array
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, May 16, 2021, Maciej S. Szmigiero wrote:
> From: "Maciej S. Szmigiero"
>
> There is no point in recalculating from scratch the total number of pages
> in all memslots each time a memslot is created or deleted.
>
> Just cache the value and update it accordingly on each such operation so
> the code doesn't need to traverse the whole memslot array each time.
>
> Signed-off-by: Maciej S. Szmigiero
> ---
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 5bd550eaf683..8c7738b75393 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -11112,9 +11112,21 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>  				const struct kvm_memory_slot *new,
>  				enum kvm_mr_change change)
>  {
> -	if (!kvm->arch.n_requested_mmu_pages)
> -		kvm_mmu_change_mmu_pages(kvm,
> -				kvm_mmu_calculate_default_mmu_pages(kvm));
> +	if (change == KVM_MR_CREATE)
> +		kvm->arch.n_memslots_pages += new->npages;
> +	else if (change == KVM_MR_DELETE) {
> +		WARN_ON(kvm->arch.n_memslots_pages < old->npages);

Heh, so I think this WARN can be triggered at will by userspace on 32-bit KVM
by causing the running count to wrap.  KVM artificially caps the size of a
single memslot at ((1UL << 31) - 1), but userspace could create multiple
gigantic slots to overflow arch.n_memslots_pages.

I _think_ changing it to a u64 would fix the problem since KVM forbids
overlapping memslots in the GPA space.

Also, what about moving the check-and-WARN to prepare_memory_region() so that
KVM can error out if the check fails?  Doesn't really matter, but an explicit
error for userspace is preferable to underflowing the number of pages and
getting weird MMU errors/behavior down the line.

> +		kvm->arch.n_memslots_pages -= old->npages;
> +	}
> +
> +	if (!kvm->arch.n_requested_mmu_pages) {

If we're going to bother caching the number of pages then we should also skip
the update when the number of pages isn't changing, e.g.
	if (change == KVM_MR_CREATE || change == KVM_MR_DELETE) {
		if (change == KVM_MR_CREATE)
			kvm->arch.n_memslots_pages += new->npages;
		else
			kvm->arch.n_memslots_pages -= old->npages;

		if (!kvm->arch.n_requested_mmu_pages) {
			unsigned long nr_mmu_pages;

			nr_mmu_pages = kvm->arch.n_memslots_pages *
				       KVM_PERMILLE_MMU_PAGES / 1000;
			nr_mmu_pages = max(nr_mmu_pages, KVM_MIN_ALLOC_MMU_PAGES);
			kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
		}
	}

> +		unsigned long nr_mmu_pages;
> +
> +		nr_mmu_pages = kvm->arch.n_memslots_pages *
> +			       KVM_PERMILLE_MMU_PAGES / 1000;
> +		nr_mmu_pages = max(nr_mmu_pages, KVM_MIN_ALLOC_MMU_PAGES);
> +		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
> +	}
>
>  	/*
>  	 * FIXME: const-ify all uses of struct kvm_memory_slot.