From: David Matlack
Date: Thu, 9 Mar 2023 16:06:59 -0800
Subject: Re: [Patch v4 05/18] KVM: x86/mmu: Add split_shadow_page_cache pages to global count of MMU cache pages
To: Vipin Sharma
Cc: Zhi Wang, seanjc@google.com, pbonzini@redhat.com, bgardon@google.com, jmattson@google.com, mizhang@google.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20230306224127.1689967-1-vipinsh@google.com> <20230306224127.1689967-6-vipinsh@google.com> <20230309175849.0000565e@gmail.com>

On Thu, Mar 9, 2023 at 4:05 PM David Matlack wrote:
>
> On Thu, Mar 09, 2023 at 11:59:00AM -0800, Vipin Sharma wrote:
> > On Thu, Mar 9, 2023 at 7:58 AM Zhi Wang wrote:
> > >
> > > On Mon, 6 Mar 2023 14:41:14 -0800
> > > Vipin Sharma wrote:
> > >
> > > > Add pages in split_shadow_page_cache to the global counter
> > > > kvm_total_unused_cached_pages. These pages will be freed by the MMU
> > > > shrinker in a future commit.
> > > >
> > > > Signed-off-by: Vipin Sharma
> > > > ---
> > > >  arch/x86/kvm/mmu/mmu.c | 7 +++++--
> > > >  1 file changed, 5 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > > index df8dcb7e5de7..0ebb8a2eaf47 100644
> > > > --- a/arch/x86/kvm/mmu/mmu.c
> > > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > > @@ -6149,7 +6149,9 @@ static void mmu_free_vm_memory_caches(struct kvm *kvm)
> > > >  {
> > > >         kvm_mmu_free_memory_cache(&kvm->arch.split_desc_cache);
> > > >         kvm_mmu_free_memory_cache(&kvm->arch.split_page_header_cache);
> > > > -       kvm_mmu_free_memory_cache(&kvm->arch.split_shadow_page_cache);
> > > > +       mutex_lock(&kvm->slots_lock);
> > > > +       mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache);
> > > > +       mutex_unlock(&kvm->slots_lock);
> > >
> > > Taking the calling path's lock inside the cache topup/free layer
> > > seems off.
> > >
> > > My vote goes to having a lock for each cache and taking that cache's
> > > lock when topping up or freeing it. It is more self-contained and
> > > architecturally nicer.
> > >
> >
> > Yeah, this can be one way. However, in future patches when I am adding
> > per-NUMA-node caches, that will add a lot of locks on the same code
> > path before a topup. In the split huge page case we know which NUMA
> > node we need to allocate from, so we can fine-tune which lock to take,
> > but in the fault path we don't know which NUMA node the page will come
> > from, so we need to top up all of the NUMA caches. Having a single lock
> > simplifies the code a little bit.
> >
> > I agree with you on being more self-contained. I will wait for others
> > to also chime in on this and go from there.
>
> As a general rule, please only add locking when it's needed. Adding
> the lock in this commit is just confusing.
>
> But that aside, I don't think acquiring the slots lock is even needed in
> this commit.

Correction: even needed in the *next* commit

> mmu_free_vm_memory_caches() is never called while the VM is on vm_list,
> i.e. this can never race with the shrinker.
>
> If you want to be paranoid you can add a WARN to ensure that stays true
> going forward:
>
> /* ... comment ... */
> WARN_ON_ONCE(!list_empty(&kvm->vm_list));
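
For illustration only, here is a rough sketch of how that WARN could sit in
mmu_free_vm_memory_caches() once the slots_lock is dropped. It assumes the
function shape and the mmu_free_sp_memory_cache() helper from the diff quoted
above, and it is not meant as the final code:

static void mmu_free_vm_memory_caches(struct kvm *kvm)
{
        /*
         * By the time the per-VM caches are freed, the VM is expected to
         * have been taken off vm_list, so the MMU shrinker can no longer
         * reach these caches and no locking is needed.  Warn if that
         * invariant is ever broken.
         */
        WARN_ON_ONCE(!list_empty(&kvm->vm_list));

        kvm_mmu_free_memory_cache(&kvm->arch.split_desc_cache);
        kvm_mmu_free_memory_cache(&kvm->arch.split_page_header_cache);
        mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache);
}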