From: Vipin Sharma
Date: Tue, 3 Jan 2023 17:00:04 -0800
Subject: Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
To: Mingwei Zhang
Cc: seanjc@google.com, pbonzini@redhat.com, bgardon@google.com, dmatlack@google.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
On Tue, Jan 3, 2023 at 11:32 AM Mingwei Zhang wrote:
>
> On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma wrote:
> >
> > +static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> > +                                     spinlock_t *cache_lock)
> > +{
> > +       int orig_nobjs;
> > +
> > +       spin_lock(cache_lock);
> > +       orig_nobjs = cache->nobjs;
> > +       kvm_mmu_free_memory_cache(cache);
> > +       if (orig_nobjs)
> > +               percpu_counter_sub(&kvm_total_unused_mmu_pages, orig_nobjs);
> > +
> > +       spin_unlock(cache_lock);
> > +}
>
> I think the mmu_cache allocation and deallocation may force the use
> of GFP_ATOMIC (as observed by other reviewers as well). Adding a new
> lock definitely sounds like a plan, but I think it might affect
> performance. Alternatively, I am wondering if we could use a
> mmu_cache_sequence similar to mmu_notifier_seq to help avoid the
> concurrency?
>

Can you explain more about the performance impact? Each vcpu will have
its own mutex, so the only contention will be with the mmu_shrinker.
The shrinker will use mutex_trylock(), which does not block waiting
for the lock; it just moves on to the next vcpu. While the shrinker
holds the lock, the vcpu will be blocked in the page fault path, but I
don't think that should have a big impact, since the shrinker runs
rarely and holds the lock only for a short time.

> Similar to mmu_notifier_seq, mmu_cache_sequence should be protected by
> the mmu write lock. In the page fault path, each vcpu has to take a
> snapshot of mmu_cache_sequence before calling into
> mmu_topup_memory_caches() and check the value again while holding the
> mmu lock. If the value differs, the mmu_shrinker has removed the
> cache objects in the meantime, and the vcpu should retry.
>

Yeah, this can be one approach. I think it will come down to the
performance impact of using a mutex, which I don't think should be a
concern. To make the comparison concrete, here are rough sketches of
both options.
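
For the mutex option, the shrinker scan I have in mind looks roughly
like the sketch below. This is untested pseudo-kernel code, not the
actual patch: mmu_cache_lock is a placeholder name for the per-vcpu
mutex, and a real version would also have to handle the other per-vcpu
caches, not just mmu_shadow_page_cache.

static unsigned long mmu_shrink_scan(struct shrinker *shrink,
				     struct shrink_control *sc)
{
	struct kvm *kvm;
	struct kvm_vcpu *vcpu;
	unsigned long freed = 0;
	unsigned long i;

	mutex_lock(&kvm_lock);
	list_for_each_entry(kvm, &vm_list, vm_list) {
		kvm_for_each_vcpu(i, vcpu, kvm) {
			/*
			 * Never sleep on a vcpu's cache lock: if the
			 * vcpu is busy (e.g. in the fault path), skip
			 * it and try the next one.
			 */
			if (!mutex_trylock(&vcpu->arch.mmu_cache_lock))
				continue;

			freed += vcpu->arch.mmu_shadow_page_cache.nobjs;
			kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
			mutex_unlock(&vcpu->arch.mmu_cache_lock);
		}
		if (freed >= sc->nr_to_scan)
			break;
	}
	mutex_unlock(&kvm_lock);

	return freed;
}

So a vcpu only ever waits on its own mutex when the shrinker happens
to be emptying that vcpu's cache at that moment, which should be rare.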
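And for comparison, my understanding of the mmu_cache_sequence scheme
you describe, sketched into a heavily simplified fault path. Again
untested: mmu_cache_seq is a hypothetical field (the shrinker would
bump it under the mmu write lock after emptying caches), and the
actual fault handling is elided.

static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
	struct kvm *kvm = vcpu->kvm;
	unsigned long cache_seq;
	int r;

	/* Snapshot the sequence before topping up, outside the mmu lock. */
	cache_seq = READ_ONCE(kvm->arch.mmu_cache_seq);

	r = mmu_topup_memory_caches(vcpu, false);
	if (r)
		return r;

	write_lock(&kvm->mmu_lock);
	/*
	 * If the shrinker bumped mmu_cache_seq while we were not
	 * holding the mmu lock, our topped-up objects may be gone,
	 * so bail out and retry the fault.
	 */
	if (cache_seq != kvm->arch.mmu_cache_seq) {
		write_unlock(&kvm->mmu_lock);
		return RET_PF_RETRY;
	}

	/* ... normal fault handling under the mmu lock ... */
	write_unlock(&kvm->mmu_lock);
	return r;
}

This avoids a new lock in the fault path, but pays for it with retried
faults whenever the shrinker runs, so I am not sure it comes out ahead
of an almost-always-uncontended per-vcpu mutex.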