From: Vipin Sharma
Date: Tue, 3 Jan 2023 09:38:58 -0800
Subject: Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
To: David Matlack
Cc: Ben Gardon, seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20221222023457.1764-1-vipinsh@google.com> <20221222023457.1764-2-vipinsh@google.com>

On Thu, Dec 29, 2022 at 1:15 PM David Matlack wrote:
>
> On Wed, Dec 28, 2022 at 02:07:49PM -0800, Vipin Sharma wrote:
> > On Tue, Dec 27, 2022 at 10:37 AM Ben Gardon wrote:
> > > On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma wrote:
> > > >
> > > > Tested this change by running dirty_log_perf_test while dropping cache
> > > > via "echo 2 > /proc/sys/vm/drop_caches" at 1 second intervals
> > > > continuously. There were WARN_ON(!mc->nobjs) messages printed in kernel
> > > > logs from kvm_mmu_memory_cache_alloc(), which is expected.
> > >
> > > Oh, that's not a good thing. I don't think we want to be hitting those
> > > warnings. For one, kernel warnings should not be expected behavior,
> > > probably for many reasons, but at least because Syzbot will find it.
> > > In this particular case, we don't want to hit that because in that
> > > case we'll try to do a GFP_ATOMIC allocation, which can fail, and if
> > > it fails, we'll BUG:
> > >
> > > void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
> > > {
> > >         void *p;
> > >
> > >         if (WARN_ON(!mc->nobjs))
> > >                 p = mmu_memory_cache_alloc_obj(mc, GFP_ATOMIC | __GFP_ACCOUNT);
> > >         else
> > >                 p = mc->objects[--mc->nobjs];
> > >         BUG_ON(!p);
> > >         return p;
> > > }
> > >
> > > Perhaps the risk of actually panicking is small, but it probably
> > > indicates that we need better error handling around failed allocations
> > > from the cache.
> > > Or, the slightly less elegant approach might be to just hold the cache
> > > lock around the cache topup and use of pages from the cache, but
> > > adding better error handling would probably be cleaner.
> >
> > I was counting on the fact that the shrinker will ideally run only in
> > extreme cases, i.e. when the host is running low on memory, so this
> > WARN_ON will only rarely be hit. I was not aware of Syzbot; it seems
> > like it will be a concern if it does this kind of testing.
>
> In an extreme low-memory situation, forcing vCPUs to do GFP_ATOMIC
> allocations to handle page faults is risky. Plus it's a waste of time to
> free that memory since it's just going to get immediately reallocated.
>
> > I thought about keeping a mutex, taking it during topup and releasing
> > it after the whole operation is done, but I decided against it as the
> > duration of holding the mutex would be long and might block the memory
> > shrinker for longer. I am not sure, though, if this is a valid concern.
>
> Use mutex_trylock() to skip any vCPUs that are currently handling page
> faults.

Oh yeah! Thanks.
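For readers following the locking discussion, below is a minimal standalone sketch of the trylock pattern David suggests, written as a userspace pthread program rather than as the actual KVM patch. The per-vCPU cache structure, its lock, the object counts, and all function names are illustrative assumptions; only the shape of the idea is taken from the thread: vCPUs hold a lock across cache top-up and use, and the reclaim path uses a trylock so it skips (rather than blocks behind) any cache whose owner is mid-fault.

/*
 * Sketch only: userspace analogue of the per-vCPU cache mutex plus a
 * mutex_trylock()-style shrinker, NOT the actual KVM code.
 * Build: cc -pthread cache_trylock_sketch.c
 */
#include <pthread.h>
#include <stdio.h>

#define NR_VCPUS       4
#define CACHE_CAPACITY 40

struct vcpu_cache {
        pthread_mutex_t lock;   /* held while the vCPU tops up and uses the cache */
        int nobjs;              /* objects currently cached */
};

static struct vcpu_cache caches[NR_VCPUS];

/* vCPU side: hold the lock across top-up and allocation, mirroring the fault path. */
static void vcpu_handle_fault(struct vcpu_cache *c)
{
        pthread_mutex_lock(&c->lock);
        c->nobjs = CACHE_CAPACITY;      /* "top up" the cache */
        c->nobjs--;                     /* "allocate" one object for the fault */
        pthread_mutex_unlock(&c->lock);
}

/* Shrinker side: free only the caches whose owners are not busy. */
static int shrink_caches(void)
{
        int freed = 0;

        for (int i = 0; i < NR_VCPUS; i++) {
                struct vcpu_cache *c = &caches[i];

                if (pthread_mutex_trylock(&c->lock))
                        continue;       /* owner is mid-fault: skip, don't block */
                freed += c->nobjs;
                c->nobjs = 0;           /* purge the cache */
                pthread_mutex_unlock(&c->lock);
        }
        return freed;
}

int main(void)
{
        for (int i = 0; i < NR_VCPUS; i++) {
                pthread_mutex_init(&caches[i].lock, NULL);
                vcpu_handle_fault(&caches[i]);
        }

        /* Pretend vCPU 0 is in the middle of a fault: the shrinker skips it. */
        pthread_mutex_lock(&caches[0].lock);
        printf("freed %d cached objects\n", shrink_caches());
        pthread_mutex_unlock(&caches[0].lock);
        return 0;
}

Running this prints the objects reclaimed from the three idle caches while the "busy" one is left alone, which is the behavior the trylock suggestion is after: reclaim makes progress under memory pressure without stalling vCPUs that are actively faulting.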