Date: Sun, 15 Oct 2023 21:52:51 +0300
From: "Kirill A. Shutemov"
To: Nikolay Borisov
Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Sean Christopherson,
	Andrew Morton, Joerg Roedel, Ard Biesheuvel, Andi Kleen,
	Kuppuswamy Sathyanarayanan, David Rientjes, Vlastimil Babka,
	Tom Lendacky, Thomas Gleixner, Peter Zijlstra, Paolo Bonzini,
	Ingo Molnar, Dario Faggioli, Mike Rapoport, David Hildenbrand,
	Mel Gorman, marcelo.cerri@canonical.com, tim.gardner@canonical.com,
	khalid.elmously@canonical.com, philip.cox@canonical.com,
	aarcange@redhat.com, peterx@redhat.com, x86@kernel.org,
	linux-mm@kvack.org, linux-coco@lists.linux.dev,
	linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org,
	stable@kernel.org
Subject: Re: [PATCH] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance
Message-ID: <20231015185251.umkdsr7jx2qrlm2x@box.shutemov.name>
References: <20231014204040.28765-1-kirill.shutemov@linux.intel.com>
In-Reply-To:

On Sun, Oct 15, 2023 at 08:02:16PM +0300, Nikolay Borisov wrote:
> 
> 
> On 14.10.23 23:40, Kirill A. Shutemov wrote:
> > Michael reported soft lockups on a system that has unaccepted memory.
> > This occurs when a user attempts to allocate and accept memory on
> > multiple CPUs simultaneously.
> > 
> > The root cause of the issue is that memory acceptance is serialized with
> > a spinlock, allowing only one CPU to accept memory at a time. The other
> > CPUs spin and wait for their turn, leading to starvation and soft lockup
> > reports.
> > 
> > To address this, the code has been modified to release the spinlock
> > while accepting memory. This allows for parallel memory acceptance on
> > multiple CPUs.
> > 
> > A newly introduced "accepting_list" keeps track of which memory is
> > currently being accepted. This is necessary to prevent parallel
> > acceptance of the same memory block. If a collision occurs, the lock is
> > released and the process is retried.
> > 
> > Such collisions should rarely occur. The main path for memory acceptance
> > is the page allocator, which accepts memory in MAX_ORDER chunks. As long
> > as MAX_ORDER is equal to or larger than the unit_size, collisions will
> > never occur because the caller fully owns the memory block being
> > accepted.
> > 
> > Aside from the page allocator, only memblock and deferred_free_range()
> > accept memory, but this only happens during boot.
> > 
> > The code has been tested with unit_size == 128MiB to trigger collisions
> > and validate the retry codepath.
> > 
> > Signed-off-by: Kirill A. Shutemov
> > Reported-by: Michael Roth
> > Fixes: 2053bc57f367 ("efi: Add unaccepted memory support")
> > Cc:
> > ---
> >  drivers/firmware/efi/unaccepted_memory.c | 55 ++++++++++++++++++++++--
> >  1 file changed, 51 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c
> > index 853f7dc3c21d..8af0306c8e5c 100644
> > --- a/drivers/firmware/efi/unaccepted_memory.c
> > +++ b/drivers/firmware/efi/unaccepted_memory.c
> > @@ -5,9 +5,17 @@
> >  #include
> >  #include
> >  
> > -/* Protects unaccepted memory bitmap */
> > +/* Protects unaccepted memory bitmap and accepting_list */
> >  static DEFINE_SPINLOCK(unaccepted_memory_lock);
> >  
> > +struct accept_range {
> > +	struct list_head list;
> > +	unsigned long start;
> > +	unsigned long end;
> > +};
> > +
> > +static LIST_HEAD(accepting_list);
> > +
> >  /*
> >   * accept_memory() -- Consult bitmap and accept the memory if needed.
> >   *
> > @@ -24,6 +32,7 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
> >  {
> >  	struct efi_unaccepted_memory *unaccepted;
> >  	unsigned long range_start, range_end;
> > +	struct accept_range range, *entry;
> >  	unsigned long flags;
> >  	u64 unit_size;
> >  
> > @@ -78,20 +87,58 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
> >  	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
> >  		end = unaccepted->size * unit_size * BITS_PER_BYTE;
> >  
> > -	range_start = start / unit_size;
> > -
> > +	range.start = start / unit_size;
> > +	range.end = DIV_ROUND_UP(end, unit_size);
> > +retry:
> >  	spin_lock_irqsave(&unaccepted_memory_lock, flags);
> > +
> > +	/*
> > +	 * Check if anybody works on accepting the same range of the memory.
> > +	 *
> > +	 * The check is done with unit_size granularity. It is crucial to
> > +	 * catch all accept requests to the same unit_size block, even if
> > +	 * they don't overlap on physical address level.
> > +	 */
> > +	list_for_each_entry(entry, &accepting_list, list) {
> > +		if (entry->end < range.start)
> > +			continue;
> > +		if (entry->start >= range.end)
> > +			continue;
> > +
> > +		/*
> > +		 * Somebody else accepting the range. Or at least part of it.
> > +		 *
> > +		 * Drop the lock and retry until it is complete.
> > +		 */
> > +		spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
> > +		cond_resched();
> > +		goto retry;
> > +	}
> 
> So this works for the cases where we have concurrent acceptance of the
> same range. What about the same range being accepted multiple times, one
> after the other? The current code doesn't prevent this.

That's why we have the bitmap. The bits get cleared there after the first
accept. On the second attempt, for_each_set_bitrange_from() will skip the
range.

> What if you check whether the current range is fully contained within the
> duplicate entry and if it's fully covered simply return?

If it is fully covered, we still need to wait until somebody else finishes
the accept, so we cannot "just return".

We could try to return if we saw the range on the accepting_list before
but it has since disappeared, indicating that the accept has been
completed. But I don't think this optimization is worthwhile. As I
mentioned before, collisions hardly ever happen, so one more spin and
bitmap check would not make a difference. And it adds complexity.

> > +
> > +	/*
> > +	 * Register that the range is about to be accepted.
> > +	 * Make sure nobody else will accept it.
> > +	 */
> > +	list_add(&range.list, &accepting_list);
> > +
> > +	range_start = range.start;
> >  	for_each_set_bitrange_from(range_start, range_end, unaccepted->bitmap,
> > -				   DIV_ROUND_UP(end, unit_size)) {
> > +				   range.end) {
> >  		unsigned long phys_start, phys_end;
> >  		unsigned long len = range_end - range_start;
> >  
> >  		phys_start = range_start * unit_size + unaccepted->phys_base;
> >  		phys_end = range_end * unit_size + unaccepted->phys_base;
> >  
> > +		spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
> > +
> >  		arch_accept_memory(phys_start, phys_end);
> > +
> > +		spin_lock_irqsave(&unaccepted_memory_lock, flags);
> >  		bitmap_clear(unaccepted->bitmap, range_start, len);
> >  	}
> > +
> > +	list_del(&range.list);
> >  	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
> >  }

-- 
Kiryl Shutsemau / Kirill A. Shutemov