Date: Mon, 16 May 2022 11:53:11 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Nicolas Saenz Julienne, Marcelo Tosatti, Vlastimil Babka, Michal Hocko,
    LKML, Linux-MM
Subject: Re: [PATCH 0/6] Drain remote per-cpu directly v3
Message-ID: <20220516105311.GL3441@techsingularity.net>
References: <20220512085043.5234-1-mgorman@techsingularity.net>
 <20220512124325.751781bb88ceef5c37ca653e@linux-foundation.org>
 <20220513142330.GI3441@techsingularity.net>
 <20220513123805.41e560392d028c271b36847d@linux-foundation.org>
In-Reply-To: <20220513123805.41e560392d028c271b36847d@linux-foundation.org>

On Fri, May 13, 2022 at 12:38:05PM -0700, Andrew Morton wrote:
> > The sentence can be dropped because it adds little and is potentially
> > confusing. The PCP being safe to access remotely is specific to the
> > context of the CPU being hot-removed and there are other special corner
> > cases like zone_pcp_disable that modifies a per-cpu structure remotely
> > but not in a way that causes corruption.
>
> OK. I pasted in your para from the other email. Current 0/n blurb:
>
> Some setups, notably NOHZ_FULL CPUs, may be running realtime or
> latency-sensitive applications that cannot tolerate interference due to
> per-cpu drain work queued by __drain_all_pages(). Introduce a new
> mechanism to remotely drain the per-cpu lists. It is made possible by
> remotely locking the new per-cpu spinlocks in 'struct per_cpu_pages'.
> This has two advantages: the time to drain is more predictable and other
> unrelated tasks are not interrupted.
>
> This series has the same intent as Nicolas' series "mm/page_alloc: Remote
> per-cpu lists drain support" -- avoid interference with a high-priority
> task due to a workqueue item draining per-cpu page lists. While many
> workloads can tolerate a brief interruption, it may cause a real-time
> task running on a NOHZ_FULL CPU to miss a deadline and, at minimum, the
> draining is non-deterministic.
>
> Currently an IRQ-safe local_lock protects the page allocator per-cpu
> lists. The local_lock on its own prevents migration and the IRQ disabling
> protects from corruption due to an interrupt arriving while a page
> allocation is in progress.
>
> This series adjusts the locking. A spinlock is added to struct
> per_cpu_pages to protect the list contents while local_lock_irq continues
> to prevent migration and IRQ reentry. This allows a remote CPU to safely
> drain a remote per-cpu list.
>
> This is a partial series. Follow-on work should allow the local_irq_save
> to be converted to a local_irq to avoid IRQs being disabled/enabled in
> most cases. Consequently, there are some TODO comments highlighting the
> places that would change if local_irq was used. However, there are enough
> corner cases that it deserves a series of its own, separated by one
> kernel release, and the priority right now is to avoid interference with
> high-priority tasks.

Looks good, thanks!

-- 
Mel Gorman
SUSE Labs
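
For readers skimming the archive, a minimal userspace sketch of the idea in
the blurb above may help. It is illustrative only: 'struct pcp_stub',
'struct page_stub' and 'drain_remote()' are invented stand-ins for this
example, and a POSIX spinlock stands in for the spinlock the series adds to
'struct per_cpu_pages'; this is not the kernel code itself. The point it
shows is that once the list carries its own lock, another thread (a remote
CPU in the kernel case) can drain the list directly by taking that lock,
rather than queueing drain work on the CPU that owns the list.

/* Illustrative only -- a userspace analogy, not the kernel implementation. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct page_stub {
	struct page_stub *next;
};

/* Rough analogue of a per-cpu page list that now carries its own lock. */
struct pcp_stub {
	pthread_spinlock_t lock;	/* protects 'list' and 'count' */
	struct page_stub *list;
	int count;
};

/*
 * Because the list has its own lock, any thread can drain it safely;
 * there is no need to ask the owning thread/CPU to do the work.
 */
static void drain_remote(struct pcp_stub *pcp)
{
	pthread_spin_lock(&pcp->lock);
	while (pcp->list) {
		struct page_stub *page = pcp->list;

		pcp->list = page->next;
		pcp->count--;
		free(page);	/* stands in for freeing back to the allocator */
	}
	pthread_spin_unlock(&pcp->lock);
}

int main(void)
{
	struct pcp_stub pcp = { .list = NULL, .count = 0 };
	int i;

	pthread_spin_init(&pcp.lock, PTHREAD_PROCESS_PRIVATE);

	/* Populate the "local" list, as the owning CPU would when freeing pages. */
	for (i = 0; i < 4; i++) {
		struct page_stub *page = malloc(sizeof(*page));

		if (!page)
			break;
		page->next = pcp.list;
		pcp.list = page;
		pcp.count++;
	}

	drain_remote(&pcp);	/* in the kernel, a remote CPU may now do this */
	printf("pages left on list: %d\n", pcp.count);

	pthread_spin_destroy(&pcp.lock);
	return 0;
}

Built with 'cc -pthread', calling drain_remote() from a second thread behaves
the same way; the kernel-side equivalent is that the drain runs under the new
per-list spinlock instead of via a workqueue item scheduled on the remote CPU.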