Date: Wed, 31 Mar 2021 21:42:10 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Thomas Gleixner
Cc: Linux-MM, Linux-RT-Users, LKML, Chuck Lever,
	Jesper Dangaard Brouer, Matthew Wilcox, Sebastian Andrzej Siewior
Subject: Re: [PATCH 2/6] mm/page_alloc: Convert per-cpu list protection to local_lock
Message-ID: <20210331204210.GB3697@techsingularity.net>
References: <20210329120648.19040-1-mgorman@techsingularity.net>
	<20210329120648.19040-3-mgorman@techsingularity.net>
	<877dln640j.ffs@nanos.tec.linutronix.de>
	<20210331110137.GA3697@techsingularity.net>
	<871rbv5iel.ffs@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Content-Disposition: inline
In-Reply-To:
	<871rbv5iel.ffs@nanos.tec.linutronix.de>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Mar 31, 2021 at 07:42:42PM +0200, Thomas Gleixner wrote:
> On Wed, Mar 31 2021 at 12:01, Mel Gorman wrote:
> > On Wed, Mar 31, 2021 at 11:55:56AM +0200, Thomas Gleixner wrote:
> > @@ -887,13 +887,11 @@ void cpu_vm_stats_fold(int cpu)
> >
> >  		pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
> >
> > -		preempt_disable();
> >  		for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
> >  			if (pzstats->vm_stat_diff[i]) {
> >  				int v;
> >
> > -				v = pzstats->vm_stat_diff[i];
> > -				pzstats->vm_stat_diff[i] = 0;
> > +				v = this_cpu_xchg(pzstats->vm_stat_diff[i], 0);
>
> Confused. pzstats is not a percpu pointer. zone->per_cpu_zonestats is.
> But @cpu is not necessarily the current CPU.

I was drinking drain cleaner instead of coffee. The code was also broken
to begin with.

drain_pages() is draining the pagesets of a local or dead CPU. For a
local CPU, disabling IRQs prevents an IRQ arriving during the drain,
trying to allocate a page and potentially corrupting the local
pageset -- ok.

zone_pcp_reset() is accessing a remote CPU's pageset, freeing the percpu
pointer and resetting it to boot_pageset. zone_pcp_reset() calling
local_irq_save() does not offer any special protection against
drain_pages() because the IRQs being disabled are on two separate CPUs.

This particular patch may have no reason to touch zone_pcp_reset(),
cpu_vm_stats_fold() or drain_zonestat() at all, but I need to think
about it more tomorrow.

-- 
Mel Gorman
SUSE Labs