From: Linus Torvalds
Date: Tue, 22 Aug 2017 12:30:19 -0700
Subject: Re: [PATCH 1/2] sched/wait: Break up long wake list walk
To: Peter Zijlstra
Cc: "Liang, Kan", Mel Gorman, "Kirill A. Shutemov", Tim Chen, Ingo Molnar, Andi Kleen, Andrew Morton, Johannes Weiner, Jan Kara, linux-mm, Linux Kernel Mailing List

On Tue, Aug 22, 2017 at 12:08 PM, Peter Zijlstra wrote:
>
> So that migration stuff has a filter on, we need two consecutive numa
> faults from the same page_cpupid 'hash', see
> should_numa_migrate_memory().

Hmm. That is only called for MPOL_F_MORON. We don't actually know what
policy the problem space uses, since this is some specialized load.

I could easily see somebody having set MPOL_PREFERRED with MPOL_F_LOCAL
and then touching it from every single node.
Isn't that even the default?

> And since this appears to be anonymous memory (no THP) this is all a
> single address space. However, we don't appear to invalidate TLBs when
> we upgrade the PTE protection bits (not strictly required of course), so
> we can have multiple CPUs trip over the same 'old' NUMA PTE.
>
> Still, generating such a migration storm would be fairly tricky I think.

Well, Mel seems to have been unable to generate a load that reproduces
the long page waitqueues, and I don't think we've had any other reports
of this either. So "fairly tricky" may well be exactly what it needs.

Likely also with a user load that does something that the people
involved in automatic NUMA migration would have considered completely
insane and never tested or even thought about.

Users sometimes do completely insane things. It may have started as a
workaround for some particular case where they did something wrong "on
purpose", and then they entirely forgot about it, and five years later
it's running their whole infrastructure and doing insane things, because
the "particular case" it was tested with was on some broken
preproduction machine with totally broken firmware tables for the memory
node layout.

              Linus