References: <865de121-8190-5d30-ece5-3b097dc74431@kernel.dk>
 <20200526200015.GG325280@hirez.programming.kicks-ass.net>
User-agent: mu4e 0.9.17; emacs 26.3
From: Valentin Schneider
To: Peter Zijlstra
Cc: Jens Axboe, Ingo Molnar, linux-kernel@vger.kernel.org, Stefano Garzarella
Subject: Re: [PATCH] sched/fair: don't NUMA balance for kthreads
In-reply-to: <20200526200015.GG325280@hirez.programming.kicks-ass.net>
Date: Wed, 27 May 2020 00:42:40 +0100
X-Mailing-List: linux-kernel@vger.kernel.org

On 26/05/20 21:00, Peter Zijlstra wrote:
> On Tue, May 26, 2020 at 05:40:06PM +0100, Valentin Schneider wrote:
>
>> > Change the task_tick_numa() check to exclude kernel threads in general,
>> > as it doesn't make sense to attempt to balance for kthreads anyway.
>> >
>>
>> Does it? (this isn't a rhetorical question)
>>
>> Suppose a given kthread ends up doing more accesses to some pages
>> (via use_mm()) than the other threads that access them, wouldn't it make
>> sense to take that into account when it comes to NUMA balancing?
>
> Well, task_tick_numa() tries to farm off a bunch of actual work to
> task_work_add(), and there's so very little userspace for a kernel
> thread to return to... :-)

Err, true... I did say pipe dreams! I had only really taken note of the
exit / return-to-userspace callbacks, but I see io_uring has its own
task_work_run() calls, which (I think) explains how we can end up with a
kthread actually running task_numa_work().

I'm also thinking we really don't want that task_numa_work() to be left
hanging on the task_work list, because that self-looping thing will not
play nice with whatever else has been queued (which AFAICT shouldn't
happen under normal conditions, i.e. !kthreads).