Date: Tue, 28 May 2019 14:10:37 +0200
From: Michal Koutný
To: Joel Savitz
Cc: Li Zefan, Tejun Heo, Waiman Long, Phil Auld, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] cpuset: restore sanity to cpuset_cpus_allowed_fallback()
Message-ID: <20190528121036.GC31588@blackbody.suse.cz>
References: <20190409204003.6428-1-jsavitz@redhat.com> <20190521143414.GJ5307@blackbody.suse.cz>

Thanks for digging through this.

On Fri, May 24, 2019 at 11:33:55AM -0400, Joel Savitz wrote:
> It is a bit ambiguous, but I performed no action on the task's cpuset
> nor did I offline any cpus at point (a).
So did you do any operation that left you with cpu_active_mask & 0xf0 == 0?
(If so, I think the demo code should be made without it to avoid the
confusion.) Regardless, the demo code should also specify in what cpuset it
happens (for the v2 case).

> I think the /proc/$$/status value is intended to simply reflect the
> user-specified policy stating which cpus the task is allowed to run on
> without consideration for hardware state, whereas the taskset value is
> representative of the cpus that the task can actually be run on given
> the restriction policy specified by the user via the cpuset mechanism.

Yes, it seems to me to be somewhat analogous to effective_cpus vs
cpus_allowed in the v2 cpuset.

> By the way, I posted a v2 of this patch that correctly handles cgroup
> v2 behavior.

I see the original version made the state = cpuset in select_fallback_rq
mostly redundant. The split on v2 (hierarchy) in v2 (patch) makes some
sense. Although, on v1 we will lose the "no longer affine to..." message
(which is what happens in your demo IIUC).

Michal
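P.S. The two views Joel contrasts above can be observed directly from userspace: `/proc/<pid>/status` reports the `Cpus_allowed` mask as procfs presents it, while `sched_getaffinity(2)` (which is what `taskset -p` queries) reports the set the scheduler will actually use. A minimal Linux-only sketch (Python chosen purely for illustration; this is not part of the patch under discussion):

```python
import os

# Policy-side view: the Cpus_allowed mask as reported by procfs.
# Note the exact "Cpus_allowed:" prefix, which also excludes the
# separate "Cpus_allowed_list:" field.
with open("/proc/self/status") as f:
    status_mask = next(
        line.split(":", 1)[1].strip()
        for line in f
        if line.startswith("Cpus_allowed:")
    )

# Effective view: what sched_getaffinity(2) returns for this task --
# the same information `taskset -p <pid>` prints.
affinity = os.sched_getaffinity(0)

print("Cpus_allowed (procfs):", status_mask)
print("sched_getaffinity:    ", sorted(affinity))
```

For an unconstrained task the two agree; a divergence like the one in the demo would only appear after cpuset restriction and/or CPU hotplug operations of the kind discussed above.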