Date: Tue, 12 May 2020 09:09:15 -0700
From: Davidlohr Bueso <dave@stgolabs.net>
To: Oleg Nesterov
Cc: akpm@linux-foundation.org, peterz@infradead.org, paulmck@kernel.org,
    tglx@linutronix.de, linux-kernel@vger.kernel.org, Davidlohr Bueso
Subject: Re: [PATCH 1/2] kernel/sys: only rely on rcu for getpriority(2)
Message-ID: <20200512160915.n3plwrwwrlpfqyrs@linux-p48b>
References: <20200512000353.23653-1-dave@stgolabs.net>
    <20200512000353.23653-2-dave@stgolabs.net>
    <20200512150936.GA28621@redhat.com>
In-Reply-To: <20200512150936.GA28621@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 12 May 2020, Oleg Nesterov wrote:

>On 05/11, Davidlohr Bueso wrote:
>>
>> Currently the tasklist_lock is shared mainly in order to observe
>> the list atomically for the PRIO_PGRP and PRIO_USER cases, as the
>> actual lookups are already rcu-safe,
>
>not really...
>
>do_each_pid_task(PIDTYPE_PGID) can race with change_pid(PIDTYPE_PGID)
>which moves the task from one hlist to another. Yes, it is safe in
>that task_struct can't go away. But still this is not right because
>do_each_pid_task() can scan the wrong (2nd) hlist.

Hmm, I hadn't thought about this case; I guess ioprio_get(2) is busted
in the same way, then.

>
>> (ii) exit (deletion), this window is small but if a task is
>> deleted with the highest nice and it is not observed this would
>> cause a change in return semantics. To further reduce the window
>> we ignore any tasks that are PF_EXITING in the 'old' version of
>> the list.
>
>can't understand...
>
>could you explain in details why do you think this PF_EXITING check
>makes any sense?

My reasoning was that if the task with the highest priority exited while
we were iterating the list, it would not necessarily be seen under rcu,
and the syscall would return the highest priority of a task that had
already exited; checking against PF_EXITING was a way to ignore such
scenarios, since we were going to race with the exit anyway.

At this point it seems we can just remove the lock for the PRIO_PROCESS
case.

Thanks,
Davidlohr