Date: Thu, 18 Jul 2019 12:17:40 +0200
From: Christian Brauner
To: "Joel Fernandes (Google)"
Cc: linux-kernel@vger.kernel.org, Suren Baghdasaryan, kernel-team@android.com,
	Andrea Arcangeli, Andrew Morton, "Eric W. Biederman", Oleg Nesterov,
	Tejun Heo
Subject: Re: [PATCH RFC v1] pidfd: fix a race in setting exit_state for pidfd polling
Message-ID: <20190718101735.pbu6nji6mfwq4mxa@brauner.io>
References: <20190717172100.261204-1-joel@joelfernandes.org>
In-Reply-To: <20190717172100.261204-1-joel@joelfernandes.org>

On Wed, Jul 17, 2019 at 01:21:00PM -0400, Joel Fernandes wrote:
> From: Suren Baghdasaryan
>
> There is a race between reading task->exit_state in pidfd_poll and writing
> it after do_notify_parent calls do_notify_pidfd. The expected sequence of
> events is:
>
> CPU 0                            CPU 1
> ------------------------------------------------
> exit_notify
>   do_notify_parent
>     do_notify_pidfd
> tsk->exit_state = EXIT_DEAD
>                                  pidfd_poll
>                                     if (tsk->exit_state)
>
> However, nothing prevents the following sequence:
>
> CPU 0                            CPU 1
> ------------------------------------------------
> exit_notify
>   do_notify_parent
>     do_notify_pidfd
>                                  pidfd_poll
>                                     if (tsk->exit_state)
> tsk->exit_state = EXIT_DEAD
>
> This causes a polling task to wait forever, since poll blocks because
> exit_state is 0 and the waiting task is not notified again. A stress
> test continuously doing pidfd poll and process exits uncovered this bug,

Btw, if that stress test is in any way upstreamable I'd like to put
this into for-next as well. :)

Christian
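
P.S.: For anyone who wants to hammer on this locally in the meantime, a
minimal userspace sketch of such a stress test could look roughly like the
one below. To be clear, this is purely illustrative and not the test
referenced above; it assumes pidfd_open(2) (Linux 5.3+ with pidfd polling
support), and the 5 second poll timeout is an arbitrary choice so a lost
wakeup shows up as a timeout instead of a silent hang.

/*
 * Illustrative pidfd poll stress test sketch (not the reproducer
 * referenced in the patch). Assumes pidfd_open(2), i.e. Linux 5.3+
 * with pidfd polling support.
 *
 * Repeatedly fork a child that exits immediately, open a pidfd for
 * it, and poll the pidfd for POLLIN. If the exit_state race is hit,
 * poll() never sees the child's exit and the 5s timeout fires.
 */
#define _GNU_SOURCE
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>

#ifndef __NR_pidfd_open
#define __NR_pidfd_open 434
#endif

static int sys_pidfd_open(pid_t pid, unsigned int flags)
{
	return syscall(__NR_pidfd_open, pid, flags);
}

int main(void)
{
	for (;;) {
		pid_t pid = fork();
		if (pid < 0) {
			perror("fork");
			exit(EXIT_FAILURE);
		}
		if (pid == 0)
			_exit(EXIT_SUCCESS); /* child exits right away */

		int pidfd = sys_pidfd_open(pid, 0);
		if (pidfd < 0) {
			perror("pidfd_open");
			exit(EXIT_FAILURE);
		}

		struct pollfd pfd = { .fd = pidfd, .events = POLLIN };

		/* 5s timeout so a missed wakeup is reported, not hung on. */
		int ready = poll(&pfd, 1, 5000);
		if (ready == 0)
			fprintf(stderr, "pidfd not readable although child exited\n");
		else if (ready < 0) {
			perror("poll");
			exit(EXIT_FAILURE);
		}

		close(pidfd);
		waitpid(pid, NULL, 0);
	}
}

On a kernel affected by the race the poll() above can time out even though
the child has long since exited; on a fixed kernel it should always return
POLLIN promptly.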