Subject: Re: [PATCH v2] psi: get poll_work to run when calling poll syscall next time
To: hannes@cmpxchg.org, surenb@google.com
Cc: dennis@kernel.org, mingo@redhat.com, axboe@kernel.dk, lizefan@huawei.com, peterz@infradead.org, tj@kernel.org, linux-kernel@vger.kernel.org, caspar@linux.alibaba.com, joseph.qi@linux.alibaba.com
References: <1563864339-2621-1-git-send-email-kerneljasonxing@linux.alibaba.com>
 <1564463819-120014-1-git-send-email-kerneljasonxing@linux.alibaba.com>
From: Jason Xing <kerneljasonxing@linux.alibaba.com>
Message-ID: <3208597b-05d1-2461-729b-c35bd6811188@linux.alibaba.com>
Date: Thu, 15 Aug 2019 09:59:39 +0800
In-Reply-To: <1564463819-120014-1-git-send-email-kerneljasonxing@linux.alibaba.com>

Hello,

This patch v2 has now been sitting for a couple of days without any
feedback. Any comments and suggestions would be appreciated. For easier
review, I have appended after the quoted patch a short sketch of the
scheduling path and the monitor snippet from psi.txt used to reproduce
the problem.

Thanks,
Jason

On 2019/7/30 1:16 PM, Jason Xing wrote:
> Only when calling the poll syscall for the first time can the user
> receive POLLPRI correctly. After that, the user always fails to
> acquire the event signal.
>
> Reproduce case:
> 1. Get the monitor code in Documentation/accounting/psi.txt
> 2. Run it, and wait for the event to be triggered.
> 3. Kill and restart the process.
>
> As long as the monitor process is never killed, poll_work appears to
> work fine. After killing and restarting the monitor, however, the
> poll_work in the kernel never runs again because of the stale value
> of poll_scheduled. Therefore, we should reset the value as
> group_init() does after the last trigger is destroyed.
>
> [PATCH V2]
> In patch v2, I put the atomic_set(&group->poll_scheduled, 0);
> in the right place.
> Here I quote Johannes's explanation, which puts it best:
> "The question is why we can end up with poll_scheduled = 1 but the work
> not running (which would reset it to 0). And the answer is because the
> scheduling side sees group->poll_kworker under RCU protection and then
> schedules it, but here we cancel the work and destroy the worker. The
> cancel needs to pair with resetting the poll_scheduled flag."
>
> Signed-off-by: Jason Xing <kerneljasonxing@linux.alibaba.com>
> Reviewed-by: Caspar Zhang <caspar@linux.alibaba.com>
> Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
> Reviewed-by: Suren Baghdasaryan <surenb@google.com>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> ---
>  kernel/sched/psi.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
> index 7acc632..acdada0 100644
> --- a/kernel/sched/psi.c
> +++ b/kernel/sched/psi.c
> @@ -1131,7 +1131,14 @@ static void psi_trigger_destroy(struct kref *ref)
>  	 * deadlock while waiting for psi_poll_work to acquire trigger_lock
>  	 */
>  	if (kworker_to_destroy) {
> +		/*
> +		 * After the RCU grace period has expired, the worker
> +		 * can no longer be found through group->poll_kworker.
> +		 * But it might have been already scheduled before
> +		 * that - deschedule it cleanly before destroying it.
> +		 */
>  		kthread_cancel_delayed_work_sync(&group->poll_work);
> +		atomic_set(&group->poll_scheduled, 0);
>  		kthread_destroy_worker(kworker_to_destroy);
>  	}
>  	kfree(t);
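To make the pairing concrete: the scheduling side treats poll_scheduled
as a one-shot latch that is taken before queueing poll_work and cleared
when the work actually runs. Below is a rough sketch of that path (a
paraphrase for illustration, not verbatim psi.c; the _sketch suffix
marks it as hypothetical). If the worker is destroyed while the latch
is still set, every later scheduling attempt bails out at the cmpxchg,
which is exactly the symptom seen after killing and restarting the
monitor.

/*
 * Sketch of the scheduling path (paraphrased, not verbatim psi.c).
 * poll_scheduled is latched before the work is queued and cleared by
 * the work handler itself. If the queued work is cancelled and the
 * worker destroyed while the latch is set, nothing ever clears it,
 * so every later call returns early at the cmpxchg below and
 * poll_work never runs again.
 */
static void psi_schedule_poll_work_sketch(struct psi_group *group,
					  unsigned long delay)
{
	struct kthread_worker *kworker;

	rcu_read_lock();

	/* Do not reschedule if already scheduled */
	if (atomic_cmpxchg(&group->poll_scheduled, 0, 1) != 0)
		goto out;

	kworker = rcu_dereference(group->poll_kworker);
	if (likely(kworker))
		kthread_queue_delayed_work(kworker, &group->poll_work, delay);
	else
		/* Raced with psi_trigger_destroy(): undo the latch */
		atomic_set(&group->poll_scheduled, 0);
out:
	rcu_read_unlock();
}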
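And for anyone reproducing this, the monitor referred to in the steps
above is the example program from Documentation/accounting/psi.txt,
roughly as follows (the "some 150000 1000000" trigger, i.e. 150ms of
"some" memory stall within a 1s window, is just the documented sample
threshold):

#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char trig[] = "some 150000 1000000";
	struct pollfd fds;
	int n;

	fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
	if (fds.fd < 0) {
		printf("/proc/pressure/memory open error: %s\n",
			strerror(errno));
		return 1;
	}
	fds.events = POLLPRI;

	/* Writing the trigger string registers a trigger on this fd */
	if (write(fds.fd, trig, strlen(trig) + 1) < 0) {
		printf("/proc/pressure/memory write error: %s\n",
			strerror(errno));
		return 1;
	}

	while (1) {
		n = poll(&fds, 1, -1);
		if (n < 0) {
			printf("poll error: %s\n", strerror(errno));
			return 1;
		}
		if (fds.revents & POLLERR) {
			printf("got POLLERR, event source is gone\n");
			return 0;
		}
		if (fds.revents & POLLPRI)
			printf("event triggered!\n");
	}

	return 0;
}

On an unpatched kernel, the second run of this program after a kill
never sees POLLPRI even when the stall threshold is exceeded; with the
patch applied it behaves the same as the first run.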