From: Roman Penyaev
Cc: Roman Penyaev, Andrew Morton, Khazhismel Kumykov, Alexander Viro,
    Heiher, Jason Baron, stable@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/2] epoll: atomically remove wait entry on wake up
Date: Thu, 30 Apr 2020 15:03:26 +0200
Message-Id: <20200430130326.1368509-2-rpenyaev@suse.de>
In-Reply-To: <20200430130326.1368509-1-rpenyaev@suse.de>
References: <20200430130326.1368509-1-rpenyaev@suse.de>
X-Mailer: git-send-email 2.24.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch does two things:

1. fixes the lost wakeup introduced by:
   339ddb53d373 ("fs/epoll: remove unnecessary wakeups of nested epoll")

2. improves performance for events delivery.
The problem is the following: if N (>1) threads are waiting on ep->wq
for new events and M (>1) events come, it is quite likely that >1
wakeups hit the same wait queue entry, because there is quite a big
window between the __add_wait_queue_exclusive() and the following
__remove_wait_queue() calls in ep_poll(). This can lead to lost
wakeups, because the thread that was woken up may not handle all the
events in ->rdllist. (The problem is described in more detail here:
https://lkml.org/lkml/2019/10/7/905)

The idea of the current patch is to use init_wait() instead of
init_waitqueue_entry(). Internally init_wait() sets
autoremove_wake_function() as the wakeup callback, which removes the
wait entry atomically (under the wq lock) from the list, so the next
incoming wakeup hits the next wait entry in the wait queue, preventing
lost wakeups. The problem is reliably reproduced by the epoll60 test
case [1].

Removing the wait entry on wakeup also has performance benefits:
there is no need to take ep->lock and remove the wait entry from the
queue after a successful wakeup. Here is the timing output of the
epoll60 test case:

With explicit wakeup from ep_scan_ready_list() (the state of the code
prior to 339ddb53d373):

  real    0m6.970s
  user    0m49.786s
  sys     0m0.113s

After this patch:

  real    0m5.220s
  user    0m36.879s
  sys     0m0.019s

The other test case is stress-epoll [2], where one thread consumes all
the events while the other threads produce many events:

With explicit wakeup from ep_scan_ready_list() (the state of the code
prior to 339ddb53d373):

  threads  events/ms  run-time ms
        8       5427         1474
       16       6163         2596
       32       6824         4689
       64       7060         9064
      128       6991        18309

After this patch:

  threads  events/ms  run-time ms
        8       5598         1429
       16       7073         2262
       32       7502         4265
       64       7640         8376
      128       7634        16767

("events/ms" is the event bandwidth, so higher is better; "run-time ms"
is the overall time spent running the benchmark, so lower is better)

[1] tools/testing/selftests/filesystems/epoll/epoll_wakeup_test.c
[2] https://github.com/rouming/test-tools/blob/master/stress-epoll.c

Signed-off-by: Roman Penyaev
Cc: Andrew Morton
Cc: Khazhismel Kumykov
Cc: Alexander Viro
Cc: Heiher
Cc: Jason Baron
Cc: stable@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 fs/eventpoll.c | 43 ++++++++++++++++++++++++-------------------
 1 file changed, 24 insertions(+), 19 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index d6ba0e52439b..aba03ee749f8 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -1822,7 +1822,6 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 {
 	int res = 0, eavail, timed_out = 0;
 	u64 slack = 0;
-	bool waiter = false;
 	wait_queue_entry_t wait;
 	ktime_t expires, *to = NULL;
 
@@ -1867,21 +1866,23 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 	 */
 	ep_reset_busy_poll_napi_id(ep);
 
-	/*
-	 * We don't have any available event to return to the caller. We need
-	 * to sleep here, and we will be woken by ep_poll_callback() when events
-	 * become available.
-	 */
-	if (!waiter) {
-		waiter = true;
-		init_waitqueue_entry(&wait, current);
-
+	do {
+		/*
+		 * Internally init_wait() uses autoremove_wake_function(),
+		 * thus the wait entry is removed from the wait queue on
+		 * each wakeup. Why is that important? In case of several
+		 * waiters each new wakeup will hit the next waiter, giving
+		 * it the chance to harvest new events. Otherwise a wakeup
+		 * can be lost. This is also good performance-wise: on the
+		 * normal wakeup path there is no need to call
+		 * __remove_wait_queue() explicitly, so ep->lock is not
+		 * taken, which stalls event delivery.
+		 */
+		init_wait(&wait);
 		write_lock_irq(&ep->lock);
 		__add_wait_queue_exclusive(&ep->wq, &wait);
 		write_unlock_irq(&ep->lock);
-	}
 
-	for (;;) {
 		/*
 		 * We don't want to sleep if the ep_poll_callback() sends us
 		 * a wakeup in between. That's why we set the task state
@@ -1911,10 +1912,20 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 			timed_out = 1;
 			break;
 		}
-	}
+
+		/* We were woken up, thus go and try to harvest some events */
+		eavail = 1;
+
+	} while (0);
 
 	__set_current_state(TASK_RUNNING);
 
+	if (!list_empty_careful(&wait.entry)) {
+		write_lock_irq(&ep->lock);
+		__remove_wait_queue(&ep->wq, &wait);
+		write_unlock_irq(&ep->lock);
+	}
+
 send_events:
 	/*
 	 * Try to transfer events to user space. In case we get 0 events and
@@ -1925,12 +1936,6 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 	    !(res = ep_send_events(ep, events, maxevents)) && !timed_out)
 		goto fetch_events;
 
-	if (waiter) {
-		write_lock_irq(&ep->lock);
-		__remove_wait_queue(&ep->wq, &wait);
-		write_unlock_irq(&ep->lock);
-	}
-
 	return res;
 }
 
-- 
2.24.1
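
For illustration, here is a minimal userspace sketch of the scenario
this patch addresses. It is not the epoll60 selftest or the stress-epoll
source referenced above, and the thread/event counts are arbitrary:
several threads block in epoll_wait() on one epoll fd, so inside the
kernel they all sit on the same ep->wq as exclusive waiters, while a
producer thread generates events. The program checks that every produced
event is eventually harvested.

/* Illustrative sketch only -- not the epoll60/stress-epoll code. */
#include <errno.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

#define NR_WAITERS	4
#define NR_EVENTS	64

static int epfd;
static _Atomic long harvested;

static void *waiter(void *arg)
{
	struct epoll_event ev;
	uint64_t cnt;
	int n;

	(void)arg;
	for (;;) {
		/* Block on the epoll fd: an exclusive wait on ep->wq */
		n = epoll_wait(epfd, &ev, 1, 200);
		if (n < 0 && errno == EINTR)
			continue;
		if (n <= 0)
			break;	/* timed out: the producer is done */
		/*
		 * Drain the eventfd: read() returns the counter and
		 * resets it, re-arming the fd for the next event. A
		 * racing waiter may see EAGAIN here, which is fine.
		 */
		if (read(ev.data.fd, &cnt, sizeof(cnt)) == sizeof(cnt))
			harvested += (long)cnt;
	}
	return NULL;
}

int main(void)
{
	pthread_t thr[NR_WAITERS];
	struct epoll_event ev;
	uint64_t one = 1;
	int efd, i;

	epfd = epoll_create1(0);
	efd = eventfd(0, EFD_NONBLOCK);
	if (epfd < 0 || efd < 0)
		return 1;

	ev.events = EPOLLIN;
	ev.data.fd = efd;
	if (epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev) < 0)
		return 1;

	for (i = 0; i < NR_WAITERS; i++)
		pthread_create(&thr[i], NULL, waiter, NULL);

	/* Each write bumps the eventfd counter and wakes one waiter */
	for (i = 0; i < NR_EVENTS; i++) {
		if (write(efd, &one, sizeof(one)) != sizeof(one))
			return 1;
		usleep(1000);
	}

	for (i = 0; i < NR_WAITERS; i++)
		pthread_join(thr[i], NULL);

	/* Every produced event must be accounted for by some waiter */
	printf("harvested %ld of %d events\n", (long)harvested, NR_EVENTS);
	return harvested == NR_EVENTS ? 0 : 1;
}

Build with something like "gcc -O2 -pthread -o epoll-sketch
epoll-sketch.c" (the file name is made up). The sketch only exercises
the multi-waiter wakeup pattern described above; the actual lost-wakeup
detection logic lives in the epoll60 test case [1].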