Date: Wed, 14 Nov 2018 14:51:19 -0800
From: Andrew Morton
To: Davidlohr Bueso
Cc: jbaron@akamai.com, viro@zeniv.linux.org.uk,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    Davidlohr Bueso
Subject: Re: [PATCH 2/2] fs/epoll: deal with wait_queue only once
Message-Id: <20181114145119.2e00ce7530d32fc4958c3707@linux-foundation.org>
In-Reply-To: <20181114182532.27981-2-dave@stgolabs.net>
References: <20181114182532.27981-1-dave@stgolabs.net>
	<20181114182532.27981-2-dave@stgolabs.net>

On Wed, 14 Nov 2018 10:25:32 -0800 Davidlohr Bueso wrote:

> There is no reason why we rearm the waitqueue upon every
> fetch_events retry (for when events are found yet send_events()
> fails). If nothing else, this saves four lock operations per
> retry, and furthermore reduces the scope of the lock even
> further.
>
> ..
>
> --- a/fs/eventpoll.c
> +++ b/fs/eventpoll.c
> @@ -1749,6 +1749,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
>  {
>  	int res = 0, eavail, timed_out = 0;
>  	u64 slack = 0;
> +	bool waiter = false;
>  	wait_queue_entry_t wait;
>  	ktime_t expires, *to = NULL;
>
> @@ -1786,6 +1787,15 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
>  	if (eavail)
>  		goto send_events;
>
> +	if (!waiter) {
> +		waiter = true;
> +		init_waitqueue_entry(&wait, current);
> +
> +		spin_lock_irq(&ep->wq.lock);
> +		__add_wait_queue_exclusive(&ep->wq, &wait);
> +		spin_unlock_irq(&ep->wq.lock);
> +	}
> +
>  	/*
>  	 * Busy poll timed out.  Drop NAPI ID for now, we can add
>  	 * it back in when we have moved a socket with a valid NAPI
> @@ -1798,10 +1808,6 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
>  	 * We need to sleep here, and we will be wake up by
>  	 * ep_poll_callback() when events will become available.
>  	 */
> -	init_waitqueue_entry(&wait, current);
> -	spin_lock_irq(&ep->wq.lock);
> -	__add_wait_queue_exclusive(&ep->wq, &wait);
> -	spin_unlock_irq(&ep->wq.lock);

Why was this moved to before the ep_reset_busy_poll_napi_id() call?
That movement placed the code ahead of the block comment which serves
to explain its function.

This?  Which also fixes that comment and reflows it to use 80 cols.

--- a/fs/eventpoll.c~fs-epoll-deal-with-wait_queue-only-once-fix
+++ a/fs/eventpoll.c
@@ -1787,15 +1787,6 @@ fetch_events:
 	if (eavail)
 		goto send_events;
 
-	if (!waiter) {
-		waiter = true;
-		init_waitqueue_entry(&wait, current);
-
-		spin_lock_irq(&ep->wq.lock);
-		__add_wait_queue_exclusive(&ep->wq, &wait);
-		spin_unlock_irq(&ep->wq.lock);
-	}
-
 	/*
 	 * Busy poll timed out.  Drop NAPI ID for now, we can add
 	 * it back in when we have moved a socket with a valid NAPI
@@ -1804,10 +1795,18 @@ fetch_events:
 	ep_reset_busy_poll_napi_id(ep);
 
 	/*
-	 * We don't have any available event to return to the caller.
-	 * We need to sleep here, and we will be wake up by
-	 * ep_poll_callback() when events will become available.
+	 * We don't have any available event to return to the caller. We need
+	 * to sleep here, and we will be woken by ep_poll_callback() when events
+	 * become available.
 	 */
+	if (!waiter) {
+		waiter = true;
+		init_waitqueue_entry(&wait, current);
+
+		spin_lock_irq(&ep->wq.lock);
+		__add_wait_queue_exclusive(&ep->wq, &wait);
+		spin_unlock_irq(&ep->wq.lock);
+	}
 
 	for (;;) {
 		/*
_