From: André Almeida <andrealmeid@collabora.com>
To: linux-kernel@vger.kernel.org, tglx@linutronix.de
Cc: kernel@collabora.com, krisman@collabora.com, shuah@kernel.org, linux-kselftest@vger.kernel.org, rostedt@goodmis.org, ryao@gentoo.org, peterz@infradead.org, dvhart@infradead.org, mingo@redhat.com, z.figura12@gmail.com, steven@valvesoftware.com, pgriffais@valvesoftware.com, André Almeida
Subject: [PATCH v2 0/4] Implement FUTEX_WAIT_MULTIPLE operation
Date: Thu, 6 Feb 2020 11:10:47 -0300
Message-Id: <20200206141051.6124-1-andrealmeid@collabora.com>
Hello,

This patchset implements a new futex operation, called FUTEX_WAIT_MULTIPLE, which allows a thread to wait on several futexes at the same time and be awoken by any of them.

The use case lies in the Wine implementation of the Windows NT interface WaitMultipleObjects. This Windows API function allows a thread to sleep waiting for the first of a set of event sources (mutexes, timers, signals, console input, etc.) to signal. Since this is a primitive synchronization operation for Windows applications, being able to quickly signal events on the producer side and quickly go to sleep on the consumer side is essential for good performance of applications running over Wine.

Since this API exposes a mechanism to wait on multiple objects, and we might have multiple waiters for each of these events (an M->N relationship), the current Linux interfaces fell short in our performance evaluation of large M,N scenarios. We experimented, for instance, with eventfd, which has the performance problems discussed below. We also experimented with userspace solutions, such as making each consumer wait on a condition variable guarding the entire list of objects and waking up multiple variables on the producer side, but this is prohibitively expensive: we either need to signal many condition variables, or share one condition variable among multiple waiters and then verify in userspace which event was signaled, which means dealing with frequent false-positive wake-ups.

The natural interface to implement the behavior we want, especially considering that one of the waitable objects is itself a mutex, is the futex interface. Therefore, this patchset proposes a mechanism for a thread to wait on multiple futexes at once and wake up as soon as the first of them is woken. In particular, using futexes in our Wine use case reduced CPU utilization by 4% for the game Beat Saber and by 1.5% for the game Shadow of the Tomb Raider, both running over Proton (a Wine-based solution for Windows emulation), when compared to the eventfd interface. This implementation also doesn't rely on file descriptors, so it doesn't risk exhausting that resource.

In time, we are also proposing modifications to glibc and libpthread to make this feature available to native Linux multithreaded applications using libpthread, which can benefit from waiting on any of a group of futexes.

Technically, the existing FUTEX_WAIT implementation could easily be reworked as futex_wait_multiple() with a count of one, and I have a patch showing how that works. I'm not proposing it here: futex is tricky code, and I'd be more comfortable having FUTEX_WAIT_MULTIPLE run upstream for a couple of development cycles before considering modifications to FUTEX_WAIT.

The patch series includes an extensive set of kselftests validating the behavior of the interface. We also implemented support[1] for it in Syzkaller, and the series survived the fuzz testing.

Finally, if you'd rather pull a branch with this set directly, you can find it here:

  https://gitlab.collabora.com/tonyk/linux/commits/futex-dev
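To make the proposed interface concrete, below is a minimal userspace usage sketch. Caveat: the operation number, the futex_wait_block layout, and the argument mapping are assumptions inferred from this cover letter; the authoritative uAPI is the one defined in patch 1 of the series.

	#include <linux/futex.h>
	#include <stdint.h>
	#include <sys/syscall.h>
	#include <time.h>
	#include <unistd.h>

	/* Assumed ABI -- the real definitions live in
	 * include/uapi/linux/futex.h as modified by patch 1. */
	#ifndef FUTEX_WAIT_MULTIPLE
	#define FUTEX_WAIT_MULTIPLE 31
	#endif

	struct futex_wait_block {
		uint32_t *uaddr;  /* futex word to wait on */
		uint32_t val;     /* value *uaddr is expected to hold */
		uint32_t bitset;  /* wake-up bitset, e.g. FUTEX_BITSET_MATCH_ANY */
	};

	/* Sleep until any of the 'count' futexes described by 'blocks'
	 * is woken, or until 'timeout' expires (NULL waits forever). */
	static long futex_wait_multiple(struct futex_wait_block *blocks,
	                                uint32_t count,
	                                const struct timespec *timeout)
	{
		return syscall(SYS_futex, blocks, FUTEX_WAIT_MULTIPLE, count,
		               timeout, NULL, 0);
	}

A waiter fills one futex_wait_block per object, e.g. { &word[i], expected[i], FUTEX_BITSET_MATCH_ANY }, while producers keep using a plain FUTEX_WAKE on whichever word became ready.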
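The next section discusses why the obvious alternative, polling a set of eventfd contexts, did not hold up. To ground that discussion, here is roughly the consumer-side pattern we measured against; this is a simplified, illustrative sketch (fixed-size set, minimal error handling), not the exact benchmark code:

	#include <poll.h>
	#include <stdint.h>
	#include <sys/eventfd.h>
	#include <unistd.h>

	#define NOBJS 64  /* WaitMultipleObjects handles at most 64 objects */

	/* Each fd in 'efds' was created with eventfd(0, EFD_SEMAPHORE), so a
	 * read() consumes exactly one count, giving semaphore semantics.
	 * Returns the index of a signaled object, or -1 on error. */
	static int wait_any(const int efds[NOBJS])
	{
		struct pollfd pfds[NOBJS];
		uint64_t ticket;
		int i;

		for (i = 0; i < NOBJS; i++) {
			pfds[i].fd = efds[i];
			pfds[i].events = POLLIN;
		}

		/* queues on every context's waitqueue, taking its lock */
		if (poll(pfds, NOBJS, -1) < 0)
			return -1;

		for (i = 0; i < NOBJS; i++) {
			if (pfds[i].revents & POLLIN) {
				/* read() takes ctx->wqh.lock again */
				if (read(efds[i], &ticket, sizeof(ticket)) < 0)
					return -1;
				return i;
			}
		}
		return -1;
	}

The producer signals object i with a write() of 1 to efds[i]. Every poll(), read(), and write() above serializes on the per-context waitqueue lock, which is exactly the contention described below.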
=== Performance of eventfd ===

Polling on several eventfd contexts with semaphore semantics would provide us with the exact semantics we are looking for. However, as shown below, in a scenario with enough producers and consumers, the eventfd interface itself becomes a bottleneck, in particular because each thread competes to acquire a sequence of waitqueue locks, one for each eventfd context in the poll list. In addition, in the uncontended case, where the producer is already ready for consumption, eventfd still requires going into the kernel on the consumer side.

When a write or a read operation on an eventfd file succeeds, it tries to wake up all threads that are waiting to perform some operation on the file. The lock (ctx->wqh.lock) that guards the file's counter (ctx->count) is the same lock used to control access to the waitqueue. When all those threads wake up, they compete for this lock. On top of that, poll() also manipulates the waitqueue and needs to hold this same lock. The lock is especially hard to acquire when poll() calls poll_freewait(), where it tries to free all waitqueues associated with the poll; while doing that, it competes with the many read and write operations that have just been woken.

In our use case, with a huge number of parallel reads, writes, and polls, this lock is a bottleneck and hurts application performance. Our futex implementation, by contrast, decreases spin lock calls by more than 80% in some user applications.

Finally, eventfd operates on file descriptors, a limited resource that has shown its limits in our use cases. Even though the Windows interface doesn't wait on more than 64 objects at once, we still have multiple waiters at the same time, and we easily exhausted the FD limits in applications like games.

The RFC for this patchset can be found here:

  https://lkml.org/lkml/2019/7/30/1399

Thanks,
	André

[1] https://github.com/andrealmeid/syzkaller/tree/futex-wait-multiple

Gabriel Krisman Bertazi (4):
  futex: Implement mechanism to wait on any of several futexes
  selftests: futex: Add FUTEX_WAIT_MULTIPLE timeout test
  selftests: futex: Add FUTEX_WAIT_MULTIPLE wouldblock test
  selftests: futex: Add FUTEX_WAIT_MULTIPLE wake up test

 include/uapi/linux/futex.h                       |  20 +
 kernel/futex.c                                   | 356 +++++++++++++++++-
 .../selftests/futex/functional/.gitignore       |   1 +
 .../selftests/futex/functional/Makefile         |   3 +-
 .../futex/functional/futex_wait_multiple.c      | 173 +++++++++
 .../futex/functional/futex_wait_timeout.c       |  38 +-
 .../futex/functional/futex_wait_wouldblock.c    |  28 +-
 .../testing/selftests/futex/functional/run.sh   |   3 +
 .../selftests/futex/include/futextest.h         |  22 +
 9 files changed, 635 insertions(+), 7 deletions(-)
 create mode 100644 tools/testing/selftests/futex/functional/futex_wait_multiple.c

-- 
2.25.0