Subject: Re: [PATCH v4 14/16] locking/rwsem: Guard against making count negative
From: Waiman Long <longman@redhat.com>
To: Peter Zijlstra
Cc: Ingo Molnar, Will Deacon, Thomas Gleixner, linux-kernel@vger.kernel.org,
    x86@kernel.org, Davidlohr Bueso, Linus Torvalds, Tim Chen, huang ying
References: <20190413172259.2740-1-longman@redhat.com>
 <20190413172259.2740-15-longman@redhat.com>
 <20190418135151.GB12232@hirez.programming.kicks-ass.net>
 <20190418144036.GE12232@hirez.programming.kicks-ass.net>
Organization: Red Hat
Message-ID: <4cbd3c18-c9c0-56eb-4e01-ee355a69057a@redhat.com>
Date: Thu, 18 Apr 2019 10:54:19 -0400
In-Reply-To: <20190418144036.GE12232@hirez.programming.kicks-ass.net>

On 04/18/2019 10:40 AM, Peter Zijlstra wrote:
> On Thu, Apr 18, 2019 at 10:08:28AM -0400, Waiman Long wrote:
>> On 04/18/2019 09:51 AM, Peter Zijlstra wrote:
>>> On Sat, Apr 13, 2019 at 01:22:57PM -0400, Waiman Long wrote:
>>>> inline void __down_read(struct rw_semaphore *sem)
>>>> {
>>>> +	long count = atomic_long_fetch_add_acquire(RWSEM_READER_BIAS,
>>>> +						   &sem->count);
>>>> +
>>>> +	if (unlikely(count & RWSEM_READ_FAILED_MASK)) {
>>>> +		rwsem_down_read_failed(sem, count);
>>>> 		DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
>>>> 	} else {
>>>> 		rwsem_set_reader_owned(sem);
>>>
>>> *groan*, that is not provably correct. It is entirely possible to get
>>> enough fetch_add()s piled on top of one another to overflow regardless.
>>>
>>> Unlikely, yes; impossible, no.
>>>
>>> This makes me nervous as heck, I really don't want to ever have to
>>> debug something like that :-(
>>
>> The number of fetch_add()s that can pile up is limited by the number of
>> CPUs available in the system. Yes, if you have a 32k-processor system
>> that has all the CPUs trying to acquire the same read lock, we will
>> have a problem.
>
> Having more CPUs than that is not impossible these days.
> Having more than 32k CPUs contending for the same cacheline will be
> horribly slow.
>
>> Or, as Linus said, if tasks could be kept preempted right after doing
>> the fetch_add, with newly scheduled tasks doing the fetch_add on the
>> same lock again, we could overflow with fewer CPUs.
>
> That.
>
>> How about disabling preemption before the fetch_add and re-enabling it
>> afterward to address the latter concern?
>
> Performance might be an issue; look at what preempt_disable() +
> preempt_enable() generate for ARM64, for example. That's not
> particularly pretty.

That is just for a preempt kernel, right? Thinking about it some more,
the above scenario is less likely to happen on a CONFIG_PREEMPT_VOLUNTARY
kernel, and the preempt_disable() cost will be lower there. A preempt RT
kernel is less likely to run on a system with many CPUs anyway. We could
also make that a config option in a follow-on patch and let the
distributors decide.

>> I have no solution for the first case, though.
>
> A cmpxchg() loop can fix this, but that again has performance
> implications, like you mentioned a while back.

Exactly.

Cheers,
Longman
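[Editor's note] The cmpxchg() loop Peter refers to can be modeled outside the kernel. The sketch below is a hypothetical user-space analogue using C11 atomics, not the kernel's rwsem code: `sketch_rwsem`, `sketch_down_read_fast`, `READER_BIAS`, and `READ_FAILED_MASK` are stand-in names, and a 64-bit count is assumed. The point it illustrates is the trade-off discussed above: unlike an unconditional fetch_add, the CAS loop checks the would-be result first and never moves the count into the failed range, at the cost of possible retries under contention.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the kernel's RWSEM_READER_BIAS / RWSEM_READ_FAILED_MASK. */
#define READER_BIAS      INT64_C(0x100)
#define READ_FAILED_MASK (INT64_C(1) << 62)

struct sketch_rwsem {
	_Atomic int64_t count;
};

/*
 * Reader fastpath as a cmpxchg loop: returns true if the reader bias
 * was added, false if adding it would have entered the failed range
 * (a real implementation would fall back to a slowpath there).
 * The count is only ever modified when the result is known to be safe,
 * so no pile-up of readers can overflow it.
 */
static bool sketch_down_read_fast(struct sketch_rwsem *sem)
{
	int64_t old = atomic_load_explicit(&sem->count, memory_order_relaxed);

	do {
		if ((old + READER_BIAS) & READ_FAILED_MASK)
			return false;	/* would overflow: leave count untouched */
	} while (!atomic_compare_exchange_weak_explicit(&sem->count,
							&old, old + READER_BIAS,
							memory_order_acquire,
							memory_order_relaxed));
	return true;
}
```

Each failed compare-exchange reloads the current count into `old` and re-runs the overflow check, which is exactly where the extra cost relative to a single fetch_add comes from.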