Date: Tue, 16 Apr 2019 18:01:13 +0200
From: Peter Zijlstra
To: Waiman Long
Cc: Ingo Molnar, Will Deacon, Thomas Gleixner, linux-kernel@vger.kernel.org,
	x86@kernel.org, Davidlohr Bueso, Linus Torvalds, Tim Chen, huang ying
Subject: Re: [PATCH v4 06/16] locking/rwsem: Code cleanup after files merging
Message-ID: <20190416160113.GM12232@hirez.programming.kicks-ass.net>
References: <20190413172259.2740-1-longman@redhat.com>
 <20190413172259.2740-7-longman@redhat.com>
In-Reply-To: <20190413172259.2740-7-longman@redhat.com>

More cleanups..

---
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -303,7 +303,7 @@ static void __rwsem_mark_wake(struct rw_
 		list_del(&waiter->list);
 		/*
 		 * Ensure calling get_task_struct() before setting the reader
-		 * waiter to nil such that rwsem_down_read_failed() cannot
+		 * waiter to nil such that rwsem_down_read_slow() cannot
 		 * race with do_exit() by always holding a reference count
 		 * to the task to wakeup.
 		 */
@@ -500,7 +500,7 @@ static bool rwsem_optimistic_spin(struct
  * Wait for the read lock to be granted
  */
 static inline struct rw_semaphore __sched *
-__rwsem_down_read_failed_common(struct rw_semaphore *sem, int state)
+rwsem_down_read_slow(struct rw_semaphore *sem, int state)
 {
 	long count, adjustment = -RWSEM_READER_BIAS;
 	struct rwsem_waiter waiter;
@@ -572,23 +572,11 @@ __rwsem_down_read_failed_common(struct r
 	return ERR_PTR(-EINTR);
 }
 
-static inline struct rw_semaphore * __sched
-rwsem_down_read_failed(struct rw_semaphore *sem)
-{
-	return __rwsem_down_read_failed_common(sem, TASK_UNINTERRUPTIBLE);
-}
-
-static inline struct rw_semaphore * __sched
-rwsem_down_read_failed_killable(struct rw_semaphore *sem)
-{
-	return __rwsem_down_read_failed_common(sem, TASK_KILLABLE);
-}
-
 /*
  * Wait until we successfully acquire the write lock
  */
 static inline struct rw_semaphore *
-__rwsem_down_write_failed_common(struct rw_semaphore *sem, int state)
+rwsem_down_write_slow(struct rw_semaphore *sem, int state)
 {
 	long count;
 	bool waiting = true; /* any queued threads before us */
@@ -689,18 +677,6 @@ __rwsem_down_write_failed_common(struct
 	return ERR_PTR(-EINTR);
 }
 
-static inline struct rw_semaphore * __sched
-rwsem_down_write_failed(struct rw_semaphore *sem)
-{
-	return __rwsem_down_write_failed_common(sem, TASK_UNINTERRUPTIBLE);
-}
-
-static inline struct rw_semaphore * __sched
-rwsem_down_write_failed_killable(struct rw_semaphore *sem)
-{
-	return __rwsem_down_write_failed_common(sem, TASK_KILLABLE);
-}
-
 /*
  * handle waking up a waiter on the semaphore
  * - up_read/up_write has decremented the active part of count if we come here
@@ -749,7 +725,7 @@ inline void __down_read(struct rw_semaph
 {
 	if (unlikely(atomic_long_fetch_add_acquire(RWSEM_READER_BIAS,
 			&sem->count) & RWSEM_READ_FAILED_MASK)) {
-		rwsem_down_read_failed(sem);
+		rwsem_down_read_slow(sem, TASK_UNINTERRUPTIBLE);
 		DEBUG_RWSEMS_WARN_ON(!((unsigned long)sem->owner &
					RWSEM_READER_OWNED), sem);
 	} else {
@@ -761,7 +737,7 @@ static inline int __down_read_killable(s
 {
 	if (unlikely(atomic_long_fetch_add_acquire(RWSEM_READER_BIAS,
 			&sem->count) & RWSEM_READ_FAILED_MASK)) {
-		if (IS_ERR(rwsem_down_read_failed_killable(sem)))
+		if (IS_ERR(rwsem_down_read_slow(sem, TASK_KILLABLE)))
 			return -EINTR;
 		DEBUG_RWSEMS_WARN_ON(!((unsigned long)sem->owner &
					RWSEM_READER_OWNED), sem);
@@ -794,34 +770,38 @@ static inline int __down_read_trylock(st
  */
 static inline void __down_write(struct rw_semaphore *sem)
 {
-	if (unlikely(atomic_long_cmpxchg_acquire(&sem->count, 0,
-						 RWSEM_WRITER_LOCKED)))
-		rwsem_down_write_failed(sem);
+	long tmp = RWSEM_UNLOCKED_VALUE;
+
+	if (unlikely(!atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
+						      RWSEM_WRITER_LOCKED)))
+		rwsem_down_write_slow(sem, TASK_UNINTERRUPTIBLE);
 	rwsem_set_owner(sem);
 }
 
 static inline int __down_write_killable(struct rw_semaphore *sem)
 {
-	if (unlikely(atomic_long_cmpxchg_acquire(&sem->count, 0,
-						 RWSEM_WRITER_LOCKED)))
-		if (IS_ERR(rwsem_down_write_failed_killable(sem)))
+	long tmp = RWSEM_UNLOCKED_VALUE;
+
+	if (unlikely(!atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
+						      RWSEM_WRITER_LOCKED))) {
+		if (IS_ERR(rwsem_down_write_slow(sem, TASK_KILLABLE)))
 			return -EINTR;
+	}
 	rwsem_set_owner(sem);
 	return 0;
 }
 
 static inline int __down_write_trylock(struct rw_semaphore *sem)
 {
-	long tmp;
+	long tmp = RWSEM_UNLOCKED_VALUE;
 
 	lockevent_inc(rwsem_wtrylock);
-	tmp = atomic_long_cmpxchg_acquire(&sem->count, RWSEM_UNLOCKED_VALUE,
-					  RWSEM_WRITER_LOCKED);
-	if (tmp == RWSEM_UNLOCKED_VALUE) {
-		rwsem_set_owner(sem);
-		return true;
-	}
-	return false;
+	if (!atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
+					     RWSEM_WRITER_LOCKED))
+		return false;
+
+	rwsem_set_owner(sem);
+	return true;
 }
 
 /*
@@ -831,12 +811,11 @@ inline void __up_read(struct rw_semaphor
 {
 	long tmp;
 
-	DEBUG_RWSEMS_WARN_ON(!((unsigned long)sem->owner & RWSEM_READER_OWNED),
-				sem);
+	DEBUG_RWSEMS_WARN_ON(!((unsigned long)sem->owner & RWSEM_READER_OWNED), sem);
 	rwsem_clear_reader_owned(sem);
 	tmp = atomic_long_add_return_release(-RWSEM_READER_BIAS, &sem->count);
-	if (unlikely((tmp & (RWSEM_LOCK_MASK|RWSEM_FLAG_WAITERS))
-			== RWSEM_FLAG_WAITERS))
+	if (unlikely((tmp & (RWSEM_LOCK_MASK|RWSEM_FLAG_WAITERS)) ==
+		      RWSEM_FLAG_WAITERS))
 		rwsem_wake(sem);
 }
 
@@ -848,7 +827,7 @@ static inline void __up_write(struct rw_
 	DEBUG_RWSEMS_WARN_ON(sem->owner != current, sem);
 	rwsem_clear_owner(sem);
 	if (unlikely(atomic_long_fetch_add_release(-RWSEM_WRITER_LOCKED,
-						   &sem->count) & RWSEM_FLAG_WAITERS))
+			&sem->count) & RWSEM_FLAG_WAITERS))
 		rwsem_wake(sem);
 }
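
For reference, the cmpxchg() -> try_cmpxchg() conversion above follows the
usual pattern: the old-style cmpxchg() returns the value it observed and the
caller compares it against the expected one by hand, while try_cmpxchg()
returns a boolean and writes the observed value back through its second
argument, so the extra comparison goes away (and on x86 the flags produced by
the cmpxchg instruction can be reused). Below is a minimal user-space sketch
of the same calling convention, using C11 atomics as a stand-in for the
kernel's atomic_long_*() helpers; the helper and function names are
illustrative only, not kernel API.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define RWSEM_UNLOCKED_VALUE	0L
#define RWSEM_WRITER_LOCKED	1L

/* Kernel-style cmpxchg(): returns the value observed in *v. */
static long cmpxchg_long(atomic_long *v, long old, long new)
{
	long expected = old;

	atomic_compare_exchange_strong(v, &expected, new);
	return expected;	/* == old on success, observed value on failure */
}

/* Old pattern: call cmpxchg() and compare the returned value by hand. */
static bool write_trylock_old(atomic_long *count)
{
	long tmp = cmpxchg_long(count, RWSEM_UNLOCKED_VALUE, RWSEM_WRITER_LOCKED);

	return tmp == RWSEM_UNLOCKED_VALUE;
}

/*
 * New pattern: a try_cmpxchg()-style call reports success directly and
 * leaves the observed value in tmp, so no second comparison is needed.
 */
static bool write_trylock_new(atomic_long *count)
{
	long tmp = RWSEM_UNLOCKED_VALUE;

	return atomic_compare_exchange_strong(count, &tmp, RWSEM_WRITER_LOCKED);
}

int main(void)
{
	atomic_long count = RWSEM_UNLOCKED_VALUE;

	printf("first trylock:  %d\n", write_trylock_new(&count));	/* 1: acquired */
	printf("second trylock: %d\n", write_trylock_old(&count));	/* 0: already held */
	return 0;
}

The sketch only mirrors the calling convention; the kernel helpers carry
acquire ordering, which the seq_cst C11 operations used here over-satisfy.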