Date: Sun, 16 May 2021 20:12:27 -0700
From: Davidlohr Bueso
To: Manfred Spraul
Cc: LKML, Andrew Morton, "Paul E. McKenney", 1vier1@web.de
Subject: Re: [PATCH] ipc/sem.c: use READ_ONCE()/WRITE_ONCE() for use_global_lock
Message-ID: <20210517031227.qqir3hk2z45pzjum@offworld>
References: <20210514175319.12195-1-manfred@colorfullife.com>
In-Reply-To: <20210514175319.12195-1-manfred@colorfullife.com>

On Fri, 14 May 2021, Manfred Spraul wrote:

>The patch solves two weaknesses in ipc/sem.c:
>
>1) The initial read of use_global_lock in sem_lock() is an
>intentional race. KCSAN detects these accesses and prints
>a warning.
>
>2) The code assumes that plain C read/writes are not
>mangled by the CPU or the compiler.
>
>To solve both issues, use READ_ONCE()/WRITE_ONCE().
>Plain C reads are used in code that owns sma->sem_perm.lock.
>

Reviewed-by: Davidlohr Bueso

>Signed-off-by: Manfred Spraul
>---
> ipc/sem.c | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
>diff --git a/ipc/sem.c b/ipc/sem.c
>index bf534c74293e..a0ad3a3edde2 100644
>--- a/ipc/sem.c
>+++ b/ipc/sem.c
>@@ -217,6 +217,8 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it);
> * this smp_load_acquire(), this is guaranteed because the smp_load_acquire()
> * is inside a spin_lock() and after a write from 0 to non-zero a
> * spin_lock()+spin_unlock() is done.
>+ * To prevent the compiler/cpu temporarily writing 0 to use_global_lock,
>+ * READ_ONCE()/WRITE_ONCE() is used.
> *
> * 2) queue.status: (SEM_BARRIER_2)
> * Initialization is done while holding sem_lock(), so no further barrier is
>@@ -342,10 +344,10 @@ static void complexmode_enter(struct sem_array *sma)
> * Nothing to do, just reset the
> * counter until we return to simple mode.
> */
>- sma->use_global_lock = USE_GLOBAL_LOCK_HYSTERESIS;
>+ WRITE_ONCE(sma->use_global_lock, USE_GLOBAL_LOCK_HYSTERESIS);
> return;
> }
>- sma->use_global_lock = USE_GLOBAL_LOCK_HYSTERESIS;
>+ WRITE_ONCE(sma->use_global_lock, USE_GLOBAL_LOCK_HYSTERESIS);
>
> for (i = 0; i < sma->sem_nsems; i++) {
> sem = &sma->sems[i];
>@@ -371,7 +373,8 @@ static void complexmode_tryleave(struct sem_array *sma)
> /* See SEM_BARRIER_1 for purpose/pairing */
> smp_store_release(&sma->use_global_lock, 0);
> } else {
>- sma->use_global_lock--;
>+ WRITE_ONCE(sma->use_global_lock,
>+ sma->use_global_lock-1);
> }
> }
>
>@@ -412,7 +415,7 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
> * Initial check for use_global_lock. Just an optimization,
> * no locking, no memory barrier.
> */
>- if (!sma->use_global_lock) {
>+ if (!READ_ONCE(sma->use_global_lock)) {
> /*
> * It appears that no complex operation is around.
> * Acquire the per-semaphore lock.
>--
>2.31.1
>
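
For anyone following along, here is a minimal userspace sketch of the pattern the
patch relies on: a lockless, intentionally racy read of a "mode hint" on the fast
path, with every access that can race marked so the compiler cannot tear, fuse or
re-materialize the loads and stores. The READ_ONCE()/WRITE_ONCE() definitions below
are simplified volatile-cast stand-ins for the kernel macros, and the names (hint,
big_lock, HYSTERESIS) and the thread setup are made up for illustration; this is
not the ipc/sem.c code itself.

/*
 * Sketch: lockless hint read on the fast path, marked accesses for
 * everything that races with it.  Build with: gcc -O2 -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

#define READ_ONCE(x)      (*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v)  (*(volatile typeof(x) *)&(x) = (v))

#define HYSTERESIS 10

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
static int hint;   /* plays the role of sma->use_global_lock */

/* Slow path: force "global" mode for a while (cf. complexmode_enter()). */
static void complex_enter(void)
{
	pthread_mutex_lock(&big_lock);
	/* Lockless readers may race with this store, so mark it. */
	WRITE_ONCE(hint, HYSTERESIS);
	pthread_mutex_unlock(&big_lock);
}

/* Fast path: cheap, racy pre-check (cf. the sem_lock() optimization). */
static void *fast_path(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000; i++) {
		/* Intentionally racy: the value may be stale, it is only a hint. */
		if (!READ_ONCE(hint))
			continue;   /* hint says simple mode, nothing to do */

		/* Hint was set; re-check and decay it under the lock. */
		pthread_mutex_lock(&big_lock);
		if (hint > 0)                        /* plain read: lock is held */
			WRITE_ONCE(hint, hint - 1);  /* marked: racy readers exist */
		pthread_mutex_unlock(&big_lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, fast_path, NULL);
	complex_enter();
	pthread_join(t, NULL);
	printf("final hint = %d\n", READ_ONCE(hint));
	return 0;
}

The point, as in the patch, is that reads done while holding big_lock can stay
plain, while anything that might race with the lockless fast-path check gets
marked with READ_ONCE()/WRITE_ONCE().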