From: John Ogness
To: Petr Mladek
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Sergey Senozhatsky,
        Steven Rostedt, Daniel Wang, Andrew Morton, Linus Torvalds,
        Greg Kroah-Hartman, Alan Cox, Jiri Slaby, Peter Feiner,
        linux-serial@vger.kernel.org, Sergey Senozhatsky
Subject: Re: [RFC PATCH v1 02/25] printk-rb: add prb locking functions
References: <20190212143003.48446-1-john.ogness@linutronix.de>
        <20190212143003.48446-3-john.ogness@linutronix.de>
        <20190213154541.wvft64nf352vghou@pathway.suse.cz>
        <87pnrvs707.fsf@linutronix.de>
        <20190214103324.viexpifsyons5qya@pathway.suse.cz>
Date: Thu, 14 Feb 2019 13:10:28 +0100
In-Reply-To: <20190214103324.viexpifsyons5qya@pathway.suse.cz>
        (Petr Mladek's message of "Thu, 14 Feb 2019 11:33:24 +0100")
Message-ID: <87y36ih8p7.fsf@linutronix.de>
On 2019-02-14, Petr Mladek wrote:
>>> cpu_store looks like an implementation detail. The caller
>>> needs to remember it to handle the nesting properly.
>>>
>>> We could achieve the same with a recursion counter hidden
>>> in struct prb_lock.
>
> The atomic operations are tricky. I feel rather lost in them.
> Well, I still think that it might be easier to detect nesting
> on the same CPU, see below.
>
> Also there is no need to store irq flags in a per-CPU variable.
> Only the first owner of the lock needs to store the flags. The
> others are spinning or nested.
>
> struct prb_cpulock {
>         atomic_t owner;
>         unsigned int flags;
>         int nested;        /* initialized to 0 */
> };
>
> void prb_lock(struct prb_cpulock *cpu_lock)
> {
>         unsigned int flags;
>         int cpu;

I added an explicit preempt_disable here:

          cpu = get_cpu();

>         /*
>          * The next condition might be valid only when
>          * we are nested on the same CPU. It means
>          * the IRQs are already disabled and no
>          * memory barrier is needed.
>          */
>         if (cpu_lock->owner == smp_processor_id()) {
>                 cpu_lock->nested++;
>                 return;
>         }
>
>         /* Not nested. Take the lock */
>         local_irq_save(flags);
>         cpu = smp_processor_id();
>
>         for (;;) {

With fixups so it builds/runs:

                  unsigned int prev_cpu = -1;

>                 if (atomic_try_cmpxchg_acquire(&cpu_lock->owner,
>                                                &prev_cpu, cpu)) {
>                         cpu_lock->flags = flags;
>                         break;
>                 }
>
>                 cpu_relax();
>         }
> }
>
> void prb_unlock(struct prb_cpulock *cpu_lock)
> {
>         unsigned int flags;
>
>         if (cpu_lock->nested) {
>                 cpu_lock->nested--;

And the matching preempt_enable().

                  goto out;

>         }
>
>         /* We must be the first lock owner */
>         flags = cpu_lock->flags;
>         atomic_set_release(&cpu_lock->owner, -1);
>         local_irq_restore(flags);

out:
          put_cpu();

> }
>
> Or do I miss anything?

It looks great. I've run my stress tests on it and everything is
running well.

Thanks for simplifying this!

John Ogness
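
For reference, folding the fixups above back into Petr's proposal gives
the complete picture. The following is an illustrative consolidation,
not necessarily the exact code that was stress-tested. A few additional
build fixes are assumed: flags is widened to unsigned long (the type
local_irq_save() works with), the owner check goes through atomic_read()
since owner is an atomic_t, prev_cpu is an int (atomic_try_cmpxchg_acquire()
takes an int *), and the redundant smp_processor_id() assignment is
dropped because get_cpu() already returns the CPU id.

#include <linux/atomic.h>
#include <linux/irqflags.h>
#include <linux/processor.h>
#include <linux/smp.h>

struct prb_cpulock {
        atomic_t owner;         /* CPU id of the holder, -1 when free */
        unsigned long flags;    /* irq flags saved by the first owner */
        int nested;             /* recursion depth on the owning CPU */
};

void prb_lock(struct prb_cpulock *cpu_lock)
{
        unsigned long flags;
        int cpu;

        /* Disable preemption so the CPU id stays stable below. */
        cpu = get_cpu();

        /*
         * Nested on the same CPU: IRQs were already disabled when the
         * lock was first taken, so no memory barrier is needed here.
         */
        if (atomic_read(&cpu_lock->owner) == cpu) {
                cpu_lock->nested++;
                return;
        }

        /* Not nested: disable IRQs and spin until the lock is free. */
        local_irq_save(flags);

        for (;;) {
                /*
                 * Reset each iteration: on failure the cmpxchg writes
                 * the observed owner back into prev_cpu.
                 */
                int prev_cpu = -1;

                if (atomic_try_cmpxchg_acquire(&cpu_lock->owner,
                                               &prev_cpu, cpu)) {
                        /* Only the first owner stores the irq flags. */
                        cpu_lock->flags = flags;
                        break;
                }

                cpu_relax();
        }
}

void prb_unlock(struct prb_cpulock *cpu_lock)
{
        unsigned long flags;

        /* Nested release: just unwind one level. */
        if (cpu_lock->nested) {
                cpu_lock->nested--;
                goto out;
        }

        /* Outermost release: free the lock, then restore IRQ state. */
        flags = cpu_lock->flags;
        atomic_set_release(&cpu_lock->owner, -1);
        local_irq_restore(flags);
out:
        /* Matches the get_cpu() in prb_lock(). */
        put_cpu();
}

Note that the nested prb_lock() path returns without a put_cpu() on
purpose: every prb_lock() is paired with a prb_unlock(), and both
unlock paths reach put_cpu() through the out label, so the preempt
count stays balanced.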