Date: Thu, 14 Feb 2019 11:33:24 +0100
From: Petr Mladek
To: John Ogness
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Sergey Senozhatsky,
    Steven Rostedt, Daniel Wang, Andrew Morton, Linus Torvalds,
    Greg Kroah-Hartman, Alan Cox, Jiri Slaby, Peter Feiner,
    linux-serial@vger.kernel.org, Sergey Senozhatsky
Subject: Re: [RFC PATCH v1 02/25] printk-rb: add prb locking functions
Message-ID: <20190214103324.viexpifsyons5qya@pathway.suse.cz>
References: <20190212143003.48446-1-john.ogness@linutronix.de>
            <20190212143003.48446-3-john.ogness@linutronix.de>
            <20190213154541.wvft64nf352vghou@pathway.suse.cz>
            <87pnrvs707.fsf@linutronix.de>
In-Reply-To: <87pnrvs707.fsf@linutronix.de>
List-ID: linux-kernel@vger.kernel.org
On Wed 2019-02-13 22:39:20, John Ogness wrote:
> On 2019-02-13, Petr Mladek wrote:
> >> +/*
> >> + * prb_unlock: Perform a processor-reentrant spin unlock.
> >> + * @cpu_lock: A pointer to the lock object.
> >> + * @cpu_store: A "flags" object storing lock status information.
> >> + *
> >> + * Release the lock. The calling processor must be the owner of the lock.
> >> + *
> >> + * It is safe to call this function from any context and state.
> >> + */
> >> +void prb_unlock(struct prb_cpulock *cpu_lock, unsigned int cpu_store)
> >> +{
> >> +	unsigned long *flags;
> >> +	unsigned int cpu;
> >> +
> >> +	cpu = atomic_read(&cpu_lock->owner);
> >> +	atomic_set_release(&cpu_lock->owner, cpu_store);
> >> +
> >> +	if (cpu_store == -1) {
> >> +		flags = per_cpu_ptr(cpu_lock->irqflags, cpu);
> >> +		local_irq_restore(*flags);
> >> +	}
> >
> > cpu_store looks like an implementation detail. The caller
> > needs to remember it to handle the nesting properly.
>
> It's really no different than "flags" in irqsave/irqrestore.
>
> > We could achieve the same with a recursion counter hidden
> > in struct prb_lock.
>
> The only way I see how that could be implemented is if the cmpxchg
> encoded the cpu owner and counter into a single integer. (Upper half
> as counter, lower half as cpu owner.) Both fields would need to be
> updated with a single cmpxchg. The critical cmpxchg being the one
> where the CPU becomes unlocked (counter goes from 1 to 0 and cpu
> owner goes from N to -1).

The atomic operations are tricky. I feel rather lost in them.
Well, I still think that it might be easier to detect nesting
on the same CPU, see below.

Also there is no need to store the irq flags in a per-CPU variable.
Only the first owner of the lock needs to store the flags. The others
are either spinning or nested.

struct prb_cpulock {
	atomic_t owner;
	unsigned long flags;
	int nested;		/* initialized to 0 */
};

void prb_lock(struct prb_cpulock *cpu_lock)
{
	unsigned long flags;
	int cpu;

	/*
	 * The next condition might be valid only when
	 * we are nested on the same CPU. It means
	 * the IRQs are already disabled and no
	 * memory barrier is needed.
	 */
	if (atomic_read(&cpu_lock->owner) == smp_processor_id()) {
		cpu_lock->nested++;
		return;
	}

	/* Not nested. Take the lock. */
	local_irq_save(flags);
	cpu = smp_processor_id();

	for (;;) {
		int old = -1;

		if (atomic_try_cmpxchg_acquire(&cpu_lock->owner,
					       &old, cpu)) {
			/* Only the first owner stores the irq flags. */
			cpu_lock->flags = flags;
			break;
		}

		cpu_relax();
	}
}

void prb_unlock(struct prb_cpulock *cpu_lock)
{
	unsigned long flags;

	/* Nested call on the owning CPU: just drop the counter. */
	if (cpu_lock->nested) {
		cpu_lock->nested--;
		return;
	}

	/* We must be the first lock owner. */
	flags = cpu_lock->flags;
	atomic_set_release(&cpu_lock->owner, -1);
	local_irq_restore(flags);
}

Or do I miss anything?

Best Regards,
Petr
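
For illustration only, a minimal stand-alone user-space model of the
counter-packing variant John describes above (upper half of one word as
the nesting counter, lower half as the owning CPU). This is a sketch,
not code from the patch series; the names (pack, model_lock,
model_unlock, CPU_NONE) and the use of C11 <stdatomic.h> are assumptions
made for the example. It only demonstrates that nesting and ownership
can be changed together, so the final unlock (counter 1 -> 0, owner
N -> none) happens in a single cmpxchg.

	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	#define CPU_NONE	0xffffu		/* lower half value for "no owner" */

	/* Pack nesting counter (upper 16 bits) and owning CPU (lower 16 bits). */
	static inline uint32_t pack(uint16_t count, uint16_t cpu)
	{
		return ((uint32_t)count << 16) | cpu;
	}

	static _Atomic uint32_t owner = CPU_NONE;	/* count 0, no owner */

	static void model_lock(uint16_t cpu)
	{
		uint32_t old = atomic_load(&owner);

		for (;;) {
			uint32_t new;

			if ((old & 0xffffu) == cpu)		/* already ours: nest */
				new = pack((old >> 16) + 1, cpu);
			else if (old == pack(0, CPU_NONE))	/* free: take it */
				new = pack(1, cpu);
			else {					/* owned elsewhere: spin */
				old = atomic_load(&owner);
				continue;
			}

			/* On failure, "old" is refreshed and we retry. */
			if (atomic_compare_exchange_weak(&owner, &old, new))
				return;
		}
	}

	static void model_unlock(uint16_t cpu)
	{
		uint32_t old = atomic_load(&owner);
		uint32_t new;

		do {
			uint16_t count = old >> 16;

			/* Counter 1 -> 0 and owner cpu -> none in one cmpxchg. */
			new = (count == 1) ? pack(0, CPU_NONE)
					   : pack(count - 1, cpu);
		} while (!atomic_compare_exchange_weak(&owner, &old, new));
	}

	int main(void)
	{
		model_lock(3);		/* outer acquire by CPU 3   */
		model_lock(3);		/* nested acquire, no spin  */
		model_unlock(3);	/* counter 2 -> 1           */
		model_unlock(3);	/* counter 1 -> 0, released */
		printf("owner word: 0x%08x\n", (unsigned int)atomic_load(&owner));
		return 0;
	}

The model prints "owner word: 0x0000ffff", i.e. the lock word is back to
"no owner, nesting 0" after the nested lock/unlock pairs.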