Date: Tue, 20 Aug 2019 10:22:53 +0200
From: Petr Mladek
To: John Ogness
Cc: linux-kernel@vger.kernel.org, Andrea Parri,
        Sergey Senozhatsky, Sergey Senozhatsky,
        Steven Rostedt, Brendan Higgins, Peter Zijlstra,
        Thomas Gleixner, Linus Torvalds, Greg Kroah-Hartman
Subject: assign_desc() barriers: Re: [RFC PATCH v4 1/9] printk-rb: add a new
        printk ringbuffer implementation
Message-ID: <20190820082253.ybys4fsakxxdvahx@pathway.suse.cz>
References: <20190807222634.1723-1-john.ogness@linutronix.de>
        <20190807222634.1723-2-john.ogness@linutronix.de>
In-Reply-To: <20190807222634.1723-2-john.ogness@linutronix.de>
On Thu 2019-08-08 00:32:26, John Ogness wrote:
> --- /dev/null
> +++ b/kernel/printk/ringbuffer.c
> +/**
> + * assign_desc() - Assign a descriptor to the caller.
> + *
> + * @e: The entry structure to store the assigned descriptor to.
> + *
> + * Find an available descriptor to assign to the caller. First it is checked
> + * if the tail descriptor from the committed list can be recycled. If not,
> + * perhaps a never-used descriptor is available. Otherwise, data blocks will
> + * be invalidated until the tail descriptor from the committed list can be
> + * recycled.
> + *
> + * Assigned descriptors are invalid until data has been reserved for them.
> + *
> + * Return: true if a descriptor was assigned, otherwise false.
> + *
> + * This will only fail if it was not possible to invalidate data blocks in
> + * order to recycle a descriptor. This can happen if a writer has reserved but
> + * not yet committed data and that reserved data is currently the oldest data.
> + */
> +static bool assign_desc(struct prb_reserved_entry *e)
> +{
> +        struct printk_ringbuffer *rb = e->rb;
> +        struct prb_desc *d;
> +        struct nl_node *n;
> +        unsigned long i;
> +
> +        for (;;) {
> +                /*
> +                 * jA:
> +                 *
> +                 * Try to recycle a descriptor on the committed list.
> +                 */
> +                n = numlist_pop(&rb->nl);
> +                if (n) {
> +                        d = container_of(n, struct prb_desc, list);
> +                        break;
> +                }
> +
> +                /* Fallback to static never-used descriptors. */
> +                if (atomic_read(&rb->desc_next_unused) < DESCS_COUNT(rb)) {
> +                        i = atomic_fetch_inc(&rb->desc_next_unused);
> +                        if (i < DESCS_COUNT(rb)) {
> +                                d = &rb->descs[i];
> +                                atomic_long_set(&d->id, i);
> +                                break;
> +                        }
> +                }
> +
> +                /*
> +                 * No descriptor available. Make one available for recycling
> +                 * by invalidating data (which some descriptor will be
> +                 * referencing).
> +                 */
> +                if (!dataring_pop(&rb->dr))
> +                        return false;
> +        }
> +
> +        /*
> +         * jB:
> +         *
> +         * Modify the descriptor ID so that users of the descriptor see that
> +         * it has been recycled. A _release() is used so that prb_getdesc()
> +         * callers can see all data ringbuffer updates after issuing a
> +         * pairing smp_rmb(). See iA for details.
> +         *
> +         * Memory barrier involvement:
> +         *
> +         * If dB->iA reads from jB, then dI reads the same value as
> +         * jA->cD->hA.
> +         *
> +         * Relies on:
> +         *
> +         * RELEASE from jA->cD->hA to jB
> +         * matching
> +         * RMB between dB->iA and dI
> +         */
> +        atomic_long_set_release(&d->id, atomic_long_read(&d->id) +
> +                                DESCS_COUNT(rb));

atomic_long_set_release() might be a bit confusing here.
There is no related acquire.

In fact, the d->id manipulation has barriers on both sides:

  + smp_rmb() before, so that all reads are finished before
    the id is updated (release)

  + smp_wmb() after, so that the new ID is written before other
    related values are modified (acquire)

The smp_wmb() barrier is in prb_reserve(). I would move it here.
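Something like this completely untested sketch, using only the
identifiers that are visible in the quoted code:

        /*
         * Make sure that all reads of the old descriptor data are
         * finished before the id is updated.
         */
        smp_rmb();

        atomic_long_set(&d->id, atomic_long_read(&d->id) +
                        DESCS_COUNT(rb));

        /*
         * Make sure that the new id is written before any other
         * values of the recycled descriptor are modified. This is
         * the smp_wmb() that currently sits at kB in prb_reserve().
         */
        smp_wmb();

Readers would then pair with the explicit smp_wmb() via their
existing smp_rmb() instead of relying on the _release().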
Best Regards,
Petr

> +
> +        e->desc = d;
> +        return true;
> +}
> +
> +/**
> + * prb_reserve() - Reserve data in the ringbuffer.
> + *
> + * @e: The entry structure to setup.
> + *
> + * @rb: The ringbuffer to reserve data in.
> + *
> + * @size: The size of the data to reserve.
> + *
> + * This is the public function available to writers to reserve data.
> + *
> + * Context: Any context. Disables local interrupts on success.
> + * Return: A pointer to the reserved data or an ERR_PTR if data could not be
> + *         reserved.
> + *
> + * If the provided size is legal, this will only fail if it was not possible
> + * to invalidate the oldest data block. This can happen if a writer has
> + * reserved but not yet committed data and that reserved data is currently
> + * the oldest data.
> + *
> + * The ERR_PTR values and their meaning:
> + *
> + *   * -EINVAL: illegal @size value
> + *   * -EBUSY: failed to reserve a descriptor (@fail count incremented)
> + *   * -ENOMEM: failed to reserve data (invalid descriptor committed)
> + */
> +char *prb_reserve(struct prb_reserved_entry *e, struct printk_ringbuffer *rb,
> +                  unsigned int size)
> +{
> +        struct prb_desc *d;
> +        unsigned long id;
> +        char *buf;
> +
> +        if (!dataring_checksize(&rb->dr, size))
> +                return ERR_PTR(-EINVAL);
> +
> +        e->rb = rb;
> +
> +        /*
> +         * Disable interrupts during the reserve/commit window in order to
> +         * minimize the number of reserved but not yet committed data blocks
> +         * in the data ringbuffer. Although such data blocks are not bad per
> +         * se, they act as blockers for writers once the data ringbuffer has
> +         * wrapped back to them.
> +         */
> +        local_irq_save(e->irqflags);
> +
> +        /* kA: */
> +        if (!assign_desc(e)) {
> +                /* Failures to reserve descriptors are counted. */
> +                atomic_long_inc(&rb->fail);
> +                buf = ERR_PTR(-EBUSY);
> +                goto err_out;
> +        }
> +
> +        d = e->desc;
> +
> +        /*
> +         * kB:
> +         *
> +         * The descriptor ID has been updated so that its users can see that
> +         * it is now invalid. Issue an smp_wmb() so that upcoming changes to
> +         * the descriptor will not be associated with the old descriptor ID.
> +         * This pairs with the smp_rmb() of prb_desc_busy() (see hB for
> +         * details), the smp_rmb() within numlist_read(), and the smp_rmb()
> +         * of prb_iter_next_valid_entry() (see mD for details).
> +         *
> +         * Memory barrier involvement:
> +         *
> +         * If hA reads from kC, then hC reads from jB.
> +         * If mC reads from kC, then mE reads from jB.
> +         *
> +         * Relies on:
> +         *
> +         * WMB between jB and kC
> +         * matching
> +         * RMB between hA and hC
> +         *
> +         * WMB between jB and kC
> +         * matching
> +         * RMB between mC and mE
> +         */
> +        smp_wmb();
> +
> +        id = atomic_long_read(&d->id);
> +
> +        /* kC: */
> +        buf = dataring_push(&rb->dr, size, &d->desc, id);
> +        if (!buf) {
> +                /* Put the invalid descriptor on the committed list. */
> +                numlist_push(&rb->nl, &d->list, id);
> +                buf = ERR_PTR(-ENOMEM);
> +                goto err_out;
> +        }
> +
> +        return buf;
> +err_out:
> +        local_irq_restore(e->irqflags);
> +        return buf;
> +}
> +EXPORT_SYMBOL(prb_reserve);
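PS: For illustration only, my understanding is that a writer uses the
reserve/commit pair described above roughly like this. It is an
untested sketch; write_record() is a hypothetical helper, and
prb_commit() is assumed to be the matching commit call from this patch
that ends the reserve/commit window and restores the interrupt state
saved by prb_reserve():

        static void write_record(struct printk_ringbuffer *rb,
                                 const char *data, unsigned int len)
        {
                struct prb_reserved_entry e;
                char *buf;

                buf = prb_reserve(&e, rb, len);
                if (IS_ERR(buf))
                        return; /* -EINVAL, -EBUSY or -ENOMEM, see above */

                memcpy(buf, data, len);
                prb_commit(&e); /* assumed commit counterpart */
        }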