From: Petr Mladek
To: John Ogness
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Sergey Senozhatsky,
 Steven Rostedt, Daniel Wang, Andrew Morton, Linus Torvalds,
 Greg Kroah-Hartman, Alan Cox, Jiri Slaby, Peter Feiner,
 linux-serial@vger.kernel.org, Sergey Senozhatsky
Subject: Re: [RFC PATCH v1 10/25] printk: redirect emit/store to new ringbuffer
Date: Fri, 22 Feb 2019 16:25:41 +0100
Message-ID: <20190222152541.33xp2btltwcecxz7@pathway.suse.cz>
References: <20190212143003.48446-1-john.ogness@linutronix.de>
 <20190212143003.48446-11-john.ogness@linutronix.de>
 <20190220090112.xbnauwt2w7gwtebo@pathway.suse.cz>
 <8736oijgpf.fsf@linutronix.de>
 <20190222144302.44zl37p75qgaixf3@pathway.suse.cz>
 <87va1byia5.fsf@linutronix.de>
In-Reply-To: <87va1byia5.fsf@linutronix.de>
On Fri 2019-02-22 16:06:26, John Ogness wrote:
> On 2019-02-22, Petr Mladek wrote:
> >>>> +	rbuf = prb_reserve(&h, &sprint_rb, PRINTK_SPRINT_MAX);
> >>>
> >>> The second ring buffer for temporary buffers is really interesting
> >>> idea.
> >>>
> >>> Well, it brings some questions. For example, how many users might
> >>> need a reservation in parallel. Or if the nested use might cause
> >>> some problems when we decide to use printk-specific ring buffer
> >>> implementation. I still have to think about it.
> >>
> >> Keep in mind that it is only used by the writers, which have the
> >> prb_cpulock. Typically there would only be 2 max users: a non-NMI
> >> writer that was interrupted during the reserve/commit window and the
> >> interrupting NMI that does printk. The only exception would be if the
> >> printk code itself triggers a BUG_ON or WARN_ON within the
> >> reserve/commit window. Then you will have an additional user per
> >> recursion level.
> >
> > I am not sure it is worth to call the ring buffer machinery just
> > to handle 2-3 buffers.
>
> It may be slightly overkill, but:
>
> 1. We have the prb_cpulock at this point anyway, so it will be
>    fast. (Both ring buffers share the same prb_cpulock.)

I am still not persuaded that we really need the lock. The
implementation looks almost ready for fully lockless writers.
But I might be wrong.

The lock might be fine when it makes the code easier and does not
bring any deadlocks.

> 2. Getting a safe buffer is just 1 line of code: prb_reserve()

The problem is how much complicated code is hidden behind this one
line of code.

> 3. Why should we waste _any_ lines of code implementing the handling of
>    these special 3-4 buffers?

It might be worth it if it makes the code more straightforward and
less prone to bugs.

> > Well, it might be just my mental block. We need to be really careful
> > to avoid infinite recursion when storing messages into the log
> > buffer.
>
> The recursion works well. I inserted a triggerable BUG_ON() in
> vprintk_emit() _within_ the reserve/commit window and I see a clean
> backtrace on the emergency console.

Have you tested all possible error situations that might happen?
Testing helps a lot. But real life often brings surprises.

Best Regards,
Petr