From: Sumit Garg
Date: Tue, 18 Aug 2020 18:48:30 +0530
Subject: Re: [RFC 2/5] serial: core: Add framework to allow NMI aware serial drivers
To: Daniel Thompson
Cc: Doug Anderson, Greg Kroah-Hartman, linux-serial@vger.kernel.org,
        kgdb-bugreport@lists.sourceforge.net, Jiri Slaby,
        Russell King - ARM Linux, Jason Wessel, LKML, Linux ARM
In-Reply-To: <20200817143222.x524v6xqw5hvzvjs@holly.lan>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 17 Aug 2020 at 20:02, Daniel Thompson wrote:
>
> On Mon, Aug 17, 2020 at 07:53:55PM +0530, Sumit Garg wrote:
> > On Mon, 17 Aug 2020 at 19:27, Doug Anderson wrote:
> > >
> > > Hi,
> > >
> > > On Mon, Aug 17, 2020 at 5:27 AM Sumit Garg wrote:
> > > >
> > > > Thanks for your suggestion, irq_work_schedule() looked even better
> > > > without any overhead, see below:
> > > >
> > > > diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
> > > > index 3082378..1eade89 100644
> > > > --- a/include/linux/irq_work.h
> > > > +++ b/include/linux/irq_work.h
> > > > @@ -3,6 +3,7 @@
> > > >  #define _LINUX_IRQ_WORK_H
> > > >
> > > >  #include <linux/llist.h>
> > > > +#include <linux/workqueue.h>
> > > >
> > > >  /*
> > > >   * An entry can be in one of four states:
> > > > @@ -24,6 +25,11 @@ struct irq_work {
> > > >         void (*func)(struct irq_work *);
> > > >  };
> > > >
> > > > +struct irq_work_schedule {
> > > > +       struct irq_work work;
> > > > +       struct work_struct *sched_work;
> > > > +};
> > > > +
> > > >  static inline
> > > >  void init_irq_work(struct irq_work *work, void (*func)(struct irq_work *))
> > > >  {
> > > > @@ -39,6 +45,7 @@ void init_irq_work(struct irq_work *work, void
> > > > (*func)(struct irq_work *))
> > > >
> > > >  bool irq_work_queue(struct irq_work *work);
> > > >  bool irq_work_queue_on(struct irq_work *work, int cpu);
> > > > +bool irq_work_schedule(struct work_struct *sched_work);
> > > >
> > > >  void irq_work_tick(void);
> > > >  void irq_work_sync(struct irq_work *work);
> > > > diff --git a/kernel/irq_work.c b/kernel/irq_work.c
> > > > index eca8396..3880316 100644
> > > > --- a/kernel/irq_work.c
> > > > +++ b/kernel/irq_work.c
> > > > @@ -24,6 +24,8 @@
> > > >  static DEFINE_PER_CPU(struct llist_head, raised_list);
> > > >  static DEFINE_PER_CPU(struct llist_head, lazy_list);
> > > >
> > > > +static struct irq_work_schedule irq_work_sched;
> > > > +
> > > >  /*
> > > >   * Claim the entry so that no one else will poke at it.
> > > >   */
> > > > @@ -79,6 +81,25 @@ bool irq_work_queue(struct irq_work *work)
> > > >  }
> > > >  EXPORT_SYMBOL_GPL(irq_work_queue);
> > > >
> > > > +static void irq_work_schedule_fn(struct irq_work *work)
> > > > +{
> > > > +       struct irq_work_schedule *irq_work_sched =
> > > > +               container_of(work, struct irq_work_schedule, work);
> > > > +
> > > > +       if (irq_work_sched->sched_work)
> > > > +               schedule_work(irq_work_sched->sched_work);
> > > > +}
> > > > +
> > > > +/* Schedule work via irq work queue */
> > > > +bool irq_work_schedule(struct work_struct *sched_work)
> > > > +{
> > > > +       init_irq_work(&irq_work_sched.work, irq_work_schedule_fn);
> > > > +       irq_work_sched.sched_work = sched_work;
> > > > +
> > > > +       return irq_work_queue(&irq_work_sched.work);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(irq_work_schedule);
> > >
> > > Wait, howzat work?  There's a single global variable that you stash
> > > the "sched_work" into with no locking?  What if two people schedule
> > > work at the same time?
> >
> > This API is intended to be invoked from NMI context only, so I think
> > there will be a single user at a time.
>
> How can you possibly know that?

I guess here you are referring to NMI nesting, correct? Anyway, I am
going to shift to another implementation as mentioned in the other
thread.

-Sumit

>
> This is library code, not a helper in a driver.
>
> Daniel.
>
> > And we can make that explicit
> > as well:
> >
> > +/* Schedule work via irq work queue */
> > +bool irq_work_schedule(struct work_struct *sched_work)
> > +{
> > +       if (in_nmi()) {
> > +               init_irq_work(&irq_work_sched.work, irq_work_schedule_fn);
> > +               irq_work_sched.sched_work = sched_work;
> > +
> > +               return irq_work_queue(&irq_work_sched.work);
> > +       }
> > +
> > +       return false;
> > +}
> > +EXPORT_SYMBOL_GPL(irq_work_schedule);
> >
> > -Sumit
> > >
> > > -Doug