Date: Fri, 5 Oct 2018 20:10:35 +0200
From: Andrea Parri
To: Julia Cartwright
Cc: Ingo Molnar, Thomas Gleixner, Peter Zijlstra,
    linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org,
    Steffen Trumtrar, Tim Sander, Sebastian Andrzej Siewior,
    Guenter Roeck
Subject: Re: [PATCH 1/2] kthread: convert worker lock to raw spinlock
Message-ID: <20181005181035.GA19828@andrea>

Hi Julia,

On Fri, Sep 28, 2018 at 09:03:51PM +0000, Julia Cartwright wrote:
> In order to enable the queuing of kthread work items from hardirq
> context even when PREEMPT_RT_FULL is enabled, convert the worker
> spin_lock to a raw_spin_lock.
> 
> This is only acceptable to do because the work performed under the lock
> is well-bounded and minimal.

Clearly not my area of expertise, but out of curiosity: what do you
mean by "well-bounded" and "minimal"?  Could you point me to some
documentation?
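[ Aside, to make my question concrete -- this sketch is mine, not from
  the patch, and the names in it are hypothetical.  My understanding is
  that under PREEMPT_RT_FULL a spinlock_t is backed by an rtmutex and
  may sleep, so it must not be taken from hardirq context, whereas a
  raw_spinlock_t always spins with interrupts disabled.  If the
  critical section is O(1) list manipulation, as below, then presumably
  that is the "well-bounded and minimal" work the changelog refers to:

	#include <linux/list.h>
	#include <linux/spinlock.h>

	/* Stays a true spinning lock even on PREEMPT_RT_FULL. */
	static DEFINE_RAW_SPINLOCK(example_lock);
	static LIST_HEAD(example_list);

	/*
	 * Safe to call from hardirq context on RT: the critical
	 * section is a constant-time list insertion, so the
	 * interrupts-off window is short and bounded.
	 */
	static void example_queue_from_hardirq(struct list_head *item)
	{
		unsigned long flags;

		raw_spin_lock_irqsave(&example_lock, flags);
		list_add_tail(item, &example_list);
		raw_spin_unlock_irqrestore(&example_lock, flags);
	}
]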
Andrea

> 
> Cc: Sebastian Andrzej Siewior
> Cc: Guenter Roeck
> Reported-and-tested-by: Steffen Trumtrar
> Reported-by: Tim Sander
> Signed-off-by: Julia Cartwright
> ---
>  include/linux/kthread.h |  2 +-
>  kernel/kthread.c        | 42 ++++++++++++++++++++---------------------
>  2 files changed, 22 insertions(+), 22 deletions(-)
> 
> diff --git a/include/linux/kthread.h b/include/linux/kthread.h
> index c1961761311d..ad292898f7f2 100644
> --- a/include/linux/kthread.h
> +++ b/include/linux/kthread.h
> @@ -85,7 +85,7 @@ enum {
>  
>  struct kthread_worker {
>  	unsigned int		flags;
> -	spinlock_t		lock;
> +	raw_spinlock_t		lock;
>  	struct list_head	work_list;
>  	struct list_head	delayed_work_list;
>  	struct task_struct	*task;
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index 486dedbd9af5..c1d9ee6671c6 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -597,7 +597,7 @@ void __kthread_init_worker(struct kthread_worker *worker,
>  				struct lock_class_key *key)
>  {
>  	memset(worker, 0, sizeof(struct kthread_worker));
> -	spin_lock_init(&worker->lock);
> +	raw_spin_lock_init(&worker->lock);
>  	lockdep_set_class_and_name(&worker->lock, key, name);
>  	INIT_LIST_HEAD(&worker->work_list);
>  	INIT_LIST_HEAD(&worker->delayed_work_list);
> @@ -639,21 +639,21 @@ int kthread_worker_fn(void *worker_ptr)
>  
>  	if (kthread_should_stop()) {
>  		__set_current_state(TASK_RUNNING);
> -		spin_lock_irq(&worker->lock);
> +		raw_spin_lock_irq(&worker->lock);
>  		worker->task = NULL;
> -		spin_unlock_irq(&worker->lock);
> +		raw_spin_unlock_irq(&worker->lock);
>  		return 0;
>  	}
>  
>  	work = NULL;
> -	spin_lock_irq(&worker->lock);
> +	raw_spin_lock_irq(&worker->lock);
>  	if (!list_empty(&worker->work_list)) {
>  		work = list_first_entry(&worker->work_list,
>  					struct kthread_work, node);
>  		list_del_init(&work->node);
>  	}
>  	worker->current_work = work;
> -	spin_unlock_irq(&worker->lock);
> +	raw_spin_unlock_irq(&worker->lock);
>  
>  	if (work) {
>  		__set_current_state(TASK_RUNNING);
> @@ -810,12 +810,12 @@ bool kthread_queue_work(struct kthread_worker *worker,
>  	bool ret = false;
>  	unsigned long flags;
>  
> -	spin_lock_irqsave(&worker->lock, flags);
> +	raw_spin_lock_irqsave(&worker->lock, flags);
>  	if (!queuing_blocked(worker, work)) {
>  		kthread_insert_work(worker, work, &worker->work_list);
>  		ret = true;
>  	}
> -	spin_unlock_irqrestore(&worker->lock, flags);
> +	raw_spin_unlock_irqrestore(&worker->lock, flags);
>  	return ret;
>  }
>  EXPORT_SYMBOL_GPL(kthread_queue_work);
> @@ -841,7 +841,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
>  	if (WARN_ON_ONCE(!worker))
>  		return;
>  
> -	spin_lock(&worker->lock);
> +	raw_spin_lock(&worker->lock);
>  	/* Work must not be used with >1 worker, see kthread_queue_work(). */
>  	WARN_ON_ONCE(work->worker != worker);
>  
> @@ -850,7 +850,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
>  	list_del_init(&work->node);
>  	kthread_insert_work(worker, work, &worker->work_list);
>  
> -	spin_unlock(&worker->lock);
> +	raw_spin_unlock(&worker->lock);
>  }
>  EXPORT_SYMBOL(kthread_delayed_work_timer_fn);
>  
> @@ -906,14 +906,14 @@ bool kthread_queue_delayed_work(struct kthread_worker *worker,
>  	unsigned long flags;
>  	bool ret = false;
>  
> -	spin_lock_irqsave(&worker->lock, flags);
> +	raw_spin_lock_irqsave(&worker->lock, flags);
>  
>  	if (!queuing_blocked(worker, work)) {
>  		__kthread_queue_delayed_work(worker, dwork, delay);
>  		ret = true;
>  	}
>  
> -	spin_unlock_irqrestore(&worker->lock, flags);
> +	raw_spin_unlock_irqrestore(&worker->lock, flags);
>  	return ret;
>  }
>  EXPORT_SYMBOL_GPL(kthread_queue_delayed_work);
> @@ -949,7 +949,7 @@ void kthread_flush_work(struct kthread_work *work)
>  	if (!worker)
>  		return;
>  
> -	spin_lock_irq(&worker->lock);
> +	raw_spin_lock_irq(&worker->lock);
>  	/* Work must not be used with >1 worker, see kthread_queue_work(). */
>  	WARN_ON_ONCE(work->worker != worker);
>  
> @@ -961,7 +961,7 @@ void kthread_flush_work(struct kthread_work *work)
>  	else
>  		noop = true;
>  
> -	spin_unlock_irq(&worker->lock);
> +	raw_spin_unlock_irq(&worker->lock);
>  
>  	if (!noop)
>  		wait_for_completion(&fwork.done);
> @@ -994,9 +994,9 @@ static bool __kthread_cancel_work(struct kthread_work *work, bool is_dwork,
>  	 * any queuing is blocked by setting the canceling counter.
>  	 */
>  	work->canceling++;
> -	spin_unlock_irqrestore(&worker->lock, *flags);
> +	raw_spin_unlock_irqrestore(&worker->lock, *flags);
>  	del_timer_sync(&dwork->timer);
> -	spin_lock_irqsave(&worker->lock, *flags);
> +	raw_spin_lock_irqsave(&worker->lock, *flags);
>  	work->canceling--;
>  }
>  
> @@ -1043,7 +1043,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
>  	unsigned long flags;
>  	int ret = false;
>  
> -	spin_lock_irqsave(&worker->lock, flags);
> +	raw_spin_lock_irqsave(&worker->lock, flags);
>  
>  	/* Do not bother with canceling when never queued. */
>  	if (!work->worker)
> @@ -1060,7 +1060,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
>  fast_queue:
>  	__kthread_queue_delayed_work(worker, dwork, delay);
>  out:
> -	spin_unlock_irqrestore(&worker->lock, flags);
> +	raw_spin_unlock_irqrestore(&worker->lock, flags);
>  	return ret;
>  }
>  EXPORT_SYMBOL_GPL(kthread_mod_delayed_work);
>  
> @@ -1074,7 +1074,7 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
>  	if (!worker)
>  		goto out;
>  
> -	spin_lock_irqsave(&worker->lock, flags);
> +	raw_spin_lock_irqsave(&worker->lock, flags);
>  	/* Work must not be used with >1 worker, see kthread_queue_work(). */
>  	WARN_ON_ONCE(work->worker != worker);
>  
> @@ -1088,13 +1088,13 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
>  	 * In the meantime, block any queuing by setting the canceling counter.
>  	 */
>  	work->canceling++;
> -	spin_unlock_irqrestore(&worker->lock, flags);
> +	raw_spin_unlock_irqrestore(&worker->lock, flags);
>  	kthread_flush_work(work);
> -	spin_lock_irqsave(&worker->lock, flags);
> +	raw_spin_lock_irqsave(&worker->lock, flags);
>  	work->canceling--;
>  
>  out_fast:
> -	spin_unlock_irqrestore(&worker->lock, flags);
> +	raw_spin_unlock_irqrestore(&worker->lock, flags);
>  out:
>  	return ret;
>  }
> -- 
> 2.18.0
> 
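[ Editorial aside, not part of the original patch: a self-contained
  sketch of the use case the conversion enables -- queuing a kthread
  work item straight from a hardirq handler, which is only valid on
  PREEMPT_RT_FULL once worker->lock is a raw_spinlock_t.  All names
  (my_worker, my_work_fn, my_irq_handler, ...) are hypothetical, and
  the request_irq() wiring is elided.

	#include <linux/err.h>
	#include <linux/interrupt.h>
	#include <linux/kthread.h>
	#include <linux/printk.h>

	static struct kthread_worker *my_worker;
	static struct kthread_work my_work;

	/* Runs later in the dedicated kthread; free to sleep. */
	static void my_work_fn(struct kthread_work *work)
	{
		pr_info("deferred work running\n");
	}

	/*
	 * Hardirq context: kthread_queue_work() only takes
	 * worker->lock, now a raw spinlock, so this call is
	 * safe here even on RT.
	 */
	static irqreturn_t my_irq_handler(int irq, void *dev_id)
	{
		kthread_queue_work(my_worker, &my_work);
		return IRQ_HANDLED;
	}

	static int my_setup(void)
	{
		my_worker = kthread_create_worker(0, "my_worker");
		if (IS_ERR(my_worker))
			return PTR_ERR(my_worker);
		kthread_init_work(&my_work, my_work_fn);
		/* request_irq(..., my_irq_handler, ...) would follow. */
		return 0;
	}
]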