Date: Mon, 19 Nov 2018 08:45:54 -0800
From: Daniel Jordan
To: Tejun Heo
Cc: Daniel Jordan, linux-mm@kvack.org, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, aarcange@redhat.com, aaron.lu@intel.com,
	akpm@linux-foundation.org, alex.williamson@redhat.com, bsd@redhat.com,
	darrick.wong@oracle.com, dave.hansen@linux.intel.com, jgg@mellanox.com,
	jwadams@google.com, jiangshanlai@gmail.com, mhocko@kernel.org,
	mike.kravetz@oracle.com, Pavel.Tatashin@microsoft.com,
	prasad.singamsetty@oracle.com, rdunlap@infradead.org,
	steven.sistare@oracle.com, tim.c.chen@intel.com, vbabka@suse.cz
Subject: Re: [RFC PATCH v4 05/13] workqueue, ktask: renice helper threads to prevent starvation
Message-ID: <20181119164554.axobolrufu26kfah@ca-dmjordan1.us.oracle.com>
References: <20181105165558.11698-1-daniel.m.jordan@oracle.com>
 <20181105165558.11698-6-daniel.m.jordan@oracle.com>
 <20181113163400.GK2509588@devbig004.ftw2.facebook.com>
In-Reply-To: <20181113163400.GK2509588@devbig004.ftw2.facebook.com>
List-ID: linux-kernel@vger.kernel.org
On Tue, Nov 13, 2018 at 08:34:00AM -0800, Tejun Heo wrote:
> Hello, Daniel.

Hi Tejun, sorry for the delay.  Plumbers...

> On Mon, Nov 05, 2018 at 11:55:50AM -0500, Daniel Jordan wrote:
> >  static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
> > -			     bool from_cancel)
> > +			     struct nice_work *nice_work, int flags)
> >  {
> >  	struct worker *worker = NULL;
> >  	struct worker_pool *pool;
> > @@ -2868,11 +2926,19 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
> >  	if (pwq) {
> >  		if (unlikely(pwq->pool != pool))
> >  			goto already_gone;
> > +
> > +		/* not yet started, insert linked work before work */
> > +		if (unlikely(flags & WORK_FLUSH_AT_NICE))
> > +			insert_nice_work(pwq, nice_work, work);
>
> So, I'm not sure this works that well.  e.g. what if the work item is
> waiting for other work items which are at lower priority?  Also, in
> this case, it'd be a lot simpler to simply dequeue the work item and
> execute it synchronously.

Good idea, that is much simpler (and shorter).  Doing it that way, the
current task's nice level would be adjusted while running the work
synchronously.

> >  	} else {
> >  		worker = find_worker_executing_work(pool, work);
> >  		if (!worker)
> >  			goto already_gone;
> >  		pwq = worker->current_pwq;
> > +		if (unlikely(flags & WORK_FLUSH_AT_NICE)) {
> > +			set_user_nice(worker->task, nice_work->nice);
> > +			worker->flags |= WORKER_NICED;
> > +		}
> >  	}
>
> I'm not sure about this.  Can you see whether canceling & executing
> synchronously is enough to address the latency regression?

In my testing, canceling was practically never successful, because these are
long-running jobs: by the time the main ktask thread gets around to
flushing/nice'ing the works, worker threads have already started running them.
I had to write a no-op ktask to hit the first path, where you suggest
dequeueing.  So adjusting the priority of a running worker seems required to
address the latency issue.

So instead of flush_work_at_nice, how about this?:

    void renice_work_sync(struct work_struct *work, long nice);

If a worker is running the work, renice the worker to 'nice' and wait for it
to finish (what this patch does now); if the work isn't running, dequeue it
and run it in the current thread, again at 'nice'.

Thanks for taking a look.
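In case it helps to pin down the proposed semantics, here's a rough userspace
model in plain pthreads -- not the workqueue API.  'struct work',
start_work(), and the 'ran_at_nice' field are stand-ins invented for this
sketch; a real implementation would call set_user_nice() on the worker task
and dequeue via the pwq:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Toy stand-in for work_struct; 'ran_at_nice' records the nice level
 * the work effectively ran at (in the kernel: set_user_nice()). */
struct work {
	void (*func)(struct work *);
	long ran_at_nice;
	bool started;		/* a worker has begun executing it */
	bool done;
	pthread_t worker;	/* valid only if started */
	pthread_mutex_t lock;
};

static int runs;		/* demo payload: count executions */
static void bump(struct work *w) { (void)w; runs++; }

static void *worker_fn(void *arg)
{
	struct work *w = arg;

	w->func(w);
	w->done = true;
	return NULL;
}

/* Model of queueing the work and a worker picking it up. */
static void start_work(struct work *w)
{
	w->started = true;
	pthread_create(&w->worker, NULL, worker_fn, w);
}

/*
 * Proposed renice_work_sync() semantics: if a worker is already
 * running the work, renice that worker and wait for it to finish;
 * otherwise dequeue the work and execute it synchronously in the
 * current task at 'nice'.
 */
static void renice_work_sync(struct work *w, long nice)
{
	pthread_mutex_lock(&w->lock);
	if (w->started) {
		/* long-running job: renice the worker, then wait */
		w->ran_at_nice = nice;	/* ~ set_user_nice(worker->task, nice) */
		pthread_mutex_unlock(&w->lock);
		pthread_join(w->worker, NULL);
	} else {
		/* not yet started: dequeue and run it here */
		w->started = true;
		pthread_mutex_unlock(&w->lock);
		w->ran_at_nice = nice;	/* current task runs at 'nice' */
		w->func(w);
		w->done = true;
	}
}
```

The point of the model is just the two-way branch: the "already running"
case covers the common long-running-ktask path, and the "not yet started"
case is the synchronous-execution path you suggested for dequeued work.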