Date: Thu, 25 Jan 2018 00:33:21 -0800
From: vcaputo@pengaru.com
To: Enric Balletbo Serra
Cc: linux-kernel, Tim Murray, tj@kernel.org
Subject: Re: [REGRESSION] (>= v4.12) IO w/dmcrypt causing audio underruns
Message-ID: <20180125083321.jmrsqpfszj6ugsay@shells.gnugeneration.com>
References: <20171129183919.GQ692@shells.gnugeneration.com>
 <20171201213322.GW692@shells.gnugeneration.com>
 <20180117224852.3cr33lhot3rpahg7@shells.gnugeneration.com>

On Fri, Jan 19, 2018 at 11:57:32AM +0100, Enric Balletbo Serra wrote:
> Hi Vito,
>
> 2018-01-17 23:48 GMT+01:00 <vcaputo@pengaru.com>:
> > On Mon, Dec 18, 2017 at 10:25:33AM +0100, Enric Balletbo Serra wrote:
> >> Hi Vito,
> >>
> >> 2017-12-01 22:33 GMT+01:00 <vcaputo@pengaru.com>:
> >> > On Wed, Nov 29, 2017 at 10:39:19AM -0800, vcaputo@pengaru.com wrote:
> >> >> Hello,
> >> >>
> >> >> Recently I noticed substantial audio dropouts when listening to MP3s in
> >> >> `cmus` while doing big and churny `git checkout` commands in my linux git
> >> >> tree.
> >> >>
> >> >> It's not something I've done much of over the last couple of months, so I
> >> >> hadn't noticed until yesterday; I didn't remember this being a problem in
> >> >> recent history.
> >> >>
> >> >> As there's quite an accumulation of similarly configured and built kernels
> >> >> in my grub menu, it was trivial to determine approximately when this began:
> >> >>
> >> >> 4.11.0:     no dropouts
> >> >> 4.12.0-rc7: dropouts
> >> >> 4.14.0-rc6: dropouts (seem more substantial as well, didn't investigate)
> >> >>
> >> >> Watching top while this is going on in the various kernel versions, it's
> >> >> apparent that the kworker behavior changed.  Both the priority and quantity
> >> >> of running kworker threads are elevated in kernels experiencing dropouts.
> >> >>
> >> >> Searching through the commit history for v4.11..v4.12 uncovered:
> >> >>
> >> >> commit a1b89132dc4f61071bdeaab92ea958e0953380a1
> >> >> Author: Tim Murray
> >> >> Date:   Fri Apr 21 11:11:36 2017 +0200
> >> >>
> >> >>     dm crypt: use WQ_HIGHPRI for the IO and crypt workqueues
> >> >>
> >> >>     Running dm-crypt with workqueues at the standard priority results in IO
> >> >>     competing for CPU time with standard user apps, which can lead to
> >> >>     pipeline bubbles and seriously degraded performance.  Move to using
> >> >>     WQ_HIGHPRI workqueues to protect against that.
> >> >>
> >> >>     Signed-off-by: Tim Murray
> >> >>     Signed-off-by: Enric Balletbo i Serra
> >> >>     Signed-off-by: Mike Snitzer
> >> >>
> >> >> ---
> >> >>
> >> >> Reverting a1b8913 from 4.14.0-rc6, my current kernel, eliminates the
> >> >> problem completely.
> >> >>
> >> >> Looking at the diff in that commit, it looks like the commit message isn't
> >> >> even accurate; not only is the priority of the dmcrypt workqueues being
> >> >> changed, they're also being made "CPU intensive" workqueues as well.
> >> >>
> >> >> This combination appears to result in both elevated scheduling priority and
> >> >> a greater quantity of participating worker threads, effectively starving any
> >> >> normal-priority user task under periods of heavy IO on dmcrypt volumes.
> >> >>
> >> >> I don't know what the right solution is here.  It seems to me we're lacking
> >> >> the appropriate mechanism for charging CPU resources consumed on behalf of
> >> >> user processes in kworker threads to the work-causing process.
> >> >>
> >> >> What effectively happens is that my normal `git` user process is able to
> >> >> greatly amplify the share of CPU it takes from the system by generating IO
> >> >> on what happens to be a high-priority, CPU-intensive storage volume.
> >> >>
> >> >> It looks potentially complicated to fix properly, but I suspect at its core
> >> >> this may be a fairly longstanding shortcoming of the page cache and its
> >> >> asynchronous design, something that has been exacerbated substantially by
> >> >> the introduction of CPU-intensive storage subsystems like dmcrypt.
> >> >>
> >> >> If we imagine the whole stack simplified, where all the IO was being done
> >> >> synchronously in-band, and the dmcrypt kernel code simply ran in the
> >> >> IO-causing process context, it would be getting charged to the calling
> >> >> process and scheduled accordingly.  The resource accounting and scheduling
> >> >> problems all emerge with the page cache, buffered IO, and async background
> >> >> writeback in a pool of unrelated worker threads, etc.  That's how it
> >> >> appears to me anyway...
> >> >>
> >> >> The system used is an X61s Thinkpad (1.8GHz) with an 840 EVO SSD, lvm on
> >> >> dmcrypt.  The kernel .config is attached in case it's of interest.
> >> >>
> >> >> Thanks,
> >> >> Vito Caputo
> >> >
> >> > Ping...
> >> >
> >> > Could somebody please at least ACK receiving this so I'm not left wondering
> >> > if my mails to lkml are somehow winding up flagged as spam, thanks!
> >>
> >> Sorry, I did not notice your email before you pinged me directly.  That
> >> issue is interesting, though we haven't seen the problem ourselves.  It's
> >> been a while since I tested this patch, but I'll set up the environment
> >> again and do more tests to understand better what is happening.
> >>
> >
> > Any update on this?
> >
>
> I have not been able to reproduce the issue so far.  Can you try what happens
> if you remove WQ_CPU_INTENSIVE from the kcryptd_io workqueue?
>
> -       cc->io_queue = alloc_workqueue("kcryptd_io", WQ_HIGHPRI |
>                                        WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);
> +       cc->io_queue = alloc_workqueue("kcryptd_io", WQ_HIGHPRI | WQ_MEM_RECLAIM, 1);
>

FWIW, if I change both the "kcryptd" and "kcryptd_io" workqueues to just
WQ_CPU_INTENSIVE, removing WQ_HIGHPRI, the problem goes away.

Doing this to "kcryptd_io" alone, as mentioned in my previous email, was
ineffective.

Perhaps revert just the WQ_HIGHPRI bit from the dmcrypt workqueues?  (A rough
sketch of the change I tested follows below my signature.)

Regards,
Vito Caputo
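
P.S. For reference, here is roughly what dropping WQ_HIGHPRI from both
workqueues looks like against crypt_ctr() in drivers/md/dm-crypt.c.  I'm
reconstructing the kcryptd hunk from memory, so the exact context lines and
wrapping in your tree may differ; treat this as a sketch rather than a formal
patch:

 	/* IO workqueue: keep mem-reclaim and cpu-intensive, drop high priority */
-	cc->io_queue = alloc_workqueue("kcryptd_io", WQ_HIGHPRI |
-				       WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);
+	cc->io_queue = alloc_workqueue("kcryptd_io",
+				       WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);

 	/* crypt workqueue: same change in both the same_cpu and unbound cases */
 	if (test_bit(DM_CRYPT_SAME_CPU, &cc->flags))
-		cc->crypt_queue = alloc_workqueue("kcryptd", WQ_HIGHPRI |
-						  WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);
+		cc->crypt_queue = alloc_workqueue("kcryptd",
+						  WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);
 	else
-		cc->crypt_queue = alloc_workqueue("kcryptd", WQ_HIGHPRI |
-						  WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND,
+		cc->crypt_queue = alloc_workqueue("kcryptd",
+						  WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND,
 						  num_online_cpus());

This keeps WQ_CPU_INTENSIVE and WQ_MEM_RECLAIM intact and only removes the
elevated worker priority, which is the part that appears responsible for
starving normal-priority tasks here.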