Date: Wed, 24 Jan 2018 23:49:07 -0800
From: vcaputo@pengaru.com
To: Enric Balletbo Serra
Cc: linux-kernel, Tim Murray, tj@kernel.org
Subject: Re: [REGRESSION] (>= v4.12) IO w/dmcrypt causing audio underruns
Message-ID: <20180125074907.clnpu2m36nklos3j@shells.gnugeneration.com>
References: <20171129183919.GQ692@shells.gnugeneration.com>
 <20171201213322.GW692@shells.gnugeneration.com>
 <20180117224852.3cr33lhot3rpahg7@shells.gnugeneration.com>

On Fri, Jan 19, 2018 at 11:57:32AM +0100, Enric Balletbo Serra wrote:
> Hi Vito,
>
> 2018-01-17 23:48 GMT+01:00 :
> > On Mon, Dec 18, 2017 at 10:25:33AM +0100, Enric Balletbo Serra wrote:
> >> Hi Vito,
> >>
> >> 2017-12-01 22:33 GMT+01:00 :
> >> > On Wed, Nov 29, 2017 at 10:39:19AM -0800, vcaputo@pengaru.com wrote:
> >> >> Hello,
> >> >>
> >> >> Recently I noticed substantial audio dropouts when listening to MP3s in
> >> >> `cmus` while doing big and churny `git checkout` commands in my linux
> >> >> git tree.
> >> >>
> >> >> It's not something I've done much of over the last couple of months, so
> >> >> I hadn't noticed until yesterday, but I didn't remember this being a
> >> >> problem in recent history.
> >> >>
> >> >> As there's quite an accumulation of similarly configured and built
> >> >> kernels in my grub menu, it was trivial to determine approximately when
> >> >> this began:
> >> >>
> >> >>   4.11.0:     no dropouts
> >> >>   4.12.0-rc7: dropouts
> >> >>   4.14.0-rc6: dropouts (seem more substantial as well, didn't investigate)
> >> >>
> >> >> Watching top while this is going on in the various kernel versions, it's
> >> >> apparent that the kworker behavior changed.  Both the priority and the
> >> >> quantity of running kworker threads are elevated in kernels experiencing
> >> >> dropouts.
> >> >>
> >> >> Searching through the commit history for v4.11..v4.12 uncovered:
> >> >>
> >> >>   commit a1b89132dc4f61071bdeaab92ea958e0953380a1
> >> >>   Author: Tim Murray
> >> >>   Date:   Fri Apr 21 11:11:36 2017 +0200
> >> >>
> >> >>       dm crypt: use WQ_HIGHPRI for the IO and crypt workqueues
> >> >>
> >> >>       Running dm-crypt with workqueues at the standard priority results
> >> >>       in IO competing for CPU time with standard user apps, which can
> >> >>       lead to pipeline bubbles and seriously degraded performance.  Move
> >> >>       to using WQ_HIGHPRI workqueues to protect against that.
> >> >>
> >> >>       Signed-off-by: Tim Murray
> >> >>       Signed-off-by: Enric Balletbo i Serra
> >> >>       Signed-off-by: Mike Snitzer
> >> >>
> >> >> ---
> >> >>
> >> >> Reverting a1b8913 from 4.14.0-rc6, my current kernel, eliminates the
> >> >> problem completely.
> >> >>
> >> >> Looking at the diff in that commit, the commit message isn't even
> >> >> accurate: not only is the priority of the dmcrypt workqueues being
> >> >> changed, they're also being made "CPU intensive" workqueues as well.
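
[ Aside, to make the flags in question concrete: below is a minimal,
  illustrative alloc_workqueue() call showing the flag combination a1b8913
  ends up with for kcryptd_io - not the literal drivers/md/dm-crypt.c hunk,
  just the shape of it, with hypothetical names.  WQ_HIGHPRI places work items
  on the per-CPU high-priority worker pool (workers run at nice -20), and
  WQ_CPU_INTENSIVE excludes them from the pool's concurrency management, so
  long-running work items don't hold back dispatch of other queued work.

        #include <linux/errno.h>
        #include <linux/workqueue.h>

        static struct workqueue_struct *example_wq;

        static int example_setup(void)
        {
                /*
                 * Illustrative only: high-priority worker pool, exempt from
                 * concurrency management, and usable during memory reclaim.
                 */
                example_wq = alloc_workqueue("kcryptd_io_example",
                                             WQ_HIGHPRI | WQ_CPU_INTENSIVE |
                                             WQ_MEM_RECLAIM, 1);
                return example_wq ? 0 : -ENOMEM;
        }

  In short: more worker threads eligible to run, and at elevated priority. ]
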
> >> >> This combination appears to result in both elevated scheduling priority
> >> >> and a greater quantity of participating worker threads, effectively
> >> >> starving any normal-priority user task under periods of heavy IO on
> >> >> dmcrypt volumes.
> >> >>
> >> >> I don't know what the right solution is here.  It seems to me we're
> >> >> lacking an appropriate mechanism for charging CPU resources consumed on
> >> >> behalf of user processes in kworker threads back to the work-causing
> >> >> process.
> >> >>
> >> >> What effectively happens is that my normal `git` user process is able
> >> >> to greatly amplify its share of CPU taken from the system by generating
> >> >> IO on what happens to be a high-priority, CPU-intensive storage volume.
> >> >>
> >> >> It looks potentially complicated to fix properly, but I suspect at its
> >> >> core this may be a fairly longstanding shortcoming of the page cache
> >> >> and its asynchronous design - something that has been exacerbated
> >> >> substantially by the introduction of CPU-intensive storage subsystems
> >> >> like dmcrypt.
> >> >>
> >> >> If we imagine the whole stack simplified, where all the IO was being
> >> >> done synchronously in-band and the dmcrypt kernel code simply ran in
> >> >> the IO-causing process context, it would be getting charged to the
> >> >> calling process and scheduled accordingly.  The resource accounting and
> >> >> scheduling problems all emerge with the page cache, buffered IO, async
> >> >> background writeback in a pool of unrelated worker threads, etc.
> >> >> That's how it appears to me, anyway...
> >> >>
> >> >> The system used is an X61s Thinkpad, 1.8GHz, with an 840 EVO SSD and
> >> >> LVM on dmcrypt.  The kernel .config is attached in case it's of
> >> >> interest.
> >> >>
> >> >> Thanks,
> >> >> Vito Caputo
> >> >
> >> >
> >> > Ping...
> >> >
> >> > Could somebody please at least ACK receiving this so I'm not left
> >> > wondering whether my mails to lkml are somehow winding up flagged as
> >> > spam.  Thanks!
> >>
> >> Sorry, I did not notice your email before you pinged me directly.  That
> >> issue is interesting, though we haven't noticed this problem ourselves.
> >> It's been a while since I tested this patch, but I'll set up the
> >> environment again and do more tests to understand better what is
> >> happening.
> >>
> >
> > Any update on this?
> >
>
> I have not been able to reproduce the issue so far.  Can you try what
> happens if you remove WQ_CPU_INTENSIVE from the kcryptd_io workqueue?
>
> -       cc->io_queue = alloc_workqueue("kcryptd_io", WQ_HIGHPRI |
> -                                      WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);
> +       cc->io_queue = alloc_workqueue("kcryptd_io", WQ_HIGHPRI |
> +                                      WQ_MEM_RECLAIM, 1);
>

FYI, I also tried just removing WQ_HIGHPRI while retaining WQ_CPU_INTENSIVE,
with similarly bad results (see the summary sketch in the P.S. below).  So far
just reverting a1b8913 has been the best solution.

I haven't studied the dmcrypt code - is there a reason to test the effect of
these changes on both of the workqueues touched by a1b8913, rather than just
kcryptd_io?

Regards,
Vito Caputo
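
P.S. For anyone following along, here is a summary sketch of the three
kcryptd_io variants discussed in this thread.  These are illustrative
one-liners rather than literal hunks from drivers/md/dm-crypt.c, and only the
io_queue allocation is shown; a1b8913 gives the kcryptd crypt workqueue the
same flag set as well.

        /* 1) a1b8913 as merged - reverting this eliminates the dropouts here: */
        cc->io_queue = alloc_workqueue("kcryptd_io",
                                       WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);

        /* 2) Enric's suggestion - keep WQ_HIGHPRI, drop WQ_CPU_INTENSIVE: */
        cc->io_queue = alloc_workqueue("kcryptd_io",
                                       WQ_HIGHPRI | WQ_MEM_RECLAIM, 1);

        /*
         * 3) The variant I mentioned above - drop WQ_HIGHPRI, keep
         *    WQ_CPU_INTENSIVE - which also produced dropouts:
         */
        cc->io_queue = alloc_workqueue("kcryptd_io",
                                       WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);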