From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, DJ Gregor, Mikulas Patocka, Arne Welzel, Mike Snitzer
Subject: [PATCH 5.4 021/260] dm crypt: Avoid percpu_counter spinlock contention in crypt_page_alloc()
Date: Mon, 20 Sep 2021 18:40:39 +0200
Message-Id: <20210920163931.844537956@linuxfoundation.org>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210920163931.123590023@linuxfoundation.org>
References: <20210920163931.123590023@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

From: Arne Welzel

commit 528b16bfc3ae5f11638e71b3b63a81f9999df727 upstream.

On systems with many cores using dm-crypt, heavy spinlock contention in
percpu_counter_compare() can be observed when the page allocation limit
for a given device is reached or close to being reached. This is due to
percpu_counter_compare() taking a spinlock to compute an exact result,
potentially on many CPUs at the same time.

Switch to a non-exact comparison of allocated and allowed pages by using
the value returned by percpu_counter_read_positive() to avoid taking the
percpu_counter spinlock.

This may over- or under-estimate the actual number of allocated pages by
at most (batch - 1) * num_online_cpus().

Currently, batch is bounded by 32. The system on which this issue was
first observed has 256 CPUs and 512GB of RAM. With a 4k page size, this
change may over- or under-estimate by at most 31MB. With ~10G (2%) of
allowed dm-crypt allocations, this seems an acceptable error, and it is
certainly preferable to running into the spinlock contention.
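
For illustration only (this is not part of the patch): a minimal sketch of
the two checks side by side, using the counter and limit names from the
diff further below; the drift figure in the comment just restates the
arithmetic above.

    /*
     * Exact check: near the limit, percpu_counter_compare() falls back to
     * __percpu_counter_sum(), which takes the counter's spinlock and walks
     * every online CPU's per-cpu delta -- the contended path visible in
     * the perf profile below.
     */
    if (percpu_counter_compare(&cc->n_allocated_pages,
                               dm_crypt_pages_per_client) >= 0)
            return NULL;            /* limit reached, exactly */

    /*
     * Approximate check: percpu_counter_read_positive() only reads the
     * already-folded count, without taking any lock.  Each CPU may still
     * hold up to (batch - 1) pages in its local delta, so the result can
     * be off by at most (batch - 1) * num_online_cpus() pages; with
     * batch = 32, 256 CPUs and 4k pages that is 31 * 256 * 4KiB = 31MB.
     */
    if (percpu_counter_read_positive(&cc->n_allocated_pages) >=
        dm_crypt_pages_per_client)
            return NULL;            /* limit reached, approximately */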
This behavior was reproduced on an EC2 c5.24xlarge instance with 96 CPUs
and 192GB RAM as follows, but it can be provoked on systems with fewer
CPUs as well.

 * Disable swap
 * Tune vm settings to promote regular writeback
     $ echo 50 > /proc/sys/vm/dirty_expire_centisecs
     $ echo 25 > /proc/sys/vm/dirty_writeback_centisecs
     $ echo $((128 * 1024 * 1024)) > /proc/sys/vm/dirty_background_bytes
 * Create 8 dm-crypt devices based on files on a tmpfs
 * Create and mount an ext4 filesystem on each crypt device
 * Run stress-ng --hdd 8 within one of the above filesystems

Total %system usage collected from sysstat goes to ~35%. Write throughput
on the underlying loop device is ~2GB/s. perf profiling of an individual
kworker kcryptd thread shows the following profile, indicating spinlock
contention in percpu_counter_compare():

    99.98%     0.00%  kworker/u193:46  [kernel.kallsyms]  [k] ret_from_fork
            |
            --ret_from_fork
              kthread
              worker_thread
              |
               --99.92%--process_one_work
                         |
                         |--80.52%--kcryptd_crypt
                         |          |
                         |          |--62.58%--mempool_alloc
                         |          |          |
                         |          |           --62.24%--crypt_page_alloc
                         |          |                     |
                         |          |                      --61.51%--__percpu_counter_compare
                         |          |                                |
                         |          |                                 --61.34%--__percpu_counter_sum
                         |          |                                           |
                         |          |                                           |--58.68%--_raw_spin_lock_irqsave
                         |          |                                           |          |
                         |          |                                           |           --58.30%--native_queued_spin_lock_slowpath
                         |          |                                           |
                         |          |                                            --0.69%--cpumask_next
                         |          |                                                      |
                         |          |                                                       --0.51%--_find_next_bit
                         |          |
                         |          |--10.61%--crypt_convert
                         |          |          |
                         |          |          |--6.05%--xts_crypt
                         ...

After applying this patch and running the same test, %system usage is
lowered to ~7% and write throughput on the loop device increases to
~2.7GB/s. perf report shows mempool_alloc() at ~8% rather than ~62% of
the profile, and the percpu_counter spinlock is no longer hit:

    |--8.15%--mempool_alloc
    |          |
    |           --3.93%--crypt_page_alloc
    |                     |
    |                      --3.75%--__alloc_pages
    |                                |
    |                                 --3.62%--get_page_from_freelist
    |                                           |
    |                                            --3.22%--rmqueue_bulk
    |                                                      |
    |                                                       --2.59%--_raw_spin_lock
    |                                                                 |
    |                                                                  --2.57%--native_queued_spin_lock_slowpath
    |
     --3.05%--_raw_spin_lock_irqsave
               |
                --2.49%--native_queued_spin_lock_slowpath

Suggested-by: DJ Gregor
Reviewed-by: Mikulas Patocka
Signed-off-by: Arne Welzel
Fixes: 5059353df86e ("dm crypt: limit the number of allocated pages")
Cc: stable@vger.kernel.org
Signed-off-by: Mike Snitzer
Signed-off-by: Greg Kroah-Hartman
---
 drivers/md/dm-crypt.c |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -2092,7 +2092,12 @@ static void *crypt_page_alloc(gfp_t gfp_
 	struct crypt_config *cc = pool_data;
 	struct page *page;
 
-	if (unlikely(percpu_counter_compare(&cc->n_allocated_pages, dm_crypt_pages_per_client) >= 0) &&
+	/*
+	 * Note, percpu_counter_read_positive() may over (and under) estimate
+	 * the current usage by at most (batch - 1) * num_online_cpus() pages,
+	 * but avoids potential spinlock contention of an exact result.
+	 */
+	if (unlikely(percpu_counter_read_positive(&cc->n_allocated_pages) >= dm_crypt_pages_per_client) &&
 	    likely(gfp_mask & __GFP_NORETRY))
 		return NULL;
 
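
For context only, and not quoted from the kernel sources: a sketch of how
an allocator callback like the one patched above pairs with the counter on
the free side and is wired into the page mempool. The counter is only
incremented for successful allocations and decremented on free, so the
approximate read merely rate-limits new allocations; it is never used as
the authoritative count. Details of the real dm-crypt code may differ.

    /* Sketch reconstructed around the hunk above; treat as illustrative. */
    static void *crypt_page_alloc(gfp_t gfp_mask, void *pool_data)
    {
            struct crypt_config *cc = pool_data;
            struct page *page;

            /* Approximate limit check from the patch above. */
            if (unlikely(percpu_counter_read_positive(&cc->n_allocated_pages) >=
                         dm_crypt_pages_per_client) &&
                likely(gfp_mask & __GFP_NORETRY))
                    return NULL;

            page = alloc_page(gfp_mask);
            if (likely(page != NULL))
                    percpu_counter_add(&cc->n_allocated_pages, 1);

            return page;
    }

    static void crypt_page_free(void *page, void *pool_data)
    {
            struct crypt_config *cc = pool_data;

            __free_page(page);
            percpu_counter_sub(&cc->n_allocated_pages, 1);
    }

    /*
     * Registered as the page pool's callbacks, e.g.:
     *   mempool_init(&cc->page_pool, BIO_MAX_PAGES,
     *                crypt_page_alloc, crypt_page_free, cc);
     */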