From: Tejun Heo
Date: Wed, 21 Jun 2023 11:28:38 -1000
To: Chuck Lever III
Cc: open list, Linux NFS Mailing List
Subject: Re: contention on pwq->pool->lock under heavy NFS workload
In-Reply-To: <38FA0353-5303-4A3D-86A5-EF1E989CD497@oracle.com>

Hello,

On Wed, Jun 21, 2023 at 03:26:22PM +0000, Chuck Lever III wrote:
> lock_stat reports that the pool->lock at kernel/workqueue.c:1483 is the
> most heavily contended lock on my test NFS client. The issue appears to
> be that the three NFS-related workqueues (rpciod_workqueue,
> xprtiod_workqueue, and nfsiod) all get placed in the same worker_pool,
> so they have to fight over one pool lock.
>
> I notice that ib_comp_wq is allocated with the same flags, but I don't
> see significant contention there, and a trace_printk in __queue_work
> shows that work items queued on that WQ seem to alternate between at
> least two different worker_pools.
>
> Is there a preferred way to ensure the NFS WQs get spread a little more
> fairly amongst the worker_pools?

Can you share the output of lstopo on the test machine?

The following branch has pending workqueue changes which make unbound
workqueues finer grained by default and a lot more flexible in how they
are segmented:

  git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git affinity-scopes-v2

Can you please test with that branch? If the default doesn't improve the
situation, you can set WQ_SYSFS on the affected workqueues and change
their scoping by writing to
/sys/devices/virtual/workqueue/WQ_NAME/affinity_scope.

Please take a look at
https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git/tree/Documentation/core-api/workqueue.rst?h=affinity-scopes-v2#n350
for more details.

Thanks.

--
tejun
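
For context, the three workqueues Chuck refers to are all created as
unbound workqueues with matching flags, which is why, before the
affinity-scopes work, they can land in the same unbound worker_pool and
contend on a single pool->lock. The sketch below paraphrases how those
allocations look; the flag combinations and source files mentioned in the
comment are assumptions drawn from the mainline SUNRPC/NFS code of this
era, not exact quotes.

  /*
   * Sketch of how the three workqueues are created (paraphrased from
   * net/sunrpc/sched.c, net/sunrpc/xprt.c and fs/nfs/inode.c; treat the
   * exact flag sets as assumptions).  All three are WQ_UNBOUND, so they
   * can share one unbound worker_pool and therefore one pool->lock.
   */
  #include <linux/init.h>
  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *rpciod_workqueue;
  static struct workqueue_struct *xprtiod_workqueue;
  static struct workqueue_struct *nfsiod_workqueue;

  static int __init nfs_example_wq_init(void)
  {
          rpciod_workqueue  = alloc_workqueue("rpciod",
                                              WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
          xprtiod_workqueue = alloc_workqueue("xprtiod",
                                              WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
          nfsiod_workqueue  = alloc_workqueue("nfsiod",
                                              WQ_MEM_RECLAIM | WQ_UNBOUND, 0);

          if (!rpciod_workqueue || !xprtiod_workqueue || !nfsiod_workqueue)
                  return -ENOMEM;
          return 0;
  }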
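
Tejun's WQ_SYSFS suggestion could look roughly like the sketch below. The
helper name is hypothetical, and the affinity_scope value names in the
comment (cpu, smt, cache, numa, system) come from the workqueue.rst he
links; both should be re-checked against the checked-out
affinity-scopes-v2 tree before relying on them.

  #include <linux/workqueue.h>

  /*
   * Hypothetical helper: adding WQ_SYSFS at allocation time exposes the
   * workqueue's attributes under /sys/devices/virtual/workqueue/<name>/,
   * including the affinity_scope file on the affinity-scopes-v2 branch.
   */
  static struct workqueue_struct *nfs_example_alloc_tunable_wq(const char *name)
  {
          return alloc_workqueue("%s",
                                 WQ_MEM_RECLAIM | WQ_UNBOUND | WQ_SYSFS,
                                 0, name);
  }

  /*
   * Once exposed, the scope can be inspected and changed from user space,
   * for example:
   *
   *   cat /sys/devices/virtual/workqueue/rpciod/affinity_scope
   *   echo cache > /sys/devices/virtual/workqueue/rpciod/affinity_scope
   *
   * The accepted values (cpu, smt, cache, numa, system) are taken from the
   * branch's documentation and are assumptions until verified.
   */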