From: SeongJae Park
To: "Paul E. McKenney"
CC: Eric Dumazet, SeongJae Park, Eric Dumazet, David Miller, "Al Viro",
    Jakub Kicinski, "Greg Kroah-Hartman", netdev, LKML, SeongJae Park
Subject: Re: Re: [PATCH net v2 0/2] Revert the 'socket_alloc' life cycle change
Date: Tue, 5 May 2020 20:11:01 +0200
Message-ID: <20200505181101.16384-1-sjpark@amazon.com>
In-Reply-To: <20200505172850.GD2869@paulmck-ThinkPad-P72>

McKenney" wrote: > On Tue, May 05, 2020 at 09:37:42AM -0700, Eric Dumazet wrote: > > > > > > On 5/5/20 9:31 AM, Eric Dumazet wrote: > > > > > > > > > On 5/5/20 9:25 AM, Eric Dumazet wrote: > > >> > > >> > > >> On 5/5/20 9:13 AM, SeongJae Park wrote: > > >>> On Tue, 5 May 2020 09:00:44 -0700 Eric Dumazet wrote: > > >>> > > >>>> On Tue, May 5, 2020 at 8:47 AM SeongJae Park wrote: > > >>>>> > > >>>>> On Tue, 5 May 2020 08:20:50 -0700 Eric Dumazet wrote: > > >>>>> > > >>>>>> > > >>>>>> > > >>>>>> On 5/5/20 8:07 AM, SeongJae Park wrote: > > >>>>>>> On Tue, 5 May 2020 07:53:39 -0700 Eric Dumazet wrote: > > >>>>>>> > > >>>>>> > > >>>>>>>> Why do we have 10,000,000 objects around ? Could this be because of > > >>>>>>>> some RCU problem ? > > >>>>>>> > > >>>>>>> Mainly because of a long RCU grace period, as you guess. I have no idea how > > >>>>>>> the grace period became so long in this case. > > >>>>>>> > > >>>>>>> As my test machine was a virtual machine instance, I guess RCU readers > > >>>>>>> preemption[1] like problem might affected this. > > >>>>>>> > > >>>>>>> [1] https://www.usenix.org/system/files/conference/atc17/atc17-prasad.pdf > > >>>>>>> > > >>>>>>>> > > >>>>>>>> Once Al patches reverted, do you have 10,000,000 sock_alloc around ? > > >>>>>>> > > >>>>>>> Yes, both the old kernel that prior to Al's patches and the recent kernel > > >>>>>>> reverting the Al's patches didn't reproduce the problem. > > >>>>>>> > > >>>>>> > > >>>>>> I repeat my question : Do you have 10,000,000 (smaller) objects kept in slab caches ? > > >>>>>> > > >>>>>> TCP sockets use the (very complex, error prone) SLAB_TYPESAFE_BY_RCU, but not the struct socket_wq > > >>>>>> object that was allocated in sock_alloc_inode() before Al patches. > > >>>>>> > > >>>>>> These objects should be visible in kmalloc-64 kmem cache. > > >>>>> > > >>>>> Not exactly the 10,000,000, as it is only the possible highest number, but I > > >>>>> was able to observe clear exponential increase of the number of the objects > > >>>>> using slabtop. Before the start of the problematic workload, the number of > > >>>>> objects of 'kmalloc-64' was 5760, but I was able to observe the number increase > > >>>>> to 1,136,576. > > >>>>> > > >>>>> OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME > > >>>>> before: 5760 5088 88% 0.06K 90 64 360K kmalloc-64 > > >>>>> after: 1136576 1136576 100% 0.06K 17759 64 71036K kmalloc-64 > > >>>>> > > >>>> > > >>>> Great, thanks. > > >>>> > > >>>> How recent is the kernel you are running for your experiment ? > > >>> > > >>> It's based on 5.4.35. > > >>> > > >>>> > > >>>> Let's make sure the bug is not in RCU. > > >>> > > >>> One thing I can currently say is that the grace period passes at last. I > > >>> modified the benchmark to repeat not 10,000 times but only 5,000 times to run > > >>> the test without OOM but easily observable memory pressure. As soon as the > > >>> benchmark finishes, the memory were freed. > > >>> > > >>> If you need more tests, please let me know. > > >>> > > >> > > >> I would ask Paul opinion on this issue, because we have many objects > > >> being freed after RCU grace periods. > > >> > > >> If RCU subsystem can not keep-up, I guess other workloads will also suffer. > > >> > > >> Sure, we can revert patches there and there trying to work around the issue, > > >> but for objects allocated from process context, we should not have these problems. 
> > >> > > > > > > I wonder if simply adjusting rcu_divisor to 6 or 5 would help > > > > > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c > > > index d9a49cd6065a20936edbda1b334136ab597cde52..fde833bac0f9f81e8536211b4dad6e7575c1219a 100644 > > > --- a/kernel/rcu/tree.c > > > +++ b/kernel/rcu/tree.c > > > @@ -427,7 +427,7 @@ module_param(qovld, long, 0444); > > > static ulong jiffies_till_first_fqs = ULONG_MAX; > > > static ulong jiffies_till_next_fqs = ULONG_MAX; > > > static bool rcu_kick_kthreads; > > > -static int rcu_divisor = 7; > > > +static int rcu_divisor = 6; > > > module_param(rcu_divisor, int, 0644); > > > > > > /* Force an exit from rcu_do_batch() after 3 milliseconds. */ > > > > > > > To be clear, you can adjust the value without building a new kernel. > > > > echo 6 >/sys/module/rcutree/parameters/rcu_divisor > > Worth a try! If that helps significantly, I have some ideas for updating > that heuristic, such as checking for sudden increases in the number of > pending callbacks. > > But I would really also like to know whether there are long readers and > whether v5.6 fares better. I will share the results as soon as possible :) Thanks, SeongJae Park > > Thanx, Paul