Date: Tue, 21 Sep 2021 16:48:36 -0700
From: "Luck, Tony"
To: Dave Hansen
Cc: Sean Christopherson, Jarkko Sakkinen, Matthew Wilcox, "Zhang, Cathy",
    linux-sgx@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages
References: <20210827195543.1667168-1-tony.luck@intel.com>
 <20210917213836.175138-1-tony.luck@intel.com>
 <20210917213836.175138-3-tony.luck@intel.com>
 <8c39b812-b77a-7d63-2d82-f1c0401a5f16@intel.com>
In-Reply-To: <8c39b812-b77a-7d63-2d82-f1c0401a5f16@intel.com>

On Tue, Sep 21, 2021 at 03:32:14PM -0700, Dave Hansen wrote:
> On 9/21/21 1:50 PM, Luck, Tony wrote:
> >> Did we ever figure out how much space storing really big ranges in the
> >> xarray consumes?
> > No. Willy said the existing xarray code would be less than optimal with
> > this usage, but that things would be much better when he applied some
> > maple tree updates to the internals of xarray.
> >
> > If there is some easy way to measure the memory backing an xarray I'm
> > happy to get the data. Or if someone else can synthesize it ... the two
> > ranges on my system that are added to the xarray are:
> >
> > $ dmesg | grep -i sgx
> > [ 8.496844] sgx: EPC section 0x8000c00000-0x807f7fffff
> > [ 8.505118] sgx: EPC section 0x10000c00000-0x1007fffffff
> >
> > I.e. two ranges of a bit under 2GB each.
> >
> > But I don't think the overhead can be too hideous:
> >
> > $ grep MemFree /proc/meminfo
> > MemFree:        1048682016 kB
> >
> > I still have ~1TB free, which is much greater than the 640 KB that should
> > be "enough for anybody" :-).
>
> There is a kmem_cache_create() for the xarray nodes. So, you should be
> able to see the difference in /proc/meminfo's "Slab" field. Maybe boot
> with init=/bin/sh to reduce the noise and look at meminfo both with and
> without your SGX patch applied, or just with the xarray bits commented out.
>
> I don't quite know how the data structures are munged, but xas_alloc()
> makes it look like 'struct xa_node' is allocated from
> radix_tree_node_cachep. If that's the case, you should also be able to
> see this in even more detail in:
>
> # grep radix /proc/slabinfo
> radix_tree_node   432305  482412    584   28    4 : tunables    0    0    0 : slabdata  17229  17229      0
>
> again, on a system with and without your new code enabled.

Booting with init=/bin/sh and running that grep command right away at
the prompt:

With the xa_store_range() call commented out of my kernel:

radix_tree_node     9800    9968    584   56    8 : tunables    0    0    0 : slabdata    178    178      0

With xa_store_range() enabled:

radix_tree_node     9950   10136    584   56    8 : tunables    0    0    0 : slabdata    181    181      0

The head of the file says these are the field names:

# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>

So I think this means that I have (9950 - 9800) * 584 = 87600 more bytes
allocated. Maybe that's a lot? But percentage-wise it seems in the noise.
E.g. we allocate one "struct sgx_epc_page" for each SGX page. On my
system I have 4GB of SGX EPC, so around 32 MB of these structures.

-Tony
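
For anyone who wants to reproduce the slabinfo measurement without SGX
hardware, a minimal out-of-tree module along the lines below stores the
same two ranges (by PFN) with xa_store_range(). The names (epc_ranges,
section_marker) are made up for illustration and this is not the code
from the patch; it only exists so "grep radix /proc/slabinfo" can be
compared before and after insmod.

// SPDX-License-Identifier: GPL-2.0
/*
 * Illustrative only -- not the SGX patch itself.  Store the two EPC
 * ranges from the dmesg output above (indexed by PFN) in an xarray so
 * the radix_tree_node slab growth can be observed before and after
 * loading the module.
 */
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/pfn.h>
#include <linux/xarray.h>

static DEFINE_XARRAY(epc_ranges);	/* hypothetical name */
static int section_marker[2];		/* stand-ins for per-section structs */

static int __init epc_ranges_init(void)
{
	/* Error handling elided for brevity. */
	xa_store_range(&epc_ranges, PFN_DOWN(0x8000c00000ULL),
		       PFN_DOWN(0x807f7fffffULL), &section_marker[0],
		       GFP_KERNEL);
	xa_store_range(&epc_ranges, PFN_DOWN(0x10000c00000ULL),
		       PFN_DOWN(0x1007fffffffULL), &section_marker[1],
		       GFP_KERNEL);

	/* A PFN inside the first range should resolve to its marker. */
	pr_info("epc_ranges: lookup %s\n",
		xa_load(&epc_ranges, PFN_DOWN(0x8010000000ULL)) ?
		"hit" : "miss");
	return 0;
}

static void __exit epc_ranges_exit(void)
{
	xa_destroy(&epc_ranges);
}

module_init(epc_ranges_init);
module_exit(epc_ranges_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("xarray range-store footprint demo");

Build with a normal obj-m Makefile, note the radix_tree_node line,
insmod, and grep again; the node-count delta should be in the same
ballpark as the 150-object difference above.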
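
And to sanity-check the arithmetic in the last paragraph, a trivial
userspace program (the 32-byte size for struct sgx_epc_page is an
assumption back-derived from the 4 GB / ~32 MB figures quoted above,
not read from the header):

#include <stdio.h>

int main(void)
{
	/* radix_tree_node counts and object size from the slabinfo output */
	long delta_nodes = 9950 - 9800;
	long objsize     = 584;
	long xarray_cost = delta_nodes * objsize;	/* 87600 bytes */

	/* one struct sgx_epc_page per 4 KB EPC page; ~32 bytes assumed */
	long long epc_pages   = (4LL << 30) / 4096;	/* 4 GB of EPC */
	long long struct_size = 32;
	long long epc_cost    = epc_pages * struct_size;	/* ~32 MB */

	printf("xarray node overhead : %ld bytes\n", xarray_cost);
	printf("sgx_epc_page structs : %lld bytes (%lld MB)\n",
	       epc_cost, epc_cost >> 20);
	return 0;
}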