Message-ID: <1519946767.4592.49.camel@kernel.crashing.org>
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory
From: Benjamin Herrenschmidt
To: Logan Gunthorpe, Dan Williams
Cc: Jens Axboe, Keith Busch, Oliver OHalloran, Alex Williamson,
    linux-nvdimm, linux-rdma, linux-pci@vger.kernel.org,
    Linux Kernel Mailing List, linux-nvme@lists.infradead.org,
    linux-block@vger.kernel.org, Jérôme Glisse, Jason Gunthorpe,
    Bjorn Helgaas, Max Gurtovoy, Christoph Hellwig
Date: Fri, 02 Mar 2018 10:26:07 +1100
In-Reply-To: <595acefb-18fc-e650-e172-bae271263c4c@deltatee.com>
References: <20180228234006.21093-1-logang@deltatee.com>
    <1519876489.4592.3.camel@kernel.crashing.org>
    <1519876569.4592.4.camel@au1.ibm.com>
    <1519936477.4592.23.camel@au1.ibm.com>
    <2079ba48-5ae5-5b44-cce1-8175712dd395@deltatee.com>
    <43ba615f-a6e1-9444-65e1-494169cb415d@deltatee.com>
    <1519945204.4592.45.camel@au1.ibm.com>
    <595acefb-18fc-e650-e172-bae271263c4c@deltatee.com>

On Thu, 2018-03-01 at 16:19 -0700, Logan Gunthorpe wrote:

(Switching back to my non-IBM address ...)

> On 01/03/18 04:00 PM, Benjamin Herrenschmidt wrote:
> > We use only 52 in practice but yes.
> >
> > > That's 64PB. If you need a sparse vmemmap for the entire space it
> > > will take 16TB, which leaves you with 63.98PB of address space left.
> > > (Similar calculations for other numbers of address bits.)
> >
> > We only have 52 bits of virtual space for the kernel with the radix
> > MMU.
>
> Ok, assuming you only have 52 bits of physical address space: the sparse
> vmemmap takes 1TB and you're left with 3.9PB of address space for other
> things. So, again, why doesn't that work? Is my math wrong?

The big problem is not the vmemmap, it's the linear mapping.

Cheers,
Ben.
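[Editor's note: the back-of-the-envelope vmemmap sizing being debated above can be sketched as below. The 64-byte `struct page` and the page sizes are illustrative assumptions, not the exact parameters either poster used; the point is the ratio, since the vmemmap costs sizeof(struct page)/PAGE_SIZE of the physical space it covers.]

```python
# Worst-case sparse-vmemmap sizing: one struct page entry per physical
# page, mapped into kernel virtual address space.
# sizeof(struct page) = 64 bytes is an assumption for illustration.
STRUCT_PAGE_BYTES = 64

def vmemmap_bytes(phys_addr_bits: int, page_size: int) -> int:
    """Vmemmap size if every page of a 2**phys_addr_bits physical
    address space needs a struct page."""
    num_pages = (1 << phys_addr_bits) // page_size
    return num_pages * STRUCT_PAGE_BYTES

# 52 address bits with 64KB pages (a common powerpc64 configuration):
total = 1 << 52
overhead = vmemmap_bytes(52, 64 * 1024)
print(f"address space: {total >> 50} PB")       # 4 PB
print(f"vmemmap:       {overhead >> 40} TB")    # 4 TB
print(f"fraction:      1/{total // overhead}")  # 1/1024
```

Under these assumptions the vmemmap consumes about 1/1024 of the covered address space, which is why Logan argues the vmemmap itself is affordable; Ben's objection is that the linear (direct) mapping, which covers the full physical range at a 1:1 ratio, is the real constraint.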