From: Dan Williams
Date: Wed, 20 Mar 2019 08:34:20 -0700
Subject: Re: [PATCH 2/2] mm/dax: Don't enable huge dax mapping by default
To: "Aneesh Kumar K.V"
Cc: Jan Kara, linux-nvdimm, Michael Ellerman, Linux Kernel Mailing List,
 Linux MM, Ross Zwisler, Andrew Morton, linuxppc-dev, "Kirill A. Shutemov"
In-Reply-To: <878sxa7ys5.fsf@linux.ibm.com>
References: <20190228083522.8189-1-aneesh.kumar@linux.ibm.com>
 <20190228083522.8189-2-aneesh.kumar@linux.ibm.com>
 <87k1hc8iqa.fsf@linux.ibm.com> <871s3aqfup.fsf@linux.ibm.com>
 <87bm267ywc.fsf@linux.ibm.com> <878sxa7ys5.fsf@linux.ibm.com>

On Wed, Mar 20, 2019 at 1:09 AM Aneesh Kumar K.V wrote:
>
> Aneesh Kumar K.V writes:
>
> > Dan Williams writes:
> >
> >>
> >>> Now what will be page size used for mapping vmemmap?
> >>
> >> That's up to the architecture's vmemmap_populate() implementation.
> >>
> >>> Architectures possibly will use PMD_SIZE mapping if supported for
> >>> vmemmap. Now a device-dax with struct page in the device will have the
> >>> pfn reserve area aligned to PAGE_SIZE with the above example? We can't
> >>> map that using PMD_SIZE page size?
> >>
> >> IIUC, that's a different alignment. Currently that's handled by
> >> padding the reservation area up to a section (128MB on x86) boundary,
> >> but I'm working on patches to allow sub-section sized ranges to be
> >> mapped.
> >
> > I am missing something w.r.t. the code. The code below aligns that using
> > nd_pfn->align:
> >
> >     if (nd_pfn->mode == PFN_MODE_PMEM) {
> >             unsigned long memmap_size;
> >
> >             /*
> >              * vmemmap_populate_hugepages() allocates the memmap array in
> >              * HPAGE_SIZE chunks.
> >              */
> >             memmap_size = ALIGN(64 * npfns, HPAGE_SIZE);
> >             offset = ALIGN(start + SZ_8K + memmap_size + dax_label_reserve,
> >                             nd_pfn->align) - start;
> >     }
> >
> > IIUC that is finding the offset where to put the vmemmap start. And that
> > has to be aligned to the page size with which we may end up mapping the
> > vmemmap area, right?

Right, that's the physical offset of where the vmemmap ends and the
memory to be mapped begins.

> > Yes, we find the npfns by aligning up using PAGES_PER_SECTION. But that
> > is to compute how many pfns we should map for this pfn dev, right?
> >
> > Also I guess those 4K assumptions there are wrong?

Yes, I think to support non-4K-PAGE_SIZE systems the 'pfn' metadata
needs to be revved and the PAGE_SIZE needs to be recorded in the
info-block.
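
For concreteness, here is a standalone user-space sketch of the offset
computation quoted above. It is not the kernel code path: the namespace size,
resource start, zero label reserve, 64-byte struct page, and 2M
HPAGE_SIZE / nd_pfn->align are all assumed values for illustration. With
these inputs the struct page array occupies 2 GiB and the data start lands on
a 2 MiB (nd_pfn->align) boundary, which is the property the alignment is
meant to guarantee.

```c
/*
 * Standalone sketch (not kernel code) of the offset math quoted above.
 * The namespace size, resource start, and zero label reserve are assumed;
 * 64 bytes per struct page and a 2M HPAGE_SIZE / nd_pfn->align are also
 * assumptions for illustration.
 */
#include <stdio.h>
#include <stdint.h>

#define SZ_4K	(4096ULL)
#define SZ_8K	(8192ULL)
#define SZ_2M	(2ULL << 20)
#define SZ_128G	(128ULL << 30)

#define ALIGN(x, a)	(((x) + (a) - 1) & ~((uint64_t)(a) - 1))

int main(void)
{
	uint64_t start = SZ_2M;			/* assumed resource start */
	uint64_t size = SZ_128G;		/* assumed namespace size */
	uint64_t dax_label_reserve = 0;		/* assumed: no label area */
	uint64_t align = SZ_2M;			/* nd_pfn->align */

	uint64_t npfns = size / SZ_4K;		/* the 4K assumption under discussion */
	/* mirrors: memmap_size = ALIGN(64 * npfns, HPAGE_SIZE) */
	uint64_t memmap_size = ALIGN(64 * npfns, SZ_2M);
	/* mirrors: offset = ALIGN(start + SZ_8K + memmap_size + reserve, align) - start */
	uint64_t offset = ALIGN(start + SZ_8K + memmap_size + dax_label_reserve,
				align) - start;

	printf("memmap_size: %llu MiB\n",
	       (unsigned long long)(memmap_size >> 20));
	printf("data offset: %llu MiB, data start 2M-aligned: %s\n",
	       (unsigned long long)(offset >> 20),
	       (start + offset) % SZ_2M ? "no" : "yes");
	return 0;
}
```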
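
Dan's last point, recording PAGE_SIZE in a revved info-block, might look
roughly like the following attach-time check. This is a hypothetical sketch
only: the struct, the field names, and the version constant are illustrative
and are not the actual libnvdimm nd_pfn_sb layout.

```c
/*
 * Hypothetical sketch only: one way a revved info-block could record and
 * validate PAGE_SIZE. Struct, field names, and version constant are
 * illustrative, not the actual libnvdimm layout.
 */
#include <stdio.h>
#include <stdint.h>
#include <errno.h>

#define PFN_SB_VERSION_WITH_PAGE_SIZE 3		/* illustrative version bump */

struct pfn_sb_sketch {				/* illustrative, not struct nd_pfn_sb */
	uint16_t version;			/* bumped when page_size was added */
	uint32_t page_size;			/* PAGE_SIZE used to lay out the memmap */
	/* ... existing fields elided ... */
};

static int pfn_sb_validate_page_size(const struct pfn_sb_sketch *sb,
				     uint32_t kernel_page_size)
{
	/* Older info-blocks predate the field; treat them as 4K layouts. */
	uint32_t recorded = sb->version >= PFN_SB_VERSION_WITH_PAGE_SIZE ?
		sb->page_size : 4096;

	/* Refuse to attach if the memmap was sized for a different PAGE_SIZE. */
	return recorded == kernel_page_size ? 0 : -EOPNOTSUPP;
}

int main(void)
{
	struct pfn_sb_sketch sb = { .version = 3, .page_size = 65536 };

	/* A namespace initialized with 64K pages, attached on a 4K kernel... */
	printf("4K kernel:  %d\n", pfn_sb_validate_page_size(&sb, 4096));
	/* ...and on a matching 64K kernel. */
	printf("64K kernel: %d\n", pfn_sb_validate_page_size(&sb, 65536));
	return 0;
}
```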