Date: Thu, 28 Feb 2019 10:40:11 +0100
From: Jan Kara
To: "Aneesh Kumar K.V"
Cc: akpm@linux-foundation.org, "Kirill A. Shutemov", Jan Kara,
	mpe@ellerman.id.au, Ross Zwisler, Oliver O'Halloran,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, Dan Williams
Subject: Re: [PATCH 2/2] mm/dax: Don't enable huge dax mapping by default
Message-ID: <20190228094011.GB22210@quack2.suse.cz>
References: <20190228083522.8189-1-aneesh.kumar@linux.ibm.com>
	<20190228083522.8189-2-aneesh.kumar@linux.ibm.com>
In-Reply-To: <20190228083522.8189-2-aneesh.kumar@linux.ibm.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Thu 28-02-19 14:05:22, Aneesh Kumar K.V wrote:
> Add a flag to indicate the ability to do huge page dax mapping. On
> architectures like ppc64, the hypervisor can disable huge page support
> in the guest. In such a case, we should not enable huge page dax
> mapping. This patch adds a flag which the architecture code will update
> to indicate huge page dax mapping support.
>
> Architectures mostly do transparent_hugepage_flags = 0; if they can't
> do hugepages. That also takes care of disabling dax hugepage mapping
> with this change.
>
> Without this patch we get the below error with kvm on ppc64.
>
> [ 118.849975] lpar: Failed hash pte insert with error -4
>
> NOTE: The patch also uses
>
>   echo never > /sys/kernel/mm/transparent_hugepage/enabled
>
> to disable dax huge page mapping.
>
> Signed-off-by: Aneesh Kumar K.V

Added Dan to CC for opinion.

I kind of fail to see why you don't use TRANSPARENT_HUGEPAGE_FLAG for
this. I know that technically DAX huge pages and normal THPs are
different things but so far we've tried to avoid making that distinction
visible to userspace.
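Something along the lines of the below (a completely untested sketch,
just to illustrate the idea) is what I have in mind for
__transparent_hugepage_enabled() - key DAX mappings off the existing
flags rather than a new TRANSPARENT_HUGEPAGE_DAX_FLAG:

	/*
	 * Untested sketch: let DAX huge mappings follow the existing
	 * global THP flags, so that
	 * "echo never > /sys/kernel/mm/transparent_hugepage/enabled"
	 * (or an architecture clearing transparent_hugepage_flags)
	 * disables them too, without a new user-visible flag.
	 */
	if (vma_is_dax(vma))
		return transparent_hugepage_flags &
			((1 << TRANSPARENT_HUGEPAGE_FLAG) |
			 (1 << TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG));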
								Honza

> ---
> TODO:
> * Add Fixes: tag
>
>  include/linux/huge_mm.h | 4 +++-
>  mm/huge_memory.c        | 4 ++++
>  2 files changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 381e872bfde0..01ad5258545e 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -53,6 +53,7 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
>  			pud_t *pud, pfn_t pfn, bool write);
>  enum transparent_hugepage_flag {
>  	TRANSPARENT_HUGEPAGE_FLAG,
> +	TRANSPARENT_HUGEPAGE_DAX_FLAG,
>  	TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
>  	TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG,
>  	TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG,
> @@ -111,7 +112,8 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
>  	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_FLAG))
>  		return true;
>
> -	if (vma_is_dax(vma))
> +	if (vma_is_dax(vma) &&
> +	    (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_DAX_FLAG)))
>  		return true;
>
>  	if (transparent_hugepage_flags &
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index faf357eaf0ce..43d742fe0341 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -53,6 +53,7 @@ unsigned long transparent_hugepage_flags __read_mostly =
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE_MADVISE
>  	(1<<TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG)|
>  #endif
> +	(1 << TRANSPARENT_HUGEPAGE_DAX_FLAG) |
>  	(1<<TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG)|
>  	(1<<TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG)|
>  	(1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG);
> @@ -475,6 +476,8 @@ static int __init setup_transparent_hugepage(char *str)
>  			  &transparent_hugepage_flags);
>  		clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
>  			  &transparent_hugepage_flags);
> +		clear_bit(TRANSPARENT_HUGEPAGE_DAX_FLAG,
> +			  &transparent_hugepage_flags);
>  		ret = 1;
>  	}
>  out:
> @@ -753,6 +756,7 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
>  	spinlock_t *ptl;
>
>  	ptl = pmd_lock(mm, pmd);
> +	/* should we check for none here again? */
>  	entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
>  	if (pfn_t_devmap(pfn))
>  		entry = pmd_mkdevmap(entry);
> --
> 2.20.1
>

--
Jan Kara
SUSE Labs, CR