Subject: Re: [PATCH 1/4] thp: reduce indentation level in change_huge_pmd()
From: Vlastimil Babka
To: "Kirill A. Shutemov", Andrea Arcangeli, Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 12 Apr 2017 13:37:51 +0200
In-Reply-To: <20170302151034.27829-2-kirill.shutemov@linux.intel.com>
References: <20170302151034.27829-1-kirill.shutemov@linux.intel.com> <20170302151034.27829-2-kirill.shutemov@linux.intel.com>

On 03/02/2017 04:10 PM, Kirill A. Shutemov wrote:
> Restructure code in preparation for a fix.
>
> Signed-off-by: Kirill A. Shutemov

Acked-by: Vlastimil Babka

> ---
>  mm/huge_memory.c | 52 ++++++++++++++++++++++++++--------------------------
>  1 file changed, 26 insertions(+), 26 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 71e3dede95b4..e7ce73b2b208 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1722,37 +1722,37 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>  	spinlock_t *ptl;
> -	int ret = 0;
> +	pmd_t entry;
> +	bool preserve_write;
> +	int ret;
>
>  	ptl = __pmd_trans_huge_lock(pmd, vma);
> -	if (ptl) {
> -		pmd_t entry;
> -		bool preserve_write = prot_numa && pmd_write(*pmd);
> -		ret = 1;
> +	if (!ptl)
> +		return 0;
>
> -		/*
> -		 * Avoid trapping faults against the zero page. The read-only
> -		 * data is likely to be read-cached on the local CPU and
> -		 * local/remote hits to the zero page are not interesting.
> -		 */
> -		if (prot_numa && is_huge_zero_pmd(*pmd)) {
> -			spin_unlock(ptl);
> -			return ret;
> -		}
> +	preserve_write = prot_numa && pmd_write(*pmd);
> +	ret = 1;
>
> -		if (!prot_numa || !pmd_protnone(*pmd)) {
> -			entry = pmdp_huge_get_and_clear_notify(mm, addr, pmd);
> -			entry = pmd_modify(entry, newprot);
> -			if (preserve_write)
> -				entry = pmd_mk_savedwrite(entry);
> -			ret = HPAGE_PMD_NR;
> -			set_pmd_at(mm, addr, pmd, entry);
> -			BUG_ON(vma_is_anonymous(vma) && !preserve_write &&
> -					pmd_write(entry));
> -		}
> -		spin_unlock(ptl);
> -	}
> +	/*
> +	 * Avoid trapping faults against the zero page. The read-only
> +	 * data is likely to be read-cached on the local CPU and
> +	 * local/remote hits to the zero page are not interesting.
> +	 */
> +	if (prot_numa && is_huge_zero_pmd(*pmd))
> +		goto unlock;
>
> +	if (prot_numa && pmd_protnone(*pmd))
> +		goto unlock;
> +
> +	entry = pmdp_huge_get_and_clear_notify(mm, addr, pmd);
> +	entry = pmd_modify(entry, newprot);
> +	if (preserve_write)
> +		entry = pmd_mk_savedwrite(entry);
> +	ret = HPAGE_PMD_NR;
> +	set_pmd_at(mm, addr, pmd, entry);
> +	BUG_ON(vma_is_anonymous(vma) && !preserve_write && pmd_write(entry));
> +unlock:
> +	spin_unlock(ptl);
>  	return ret;
>  }
>