Date: Wed, 14 Jun 2017 18:09:09 +0200
From: Andrea Arcangeli <aarcange@redhat.com>
To: "Kirill A. Shutemov"
Cc: Andrew Morton, Vlastimil Babka, Vineet Gupta, Russell King, Will Deacon,
	Catalin Marinas, Ralf Baechle, "David S. Miller", Heiko Carstens,
	"Aneesh Kumar K. V", Martin Schwidefsky, linux-arch@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Ingo Molnar,
	"H. Peter Anvin", Thomas Gleixner
Subject: Re: [PATCH 1/3] x86/mm: Provide pmdp_mknotpresent() helper
Message-ID: <20170614160909.GE5847@redhat.com>
References: <20170614135143.25068-1-kirill.shutemov@linux.intel.com>
	<20170614135143.25068-2-kirill.shutemov@linux.intel.com>
In-Reply-To: <20170614135143.25068-2-kirill.shutemov@linux.intel.com>

On Wed, Jun 14, 2017 at 04:51:41PM +0300, Kirill A. Shutemov wrote:
> We need an atomic way to make pmd page table entry not-present.
> This is required to implement pmdp_invalidate() that doesn't loose dirty
> or access bits.

What does the cmpxchg() loop achieve compared to xchg() and then
returning the old value (potentially with the dirty bit set when it was
not before we called xchg)?

> index f5af95a0c6b8..576420df12b8 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -1092,6 +1092,19 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
>  	clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp);
>  }
>  
> +#ifndef pmdp_mknotpresent
> +#define pmdp_mknotpresent pmdp_mknotpresent
> +static inline void pmdp_mknotpresent(pmd_t *pmdp)
> +{
> +	pmd_t old, new;
> +
> +	do {
> +		old = *pmdp;
> +		new = pmd_mknotpresent(old);
> +	} while (pmd_val(cmpxchg(pmdp, old, new)) != pmd_val(old));
> +}
> +#endif

Isn't it faster to do xchg(&xp->pmd, pmd_mknotpresent(pmd)) and have
the pmdp_invalidate() caller set the dirty bit on the page if it was
found set in the returned old pmd value (and skip the loop and the
cmpxchg)?

Thanks,
Andrea
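
For concreteness, the xchg()-based variant asked about above could look
roughly like the sketch below. This is only an illustration for the
discussion, not code from the patch under review: the names
pmdp_invalidate_get() and transfer_dirty() are hypothetical, the
vma/address arguments of the real pmdp_invalidate() interface are
omitted, and a single-word pmd (x86-64, no PAE) is assumed.

/*
 * Sketch only: make the pmd not-present with one atomic xchg() and hand
 * the old value back, instead of retrying with cmpxchg().  xchg()
 * returns whatever was in the pmd at swap time, so a dirty bit set by
 * hardware just before the swap is not lost; the not-present value
 * written does not need to carry it, because the caller recovers it
 * from the returned old pmd.
 */
static inline pmd_t pmdp_invalidate_get(pmd_t *pmdp)
{
	return __pmd(xchg(&pmdp->pmd, pmd_val(pmd_mknotpresent(*pmdp))));
}

/*
 * Caller side (hypothetical helper): propagate a dirty bit that raced
 * in, using the old pmd value returned by the swap.
 */
static inline void transfer_dirty(struct page *page, pmd_t *pmdp)
{
	pmd_t old = pmdp_invalidate_get(pmdp);

	if (pmd_dirty(old))
		set_page_dirty(page);
}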