Date: Fri, 16 Jun 2017 16:27:20 +0200
From: Andrea Arcangeli <aarcange@redhat.com>
To: Minchan Kim
Cc: "Kirill A. Shutemov", "Kirill A. Shutemov", Andrew Morton, Vlastimil Babka, Vineet Gupta, Russell King, Will Deacon, Catalin Marinas, Ralf Baechle, "David S. Miller", "Aneesh Kumar K . V", Martin Schwidefsky, Heiko Carstens, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCHv2 3/3] mm: Use updated pmdp_invalidate() inteface to track dirty/accessed bits
Message-ID: <20170616142720.GH11676@redhat.com>
References: <20170615145224.66200-1-kirill.shutemov@linux.intel.com> <20170615145224.66200-4-kirill.shutemov@linux.intel.com> <20170616030250.GA27637@bbox> <20170616131908.3rxtm2w73gdfex4a@node.shutemov.name> <20170616135209.GA29542@bbox>
In-Reply-To: <20170616135209.GA29542@bbox>

Hello Minchan,

On Fri, Jun 16, 2017 at 10:52:09PM +0900, Minchan Kim wrote:
> > > > @@ -1995,8 +1984,6 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> > > >  		if (soft_dirty)
> > > >  			entry = pte_mksoft_dirty(entry);
> > > >  	}
> > > > -	if (dirty)
> > > > -		SetPageDirty(page + i);
> > > >  	pte = pte_offset_map(&_pmd, addr);
[..]
> > split_huge_page set PG_dirty to all subpages unconditionally?
>
> If it's true, yes, it doesn't break MADV_FREE. However, I didn't spot
> that piece of code. What I found one is just __split_huge_page_tail
> which set PG_dirty to subpage if head page is dirty. IOW, if the head
> page is not dirty, tail page will be clean, too.
> Could you point out what routine set PG_dirty to all subpages unconditionally?

On a side note, the snippet deleted above was useless: as long as there's at least one huge pmd left to split, the physical page must still be compound and huge, and as long as that's the case the tail pages' PG_dirty bit is meaningless (even if set, it's going to be clobbered during the physical split). In short, PG_dirty is only meaningful in the head as long as the page is compound. The physical split in __split_huge_page_tail transfers the head's value to the tails like you mentioned; that's all as far as I can tell.