Date: Mon, 7 Aug 2017 13:46:37 -0500 (CDT)
From: Christopher Lameter
To: "Huang, Ying"
cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Andrea Arcangeli, "Kirill A. Shutemov", Nadia Yvette Chambers,
    Michal Hocko, Jan Kara, Matthew Wilcox, Hugh Dickins, Minchan Kim,
    Shaohua Li
Subject: Re: [PATCH -mm] mm: Clear to access sub-page last when clearing huge page
In-Reply-To: <20170807072131.8343-1-ying.huang@intel.com>

On Mon, 7 Aug 2017, Huang, Ying wrote:

> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4374,9 +4374,31 @@ void clear_huge_page(struct page *page,
>  	}
>
>  	might_sleep();
> -	for (i = 0; i < pages_per_huge_page; i++) {
> +	VM_BUG_ON(clamp(addr_hint, addr, addr +
> +			(pages_per_huge_page << PAGE_SHIFT)) != addr_hint);
> +	n = (addr_hint - addr) / PAGE_SIZE;
> +	if (2 * n <= pages_per_huge_page) {
> +		base = 0;
> +		l = n;
> +		for (i = pages_per_huge_page - 1; i >= 2 * n; i--) {
> +			cond_resched();
> +			clear_user_highpage(page + i, addr + i * PAGE_SIZE);
> +		}

I really like the idea behind the patch, but this is not clearing from the
last to the first byte of the huge page. What seems to be happening here is
clearing from the last page to the first page, and I would think that within
each page the clearing is from first byte to last byte.

Maybe more gains can be had by really clearing from the last to the first
byte of the huge page instead of this jumping over 4k addresses?
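
To make the distinction concrete, below is a minimal userspace C sketch (not
the kernel code; clear_reverse_pages() and clear_reverse_bytes() are names
made up for this illustration). The first function mirrors what the patch
does for the tail of the huge page: 4k sub-pages are cleared in reverse
order, but memset() still writes each sub-page from its first byte to its
last. The second clears the whole huge-page-sized buffer strictly from its
last byte down to its first, which is what clearing "from last to first
byte" would literally mean.

#include <stdlib.h>
#include <string.h>

#define SUBPAGE_SIZE 4096UL

/* Patch-style: clear sub-pages in reverse order; within each
 * sub-page, memset() still writes from first byte to last. */
static void clear_reverse_pages(char *buf, unsigned long pages)
{
        for (unsigned long i = pages; i-- > 0; )
                memset(buf + i * SUBPAGE_SIZE, 0, SUBPAGE_SIZE);
}

/* Byte-reversed variant: clear strictly from the last byte of
 * the region down to the first, with no forward runs at all. */
static void clear_reverse_bytes(char *buf, unsigned long pages)
{
        for (unsigned long off = pages * SUBPAGE_SIZE; off-- > 0; )
                buf[off] = 0;
}

int main(void)
{
        unsigned long pages = 512;  /* one 2MB huge page worth of 4k sub-pages */
        char *buf = malloc(pages * SUBPAGE_SIZE);

        if (!buf)
                return 1;
        clear_reverse_pages(buf, pages);
        clear_reverse_bytes(buf, pages);
        free(buf);
        return 0;
}

Whether the byte-reversed variant would actually help is an open question: a
plain backwards byte loop gives up the wide stores behind memset() and
clear_page(), so any cache-warmth gain near the faulting address could be
eaten by a slower clear. The sketch only shows the access-order difference
being discussed, not a measured win.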