Date: Thu, 3 Aug 2017 15:50:47 +0200
From: Michal Hocko
To: Wei Wang
Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
	kvm@vger.kernel.org, linux-mm@kvack.org, mst@redhat.com,
	mawilcox@microsoft.com, akpm@linux-foundation.org,
	virtio-dev@lists.oasis-open.org, david@redhat.com,
	cornelia.huck@de.ibm.com, mgorman@techsingularity.net,
	aarcange@redhat.com, amit.shah@redhat.com, pbonzini@redhat.com,
	liliang.opensource@gmail.com, yang.zhang.wz@gmail.com,
	quan.xu@aliyun.com
Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On Thu 03-08-17 21:17:25, Wei Wang wrote:
> On 08/03/2017 08:41 PM, Michal Hocko wrote:
> >On Thu 03-08-17 20:11:58, Wei Wang wrote:
> >>On 08/03/2017 07:28 PM, Michal Hocko wrote:
> >>>On Thu 03-08-17 19:27:19, Wei Wang wrote:
> >>>>On 08/03/2017 06:44 PM, Michal Hocko wrote:
> >>>>>On Thu 03-08-17 18:42:15, Wei Wang wrote:
> >>>>>>On 08/03/2017 05:11 PM, Michal Hocko wrote:
> >>>>>>>On Thu 03-08-17 14:38:18, Wei Wang wrote:
> >>>>>[...]
> >>>>>>>>+static int report_free_page_block(struct zone *zone, unsigned int order,
> >>>>>>>>+				  unsigned int migratetype, struct page **page)
> >>>>>>>This is just too ugly and wrong actually. Never provide struct page
> >>>>>>>pointers outside of the zone->lock. What I had in mind was to simply
> >>>>>>>walk the free lists of the suitable order and call the callback for each
> >>>>>>>free block. Something as simple as
> >>>>>>>
> >>>>>>>	for (i = 0; i < MAX_NR_ZONES; i++) {
> >>>>>>>		struct zone *zone = &pgdat->node_zones[i];
> >>>>>>>
> >>>>>>>		if (!populated_zone(zone))
> >>>>>>>			continue;
> >>>>>>>		spin_lock_irqsave(&zone->lock, flags);
> >>>>>>>		for (order = min_order; order < MAX_ORDER; ++order) {
> >>>>>>>			struct free_area *free_area = &zone->free_area[order];
> >>>>>>>			enum migratetype mt;
> >>>>>>>			struct page *page;
> >>>>>>>
> >>>>>>>			if (!free_area->nr_free)
> >>>>>>>				continue;
> >>>>>>>
> >>>>>>>			for_each_migratetype_order(order, mt) {
> >>>>>>>				list_for_each_entry(page,
> >>>>>>>						&free_area->free_list[mt], lru) {
> >>>>>>>					pfn = page_to_pfn(page);
> >>>>>>>					visit(opaque2, pfn, 1 << order);
> >>>>>>>				}
> >>>>>>>			}
> >>>>>>>		}
> >>>>>>>
> >>>>>>>		spin_unlock_irqrestore(&zone->lock, flags);
> >>>>>>>	}
> >>>>>>>
> >>>>>>>[...]
> >>>>>>I think the above would hold the lock for too long. That's why we
> >>>>>>prefer to take one free page block each time; taking them one by one
> >>>>>>doesn't make a difference in terms of the performance that we need.
> >>>>>I think you should start with the simple approach and improve incrementally
> >>>>>if this turns out not to be optimal. I really detest taking struct pages
> >>>>>outside of the lock. You never know what might happen after the lock is
> >>>>>dropped. E.g. can you race with memory hotremove?
> >>>>The caller won't use the pages returned from the function, so I think there
> >>>>shouldn't be an issue or race if the returned pages get used (i.e. are not
> >>>>free anymore) or are simply gone due to hotremove.
> >>>No, this is just too error prone. Consider that the struct page pointer
> >>>itself could become invalid in the meantime. Please always keep robustness
> >>>in mind first. Optimizations are nice, but it is not even clear whether
> >>>the simple variant will cause any problems.
> >>
> >>How about this:
> >>
> >>	for_each_populated_zone(zone) {
> >>		for_each_migratetype_order_decend(min_order, order, type) {
> >>			do {
> >>	=>			spin_lock_irqsave(&zone->lock, flags);
> >>				ret = report_free_page_block(zone, order, type,
> >>							     &page);
> >>				if (!ret) {
> >>					pfn = page_to_pfn(page);
> >>					nr_pages = 1 << order;
> >>					visit(opaque1, pfn, nr_pages);
> >>				}
> >>	=>			spin_unlock_irqrestore(&zone->lock, flags);
> >>			} while (!ret);
> >>		}
> >>	}
> >>
> >>In this way, we can still keep the lock granularity at one free page block
> >>while having the struct page operated on under the lock.
> >How can you continue the iteration of a free_list after the lock has been
> >dropped?
> 
> report_free_page_block() handles all the possible cases after the lock is
> dropped. For example, if the previously reported page is no longer on the
> free list, the first node of the free list of this order is given instead.
> This works because page allocation takes page blocks from the head of the
> list towards the end. For example, with free blocks
> 
> 	1, 2, 3, 4, 5, 6
> 
> if the previously reported free block is 2, and when we pass 2 to the report
> function to get the next page block we find that 1, 2 and 3 are all gone,
> it will report 4, which is now the head of the free list.

As I've said earlier: start simple and optimize incrementally, with some
numbers to justify the more subtle code.
-- 
Michal Hocko
SUSE Labs
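
For reference, below is a minimal sketch of the resume-or-restart behaviour
Wei describes above. It is not the code from the patch: only the function
name and signature come from the quoted snippet, while the return convention
(0 when a block is reported, -ENOENT when the list is exhausted) and the
helpers used to recheck the previous block (PageBuddy(), the mm-internal
page_order(), and the generic list iterators) are assumptions made for
illustration. It also makes Michal's robustness concern visible: the
previously reported struct page is inspected again after the lock was
dropped and re-taken.

	/*
	 * Sketch only, with assumed helpers and return codes; called with
	 * zone->lock held.  On success, *page points at the next free block
	 * of the given order/migratetype to report.
	 */
	static int report_free_page_block(struct zone *zone, unsigned int order,
					  unsigned int migratetype,
					  struct page **page)
	{
		struct list_head *list =
			&zone->free_area[order].free_list[migratetype];
		struct page *prev = *page;

		if (list_empty(list))
			return -ENOENT;		/* nothing (left) at this order */

		if (!prev) {
			/* first call: start from the head of the list */
			*page = list_first_entry(list, struct page, lru);
			return 0;
		}

		/*
		 * Note: dereferencing 'prev' here is exactly what Michal
		 * objects to -- it was reported while the lock was dropped,
		 * so the struct page may no longer be valid (e.g. after
		 * memory hotremove).
		 */
		if (PageBuddy(prev) && page_order(prev) == order) {
			/* still a free block of this order: continue from it */
			if (list_is_last(&prev->lru, list))
				return -ENOENT;
			*page = list_next_entry(prev, lru);
			return 0;
		}

		/*
		 * The previously reported block was allocated (or merged) in
		 * the meantime.  Allocation consumes blocks from the head of
		 * the free list, so restart from the current head, as in the
		 * 1..6 example above.
		 */
		*page = list_first_entry(list, struct page, lru);
		return 0;
	}

The caller then loops exactly as in Wei's snippet: take zone->lock, call the
function, report the block if it returned 0, drop the lock, and stop at the
first non-zero return for that order/migratetype.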