Date: Mon, 17 May 2021 09:24:50 +0100
From: Mel Gorman
To: "Uladzislau Rezki (Sony)"
Cc: Andrew Morton, linux-mm@kvack.org, LKML, Matthew Wilcox, Nicholas Piggin, Hillf Danton, Michal Hocko, Oleksiy Avramchenko, Steven Rostedt
Subject: Re: [PATCH 2/3] mm/vmalloc: Switch to bulk allocator in __vmalloc_area_node()
Message-ID: <20210517082449.GT3672@suse.de>
References: <20210516202056.2120-1-urezki@gmail.com> <20210516202056.2120-3-urezki@gmail.com>
In-Reply-To: <20210516202056.2120-3-urezki@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, May 16, 2021 at 10:20:55PM +0200, Uladzislau Rezki (Sony) wrote:
> Recently a page bulk allocator was introduced for users that need
> to obtain a number of pages in a single call.
>
> For order-0 pages, switch to alloc_pages_bulk_array_node() instead of
> alloc_pages_node(); the latter is not capable of allocating a set of
> pages, so it requires one call per page.
>
> Second, according to my tests the bulk allocator uses fewer cycles
> even in scenarios where only one page is requested. Running "perf" on
> the same test case shows the difference below:
>
>     - 45.18% __vmalloc_node
>        - __vmalloc_node_range
>           - 35.60% __alloc_pages
>              - get_page_from_freelist
>                   3.36% __list_del_entry_valid
>                   3.00% check_preemption_disabled
>                   1.42% prep_new_page
>
>     - 31.00% __vmalloc_node
>        - __vmalloc_node_range
>           - 14.48% __alloc_pages_bulk
>                3.22% __list_del_entry_valid
>           - 0.83% __alloc_pages
>                get_page_from_freelist
>
> The "test_vmalloc.sh" also shows performance improvements:
>
> fix_size_alloc_test_4MB   loops: 1000000 avg: 89105095 usec
> fix_size_alloc_test       loops: 1000000 avg: 513672 usec
> full_fit_alloc_test       loops: 1000000 avg: 748900 usec
> long_busy_list_alloc_test loops: 1000000 avg: 8043038 usec
> random_size_alloc_test    loops: 1000000 avg: 4028582 usec
> fix_align_alloc_test      loops: 1000000 avg: 1457671 usec
>
> fix_size_alloc_test_4MB   loops: 1000000 avg: 62083711 usec
> fix_size_alloc_test       loops: 1000000 avg: 449207 usec
> full_fit_alloc_test       loops: 1000000 avg: 735985 usec
> long_busy_list_alloc_test loops: 1000000 avg: 5176052 usec
> random_size_alloc_test    loops: 1000000 avg: 2589252 usec
> fix_align_alloc_test      loops: 1000000 avg: 1365009 usec
>
> For example, the 4MB allocations show a ~30% gain; all the other
> tests improve as well.
>
> Signed-off-by: Uladzislau Rezki (Sony)

FWIW, it passed build and boot tests.

Acked-by: Mel Gorman

--
Mel Gorman
SUSE Labs