Date: Mon, 04 May 2020 16:38:03 -0700
From: Josh Triplett <josh@joshtriplett.org>
To: Alexander Duyck, Daniel Jordan
Cc: Andrew Morton, Herbert Xu, Steffen Klassert, Alex Williamson,
 Alexander Duyck, Dan Williams, Dave Hansen, David Hildenbrand,
 Jason Gunthorpe, Jonathan Corbet, Kirill Tkhai, Michal Hocko,
 Pavel Machek, Pavel Tatashin, Peter Zijlstra, Randy Dunlap,
 Shile Zhang, Tejun Heo, Zi Yan, linux-crypto@vger.kernel.org,
 linux-mm, LKML
Subject: Re: [PATCH 6/7] mm: parallelize deferred_init_memmap()
Message-ID: <3C3C62BE-6363-41C3-834C-C3124EB3FFAB@joshtriplett.org>
References: <20200430201125.532129-1-daniel.m.jordan@oracle.com>
 <20200430201125.532129-7-daniel.m.jordan@oracle.com>
X-Mailing-List: linux-crypto@vger.kernel.org

On May 4,
2020 3:33:58 PM PDT, Alexander Duyck wrote:
>On Thu, Apr 30, 2020 at 1:12 PM Daniel Jordan wrote:
>>	/*
>>-	 * Initialize and free pages in MAX_ORDER sized increments so
>>-	 * that we can avoid introducing any issues with the buddy
>>-	 * allocator.
>>+	 * More CPUs always led to greater speedups on tested systems, up to
>>+	 * all the nodes' CPUs. Use all since the system is otherwise idle now.
>>	 */
>
>I would be curious about your data. That isn't what I have seen in the
>past. Typically only up to about 8 or 10 CPUs gives you any benefit,
>beyond that I was usually cache/memory bandwidth bound.

I've found pretty much linear performance scaling up to the point of
saturating memory bandwidth, and on the systems I was testing, I didn't
saturate memory bandwidth until using about the full number of physical
cores. From the number of cores up to the number of hardware threads,
performance stayed about flat; it didn't get any better or worse.

- Josh