From: Alexander Duyck
Date: Thu, 7 May 2020 08:26:26 -0700
Subject: Re: [PATCH 5/7] mm: move zone iterator outside of deferred_init_maxorder()
To: Daniel Jordan
Cc: Alexander Duyck, Andrew Morton, Herbert Xu, Steffen Klassert,
 Alex Williamson, Dan Williams, Dave Hansen, David Hildenbrand,
 Jason Gunthorpe, Jonathan Corbet, Josh Triplett, Kirill Tkhai,
 Michal Hocko, Pavel Machek, Pavel Tatashin, Peter Zijlstra,
 Randy Dunlap, Shile Zhang, Tejun Heo, Zi Yan,
 linux-crypto@vger.kernel.org, linux-mm, LKML
In-Reply-To: <20200506223923.z6cbixg2mhtjjlfo@ca-dmjordan1.us.oracle.com>

On Wed, May 6, 2020 at 3:39 PM Daniel Jordan wrote:
>
> On Tue, May 05, 2020 at 08:27:52AM -0700, Alexander Duyck wrote:
> > As it turns out, deferred_free_range will be setting the migratetype
> > for the page. In a sparse config the migratetype bits are stored in
> > the section bitmap, so to avoid cacheline bouncing it would make
> > sense to section-align the tasks so that only one thread touches a
> > given section, rather than having the pageblock_flags bounced
> > between threads.
>
> That's a good point, I'll change the alignment.
>
> I kicked off some runs on the Skylake bare metal system to check how
> this did, and the performance stayed the same, but see below.
>
> > It should also reduce the overhead of parallelizing the work in the
> > first place, since a section is several times larger than a
> > MAX_ORDER page and allows for more batching of the work.
>
> I think you may be assuming that threads work in MAX_ORDER batches,
> maybe because that's the job's min_chunk, but padata works
> differently. The min_chunk is a lower bound that establishes the
> smallest amount of work that makes sense for a thread to do in one
> go, so in this case it's useful to prevent starting large numbers of
> threads to initialize a tiny number of pages.
>
> Internally padata uses the total job size and min_chunk to arrive at
> the chunk size, which on big machines will be much larger than
> min_chunk. The idea is that the chunk size should be large enough to
> minimize multithreading overhead but small enough to permit load
> balancing between threads.
>
> This is probably why the results didn't change much when aligning by
> section, but that doesn't mean other systems won't benefit.

Okay, that makes sense.
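If I'm reading the sizing right, it comes out to roughly this sketch
(my paraphrase of your description, not the actual padata code; the
load-balance factor of 4 and the helper name are illustrative):

    /*
     * Sketch only: how a chunk size might fall out of the total job
     * size and min_chunk, per the description above.
     */
    static unsigned long chunk_size(unsigned long job_size,
                                    unsigned long min_chunk,
                                    unsigned long align, int nr_threads)
    {
            /* More chunks than threads so a fast thread can take extra work. */
            unsigned long size = job_size / (nr_threads * 4);

            size = max(size, min_chunk);    /* honor the caller's lower bound */
            return roundup(size, align);    /* and the caller's alignment */
    }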
> > > Maybe it's better to leave deferred_init_maxorder alone and adapt
> > > the multithreading to the existing implementation. That'd mean
> > > dealing with the pesky opaque index somehow, so
> > > deferred_init_mem_pfn_range_in_zone() could be
>
> I should have been explicit, was thinking of @i from
> deferred_init_mem_pfn_range_in_zone() when mentioning the opaque
> index.

Okay, that makes sense. However, in reality you don't need to split
that piece out. All you are really doing is splitting up the
first_init_pfn value over multiple threads, so you just need to make
use of deferred_init_mem_pfn_range_in_zone() to initialize it.

> > > generalized to find it in the thread function based on the
> > > start/end range, or it could be maintained as part of the range
> > > that padata passes to the thread function.
> >
> > You may be better off just implementing your threads to operate
> > like deferred_grow_zone does. All your worker thread really needs
> > then is to know where to start performing the page initialization,
> > and then it could go through and process an entire section worth of
> > pages. The other bit that would have to be changed is patch 6, so
> > that you combine any ranges that might span a single section
> > instead of just splitting the work up based on the ranges.
>
> How are you thinking of combining them? I don't see a way to do it
> without storing an arbitrary number of ranges somewhere for each
> thread.

So when you are putting together your data you are storing a starting
value and a length. All you end up having to do is make certain that
start pfn + size is section aligned. Then if you jump to a new section
you have the option of either adding to the size of your current range
or submitting the range and starting with a new start pfn in a new
section. All you are really doing is breaking up the
first_deferred_pfn over multiple sections. What I would do is
section-align end_pfn, and then check the next range from the zone. If
the start_pfn of the next range is less than end_pfn you merge the two
ranges by just increasing the size, otherwise you start a new range.

The idea is that you just want to define what the valid ranges of PFNs
are, and if there are sizable holes you skip over them. You would
leave most of the lifting of identifying exactly which PFNs to
initialize to the pfn_range_in_zone iterators, since those would all
be read-only accesses anyway.
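Something like this, say (a rough sketch: the iterator is the existing
deferred-init one, but submit_job() is a made-up stand-in for however
a start/size pair gets handed to a worker thread):

    unsigned long spfn, epfn, start = 0, size = 0;
    u64 i;

    for_each_free_mem_pfn_range_in_zone_from(i, zone, &spfn, &epfn) {
            /* Section-align the end so no two jobs share a section. */
            unsigned long end = ALIGN(epfn, PAGES_PER_SECTION);

            if (size && spfn < start + size) {
                    /* Next range starts inside the current job: merge. */
                    size = end - start;
            } else {
                    /* Sizable hole: close out the job, start a new one. */
                    if (size)
                            submit_job(start, size);    /* hypothetical */
                    start = ALIGN_DOWN(spfn, PAGES_PER_SECTION);
                    size = end - start;
            }
    }
    if (size)
            submit_job(start, size);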
> > If you are referring to the mo_pfn you shouldn't even need to
> > think about it.
>
> (clarified "opaque index" above)

Thanks.

> > All it is doing is guaranteeing you are processing at least a full
> > MAX_ORDER worth of pages. Without that, the logic before was to
> > either process a whole section, or just process all of memory,
> > initializing it before it started freeing it. I found it made
> > things much more efficient to process only up to MAX_ORDER at a
> > time, as you could squeeze that into the L2 cache on most x86
> > processors at least, and it reduced memory bandwidth by quite a
> > bit.
>
> Yes, that was clever, we should keep doing it that way.

Thanks.
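P.S. For reference, the MAX_ORDER batching being kept has roughly this
shape (an abridged sketch of the idea, not the actual
deferred_init_maxorder() code; init_range()/free_range() are stand-ins
for the real deferred init and free helpers):

    while (spfn < epfn) {
            /*
             * Cap each pass at one MAX_ORDER block so the struct pages
             * are still in L2 when we come back around to free them.
             */
            unsigned long t = min(epfn,
                                  ALIGN(spfn + 1, MAX_ORDER_NR_PAGES));

            nr_pages += init_range(spfn, t);    /* init struct pages */
            free_range(spfn, t);                /* free while still hot */
            spfn = t;
    }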