Date: Wed, 10 Jun 2015 18:07:00 +0100
From: Mel Gorman
To: Linus Torvalds
Cc: Andi Kleen, Dave Hansen, Ingo Molnar, Andrew Morton, Rik van Riel,
    Hugh Dickins, Minchan Kim, H Peter Anvin, Linux-MM, LKML,
    Peter Zijlstra, Thomas Gleixner
Subject: Re: [PATCH 0/3] TLB flush multiple pages per IPI v5
Message-ID: <20150610170700.GG26425@suse.de>
References: <20150609084739.GQ26425@suse.de>
    <20150609103231.GA11026@gmail.com>
    <20150609112055.GS26425@suse.de>
    <20150609124328.GA23066@gmail.com>
    <5577078B.2000503@intel.com>
    <55771909.2020005@intel.com>
    <55775749.3090004@intel.com>
    <20150610131354.GO19417@two.firstfloor.org>

On Wed, Jun 10, 2015 at 09:17:15AM -0700, Linus Torvalds wrote:
> On Wed, Jun 10, 2015 at 6:13 AM, Andi Kleen wrote:
> >
> > Assuming the page tables are cache-hot... And hot here does not mean
> > L3 cache, but higher. But a memory intensive workload can easily
> > violate that.
>
> In practice, no.
>
> You'll spend all your time on the actual real data cache misses, the
> TLB misses won't be any more noticeable.
>
> And if your access patterns are even *remotely* cache-friendly (ie
> _not_ spending all your time just waiting for regular data cache
> misses), then a radix-tree-like page table like Intel's will have much
> better locality in the page tables than in the actual data. So again,
> the TLB misses won't be your big problem.
>
> There may be pathological cases where you just look at one word per
> page, but let's face it, we don't optimize for pathological or
> unrealistic cases.
>

It's concerns like this that have me avoiding any micro-benchmarking
approach that tries to measure the indirect costs of refills. No matter
what the microbenchmark does, there will be other cases that render it
irrelevant.

> And the thing is, you need to look at the costs. Single-page
> invalidation taking hundreds of cycles? Yeah, we definitely need to
> take the downside of trying to be clever into account.
>
> If the invalidation was really cheap, the rules might change. As it
> is, I really don't think there is any question about this.
>
> That's particularly true when the single-page invalidation approach
> has lots of *software* overhead too - not just the complexity, but
> even "obvious" costs of feeding the list of pages to be invalidated
> across CPUs. Think about it - there are cache misses there too, and
> because we do those across CPUs those cache misses are *mandatory*.
>
> So trying to avoid a few TLB misses by forcing mandatory cache misses
> and extra complexity, and by doing lots of 200+ cycle operations?
> Really? In what universe does that sound like a good idea?
>
> Quite frankly, I can pretty much *guarantee* that you didn't actually
> think about any real numbers, you've just been taught that fairy-tale
> of "TLB misses are expensive". As if TLB entries were somehow sacred.
>

Everyone has been taught that one. Papers I've read from the last two
years on TLB implementations or page reclaim management bring it up as
a supporting point for whatever they are proposing. It is partly why I
kept PFN tracking: it puts much of the cost on the reclaimer and
minimises interference on the recipient of the IPI. I still think it
was a rational concern, but I will assume that refills are cheaper than
smart invalidations until it can be proven otherwise.
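
To be concrete about where those costs land, the mechanism under
discussion boils down to something like the sketch below. To be clear,
this is illustrative only: the structure layout, the function names and
both constants are invented here and are not what the series actually
uses.

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <linux/smp.h>
#include <asm/tlbflush.h>

#define BATCH_TLBFLUSH_SIZE     32      /* illustrative value only */
#define TLB_FULL_FLUSH_CUTOFF    8      /* illustrative value only */

struct tlb_unmap_batch {
        struct cpumask cpumask;         /* CPUs that may hold stale entries */
        unsigned int nr;
        unsigned long addrs[BATCH_TLBFLUSH_SIZE];
};

/* IPI handler, run on each CPU in batch->cpumask. */
static void flush_batched_entries(void *info)
{
        struct tlb_unmap_batch *batch = info;
        unsigned int i;

        if (batch->nr > TLB_FULL_FLUSH_CUTOFF) {
                /* Flush everything and pay for the refills afterwards. */
                local_flush_tlb();
                return;
        }

        /* Targeted invalidation; each of these is a 200+ cycle operation. */
        for (i = 0; i < batch->nr; i++)
                __flush_tlb_single(batch->addrs[i]);
}

/* Reclaimer side: one IPI covers every address batched so far. */
static void flush_unmap_batch(struct tlb_unmap_batch *batch)
{
        if (!batch->nr)
                return;

        /* The local CPU would need flushing separately; elided here. */
        smp_call_function_many(&batch->cpumask, flush_batched_entries,
                               batch, true);
        cpumask_clear(&batch->cpumask);
        batch->nr = 0;
}

/* Called for each page as the reclaimer unmaps it. */
static void batch_unmapped_page(struct tlb_unmap_batch *batch,
                                struct mm_struct *mm, unsigned long addr)
{
        cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
        batch->addrs[batch->nr++] = addr;

        if (batch->nr == BATCH_TLBFLUSH_SIZE)
                flush_unmap_batch(batch);
}

The reclaimer pays for building the batch and for the cache misses of
shipping it across CPUs; the receiving CPUs pay either the per-page
invalidations or the refills after a full flush. Lowering
BATCH_TLBFLUSH_SIZE multiplies the number of IPIs, which is the
tradeoff that comes up again below.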
> If somebody can show real numbers on a real workload, that's one
> thing.

The last adjustments made today to the series are at

http://git.kernel.org/cgit/linux/kernel/git/mel/linux-balancenuma.git/log/?h=mm-vmscan-lessipi-v7r5

I'll redo it on top of 4.2-rc1 whenever that happens so it gets a full
round in linux-next. Patch 4 can be revisited if a real workload is
found that is not deliberately pathological running on a CPU that
matters. The forward port of patch 4 for testing will be trivial. The
series also separates out the dynamic allocation of the structure so
that it can be excluded if deemed an unnecessary complication.

> So anyway, I like the patch series. I just think that the final patch
> - the one that actually saves the addresses and limits things to
> BATCH_TLBFLUSH_SIZE - should be limited.

I see your logic, but if the batch is limited then we send more IPIs
(a limit of N addresses means roughly M/N IPIs to unmap M pages instead
of one) and it's all crappy tradeoffs. If a real workload complains,
it'll be far easier to work with.

-- 
Mel Gorman
SUSE Labs