From: Johannes Weiner
To: Andi Kleen
Cc: Andrew Morton, Rik van Riel, Peter Zijlstra, Hugh Dickins,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [patch][v2] swap: virtual swap readahead
Date: Wed, 3 Jun 2009 15:27:52 +0200
Message-ID: <20090603132751.GA1813@cmpxchg.org>
References: <20090602223738.GA15475@cmpxchg.org> <20090602233457.GY1065@one.firstfloor.org>
In-Reply-To: <20090602233457.GY1065@one.firstfloor.org>

On Wed, Jun 03, 2009 at 01:34:57AM +0200, Andi Kleen wrote:
> On Wed, Jun 03, 2009 at 12:37:39AM +0200, Johannes Weiner wrote:
> > + *
> > + * Caller must hold down_read on the vma->vm_mm if vma is not NULL.
> > + */
> > +struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > +			struct vm_area_struct *vma, unsigned long addr)
> > +{
> > +	unsigned long start, pos, end;
> > +	unsigned long pmin, pmax;
> > +	int cluster, window;
> > +
> > +	if (!vma || !vma->vm_mm)	/* XXX: shmem case */
> > +		return swapin_readahead_phys(entry, gfp_mask, vma, addr);
> > +
> > +	cluster = 1 << page_cluster;
> > +	window = cluster << PAGE_SHIFT;
> > +
> > +	/* Physical range to read from */
> > +	pmin = swp_offset(entry) & ~(cluster - 1);
>
> Is cluster really properly sign extended on 64bit? Looks a little
> dubious. long from the start would be safer

Fixed.

> > +	/* Virtual range to read from */
> > +	start = addr & ~(window - 1);
>
> Same.

Fixed.

> > +		pgd = pgd_offset(vma->vm_mm, pos);
> > +		if (!pgd_present(*pgd))
> > +			continue;
> > +		pud = pud_offset(pgd, pos);
> > +		if (!pud_present(*pud))
> > +			continue;
> > +		pmd = pmd_offset(pud, pos);
> > +		if (!pmd_present(*pmd))
> > +			continue;
> > +		pte = pte_offset_map_lock(vma->vm_mm, pmd, pos, &ptl);
>
> You could be more efficient here by using the standard mm/* nested loop
> pattern that avoids relookup of everything in each iteration. I suppose
> it would mainly make a difference with 32bit highpte where mapping a pte
> can be somewhat costly. And you would take less locks this way.

I ran into weird problems here.  The above version is actually faster
in the benchmarks than writing a nested level walker or using
walk_page_range().  Still digging but it can take some time.  Busy
week :(
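For reference, the nested pattern in question would look roughly like
the sketch below.  It is untested, and the swapin_walk_* names are made
up for illustration; the point is only that each pgd/pud/pmd entry is
looked up once per covered range and the pte page is mapped and locked
once per pmd, instead of redoing all four lookups for every page:

static void swapin_walk_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
				  unsigned long addr, unsigned long end)
{
	pte_t *pte;
	spinlock_t *ptl;

	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
	do {
		/* inspect *pte, note swap entries to read ahead later */
	} while (pte++, addr += PAGE_SIZE, addr != end);
	pte_unmap_unlock(pte - 1, ptl);
}

static void swapin_walk_pmd_range(struct vm_area_struct *vma, pud_t *pud,
				  unsigned long addr, unsigned long end)
{
	pmd_t *pmd;
	unsigned long next;

	pmd = pmd_offset(pud, addr);
	do {
		next = pmd_addr_end(addr, end);
		if (pmd_none_or_clear_bad(pmd))
			continue;
		swapin_walk_pte_range(vma, pmd, addr, next);
	} while (pmd++, addr = next, addr != end);
}

static void swapin_walk_pud_range(struct vm_area_struct *vma, pgd_t *pgd,
				  unsigned long addr, unsigned long end)
{
	pud_t *pud;
	unsigned long next;

	pud = pud_offset(pgd, addr);
	do {
		next = pud_addr_end(addr, end);
		if (pud_none_or_clear_bad(pud))
			continue;
		swapin_walk_pmd_range(vma, pud, addr, next);
	} while (pud++, addr = next, addr != end);
}

static void swapin_walk(struct vm_area_struct *vma,
			unsigned long addr, unsigned long end)
{
	pgd_t *pgd;
	unsigned long next;

	pgd = pgd_offset(vma->vm_mm, addr);
	do {
		next = pgd_addr_end(addr, end);
		if (pgd_none_or_clear_bad(pgd))
			continue;
		swapin_walk_pud_range(vma, pgd, addr, next);
	} while (pgd++, addr = next, addr != end);
}

The actual read_swap_cache_async() calls sleep, so they would still have
to happen outside the pte lock, which is part of the extra state that
needs to be carried around.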
> > +		page = read_swap_cache_async(swp, gfp_mask, vma, pos);
> > +		if (!page)
> > +			continue;
>
> That's out of memory, break would be better here because prefetch
> while oom is usually harmful.

It can also happen due to a race with something releasing the swap
slot (i.e. swap_duplicate() fails).  But the old version did a break
too and this patch shouldn't do it differently.  Fixed.

> > +		page_cache_release(page);
> > +	}
> > +	lru_add_drain();	/* Push any new pages onto the LRU now */
> > +	return read_swap_cache_async(entry, gfp_mask, vma, addr);
>
> Shouldn't that page be already handled in the loop earlier? Why doing that
> again? It would be better to remember it from there.

When doing the nested page table level walker, communicating even more
state back and forth gets pretty ugly.  I see what I can do.

Thanks for your input Andi,

	Hannes