Date: Fri, 12 Jul 2013 09:59:00 -0700
Subject: Re: [PATCH] fs: sync: fixed performance regression
From: Paul Taysom
To: Jan Kara
Cc: Paul Taysom, Alexander Viro, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, sonnyrao@chromium.org
In-Reply-To: <20130712154314.GC22823@quack.suse.cz>
References: <1373497956-8770-1-git-send-email-taysom@chromium.org>
	<20130711105346.GA5349@quack.suse.cz>
	<20130711115832.GB5349@quack.suse.cz>
	<20130712154314.GC22823@quack.suse.cz>

On Fri, Jul 12, 2013 at 8:43 AM, Jan Kara wrote:
> On Thu 11-07-13 13:58:32, Jan Kara wrote:
>> On Thu 11-07-13 12:53:46, Jan Kara wrote:
>> > On Wed 10-07-13 16:12:36, Paul Taysom wrote:
>> > > The following commit introduced a 10x regression for syncing
>> > > inodes in ext4 with relatime enabled where just the atime had
>> > > been modified.
>> > >
>> > > commit 4ea425b63a3dfeb7707fc7cc7161c11a51e871ed
>> > > Author: Jan Kara
>> > > Date:   Tue Jul 3 16:45:34 2012 +0200
>> > >
>> > >     vfs: Avoid unnecessary WB_SYNC_NONE writeback during sys_sync and reorder sync passes
>> > >
>> > > See also: http://www.kernelhub.org/?msg=93100&p=2
>> > >
>> > > Fixed by putting the call to writeback_inodes_sb() back in.
>> > >
>> > > I'll attach the test in a reply to this e-mail.
>> > >
>> > > The test starts by creating 512 files, syncing, reading one byte
>> > > from each of those files, syncing, and then deleting each file
>> > > and syncing. The time to do each sync is printed. The process is
>> > > then repeated for 1024 files and then the next power of two, up
>> > > to 262144 files.
>> > >
>> > > Note: when running the test, the slowdown doesn't always happen,
>> > > but most runs will show it.
>> > >
>> > > In response to crbug.com/240422
>> > >
>> > > Signed-off-by: Paul Taysom
>> >   Thanks for the report. Rather than blindly reverting the change,
>> > I'd like to understand why you see such a huge regression. As the
>> > changelog in the patch says, the flusher thread should be doing
>> > async writeback equivalent to the removed one because it gets
>> > woken via wakeup_flusher_threads(). But my guess is that for some
>> > reason we end up doing all the writeback from sync_inodes_one_sb().
>> > I'll try to reproduce your results and investigate...
>>   Hum, so it must be something timing sensitive. I wasn't able to
>> reproduce the issue on my test machine in 4 runs of your test
>> program. I was able to reproduce it on my laptop on every second run
>> of the test program, but once I enabled some tracepoints the issue
>> disappeared and I didn't see it in about 10 runs.
>>
>> That being said, I think that reverting my patch is just papering
>> over the problem. We will do the async pass over inodes twice
>> instead of once, and thus the timing changes enough that you aren't
>> able to observe the problem.
>>
>> I'm looking into this more...
>   So I finally understood what's going on.
> If the system has no dirty pages at all, wakeup_flusher_threads()
> will submit work with nr_pages == 0. So wb_writeback() will bail out
> immediately without doing anything, and all the writeback is left
> for the WB_SYNC_ALL pass of sync(1), which is slow. The attached
> patch fixes the problem for me.
>
> 								Honza
> --
> Jan Kara
> SUSE Labs, CR

Jan,

Your fix is a clear win! Not only did it fix the sync-after-read
problem, but it made the sync after create faster too.

Thanks,
Paul
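P.S. For anyone who wants to try reproducing this before the test
attachment arrives, here is a minimal sketch along the lines of the
test described above. It is not the actual test program; the file
names, the one-byte writes, and the timing helper are illustrative
assumptions only.

/*
 * Minimal sketch of the kind of test described in the thread above
 * (not the actual attached test): create N files, sync, read one byte
 * from each (with relatime this typically dirties only the inode's
 * atime), sync, delete them, sync, timing each sync(), for
 * N = 512, 1024, ... 262144.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static double timed_sync(void)
{
	struct timespec a, b;

	clock_gettime(CLOCK_MONOTONIC, &a);
	sync();
	clock_gettime(CLOCK_MONOTONIC, &b);
	return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

static void run(int nfiles)
{
	char name[64];
	char c;
	int i, fd;

	for (i = 0; i < nfiles; i++) {
		snprintf(name, sizeof(name), "sync-test-%d", i);
		fd = open(name, O_CREAT | O_WRONLY | O_TRUNC, 0644);
		if (fd < 0 || write(fd, "x", 1) != 1)
			exit(1);
		close(fd);
	}
	printf("%7d files, sync after create: %f s\n", nfiles, timed_sync());

	for (i = 0; i < nfiles; i++) {
		snprintf(name, sizeof(name), "sync-test-%d", i);
		fd = open(name, O_RDONLY);
		if (fd < 0 || read(fd, &c, 1) != 1)
			exit(1);
		close(fd);
	}
	printf("%7d files, sync after read:   %f s\n", nfiles, timed_sync());

	for (i = 0; i < nfiles; i++) {
		snprintf(name, sizeof(name), "sync-test-%d", i);
		unlink(name);
	}
	printf("%7d files, sync after delete: %f s\n", nfiles, timed_sync());
}

int main(void)
{
	int n;

	for (n = 512; n <= 262144; n *= 2)
		run(n);
	return 0;
}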
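P.P.S. The failure mode Jan describes can be pictured with a small toy
model, below. This is not the real fs/fs-writeback.c code; dirty_pages,
dirty_inodes, async_pass() and sync_all_pass() are made-up stand-ins.
The only point it illustrates is the one from the thread: when only
atimes have changed there are no dirty pages, so the work handed to the
flusher is sized at nr_pages == 0, the cheap asynchronous pass does
nothing, and every dirty inode is left for the slow, wait-on-each-inode
WB_SYNC_ALL pass.

#include <stdio.h>

static long dirty_pages;   /* data pages waiting for writeback */
static long dirty_inodes;  /* inodes dirtied only by atime updates */

/* Toy stand-in for the async pass queued by wakeup_flusher_threads():
 * the work is sized from the dirty *page* count, so when nr_pages is 0
 * it returns before touching any of the dirty inodes. */
static void async_pass(long nr_pages)
{
	while (nr_pages > 0 && dirty_inodes > 0) {
		dirty_inodes--;		/* written without waiting: cheap */
		nr_pages--;
	}
}

/* Toy stand-in for the WB_SYNC_ALL pass of sync(1): writes each
 * remaining dirty inode and waits for it, which is the slow part. */
static void sync_all_pass(void)
{
	while (dirty_inodes > 0)
		dirty_inodes--;		/* written and waited on: expensive */
}

int main(void)
{
	dirty_pages = 0;	/* only atimes changed, no dirty data */
	dirty_inodes = 262144;	/* one per file touched by the test */

	async_pass(dirty_pages);	/* nr_pages == 0: does nothing */
	printf("inodes left for WB_SYNC_ALL: %ld\n", dirty_inodes);
	sync_all_pass();
	return 0;
}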