Date: Wed, 23 Jun 2021 08:20:44 +1000
From: Dave Chinner
To: David Laight
Cc: 'David Howells', Al Viro, "torvalds@linux-foundation.org",
	Ted Ts'o, Dave Hansen, Andrew Morton, "willy@infradead.org",
	"linux-mm@kvack.org", "linux-ext4@vger.kernel.org",
	"linux-fsdevel@vger.kernel.org", "linux-kernel@vger.kernel.org"
Subject: Re: Do we need to unrevert "fs: do not prefault sys_write() user buffer pages"?
Message-ID: <20210622222044.GI2419729@dread.disaster.area>
References: <3221175.1624375240@warthog.procyon.org.uk>
 <3225322.1624379221@warthog.procyon.org.uk>
 <7a6d8c55749d46d09f6f6e27a99fde36@AcuMS.aculab.com>
In-Reply-To: <7a6d8c55749d46d09f6f6e27a99fde36@AcuMS.aculab.com>
List-ID: <linux-ext4.vger.kernel.org>

On Tue, Jun 22, 2021 at 09:55:09PM +0000, David Laight wrote:
> From: David Howells
> > Sent: 22 June 2021 17:27
> >
> > Al Viro wrote:
> >
> > > On Tue, Jun 22, 2021 at 04:20:40PM +0100, David Howells wrote:
> > >
> > > > and wondering if the iov_iter_fault_in_readable() is actually effective.
> > > > Yes, it can make sure that the page we're intending to modify is dragged
> > > > into the pagecache and marked uptodate so that it can be read from, but is
> > > > it possible for the page to then get reclaimed before we get to
> > > > iov_iter_copy_from_user_atomic()?  a_ops->write_begin() could potentially
> > > > take a long time, say if it has to go and get a lock/lease from a server.
> > >
> > > Yes, it is.  So what?  We'll just retry.  You *can't* take faults while
> > > holding some pages locked; not without shitloads of deadlocks.
> >
> > In that case, can we amend the comment immediately above
> > iov_iter_fault_in_readable()?
> >
> > 	/*
> > 	 * Bring in the user page that we will copy from _first_.
> > 	 * Otherwise there's a nasty deadlock on copying from the
> > 	 * same page as we're writing to, without it being marked
> > 	 * up-to-date.
> > 	 *
> > 	 * Not only is this an optimisation, but it is also required
> > 	 * to check that the address is actually valid, when atomic
> > 	 * usercopies are used, below.
> > 	 */
> > 	if (unlikely(iov_iter_fault_in_readable(i, bytes))) {
> >
> > The first part suggests this is for deadlock avoidance.  If that's not true,
> > then this should perhaps be changed.
>
> I'd say something like:
> 	/*
> 	 * The actual copy_from_user() is done with a lock held
> 	 * so cannot fault in missing pages.
> 	 * So fault in the pages first.
> 	 * If they get paged out the inatomic usercopy will fail
> 	 * and the whole operation is retried.
> 	 *
> 	 * Hopefully there are enough memory pages available to
> 	 * stop this looping forever.
> 	 */

What about the other 4 or 5 copies of this loop in the kernel? This is
a pattern, not a one-off implementation. Comments describing how the
pattern works belong in the API documentation, not on a single
implementation of the pattern...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
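
For reference, the fault-in/lock/atomic-copy/retry pattern the thread is
arguing about looks roughly like the sketch below. It is loosely modelled
on generic_perform_write() in mm/filemap.c as it stood around v5.13; the
function name buffered_write_loop is made up for illustration, and error
handling, dirty throttling and the single-segment fallback are trimmed,
so treat it as a sketch of the structure rather than the actual kernel
implementation.

#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>
#include <linux/uio.h>

/* Illustrative only: a simplified buffered write loop. */
static ssize_t buffered_write_loop(struct file *file, struct iov_iter *i,
				   loff_t pos)
{
	struct address_space *mapping = file->f_mapping;
	const struct address_space_operations *a_ops = mapping->a_ops;
	ssize_t written = 0;
	int status = 0;

	do {
		struct page *page;
		void *fsdata;
		unsigned long offset = pos & (PAGE_SIZE - 1);
		size_t bytes = min_t(size_t, PAGE_SIZE - offset,
				     iov_iter_count(i));
		size_t copied;

		/*
		 * Fault the source user pages in *before* any pagecache
		 * page is locked. Taking a page fault with the destination
		 * page locked can deadlock when source and destination
		 * overlap, so the copy below must be atomic (non-faulting)
		 * and the fault-in happens up front instead.
		 */
		if (unlikely(iov_iter_fault_in_readable(i, bytes))) {
			status = -EFAULT;
			break;
		}

		/* May sleep, take locks, or talk to a server. */
		status = a_ops->write_begin(file, mapping, pos, bytes, 0,
					    &page, &fsdata);
		if (unlikely(status < 0))
			break;

		/*
		 * Atomic usercopy: if the source page was reclaimed between
		 * the fault-in above and this point, this returns a short
		 * count instead of faulting.
		 */
		copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);
		flush_dcache_page(page);

		status = a_ops->write_end(file, mapping, pos, bytes, copied,
					  page, fsdata);
		if (unlikely(status < 0))
			break;
		copied = status;	/* write_end() returns bytes committed */

		iov_iter_advance(i, copied);
		pos += copied;
		written += copied;

		/*
		 * copied == 0 means the atomic copy faulted: loop back,
		 * which re-runs the fault-in and retries the copy. The real
		 * code additionally falls back to a single-segment length
		 * here to guarantee forward progress.
		 */
	} while (iov_iter_count(i));

	return written ? written : status;
}

The same structure, with local variations, appears in the buffered write
paths of several filesystems, which is the point Dave Chinner is making
about documenting the pattern at the iov_iter API level rather than on
any single copy of the loop.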