Date: Thu, 29 Jun 2023 04:16:04
 +0100
From: Matthew Wilcox
To: "Fabio M. De Francesco"
Cc: Sumitra Sharma, Hans de Goede, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Ira Weiny, Deepak R Varma
Subject: Re: [PATCH] fs/vboxsf: Replace kmap() with kmap_local_{page, folio}()
References: <20230627135115.GA452832@sumitra.com> <6924669.18pcnM708K@suse>
In-Reply-To: <6924669.18pcnM708K@suse>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jun 29, 2023 at 12:23:54AM +0200, Fabio M. De Francesco wrote:
> > -	buf = kmap(page);
> > +	do {
>
> Please let me understand why you are calling vboxsf_read() in a loop,
> a PAGE_SIZE at a time.

Because kmap_local_folio() can only (guarantee to) map one page at a
time.  Also, vboxsf_read() has only been tested with a single page at
a time.

> If I understand the current code, it reads a single page at offset zero
> of a folio and then memset()s with zeros from &buf[nread] up to the end
> of the page.  Then it seems that this function currently assumes that
> the folio doesn't need to be read until "offset < folio_size(folio)"
> becomes false.
>
> Does it imply that the folio is always one page in size?  I'm surely
> missing some basics...

vboxsf does not yet claim to support large folios, so every folio it
sees will be only a single page in size.  Hopefully at some point that
will change.  Again, somebody would need to test that.  In the
meantime, if someone is going to the trouble of switching over to the
folio API, let's actually include support for large folios.
> > -	kunmap(page);
> > -	unlock_page(page);
> > +	if (!err) {
> > +		flush_dcache_folio(folio);
> > +		folio_mark_uptodate(folio);
> > +	}
> > +	folio_unlock(folio);
>
> Shouldn't we call folio_lock() to lock the folio so that we can unlock
> it with folio_unlock()?
>
> If so, I can't find either a folio_lock() or a page_lock() anywhere in
> this filesystem.
>
> Again, sorry for not understanding; can you please explain it?

Ira gave the minimal explanation, but a slightly fuller one is that the
folio is locked while it is being fetched from the backing store.  That
prevents both a second thread from reading it while another thread is
bringing it uptodate, and two threads trying to bring it uptodate at
the same time.  Most filesystems have an asynchronous read_folio, so
you don't see the folio_unlock() in the read_folio() function itself;
instead it is in the I/O completion path.  vboxsf is synchronous.