Date: Thu, 29 Jun 2023 15:42:55 +0100
From: Matthew Wilcox
To: Sumitra Sharma
Cc: Hans de Goede, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Ira Weiny, Fabio, Deepak R Varma
Subject: Re: [PATCH] fs/vboxsf: Replace kmap() with kmap_local_{page, folio}()
References: <20230627135115.GA452832@sumitra.com> <20230629092844.GA456505@sumitra.com>
In-Reply-To: <20230629092844.GA456505@sumitra.com>

On Thu, Jun 29, 2023 at 02:28:44AM -0700, Sumitra Sharma wrote:
> On Wed, Jun 28, 2023 at 06:15:04PM +0100, Matthew Wilcox wrote:
> > Here's a more comprehensive read_folio patch.  It's not at all
> > efficient, but then if we wanted an efficient vboxsf, we'd implement
> > vboxsf_readahead() and actually do an async call with deferred setting
> > of the uptodate flag.  I can consult with anyone who wants to do all
> > this work.
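(For anyone who missed the earlier mail, that conversion has roughly the
following shape.  A minimal, untested sketch, assuming the current
synchronous vboxsf_read() helper and the vboxsf_handle stashed in
file->private_data:)

static int vboxsf_read_folio(struct file *file, struct folio *folio)
{
	struct vboxsf_handle *sf_handle = file->private_data;
	loff_t off = folio_pos(folio);
	u32 nread = PAGE_SIZE;
	u8 *buf;
	int err;

	/* Short-lived, CPU-local mapping instead of the old kmap() */
	buf = kmap_local_folio(folio, 0);
	err = vboxsf_read(sf_handle->root, sf_handle->handle, off, &nread, buf);
	if (err == 0) {
		/* Zero the tail of the page if the host returned a short read */
		memset(buf + nread, 0, PAGE_SIZE - nread);
		flush_dcache_folio(folio);
		folio_mark_uptodate(folio);
	}
	kunmap_local(buf);
	folio_unlock(folio);

	return err;
}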
>
> So, after reading the comments, I understood that the problem presented
> by Hans and Matthew is as follows:
>
> 1) In the current code, the buffers used by vboxsf_write()/vboxsf_read()
> are translated to PAGELIST-s before being passed to the hypervisor, but
> inefficiently: the code first maps a page in vboxsf_read_folio() and
> then calls page_to_phys(virt_to_page()) in hgcm_call_init_linaddr().

It does ... and I'm not even sure that virt_to_page() works for kmapped
pages.  Has it been tested with a 32-bit guest with, say, 4-8GB of memory?

> The inefficiency in the current implementation arises from the
> unnecessary mapping of a page in vboxsf_read_folio(), because the
> mapping output, i.e. the linear address, is only used deep down in
> drivers/virt/vboxguest/vboxguest_utils.c.  Hence, the mapping should be
> done in that file; to do so, the folio must be passed all the way down.
> That can be done by adding a new member, 'struct folio *folio', to
> 'struct vmmdev_hgcm_function_parameter64'.

That's not the way to do it (as Hans already said).

The other problem is that vboxsf_read() is synchronous.  It makes the
call to the host, then waits for the outcome.  What we really need is a
vboxsf_readahead() that looks something like this:

static void vboxsf_readahead(struct readahead_control *ractl)
{
	unsigned int nr = readahead_count(ractl);

	req = vbg_req_alloc(... something involving nr ...);
	... fill in the page array ...
	... submit the request ...
}

You also need to set up a kthread that will sit on the hgcm_wq and handle
the completions that come in (where you'd call folio_mark_uptodate() if
the call is successful, folio_unlock() to indicate the I/O has completed,
etc, etc); a strawman sketch of that completion path is at the end of
this mail.

Then go back to read_folio() (which can be synchronous), and maybe factor
out the parts of vboxsf_readahead() that can be reused for filling in the
vbg_req.

Hans might well have better ideas about how this could be structured; I'm
new to the vbox code.
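To make the completion path concrete, one possible shape is below.
Entirely untested, and every name here (struct vboxsf_ra_req,
vboxsf_ra_complete(), the folio array) is invented purely to illustrate;
the real thing would hang off whatever request structure vbg_req_alloc()
gives you.

/* Hypothetical per-readahead tracking; not an existing vboxsf type. */
struct vboxsf_ra_req {
	unsigned int nr_folios;
	struct folio *folios[];		/* folios from readahead, still locked */
};

/* Called from the kthread waiting on hgcm_wq when the host completes a read. */
static void vboxsf_ra_complete(struct vboxsf_ra_req *req, int status)
{
	unsigned int i;

	for (i = 0; i < req->nr_folios; i++) {
		struct folio *folio = req->folios[i];

		if (status == 0)
			folio_mark_uptodate(folio);
		/* Unlock whether or not the read succeeded: the I/O is over. */
		folio_unlock(folio);
	}
	kfree(req);
}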