Message-ID: <6f89f0ac34956e7f527c7efa3d162b4a1f5ea71a.camel@kernel.org>
Subject: Re: 9p caching with cache=loose and cache=fscache
From: Jeff Layton
To: Christian Schoenebeck, Luis Chamberlain, Dominique Martinet
Cc: Eric Van Hensbergen, Josef Bacik, lucho@ionkov.net, v9fs-developer@lists.sourceforge.net, linux-kernel@vger.kernel.org, Amir Goldstein, Pankaj Raghav
Date: Wed, 29 Mar 2023 07:32:22 -0400
In-Reply-To: <2322056.HEUtEhvpMu@silver>
References: <2322056.HEUtEhvpMu@silver>

On Wed, 2023-03-29 at 13:19 +0200, Christian Schoenebeck wrote:
> On Wednesday, March 29, 2023 12:08:26 AM CEST Dominique Martinet wrote:
> > Luis Chamberlain wrote on Tue, Mar 28, 2023 at 10:41:02AM -0700:
> > > > "To speedup things you can also consider to use e.g. cache=loose instead.
> > >
> > > My experience is that cache=loose is totally useless.
> >
> > If the fs you mount isn't accessed by the host while the VM is up, and
> > isn't shared with another guest (e.g. "exclusive share"), you'll get
> > what you expect.
> >
> > I have no idea what people use qemu's virtfs for, but this is apparently
> > common enough that it has been recommended since 2011[1], without anyone
> > complaining until now?
> >
> > [1] https://wiki.qemu.org/index.php?title=Documentation/9psetup&diff=2178&oldid=2177
> >
> > (Now, I'm not arguing it should be recommended; my stance as a 9p
> > maintainer is that the default should be used unless you know what
> > you're doing, so the new code should just remove the 'cache=none'
> > altogether, as that's the default.
> > With the new cache models Eric is preparing, we'll get a new safe
> > default that will likely be better than cache=none; there is no reason
> > to explicitly recommend the historic safe model, as the default has
> > always been on the safe side and we have no plan of changing that.)
>
> It's not that I receive a lot of feedback on what people use 9p for, nor
> have I been QEMU's 9p maintainer for long, but so far contributors cared
> more about performance and other issues than about propagating changes
> host -> guest without reboot/remount/drop_caches. At least they did not
> care enough to work on patches.
>
> Personally I also use cache=loose and only need to push changes host->guest
> once in a while.
>
> > > > That will deploy a filesystem cache on the guest side and reduce the
> > > > amount of 9p requests to the host. As a consequence, however, the
> > > > guest might not see file changes performed on the host side *at* *all*.
> > >
> > > I think that makes it pretty useless; aren't most setups on the guest read-only?
> > >
> > > It is not about "may not see", it just won't. For example, I modified the
> > > Makefile and compiled a full kernel, and even with that series of
> > > changes, the guest *minutes later* never saw any updates.
> >
> > Read-only on the guest has nothing to do with it, nor has time.
> >
> > If the directory is never accessed on the guest before the kernel has
> > been built, you'll be able to make install on the guest -- once, even if
> > the build was done after the VM booted and the fs was mounted.
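For context, the kind of host/guest setup being discussed looks roughly like the sketch below. The directory and mount point come from the wiki example quoted later in this thread; the "hostshare" mount tag and security model are made-up illustrative choices, not taken from anyone's actual command line.

```shell
# Host side: export a directory to the guest via QEMU's virtfs
# (one long qemu command line, abbreviated to the relevant option):
#
#   qemu-system-x86_64 ... \
#       -virtfs local,path=/home/guest/9p_setup,mount_tag=hostshare,security_model=mapped-xattr
#
# Guest side: mount the share over virtio 9p. Leaving cache= out gives
# the kernel's default (cache=none at the time of this thread), which is
# coherent but slower; cache=loose trades coherency with the host for
# fewer 9p round trips.
mount -t 9p -o trans=virtio,version=9p2000.L,cache=none hostshare /tmp/shared
```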
> >
> > After it's been read once, it'll stay in cache until memory pressure (or
> > an admin action like umount/mount or sysctl vm.drop_caches=3) clears it.
> >
> > I believe that's why it appeared to work until you noticed the issue and
> > had to change the mount option -- I'd expect in most cases you'll run
> > make install once and reboot/kexec into the new kernel.
> >
> > It's not safe for your use case and cache=none definitely sounds better
> > to me, but people should make their own informed decision.
>
> It appears to me that read-only is not the average use case for 9p,
> at least from the command lines I received. It is often used in combination
> with overlayfs though.
>
> I think the reason why cache=loose was recommended as the default option on
> the QEMU wiki page ages ago was 9p's really poor performance at that
> point. I would personally not go so far as to discourage people from using
> cache=loose in general, as long as they are informed about the consequences.
> You still get a great deal of performance boost; the rest is for each
> individual to decide.
>
> Considering that Eric already has patches for revalidating the cache in the
> works, I think the changes I made on the other QEMU wiki page are
> appropriate, including the word "might", as it's soon only a matter of
> kernel version.
>
> > > > In the above example the folder /home/guest/9p_setup/ of the
> > > > host is shared with the folder /tmp/shared on the guest. We use no
> > > > cache because current caching mechanisms need more work and the
> > > > results are not what you would expect."
> > >
> > > I got a wiki account now, and I was the one who had clarified this.
> >
> > Thanks for helping make this clearer.
>
> Yep, and thanks for making a wiki account and improving the content there
> directly. Always appreciated!

Catching up on this thread.
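As an aside, the "admin action" Dominique mentions above can be spelled out like this on the guest (mount point and tag follow the wiki example quoted in this thread; "hostshare" is a made-up tag):

```shell
# Either remount the 9p share, discarding the guest's cached view so
# host-side changes become visible again...
umount /tmp/shared
mount -t 9p -o trans=virtio,version=9p2000.L,cache=loose hostshare /tmp/shared

# ...or drop the clean pagecache plus dentries/inodes system-wide.
# This is a blunt instrument: it affects every filesystem on the guest,
# not just the 9p mount.
sysctl vm.drop_caches=3
```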
Getting cache coherency right on a network filesystem is quite difficult.
It's always a balance between correctness and performance.

Some protocols (e.g. CIFS and Ceph) take a very heavy-handed approach to
try to ensure that the caches are always coherent. Basically, these clients
are only allowed to cache when the server grants permission for it. That
can have a negative effect on performance, of course.

NFS as a protocol is more "loose", but we've generally beaten its cache
coherency mechanisms into shape over the years, so you don't see these
sorts of problems there as much. FWIW, NFS uses a sliding time window to
revalidate the cache, such that it'll revalidate frequently when an inode
is changing frequently, but less so when it's more stable.

9P I haven't worked with as much, but it sounds like it doesn't try to keep
caches coherent (at least not with cache=loose).

Probably the simplest solution here is to simply unmount/mount before you
have the clients run "make modules_install && make install". That should
ensure that the client doesn't have any stale data in the cache
when the time comes to do the reads. A full reboot shouldn't be required.

-- 
Jeff Layton