References: <20231009064230.2952396-1-surenb@google.com>
 <20231009064230.2952396-3-surenb@google.com>
 <214b78ed-3842-5ba1-fa9c-9fa719fca129@redhat.com>
 <478697aa-f55c-375a-6888-3abb343c6d9d@redhat.com>
 <205abf01-9699-ff1c-3e4e-621913ada64e@redhat.com>
From: Lokesh Gidra
Date: Fri, 13 Oct 2023 09:49:10 -0700
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI
To: Peter Xu
Cc: David Hildenbrand, Suren Baghdasaryan, akpm@linux-foundation.org,
 viro@zeniv.linux.org.uk, brauner@kernel.org, shuah@kernel.org,
 aarcange@redhat.com, hughd@google.com, mhocko@suse.com,
 axelrasmussen@google.com, rppt@kernel.org,
 willy@infradead.org, Liam.Howlett@oracle.com, jannh@google.com,
 zhangpeng362@huawei.com, bgeffon@google.com, kaleshsingh@google.com,
 ngeoffray@google.com, jdduke@google.com, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, kernel-team@android.com

On Fri, Oct 13, 2023 at 9:08 AM Peter Xu wrote:
>
> On Fri, Oct 13, 2023 at 11:56:31AM +0200, David Hildenbrand wrote:
> > Hi Peter,
>
> Hi, David,
>
> > > I used to have the same thought with David on whether we can simplify
> > > the design to e.g. limit it to single mm.  Then I found that the
> > > trickiest is actually patch 1 together with the anon_vma manipulations,
> > > and the problem is that's not avoidable even if we restrict the api to
> > > apply on single mm.
> > >
> > > What else we can benefit from single mm?  One less mmap read lock, but
> > > probably that's all we can get; IIUC we need to keep most of the rest
> > > of the code, e.g. pgtable walks, double pgtable lockings, etc.
> >
> > No existing mechanisms move anon pages between unrelated processes, that
> > naturally makes me nervous if we're doing it "just because we can".
>
> IMHO that's also the potential, when guarded with userfaultfd descriptor
> being shared between two processes.
>
> See below with more comment on the raised concerns.
> >
> > > Actually, even though I have no solid clue, but I had a feeling that
> > > there can be some interesting way to leverage this across-mm movement,
> > > while keeping things all safe (by e.g. elaborately requiring other
> > > proc to create uffd and deliver to this proc).
> >
> > Okay, but no real use cases yet.
>
> I can provide a "not solid" example.  I didn't mention it because it's
> really something that just popped into my mind when thinking cross-mm,
> so I never discussed with anyone yet nor shared it anywhere.
>
> Consider VM live upgrade in a generic form (e.g., no VFIO), we can do
> that very efficiently with shmem or hugetlbfs, but not yet anonymous.
> We can do extremely efficient postcopy live upgrade now with anonymous
> if with REMAP.
>
> Basically I see it a potential way of moving memory efficiently
> especially with thp.
>
> > > Considering Andrea's original version already contains those bits and
> > > all above, I'd vote that we go ahead with supporting two MMs.
> >
> > You can do nasty things with that, as it stands, on the upstream
> > codebase.
> >
> > If you pin the page in src_mm and move it to dst_mm, you successfully
> > broke an invariant that "exclusive" means "no other references from
> > other processes". That page is marked exclusive but it is, in fact,
> > not exclusive.
>
> It is still exclusive to the dst mm?  I see your point, but I think
> you're taking exclusiveness altogether with pinning, and IMHO that may
> not be always necessary?
>
> > Once you achieved that, you can easily have src_mm not have
> > MMF_HAS_PINNED,
>
> (I suppose you meant dst_mm here)
>
> > so you can just COW-share that page. Now you successfully broke the
> > invariant that COW-shared pages must not be pinned. And you can even
> > trigger VM_BUG_ONs, like in sanity_check_pinned_pages().
>
> Yeah, that's really unfortunate.  But frankly, I don't think it's the
> fault of this new feature, but the rest.
>
> Let's imagine if the MMF_HAS_PINNED wasn't proposed as a per-mm flag,
> but per-vma, which I don't see why we can't because it's simply a hint
> so far.  Then if we apply the same rule here, UFFDIO_REMAP won't even
> work for single-mm as long as cross-vma.  Then UFFDIO_REMAP as a whole
> feature will be NACKed simply because of this..
>
> And I don't think anyone can guarantee a per-vma MMF_HAS_PINNED can
> never happen, or any further change to pinning solution that may affect
> this.  So far it just looks unsafe to remap a pin page to me.
>
> I don't have a good suggestion here if this is a risk..  I'd think it
> risky then to do REMAP over pinned pages no matter cross-mm or
> single-mm.  It means probably we just rule them out:
> folio_maybe_dma_pinned() may not even be enough to be safe with
> fast-gup.  We may need page_needs_cow_for_dma() with proper
> write_protect_seq no matter cross-mm or single-mm?
>
> > Can it all be fixed? Sure, with more complexity. For something without
> > clear motivation, I'll have to pass.
>
> I think what you raised is a valid concern, but IMHO it's better fixed
> no matter cross-mm or single-mm.  What do you think?
>
> In general, pinning lose its whole point here to me for an userspace
> either if it DONTNEEDs it or REMAP it.  What would be great to do here
> is we unpin it upon DONTNEED/REMAP/whatever drops the page, because it
> loses its coherency anyway, IMHO.
>
> > Once there is real demand, we can revisit it and explore what else we
> > would have to take care of (I don't know how memcg behaves when moving
> > between completely unrelated processes, maybe that works as expected,
> > I don't know and I have no time to spare on reviewing features with no
> > real use cases) and announce it as a new feature.
>
> Good point.  memcg is probably needed..
>
> So you reminded me to do a more thorough review against zap/fault
> paths, I think what's missing are (besides page pinning):
>
>   - mem_cgroup_charge()/mem_cgroup_uncharge():
>
>     (side note: I think folio_throttle_swaprate() is only for when
>     allocating new pages, so not needed here)
>
>   - check_stable_address_space() (under pgtable lock)
>
>   - tlb flush
>
> Hmm???????????????? I can't see anywhere we did tlb flush, batched or
> not, either single-mm or cross-mm should need it.  Is this missing?
>

IIUC, ptep_clear_flush() flushes the TLB entry, so we are currently doing
unbatched flushing. Batching it could be a nice performance improvement
later on; Suren can throw more light on it.

One thing I was wondering: don't we also need a cache flush for the src
pages? mremap's move_page_tables() does it, and IMHO it's required here
as well.

> >
> > Note: that (with only reading the documentation) it also kept me
> > wondering how the MMs are even implied from
> >
> > struct uffdio_move {
> >         __u64 dst;    /* Destination of move */
> >         __u64 src;    /* Source of move */
> >         __u64 len;    /* Number of bytes to move */
> >         __u64 mode;   /* Flags controlling behavior of move */
> >         __s64 move;   /* Number of bytes moved, or negated error */
> > };
> >
> > That probably has to be documented as well, in which address space
> > dst and src reside.
>
> Agreed, some better documentation will never hurt.  Dst should be in
> the mm address space that was bound to the userfault descriptor.  Src
> should be in the current mm address space.
>
> Thanks,
>
> --
> Peter Xu
>