Date: Mon, 16 Aug 2021 14:32:16 +0100
From: Matthew Wilcox
To: David Hildenbrand
Cc: Khalid Aziz, "Longpeng (Mike, Cloud Infrastructure Service Product Dept.)",
    Steven Sistare, Anthony Yznaga, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, "Gonglei (Arei)"
Subject: Re: [RFC PATCH 0/5] madvise MADV_DOEXEC

On Mon, Aug 16, 2021 at 03:24:38PM +0200, David Hildenbrand wrote:
> On 16.08.21 14:46, Matthew Wilcox wrote:
> > On Mon, Aug 16, 2021 at 02:20:43PM +0200, David Hildenbrand wrote:
> > > On 16.08.21 14:07, Matthew Wilcox wrote:
> > > > On Mon, Aug 16, 2021 at 10:02:22AM +0200, David Hildenbrand wrote:
> > > > > > Mappings within this address range behave as if they were shared
> > > > > > between threads, so a write to a MAP_PRIVATE mapping will create a
> > > > > > page which is shared between all the sharers. The first process that
> > > > > > declares an address range mshare'd can continue to map objects in the
> > > > > > shared area. All other processes that want mshare'd access to this
> > > > > > memory area can do so by calling mshare(). After this call, the
> > > > > > address range given by mshare becomes a shared range in its address
> > > > > > space. Anonymous mappings will be shared and not COWed.
> > > > >
> > > > > Did I understand correctly that you want to share actual page tables
> > > > > between processes and consequently different MMs? That sounds like a
> > > > > very bad idea.
> > > >
> > > > That is the entire point. Consider a machine with 10,000 instances
> > > > of an application running (process model, not thread model). If each
> > > > application wants to map 1TB of RAM using 2MB pages, that's 4MB of page
> > > > tables per process or 40GB of RAM for the whole machine.
> > >
> > > What speaks against 1 GB pages then?
> >
> > Until recently, CPUs only had 4 1GB TLB entries. I'm sure we still
> > have customers using that generation of CPUs. 2MB pages perform
> > better than 1GB pages on the previous generation of hardware, and I
> > haven't seen numbers for the next generation yet.
>
> I read that somewhere else before, yet we have heavy 1 GiB page users,
> especially in the context of VMs and DPDK.

I wonder if those users actually benchmarked. Or whether the memory
savings worked out so well for them that the loss of TLB performance
didn't matter.

> So, it only works for hugetlbfs in case uffd is not in place (-> no
> per-process data in the page table) and we have actual shared mappings.
> When unsharing, we zap the PUD entry, which will result in allocating a
> per-process page table on the next fault.

I think uffd was a huge mistake. It should have been a filesystem
instead of a hack on the side of anonymous memory.

> I will rephrase my previous statement "hugetlbfs just doesn't raise these
> problems because we are special casing it all over the place already". For
> example, not allowing to swap such pages. Disallowing MADV_DONTNEED.
> Special hugetlbfs locking.
Sure, that's why I want to drag this feature out of "oh this is a hugetlb special case" and into "this is something Linux supports".
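
To make the numbers in the quoted exchange concrete, here is that
page-table arithmetic as a small standalone program. The only value
assumed beyond the thread itself is the 8-byte x86-64 page-table
entry size:

/*
 * The page-table arithmetic from the thread, worked through:
 * mapping 1TB with 2MB pages takes one 8-byte PMD entry per 2MB page.
 */
#include <stdio.h>

int main(void)
{
    unsigned long mapped    = 1UL << 40;            /* 1TB mapped per process */
    unsigned long page_size = 2UL << 20;            /* 2MB pages */
    unsigned long entries   = mapped / page_size;   /* 524,288 PMD entries */
    unsigned long per_proc  = entries * 8;          /* 4MB of page tables */
    unsigned long nproc     = 10000;

    printf("per process: %lu MB\n", per_proc >> 20);
    /* prints 39 GiB; the thread rounds this to "40GB" */
    printf("whole machine: %lu GB\n", (per_proc * nproc) >> 30);
    return 0;
}

And a minimal sketch of the usage model the cover-letter quote
describes. mshare() is a proposed interface that does not exist in any
released kernel; the signature and the stub below are assumptions
reconstructed from the quoted description, not the RFC's actual API:

#include <stddef.h>
#include <sys/mman.h>

/* Assumed shape: declare [addr, addr + len) an mshare'd range. */
static int mshare(void *addr, size_t len)
{
    (void)addr;
    (void)len;
    return -1;      /* stub so the sketch compiles; no kernel support */
}

int main(void)
{
    size_t len = 2UL << 20;     /* one 2MB page for the example */

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    if (mshare(p, len) == 0) {
        /*
         * Per the quoted semantics: this MAP_PRIVATE anonymous
         * mapping now behaves as if shared between threads.  A write
         * creates a page visible to every sharer (no COW), and a
         * single set of page tables backs all of them.
         */
        p[0] = 1;
    }
    return 0;
}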