Date: Mon, 16 Aug 2021 13:46:22 +0100
From: Matthew Wilcox
To: David Hildenbrand
Cc: Khalid Aziz,
	"Longpeng (Mike, Cloud Infrastructure Service Product Dept.)",
	Steven Sistare, Anthony Yznaga,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	"Gonglei (Arei)"
Subject: Re: [RFC PATCH 0/5] madvise MADV_DOEXEC

On Mon, Aug 16, 2021 at 02:20:43PM +0200, David Hildenbrand wrote:
> On 16.08.21 14:07, Matthew Wilcox wrote:
> > On Mon, Aug 16, 2021 at 10:02:22AM +0200, David Hildenbrand wrote:
> > > > Mappings within this address range behave as if they were shared
> > > > between threads, so a write to a MAP_PRIVATE mapping will create a
> > > > page which is shared between all the sharers. The first process that
> > > > declares an address range mshare'd can continue to map objects in
> > > > the shared area. All other processes that want mshare'd access to
> > > > this memory area can do so by calling mshare(). After this call,
> > > > the address range given by mshare becomes a shared range in its
> > > > address space. Anonymous mappings will be shared and not COWed.
> > >
> > > Did I understand correctly that you want to share actual page tables
> > > between processes and consequently different MMs? That sounds like a
> > > very bad idea.
> >
> > That is the entire point.  Consider a machine with 10,000 instances
> > of an application running (process model, not thread model).  If each
> > application wants to map 1TB of RAM using 2MB pages, that's 4MB of
> > page tables per process, or 40GB of RAM for the whole machine.
>
> What speaks against 1 GB pages then?

Until recently, CPUs only had four 1GB TLB entries.  I'm sure we still
have customers using that generation of CPUs.  2MB pages perform better
than 1GB pages on the previous generation of hardware, and I haven't
seen numbers for the next generation yet.

> > There's a reason hugetlbfs was enhanced to allow this page table
> > sharing.  I'm not a fan of the implementation as it gets some locks
> > upside down, so this is an attempt to generalise the concept beyond
> > hugetlbfs.
>
> Who do we account the page tables to?  What are MADV_DONTNEED
> semantics?  Who cleans up the page tables?  What happens during
> munmap?  How does the rmap even work?  How do we actually synchronize
> page table walkers?
>
> See how hugetlbfs just doesn't raise these problems because we are
> sharing pages and not page tables?

No, really, hugetlbfs shares page tables already.  You just didn't
notice that yet.

> > Think of it like partial threading.  You get to share some parts,
> > but not all, of your address space with your fellow processes.
> > Obviously you don't want to expose this to random other processes,
> > only to other instances of yourself being run as the same user.
>
> Sounds like a nice way to over-complicate MM to optimize for some
> special use cases.  I know, I'm probably wrong. :)

It's really not as bad as you seem to think it is.
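
For reference, the arithmetic behind the 4MB/40GB figures above, as a
minimal sketch assuming x86-64 4-level paging, where each 8-byte PMD
entry maps one 2MB page:

#include <stdio.h>

int main(void)
{
	unsigned long long mapped   = 1ULL << 40;  /* 1TB mapped per process */
	unsigned long long pmd_maps = 2ULL << 20;  /* each PMD entry covers 2MB */
	unsigned long long entry    = 8;           /* bytes per PMD entry */
	unsigned long long procs    = 10000;       /* process-model instances */

	/* 2^40 / 2^21 = 512K entries, * 8 bytes = 4MB of PMD pages */
	unsigned long long per_proc = mapped / pmd_maps * entry;

	printf("per process:   %llu MB\n", per_proc >> 20);           /* 4 MB */
	printf("whole machine: %llu GB\n", (per_proc * procs) >> 30); /* ~39 GB */
	return 0;
}

Sharing one copy of those PMD pages across all 10,000 processes is what
collapses the roughly 40GB of page tables back down to about 4MB.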
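
And a minimal userspace sketch of the existing hugetlbfs page table
sharing being referred to: two processes that MAP_SHARED the same
hugetlbfs file over a range containing a PUD-aligned, PUD-sized (1GB on
x86-64) window end up with their PUD entries pointing at the same PMD
page.  The mount point, file name, and sizing here are assumptions, and
the demo needs huge pages reserved first (e.g.
echo 1024 > /proc/sys/vm/nr_hugepages):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* 2GB guarantees at least one PUD-aligned 1GB window, wherever mmap
 * happens to place the mapping. */
#define SHARE_SIZE (2UL << 30)

int main(void)
{
	/* assumed hugetlbfs mount point and file name */
	int fd = open("/dev/hugepages/pmd-share-demo", O_CREAT | O_RDWR, 0600);
	char *p;

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ftruncate(fd, SHARE_SIZE) < 0) {
		perror("ftruncate");
		return 1;
	}

	/*
	 * Every process mapping this file MAP_SHARED over the same offset
	 * range can reuse one PMD page for each shared 1GB window, instead
	 * of keeping 4KB of PMD entries per process per window.
	 */
	p = mmap(NULL, SHARE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	p[0] = 1;	/* touch it: fault in one 2MB page */

	munmap(p, SHARE_SIZE);
	close(fd);
	return 0;
}

Run two copies of this against the same file and the kernel's hugetlb
fault path can wire both mappings through shared PMD pages; that is the
behaviour the mshare proposal aims to generalise beyond hugetlbfs.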