References: <20190919222421.27408-1-almasrymina@google.com>
 <3c73d2b7-f8d0-16bf-b0f0-86673c3e9ce3@oracle.com>
From: Mina Almasry
Date: Tue, 24 Sep 2019 15:42:06 -0700
Subject: Re: [PATCH v5 0/7] hugetlb_cgroup: Add hugetlb_cgroup reservation limits
To: Mike Kravetz, David Rientjes
Cc: Aneesh Kumar, shuah, Shakeel Butt, Greg Thelen, Andrew Morton,
 khalid.aziz@oracle.com, open list, linux-mm@kvack.org,
 linux-kselftest@vger.kernel.org, cgroups@vger.kernel.org, Michal Koutný
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Sep 23, 2019 at 2:27 PM Mike Kravetz wrote:
>
> On 9/23/19 12:18 PM, Mina Almasry wrote:
> > On Mon, Sep 23, 2019 at 10:47 AM Mike Kravetz wrote:
> >>
> >> On 9/19/19 3:24 PM, Mina Almasry wrote:
> >>> Patch series implements hugetlb_cgroup reservation usage and limits,
> >>> which track hugetlb reservations rather than hugetlb memory faulted
> >>> in. Details of the approach are in patch 1/7.
> >>
> >> Thanks for your continued efforts Mina.
> >>
> >
> > And thanks for your reviews so far.
> >
> >> One thing that has bothered me with this approach from the beginning
> >> is that hugetlb reservations are related to, but somewhat distinct
> >> from, hugetlb allocations. The original (existing) hugetlb cgroup
> >> implementation does not take reservations into account.
> >> This is an issue you are trying to address by adding cgroup support
> >> for hugetlb reservations. However, this new reservation cgroup
> >> ignores hugetlb allocations at fault time.
> >>
> >> I 'think' the whole purpose of any hugetlb cgroup is to manage the
> >> allocation of hugetlb pages. Both the existing cgroup code and the
> >> reservation approach have what I think are some serious flaws.
> >> Consider a system with 100 hugetlb pages available. A sysadmin has
> >> two groups, A and B, and wants to limit hugetlb usage to 50 pages
> >> each.
> >>
> >> With the existing implementation, a task in group A could create an
> >> mmap of 100 pages in size and reserve all 100 pages. Since the pages
> >> are 'reserved', nobody in group B can allocate ANY huge pages. This
> >> is true even though no pages have been allocated in A (or B).
> >>
> >> With the reservation implementation, a task in group A could use
> >> MAP_NORESERVE and allocate all 100 pages without taking any
> >> reservations.
> >>
> >> As mentioned in your documentation, it would be possible to use both
> >> the existing (allocation) and new reservation cgroups together.
> >> Perhaps if both are set up for the 50/50 split things would work a
> >> little better.
> >>
> >> However, instead of creating a new reservation cgroup, how about
> >> adding reservation support to the existing allocation cgroup support?
> >> One could even argue that a reservation is an allocation, as it sets
> >> aside huge pages that can only be used for a specific purpose. Here
> >> is something that may work.
> >>
> >> Starting with the existing allocation cgroup:
> >> - When hugetlb pages are reserved, the cgroup of the task making the
> >>   reservations is charged. Tracking for the charged cgroup is done in
> >>   the reservation map in the same way proposed by this patch set.
> >> - At page fault time,
> >>   - If a reservation already exists for that specific area, do not
> >>     charge the faulting task.
> >>     No tracking in the page, just the reservation map.
> >>   - If no reservation exists, charge the group of the faulting task.
> >>     Tracking of this information is in the page itself, as
> >>     implemented today.
> >> - When the hugetlb object is removed, compare the reservation map
> >>   with any allocated pages. If cgroup tracking information exists in
> >>   the page, uncharge that group. Otherwise, uncharge the group (if
> >>   any) in the reservation map.
> >>
> >> One of the advantages of a separate reservation cgroup is that the
> >> existing code is unmodified. Combining the two provides a more
> >> complete/accurate solution IMO. But it has the potential to break
> >> existing users.
> >>
> >> I really would like to get feedback from anyone who knows how the
> >> existing hugetlb cgroup controller may be used today. Comments from
> >> Aneesh would be very welcome, to know whether reservations were
> >> considered in the development of the existing code.
> >> --
> >
> > FWIW, I'm aware of the interaction with NORESERVE, and my thoughts
> > are:
> >
> > AFAICT, the 2-counter approach we have here is strictly superior to
> > the 1-upgraded-counter approach. Consider these points:
> >
> > - From what I can tell so far, everything you can do with the
> >   1-counter approach, you can do with the 2-counter approach by
> >   setting both limit_in_bytes and reservation_limit_in_bytes to the
> >   limit value. That will limit both reservations and at-fault
> >   allocations.
> >
> > - The 2-counter approach preserves existing usage of hugetlb cgroups,
> >   so there is no need to muck around with reverting the feature some
> >   time from now because of broken users. No existing users of hugetlb
> >   cgroups need to worry about the effect of this on their usage.
> >
> > - Users that use hugetlb memory strictly through reservations can use
> >   only reservation_limit_in_bytes and enjoy cgroup limits that never
> >   SIGBUS the application. This is our usage, for example.
> >
> > - The 2-counter approach provides more info to the sysadmin.
> >   The sysadmin knows exactly how many reserved bytes there are via
> >   reservation_usage_in_bytes, and how many actually-in-use bytes
> >   there are via usage_in_bytes. They can even detect NORESERVE usage
> >   if usage_in_bytes > reservation_usage_in_bytes. failcnt shows
> >   failed reservations *and* failed allocations at fault, etc. All
> >   around better debuggability when things go wrong. I think this is
> >   particularly troubling for the 1-upgraded-counter approach. That
> >   counter's usage_in_bytes doesn't tell you whether the usage came
> >   from reservations or from allocations at fault time.
> >
> > - Honestly, I think the 2-counter approach is easier to document and
> >   understand for userspace. With 1 counter that vaguely tracks both
> >   reservations and usage, and decides whether or not to charge at
> >   fault time, it seems hard to understand what really happened after
> >   something goes wrong. 1 counter that tracks reservations and 1
> >   counter that tracks actual usage seem much simpler to digest, and
> >   they provide better visibility into what the cgroup is doing, as I
> >   mentioned above.
> >
> > I think it may be better if I keep the 2-counter approach but
> > thoroughly document the interaction between the existing counters
> > and NORESERVE. What do you think?
>
> I personally prefer the one-counter approach, only for the reason that
> it exposes less information about hugetlb reservations. I was not
> around for the introduction of hugetlb reservations, but I have fixed
> several issues having to do with reservations. IMO, reservations
> should be hidden from users as much as possible. Others may disagree.
>
> I really hope that Aneesh will comment. He added the existing hugetlb
> cgroup code. I was not involved in that effort, but it looks like
> there might have been some thought given to reservations in early
> versions of that code. It would be interesting to get his perspective.
>
> Changes included in patch 4 (disable region_add file_region coalescing)
> would be needed in a one-counter approach as well, so I do plan to
> review those changes.

OK, FWIW, the 1-counter approach should be sufficient for us, so I'm not
really opposed. David, maybe chime in if you see a problem here? From
the perspective of hiding reservations from the user as much as
possible, it is an improvement. I'm only wary about changing the
behavior of the current counter and having that regress applications.
I'm hoping you and Aneesh can shed light on this.

> --
> Mike Kravetz
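[Editor's note: for illustration, Mina's NORESERVE-detection point (faulted-in
usage exceeding reservations implies MAP_NORESERVE mappings) can be sketched
against the two proposed counters. The file names come from the interface
proposed in this patch series; the cgroup path and page counts below are
hypothetical example values, not output from a real system.]

```shell
# Hypothetical counter values for a cgroup using 2 MB huge pages.
# On a real system these would be read from the proposed files, e.g.:
#   /sys/fs/cgroup/hugetlb/A/hugetlb.2MB.usage_in_bytes
#   /sys/fs/cgroup/hugetlb/A/hugetlb.2MB.reservation_usage_in_bytes
page_size=$((2 * 1024 * 1024))
usage=$((4 * page_size))      # 4 pages faulted in
reserved=$((2 * page_size))   # only 2 pages were reserved

# Faulted-in usage above the reserved amount implies some mappings were
# created with MAP_NORESERVE.
if [ "$usage" -gt "$reserved" ]; then
    echo "likely MAP_NORESERVE usage: $((usage - reserved)) bytes unreserved"
fi
```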