From: Shakeel Butt
Date: Wed, 30 Jun 2021 11:00:56 -0700
Subject: Re: [PATCH 1/1] mm: introduce process_reap system call
To: Suren Baghdasaryan
Cc: Andrew Morton, Michal Hocko, David Rientjes, Matthew Wilcox,
    Johannes Weiner, Roman Gushchin, Rik van Riel, Minchan Kim,
    Christian Brauner, Christoph Hellwig, Oleg Nesterov,
    David Hildenbrand, Jann Horn, Tim Murray, Linux API, Linux MM,
    LKML, kernel-team
References: <20210623192822.3072029-1-surenb@google.com>
In-Reply-To: <20210623192822.3072029-1-surenb@google.com>
List-ID: linux-kernel@vger.kernel.org

Hi Suren,

On Wed, Jun 23, 2021 at 12:28 PM Suren Baghdasaryan wrote:
>
> In modern systems it's not unusual to have a system component monitoring
> memory conditions of the system, tasked with keeping system memory
> pressure under control. One way to accomplish that is to kill
> non-essential processes to free up memory for more important ones.
> Examples of this are Facebook's OOM killer daemon called oomd and
> Android's low memory killer daemon called lmkd.
> For such a system component it's important to be able to free memory
> quickly and efficiently.
> Unfortunately the time a process takes to free
> up its memory after receiving a SIGKILL might vary based on the state
> of the process (e.g. uninterruptible sleep) and on the size and OPP
> level of the core the process is running on. A mechanism to free the
> resources of the target process in a more predictable way would improve
> the system's ability to control its memory pressure.
>
> Introduce the process_reap system call, which reclaims memory of a
> dying process from the context of the caller. This way the memory is
> freed in a more controllable way, with the CPU affinity and priority of
> the caller. The workload of freeing the memory will also be charged to
> the caller. The operation is allowed only on a dying process.
>
> Previously I proposed a number of alternatives to accomplish this:
> - https://lore.kernel.org/patchwork/patch/1060407 extending
>   pidfd_send_signal to allow memory reaping using the oom_reaper
>   thread;
> - https://lore.kernel.org/patchwork/patch/1338196 extending
>   pidfd_send_signal to reap memory of the target process synchronously
>   from the context of the caller;
> - https://lore.kernel.org/patchwork/patch/1344419/ adding MADV_DONTNEED
>   support to process_madvise, implementing synchronous memory reaping.
>
> The last discussion culminated in a suggestion to introduce a dedicated
> system call
> (https://lore.kernel.org/patchwork/patch/1344418/#1553875).
> The reasoning was that the new variant of process_madvise
> a) does not work on an address range
> b) is destructive
> c) doesn't share much code at all with the rest of process_madvise
> From the userspace point of view it was awkward and inconvenient to
> provide a memory range for an operation that works on the entire
> address space. Using special flags or address values to specify the
> entire address space was too hacky.
>
> The API is as follows,
>
>     int process_reap(int pidfd, unsigned int flags);
>
> DESCRIPTION
>     The process_reap() system call is used to free the memory of a
>     dying process.
>
>     The pidfd selects the process referred to by the PID file
>     descriptor.
>     (See pidofd_open(2) for further information)

*pidfd_open

>     The flags argument is reserved for future use; currently, this
>     argument must be specified as 0.
>
> RETURN VALUE
>     On success, process_reap() returns 0. On error, -1 is returned
>     and errno is set to indicate the error.
>
> Signed-off-by: Suren Baghdasaryan

Thanks for continuously pushing this. One question I have is how you
envision this syscall being used for cgroup based workloads: traverse
the target tree, read pids from the cgroup.procs files, pidfd_open
them, send SIGKILL and then process_reap them. Is that right?

Orthogonal to this patch, I wonder if we should have an optimized way
to reap processes from a cgroup. Something similar to cgroup.kill (or
maybe overload cgroup.kill with reaping as well).

[...]

> +
> +SYSCALL_DEFINE2(process_reap, int, pidfd, unsigned int, flags)
> +{
> +	struct pid *pid;
> +	struct task_struct *task;
> +	struct mm_struct *mm = NULL;
> +	unsigned int f_flags;
> +	long ret = 0;
> +
> +	if (flags != 0)
> +		return -EINVAL;
> +
> +	pid = pidfd_get_pid(pidfd, &f_flags);
> +	if (IS_ERR(pid))
> +		return PTR_ERR(pid);
> +
> +	task = get_pid_task(pid, PIDTYPE_PID);
> +	if (!task) {
> +		ret = -ESRCH;
> +		goto put_pid;
> +	}
> +
> +	/*
> +	 * If the task is dying and in the process of releasing its memory
> +	 * then get its mm.
> +	 */
> +	task_lock(task);
> +	if (task_will_free_mem(task) && (task->flags & PF_KTHREAD) == 0) {

task_will_free_mem() is fine here but I think in parallel we should
optimize this function. At the moment it traverses all the processes on
the machine. It is very normal to have tens of thousands of processes
on big machines, so it would be really costly when reaping a bunch of
processes.