From: Christian Brauner
Date: Tue, 30 Oct 2018 12:19:51 +0100
Subject: Re: [RFC PATCH] Implement /proc/pid/kill
To: Daniel Colascione
Cc: Joel Fernandes, Linux Kernel Mailing List, Tim Murray, Suren Baghdasaryan
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Oct 30, 2018 at 12:12 PM Daniel Colascione wrote:
>
> On Tue, Oct 30, 2018 at 11:04 AM, Christian Brauner wrote:
> > On Tue, Oct 30, 2018 at 11:48 AM Daniel Colascione wrote:
> >>
> >> On Tue, Oct 30, 2018 at 10:40 AM, Christian Brauner wrote:
> >> > On Tue, Oct 30, 2018 at 11:39:11AM +0100, Christian Brauner wrote:
> >> >> On Tue, Oct 30, 2018 at 08:50:22AM +0000, Daniel Colascione wrote:
> >> >> > On Tue, Oct 30, 2018 at 3:21 AM, Joel Fernandes wrote:
> >> >> > > On Mon, Oct 29, 2018 at 3:11 PM Daniel Colascione wrote:
> >> >> > >>
> >> >> > >> Add a simple proc-based kill interface. To use /proc/pid/kill, just
> >> >> > >> write the signal number in base-10 ASCII to the kill file of the
> >> >> > >> process to be killed: for example, 'echo 9 > /proc/$$/kill'.
> >> >> > >>
> >> >> > >> Semantically, /proc/pid/kill works like kill(2), except that the
> >> >> > >> process ID comes from the proc filesystem context instead of from an
> >> >> > >> explicit system call parameter.
> >> >> > >> This way, it's possible to avoid races
> >> >> > >> between inspecting some aspect of a process and that process's PID
> >> >> > >> being reused for some other process.
> >> >> > >>
> >> >> > >> With /proc/pid/kill, it's possible to write a proper race-free and
> >> >> > >> safe pkill(1). An approximation follows. A real program might use
> >> >> > >> openat(2), having opened a process's /proc/pid directory explicitly,
> >> >> > >> with the directory file descriptor serving as a sort of "process
> >> >> > >> handle".
> >> >> > >
> >> >> > > How long does the 'inspection' procedure take? If it's a short
> >> >> > > duration, then is PID reuse really an issue? I mean, the PIDs are not
> >> >> > > reused until wraparound, and the only reason this can be a problem is
> >> >> > > if you have the wraparound while the 'inspecting some aspect'
> >> >> > > procedure takes really long.
> >> >> >
> >> >> > It's a race. Would you make similar statements about a similar fix for
> >> >> > a race condition involving a mutex and a double-free just because the
> >> >> > race didn't crash most of the time? The issue I'm trying to fix here
> >> >> > is the same problem, one level higher up in the abstraction hierarchy.
> >> >> >
> >> >> > > Also, the proc fs is typically not the right place for this. Some
> >> >> > > entries in proc are writable, but those are for changing values of
> >> >> > > kernel data structures. The title of man proc(5) is "proc - process
> >> >> > > information pseudo-filesystem". So it's "information", right?
> >> >> >
> >> >> > Why should userspace care whether a particular operation is "changing
> >> >> > [a] value[] of [a] kernel data structure" or something else? That
> >> >> > something in /proc is a struct field is an implementation detail. It's
> >> >> > the interface semantics that matters, and whether a particular
> >> >> > operation is achieved by changing a struct field or by making a
> >> >> > function call is irrelevant to userspace.
> >> >> > Proc is a filesystem about
> >> >> > processes. Why shouldn't you be able to send a signal to a process via
> >> >> > proc? It's an operation involving processes.
> >> >> >
> >> >> > It's already possible to do things *to* processes via proc, e.g.,
> >> >> > adjust OOM killer scores. Proc filesystem file descriptors are
> >> >> > userspace references to kernel-side struct pid instances, and as such,
> >> >> > make good process handles. There are already "verb" files in procfs,
> >> >> > such as /proc/sys/vm/drop_caches and /proc/sysrq-trigger. Why not add
> >> >> > a kill "verb", especially if it closes a race that can't be closed
> >> >> > some other way?
> >> >> >
> >> >> > You could implement this interface as a system call that took a procfs
> >> >> > directory file descriptor, but relative to this proposal, it would be
> >> >> > all downside. Such a thing would act just the same way as
> >> >> > /proc/pid/kill, and wouldn't be usable from the shell or from programs
> >> >> > that didn't want to use syscall(2). (Since glibc isn't adding new
> >> >> > system call wrappers.) AFAIK, the only downside of having a "kill"
> >> >> > file is the need for a string-to-integer conversion, but compared to
> >> >> > process killing, integer parsing is insignificant.
> >> >> >
> >> >> > > IMO without a really good reason for this, it could really be a hard
> >> >> > > sell, but the RFC was worth it anyway to discuss it ;-)
> >> >> >
> >> >> > The traditional unix process API is down there at level -10 of Rusty
> >> >> > Russell's old bad API scale: "It's impossible to get right". The races
> >> >> > in the current API are unavoidable. That most programs don't hit these
> >> >> > races most of the time doesn't mean that the race isn't present.
> >> >> >
> >> >> > We've moved to a model where we identify other system resources, like
> >> >> > DRM fences, locks, sockets, and everything else via file descriptors.
> >> >> > This change is a step toward using procfs file descriptors to work
> >> >> > with processes, which makes the system more regular and easier to
> >> >> > reason about. A clean API that's possible to use correctly is a
> >> >> > worthwhile project.
> >> >>
> >> >> So I have been discussing a new process API with David Howells, Kees
> >> >> Cook, and a few others, and I am working on an RFC/proposal for this.
> >> >> It is partially inspired by the new mount API. So I would like to
> >> >> block this patch until then. I would like to get this right very much
> >>
> >> It's good to hear that others are thinking about this problem.
> >>
> >> >> and I don't think this is the way to go.
> >
> > Because we want this to be generic, and things like getting handles on
> > processes via /proc are just a part of that.
>
> The word "generic" is like the word "secure": it's hard to tell what

Ha, trolling hard but true. :)

> it means in isolation. :-) Over what domain do we need to be generic?
> Procfs file descriptors already work on processes generally, and they
> allow for race-free access to anything that's reachable via a procfs
> pid directory. In what way would an alternate approach be even more
> generic?
>
> >> Why not?
> >>
> >> Does your proposed API allow for a race-free pkill, with arbitrary
> >> selection criteria? This capability is a good litmus test for fixing
> >> the long-standing Unix process API issues.
> >
> > You'd have a handle on the process with an fd, so yes, it would be.
>
> Thanks. That's good to hear.
>
> Any idea on the timetable for this proposal? I'm open to lots of
> alternative technical approaches, but I don't want this capability to
> languish for a long time.

Latest end of year, likely sooner depending on the feedback I'm getting
during LPC.