Date: Fri, 12 Jun 2020 20:28:16 +0200
From: Christian Brauner
To: Kees Cook
Cc: Sargun Dhillon, Giuseppe Scrivano, Robert Sesek, Chris Palmer,
    Jann Horn, Greg Kroah-Hartman, containers@lists.linux-foundation.org,
    stable@vger.kernel.org, linux-kernel@vger.kernel.org, Matt Denton,
    Tejun Heo, David Laight, Al Viro, linux-fsdevel@vger.kernel.org,
    cgroups@vger.kernel.org, David S. Miller
Miller" Subject: Re: [PATCH v3 1/4] fs, net: Standardize on file_receive helper to move fds across processes Message-ID: <20200612182816.okwylihs6u6wkgxd@wittgenstein> References: <202006092227.D2D0E1F8F@keescook> <20200610081237.GA23425@ircssh-2.c.rugged-nimbus-611.internal> <202006101953.899EFB53@keescook> <20200611100114.awdjswsd7fdm2uzr@wittgenstein> <20200611110630.GB30103@ircssh-2.c.rugged-nimbus-611.internal> <067f494d55c14753a31657f958cb0a6e@AcuMS.aculab.com> <202006111634.8237E6A5C6@keescook> <94407449bedd4ba58d85446401ff0a42@AcuMS.aculab.com> <20200612104629.GA15814@ircssh-2.c.rugged-nimbus-611.internal> <202006120806.E770867EF@keescook> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline In-Reply-To: <202006120806.E770867EF@keescook> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Fri, Jun 12, 2020 at 08:13:25AM -0700, Kees Cook wrote: > On Fri, Jun 12, 2020 at 10:46:30AM +0000, Sargun Dhillon wrote: > > My suggest, written out (no idea if this code actually works), is as follows: > > > > ioctl.h: > > /* This needs to be added */ > > #define IOCDIR_MASK (_IOC_DIRMASK << _IOC_DIRSHIFT) > > This exists already: > > #define _IOC_DIRMASK ((1 << _IOC_DIRBITS)-1) > > > > > > > seccomp.h: > > > > struct struct seccomp_notif_addfd { > > __u64 fd; > > ... > > } > > > > /* or IOW? */ > > #define SECCOMP_IOCTL_NOTIF_ADDFD SECCOMP_IOWR(3, struct seccomp_notif_addfd) > > > > seccomp.c: > > static long seccomp_notify_addfd(struct seccomp_filter *filter, > > struct seccomp_notif_addfd __user *uaddfd int size) > > { > > struct seccomp_notif_addfd addfd; > > int ret; > > > > if (size < 32) > > return -EINVAL; > > if (size > PAGE_SIZE) > > return -E2BIG; > > (Tanget: what was the reason for copy_struct_from_user() not including > the min/max check? I have a memory of Al objecting to having an > "internal" limit?) Al didn't want the PAGE_SIZE limit in there because there's nothing inherently wrong with copying insane amounts of memory. (Another tangent. I've asked this on Twitter not too long ago: do we have stats how long copy_from_user()/copy_struct_from_user() takes with growing struct/memory size? I'd be really interested in this. I have a feeling that clone3()'s and - having had a chat with David Howells - openat2()'s structs will continue to grow for a while... and I'd really like to have some numbers on when copy_struct_from_user() becomes costly or how costly it becomes.) > > > > > ret = copy_struct_from_user(&addfd, sizeof(addfd), uaddfd, size); > > if (ret) > > return ret; > > > > ... > > } > > > > /* Mask out size */ > > #define SIZE_MASK(cmd) (~IOCSIZE_MASK & cmd) > > > > /* Mask out direction */ > > #define DIR_MASK(cmd) (~IOCDIR_MASK & cmd) > > > > static long seccomp_notify_ioctl(struct file *file, unsigned int cmd, > > unsigned long arg) > > { > > struct seccomp_filter *filter = file->private_data; > > void __user *buf = (void __user *)arg; > > > > /* Fixed size ioctls. Can be converted later on? 
> > 	ret = copy_struct_from_user(&addfd, sizeof(addfd), uaddfd, size);
> > 	if (ret)
> > 		return ret;
> >
> > 	...
> > }
> >
> > /* Mask out size */
> > #define SIZE_MASK(cmd) (~IOCSIZE_MASK & cmd)
> >
> > /* Mask out direction */
> > #define DIR_MASK(cmd) (~IOCDIR_MASK & cmd)
> >
> > static long seccomp_notify_ioctl(struct file *file, unsigned int cmd,
> > 				 unsigned long arg)
> > {
> > 	struct seccomp_filter *filter = file->private_data;
> > 	void __user *buf = (void __user *)arg;
> >
> > 	/* Fixed size ioctls. Can be converted later on? */
> > 	switch (cmd) {
> > 	case SECCOMP_IOCTL_NOTIF_RECV:
> > 		return seccomp_notify_recv(filter, buf);
> > 	case SECCOMP_IOCTL_NOTIF_SEND:
> > 		return seccomp_notify_send(filter, buf);
> > 	case SECCOMP_IOCTL_NOTIF_ID_VALID:
> > 		return seccomp_notify_id_valid(filter, buf);
> > 	}
> >
> > 	/* Probably should make some nicer macros here */
> > 	switch (SIZE_MASK(DIR_MASK(cmd))) {
> > 	case SIZE_MASK(DIR_MASK(SECCOMP_IOCTL_NOTIF_ADDFD)):
>
> Ah yeah, I like this because of what you mention below: it's forward
> compat too. (I'd just use the ioctl masks directly...)
>
> 	switch (cmd & ~(_IOC_SIZEMASK | _IOC_DIRMASK))
>
> > 		return seccomp_notify_addfd(filter, buf, _IOC_SIZE(cmd));
>
> I really like that this ends up having the same construction as a
> standard EA syscall: the size is part of the syscall arguments.

This is basically what I had proposed in my previous mail, right?

Christian
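For completeness, a minimal sketch of the extensible-ioctl dispatch
discussed above, using the already-shifted masks from
<asm-generic/ioctl.h> (IOC_INOUT covers both direction bits,
IOCSIZE_MASK the size field). seccomp_notify_addfd() here is the
hypothetical handler with the signature sketched earlier in the
thread; none of this is tested:

	static long seccomp_notify_ioctl(struct file *file, unsigned int cmd,
					 unsigned long arg)
	{
		struct seccomp_filter *filter = file->private_data;
		void __user *buf = (void __user *)arg;

		/* Fixed-size ioctls: match on the full command word. */
		switch (cmd) {
		case SECCOMP_IOCTL_NOTIF_RECV:
			return seccomp_notify_recv(filter, buf);
		case SECCOMP_IOCTL_NOTIF_SEND:
			return seccomp_notify_send(filter, buf);
		case SECCOMP_IOCTL_NOTIF_ID_VALID:
			return seccomp_notify_id_valid(filter, buf);
		}

		/*
		 * Extensible-struct ioctls: strip the direction and size bits
		 * so the command number stays stable as the struct grows, and
		 * hand the userspace-encoded size down to the handler.
		 */
		switch (cmd & ~(IOC_INOUT | IOCSIZE_MASK)) {
		case SECCOMP_IOCTL_NOTIF_ADDFD & ~(IOC_INOUT | IOCSIZE_MASK):
			return seccomp_notify_addfd(filter, buf, _IOC_SIZE(cmd));
		}

		return -EINVAL;
	}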