Subject: Re: [PATCH 3/3] ima: use fs method to read integrity data (updated patch description)
From: Mimi Zohar
To: Jan Kara, Steven Whitehouse
Cc: Al Viro, Linus Torvalds, Christoph Hellwig, LSM List, linux-ima-devel@lists.sourceforge.net, James Morris, Linux Kernel Mailing List, Matthew Garrett, "Theodore Ts'o", Andreas Dilger, Jaegeuk Kim, Chao Yu, Bob Peterson, David Woodhouse, Dave Kleikamp, Ryusuke Konishi, Mark Fasheh, Joel Becker, Richard Weinberger, "Darrick J. Wong", Hugh Dickins, Chris Mason, Sascha Hauer
Date: Sun, 24 Sep 2017 18:55:06 -0400
Message-Id: <1506293706.3893.70.camel@linux.vnet.ibm.com>
In-Reply-To: <1505746542.4200.242.camel@linux.vnet.ibm.com>
References: <1505451494-30228-1-git-send-email-zohar@linux.vnet.ibm.com> <1505451494-30228-4-git-send-email-zohar@linux.vnet.ibm.com> <1505507142.4200.103.camel@linux.vnet.ibm.com> <20170917151757.GA14262@infradead.org> <1505664935.4200.191.camel@linux.vnet.ibm.com> <20170917163828.GE5426@ZenIV.linux.org.uk> <517c83a6-d7c5-9638-ebaa-52800ca0962c@redhat.com> <20170918101350.GI32516@quack2.suse.cz> <1505746542.4200.242.camel@linux.vnet.ibm.com>
On Mon, 2017-09-18 at 10:55 -0400, Mimi Zohar wrote:
> On Mon, 2017-09-18 at 12:13 +0200, Jan Kara wrote:
> > On Mon 18-09-17 10:19:25, Steven Whitehouse wrote:
> > > On 17/09/17 17:38, Al Viro wrote:
> > > > On Sun, Sep 17, 2017 at 09:34:01AM -0700, Linus Torvalds wrote:
> > > > > Now, I suspect most (all?) do, but that's a historical artifact rather
> > > > > than "design". In particular, the VFS layer used to do the locking for
> > > > > the filesystems, to guarantee the POSIX requirements (POSIX requires
> > > > > that writes be seen atomically).
> > > > >
> > > > > But that lock was pushed down into the filesystems, since some
> > > > > filesystems really wanted to have parallel writes (particularly for
> > > > > direct IO, where that POSIX serialization requirement doesn't exist).
> > > > >
> > > > > That's all many years ago, though. New filesystems are likely to have
> > > > > copied the pattern from old ones, but even then..
> > > > >
> > > > > Also, it's worth noting that "inode->i_rwsem" isn't even well-defined
> > > > > as a lock. You can have the question of *which* inode gets talked
> > > > > about when you have things like overlayfs etc. Normally it would be
> > > > > obvious, but sometimes you'd use "file->f_mapping->host" (which is the
> > > > > same thing in the simple cases), and sometimes it really wouldn't be
> > > > > obvious at all..
> > > > >
> > > > > So... I'm really not at all convinced that i_rwsem is sensible. It's
> > > > > one of those things that are "mostly right for the simple cases",
> > > > > but...
> > > > The thing pretty much common to all of them is that write() might need
> > > > to modify permissions (suid removal), which brings ->i_rwsem in one
> > > > way or another - notify_change() needs that held...
> > > For GFS2, if we are to hold the inode info constant while it is checked, we
> > > would need to take a glock (read lock in this case) across the relevant
> > > operations. The glock will be happy under i_rwsem, since we have a lock
> > > ordering that takes local locks ahead of cluster locks. I've not dug into
> > > this enough to figure out whether the current proposal will allow this to
> > > work with GFS2 though. Does IMA cache the results from the
> > > ->integrity_read() operation?

> Up to now, the hash calculation was stored in the iint structure,
> which is then used to extend the TPM, verify the file's integrity
> compared to the value stored in the xattr, and included in an audit
> message.
>
> A new patch set by Thiago Bauermann will add appended signature
> support, re-using the kernel module appended signature method, which
> might require re-calculating the file hash based on a different hash
> algorithm.
> > So I have asked Mimi about clustered filesystems before. And for now the
> > answer was that IMA for clustered filesystems is not supported (it will
> > return some error since ->integrity_read is NULL). If we would ever want to
> > support those it would require a larger overhaul of the IMA architecture to
> > give filesystems more control over the locking (which is essentially what
> > Linus wants).
>
> For performance reasons, IMA is not on a write hook, but detects file
> change on the last __fput() of a file opened for write.  At that point,
> the cached info is reset.  The file hash is re-calculated and written
> out as an xattr.  On the next file access (in policy), the file hash is
> re-calculated and stored in the iint.
>
> In terms of remote/clustered/fuse filesystems, we wouldn't be on the
> __fput() path.  Support for remote/clustered/fuse filesystems would
> be similar to filesystems that do not support i_version.  Meaning only
> the first file access (in policy) would be measured/appraised, but not
> subsequent ones.  Even if we could detect file change, we would be
> dependent on the remote/clustered/fuse filesystem to inform us of the
> change.  What type of integrity guarantees would that provide?

After thinking this over a bit, perhaps we shouldn't cache the file
integrity results for these filesystems, since we can't rely on them to
notify us of a change (e.g. a malicious fs), but simply
re-measure/re-validate the files each time.

Mimi