From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins,
	Jeff Layton, "J. Bruce Fields", Andrew Morton, Yu Zhang, Chao Peng,
	"Kirill A. Shutemov", luto@kernel.org, jun.nakajima@intel.com,
	dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com
Subject: [PATCH v4 01/12] mm/shmem: Introduce F_SEAL_INACCESSIBLE
Date: Tue, 18 Jan 2022 21:21:10 +0800
Message-Id: <20220118132121.31388-2-chao.p.peng@linux.intel.com>
In-Reply-To: <20220118132121.31388-1-chao.p.peng@linux.intel.com>
References: <20220118132121.31388-1-chao.p.peng@linux.intel.com>
Precedence: bulk
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: "Kirill A. Shutemov"

Introduce a new seal, F_SEAL_INACCESSIBLE, indicating that the content
of the file is inaccessible from userspace through ordinary MMU access
(e.g. read/write/mmap). However, the file content can still be accessed
indirectly via a different mechanism (e.g. the KVM MMU). It provides the
semantics required for KVM guest private memory support: a file
descriptor with this seal set is going to be used as the source of guest
memory in confidential computing environments such as Intel TDX and AMD
SEV, but may not be accessible from host userspace.

At this time only shmem implements this seal.

Signed-off-by: Kirill A. Shutemov
Signed-off-by: Chao Peng
---
 include/uapi/linux/fcntl.h |  1 +
 mm/shmem.c                 | 40 ++++++++++++++++++++++++++++++++++++--
 2 files changed, 39 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/fcntl.h b/include/uapi/linux/fcntl.h
index 2f86b2ad6d7e..09ef34754dfa 100644
--- a/include/uapi/linux/fcntl.h
+++ b/include/uapi/linux/fcntl.h
@@ -43,6 +43,7 @@
 #define F_SEAL_GROW	0x0004	/* prevent file from growing */
 #define F_SEAL_WRITE	0x0008	/* prevent writes */
 #define F_SEAL_FUTURE_WRITE	0x0010  /* prevent future writes while mapped */
+#define F_SEAL_INACCESSIBLE	0x0020  /* prevent ordinary MMU access (e.g. read/write/mmap) to file content */
 /* (1U << 31) is reserved for signed error codes */
 
 /*
diff --git a/mm/shmem.c b/mm/shmem.c
index 18f93c2d68f1..72185630e7c4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1098,6 +1098,13 @@ static int shmem_setattr(struct user_namespace *mnt_userns,
 		    (newsize > oldsize && (info->seals & F_SEAL_GROW)))
 			return -EPERM;
 
+		if (info->seals & F_SEAL_INACCESSIBLE) {
+			if (i_size_read(inode))
+				return -EPERM;
+			if (newsize & ~PAGE_MASK)
+				return -EINVAL;
+		}
+
 		if (newsize != oldsize) {
 			error = shmem_reacct_size(SHMEM_I(inode)->flags,
 					oldsize, newsize);
@@ -1364,6 +1371,8 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 		goto redirty;
 	if (!total_swap_pages)
 		goto redirty;
+	if (info->seals & F_SEAL_INACCESSIBLE)
+		goto redirty;
 
 	/*
 	 * Our capabilities prevent regular writeback or sync from ever calling
@@ -2262,6 +2271,9 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
 	if (ret)
 		return ret;
 
+	if (info->seals & F_SEAL_INACCESSIBLE)
+		return -EPERM;
+
 	/* arm64 - allow memory tagging on RAM-based files */
 	vma->vm_flags |= VM_MTE_ALLOWED;
 
@@ -2459,12 +2471,15 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 	pgoff_t index = pos >> PAGE_SHIFT;
 
 	/* i_rwsem is held by caller */
-	if (unlikely(info->seals & (F_SEAL_GROW |
-				   F_SEAL_WRITE | F_SEAL_FUTURE_WRITE))) {
+	if (unlikely(info->seals & (F_SEAL_GROW | F_SEAL_WRITE |
+				   F_SEAL_FUTURE_WRITE |
+				   F_SEAL_INACCESSIBLE))) {
 		if (info->seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE))
 			return -EPERM;
 		if ((info->seals & F_SEAL_GROW) && pos + len > inode->i_size)
 			return -EPERM;
+		if (info->seals & F_SEAL_INACCESSIBLE)
+			return -EPERM;
 	}
 
 	return shmem_getpage(inode, index, pagep, SGP_WRITE);
@@ -2538,6 +2553,21 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 		end_index = i_size >> PAGE_SHIFT;
 		if (index > end_index)
 			break;
+
+		/*
+		 * inode_lock protects setting up seals as well as writes to
+		 * i_size. Setting F_SEAL_INACCESSIBLE is only allowed with
+		 * i_size == 0.
+		 *
+		 * Check F_SEAL_INACCESSIBLE after i_size. It effectively
+		 * serializes read vs. setting F_SEAL_INACCESSIBLE without
+		 * taking inode_lock in the read path.
+		 */
+		if (SHMEM_I(inode)->seals & F_SEAL_INACCESSIBLE) {
+			error = -EPERM;
+			break;
+		}
+
 		if (index == end_index) {
 			nr = i_size & ~PAGE_MASK;
 			if (nr <= offset)
@@ -2663,6 +2693,12 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 			goto out;
 		}
 
+		if ((info->seals & F_SEAL_INACCESSIBLE) &&
+		    (offset & ~PAGE_MASK || len & ~PAGE_MASK)) {
+			error = -EINVAL;
+			goto out;
+		}
+
 		shmem_falloc.waitq = &shmem_falloc_waitq;
 		shmem_falloc.start = (u64)unmap_start >> PAGE_SHIFT;
 		shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT;
-- 
2.17.1