From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Ingo Molnar, x86@kernel.org, Thomas Gleixner, "H. Peter Anvin", Tom Lendacky
Cc: Dave Hansen, Kai Huang, Jacob Pan, linux-kernel@vger.kernel.org, linux-mm@kvack.org, "Kirill A. Shutemov"
Shutemov" Subject: [PATCHv5 02/19] mm: Do not use zero page in encrypted pages Date: Tue, 17 Jul 2018 14:20:12 +0300 Message-Id: <20180717112029.42378-3-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180717112029.42378-1-kirill.shutemov@linux.intel.com> References: <20180717112029.42378-1-kirill.shutemov@linux.intel.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Zero page is not encrypted and putting it into encrypted VMA produces garbage. We can map zero page with KeyID-0 into an encrypted VMA, but this would be violation security boundary between encryption domains. Forbid zero pages in encrypted VMAs. Signed-off-by: Kirill A. Shutemov --- arch/s390/include/asm/pgtable.h | 2 +- include/linux/mm.h | 4 ++-- mm/huge_memory.c | 3 +-- mm/memory.c | 3 +-- 4 files changed, 5 insertions(+), 7 deletions(-) diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h index 5ab636089c60..2e8658962aae 100644 --- a/arch/s390/include/asm/pgtable.h +++ b/arch/s390/include/asm/pgtable.h @@ -505,7 +505,7 @@ static inline int mm_alloc_pgste(struct mm_struct *mm) * In the case that a guest uses storage keys * faults should no longer be backed by zero pages */ -#define mm_forbids_zeropage mm_has_pgste +#define vma_forbids_zeropage(vma) mm_has_pgste(vma->vm_mm) static inline int mm_uses_skeys(struct mm_struct *mm) { #ifdef CONFIG_PGSTE diff --git a/include/linux/mm.h b/include/linux/mm.h index c8780c5835ad..151d6e6b16e5 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -92,8 +92,8 @@ extern int mmap_rnd_compat_bits __read_mostly; * s390 does this to prevent multiplexing of hardware bits * related to the physical page in case of virtualization. */ -#ifndef mm_forbids_zeropage -#define mm_forbids_zeropage(X) (0) +#ifndef vma_forbids_zeropage +#define vma_forbids_zeropage(vma) vma_keyid(vma) #endif /* diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 1cd7c1a57a14..83f096c7299b 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -676,8 +676,7 @@ int do_huge_pmd_anonymous_page(struct vm_fault *vmf) return VM_FAULT_OOM; if (unlikely(khugepaged_enter(vma, vma->vm_flags))) return VM_FAULT_OOM; - if (!(vmf->flags & FAULT_FLAG_WRITE) && - !mm_forbids_zeropage(vma->vm_mm) && + if (!(vmf->flags & FAULT_FLAG_WRITE) && !vma_forbids_zeropage(vma) && transparent_hugepage_use_zero_page()) { pgtable_t pgtable; struct page *zero_page; diff --git a/mm/memory.c b/mm/memory.c index 02fbef2bd024..a705637d2ded 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3139,8 +3139,7 @@ static int do_anonymous_page(struct vm_fault *vmf) return 0; /* Use the zero-page for reads */ - if (!(vmf->flags & FAULT_FLAG_WRITE) && - !mm_forbids_zeropage(vma->vm_mm)) { + if (!(vmf->flags & FAULT_FLAG_WRITE) && !vma_forbids_zeropage(vma)) { entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address), vma->vm_page_prot)); vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, -- 2.18.0