Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754381AbeAGSZY (ORCPT + 1 other); Sun, 7 Jan 2018 13:25:24 -0500
Received: from mail.skyhub.de ([5.9.137.197]:37386 "EHLO mail.skyhub.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1754215AbeAGSZX (ORCPT ); Sun, 7 Jan 2018 13:25:23 -0500
Date: Sun, 7 Jan 2018 19:25:14 +0100
From: Borislav Petkov
To: Tom Lendacky
Cc: x86@kernel.org, Brijesh Singh, linux-kernel@vger.kernel.org,
	Ingo Molnar, "H. Peter Anvin", Thomas Gleixner
Subject: Re: [PATCH v2 4/5] x86/mm: Prepare sme_encrypt_kernel() for PAGE aligned encryption
Message-ID: <20180107182513.3bvw3xgrzaxi23m3@pd.tnic>
References: <20171221220242.30632.5031.stgit@tlendack-t1.amdoffice.net>
	<20171221220321.30632.70405.stgit@tlendack-t1.amdoffice.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20171221220321.30632.70405.stgit@tlendack-t1.amdoffice.net>
User-Agent: NeoMutt/20170609 (1.8.3)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Return-Path:

On Thu, Dec 21, 2017 at 04:03:21PM -0600, Tom Lendacky wrote:
> @@ -568,17 +578,57 @@ static void __init sme_populate_pgd_large(struct sme_populate_pgd_data *ppd)
>  		native_set_pud(pud_p, pud);
>  	}
>  
> +	return pmd_p;
> +}
> +
> +static void __init sme_populate_pgd_large(struct sme_populate_pgd_data *ppd)
> +{
> +	pmd_t *pmd_p;
> +
> +	pmd_p = sme_prepare_pgd(ppd);
> +	if (!pmd_p)
> +		return;
> +
>  	pmd_p += pmd_index(ppd->vaddr);
>  	if (!native_pmd_val(*pmd_p) || !(native_pmd_val(*pmd_p) & _PAGE_PSE))
>  		native_set_pmd(pmd_p,
>  			       native_make_pmd(ppd->paddr | ppd->pmd_flags));

Ugly linebreak.
> }
> 
> -static void __init __sme_map_range(struct sme_populate_pgd_data *ppd,
> -				   pmdval_t pmd_flags)
> +static void __init sme_populate_pgd(struct sme_populate_pgd_data *ppd)
>  {
> -	ppd->pmd_flags = pmd_flags;
> +	pmd_t *pmd_p;
> +	pte_t *pte_p;
> +
> +	pmd_p = sme_prepare_pgd(ppd);
> +	if (!pmd_p)
> +		return;
> +
> +	pmd_p += pmd_index(ppd->vaddr);
> +	if (native_pmd_val(*pmd_p)) {
> +		if (native_pmd_val(*pmd_p) & _PAGE_PSE)
> +			return;
> +
> +		pte_p = (pte_t *)(native_pmd_val(*pmd_p) & ~PTE_FLAGS_MASK);
> +	} else {
> +		pmd_t pmd;
> 
> +		pte_p = ppd->pgtable_area;
> +		memset(pte_p, 0, sizeof(*pte_p) * PTRS_PER_PTE);
> +		ppd->pgtable_area += sizeof(*pte_p) * PTRS_PER_PTE;
> +
> +		pmd = native_make_pmd((pteval_t)pte_p + PMD_FLAGS);
> +		native_set_pmd(pmd_p, pmd);
> +	}
> +
> +	pte_p += pte_index(ppd->vaddr);
> +	if (!native_pte_val(*pte_p))
> +		native_set_pte(pte_p,
> +			       native_make_pte(ppd->paddr | ppd->pte_flags));

Ditto.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.