From: "H. Peter Anvin, Intel"
To: Linux Kernel Mailing List
Cc: "H. Peter Anvin", Ingo Molnar, Thomas Gleixner, Andy Lutomirski,
	"Chang S. Bae", "Markus T. Metzger"
Subject: [PATCH v3 1/7] x86/ldt: refresh %fs and %gs in refresh_ldt_segments()
Date: Thu, 21 Jun 2018 14:17:48 -0700
Message-Id: <20180621211754.12757-2-h.peter.anvin@intel.com>
X-Mailer: git-send-email 2.14.4
In-Reply-To: <20180621211754.12757-1-h.peter.anvin@intel.com>
References: <20180621211754.12757-1-h.peter.anvin@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: "H. Peter Anvin"

It is not only %ds and %es which contain cached user descriptor
information; %fs and %gs do as well.

To make sure we don't do something stupid that will affect processes
which wouldn't want this requalification, be more restrictive about
which selector numbers will be requalified: they need to be LDT
selectors (which by definition are never null), have an RPL of 3
(always the case in user space unless null), and match the updated
descriptor.

The infrastructure is set up to allow a range of descriptors; this
will be used in a subsequent patch.

Signed-off-by: H. Peter Anvin (Intel)
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Andy Lutomirski
Cc: Chang S. Bae
Cc: Markus T. Metzger
---
 arch/x86/kernel/ldt.c | 70 +++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 54 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
index c9b14020f4dd..18e9f4c0633d 100644
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -29,36 +29,68 @@
 #include
 #include
 
-static void refresh_ldt_segments(void)
-{
+struct flush_ldt_info {
+	struct mm_struct *mm;
+	unsigned short first_desc;
+	unsigned short last_desc;
+};
+
 #ifdef CONFIG_X86_64
+
+static inline bool
+need_requalify(unsigned short sel, const struct flush_ldt_info *info)
+{
+	/* Must be an LDT segment descriptor with an RPL of 3 */
+	if ((sel & (SEGMENT_TI_MASK|SEGMENT_RPL_MASK)) != (SEGMENT_LDT|3))
+		return false;
+
+	return sel >= info->first_desc && sel <= info->last_desc;
+}
+
+static void refresh_ldt_segments(const struct flush_ldt_info *info)
+{
 	unsigned short sel;
 
 	/*
-	 * Make sure that the cached DS and ES descriptors match the updated
-	 * LDT.
+	 * Make sure that the cached DS/ES/FS/GS descriptors
+	 * match the updated LDT, if the specific selectors point
+	 * to LDT entries that have changed.
	 */
 	savesegment(ds, sel);
-	if ((sel & SEGMENT_TI_MASK) == SEGMENT_LDT)
+	if (need_requalify(sel, info))
 		loadsegment(ds, sel);
 
 	savesegment(es, sel);
-	if ((sel & SEGMENT_TI_MASK) == SEGMENT_LDT)
+	if (need_requalify(sel, info))
 		loadsegment(es, sel);
-#endif
+
+	savesegment(fs, sel);
+	if (need_requalify(sel, info))
+		loadsegment(fs, sel);
+
+	savesegment(gs, sel);
+	if (need_requalify(sel, info))
+		load_gs_index(sel);
 }
+#else
+/* On 32 bits, entry_32.S takes care of this on kernel exit */
+static void refresh_ldt_segments(const struct flush_ldt_info *info)
+{
+	(void)info;
+}
+#endif
+
 /* context.lock is held by the task which issued the smp function call */
-static void flush_ldt(void *__mm)
+static void flush_ldt(void *_info)
 {
-	struct mm_struct *mm = __mm;
+	const struct flush_ldt_info *info = _info;
 
-	if (this_cpu_read(cpu_tlbstate.loaded_mm) != mm)
+	if (this_cpu_read(cpu_tlbstate.loaded_mm) != info->mm)
 		return;
 
-	load_mm_ldt(mm);
-
-	refresh_ldt_segments();
+	load_mm_ldt(info->mm);
+	refresh_ldt_segments(info);
 }
 
 /* The caller must call finalize_ldt_struct on the result. LDT starts zeroed. */
@@ -223,15 +255,21 @@ static void finalize_ldt_struct(struct ldt_struct *ldt)
 	paravirt_alloc_ldt(ldt->entries, ldt->nr_entries);
 }
 
-static void install_ldt(struct mm_struct *mm, struct ldt_struct *ldt)
+static void install_ldt(struct mm_struct *mm, struct ldt_struct *ldt,
+			unsigned short first_index, unsigned short last_index)
 {
+	struct flush_ldt_info info;
+
 	mutex_lock(&mm->context.lock);
 
 	/* Synchronizes with READ_ONCE in load_mm_ldt. */
 	smp_store_release(&mm->context.ldt, ldt);
 
 	/* Activate the LDT for all CPUs using currents mm. */
-	on_each_cpu_mask(mm_cpumask(mm), flush_ldt, mm, true);
+	info.mm = mm;
+	info.first_desc = (first_index << 3)|SEGMENT_LDT|3;
+	info.last_desc = (last_index << 3)|SEGMENT_LDT|3;
+	on_each_cpu_mask(mm_cpumask(mm), flush_ldt, &info, true);
 
 	mutex_unlock(&mm->context.lock);
 }
@@ -436,7 +474,7 @@ static int write_ldt(void __user *ptr, unsigned long bytecount, int oldmode)
 		goto out_unlock;
 	}
 
-	install_ldt(mm, new_ldt);
+	install_ldt(mm, new_ldt, ldt_info.entry_number, ldt_info.entry_number);
 
 	free_ldt_struct(old_ldt);
 	error = 0;
-- 
2.14.4