From: Andy Lutomirski
Date: Tue, 19 Nov 2019 02:01:16 -0800
Subject: Re: [PATCH 1/1] mm: Map next task stack in previous mm.
To: Maninder Singh
Cc: Dave Hansen, Andrew Lutomirski, Peter Zijlstra, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, "H. Peter Anvin", X86 ML, LKML,
    v.narang@samsung.com, a.sahrawat@samsung.com, avnish.jain@samsung.com
References: <1574140520-9738-1-git-send-email-maninder1.s@samsung.com>
In-Reply-To: <1574140520-9738-1-git-send-email-maninder1.s@samsung.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Nov 18, 2019 at 9:44 PM Maninder Singh wrote:
>
> Issue: In context switch, stack of next task (kernel thread)
> is not mapped in previous task PGD.
>
> Issue faced while porting VMAP stack on ARM.
> currently forcible mapping is done in case of switching mm, but if
> next task is kernel thread then it can cause issue.
>
> Thus Map stack of next task in prev if next task is kernel thread,
> as kernel thread will use mm of prev task.
>
> "Since we don't have reproducible setup for x86,
> changes verified on ARM. So not sure about arch specific handling
> for X86"

I think the code is correct without your patch and is wrong with your
patch.  See below.
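For readers following along: the case the patch description is concerned with is the lazy-mm path, where a kernel thread has no mm of its own and keeps running on the previous task's page tables. Below is a rough sketch of that scheduler path, simplified and paraphrased rather than quoted from kernel/sched/core.c (refcounting, locking, and membarrier handling omitted):

```c
/*
 * Simplified sketch of context_switch() -- paraphrased, not the literal
 * kernel source; mmgrab()/mmdrop() bookkeeping and rq locking omitted.
 */
static struct rq *context_switch(struct rq *rq, struct task_struct *prev,
				 struct task_struct *next, struct rq_flags *rf)
{
	prepare_task_switch(rq, prev, next);

	if (!next->mm) {
		/*
		 * Next is a kernel thread: it borrows prev's address
		 * space.  No CR3 switch happens here, only
		 * enter_lazy_tlb().
		 */
		enter_lazy_tlb(prev->active_mm, next);
		next->active_mm = prev->active_mm;
	} else {
		/*
		 * Next is a user task: switch_mm_irqs_off() is where
		 * sync_current_stack_to_mm() gets called today.
		 */
		switch_mm_irqs_off(prev->active_mm, next->mm, next);
	}

	/* On x86, switch_to() invokes prepare_switch_to(next) first. */
	switch_to(prev, next, prev);

	return finish_task_switch(prev);
}
```

The patch below is essentially arguing that the first branch needs the same stack-sync treatment as the second; the inline comments further down explain why it does not.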
>
> Signed-off-by: Vaneet Narang
> Signed-off-by: Maninder Singh
> ---
>  arch/x86/mm/tlb.c | 23 +++++++++++++++++++----
>  1 file changed, 19 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index e6a9edc5baaf..28328cf8e79c 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -161,11 +161,17 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>          local_irq_restore(flags);
>  }
>
> -static void sync_current_stack_to_mm(struct mm_struct *mm)
> +static void sync_stack_to_mm(struct mm_struct *mm, struct task_struct *tsk)
>  {
> -        unsigned long sp = current_stack_pointer;
> -        pgd_t *pgd = pgd_offset(mm, sp);
> +        unsigned long sp;
> +        pgd_t *pgd;
>
> +        if (!tsk)
> +                sp = current_stack_pointer;
> +        else
> +                sp = (unsigned long)tsk->stack;
> +
> +        pgd = pgd_offset(mm, sp);
>          if (pgtable_l5_enabled()) {
>                  if (unlikely(pgd_none(*pgd))) {
>                          pgd_t *pgd_ref = pgd_offset_k(sp);
> @@ -383,7 +389,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>                   * mapped in the new pgd, we'll double-fault.  Forcibly
>                   * map it.
>                   */
> -                sync_current_stack_to_mm(next);
> +                sync_stack_to_mm(next, NULL);

If we set CR3 to point to the next mm's PGD and then touch the current
stack, we'll double-fault.  So we need to sync the *current* stack, not
the next stack.  The code in prepare_switch_to() makes sure that the
next task's stack is synced.

>          }
>
>          /*
> @@ -460,6 +466,15 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>   */
>  void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
>  {
> +        if (IS_ENABLED(CONFIG_VMAP_STACK)) {
> +                /*
> +                 * If tsk stack is in vmalloc space and isn't
> +                 * mapped in the new pgd, we'll double-fault.  Forcibly
> +                 * map it.
> +                 */
> +                sync_stack_to_mm(mm, tsk);
> +        }
> +

I don't think this is necessary, since prepare_switch_to() already does
what's needed.
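For reference, this is roughly what prepare_switch_to() looks like on x86, paraphrased from memory of arch/x86/include/asm/switch_to.h around v5.4 (check the tree for the exact code and comment wording). It runs from switch_to(), while we are still executing on the old, already-mapped stack, and simply touches the new stack so that any vmalloc fault is taken and fixed up before %rsp moves there:

```c
static inline void prepare_switch_to(struct task_struct *next)
{
#ifdef CONFIG_VMAP_STACK
	/*
	 * If the next stack lives in vmalloc space and its top-level
	 * paging entry is missing from the current page tables, the
	 * first access after %rsp moves there would fault with no
	 * usable stack and escalate to a double fault.  Touching it
	 * here, while still on the old stack, lets the normal vmalloc
	 * fault path populate the entry safely.
	 */
	READ_ONCE(*(unsigned char *)next->thread.sp);
#endif
}
```

Because this touch happens under whatever page tables the incoming kernel thread will keep using (prev's, in the lazy-TLB case), the top-level entry for the new stack is already populated by the time the thread runs on it, which is why the extra sync in enter_lazy_tlb() above is redundant.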