Date: Tue, 24 Nov 2020 17:09:06 +0100
From: Peter Zijlstra
To: "Joel Fernandes (Google)"
Cc: Nishanth Aravamudan, Julien Desfossez, Tim Chen, Vineeth Pillai,
	Aaron Lu, Aubrey Li, tglx@linutronix.de, linux-kernel@vger.kernel.org,
	mingo@kernel.org, torvalds@linux-foundation.org, fweisbec@gmail.com,
	keescook@chromium.org, kerrnel@google.com, Phil Auld, Valentin Schneider,
	Mel Gorman, Pawan Gupta, Paolo Bonzini, vineeth@bitbyteword.org,
	Chen Yu, Christian Brauner, Agata Gruza, Antonio Gomez Iglesias,
	graf@amazon.com, konrad.wilk@oracle.com, dfaggioli@suse.com, pjt@google.com,
	rostedt@goodmis.org, derkling@google.com, benbjiang@tencent.com,
	Alexandre Chartre, James.Bottomley@hansenpartnership.com, OWeisse@umich.edu,
	Dhaval Giani, Junaid Shahid, jsbarnes@google.com, chris.hyser@oracle.com,
	Ben Segall, Josh Don, Hao Luo, Tom Lendacky, Aubrey Li, Tim Chen,
	"Paul E. McKenney"
Subject: Re: [PATCH -tip 18/32] kernel/entry: Add support for core-wide protection of kernel-mode
Message-ID: <20201124160906.GA3021@hirez.programming.kicks-ass.net>
References: <20201117232003.3580179-1-joel@joelfernandes.org>
	<20201117232003.3580179-19-joel@joelfernandes.org>
In-Reply-To: <20201117232003.3580179-19-joel@joelfernandes.org>

On Tue, Nov 17, 2020 at 06:19:48PM -0500, Joel Fernandes (Google) wrote:
> Core-scheduling prevents hyperthreads in usermode from attacking each
> other, but it does not do anything about one of the hyperthreads
> entering the kernel for any reason. This leaves the door open for MDS
> and L1TF attacks with concurrent execution sequences between
> hyperthreads.
>
> This patch therefore adds support for protecting all syscall and IRQ
> kernel-mode entries. Care is taken to track the outermost usermode exit
> and entry using per-cpu counters. In cases where one of the hyperthreads
> enters the kernel, no additional IPIs are sent. Further, IPIs are avoided
> when not needed - for example, idle and non-cookie HTs do not need to be
> forced into kernel mode.
>
> More information about attacks:
> For MDS, it is possible for syscalls, IRQ and softirq handlers to leak
> data to either host or guest attackers. For L1TF, it is possible to leak
> to guest attackers. There is no possible mitigation involving flushing
> of buffers to avoid this, since the execution of attacker and victim
> happens concurrently on 2 or more HTs.

Oh gawd; this is horrible...
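
For my own sanity, the scheme as I read it, reduced to a toy userspace
model (this is only my sketch of the described protocol, not the patch:
the real code serializes the core-wide count with the rq lock instead of
atomics, keeps a per-CPU nest count to catch unbalanced calls, and does
the sibling IPI / TIF_UNSAFE_RET handling the toy leaves out):

	#include <stdatomic.h>

	/* ~ rq->core->core_unsafe_nest: how many siblings are in the kernel */
	static atomic_uint core_unsafe_nest;

	static void unsafe_enter(void)		/* outermost kernel entry */
	{
		atomic_fetch_add(&core_unsafe_nest, 1);
		/* real patch: take the rq lock, IPI siblings running tagged tasks */
	}

	static void unsafe_exit(void)		/* outermost kernel exit */
	{
		atomic_fetch_sub(&core_unsafe_nest, 1);
	}

	static void wait_till_safe(void)	/* on the way back to userspace */
	{
		while (atomic_load(&core_unsafe_nest) > 0)
			;			/* real patch: cpu_relax() + TIF checks */
	}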
> +bool sched_core_wait_till_safe(unsigned long ti_check)
> +{
> +	bool restart = false;
> +	struct rq *rq;
> +	int cpu;
> +
> +	/* We clear the thread flag only at the end, so no need to check for it. */
> +	ti_check &= ~_TIF_UNSAFE_RET;
> +
> +	cpu = smp_processor_id();
> +	rq = cpu_rq(cpu);
> +
> +	if (!sched_core_enabled(rq))
> +		goto ret;
> +
> +	/* Down grade to allow interrupts to prevent stop_machine lockups.. */
> +	preempt_disable();
> +	local_irq_enable();
> +
> +	/*
> +	 * Wait till the core of this HT is not in an unsafe state.
> +	 *
> +	 * Pair with raw_spin_lock/unlock() in sched_core_unsafe_enter/exit().
> +	 */
> +	while (smp_load_acquire(&rq->core->core_unsafe_nest) > 0) {
> +		cpu_relax();
> +		if (READ_ONCE(current_thread_info()->flags) & ti_check) {
> +			restart = true;
> +			break;
> +		}
> +	}

What's that ACQUIRE for?

> +
> +	/* Upgrade it back to the expectations of entry code. */
> +	local_irq_disable();
> +	preempt_enable();
> +
> +ret:
> +	if (!restart)
> +		clear_tsk_thread_flag(current, TIF_UNSAFE_RET);
> +
> +	return restart;
> +}

So if TIF_NEED_RESCHED gets set, we'll break out and reschedule, cute.

> +void sched_core_unsafe_enter(void)
> +{
> +	const struct cpumask *smt_mask;
> +	unsigned long flags;
> +	struct rq *rq;
> +	int i, cpu;
> +
> +	if (!static_branch_likely(&sched_core_protect_kernel))
> +		return;
> +
> +	local_irq_save(flags);
> +	cpu = smp_processor_id();
> +	rq = cpu_rq(cpu);
> +	if (!sched_core_enabled(rq))
> +		goto ret;
> +
> +	/* Ensure that on return to user/guest, we check whether to wait. */
> +	if (current->core_cookie)
> +		set_tsk_thread_flag(current, TIF_UNSAFE_RET);
> +
> +	/* Count unsafe_enter() calls received without unsafe_exit() on this CPU. */
> +	rq->core_this_unsafe_nest++;
> +
> +	/*
> +	 * Should not nest: enter() should only pair with exit(). Both are done
> +	 * during the first entry into kernel and the last exit from kernel.
> +	 * Nested kernel entries (such as nested interrupts) will only trigger
> +	 * enter() and exit() on the outer most kernel entry and exit.
> +	 */
> +	if (WARN_ON_ONCE(rq->core_this_unsafe_nest != 1))
> +		goto ret;
> +
> +	raw_spin_lock(rq_lockp(rq));
> +	smt_mask = cpu_smt_mask(cpu);
> +
> +	/*
> +	 * Contribute this CPU's unsafe_enter() to the core-wide unsafe_enter()
> +	 * count. The raw_spin_unlock() release semantics pairs with the nest
> +	 * counter's smp_load_acquire() in sched_core_wait_till_safe().
> +	 */
> +	WRITE_ONCE(rq->core->core_unsafe_nest, rq->core->core_unsafe_nest + 1);
> +
> +	if (WARN_ON_ONCE(rq->core->core_unsafe_nest == UINT_MAX))
> +		goto unlock;
> +
> +	if (irq_work_is_busy(&rq->core_irq_work)) {
> +		/*
> +		 * Do nothing more since we are in an IPI sent from another
> +		 * sibling to enforce safety. That sibling would have sent IPIs
> +		 * to all of the HTs.
> +		 */
> +		goto unlock;
> +	}
> +
> +	/*
> +	 * If we are not the first ones on the core to enter core-wide unsafe
> +	 * state, do nothing.
> +	 */
> +	if (rq->core->core_unsafe_nest > 1)
> +		goto unlock;
> +
> +	/* Do nothing more if the core is not tagged. */
> +	if (!rq->core->core_cookie)
> +		goto unlock;
> +
> +	for_each_cpu(i, smt_mask) {
> +		struct rq *srq = cpu_rq(i);
> +
> +		if (i == cpu || cpu_is_offline(i))
> +			continue;
> +
> +		if (!srq->curr->mm || is_task_rq_idle(srq->curr))
> +			continue;
> +
> +		/* Skip if HT is not running a tagged task. */
> +		if (!srq->curr->core_cookie && !srq->core_pick)
> +			continue;
> +
> +		/*
> +		 * Force sibling into the kernel by IPI. If work was already
> +		 * pending, no new IPIs are sent. This is Ok since the receiver
> +		 * would already be in the kernel, or on its way to it.
> +		 */
> +		irq_work_queue_on(&srq->core_irq_work, i);

Why irq_work though? Why not smp_send_reschedule(i)?
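
Something like the below (untested sketch, reusing your loop) would seem
to do the same job: the resched IPI already drags the sibling out of
userspace, and its own IRQ entry then goes through unsafe_enter() and the
TIF_UNSAFE_RET dance anyway:

	for_each_cpu(i, smt_mask) {
		struct rq *srq = cpu_rq(i);

		if (i == cpu || cpu_is_offline(i))
			continue;

		if (!srq->curr->mm || is_task_rq_idle(srq->curr))
			continue;

		/* Skip if HT is not running a tagged task. */
		if (!srq->curr->core_cookie && !srq->core_pick)
			continue;

		smp_send_reschedule(i);
	}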
> +	}
> +unlock:
> +	raw_spin_unlock(rq_lockp(rq));
> +ret:
> +	local_irq_restore(flags);
> +}
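
And on the ACQUIRE question above: unless something after that wait loop
actually needs to be ordered against the sibling's unsafe_exit(), a plain
READ_ONCE() would appear to be sufficient, i.e. (sketch, same loop as in
your patch):

	while (READ_ONCE(rq->core->core_unsafe_nest) > 0) {
		cpu_relax();
		if (READ_ONCE(current_thread_info()->flags) & ti_check) {
			restart = true;
			break;
		}
	}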