Subject: Re: [patch 11/16] x86/ldt: Force access bit for CS/SS
From: Andy Lutomirski
Date: Tue, 12 Dec 2017 10:22:48 -0800
To: Andy Lutomirski
Cc: Peter Zijlstra, Thomas Gleixner, LKML, X86 ML, Linus Torvalds,
    Dave Hansen, Borislav Petkov, Greg KH, Kees Cook, Hugh Dickins,
    Brian Gerst, Josh Poimboeuf, Denys Vlasenko, Boris Ostrovsky,
    Juergen Gross, David Laight, Eduardo Valentin, aliguori@amazon.com,
    Will Deacon, linux-mm@kvack.org
References: <20171212173221.496222173@linutronix.de>
    <20171212173334.176469949@linutronix.de>
    <20171212180918.lc5fdk5jyzwmrcxq@hirez.programming.kicks-ass.net>

> On Dec 12, 2017, at 10:10 AM, Andy Lutomirski wrote:
>
>> On Tue, Dec 12, 2017 at 10:09 AM, Peter Zijlstra wrote:
>>> On Tue, Dec 12, 2017 at 10:03:02AM -0800, Andy Lutomirski wrote:
>>> On Tue, Dec 12, 2017 at 9:32 AM, Thomas Gleixner wrote:
>>
>>>> @@ -171,6 +172,9 @@ static void exit_to_usermode_loop(struct
>>>>                 /* Disable IRQs and retry */
>>>>                 local_irq_disable();
>>>>
>>>> +               if (cached_flags & _TIF_LDT)
>>>> +                       ldt_exit_user(regs);
>>>
>>> Nope.  To the extent that this code actually does anything (which it
>>> shouldn't, since you already forced the access bit),
>>
>> Without this, even with the access bit set, IRET will go wobbly and
>> we'll #GP on the user-space side.  Try it ;-)
>
> Maybe later.
>
> But that means that we need Intel and AMD to confirm WTF is going on
> before this blows up even with LAR on some other CPU.
>
>>
>>> it's racy against flush_ldt() from another thread, and that race
>>> will be exploitable for privilege escalation.  It needs to be
>>> outside the loopy part.
>>
>> The flush_ldt (__ldt_install after these patches) would re-set the
>> TIF flag.  But sure, we can move this outside the loop I suppose.

Also, why is LAR deferred to user exit?  And I thought that LAR didn't
set the accessed bit.

If I had to guess, I'd guess that LAR is actually generating a read
fault and forcing the pagetables to get populated.  If so, then it
means the VMA code isn't quite right, or you're susceptible to
failures under memory pressure.

Now maybe LAR will repopulate the PTE every time if you were to never
clear it, but ick.
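
For reference, here is a minimal user-space sketch of the LAR behavior being
discussed; it is an illustration only and takes nothing from the patch series
itself.  It installs one LDT entry via modify_ldt(2) and then executes LAR on
the matching selector.  The entry number, base, and limit are arbitrary
demonstration values, and the program name is made up.  The point it shows is
the one being argued above: LAR reads the descriptor out of the LDT (which is
enough to touch, and hence fault in, the LDT mapping) and reports the access
rights, but it does not itself set the descriptor's accessed bit.

/* lar_demo.c -- illustrative sketch, not kernel code from the patch set.
 * Build: gcc -o lar_demo lar_demo.c   (x86, needs GCC 6+/clang for the
 * "=@ccz" flag-output constraint)
 */
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <asm/ldt.h>

int main(void)
{
	struct user_desc ud;
	unsigned long ar = 0;
	int valid;
	/* LDT selector: index 0, table indicator = LDT (bit 2), RPL 3 */
	unsigned short sel = (0 << 3) | 4 | 3;

	/* Install a plain read/write 32-bit data segment as LDT entry 0. */
	memset(&ud, 0, sizeof(ud));
	ud.entry_number   = 0;
	ud.base_addr      = 0;
	ud.limit          = 0xfffff;
	ud.seg_32bit      = 1;
	ud.limit_in_pages = 1;
	ud.useable        = 1;

	if (syscall(SYS_modify_ldt, 1, &ud, sizeof(ud)) != 0) {
		perror("modify_ldt");
		return 1;
	}

	/* LAR loads the access-rights bytes of the descriptor named by the
	 * selector; ZF reports whether the selector was valid.  Reading the
	 * descriptor is what would fault in a not-present LDT page -- the
	 * accessed bit in the descriptor is left alone. */
	asm volatile("lar %[sel], %[ar]"
		     : [ar] "=r" (ar), "=@ccz" (valid)
		     : [sel] "r" ((unsigned long)sel));

	printf("LAR %#x: valid=%d access rights=%#lx\n", sel, valid, ar);
	return 0;
}

Running it on a stock kernel prints the access-rights dword for the freshly
installed entry; if the selector did not name a usable descriptor, ZF would be
clear and the destination left unchanged, which is why the code checks the
flag rather than the value.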