Subject: Re: [RFC PATCH v7 00/16] Add support for eXclusive Page Frame Ownership
To: Khalid Aziz, juergh@gmail.com, tycho@tycho.ws, jsteckli@amazon.de, ak@linux.intel.com, torvalds@linux-foundation.org, liran.alon@oracle.com, keescook@google.com, konrad.wilk@oracle.com
Cc: deepa.srinivasan@oracle.com, chris.hyser@oracle.com, tyhicks@canonical.com, dwmw@amazon.co.uk, andrew.cooper3@citrix.com, jcm@redhat.com, boris.ostrovsky@oracle.com, kanth.ghatraju@oracle.com, joao.m.martins@oracle.com, jmattson@google.com, pradeep.vincent@oracle.com, john.haxby@oracle.com, tglx@linutronix.de, kirill.shutemov@linux.intel.com, hch@lst.de, steven.sistare@oracle.com, kernel-hardening@lists.openwall.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andy Lutomirski, Peter Zijlstra
From: Dave Hansen <dave.hansen@intel.com>
Message-ID: <31fe7522-0a59-94c8-663e-049e9ad2bff6@intel.com>
Date: Thu, 10 Jan 2019 15:40:04 -0800

First of all, thanks for picking this back up. It looks to be going in a very positive direction!

On 1/10/19 1:09 PM, Khalid Aziz wrote:
> I implemented a solution to reduce performance penalty and
> that has had large impact. When XPFO code flushes stale TLB entries,
> it does so for all CPUs on the system which may include CPUs that
> may not have any matching TLB entries or may never be scheduled to
> run the userspace task causing TLB flush.
...
> A rogue process can launch a ret2dir attack only from a CPU that has
> dual mapping for its pages in physmap in its TLB. We can hence defer
> TLB flush on a CPU until a process that would have caused a TLB
> flush is scheduled on that CPU.

This logic is a bit suspect to me. Imagine a situation where we have two attacker processes: one which is causing a page to go from kernel->user (and be unmapped from the kernel) and a second process that *was* accessing that page.

The second process could easily have the page's old TLB entry. It could abuse that entry as long as that CPU doesn't context switch (switch_mm_irqs_off()) or otherwise flush the TLB entry.

As for where to flush the TLB... As you know, using synchronous IPIs is obviously the most bulletproof from a mitigation perspective. If you can batch the IPIs, you can get the overhead down, but you need to do the flushes for a bunch of pages at once, which I think is what you were exploring but haven't gotten working yet.

Anything else you do will have *some* reduced mitigation value, which isn't a deal-breaker (to me at least). Some ideas:

Take a look at SWITCH_TO_KERNEL_CR3 in head_64.S. Every time that gets called, we've (potentially) just done a user->kernel transition and might benefit from flushing the TLB. We're always doing a CR3 write (on Meltdown-vulnerable hardware) and it can do a full TLB flush depending on whether X86_CR3_PCID_NOFLUSH_BIT is set.

So, when you need a TLB flush, you would set a bit that ADJUST_KERNEL_CR3 would see on the next user->kernel transition on *each* CPU. Potentially, multiple TLB flushes could be coalesced this way. The downside of this is that you're exposed to the old TLB entries if a flush is needed while you are already *in* the kernel.

You could also potentially do this from C code, like in the syscall entry code, or in sensitive places, like when you're returning from a guest after a VMEXIT in the kvm code.
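
To make the C-code variant of that deferred-flush idea a bit more concrete, here is a rough sketch assuming a per-CPU "flush pending" flag. The function names are made up for illustration (they are not from your series), and it glosses over memory ordering and the actual entry-code plumbing:

/*
 * Illustrative only -- not from the posted XPFO series.  Instead of
 * IPIing every CPU when a page goes kernel->user, mark each CPU as
 * "flush pending" and consume the flag on that CPU's next user->kernel
 * transition (syscall entry, or after a VMEXIT).
 */
#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <asm/tlbflush.h>

static DEFINE_PER_CPU(bool, xpfo_flush_pending);

/* Call this where the XPFO code would otherwise IPI all CPUs. */
static void xpfo_mark_flush_pending(void)
{
	int cpu;

	for_each_online_cpu(cpu)
		per_cpu(xpfo_flush_pending, cpu) = true;
}

/*
 * Call this early on each user->kernel transition, before anything
 * could touch a physmap alias that might still be cached in the TLB.
 */
static void xpfo_flush_if_pending(void)
{
	if (this_cpu_read(xpfo_flush_pending)) {
		this_cpu_write(xpfo_flush_pending, false);
		__flush_tlb_all();
	}
}

That still has the window I mentioned above: a flush requested while a CPU is already *in* the kernel doesn't happen until that CPU's next entry. So it trades some mitigation coverage for not having to IPI every CPU on every kernel->user page transition.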