From: Yann Droneaud <ydroneaud@opteya.com>
To: Haggai Eran
Cc: Shachar Raindel, Sagi Grimberg, linux-rdma@vger.kernel.org,
 linux-kernel@vger.kernel.org, stable@vger.kernel.org
Subject: Re: CVE-2014-8159 kernel: infiniband: uverbs: unprotected physical memory access
Date: Fri, 03 Apr 2015 13:49:45 +0200
Message-ID: <1428061785.22575.139.camel@opteya.com>
In-Reply-To: <1428050408201.35668@mellanox.com>
References: <1427969085.17020.5.camel@opteya.com>
 <1427981431.22575.21.camel@opteya.com>
 <551D5DC8.6070909@mellanox.com>
 <1427992506.22575.80.camel@opteya.com>
 <1428007208.22575.104.camel@opteya.com>
 <1428050408201.35668@mellanox.com>
Organization: OPTEYA

Hi,

On Friday, April 3, 2015 at 08:39 +0000, Haggai Eran wrote:
> On Thursday, April 2, 2015 11:40 PM, Yann Droneaud wrote:
> > On Thursday, April 2, 2015 at 16:44 +0000, Shachar Raindel wrote:
> >> > -----Original Message-----
> >> > From: Yann Droneaud [mailto:ydroneaud@opteya.com]
> >> > Sent: Thursday, April 02, 2015 7:35 PM
> >
> >> > Another related question: as a large memory range can be registered
> >> > by user space with ibv_reg_mr(pd, base, size, IB_ACCESS_ON_DEMAND),
> >> > what prevents the kernel from mapping a file, as the result of
> >> > mmap(0, ...), into this region, making it available remotely through
> >> > IBV_WR_RDMA_READ / IBV_WR_RDMA_WRITE ?
> >> >
> >>
> >> This is not a bug. This is a feature.
> >>
> >> Exposing a file through RDMA, using ODP, can be done exactly like this.
> >> Given that the application explicitly requested this behavior, I don't
> >> see why it is a problem.
> >
> > If the application cannot choose what will end up in the region it has
> > registered, it's an issue!
> >
> > What might happen if one library in a program calls mmap(0, size, ...) to
> > load a file storing a secret (a private key), and that file ends up
> > being mapped in a registered but otherwise free region (afaict, the
> > kernel is allowed to do it)?
> > What might happen if one library in a program calls mmap(0,
> > size, ..., MAP_ANONYMOUS, ...) to allocate memory, calls mlock(), then
> > writes a secret (a passphrase) in this location, and that area ends up
> > in the memory region registered for on-demand paging?
> >
> > The application hasn't chosen to disclose these confidential pieces of
> > information, but they are available for reading/writing by a remote peer
> > through RDMA, given it knows the rkey of the memory region (which is a
> > 32-bit value).
> >
> > I hope I'm missing something, because I'm not feeling confident such
> > behavior is a feature.
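For concreteness, the scenario questioned above could look like the sketch
below. This is only an illustration: it assumes a libibverbs build with ODP
support, a protection domain "pd" created elsewhere, and a placeholder file
path; IBV_ACCESS_ON_DEMAND is the userspace spelling of the
IB_ACCESS_ON_DEMAND flag mentioned above.

#include <stdio.h>
#include <fcntl.h>
#include <limits.h>
#include <unistd.h>
#include <sys/mman.h>
#include <infiniband/verbs.h>

/* Speculatively register everything above the current program break,
 * even though most of that range is not mapped yet (possible with ODP). */
static struct ibv_mr *register_above_brk(struct ibv_pd *pd,
                                         void **start, size_t *size)
{
        *start = sbrk(0);
        *size = ULONG_MAX - (unsigned long)*start;

        return ibv_reg_mr(pd, *start, *size,
                          IBV_ACCESS_ON_DEMAND |
                          IBV_ACCESS_LOCAL_WRITE |
                          IBV_ACCESS_REMOTE_READ |
                          IBV_ACCESS_REMOTE_WRITE);
}

static int sketch(struct ibv_pd *pd)
{
        void *start;
        size_t size;
        struct ibv_mr *mr = register_above_brk(pd, &start, &size);

        if (!mr)
                return 1;

        /* Later, some library maps a file at a kernel-chosen address... */
        int fd = open("/path/to/secret", O_RDONLY);  /* placeholder path */
        if (fd < 0)
                return 1;
        void *p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);

        /* ...and nothing prevents that address from falling inside the
         * registered range, where it is reachable through mr->rkey. */
        if (p != MAP_FAILED &&
            (unsigned long)p >= (unsigned long)start &&
            (unsigned long)p - (unsigned long)start < size)
                fprintf(stderr, "mapping at %p is covered by rkey 0x%x\n",
                        p, mr->rkey);

        return 0;
}

Nothing in this flow asks the application to confirm that the new mapping
should become remotely reachable, which is exactly the concern raised below.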
> What we are aiming for is the possibility to register the entire process'
> address space for RDMA operations (if the process chooses to use this
> feature). This is similar to multiple threads accessing the same address
> space. I'm sure you wouldn't be complaining about the ability of one
> thread to access the secret passphrase mmapped by another thread in your
> example.
>
> > I'm trying to understand how the application can choose what is exposed
> > through RDMA if it registers a very large memory region for later use
> > (but does not actually explicitly map something there yet): what are
> > the consequences?
> >
> > void *start = sbrk(0);
> > size_t size = ULONG_MAX - (unsigned long)start;
> >
> > ibv_reg_mr(pd, start, size, IB_ACCESS_ON_DEMAND)
>
> The consequences are exactly as you wrote. Just as giving a non-ODP rkey
> to a remote node allows the node to access the registered memory behind
> that rkey, giving an ODP rkey to a remote node allows that node to access
> the virtual address space behind that rkey.

There's a difference: it's impossible to give a valid non-ODP rkey that
points to a memory region not already mapped (backed by a file, for
example), so the application *chooses* the content of the memory before
making it remotely accessible.

As I understand the last explanation regarding ODP, at creation time an
ODP rkey can point to a free, unused, unallocated portion of the address
space. From that point on, the kernel can happily map anything the
application (and its libraries) wants to map at an (almost) *random*
address that could be in (or partially in) the ODP memory region.

And I have a problem with such random behavior.

Allowing this seems dangerous and should be done with care.

I believe the application must keep control of what ends up in its
ODP-registered memory region.

Especially for a multi-threaded program: imagine one thread creating a
large memory region for its future purposes, then sending the rkey to a
remote peer and waiting for some work to be done. In the meantime another
thread calls mmap(0, ...) to map a file at a kernel-chosen address, and
that address happens to be in the memory region registered by the first
thread:

1) the first thread loses a portion of the address range it intended
   to use;

2) the data used by the second thread is accessible to the remote
   peer(s) without that being expected.

Speculatively registering memory seems dangerous for any use case I can
think of.

Regards.

-- 
Yann Droneaud
OPTEYA