From: Ralph Campbell
Subject: Re: [PATCH 5/6] Documentation for Pmalloc
Date: Wed, 24 Jan 2018 11:14:34 -0800
Message-ID: <8397e79f-c4b7-4cae-e5a0-2c3b32d9a327@nvidia.com>
In-Reply-To: <20180124175631.22925-6-igor.stoppa@huawei.com>
References: <20180124175631.22925-1-igor.stoppa@huawei.com>
 <20180124175631.22925-6-igor.stoppa@huawei.com>
To: Igor Stoppa
X-Mailing-List: linux-kernel@vger.kernel.org

Two minor typos inline below:

On 01/24/2018 09:56 AM, Igor Stoppa wrote:
> Detailed documentation about the protectable memory allocator.
>
> Signed-off-by: Igor Stoppa
> ---
>  Documentation/core-api/pmalloc.txt | 104 +++++++++++++++++++++++++++++++++++++
>  1 file changed, 104 insertions(+)
>  create mode 100644 Documentation/core-api/pmalloc.txt
>
> diff --git a/Documentation/core-api/pmalloc.txt b/Documentation/core-api/pmalloc.txt
> new file mode 100644
> index 0000000..9c39672
> --- /dev/null
> +++ b/Documentation/core-api/pmalloc.txt
> @@ -0,0 +1,104 @@
> +============================
> +Protectable memory allocator
> +============================
> +
> +Introduction
> +------------
> +
> +When trying to perform an attack toward a system, the attacker typically
> +wants to alter the execution flow, in a way that allows actions which
> +would otherwise be forbidden.
> +
> +In recent years there has been lots of effort in preventing the execution
> +of arbitrary code, so the attacker is progressively pushed to look for
> +alternatives.
> +
> +If code changes are either detected or even prevented, what is left is to
> +alter kernel data.
> +
> +As countermeasure, constant data is collected in a section which is then
> +marked as readonly.
> +To expand on this, also statically allocated variables which are tagged
> +as __ro_after_init will receive a similar treatment.
> +The difference from constant data is that such variables can be still
> +altered freely during the kernel init phase.
> +
> +However, such solution does not address those variables which could be
> +treated essentially as read-only, but whose size is not known at compile
> +time or cannot be fully initialized during the init phase.
> +
> +
> +Design
> +------
> +
> +pmalloc builds on top of genalloc, using the same concept of memory pools
> +A pool is a handle to a group of chunks of memory of various sizes.
> +When created, a pool is empty. It will be populated by allocating chunks
> +of memory, either when the first memory allocation request is received, or
> +when a pre-allocation is performed.
> +
> +Either way, one or more memory pages will be obtaiend from vmalloc and

obtained

> +registered in the pool as chunk. Subsequent requests will be satisfied by
> +either using any available free space from the current chunks, or by
> +allocating more vmalloc pages, should the current free space not suffice.
> +
> +This is the key point of pmalloc: it groups data that must be protected
> +into a set of pages. The protection is performed through the mmu, which
> +is a prerequisite and has a minimum granularity of one page.
> +
> +If the relevant variables were not grouped, there would be a problem of
> +allowing writes to other variables that might happen to share the same
> +page, but require further alterations over time.
> +
> +A pool is a group of pages that are write protected at the same time.
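
While we're here: since this design section is what most readers will use
to understand the mechanism, it might be worth adding a small sketch of it
in miniature. The snippet below is only my illustration of the description
above, built from the existing genalloc/vmalloc/set_memory_ro primitives;
it is not code from this series, and error handling is trimmed:

  #include <linux/errno.h>
  #include <linux/genalloc.h>
  #include <linux/mm.h>
  #include <linux/set_memory.h>
  #include <linux/vmalloc.h>

  static struct gen_pool *pool;
  static void *chunk;

  static int toy_pool_setup(void)
  {
          /* An empty pool: just a handle, no memory chunks yet. */
          pool = gen_pool_create(3 /* 8-byte min alloc */, NUMA_NO_NODE);
          if (!pool)
                  return -ENOMEM;

          /* First request: obtain pages from vmalloc, register as a chunk. */
          chunk = vmalloc(PAGE_SIZE);
          if (!chunk)
                  return -ENOMEM;
          return gen_pool_add(pool, (unsigned long)chunk, PAGE_SIZE,
                              NUMA_NO_NODE);
  }

  static void toy_pool_protect(void)
  {
          /* The mmu protects whole pages, hence the grouping into pools. */
          set_memory_ro((unsigned long)chunk, 1);
  }

That is, genalloc provides the tight packing, vmalloc provides whole
pages, and set_memory_ro() provides the page-granularity protection.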
> +Ideally, they have some high level correlation (ex: they belong to the
> +same module), which justifies write protecting them all together.
> +
> +To keep it to a minimum, locking is left to the user of the API, in
> +those cases where it's not strictly needed.
> +Ideally, no further locking is required, since each module can have own
> +pool (or pools), which should, for example, avoid the need for cross
> +module or cross thread synchronization about write protecting a pool.
> +
> +The overhead of creating an additional pool is minimal: a handful of bytes
> +from kmalloc space for the metadata and then what is left unused from the
> +page(s) registered as chunks.
> +
> +Compared to plain use of vmalloc, genalloc has the advantage of tightly
> +packing the allocations, reducing the number of pages used and therefore
> +the pressure on the TLB. The slight overhead in execution time of the
> +allocation should be mostly irrelevant, because pmalloc memory is not
> +meant to be allocated/freed in tight loops. Rather it ought to be taken
> +in use, initialized and write protected. Possibly destroyed.
> +
> +Considering that not much data is supposed to be dynamically allocated
> +and then marked as read-only, it shouldn't be an issue that the address
> +range for pmalloc is limited, on 32-bit systemd.
> +
> +Regarding SMP systems, the allocations are expected to happen mostly
> +during an initial transient, after which there should be no more need to
> +perform cross-processor synchronizations of page tables.
> +
> +
> +Use
> +---
> +
> +The typical sequence, when using pmalloc, is:
> +
> +1. create a pool
> +2. [optional] pre-allocate some memory in the pool
> +3. issue one or more allocation requests to the pool
> +4. initialize the memory obtained
> +   - iterate over points 3 & 4 as needed -
> +5. write protect the pool
> +6. use in read-only mode the handlers obtained throguh the allocations

through

> +7. [optional] destroy the pool
> +
> +
> +In a scenario where, for example due to some error, part or all of the
> +allocations performed at point 3 must be reverted, it is possible to free
> +them, as long as point 5 has not been executed, and the pool is still
> +modifiable. Such freed memory can be re-used.
> +Performing a free operation on a write-protected pool will, instead,
> +simply release the corresponding memory from the accounting, but it will
> +be still impossible to alter its content.
>
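
Apart from those, looks good to me. For the "Use" section, a compressed
version of steps 1-7 in code form might make a nice addition. Roughly
like the sketch below; note the function names, signatures and header
here are my guesses at the API from this series, not taken from this
patch, so please substitute the real ones:

  #include <linux/errno.h>
  #include <linux/init.h>
  #include <linux/pmalloc.h>      /* assumed header name from this series */

  /* Hypothetical read-mostly data we want to protect. */
  struct fw_config {
          int max_items;
  };

  static struct gen_pool *pool;
  static struct fw_config *cfg;

  static int __init fw_init(void)
  {
          pool = pmalloc_create_pool("fw_cfg", 0);        /* 1. create */
          if (!pool)
                  return -ENOMEM;

          cfg = pmalloc(pool, sizeof(*cfg), GFP_KERNEL);  /* 3. allocate */
          if (!cfg) {
                  /* Pool not yet protected, so teardown is still allowed. */
                  pmalloc_destroy_pool(pool);
                  return -ENOMEM;
          }

          cfg->max_items = 16;                            /* 4. initialize */

          pmalloc_protect_pool(pool);     /* 5. read-only from here on */
          return 0;
  }

  /* 6. readers simply dereference cfg; a write would now fault. */

  static void __exit fw_exit(void)
  {
          pmalloc_destroy_pool(pool);     /* 7. release the pages */
  }

The error path also matches your last paragraph: before point 5 the pool
is still modifiable, so allocations can be reverted and the pool torn down.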