Subject: Re: [RFC PATCH v19 0/8] mm: security: ro protection for dynamic data
To: Matthew Wilcox
From: Igor Stoppa
Date: Mon, 19 Mar 2018 20:04:35 +0200
Message-ID: <242fd8a2-2b80-3aa3-4b11-27f49c021a1d@huawei.com>
In-Reply-To: <20180314173343.GJ29631@bombadil.infradead.org>
References: <20180313214554.28521-1-igor.stoppa@huawei.com> <20180314115653.GD29631@bombadil.infradead.org> <8623382b-cdbe-8862-8c2f-fa5bc6a1213a@huawei.com> <20180314130418.GG29631@bombadil.infradead.org> <9623b0d1-4ace-b3e7-b861-edba03b8a8cd@huawei.com> <20180314173343.GJ29631@bombadil.infradead.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 14/03/18 19:33, Matthew Wilcox wrote:
> I think an implementation of pmalloc which used a page_frag-style
> allocator would be larger than 100 lines, but I don't think it would
> have to be significantly larger than that.

I have some doubts about the best way to implement it using vmalloced
memory.

1. Since I can allocate an arbitrary number of pages, allocating an
amount of memory rounded up to a multiple of PAGE_SIZE should be
enough. But maybe I could do better than that:

  a) support pre-allocation of x pages
  b) define, as a pool parameter, the minimum number of pages to
     allocate on each refill
  c) both a and b

----

2. The page_frag flavor from page_alloc relies on page->_refcount, but
neither vmap_area nor vm_struct seems to have anything like that. (My
reasoning is that I should do the accounting not at page level, but
based on the virtual area I get when I allocate new memory.)

What would be the best way to do refcounting for the area?

  a) use the page->_refcount from the first page that belongs to the area
  b) add a _refcount to either vm_struct or vmap_area (I am not really
     sure why these two structures exist as separate entities, rather
     than a single one - cache optimization?)

----

3. I will have to add a list of chunks (in genalloc lingo; or areas, if
we refer to the new implementation), because I will still need to
iterate over all the memory that belongs to a pool, either to
write-protect it or to destroy the pool. I have two options:

  a) handle the chunks within the pmalloc pool
  b) create an intermediate type of pool (vfrag_pool?) and then include
     it in the pmalloc pool structure

I'd lean toward option a, but I thought I might as well ask for advice
before I implement the less desirable option (whichever that might be).
-- 
thanks, igor