Subject: Re: [PATCH v10 1/5] kasan: support backing vmalloc space with real shadow memory
To: Daniel Axtens, kasan-dev@googlegroups.com, linux-mm@kvack.org,
 x86@kernel.org, glider@google.com, luto@kernel.org,
 linux-kernel@vger.kernel.org, mark.rutland@arm.com, dvyukov@google.com,
 christophe.leroy@c-s.fr
Cc: linuxppc-dev@lists.ozlabs.org, gor@linux.ibm.com, Andrew Morton
From: Andrey Ryabinin
Date: Tue, 29 Oct 2019 19:42:57 +0300
In-Reply-To: <20191029042059.28541-2-dja@axtens.net>
References: <20191029042059.28541-1-dja@axtens.net> <20191029042059.28541-2-dja@axtens.net>

On 10/29/19 7:20 AM, Daniel Axtens
wrote:
> Hook into vmalloc and vmap, and dynamically allocate real shadow
> memory to back the mappings.
>
> Most mappings in vmalloc space are small, requiring less than a full
> page of shadow space. Allocating a full shadow page per mapping would
> therefore be wasteful. Furthermore, to ensure that different mappings
> use different shadow pages, mappings would have to be aligned to
> KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
>
> Instead, share backing space across multiple mappings. Allocate a
> backing page when a mapping in vmalloc space uses a particular page of
> the shadow region. This page can be shared by other vmalloc mappings
> later on.
>
> We hook in to the vmap infrastructure to lazily clean up unused shadow
> memory.
>
> To avoid the difficulties around swapping mappings around, this code
> expects that the part of the shadow region that covers the vmalloc
> space will not be covered by the early shadow page, but will be left
> unmapped. This will require changes in arch-specific code.
>
> This allows KASAN with VMAP_STACK, and may be helpful for architectures
> that do not have a separate module space (e.g. powerpc64, which I am
> currently working on). It also allows relaxing the module alignment
> back to PAGE_SIZE.
>
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
> Acked-by: Vasily Gorbik
> Co-developed-by: Mark Rutland
> Signed-off-by: Mark Rutland [shadow rework]
> Signed-off-by: Daniel Axtens

Small nit below, otherwise looks fine:

Reviewed-by: Andrey Ryabinin

>  static __always_inline bool
> @@ -1196,8 +1201,8 @@ static void free_vmap_area(struct vmap_area *va)
>  	 * Insert/Merge it back to the free tree/list.
>  	 */
>  	spin_lock(&free_vmap_area_lock);
> -	merge_or_add_vmap_area(va,
> -		&free_vmap_area_root, &free_vmap_area_list);
> +	(void)merge_or_add_vmap_area(va, &free_vmap_area_root,
> +		&free_vmap_area_list);
>  	spin_unlock(&free_vmap_area_lock);
>  }

...
> @@ -3391,8 +3428,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
>  	 * and when pcpu_get_vm_areas() is success.
>  	 */
>  	while (area--) {
> -		merge_or_add_vmap_area(vas[area],
> -			&free_vmap_area_root, &free_vmap_area_list);
> +		(void)merge_or_add_vmap_area(vas[area], &free_vmap_area_root,
> +			&free_vmap_area_list);

I don't think these (void) casts are necessary.

> 		vas[area] = NULL;
> 	}