Date: Thu, 17 Jun 2021 19:40:49 +1000
From: Nicholas Piggin
Subject: Re: [PATCH] mm/vmalloc: unbreak kasan vmalloc support
To: akpm@linux-foundation.org, Daniel Axtens, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Andrey Konovalov, David Gow, Dmitry Vyukov, Uladzislau Rezki
References: <20210617081330.98629-1-dja@axtens.net>
In-Reply-To: <20210617081330.98629-1-dja@axtens.net>
Message-Id: <1623922742.sam09kpmhp.astroid@bobo.none>
X-Mailing-List: linux-kernel@vger.kernel.org

Excerpts from Daniel Axtens's message of June 17, 2021 6:13 pm:
> In commit 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings"),
> __vmalloc_node_range was changed such that __get_vm_area_node was no
> longer called with the requested/real size of the vmalloc allocation,
> but rather with a rounded-up size.
>
> This means that __get_vm_area_node called kasan_unpoison_vmalloc() with
> a rounded-up size rather than the real size. This led to it allowing
> access to too much memory and so missing vmalloc OOBs and failing the
> kasan kunit tests.
>
> Pass the real size and the desired shift into __get_vm_area_node. This
> allows it to round up the size for the underlying allocators while
> still unpoisoning the correct quantity of shadow memory.
>
> Adjust the other call-sites to pass in PAGE_SHIFT for the shift value.
>
> Cc: Nicholas Piggin
> Cc: David Gow
> Cc: Dmitry Vyukov
> Cc: Andrey Konovalov
> Cc: Uladzislau Rezki (Sony)
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=213335
> Fixes: 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings")

Thanks Daniel, good debugging.
Reviewed-by: Nicholas Piggin

> Signed-off-by: Daniel Axtens
> ---
>  mm/vmalloc.c | 24 ++++++++++++++----------
>  1 file changed, 14 insertions(+), 10 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index aaad569e8963..3471cbeb083c 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2362,15 +2362,16 @@ static void clear_vm_uninitialized_flag(struct vm_struct *vm)
>  }
>
>  static struct vm_struct *__get_vm_area_node(unsigned long size,
> -		unsigned long align, unsigned long flags, unsigned long start,
> -		unsigned long end, int node, gfp_t gfp_mask, const void *caller)
> +		unsigned long align, unsigned long shift, unsigned long flags,
> +		unsigned long start, unsigned long end, int node,
> +		gfp_t gfp_mask, const void *caller)
>  {
>  	struct vmap_area *va;
>  	struct vm_struct *area;
>  	unsigned long requested_size = size;
>
>  	BUG_ON(in_interrupt());
> -	size = PAGE_ALIGN(size);
> +	size = ALIGN(size, 1ul << shift);
>  	if (unlikely(!size))
>  		return NULL;
>
> @@ -2402,8 +2403,8 @@ struct vm_struct *__get_vm_area_caller(unsigned long size, unsigned long flags,
>  					unsigned long start, unsigned long end,
>  					const void *caller)
>  {
> -	return __get_vm_area_node(size, 1, flags, start, end, NUMA_NO_NODE,
> -				  GFP_KERNEL, caller);
> +	return __get_vm_area_node(size, 1, PAGE_SHIFT, flags, start, end,
> +				  NUMA_NO_NODE, GFP_KERNEL, caller);
>  }
>
>  /**
> @@ -2419,7 +2420,8 @@ struct vm_struct *__get_vm_area_caller(unsigned long size, unsigned long flags,
>   */
>  struct vm_struct *get_vm_area(unsigned long size, unsigned long flags)
>  {
> -	return __get_vm_area_node(size, 1, flags, VMALLOC_START, VMALLOC_END,
> +	return __get_vm_area_node(size, 1, PAGE_SHIFT, flags,
> +				  VMALLOC_START, VMALLOC_END,
>  				  NUMA_NO_NODE, GFP_KERNEL,
>  				  __builtin_return_address(0));
>  }
> @@ -2427,7 +2429,8 @@ struct vm_struct *get_vm_area(unsigned long size, unsigned long flags)
>  struct vm_struct *get_vm_area_caller(unsigned long size, unsigned long flags,
>  				const void *caller)
>  {
> -	return __get_vm_area_node(size, 1, flags, VMALLOC_START, VMALLOC_END,
> +	return __get_vm_area_node(size, 1, PAGE_SHIFT, flags,
> +				  VMALLOC_START, VMALLOC_END,
>  				  NUMA_NO_NODE, GFP_KERNEL, caller);
>  }
>
> @@ -2949,9 +2952,9 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
>  	}
>
>  again:
> -	size = PAGE_ALIGN(size);
> -	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
> -				vm_flags, start, end, node, gfp_mask, caller);
> +	area = __get_vm_area_node(real_size, align, shift, VM_ALLOC |
> +				VM_UNINITIALIZED | vm_flags, start, end, node,
> +				gfp_mask, caller);
>  	if (!area) {
>  		warn_alloc(gfp_mask, NULL,
>  			"vmalloc error: size %lu, vm_struct allocation failed",
> @@ -2970,6 +2973,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
>  	 */
>  	clear_vm_uninitialized_flag(area);
>
> +	size = PAGE_ALIGN(size);
>  	kmemleak_vmalloc(area, size, gfp_mask);
>
>  	return addr;
> --
> 2.30.2
>
>
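
[Editor's note: the following is a minimal userspace sketch, not part of the patch or of anyone's email, illustrating the size-rounding arithmetic the commit message describes. The ALIGN macro mirrors the kernel's; the PAGE_SHIFT/PMD_SHIFT values and the 5000-byte request are assumptions chosen purely for illustration, roughly matching a typical x86-64 configuration.]

/*
 * Sketch only: how rounding the size up before calling
 * __get_vm_area_node() changes how much shadow memory KASAN would
 * unpoison for a hugepage-backed vmalloc allocation.
 */
#include <stdio.h>

#define ALIGN(x, a)	(((x) + ((a) - 1)) & ~((a) - 1))
#define PAGE_SHIFT	12UL	/* 4KB pages (assumed) */
#define PMD_SHIFT	21UL	/* 2MB hugepages (assumed) */

int main(void)
{
	unsigned long requested = 5000;	/* hypothetical vmalloc request */

	/* Before the fix: the caller rounded up first, so the callee
	 * only ever saw this value and unpoisoned shadow for all of it. */
	unsigned long rounded = ALIGN(requested, 1UL << PMD_SHIFT);

	printf("requested size:        %lu bytes\n", requested);
	printf("rounded-up size:       %lu bytes\n", rounded);
	printf("excess unpoisoned:     %lu bytes\n", rounded - requested);

	/* After the fix, __get_vm_area_node() gets the real size plus the
	 * shift: it aligns internally for the allocator but unpoisons only
	 * the requested bytes, so OOB accesses in the rounded-up tail are
	 * caught by KASAN again. */
	return 0;
}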