From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Michal Hocko, Konstantin Khlebnikov, Kyungtae Kim, Vlastimil Babka, Balbir Singh, Mel Gorman, Pavel Tatashin, Oscar Salvador, Mike Rapoport, Aaron Lu, Joonsoo Kim, Byoungyoung Lee, "Dae R. Jeong", Andrew Morton, Linus Torvalds, Sasha Levin
Subject: [PATCH 4.19 098/110] mm, page_alloc: check for max order in hot path
Date: Thu, 29 Nov 2018 15:13:09 +0100
Message-Id: <20181129135925.208128460@linuxfoundation.org>
In-Reply-To: <20181129135921.231283053@linuxfoundation.org>
References: <20181129135921.231283053@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

4.19-stable review patch.  If anyone has any objections, please let me know.

------------------

[ Upstream commit c63ae43ba53bc432b414fd73dd5f4b01fcb1ab43 ]

Konstantin has noticed that kvmalloc might trigger the following warning:

WARNING: CPU: 0 PID: 6676 at mm/vmstat.c:986 __fragmentation_index+0x54/0x60
[...]
Call Trace:
 fragmentation_index+0x76/0x90
 compaction_suitable+0x4f/0xf0
 shrink_node+0x295/0x310
 node_reclaim+0x205/0x250
 get_page_from_freelist+0x649/0xad0
 __alloc_pages_nodemask+0x12a/0x2a0
 kmalloc_large_node+0x47/0x90
 __kmalloc_node+0x22b/0x2e0
 kvmalloc_node+0x3e/0x70
 xt_alloc_table_info+0x3a/0x80 [x_tables]
 do_ip6t_set_ctl+0xcd/0x1c0 [ip6_tables]
 nf_setsockopt+0x44/0x60
 SyS_setsockopt+0x6f/0xc0
 do_syscall_64+0x67/0x120
 entry_SYSCALL_64_after_hwframe+0x3d/0xa2

The problem is that we only check for an out-of-bounds order in the slow
path, while node reclaim might already happen from the fast path.  This is
fixable by making sure that kvmalloc never uses kmalloc for requests larger
than KMALLOC_MAX_SIZE, but it also shows that the code is rather fragile.
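The placement issue can be modelled outside the kernel. Below is a minimal
userspace sketch (hypothetical function names; MAX_ORDER pinned at 11, a
common default, where the real value is arch-dependent) of why a sanity
check that lives only in the slow path cannot protect a fast path that
already consumes `order`:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_ORDER 11	/* common default; arch-dependent in the kernel */

/*
 * Old layout: reclaim runs from the fast path, but the order sanity
 * check lived only in the slow path -- so a bogus order reaches
 * reclaim before anything validates it.
 */
static bool old_layout_reclaim_runs(unsigned int order)
{
	(void)order;		/* nothing checks order here */
	return true;		/* reclaim is always reached */
}

/*
 * New layout: validate order once at the common entry point, so
 * neither the fast nor the slow path ever sees an insane value.
 */
static bool new_layout_reclaim_runs(unsigned int order)
{
	if (order >= MAX_ORDER)
		return false;	/* bail out, as the patched code does */
	return true;
}
```

With the old layout an order of 51 still reaches reclaim; with the check at
the common entry it is rejected before either path runs.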
A recent UBSAN report underlines the same fragility:

UBSAN: Undefined behaviour in mm/page_alloc.c:3117:19
shift exponent 51 is too large for 32-bit type 'int'
CPU: 0 PID: 6520 Comm: syz-executor1 Not tainted 4.19.0-rc2 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0xd2/0x148 lib/dump_stack.c:113
 ubsan_epilogue+0x12/0x94 lib/ubsan.c:159
 __ubsan_handle_shift_out_of_bounds+0x2b6/0x30b lib/ubsan.c:425
 __zone_watermark_ok+0x2c7/0x400 mm/page_alloc.c:3117
 zone_watermark_fast mm/page_alloc.c:3216 [inline]
 get_page_from_freelist+0xc49/0x44c0 mm/page_alloc.c:3300
 __alloc_pages_nodemask+0x21e/0x640 mm/page_alloc.c:4370
 alloc_pages_current+0xcc/0x210 mm/mempolicy.c:2093
 alloc_pages include/linux/gfp.h:509 [inline]
 __get_free_pages+0x12/0x60 mm/page_alloc.c:4414
 dma_mem_alloc+0x36/0x50 arch/x86/include/asm/floppy.h:156
 raw_cmd_copyin drivers/block/floppy.c:3159 [inline]
 raw_cmd_ioctl drivers/block/floppy.c:3206 [inline]
 fd_locked_ioctl+0xa00/0x2c10 drivers/block/floppy.c:3544
 fd_ioctl+0x40/0x60 drivers/block/floppy.c:3571
 __blkdev_driver_ioctl block/ioctl.c:303 [inline]
 blkdev_ioctl+0xb3c/0x1a30 block/ioctl.c:601
 block_ioctl+0x105/0x150 fs/block_dev.c:1883
 vfs_ioctl fs/ioctl.c:46 [inline]
 do_vfs_ioctl+0x1c0/0x1150 fs/ioctl.c:687
 ksys_ioctl+0x9e/0xb0 fs/ioctl.c:702
 __do_sys_ioctl fs/ioctl.c:709 [inline]
 __se_sys_ioctl fs/ioctl.c:707 [inline]
 __x64_sys_ioctl+0x7e/0xc0 fs/ioctl.c:707
 do_syscall_64+0xc4/0x510 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe

Note that this is not a kvmalloc path.  It is just that the fast path
really depends on having a sanitized order as well.  Therefore move the
order check to the fast path.
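The shift exponent 51 in the report comes from shifting a 32-bit int by an
unchecked order, which is undefined behaviour in C for any exponent of 31
or more.  A small userspace sketch (hypothetical names, not the kernel's
actual __zone_watermark_ok; MAX_ORDER pinned at the common default of 11)
shows how rejecting the order up front keeps a later shift well-defined:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_ORDER 11	/* common default; arch-dependent in the kernel */

/*
 * Simplified stand-in for a watermark-style computation: shifting a
 * 32-bit int by `order` would be undefined behaviour for order >= 31.
 * Rejecting order >= MAX_ORDER first keeps the shift in range.
 */
static bool watermark_check(unsigned int order)
{
	if (order >= MAX_ORDER)
		return false;		/* request rejected before the shift */

	int nr_pages = 1 << order;	/* well-defined: order <= 10 here */
	return nr_pages > 0;
}
```

An order of 51 (as in the syzkaller report) never reaches the shift; any
sane order still does.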
Link: http://lkml.kernel.org/r/20181113094305.GM15120@dhcp22.suse.cz
Signed-off-by: Michal Hocko
Reported-by: Konstantin Khlebnikov
Reported-by: Kyungtae Kim
Acked-by: Vlastimil Babka
Cc: Balbir Singh
Cc: Mel Gorman
Cc: Pavel Tatashin
Cc: Oscar Salvador
Cc: Mike Rapoport
Cc: Aaron Lu
Cc: Joonsoo Kim
Cc: Byoungyoung Lee
Cc: "Dae R. Jeong"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 mm/page_alloc.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3a4065312938..b721631d78ab 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4055,17 +4055,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	unsigned int cpuset_mems_cookie;
 	int reserve_flags;
 
-	/*
-	 * In the slowpath, we sanity check order to avoid ever trying to
-	 * reclaim >= MAX_ORDER areas which will never succeed. Callers may
-	 * be using allocators in order of preference for an area that is
-	 * too large.
-	 */
-	if (order >= MAX_ORDER) {
-		WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
-		return NULL;
-	}
-
 	/*
 	 * We also sanity check to catch abuse of atomic reserves being used by
 	 * callers that are not in atomic context.
@@ -4359,6 +4348,15 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */
 	struct alloc_context ac = { };
 
+	/*
+	 * There are several places where we assume that the order value is sane
+	 * so bail out early if the request is out of bound.
+	 */
+	if (unlikely(order >= MAX_ORDER)) {
+		WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
+		return NULL;
+	}
+
 	gfp_mask &= gfp_allowed_mask;
 	alloc_mask = gfp_mask;
 	if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
-- 
2.17.1