From: Arnd Bergmann
To: Andrew Morton
Cc: Arnd Bergmann, Catalin Marinas, Will Deacon, Thomas Bogendoerfer,
Bottomley" , Helge Deller , Michael Ellerman , Benjamin Herrenschmidt , Paul Mackerras , Heiko Carstens , Vasily Gorbik , Christian Borntraeger , "David S. Miller" , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H. Peter Anvin" , Al Viro , "Eric W. Biederman" , Christoph Hellwig , Feng Tang , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v5 2/6] kexec: avoid compat_alloc_user_space Date: Tue, 27 Jul 2021 16:48:55 +0200 Message-Id: <20210727144859.4150043-3-arnd@kernel.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210727144859.4150043-1-arnd@kernel.org> References: <20210727144859.4150043-1-arnd@kernel.org> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: Arnd Bergmann kimage_alloc_init() expects a __user pointer, so compat_sys_kexec_load() uses compat_alloc_user_space() to convert the layout and put it back onto the user space caller stack. Moving the user space access into the syscall handler directly actually makes the code simpler, as the conversion for compat mode can now be done on kernel memory. Co-developed-by: Eric Biederman Co-developed-by: Christoph Hellwig Link: https://lore.kernel.org/lkml/YPbtsU4GX6PL7%2F42@infradead.org/ Link: https://lore.kernel.org/lkml/m1y2cbzmnw.fsf@fess.ebiederm.org/ Signed-off-by: Arnd Bergmann --- kernel/kexec.c | 61 +++++++++++++++++++++----------------------------- 1 file changed, 25 insertions(+), 36 deletions(-) diff --git a/kernel/kexec.c b/kernel/kexec.c index 9c7aef8f4bb6..b5e40f069768 100644 --- a/kernel/kexec.c +++ b/kernel/kexec.c @@ -19,26 +19,9 @@ #include "kexec_internal.h" -static int copy_user_segment_list(struct kimage *image, - unsigned long nr_segments, - struct kexec_segment __user *segments) -{ - int ret; - size_t segment_bytes; - - /* Read in the segments */ - image->nr_segments = nr_segments; - segment_bytes = nr_segments * sizeof(*segments); - ret = copy_from_user(image->segment, segments, segment_bytes); - if (ret) - ret = -EFAULT; - - return ret; -} - static int kimage_alloc_init(struct kimage **rimage, unsigned long entry, unsigned long nr_segments, - struct kexec_segment __user *segments, + struct kexec_segment *segments, unsigned long flags) { int ret; @@ -58,10 +41,8 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry, return -ENOMEM; image->start = entry; - - ret = copy_user_segment_list(image, nr_segments, segments); - if (ret) - goto out_free_image; + image->nr_segments = nr_segments; + memcpy(image->segment, segments, nr_segments * sizeof(*segments)); if (kexec_on_panic) { /* Enable special crash kernel control page alloc policy. 
*/ @@ -104,7 +85,7 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry, } static int do_kexec_load(unsigned long entry, unsigned long nr_segments, - struct kexec_segment __user *segments, unsigned long flags) + struct kexec_segment *segments, unsigned long flags) { struct kimage **dest_image, *image; unsigned long i; @@ -250,7 +231,8 @@ static inline int kexec_load_check(unsigned long nr_segments, SYSCALL_DEFINE4(kexec_load, unsigned long, entry, unsigned long, nr_segments, struct kexec_segment __user *, segments, unsigned long, flags) { - int result; + struct kexec_segment *ksegments; + unsigned long result; result = kexec_load_check(nr_segments, flags); if (result) @@ -261,7 +243,12 @@ SYSCALL_DEFINE4(kexec_load, unsigned long, entry, unsigned long, nr_segments, ((flags & KEXEC_ARCH_MASK) != KEXEC_ARCH_DEFAULT)) return -EINVAL; - result = do_kexec_load(entry, nr_segments, segments, flags); + ksegments = memdup_user(segments, nr_segments * sizeof(ksegments[0])); + if (IS_ERR(ksegments)) + return PTR_ERR(ksegments); + + result = do_kexec_load(entry, nr_segments, ksegments, flags); + kfree(ksegments); return result; } @@ -273,7 +260,7 @@ COMPAT_SYSCALL_DEFINE4(kexec_load, compat_ulong_t, entry, compat_ulong_t, flags) { struct compat_kexec_segment in; - struct kexec_segment out, __user *ksegments; + struct kexec_segment *ksegments; unsigned long i, result; result = kexec_load_check(nr_segments, flags); @@ -286,24 +273,26 @@ COMPAT_SYSCALL_DEFINE4(kexec_load, compat_ulong_t, entry, if ((flags & KEXEC_ARCH_MASK) == KEXEC_ARCH_DEFAULT) return -EINVAL; - ksegments = compat_alloc_user_space(nr_segments * sizeof(out)); + ksegments = kmalloc_array(nr_segments, sizeof(ksegments[0]), + GFP_KERNEL); + if (!ksegments) + return -ENOMEM; + for (i = 0; i < nr_segments; i++) { result = copy_from_user(&in, &segments[i], sizeof(in)); if (result) - return -EFAULT; + goto fail; - out.buf = compat_ptr(in.buf); - out.bufsz = in.bufsz; - out.mem = in.mem; - out.memsz = in.memsz; - - result = copy_to_user(&ksegments[i], &out, sizeof(out)); - if (result) - return -EFAULT; + ksegments[i].buf = compat_ptr(in.buf); + ksegments[i].bufsz = in.bufsz; + ksegments[i].mem = in.mem; + ksegments[i].memsz = in.memsz; } result = do_kexec_load(entry, nr_segments, ksegments, flags); +fail: + kfree(ksegments); return result; } #endif -- 2.29.2
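
For readers following the change without the full file at hand, below is a
minimal, self-contained sketch of the "convert in kernel memory" pattern the
compat handler adopts. The helper name compat_copy_segments() is hypothetical
and not part of the patch; it assumes the caller has already bounded
nr_segments (as kexec_load_check() does) and that the caller kfree()s the
returned array once do_kexec_load() is finished with it.

/*
 * Illustrative sketch only -- not part of the patch. It condenses what the
 * compat path does after this change: allocate a kernel array, copy each
 * 32-bit struct compat_kexec_segment in from user space, and widen it in
 * kernel memory, instead of writing a converted copy back to the user
 * stack via compat_alloc_user_space().
 */
#include <linux/compat.h>
#include <linux/err.h>
#include <linux/kexec.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

static struct kexec_segment *
compat_copy_segments(const struct compat_kexec_segment __user *usegments,
		     unsigned long nr_segments)
{
	struct kexec_segment *ksegments;
	struct compat_kexec_segment in;
	unsigned long i;

	/*
	 * Assumes nr_segments was already limited by the caller, so the
	 * multiplication inside kmalloc_array() cannot overflow in practice;
	 * kmalloc_array() also checks for overflow itself.
	 */
	ksegments = kmalloc_array(nr_segments, sizeof(*ksegments), GFP_KERNEL);
	if (!ksegments)
		return ERR_PTR(-ENOMEM);

	for (i = 0; i < nr_segments; i++) {
		if (copy_from_user(&in, &usegments[i], sizeof(in))) {
			kfree(ksegments);
			return ERR_PTR(-EFAULT);
		}

		/* Widen the compat (32-bit) fields into the native layout. */
		ksegments[i].buf   = compat_ptr(in.buf);
		ksegments[i].bufsz = in.bufsz;
		ksegments[i].mem   = in.mem;
		ksegments[i].memsz = in.memsz;
	}

	return ksegments;	/* caller frees after do_kexec_load() */
}

The native syscall needs no per-field conversion, which is why it can use
memdup_user() directly, as the patch does.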