From: Baoquan He <bhe@redhat.com>
To: akpm@linux-foundation.org, willy@infradead.org
Cc: linux-kernel@vger.kernel.org, kexec@lists.infradead.org,
    yangtiezhu@loongson.cn, amit.kachhap@arm.com, hch@lst.de,
    linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, bhe@redhat.com
Subject: [PATCH v5 2/3] vmcore: Convert __read_vmcore to use an iov_iter
Date: Sat, 2 Apr 2022 12:30:07 +0800
Message-Id: <20220402043008.458679-3-bhe@redhat.com>
In-Reply-To: <20220402043008.458679-1-bhe@redhat.com>
References: <20220402043008.458679-1-bhe@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

This gets rid of copy_to() and lets us use proc_read_iter() instead of
proc_read().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Baoquan He <bhe@redhat.com>
---
 fs/proc/vmcore.c | 82 ++++++++++++++++++------------------------------
 1 file changed, 30 insertions(+), 52 deletions(-)

diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 54dda2e19ed1..4a721865b5cd 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -249,22 +249,8 @@ ssize_t __weak copy_oldmem_page_encrypted(struct iov_iter *iter,
 	return copy_oldmem_page(iter, pfn, csize, offset);
 }
 
-/*
- * Copy to either kernel or user space
- */
-static int copy_to(void *target, void *src, size_t size, int userbuf)
-{
-	if (userbuf) {
-		if (copy_to_user((char __user *) target, src, size))
-			return -EFAULT;
-	} else {
-		memcpy(target, src, size);
-	}
-	return 0;
-}
-
 #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
-static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
+static int vmcoredd_copy_dumps(struct iov_iter *iter, u64 start, size_t size)
 {
 	struct vmcoredd_node *dump;
 	u64 offset = 0;
@@ -277,14 +263,13 @@ static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
 		if (start < offset + dump->size) {
 			tsz = min(offset + (u64)dump->size - start, (u64)size);
 			buf = dump->buf + start - offset;
-			if (copy_to(dst, buf, tsz, userbuf)) {
+			if (copy_to_iter(buf, tsz, iter) < tsz) {
 				ret = -EFAULT;
 				goto out_unlock;
 			}
 
 			size -= tsz;
 			start += tsz;
-			dst += tsz;
 
 			/* Leave now if buffer filled already */
 			if (!size)
@@ -340,33 +325,28 @@ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst,
 /* Read from the ELF header and then the crash dump. On error, negative value is
  * returned otherwise number of bytes read are returned.
  */
-static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
-			     int userbuf)
+static ssize_t __read_vmcore(struct iov_iter *iter, loff_t *fpos)
 {
 	ssize_t acc = 0, tmp;
 	size_t tsz;
 	u64 start;
 	struct vmcore *m = NULL;
 
-	if (buflen == 0 || *fpos >= vmcore_size)
+	if (!iov_iter_count(iter) || *fpos >= vmcore_size)
 		return 0;
 
-	/* trim buflen to not go beyond EOF */
-	if (buflen > vmcore_size - *fpos)
-		buflen = vmcore_size - *fpos;
+	iov_iter_truncate(iter, vmcore_size - *fpos);
 
 	/* Read ELF core header */
 	if (*fpos < elfcorebuf_sz) {
-		tsz = min(elfcorebuf_sz - (size_t)*fpos, buflen);
-		if (copy_to(buffer, elfcorebuf + *fpos, tsz, userbuf))
+		tsz = min(elfcorebuf_sz - (size_t)*fpos, iov_iter_count(iter));
+		if (copy_to_iter(elfcorebuf + *fpos, tsz, iter) < tsz)
 			return -EFAULT;
-		buflen -= tsz;
 		*fpos += tsz;
-		buffer += tsz;
 		acc += tsz;
 
 		/* leave now if filled buffer already */
-		if (buflen == 0)
+		if (!iov_iter_count(iter))
 			return acc;
 	}
 
@@ -387,35 +367,32 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
 		/* Read device dumps */
 		if (*fpos < elfcorebuf_sz + vmcoredd_orig_sz) {
 			tsz = min(elfcorebuf_sz + vmcoredd_orig_sz -
-				  (size_t)*fpos, buflen);
+				  (size_t)*fpos, iov_iter_count(iter));
 			start = *fpos - elfcorebuf_sz;
-			if (vmcoredd_copy_dumps(buffer, start, tsz, userbuf))
+			if (vmcoredd_copy_dumps(iter, start, tsz))
 				return -EFAULT;
 
-			buflen -= tsz;
 			*fpos += tsz;
-			buffer += tsz;
 			acc += tsz;
 
 			/* leave now if filled buffer already */
-			if (!buflen)
+			if (!iov_iter_count(iter))
 				return acc;
 		}
 #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
 
 		/* Read remaining elf notes */
-		tsz = min(elfcorebuf_sz + elfnotes_sz - (size_t)*fpos, buflen);
+		tsz = min(elfcorebuf_sz + elfnotes_sz - (size_t)*fpos,
+			  iov_iter_count(iter));
 		kaddr = elfnotes_buf + *fpos - elfcorebuf_sz - vmcoredd_orig_sz;
-		if (copy_to(buffer, kaddr, tsz, userbuf))
+		if (copy_to_iter(kaddr, tsz, iter) < tsz)
 			return -EFAULT;
 
-		buflen -= tsz;
 		*fpos += tsz;
-		buffer += tsz;
 		acc += tsz;
 
 		/* leave now if filled buffer already */
-		if (buflen == 0)
+		if (!iov_iter_count(iter))
 			return acc;
 	}
 
@@ -423,19 +400,17 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
 		if (*fpos < m->offset + m->size) {
 			tsz = (size_t)min_t(unsigned long long,
 					    m->offset + m->size - *fpos,
-					    buflen);
+					    iov_iter_count(iter));
 			start = m->paddr + *fpos - m->offset;
-			tmp = read_from_oldmem(buffer, tsz, &start,
-					userbuf, cc_platform_has(CC_ATTR_MEM_ENCRYPT));
+			tmp = read_from_oldmem_iter(iter, tsz, &start,
+					cc_platform_has(CC_ATTR_MEM_ENCRYPT));
 			if (tmp < 0)
 				return tmp;
-			buflen -= tsz;
 			*fpos += tsz;
-			buffer += tsz;
 			acc += tsz;
 
 			/* leave now if filled buffer already */
-			if (buflen == 0)
+			if (!iov_iter_count(iter))
 				return acc;
 		}
 	}
@@ -443,15 +418,14 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
 	return acc;
 }
 
-static ssize_t read_vmcore(struct file *file, char __user *buffer,
-			   size_t buflen, loff_t *fpos)
+static ssize_t read_vmcore(struct kiocb *iocb, struct iov_iter *iter)
 {
-	return __read_vmcore((__force char *) buffer, buflen, fpos, 1);
+	return __read_vmcore(iter, &iocb->ki_pos);
 }
 
 /*
  * The vmcore fault handler uses the page cache and fills data using the
- * standard __vmcore_read() function.
+ * standard __read_vmcore() function.
  *
  * On s390 the fault handler is used for memory regions that can't be mapped
  * directly with remap_pfn_range().
@@ -461,9 +435,10 @@ static vm_fault_t mmap_vmcore_fault(struct vm_fault *vmf)
 #ifdef CONFIG_S390
 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	pgoff_t index = vmf->pgoff;
+	struct iov_iter iter;
+	struct kvec kvec;
 	struct page *page;
 	loff_t offset;
-	char *buf;
 	int rc;
 
 	page = find_or_create_page(mapping, index, GFP_KERNEL);
@@ -471,8 +446,11 @@ static vm_fault_t mmap_vmcore_fault(struct vm_fault *vmf)
 		return VM_FAULT_OOM;
 	if (!PageUptodate(page)) {
 		offset = (loff_t) index << PAGE_SHIFT;
-		buf = __va((page_to_pfn(page) << PAGE_SHIFT));
-		rc = __read_vmcore(buf, PAGE_SIZE, &offset, 0);
+		kvec.iov_base = page_address(page);
+		kvec.iov_len = PAGE_SIZE;
+		iov_iter_kvec(&iter, READ, &kvec, 1, PAGE_SIZE);
+
+		rc = __read_vmcore(&iter, &offset);
 		if (rc < 0) {
 			unlock_page(page);
 			put_page(page);
@@ -722,7 +700,7 @@ static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
 
 static const struct proc_ops vmcore_proc_ops = {
 	.proc_open	= open_vmcore,
-	.proc_read	= read_vmcore,
+	.proc_read_iter	= read_vmcore,
 	.proc_lseek	= default_llseek,
 	.proc_mmap	= mmap_vmcore,
 };
-- 
2.34.1
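
A note on the core simplification: copy_to_iter() can subsume the old
copy_to() helper because the iov_iter itself records whether its segments
point at user memory (ITER_IOVEC, as the VFS builds for read(2)/readv(2))
or kernel memory (ITER_KVEC), so one copy call serves both destinations.
A minimal kernel-style sketch of that pattern; demo_copy() is a made-up
name for illustration, not anything in the patch:

#include <linux/uio.h>

/* Copy to either kernel or user space, decided by the iterator type. */
static int demo_copy(struct iov_iter *iter, void *src, size_t len)
{
	/*
	 * copy_to_iter() returns the number of bytes actually copied;
	 * a short copy is reported like a copy_to_user() failure, which
	 * is exactly the "< tsz" check the patch uses above.
	 */
	if (copy_to_iter(src, len, iter) < len)
		return -EFAULT;
	return 0;
}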
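
Likewise, in-kernel callers that used to pass userbuf=0 now describe their
kernel buffer with a kvec, as the s390 fault-handler hunk does. A hedged
sketch of that construction; demo_fill_page() is an invented helper, but
the iov_iter_kvec() call mirrors the one added in mmap_vmcore_fault():

#include <linux/mm.h>
#include <linux/uio.h>

static int demo_fill_page(struct page *page, loff_t offset)
{
	struct iov_iter iter;
	struct kvec kvec;

	kvec.iov_base = page_address(page);	/* kernel mapping of the page */
	kvec.iov_len = PAGE_SIZE;
	/* READ: this iterator is the destination of a read operation */
	iov_iter_kvec(&iter, READ, &kvec, 1, PAGE_SIZE);

	return __read_vmcore(&iter, &offset);	/* same path read(2) takes */
}

With both the user-backed and kvec-backed callers funnelled through the
same __read_vmcore(), the userbuf flag and the manual buffer/length
bookkeeping (buflen -= tsz, buffer += tsz) drop out of the read loop,
which is where the 52 deleted lines in the diffstat come from.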