Date: Thu, 23 Mar 2023 06:44:10 +0000
From: Lorenzo Stoakes
To: Baoquan He
Cc: David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, Andrew Morton, Uladzislau Rezki,
	Matthew Wilcox, Liu Shixin, Jiri Olsa, Jens Axboe, Alexander Viro
Subject: Re: [PATCH v7 4/4] mm: vmalloc: convert vread() to vread_iter()
References: <941f88bc5ab928e6656e1e2593b91bf0f8c81e1b.1679511146.git.lstoakes@gmail.com>
On Thu, Mar 23, 2023 at 10:52:09AM +0800, Baoquan He wrote:
> On 03/22/23 at 06:57pm, Lorenzo Stoakes wrote:
> > Having previously laid the foundation for converting vread() to an iterator
> > function, pull the trigger and do so.
> >
> > This patch attempts to provide minimal refactoring and to reflect the
> > existing logic as best we can, for example we continue to zero portions of
> > memory not read, as before.
> >
> > Overall, there should be no functional difference other than a performance
> > improvement in /proc/kcore access to vmalloc regions.
> >
> > Now we have eliminated the need for a bounce buffer in read_kcore_iter(),
> > we dispense with it, and try to write to user memory optimistically but
> > with faults disabled via copy_page_to_iter_nofault(). We already have
> > preemption disabled by holding a spin lock. We continue faulting in until
> > the operation is complete.
>
> I don't understand the sentences here. In vread_iter(), the actual
> content reading is done in aligned_vread_iter(), otherwise we zero-fill
> the region. In aligned_vread_iter(), we will use vmalloc_to_page() to
> get the mapped page and read it out, otherwise zero-fill. While in this
> patch, fault_in_iov_iter_writeable() faults in memory of the iter one
> time and will bail out if it fails. I am wondering why we continue
> faulting in until the operation is complete, and how that is done.

This is referring to what's happening in kcore.c, not vread_iter(), i.e. the
looped read/fault-in.

The reason we bail out if fault_in_iov_iter_writeable() fails is that this
would indicate an error had occurred.

The whole point is to _optimistically_ try to perform the operation assuming
the pages are faulted in.
Ultimately we copy to user memory via copy_to_user_nofault(), which will
either copy data or fail if the pages are not faulted in (I will discuss this
below a bit more in response to your other point). If this fails, then we
fault in, and try again.

We loop because there could be some extremely unfortunate timing with a race
on e.g. swapping out or migrating pages between faulting in and trying to
write out again. This is extremely unlikely, but to avoid any chance of
breaking userland we repeat the operation until it completes. In nearly all
real-world situations it'll either work immediately or loop once.

> If we look into the failing point in vread_iter(), it's mainly coming
> from copy_page_to_iter_nofault(), e.g. page_copy_sane() checking failed,
> i->data_source checking failed. If these conditional checks failed,
> should we continue reading again and again? And this is not related to
> memory faulting in. I saw your discussion with David, but I am still a
> little lost. Hope I can learn it, thanks in advance.
>

Actually neither of these is going to happen. page_copy_sane() checks the
sanity of the _source_ pages, and the 'sanity' is defined by whether your
offset and length sit within the (possibly compound) folio. Since we control
this, we can arrange for it never to happen.

i->data_source checks that it's an output iterator; however, we would already
have checked this when writing ELF headers at the bare minimum, so we cannot
reach this point with an invalid iterator. Therefore it is not possible for
either to cause a failure.

What could cause a failure, and what we are checking for, is specified in
copyout_nofault() (in iov_iter.c), which we pass to the iterate_and_advance()
macro. There we have a fault-injection check, should_fail_usercopy(), which
would just trigger a redo, or copy_to_user_nofault() returning < 0
(e.g. -EFAULT). This code is confusing, as this function returns the number
of bytes _not copied_ rather than copied.
I have tested this to be sure by the way :)

Therefore the only way for a failure to occur is for memory not to be faulted
in, and thus the loop only triggers in this situation. If we fail to fault in
pages for any reason, the whole operation aborts, so this should cover all
angles.

> ......
> > diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
> > index 08b795fd80b4..25b44b303b35 100644
> > --- a/fs/proc/kcore.c
> > +++ b/fs/proc/kcore.c
> ......
> > @@ -507,13 +503,30 @@ read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
> >
> >  		switch (m->type) {
> >  		case KCORE_VMALLOC:
> > -			vread(buf, (char *)start, tsz);
> > -			/* we have to zero-fill user buffer even if no read */
> > -			if (copy_to_iter(buf, tsz, iter) != tsz) {
> > -				ret = -EFAULT;
> > -				goto out;
> > +		{
> > +			const char *src = (char *)start;
> > +			size_t read = 0, left = tsz;
> > +
> > +			/*
> > +			 * vmalloc uses spinlocks, so we optimistically try to
> > +			 * read memory. If this fails, fault pages in and try
> > +			 * again until we are done.
> > +			 */
> > +			while (true) {
> > +				read += vread_iter(iter, src, left);
> > +				if (read == tsz)
> > +					break;
> > +
> > +				src += read;
> > +				left -= read;
> > +
> > +				if (fault_in_iov_iter_writeable(iter, left)) {
> > +					ret = -EFAULT;
> > +					goto out;
> > +				}
> > 			}
> > 			break;
> > +		}
> > 		case KCORE_USER:
> > 			/* User page is handled prior to normal kernel page: */
> > 			if (copy_to_iter((char *)start, tsz, iter) != tsz) {
>