Subject: Re: [PATCH 2/4] hfsplus: Convert kmap() to kmap_local_page() in bnode.c
From: Viacheslav Dubeyko
Date: Tue, 9 Aug 2022 15:17:32 -0700
To: "Fabio M. De Francesco"
Cc: "Matthew Wilcox (Oracle)", Ira Weiny, Jens Axboe, Andrew Morton, Bart Van Assche, Kees Cook, Muchun Song, Linux FS Devel, LKML
In-Reply-To: <20220809203105.26183-3-fmdefrancesco@gmail.com>
References: <20220809203105.26183-1-fmdefrancesco@gmail.com> <20220809203105.26183-3-fmdefrancesco@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

> On Aug 9, 2022, at 1:31 PM, Fabio M. De Francesco wrote:
>
> kmap() is being deprecated in favor of kmap_local_page().
>
> Two main problems with kmap(): (1) it comes with an overhead, as mapping
> space is restricted and protected by a global lock for synchronization, and
> (2) it also requires global TLB invalidation when the kmap's pool wraps,
> and it might block until a slot becomes available when the mapping space
> is fully utilized.
>
> With kmap_local_page() the mappings are per thread, CPU local, can take
> page faults, and can be called from any context (including interrupts).
> It is faster than kmap() in kernels with HIGHMEM enabled. Furthermore,
> the tasks can be preempted and, when they are scheduled to run again, the
> kernel virtual addresses are restored and still valid.
>
> Since its use in bnode.c is safe everywhere, it should be preferred.
>
> Therefore, replace kmap() with kmap_local_page() in bnode.c. Where
> possible, use the suited standard helpers (memzero_page(), memcpy_page())
> instead of open-coding kmap_local_page() plus memset() or memcpy().
>
> Tested in a QEMU/KVM x86_32 VM, 6GB RAM, booting a kernel with
> HIGHMEM64GB enabled.
>
> Cc: Viacheslav Dubeyko
> Suggested-by: Ira Weiny
> Reviewed-by: Ira Weiny
> Signed-off-by: Fabio M. De Francesco
> ---

Looks good.

Reviewed-by: Viacheslav Dubeyko

Thanks,
Slava.
> fs/hfsplus/bnode.c | 105 +++++++++++++++++++++------------------------
> 1 file changed, 48 insertions(+), 57 deletions(-)
>
> diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
> index a5ab00e54220..87974d5e6791 100644
> --- a/fs/hfsplus/bnode.c
> +++ b/fs/hfsplus/bnode.c
> @@ -29,14 +29,12 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
> 	off &= ~PAGE_MASK;
>
> 	l = min_t(int, len, PAGE_SIZE - off);
> -	memcpy(buf, kmap(*pagep) + off, l);
> -	kunmap(*pagep);
> +	memcpy_from_page(buf, *pagep, off, l);
>
> 	while ((len -= l) != 0) {
> 		buf += l;
> 		l = min_t(int, len, PAGE_SIZE);
> -		memcpy(buf, kmap(*++pagep), l);
> -		kunmap(*pagep);
> +		memcpy_from_page(buf, *++pagep, 0, l);
> 	}
> }
>
> @@ -82,16 +80,14 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len)
> 	off &= ~PAGE_MASK;
>
> 	l = min_t(int, len, PAGE_SIZE - off);
> -	memcpy(kmap(*pagep) + off, buf, l);
> +	memcpy_to_page(*pagep, off, buf, l);
> 	set_page_dirty(*pagep);
> -	kunmap(*pagep);
>
> 	while ((len -= l) != 0) {
> 		buf += l;
> 		l = min_t(int, len, PAGE_SIZE);
> -		memcpy(kmap(*++pagep), buf, l);
> +		memcpy_to_page(*++pagep, 0, buf, l);
> 		set_page_dirty(*pagep);
> -		kunmap(*pagep);
> 	}
> }
>
> @@ -112,15 +108,13 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len)
> 	off &= ~PAGE_MASK;
>
> 	l = min_t(int, len, PAGE_SIZE - off);
> -	memset(kmap(*pagep) + off, 0, l);
> +	memzero_page(*pagep, off, l);
> 	set_page_dirty(*pagep);
> -	kunmap(*pagep);
>
> 	while ((len -= l) != 0) {
> 		l = min_t(int, len, PAGE_SIZE);
> -		memset(kmap(*++pagep), 0, l);
> +		memzero_page(*++pagep, 0, l);
> 		set_page_dirty(*pagep);
> -		kunmap(*pagep);
> 	}
> }
>
> @@ -142,24 +136,20 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
>
> 	if (src == dst) {
> 		l = min_t(int, len, PAGE_SIZE - src);
> -		memcpy(kmap(*dst_page) + src, kmap(*src_page) + src, l);
> -		kunmap(*src_page);
> +		memcpy_page(*dst_page, src, *src_page, src, l);
> 		set_page_dirty(*dst_page);
> -		kunmap(*dst_page);
>
> 		while ((len -= l) != 0) {
> 			l = min_t(int, len, PAGE_SIZE);
> -			memcpy(kmap(*++dst_page), kmap(*++src_page), l);
> -			kunmap(*src_page);
> +			memcpy_page(*++dst_page, 0, *++src_page, 0, l);
> 			set_page_dirty(*dst_page);
> -			kunmap(*dst_page);
> 		}
> 	} else {
> 		void *src_ptr, *dst_ptr;
>
> 		do {
> -			src_ptr = kmap(*src_page) + src;
> -			dst_ptr = kmap(*dst_page) + dst;
> +			dst_ptr = kmap_local_page(*dst_page) + dst;
> +			src_ptr = kmap_local_page(*src_page) + src;
> 			if (PAGE_SIZE - src < PAGE_SIZE - dst) {
> 				l = PAGE_SIZE - src;
> 				src = 0;
> @@ -171,9 +161,9 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
> 			}
> 			l = min(len, l);
> 			memcpy(dst_ptr, src_ptr, l);
> -			kunmap(*src_page);
> +			kunmap_local(src_ptr);
> 			set_page_dirty(*dst_page);
> -			kunmap(*dst_page);
> +			kunmap_local(dst_ptr);
> 			if (!dst)
> 				dst_page++;
> 			else
> @@ -185,6 +175,7 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
> void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
> {
> 	struct page **src_page, **dst_page;
> +	void *src_ptr, *dst_ptr;
> 	int l;
>
> 	hfs_dbg(BNODE_MOD, "movebytes: %u,%u,%u\n", dst, src, len);
> @@ -202,27 +193,28 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
>
> 		if (src == dst) {
> 			while (src < len) {
> -				memmove(kmap(*dst_page), kmap(*src_page), src);
> -				kunmap(*src_page);
> +				dst_ptr = kmap_local_page(*dst_page);
> +				src_ptr = kmap_local_page(*src_page);
> +				memmove(dst_ptr, src_ptr, src);
> +				kunmap_local(src_ptr);
> 				set_page_dirty(*dst_page);
> -				kunmap(*dst_page);
> +				kunmap_local(dst_ptr);
> 				len -= src;
> 				src = PAGE_SIZE;
> 				src_page--;
> 				dst_page--;
> 			}
> 			src -= len;
> -			memmove(kmap(*dst_page) + src,
> -				kmap(*src_page) + src, len);
> -			kunmap(*src_page);
> +			dst_ptr = kmap_local_page(*dst_page);
> +			src_ptr = kmap_local_page(*src_page);
> +			memmove(dst_ptr + src, src_ptr + src, len);
> +			kunmap_local(src_ptr);
> 			set_page_dirty(*dst_page);
> -			kunmap(*dst_page);
> +			kunmap_local(dst_ptr);
> 		} else {
> -			void *src_ptr, *dst_ptr;
> -
> 			do {
> -				src_ptr = kmap(*src_page) + src;
> -				dst_ptr = kmap(*dst_page) + dst;
> +				dst_ptr = kmap_local_page(*dst_page) + dst;
> +				src_ptr = kmap_local_page(*src_page) + src;
> 				if (src < dst) {
> 					l = src;
> 					src = PAGE_SIZE;
> @@ -234,9 +226,9 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
> 				}
> 				l = min(len, l);
> 				memmove(dst_ptr - l, src_ptr - l, l);
> -				kunmap(*src_page);
> +				kunmap_local(src_ptr);
> 				set_page_dirty(*dst_page);
> -				kunmap(*dst_page);
> +				kunmap_local(dst_ptr);
> 				if (dst == PAGE_SIZE)
> 					dst_page--;
> 				else
> @@ -251,26 +243,27 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
>
> 		if (src == dst) {
> 			l = min_t(int, len, PAGE_SIZE - src);
> -			memmove(kmap(*dst_page) + src,
> -				kmap(*src_page) + src, l);
> -			kunmap(*src_page);
> +
> +			dst_ptr = kmap_local_page(*dst_page) + src;
> +			src_ptr = kmap_local_page(*src_page) + src;
> +			memmove(dst_ptr, src_ptr, l);
> +			kunmap_local(src_ptr);
> 			set_page_dirty(*dst_page);
> -			kunmap(*dst_page);
> +			kunmap_local(dst_ptr);
>
> 			while ((len -= l) != 0) {
> 				l = min_t(int, len, PAGE_SIZE);
> -				memmove(kmap(*++dst_page),
> -					kmap(*++src_page), l);
> -				kunmap(*src_page);
> +				dst_ptr = kmap_local_page(*++dst_page);
> +				src_ptr = kmap_local_page(*++src_page);
> +				memmove(dst_ptr, src_ptr, l);
> +				kunmap_local(src_ptr);
> 				set_page_dirty(*dst_page);
> -				kunmap(*dst_page);
> +				kunmap_local(dst_ptr);
> 			}
> 		} else {
> -			void *src_ptr, *dst_ptr;
> -
> 			do {
> -				src_ptr = kmap(*src_page) + src;
> -				dst_ptr = kmap(*dst_page) + dst;
> +				dst_ptr = kmap_local_page(*dst_page) + dst;
> +				src_ptr = kmap_local_page(*src_page) + src;
> 				if (PAGE_SIZE - src <
> 				    PAGE_SIZE - dst) {
> 					l = PAGE_SIZE - src;
> @@ -283,9 +276,9 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
> 				}
> 				l = min(len, l);
> 				memmove(dst_ptr, src_ptr, l);
> -				kunmap(*src_page);
> +				kunmap_local(src_ptr);
> 				set_page_dirty(*dst_page);
> -				kunmap(*dst_page);
> +				kunmap_local(dst_ptr);
> 				if (!dst)
> 					dst_page++;
> 				else
> @@ -498,14 +491,14 @@ struct hfs_bnode *hfs_bnode_find(struct hfs_btree *tree, u32 num)
> 	if (!test_bit(HFS_BNODE_NEW, &node->flags))
> 		return node;
>
> -	desc = (struct hfs_bnode_desc *)(kmap(node->page[0]) +
> -			node->page_offset);
> +	desc = (struct hfs_bnode_desc *)(kmap_local_page(node->page[0]) +
> +					 node->page_offset);
> 	node->prev = be32_to_cpu(desc->prev);
> 	node->next = be32_to_cpu(desc->next);
> 	node->num_recs = be16_to_cpu(desc->num_recs);
> 	node->type = desc->type;
> 	node->height = desc->height;
> -	kunmap(node->page[0]);
> +	kunmap_local(desc);
>
> 	switch (node->type) {
> 	case HFS_NODE_HEADER:
> @@ -589,14 +582,12 @@ struct hfs_bnode *hfs_bnode_create(struct hfs_btree *tree, u32 num)
> 	}
>
> 	pagep = node->page;
> -	memset(kmap(*pagep) + node->page_offset, 0,
> -	       min_t(int, PAGE_SIZE, tree->node_size));
> +	memzero_page(*pagep, node->page_offset,
> +		     min_t(int, PAGE_SIZE, tree->node_size));
> 	set_page_dirty(*pagep);
> -	kunmap(*pagep);
> 	for (i = 1; i < tree->pages_per_bnode; i++) {
> -		memset(kmap(*++pagep), 0, PAGE_SIZE);
> +		memzero_page(*++pagep, 0, PAGE_SIZE);
> 		set_page_dirty(*pagep);
> -		kunmap(*pagep);
> 	}
> 	clear_bit(HFS_BNODE_NEW, &node->flags);
> 	wake_up(&node->lock_wq);
> -- 
> 2.37.1