From: ira.weiny@intel.com
To: Andrew Morton
Cc: Ira Weiny, Thomas Gleixner, Dave Hansen, Matthew Wilcox,
    Christoph Hellwig,
    Dan Williams, Al Viro, Eric Biggers, Luis Chamberlain,
    Patrik Jakobsson, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
    David Howells, Chris Mason, Josef Bacik, David Sterba, Steve French,
    Jaegeuk Kim, Chao Yu, Nicolas Pitre, "Martin K. Petersen",
    Brian King, Greg Kroah-Hartman, Alexei Starovoitov, Daniel Borkmann,
    Jérôme Glisse, Kirti Wankhede,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 08/17] fs/hfsplus: Convert to mem*_page()
Date: Mon, 23 Nov 2020 22:07:46 -0800
Message-Id: <20201124060755.1405602-9-ira.weiny@intel.com>
In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com>
References: <20201124060755.1405602-1-ira.weiny@intel.com>

From: Ira Weiny <ira.weiny@intel.com>

Remove the pattern of kmap/mem*/kunmap in favor of the new mem*_page()
functions, which handle the kmap()/kunmap() pairing correctly for us.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/hfsplus/bnode.c | 53 +++++++++++++--------------------------------------
 1 file changed, 15 insertions(+), 38 deletions(-)

diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
index 177fae4e6581..c4347b1cb36f 100644
--- a/fs/hfsplus/bnode.c
+++ b/fs/hfsplus/bnode.c
@@ -29,14 +29,12 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
 	off &= ~PAGE_MASK;
 
 	l = min_t(int, len, PAGE_SIZE - off);
-	memcpy(buf, kmap(*pagep) + off, l);
-	kunmap(*pagep);
+	memcpy_from_page(buf, *pagep, off, l);
 
 	while ((len -= l) != 0) {
 		buf += l;
 		l = min_t(int, len, PAGE_SIZE);
-		memcpy(buf, kmap(*++pagep), l);
-		kunmap(*pagep);
+		memcpy_from_page(buf, *++pagep, 0, l);
 	}
 }
 
@@ -82,16 +80,14 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len)
 	off &= ~PAGE_MASK;
 
 	l = min_t(int, len, PAGE_SIZE - off);
-	memcpy(kmap(*pagep) + off, buf, l);
+	memcpy_to_page(*pagep, off, buf, l);
 	set_page_dirty(*pagep);
-	kunmap(*pagep);
 
 	while ((len -= l) != 0) {
 		buf += l;
 		l = min_t(int, len, PAGE_SIZE);
-		memcpy(kmap(*++pagep), buf, l);
+		memcpy_to_page(*++pagep, 0, buf, l);
 		set_page_dirty(*pagep);
-		kunmap(*pagep);
 	}
 }
 
@@ -112,15 +108,13 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len)
 	off &= ~PAGE_MASK;
 
 	l = min_t(int, len, PAGE_SIZE - off);
-	memset(kmap(*pagep) + off, 0, l);
+	memzero_page(*pagep, off, l);
 	set_page_dirty(*pagep);
-	kunmap(*pagep);
 
 	while ((len -= l) != 0) {
 		l = min_t(int, len, PAGE_SIZE);
-		memset(kmap(*++pagep), 0, l);
+		memzero_page(*++pagep, 0, l);
 		set_page_dirty(*pagep);
-		kunmap(*pagep);
 	}
 }
 
@@ -142,17 +136,13 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
 
 	if (src == dst) {
 		l = min_t(int, len, PAGE_SIZE - src);
-		memcpy(kmap(*dst_page) + src, kmap(*src_page) + src, l);
-		kunmap(*src_page);
+		memcpy_page(*dst_page, src, *src_page, src, l);
 		set_page_dirty(*dst_page);
-		kunmap(*dst_page);
 
 		while ((len -= l) != 0) {
 			l = min_t(int, len, PAGE_SIZE);
-			memcpy(kmap(*++dst_page), kmap(*++src_page), l);
-			kunmap(*src_page);
+			memcpy_page(*++dst_page, 0, *++src_page, 0, l);
 			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
 		}
 	} else {
 		void *src_ptr, *dst_ptr;
@@ -202,21 +192,16 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
 
 		if (src == dst) {
 			while (src < len) {
-				memmove(kmap(*dst_page), kmap(*src_page), src);
-				kunmap(*src_page);
+				memmove_page(*dst_page, 0, *src_page, 0, src);
 				set_page_dirty(*dst_page);
-				kunmap(*dst_page);
 				len -= src;
 				src = PAGE_SIZE;
 				src_page--;
 				dst_page--;
 			}
 			src -= len;
-			memmove(kmap(*dst_page) + src,
-				kmap(*src_page) + src, len);
-			kunmap(*src_page);
+			memmove_page(*dst_page, src, *src_page, src, len);
 			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
 		} else {
 			void *src_ptr, *dst_ptr;
 
@@ -251,19 +236,13 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
 
 		if (src == dst) {
 			l = min_t(int, len, PAGE_SIZE - src);
-			memmove(kmap(*dst_page) + src,
-				kmap(*src_page) + src, l);
-			kunmap(*src_page);
+			memmove_page(*dst_page, src, *src_page, src, l);
 			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
 
 			while ((len -= l) != 0) {
 				l = min_t(int, len, PAGE_SIZE);
-				memmove(kmap(*++dst_page),
-					kmap(*++src_page), l);
-				kunmap(*src_page);
+				memmove_page(*++dst_page, 0, *++src_page, 0, l);
 				set_page_dirty(*dst_page);
-				kunmap(*dst_page);
 			}
 		} else {
 			void *src_ptr, *dst_ptr;
@@ -593,14 +572,12 @@ struct hfs_bnode *hfs_bnode_create(struct hfs_btree *tree, u32 num)
 	}
 
 	pagep = node->page;
-	memset(kmap(*pagep) + node->page_offset, 0,
-	       min_t(int, PAGE_SIZE, tree->node_size));
+	memzero_page(*pagep, node->page_offset,
+		     min_t(int, PAGE_SIZE, tree->node_size));
 	set_page_dirty(*pagep);
-	kunmap(*pagep);
 	for (i = 1; i < tree->pages_per_bnode; i++) {
-		memset(kmap(*++pagep), 0, PAGE_SIZE);
+		memzero_page(*++pagep, 0, PAGE_SIZE);
 		set_page_dirty(*pagep);
-		kunmap(*pagep);
 	}
 	clear_bit(HFS_BNODE_NEW, &node->flags);
 	wake_up(&node->lock_wq);
-- 
2.28.0.rc0.12.gb6a658bd00c9
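
For readers unfamiliar with the new helpers: each mem*_page() call simply
bundles the map/mem*/unmap triple that this patch deletes. The sketch below
is illustrative only, not the definitions added earlier in this series; it
is written against the plain kmap()/kunmap() API to mirror the pattern being
replaced (recent kernels implement the real helpers in
include/linux/highmem.h on top of kmap_local_page()):

#include <linux/highmem.h>
#include <linux/string.h>

static inline void memcpy_from_page(char *to, struct page *page,
				    size_t offset, size_t len)
{
	char *from = kmap(page);	/* map the (possibly highmem) page */

	memcpy(to, from + offset, len);
	kunmap(page);			/* unmap is guaranteed, not left to callers */
}

static inline void memcpy_to_page(struct page *page, size_t offset,
				  const char *from, size_t len)
{
	char *to = kmap(page);

	memcpy(to + offset, from, len);
	kunmap(page);
}

static inline void memzero_page(struct page *page, size_t offset, size_t len)
{
	char *addr = kmap(page);

	memset(addr + offset, 0, len);
	kunmap(page);
}

static inline void memcpy_page(struct page *dst_page, size_t dst_off,
			       struct page *src_page, size_t src_off,
			       size_t len)
{
	char *dst = kmap(dst_page);	/* both pages mapped for the copy */
	char *src = kmap(src_page);

	memcpy(dst + dst_off, src + src_off, len);
	kunmap(src_page);
	kunmap(dst_page);
}

memmove_page() follows the same shape with memmove() in the middle. This is
what makes each hunk above mechanical: every kmap()/op/kunmap() sequence
collapses into a single call, while the set_page_dirty() calls stay exactly
where they were.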