From: ira.weiny@intel.com
To: David Sterba
Cc: Ira Weiny, Chris Mason, Josef Bacik, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 2/4] fs/btrfs: Convert raid5/6 kmaps to kmap_local_page()
Date: Tue, 16 Feb 2021 18:48:24 -0800
Message-Id: <20210217024826.3466046-3-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To:
<20210217024826.3466046-1-ira.weiny@intel.com>
References: <20210217024826.3466046-1-ira.weiny@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Ira Weiny

These kmaps are thread local and don't need to be atomic, so they can use
the more efficient kmap_local_page().  However, the mapping of pages in
the stripes and of the additional parity and qstripe pages is a bit
trickier because the unmapping must occur in the opposite order from the
mapping.  Furthermore, the pointer array in __raid_recover_end_io() may
get reordered.

Convert these calls to kmap_local_page(), taking care to reverse the
unmappings of any page arrays and being careful with the mappings of any
special pages such as the parity and qstripe pages.

Signed-off-by: Ira Weiny

---
This patch depends on the fix to raid5/6 kmapping sent previously:

https://lore.kernel.org/lkml/20210205163943.GD5033@iweiny-DESK2.sc.intel.com/#t
---
 fs/btrfs/raid56.c | 57 +++++++++++++++++++++++------------------------
 1 file changed, 28 insertions(+), 29 deletions(-)

diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index 9759fb31b73e..04abae305582 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -1233,13 +1233,13 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 
 		/* first collect one page from each data stripe */
 		for (stripe = 0; stripe < nr_data; stripe++) {
 			p = page_in_rbio(rbio, stripe, pagenr, 0);
-			pointers[stripe] = kmap(p);
+			pointers[stripe] = kmap_local_page(p);
 		}
 
 		/* then add the parity stripe */
 		p = rbio_pstripe_page(rbio, pagenr);
 		SetPageUptodate(p);
-		pointers[stripe++] = kmap(p);
+		pointers[stripe++] = kmap_local_page(p);
 
 		if (has_qstripe) {
@@ -1249,7 +1249,7 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 			 */
 			p = rbio_qstripe_page(rbio, pagenr);
 			SetPageUptodate(p);
-			pointers[stripe++] = kmap(p);
+			pointers[stripe++] = kmap_local_page(p);
 
 			raid6_call.gen_syndrome(rbio->real_stripes,
 						PAGE_SIZE, pointers);
@@ -1258,10 +1258,8 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 			copy_page(pointers[nr_data], pointers[0]);
 			run_xor(pointers + 1, nr_data - 1, PAGE_SIZE);
 		}
-
-
-		for (stripe = 0; stripe < rbio->real_stripes; stripe++)
-			kunmap(page_in_rbio(rbio, stripe, pagenr, 0));
+		for (stripe = stripe - 1; stripe >= 0; stripe--)
+			kunmap_local(pointers[stripe]);
 	}
 
 	/*
@@ -1780,6 +1778,7 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 {
 	int pagenr, stripe;
 	void **pointers;
+	void **unmap_array;
 	int faila = -1, failb = -1;
 	struct page *page;
 	blk_status_t err;
@@ -1791,6 +1790,12 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 		goto cleanup_io;
 	}
 
+	unmap_array = kcalloc(rbio->real_stripes, sizeof(void *), GFP_NOFS);
+	if (!unmap_array) {
+		err = BLK_STS_RESOURCE;
+		goto cleanup_pointers;
+	}
+
 	faila = rbio->faila;
 	failb = rbio->failb;
 
@@ -1814,6 +1819,9 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 
 		/* setup our array of pointers with pages
 		 * from each stripe
+		 *
+		 * NOTE Store a duplicate array of pointers to preserve the
+		 * pointer order.
 		 */
 		for (stripe = 0; stripe < rbio->real_stripes; stripe++) {
 			/*
@@ -1827,7 +1835,8 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 			} else {
 				page = rbio_stripe_page(rbio, stripe, pagenr);
 			}
-			pointers[stripe] = kmap(page);
+			pointers[stripe] = kmap_local_page(page);
+			unmap_array[stripe] = pointers[stripe];
 		}
 
 		/* all raid6 handling here */
@@ -1920,24 +1929,14 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 				}
 			}
 		}
-		for (stripe = 0; stripe < rbio->real_stripes; stripe++) {
-			/*
-			 * if we're rebuilding a read, we have to use
-			 * pages from the bio list
-			 */
-			if ((rbio->operation == BTRFS_RBIO_READ_REBUILD ||
-			     rbio->operation == BTRFS_RBIO_REBUILD_MISSING) &&
-			    (stripe == faila || stripe == failb)) {
-				page = page_in_rbio(rbio, stripe, pagenr, 0);
-			} else {
-				page = rbio_stripe_page(rbio, stripe, pagenr);
-			}
-			kunmap(page);
-		}
+		for (stripe = rbio->real_stripes - 1; stripe >= 0; stripe--)
+			kunmap_local(unmap_array[stripe]);
 	}
 
 	err = BLK_STS_OK;
 cleanup:
+	kfree(unmap_array);
+cleanup_pointers:
 	kfree(pointers);
 
 cleanup_io:
@@ -2362,13 +2361,13 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
 			goto cleanup;
 		}
 		SetPageUptodate(q_page);
-		pointers[rbio->real_stripes - 1] = kmap(q_page);
+		pointers[rbio->real_stripes - 1] = kmap_local_page(q_page);
 	}
 
 	atomic_set(&rbio->error, 0);
 
 	/* map the parity stripe just once */
-	pointers[nr_data] = kmap(p_page);
+	pointers[nr_data] = kmap_local_page(p_page);
 
 	for_each_set_bit(pagenr, rbio->dbitmap, rbio->stripe_npages) {
 		struct page *p;
@@ -2376,7 +2375,7 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
 		/* first collect one page from each data stripe */
 		for (stripe = 0; stripe < nr_data; stripe++) {
 			p = page_in_rbio(rbio, stripe, pagenr, 0);
-			pointers[stripe] = kmap(p);
+			pointers[stripe] = kmap_local_page(p);
 		}
 
 		if (has_qstripe) {
@@ -2399,14 +2398,14 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
 		bitmap_clear(rbio->dbitmap,
 			     pagenr, 1);
 		kunmap_local(parity);
 
-		for (stripe = 0; stripe < nr_data; stripe++)
-			kunmap(page_in_rbio(rbio, stripe, pagenr, 0));
+		for (stripe = nr_data - 1; stripe >= 0; stripe--)
+			kunmap_local(pointers[stripe]);
 	}
 
-	kunmap(p_page);
+	kunmap_local(pointers[nr_data]);
 	__free_page(p_page);
 	if (q_page) {
-		kunmap(q_page);
+		kunmap_local(pointers[rbio->real_stripes - 1]);
 		__free_page(q_page);
 	}
-- 
2.28.0.rc0.12.gb6a658bd00c9