Date: Sat, 12 Dec 2020 03:04:51 +0900
From: Keith Busch
To: SelvaKumar S
Cc: linux-nvme@lists.infradead.org, axboe@kernel.dk, damien.lemoal@wdc.com,
	Johannes.Thumshirn@wdc.com, hch@lst.de, sagi@grimberg.me,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org, martin.petersen@oracle.com,
	bvanassche@acm.org, mpatocka@redhat.com, hare@suse.de,
	dm-devel@redhat.com, snitzer@redhat.com, selvajove@gmail.com,
	nj.shetty@samsung.com, joshi.k@samsung.com, javier.gonz@samsung.com
Subject: Re: [RFC PATCH v3 1/2] block: add simple copy support
Message-ID: <20201211180451.GA9103@redsun51.ssa.fujisawa.hgst.com>
References: <20201211135139.49232-1-selvakuma.s1@samsung.com>
 <20201211135139.49232-2-selvakuma.s1@samsung.com>
In-Reply-To: <20201211135139.49232-2-selvakuma.s1@samsung.com>

On Fri, Dec 11, 2020 at 07:21:38PM +0530, SelvaKumar S wrote:
> +int blk_copy_emulate(struct block_device *bdev, struct blk_copy_payload *payload,
> +		gfp_t gfp_mask)
> +{
> +	struct request_queue *q = bdev_get_queue(bdev);
> +	struct bio *bio;
> +	void *buf = NULL;
> +	int i, nr_srcs, max_range_len, ret, cur_dest, cur_size;
> +
> +	nr_srcs = payload->copy_range;
> +	max_range_len = q->limits.max_copy_range_sectors << SECTOR_SHIFT;

The default value for this limit is 0, and this is the function for when
the device doesn't support copy. Are we expecting drivers to set this
value to something else for that case?
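If not, the emulation path probably needs a fallback chunk size of its own
so max_range_len can't end up 0 here. A rough, untested sketch (the
BIO_MAX_PAGES cap is just an arbitrary example):

	max_range_len = q->limits.max_copy_range_sectors << SECTOR_SHIFT;
	if (!max_range_len)
		/* arbitrary fallback chunk when the queue exposes no copy limit */
		max_range_len = BIO_MAX_PAGES << PAGE_SHIFT;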
> +	cur_dest = payload->dest;
> +	buf = kvmalloc(max_range_len, GFP_ATOMIC);
> +	if (!buf)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < nr_srcs; i++) {
> +		bio = bio_alloc(gfp_mask, 1);
> +		bio->bi_iter.bi_sector = payload->range[i].src;
> +		bio->bi_opf = REQ_OP_READ;
> +		bio_set_dev(bio, bdev);
> +
> +		cur_size = payload->range[i].len << SECTOR_SHIFT;
> +		ret = bio_add_page(bio, virt_to_page(buf), cur_size,
> +				offset_in_page(payload));

'buf' is vmalloc'ed, so we don't necessarily have contiguous pages. I
think you need to allocate the bio with bio_map_kern() or something like
that instead for that kind of memory (a rough sketch of what I mean is at
the end of this mail).

> +		if (ret != cur_size) {
> +			ret = -ENOMEM;
> +			goto out;
> +		}
> +
> +		ret = submit_bio_wait(bio);
> +		bio_put(bio);
> +		if (ret)
> +			goto out;
> +
> +		bio = bio_alloc(gfp_mask, 1);
> +		bio_set_dev(bio, bdev);
> +		bio->bi_opf = REQ_OP_WRITE;
> +		bio->bi_iter.bi_sector = cur_dest;
> +		ret = bio_add_page(bio, virt_to_page(buf), cur_size,
> +				offset_in_page(payload));
> +		if (ret != cur_size) {
> +			ret = -ENOMEM;
> +			goto out;
> +		}
> +
> +		ret = submit_bio_wait(bio);
> +		bio_put(bio);
> +		if (ret)
> +			goto out;
> +
> +		cur_dest += payload->range[i].len;
> +	}

I think this would be a faster implementation if the reads were issued
asynchronously, each with a payload buffer allocated specifically for
that read, and the completion callback enqueued the write part. That
would also let you accumulate all the read data and issue the write in
a single call.
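To illustrate the bio_map_kern() point above, here is a rough, untested
sketch of what the read side of the loop could look like, assuming that
helper (or an equivalent that maps a kernel buffer into a bio) is usable
from this code:

	cur_size = payload->range[i].len << SECTOR_SHIFT;

	/* let the block layer map the possibly vmalloc'ed buffer page by page */
	bio = bio_map_kern(q, buf, cur_size, gfp_mask);
	if (IS_ERR(bio)) {
		ret = PTR_ERR(bio);
		goto out;
	}
	bio_set_dev(bio, bdev);
	bio->bi_opf = REQ_OP_READ;
	bio->bi_iter.bi_sector = payload->range[i].src;

	ret = submit_bio_wait(bio);
	bio_put(bio);
	if (ret)
		goto out;

The write half would follow the same pattern with REQ_OP_WRITE and
cur_dest.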
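And to sketch the asynchronous idea (untested, all names made up; the
write is punted to a workqueue since submitting and waiting from a
bi_end_io callback is not an option):

/* per-range context carried from the async read to the deferred write */
struct copy_ctx {
	struct work_struct	work;
	struct block_device	*bdev;
	void			*buf;	/* data read from the source range */
	sector_t		dest;	/* destination sector for the write */
	unsigned int		len;	/* bytes */
};

/* runs in process context: write out the data the read just filled in */
static void copy_write_work(struct work_struct *work)
{
	struct copy_ctx *ctx = container_of(work, struct copy_ctx, work);
	struct bio *bio;

	bio = bio_map_kern(bdev_get_queue(ctx->bdev), ctx->buf, ctx->len,
			   GFP_KERNEL);
	if (!IS_ERR(bio)) {
		bio_set_dev(bio, ctx->bdev);
		bio->bi_opf = REQ_OP_WRITE;
		bio->bi_iter.bi_sector = ctx->dest;
		submit_bio_wait(bio);
		bio_put(bio);
	}
	kvfree(ctx->buf);
	kfree(ctx);
}

/* read completion: hand the buffer off to the write side */
static void copy_read_endio(struct bio *bio)
{
	struct copy_ctx *ctx = bio->bi_private;

	if (bio->bi_status) {
		kvfree(ctx->buf);
		kfree(ctx);
	} else {
		INIT_WORK(&ctx->work, copy_write_work);
		schedule_work(&ctx->work);
	}
	bio_put(bio);
}

Each read bio would get its own copy_ctx and buffer, with bi_private
pointing at the context and bi_end_io set to copy_read_endio, so all the
reads can be in flight at the same time.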