2017-04-13 22:10:37

by Logan Gunthorpe

Subject: [PATCH 00/22] Introduce common scatterlist map function

Hi Everyone,

As part of my effort to enable P2P DMA transactions with PCI cards,
we've identified the need to be able to safely put IO memory into
scatterlists (and eventually other spots). This probably involves a
conversion from struct page to pfn_t but that migration is a ways off
and those decisions are yet to be made.

As an initial step in that direction, I've started cleaning up some of the
scatterlist code by trying to carve out a better-defined layer between it
and its users. The longer-term goal would be to remove sg_page or replace
it with something that can potentially fail.

This patchset is the first step in that effort. I've introduced
a common function to map scatterlist memory and converted all the common
kmap(sg_page()) cases. This removes about 66 sg_page calls (of ~331).

Seeing this is a fairly large cleanup set that touches a wide swath of
the kernel, I have limited the people I've sent this to. I'd suggest we look
toward merging the first patch; then I can send the individual subsystem
patches on to their respective maintainers and get them merged
independently. (This is to avoid the conflicts I created with my last
cleanup set... sorry.) Though, I'm certainly open to other suggestions for
getting it merged.

The patchset is based on v4.11-rc6 and can be found in the sg_map
branch from this git tree:

https://github.com/sbates130272/linux-p2pmem.git

Thanks,

Logan


Logan Gunthorpe (22):
scatterlist: Introduce sg_map helper functions
nvmet: Make use of the new sg_map helper function
libiscsi: Make use of new the sg_map helper function
target: Make use of the new sg_map function at 16 call sites
drm/i915: Make use of the new sg_map helper function
crypto: hifn_795x: Make use of the new sg_map helper function
crypto: shash, caam: Make use of the new sg_map helper function
crypto: chcr: Make use of the new sg_map helper function
dm-crypt: Make use of the new sg_map helper in 4 call sites
staging: unisys: visorbus: Make use of the new sg_map helper function
RDS: Make use of the new sg_map helper function
scsi: ipr, pmcraid, isci: Make use of the new sg_map helper in 4 call
sites
scsi: hisi_sas, mvsas, gdth: Make use of the new sg_map helper
function
scsi: arcmsr, ips, megaraid: Make use of the new sg_map helper
function
scsi: libfc, csiostor: Change to sg_copy_buffer in two drivers
xen-blkfront: Make use of the new sg_map helper function
mmc: sdhci: Make use of the new sg_map helper function
mmc: spi: Make use of the new sg_map helper function
mmc: tmio: Make use of the new sg_map helper function
mmc: sdricoh_cs: Make use of the new sg_map helper function
mmc: tifm_sd: Make use of the new sg_map helper function
memstick: Make use of the new sg_map helper function

crypto/shash.c | 9 +-
drivers/block/xen-blkfront.c | 33 +++++--
drivers/crypto/caam/caamalg.c | 8 +-
drivers/crypto/chelsio/chcr_algo.c | 28 +++---
drivers/crypto/hifn_795x.c | 32 ++++---
drivers/dma-buf/dma-buf.c | 3 +
drivers/gpu/drm/i915/i915_gem.c | 27 +++---
drivers/md/dm-crypt.c | 38 +++++---
drivers/memstick/host/jmb38x_ms.c | 23 ++++-
drivers/memstick/host/tifm_ms.c | 22 ++++-
drivers/mmc/host/mmc_spi.c | 26 +++--
drivers/mmc/host/sdhci.c | 35 ++++++-
drivers/mmc/host/sdricoh_cs.c | 14 ++-
drivers/mmc/host/tifm_sd.c | 88 +++++++++++++----
drivers/mmc/host/tmio_mmc.h | 12 ++-
drivers/mmc/host/tmio_mmc_dma.c | 5 +
drivers/mmc/host/tmio_mmc_pio.c | 24 +++++
drivers/nvme/target/fabrics-cmd.c | 16 +++-
drivers/scsi/arcmsr/arcmsr_hba.c | 16 +++-
drivers/scsi/csiostor/csio_scsi.c | 54 +----------
drivers/scsi/cxgbi/libcxgbi.c | 5 +
drivers/scsi/gdth.c | 9 +-
drivers/scsi/hisi_sas/hisi_sas_v1_hw.c | 14 ++-
drivers/scsi/hisi_sas/hisi_sas_v2_hw.c | 13 ++-
drivers/scsi/ipr.c | 27 +++---
drivers/scsi/ips.c | 8 +-
drivers/scsi/isci/request.c | 42 ++++----
drivers/scsi/libfc/fc_libfc.c | 49 ++--------
drivers/scsi/libiscsi_tcp.c | 32 ++++---
drivers/scsi/megaraid.c | 9 +-
drivers/scsi/mvsas/mv_sas.c | 10 +-
drivers/scsi/pmcraid.c | 19 ++--
drivers/staging/unisys/visorhba/visorhba_main.c | 12 ++-
drivers/target/iscsi/iscsi_target.c | 27 ++++--
drivers/target/target_core_rd.c | 3 +-
drivers/target/target_core_sbc.c | 122 +++++++++++++++++-------
drivers/target/target_core_transport.c | 18 ++--
drivers/target/target_core_user.c | 43 ++++++---
include/linux/scatterlist.h | 97 +++++++++++++++++++
include/scsi/libiscsi_tcp.h | 3 +-
include/target/target_core_backend.h | 4 +-
net/rds/ib_recv.c | 17 +++-
42 files changed, 739 insertions(+), 357 deletions(-)

--
2.1.4


2017-04-13 22:06:41

by Logan Gunthorpe

Subject: [PATCH 01/22] scatterlist: Introduce sg_map helper functions

This patch introduces functions which kmap the pages inside an sgl. Two
variants are provided: one if an offset is required and one if the
offset is zero. These functions replace a common pattern of
kmap(sg_page(sg)) that is used in about 50 places within the kernel.

The motivation for this work is to eventually safely support sgls that
contain io memory. In order for that to work, any access to the contents
of an iomem SGL will need to be done with an iomem-safe memcpy (e.g.
memcpy_fromio()) or hit some warning. (The exact details of how this
will work have yet to be worked out.) Having all the kmaps in one place
is just a first step in that direction. Additionally, since this helps
cut down the users of sg_page, it should make any effort to move to
struct-page-less DMA a little easier (should that idea ever swing back
into favour again).

A flags option is added to select between a regular or an atomic mapping,
so these functions can replace both the kmap(sg_page(...)) and
kmap_atomic(sg_page(...)) patterns.
Future work may expand this to have flags for using page_address or
vmap. Much further in the future, there may be a flag to allocate memory
and copy the data from/to iomem.

We also add the semantic that sg_map can fail to create a mapping, even
though the code it replaces is assumed never to fail and the current
version of these functions cannot fail. This is to support iomem, which
either has to fail to create the mapping or allocate memory as a bounce
buffer, which itself can fail.

Also, in terms of cleanup, a few of the existing kmap(sg_page) users
play things a bit loose in terms of whether they apply sg->offset
so using these helper functions should help avoid such issues.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/dma-buf/dma-buf.c | 3 ++
include/linux/scatterlist.h | 97 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 100 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 0007b79..b95934b 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -37,6 +37,9 @@

#include <uapi/linux/dma-buf.h>

+/* Prevent the highmem.h macro from aliasing ops->kunmap_atomic */
+#undef kunmap_atomic
+
static inline int is_dma_buf_file(struct file *);

struct dma_buf_list {
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index cb3c8fe..acd4d73 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -5,6 +5,7 @@
#include <linux/types.h>
#include <linux/bug.h>
#include <linux/mm.h>
+#include <linux/highmem.h>
#include <asm/io.h>

struct scatterlist {
@@ -126,6 +127,102 @@ static inline struct page *sg_page(struct scatterlist *sg)
return (struct page *)((sg)->page_link & ~0x3);
}

+#define SG_KMAP (1 << 0) /* create a mapping with kmap */
+#define SG_KMAP_ATOMIC (1 << 1) /* create a mapping with kmap_atomic */
+
+/**
+ * sg_map_offset - kmap a page inside an sgl
+ * @sg: SG entry
+ * @offset: Offset into entry
+ * @flags: Flags for creating the mapping
+ *
+ * Description:
+ * Use this function to map a page in the scatterlist at the specified
+ * offset. sg->offset is already added for you. Note: the semantics of
+ * this function are that it may fail. Thus, its output should be checked
+ * with IS_ERR and PTR_ERR. Otherwise, a pointer to the specified offset
+ * in the mapped page is returned.
+ *
+ * Flags can be any of:
+ * * SG_KMAP - Use kmap to create the mapping
+ * * SG_KMAP_ATOMIC - Use kmap_atomic to map the page atomically.
+ * Thus, the rules of that function apply: the CPU
+ * may not sleep until it is unmapped.
+ *
+ * Also, consider carefully whether this function is appropriate. It is
+ * largely not recommended for new code and if the sgl came from another
+ * subsystem and you don't know what kind of memory might be in the list
+ * then you definitely should not call it. Non-mappable memory may be in
+ * the sgl and thus this function may fail unexpectedly.
+ **/
+static inline void *sg_map_offset(struct scatterlist *sg, size_t offset,
+ int flags)
+{
+ struct page *pg;
+ unsigned int pg_off;
+
+ offset += sg->offset;
+ pg = nth_page(sg_page(sg), offset >> PAGE_SHIFT);
+ pg_off = offset_in_page(offset);
+
+ if (flags & SG_KMAP_ATOMIC)
+ return kmap_atomic(pg) + pg_off;
+ else
+ return kmap(pg) + pg_off;
+}
+
+/**
+ * sg_unkmap_offset - unmap a page that was mapped with sg_map_offset
+ * @sg: SG entry
+ * @addr: address returned by sg_map_offset
+ * @offset: Offset into entry (same as specified for sg_map_offset)
+ * @flags: Flags, which are the same specified for sg_map_offset
+ *
+ * Description:
+ * Unmap the page that was mapped with sg_map_offset
+ *
+ **/
+static inline void sg_unmap_offset(struct scatterlist *sg, void *addr,
+ size_t offset, int flags)
+{
+ struct page *pg;
+ unsigned int pg_off;
+
+ /* Mirror the arithmetic in sg_map_offset() */
+ offset += sg->offset;
+ pg = nth_page(sg_page(sg), offset >> PAGE_SHIFT);
+ pg_off = offset_in_page(offset);
+
+ if (flags & SG_KMAP_ATOMIC)
+ kunmap_atomic(addr - pg_off);
+ else
+ kunmap(pg);
+}
+
+/**
+ * sg_map - map the first page in the scatterlist entry
+ * @sg: SG entry
+ * @flags: Flags, see sg_map_offset for a description
+ *
+ * Description:
+ * Same as sg_map_offset(sg, 0, flags);
+ *
+ **/
+static inline void *sg_map(struct scatterlist *sg, int flags)
+{
+ return sg_map_offset(sg, 0, flags);
+}
+
+/**
+ * sg_unmap - unmap a page mapped with sg_map
+ * @sg: SG entry
+ * @addr: address returned by sg_map
+ * @flags: Flags, see sg_map_offset for a description
+ *
+ * Description:
+ * Same as sg_unmap_offset(sg, addr, 0, flags);
+ *
+ **/
+static inline void sg_unmap(struct scatterlist *sg, void *addr, int flags)
+{
+ sg_unmap_offset(sg, addr, 0, flags);
+}
+
/**
* sg_set_buf - Set sg entry to point at given data
* @sg: SG entry
--
2.1.4

2017-04-13 22:06:37

by Logan Gunthorpe

Subject: [PATCH 08/22] crypto: chcr: Make use of the new sg_map helper function

The get_page in this area looks *highly* suspect due to there being no
corresponding put_page. However, I've left that as is to avoid breaking
things.

I've also removed the KMAP_ATOMIC_ARGS check as it appears to be dead
code that dates back to when it was first committed...

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/crypto/chelsio/chcr_algo.c | 28 +++++++++++++++-------------
1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 41bc7f4..a993d1d 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -1489,22 +1489,21 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
return ERR_PTR(-EINVAL);
}

-static void aes_gcm_empty_pld_pad(struct scatterlist *sg,
- unsigned short offset)
+static int aes_gcm_empty_pld_pad(struct scatterlist *sg,
+ unsigned short offset)
{
- struct page *spage;
unsigned char *addr;

- spage = sg_page(sg);
- get_page(spage); /* so that it is not freed by NIC */
-#ifdef KMAP_ATOMIC_ARGS
- addr = kmap_atomic(spage, KM_SOFTIRQ0);
-#else
- addr = kmap_atomic(spage);
-#endif
- memset(addr + sg->offset, 0, offset + 1);
+ get_page(sg_page(sg)); /* so that it is not freed by NIC */
+
+ addr = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(addr))
+ return PTR_ERR(addr);
+
+ memset(addr, 0, offset + 1);
+ sg_unmap(sg, addr, SG_KMAP_ATOMIC);

- kunmap_atomic(addr);
+ return 0;
}

static int set_msg_len(u8 *block, unsigned int msglen, int csize)
@@ -1940,7 +1939,10 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req,
if (req->cryptlen) {
write_sg_to_skb(skb, &frags, src, req->cryptlen);
} else {
- aes_gcm_empty_pld_pad(req->dst, authsize - 1);
+ err = aes_gcm_empty_pld_pad(req->dst, authsize - 1);
+ if (err)
+ goto dstmap_fail;
+
write_sg_to_skb(skb, &frags, reqctx->dst, crypt_len);

}
--
2.1.4

2017-04-13 22:06:55

by Logan Gunthorpe

Subject: [PATCH 14/22] scsi: arcmsr, ips, megaraid: Make use of the new sg_map helper function

Very straightforward conversion of three SCSI drivers.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/scsi/arcmsr/arcmsr_hba.c | 16 ++++++++++++----
drivers/scsi/ips.c | 8 ++++----
drivers/scsi/megaraid.c | 9 +++++++--
3 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c
index af032c4..3cd485c 100644
--- a/drivers/scsi/arcmsr/arcmsr_hba.c
+++ b/drivers/scsi/arcmsr/arcmsr_hba.c
@@ -2306,7 +2306,10 @@ static int arcmsr_iop_message_xfer(struct AdapterControlBlock *acb,

use_sg = scsi_sg_count(cmd);
sg = scsi_sglist(cmd);
- buffer = kmap_atomic(sg_page(sg)) + sg->offset;
+ buffer = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(buffer))
+ return ARCMSR_MESSAGE_FAIL;
+
if (use_sg > 1) {
retvalue = ARCMSR_MESSAGE_FAIL;
goto message_out;
@@ -2539,7 +2542,7 @@ static int arcmsr_iop_message_xfer(struct AdapterControlBlock *acb,
message_out:
if (use_sg) {
struct scatterlist *sg = scsi_sglist(cmd);
- kunmap_atomic(buffer - sg->offset);
+ sg_unmap(sg, buffer, SG_KMAP_ATOMIC);
}
return retvalue;
}
@@ -2590,11 +2593,16 @@ static void arcmsr_handle_virtual_command(struct AdapterControlBlock *acb,
strncpy(&inqdata[32], "R001", 4); /* Product Revision */

sg = scsi_sglist(cmd);
- buffer = kmap_atomic(sg_page(sg)) + sg->offset;
+ buffer = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(buffer)) {
+ cmd->result = (DID_ERROR << 16);
+ cmd->scsi_done(cmd);
+ return;
+ }

memcpy(buffer, inqdata, sizeof(inqdata));
sg = scsi_sglist(cmd);
- kunmap_atomic(buffer - sg->offset);
+ sg_unmap(sg, buffer, SG_KMAP_ATOMIC);

cmd->scsi_done(cmd);
}
diff --git a/drivers/scsi/ips.c b/drivers/scsi/ips.c
index 3419e1b..a44291d 100644
--- a/drivers/scsi/ips.c
+++ b/drivers/scsi/ips.c
@@ -1506,14 +1506,14 @@ static int ips_is_passthru(struct scsi_cmnd *SC)
/* kmap_atomic() ensures addressability of the user buffer.*/
/* local_irq_save() protects the KM_IRQ0 address slot. */
local_irq_save(flags);
- buffer = kmap_atomic(sg_page(sg)) + sg->offset;
- if (buffer && buffer[0] == 'C' && buffer[1] == 'O' &&
+ buffer = sg_map(sg, SG_KMAP_ATOMIC);
+ if (!IS_ERR(buffer) && buffer[0] == 'C' && buffer[1] == 'O' &&
buffer[2] == 'P' && buffer[3] == 'P') {
- kunmap_atomic(buffer - sg->offset);
+ sg_unmap(sg, buffer, SG_KMAP_ATOMIC);
local_irq_restore(flags);
return 1;
}
- kunmap_atomic(buffer - sg->offset);
+ sg_unmap(sg, buffer, SG_KMAP_ATOMIC);
local_irq_restore(flags);
}
return 0;
diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
index 3c63c29..0b66e50 100644
--- a/drivers/scsi/megaraid.c
+++ b/drivers/scsi/megaraid.c
@@ -663,10 +663,15 @@ mega_build_cmd(adapter_t *adapter, Scsi_Cmnd *cmd, int *busy)
struct scatterlist *sg;

sg = scsi_sglist(cmd);
- buf = kmap_atomic(sg_page(sg)) + sg->offset;
+ buf = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(buf)) {
+ cmd->result = (DID_ERROR << 16);
+ cmd->scsi_done(cmd);
+ return NULL;
+ }

memset(buf, 0, cmd->cmnd[4]);
- kunmap_atomic(buf - sg->offset);
+ sg_unmap(sg, buf, SG_KMAP_ATOMIC);

cmd->result = (DID_OK << 16);
cmd->scsi_done(cmd);
--
2.1.4

2017-04-13 22:07:46

by Logan Gunthorpe

Subject: [PATCH 09/22] dm-crypt: Make use of the new sg_map helper in 4 call sites

Very straightforward conversion to the new function in all four spots.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/md/dm-crypt.c | 38 +++++++++++++++++++++++++-------------
1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 389a363..6bd0ffc 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -589,9 +589,12 @@ static int crypt_iv_lmk_gen(struct crypt_config *cc, u8 *iv,
int r = 0;

if (bio_data_dir(dmreq->ctx->bio_in) == WRITE) {
- src = kmap_atomic(sg_page(&dmreq->sg_in));
- r = crypt_iv_lmk_one(cc, iv, dmreq, src + dmreq->sg_in.offset);
- kunmap_atomic(src);
+ src = sg_map(&dmreq->sg_in, SG_KMAP_ATOMIC);
+ if (IS_ERR(src))
+ return PTR_ERR(src);
+
+ r = crypt_iv_lmk_one(cc, iv, dmreq, src);
+ sg_unmap(&dmreq->sg_in, src, SG_KMAP_ATOMIC);
} else
memset(iv, 0, cc->iv_size);

@@ -607,14 +610,17 @@ static int crypt_iv_lmk_post(struct crypt_config *cc, u8 *iv,
if (bio_data_dir(dmreq->ctx->bio_in) == WRITE)
return 0;

- dst = kmap_atomic(sg_page(&dmreq->sg_out));
- r = crypt_iv_lmk_one(cc, iv, dmreq, dst + dmreq->sg_out.offset);
+ dst = sg_map(&dmreq->sg_out, SG_KMAP_ATOMIC);
+ if (IS_ERR(dst))
+ return PTR_ERR(dst);
+
+ r = crypt_iv_lmk_one(cc, iv, dmreq, dst);

/* Tweak the first block of plaintext sector */
if (!r)
- crypto_xor(dst + dmreq->sg_out.offset, iv, cc->iv_size);
+ crypto_xor(dst, iv, cc->iv_size);

- kunmap_atomic(dst);
+ sg_unmap(&dmreq->sg_out, dst, SG_KMAP_ATOMIC);
return r;
}

@@ -731,9 +737,12 @@ static int crypt_iv_tcw_gen(struct crypt_config *cc, u8 *iv,

/* Remove whitening from ciphertext */
if (bio_data_dir(dmreq->ctx->bio_in) != WRITE) {
- src = kmap_atomic(sg_page(&dmreq->sg_in));
- r = crypt_iv_tcw_whitening(cc, dmreq, src + dmreq->sg_in.offset);
- kunmap_atomic(src);
+ src = sg_map(&dmreq->sg_in, SG_KMAP_ATOMIC);
+ if (IS_ERR(src))
+ return PTR_ERR(src);
+
+ r = crypt_iv_tcw_whitening(cc, dmreq, src);
+ sg_unmap(&dmreq->sg_in, src, SG_KMAP_ATOMIC);
}

/* Calculate IV */
@@ -755,9 +764,12 @@ static int crypt_iv_tcw_post(struct crypt_config *cc, u8 *iv,
return 0;

/* Apply whitening on ciphertext */
- dst = kmap_atomic(sg_page(&dmreq->sg_out));
- r = crypt_iv_tcw_whitening(cc, dmreq, dst + dmreq->sg_out.offset);
- kunmap_atomic(dst);
+ dst = sg_map(&dmreq->sg_out, SG_KMAP_ATOMIC);
+ if (IS_ERR(dst))
+ return PTR_ERR(dst);
+
+ r = crypt_iv_tcw_whitening(cc, dmreq, dst);
+ sg_unmap(&dmreq->sg_out, dst, SG_KMAP_ATOMIC);

return r;
}
--
2.1.4

2017-04-13 22:07:38

by Logan Gunthorpe

Subject: [PATCH 13/22] scsi: hisi_sas, mvsas, gdth: Make use of the new sg_map helper function

Very straightforward conversion of three SCSI drivers.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/scsi/gdth.c | 9 +++++++--
drivers/scsi/hisi_sas/hisi_sas_v1_hw.c | 14 +++++++++-----
drivers/scsi/hisi_sas/hisi_sas_v2_hw.c | 13 +++++++++----
drivers/scsi/mvsas/mv_sas.c | 10 +++++-----
4 files changed, 30 insertions(+), 16 deletions(-)

diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
index d020a13..82c9fba 100644
--- a/drivers/scsi/gdth.c
+++ b/drivers/scsi/gdth.c
@@ -2301,10 +2301,15 @@ static void gdth_copy_internal_data(gdth_ha_str *ha, Scsi_Cmnd *scp,
return;
}
local_irq_save(flags);
- address = kmap_atomic(sg_page(sl)) + sl->offset;
+ address = sg_map(sl, SG_KMAP_ATOMIC);
+ if (IS_ERR(address)) {
+ scp->result = DID_ERROR << 16;
+ local_irq_restore(flags);
+ return;
+ }
+
memcpy(address, buffer, cpnow);
flush_dcache_page(sg_page(sl));
- kunmap_atomic(address);
+ sg_unmap(sl, address, SG_KMAP_ATOMIC);
local_irq_restore(flags);
if (cpsum == cpcount)
break;
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
index 854fbea..30408f8 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
@@ -1377,18 +1377,22 @@ static int slot_complete_v1_hw(struct hisi_hba *hisi_hba,
void *to;
struct scatterlist *sg_resp = &task->smp_task.smp_resp;

- ts->stat = SAM_STAT_GOOD;
- to = kmap_atomic(sg_page(sg_resp));
+ to = sg_map(sg_resp, SG_KMAP_ATOMIC);
+ if (IS_ERR(to)) {
+ dev_err(dev, "slot complete: error mapping memory\n");
+ ts->stat = SAS_SG_ERR;
+ break;
+ }

+ ts->stat = SAM_STAT_GOOD;
dma_unmap_sg(dev, &task->smp_task.smp_resp, 1,
DMA_FROM_DEVICE);
dma_unmap_sg(dev, &task->smp_task.smp_req, 1,
DMA_TO_DEVICE);
- memcpy(to + sg_resp->offset,
- slot->status_buffer +
+ memcpy(to, slot->status_buffer +
sizeof(struct hisi_sas_err_record),
sg_dma_len(sg_resp));
- kunmap_atomic(to);
+ sg_unmap(sg_resp, to, SG_KMAP_ATOMIC);
break;
}
case SAS_PROTOCOL_SATA:
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
index 1b21445..0907947 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
@@ -1796,18 +1796,23 @@ slot_complete_v2_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot,
struct scatterlist *sg_resp = &task->smp_task.smp_resp;
void *to;

+ to = sg_map(sg_resp, SG_KMAP_ATOMIC);
+ if (IS_ERR(to)) {
+ dev_err(dev, "slot complete: error mapping memory\n");
+ ts->stat = SAS_SG_ERR;
+ break;
+ }
+
ts->stat = SAM_STAT_GOOD;
- to = kmap_atomic(sg_page(sg_resp));

dma_unmap_sg(dev, &task->smp_task.smp_resp, 1,
DMA_FROM_DEVICE);
dma_unmap_sg(dev, &task->smp_task.smp_req, 1,
DMA_TO_DEVICE);
- memcpy(to + sg_resp->offset,
- slot->status_buffer +
+ memcpy(to, slot->status_buffer +
sizeof(struct hisi_sas_err_record),
sg_dma_len(sg_resp));
- kunmap_atomic(to);
+ sg_unmap(sg_resp, to, SG_KMAP_ATOMIC);
break;
}
case SAS_PROTOCOL_SATA:
diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
index c7cc803..374d0e0 100644
--- a/drivers/scsi/mvsas/mv_sas.c
+++ b/drivers/scsi/mvsas/mv_sas.c
@@ -1798,11 +1798,11 @@ int mvs_slot_complete(struct mvs_info *mvi, u32 rx_desc, u32 flags)
case SAS_PROTOCOL_SMP: {
struct scatterlist *sg_resp = &task->smp_task.smp_resp;
tstat->stat = SAM_STAT_GOOD;
- to = kmap_atomic(sg_page(sg_resp));
- memcpy(to + sg_resp->offset,
- slot->response + sizeof(struct mvs_err_info),
- sg_dma_len(sg_resp));
- kunmap_atomic(to);
+ to = sg_map(sg_resp, SG_KMAP_ATOMIC);
+ if (IS_ERR(to)) {
+ tstat->stat = SAS_SG_ERR;
+ break;
+ }
+
+ memcpy(to,
+ slot->response + sizeof(struct mvs_err_info),
+ sg_dma_len(sg_resp));
+ sg_unmap(sg_resp, to, SG_KMAP_ATOMIC);
break;
}

--
2.1.4

2017-04-13 22:08:31

by Logan Gunthorpe

Subject: [PATCH 20/22] mmc: sdricoh_cs: Make use of the new sg_map helper function

This is a straightforward conversion to the new function.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/mmc/host/sdricoh_cs.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/mmc/host/sdricoh_cs.c b/drivers/mmc/host/sdricoh_cs.c
index 5ff26ab..7eeed23 100644
--- a/drivers/mmc/host/sdricoh_cs.c
+++ b/drivers/mmc/host/sdricoh_cs.c
@@ -319,16 +319,20 @@ static void sdricoh_request(struct mmc_host *mmc, struct mmc_request *mrq)
for (i = 0; i < data->blocks; i++) {
size_t len = data->blksz;
u8 *buf;
- struct page *page;
int result;
- page = sg_page(data->sg);

- buf = kmap(page) + data->sg->offset + (len * i);
+ buf = sg_map_offset(data->sg, (len * i), SG_KMAP);
+ if (IS_ERR(buf)) {
+ cmd->error = PTR_ERR(buf);
+ break;
+ }
+
result =
sdricoh_blockio(host,
data->flags & MMC_DATA_READ, buf, len);
- kunmap(page);
- flush_dcache_page(page);
+ sg_unmap_offset(data->sg, buf, (len * i), SG_KMAP);
+
+ flush_dcache_page(sg_page(data->sg));
if (result) {
dev_err(dev, "sdricoh_request: cmd %i "
"block transfer failed\n", cmd->opcode);
--
2.1.4

2017-04-13 22:06:59

by Logan Gunthorpe

Subject: [PATCH 11/22] RDS: Make use of the new sg_map helper function

Straightforward conversion except there's no error path, so we WARN if
the sg_map fails.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
net/rds/ib_recv.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c
index e10624a..7f8fa99 100644
--- a/net/rds/ib_recv.c
+++ b/net/rds/ib_recv.c
@@ -801,9 +801,20 @@ static void rds_ib_cong_recv(struct rds_connection *conn,
to_copy = min(RDS_FRAG_SIZE - frag_off, PAGE_SIZE - map_off);
BUG_ON(to_copy & 7); /* Must be 64bit aligned. */

- addr = kmap_atomic(sg_page(&frag->f_sg));
+ addr = sg_map(&frag->f_sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(addr)) {
+ /*
+ * This should really never happen unless
+ * the code is changed to use memory that is
+ * not mappable in the sg. Seeing there doesn't
+ * seem to be any error path out of here,
+ * we can only WARN.
+ */
+ WARN(1, "Non-mappable memory used in sg!");
+ return;
+ }

- src = addr + frag->f_sg.offset + frag_off;
+ src = addr + frag_off;
dst = (void *)map->m_page_addrs[map_page] + map_off;
for (k = 0; k < to_copy; k += 8) {
/* Record ports that became uncongested, ie
@@ -811,7 +822,7 @@ static void rds_ib_cong_recv(struct rds_connection *conn,
uncongested |= ~(*src) & *dst;
*dst++ = *src++;
}
- kunmap_atomic(addr);
+ sg_unmap(&frag->f_sg, addr, SG_KMAP_ATOMIC);

copied += to_copy;

--
2.1.4

2017-04-13 22:08:42

by Logan Gunthorpe

Subject: [PATCH 21/22] mmc: tifm_sd: Make use of the new sg_map helper function

This conversion is a bit complicated. We modify the read_fifo,
write_fifo and copy_page functions to take a scatterlist instead of a
page. Thus we can use sg_map instead of kmap_atomic. There's a bit of
accounting that needed to be done for the offset for this to work.
(sg_map_offset takes care of sg->offset itself, but the offset is
already added and used earlier in the code, so we subtract it back out
before calling the helper.)

There's also no error path, so if unmappable memory finds its way into
the sgl we can only WARN.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/mmc/host/tifm_sd.c | 88 +++++++++++++++++++++++++++++++++++-----------
1 file changed, 67 insertions(+), 21 deletions(-)

diff --git a/drivers/mmc/host/tifm_sd.c b/drivers/mmc/host/tifm_sd.c
index 93c4b40..75b0d74 100644
--- a/drivers/mmc/host/tifm_sd.c
+++ b/drivers/mmc/host/tifm_sd.c
@@ -111,14 +111,26 @@ struct tifm_sd {
};

/* for some reason, host won't respond correctly to readw/writew */
-static void tifm_sd_read_fifo(struct tifm_sd *host, struct page *pg,
+static void tifm_sd_read_fifo(struct tifm_sd *host, struct scatterlist *sg,
unsigned int off, unsigned int cnt)
{
struct tifm_dev *sock = host->dev;
unsigned char *buf;
unsigned int pos = 0, val;

- buf = kmap_atomic(pg) + off;
+ buf = sg_map_offset(sg, off - sg->offset, SG_KMAP_ATOMIC);
+ if (IS_ERR(buf)) {
+ /*
+ * This should really never happen unless
+ * the code is changed to use memory that is
+ * not mappable in the sg. Seeing there doesn't
+ * seem to be any error path out of here,
+ * we can only WARN.
+ */
+ WARN(1, "Non-mappable memory used in sg!");
+ return;
+ }
+
if (host->cmd_flags & DATA_CARRY) {
buf[pos++] = host->bounce_buf_data[0];
host->cmd_flags &= ~DATA_CARRY;
@@ -134,17 +146,29 @@ static void tifm_sd_read_fifo(struct tifm_sd *host, struct page *pg,
}
buf[pos++] = (val >> 8) & 0xff;
}
- kunmap_atomic(buf - off);
+ sg_unmap_offset(sg, buf, off - sg->offset, SG_KMAP_ATOMIC);
}

-static void tifm_sd_write_fifo(struct tifm_sd *host, struct page *pg,
+static void tifm_sd_write_fifo(struct tifm_sd *host, struct scatterlist *sg,
unsigned int off, unsigned int cnt)
{
struct tifm_dev *sock = host->dev;
unsigned char *buf;
unsigned int pos = 0, val;

- buf = kmap_atomic(pg) + off;
+ buf = sg_map_offset(sg, off - sg->offset, SG_KMAP_ATOMIC);
+ if (IS_ERR(buf)) {
+ /*
+ * This should really never happen unless
+ * the code is changed to use memory that is
+ * not mappable in the sg. Seeing there doesn't
+ * seem to be any error path out of here,
+ * we can only WARN.
+ */
+ WARN(1, "Non-mappable memory used in sg!");
+ return;
+ }
+
if (host->cmd_flags & DATA_CARRY) {
val = host->bounce_buf_data[0] | ((buf[pos++] << 8) & 0xff00);
writel(val, sock->addr + SOCK_MMCSD_DATA);
@@ -161,7 +185,7 @@ static void tifm_sd_write_fifo(struct tifm_sd *host, struct page *pg,
val |= (buf[pos++] << 8) & 0xff00;
writel(val, sock->addr + SOCK_MMCSD_DATA);
}
- kunmap_atomic(buf - off);
+ sg_unmap_offset(sg, buf, off - sg->offset, SG_KMAP_ATOMIC);
}

static void tifm_sd_transfer_data(struct tifm_sd *host)
@@ -170,7 +194,6 @@ static void tifm_sd_transfer_data(struct tifm_sd *host)
struct scatterlist *sg = r_data->sg;
unsigned int off, cnt, t_size = TIFM_MMCSD_FIFO_SIZE * 2;
unsigned int p_off, p_cnt;
- struct page *pg;

if (host->sg_pos == host->sg_len)
return;
@@ -192,33 +215,57 @@ static void tifm_sd_transfer_data(struct tifm_sd *host)
}
off = sg[host->sg_pos].offset + host->block_pos;

- pg = nth_page(sg_page(&sg[host->sg_pos]), off >> PAGE_SHIFT);
p_off = offset_in_page(off);
p_cnt = PAGE_SIZE - p_off;
p_cnt = min(p_cnt, cnt);
p_cnt = min(p_cnt, t_size);

if (r_data->flags & MMC_DATA_READ)
- tifm_sd_read_fifo(host, pg, p_off, p_cnt);
+ tifm_sd_read_fifo(host, &sg[host->sg_pos], p_off,
+ p_cnt);
else if (r_data->flags & MMC_DATA_WRITE)
- tifm_sd_write_fifo(host, pg, p_off, p_cnt);
+ tifm_sd_write_fifo(host, &sg[host->sg_pos], p_off,
+ p_cnt);

t_size -= p_cnt;
host->block_pos += p_cnt;
}
}

-static void tifm_sd_copy_page(struct page *dst, unsigned int dst_off,
- struct page *src, unsigned int src_off,
+static void tifm_sd_copy_page(struct scatterlist *dst, unsigned int dst_off,
+ struct scatterlist *src, unsigned int src_off,
unsigned int count)
{
- unsigned char *src_buf = kmap_atomic(src) + src_off;
- unsigned char *dst_buf = kmap_atomic(dst) + dst_off;
+ unsigned char *src_buf, *dst_buf;
+
+ src_off -= src->offset;
+ dst_off -= dst->offset;
+
+ src_buf = sg_map_offset(src, src_off, SG_KMAP_ATOMIC);
+ if (IS_ERR(src_buf))
+ goto sg_map_err;
+
+ dst_buf = sg_map_offset(dst, dst_off, SG_KMAP_ATOMIC);
+ if (IS_ERR(dst_buf))
+ goto sg_map_err;

memcpy(dst_buf, src_buf, count);

- kunmap_atomic(dst_buf - dst_off);
- kunmap_atomic(src_buf - src_off);
+ sg_unmap_offset(dst, dst_buf, dst_off, SG_KMAP_ATOMIC);
+ sg_unmap_offset(src, src_buf, src_off, SG_KMAP_ATOMIC);
+
+ return;
+
+sg_map_err:
+ if (!IS_ERR(src_buf))
+ sg_unmap_offset(src, src_buf, src_off, SG_KMAP_ATOMIC);
+
+ /*
+ * This should really never happen unless
+ * the code is changed to use memory that is
+ * not mappable in the sg. Seeing there doesn't
+ * seem to be any error path out of here,
+ * we can only WARN.
+ */
+ WARN(1, "Non-mappable memory used in sg!");
}

static void tifm_sd_bounce_block(struct tifm_sd *host, struct mmc_data *r_data)
@@ -227,7 +274,6 @@ static void tifm_sd_bounce_block(struct tifm_sd *host, struct mmc_data *r_data)
unsigned int t_size = r_data->blksz;
unsigned int off, cnt;
unsigned int p_off, p_cnt;
- struct page *pg;

dev_dbg(&host->dev->dev, "bouncing block\n");
while (t_size) {
@@ -241,18 +287,18 @@ static void tifm_sd_bounce_block(struct tifm_sd *host, struct mmc_data *r_data)
}
off = sg[host->sg_pos].offset + host->block_pos;

- pg = nth_page(sg_page(&sg[host->sg_pos]), off >> PAGE_SHIFT);
p_off = offset_in_page(off);
p_cnt = PAGE_SIZE - p_off;
p_cnt = min(p_cnt, cnt);
p_cnt = min(p_cnt, t_size);

if (r_data->flags & MMC_DATA_WRITE)
- tifm_sd_copy_page(sg_page(&host->bounce_buf),
+ tifm_sd_copy_page(&host->bounce_buf,
r_data->blksz - t_size,
- pg, p_off, p_cnt);
+ &sg[host->sg_pos], p_off, p_cnt);
else if (r_data->flags & MMC_DATA_READ)
- tifm_sd_copy_page(pg, p_off, sg_page(&host->bounce_buf),
+ tifm_sd_copy_page(&sg[host->sg_pos], p_off,
+ &host->bounce_buf,
r_data->blksz - t_size, p_cnt);

t_size -= p_cnt;
--
2.1.4

2017-04-13 22:09:41

by Logan Gunthorpe

Subject: [PATCH 17/22] mmc: sdhci: Make use of the new sg_map helper function

Straightforward conversion, except due to the lack of error path we
have to WARN if the memory in the SGL is not mappable.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/mmc/host/sdhci.c | 35 ++++++++++++++++++++++++++++++-----
1 file changed, 30 insertions(+), 5 deletions(-)

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 63bc33a..af0c107 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -497,15 +497,34 @@ static int sdhci_pre_dma_transfer(struct sdhci_host *host,
return sg_count;
}

+/*
+ * Note this function may return an ERR_PTR and must be checked.
+ */
static char *sdhci_kmap_atomic(struct scatterlist *sg, unsigned long *flags)
{
+ void *ret;
+
local_irq_save(*flags);
- return kmap_atomic(sg_page(sg)) + sg->offset;
+
+ ret = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(ret)) {
+ /*
+ * This should really never happen unless the code is changed
+ * to use memory that is not mappable in the sg. Seeing there
+ * doesn't seem to be any error path out of here, we can only
+ * WARN.
+ */
+ WARN(1, "Non-mappable memory used in sg!");
+ local_irq_restore(*flags);
+ }
+
+ return ret;
}

-static void sdhci_kunmap_atomic(void *buffer, unsigned long *flags)
+static void sdhci_kunmap_atomic(struct scatterlist *sg, void *buffer,
+ unsigned long *flags)
{
- kunmap_atomic(buffer);
+ sg_unmap(sg, buffer, SG_KMAP_ATOMIC);
local_irq_restore(*flags);
}

@@ -568,8 +587,11 @@ static void sdhci_adma_table_pre(struct sdhci_host *host,
if (offset) {
if (data->flags & MMC_DATA_WRITE) {
buffer = sdhci_kmap_atomic(sg, &flags);
+ if (IS_ERR(buffer))
+ return;
+
memcpy(align, buffer, offset);
- sdhci_kunmap_atomic(buffer, &flags);
+ sdhci_kunmap_atomic(sg, buffer, &flags);
}

/* tran, valid */
@@ -646,8 +668,11 @@ static void sdhci_adma_table_post(struct sdhci_host *host,
(sg_dma_address(sg) & SDHCI_ADMA2_MASK);

buffer = sdhci_kmap_atomic(sg, &flags);
+ if (IS_ERR(buffer))
+ return;
+
memcpy(buffer, align, size);
- sdhci_kunmap_atomic(buffer, &flags);
+ sdhci_kunmap_atomic(sg, buffer, &flags);

align += SDHCI_ADMA2_ALIGN;
}
--
2.1.4

2017-04-13 22:06:48

by Logan Gunthorpe

Subject: [PATCH 02/22] nvmet: Make use of the new sg_map helper function

This is a straightforward conversion in two places. Should the mapping
fail, the code will return an SGL_INVALID_DATA error in the completion.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/nvme/target/fabrics-cmd.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
index 8bd022af..f62a634 100644
--- a/drivers/nvme/target/fabrics-cmd.c
+++ b/drivers/nvme/target/fabrics-cmd.c
@@ -122,7 +122,11 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
struct nvmet_ctrl *ctrl = NULL;
u16 status = 0;

- d = kmap(sg_page(req->sg)) + req->sg->offset;
+ d = sg_map(req->sg, SG_KMAP);
+ if (IS_ERR(d)) {
+ status = NVME_SC_SGL_INVALID_DATA;
+ goto out;
+ }

/* zero out initial completion result, assign values as needed */
req->rsp->result.u32 = 0;
@@ -158,7 +162,7 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
req->rsp->result.u16 = cpu_to_le16(ctrl->cntlid);

out:
- kunmap(sg_page(req->sg));
+ sg_unmap(req->sg, d, SG_KMAP);
nvmet_req_complete(req, status);
}

@@ -170,7 +174,11 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
u16 qid = le16_to_cpu(c->qid);
u16 status = 0;

- d = kmap(sg_page(req->sg)) + req->sg->offset;
+ d = sg_map(req->sg, SG_KMAP);
+ if (IS_ERR(d)) {
+ status = NVME_SC_SGL_INVALID_DATA;
+ goto out;
+ }

/* zero out initial completion result, assign values as needed */
req->rsp->result.u32 = 0;
@@ -205,7 +213,7 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
pr_info("adding queue %d to ctrl %d.\n", qid, ctrl->cntlid);

out:
- kunmap(sg_page(req->sg));
+ sg_unmap(req->sg, d, SG_KMAP);
nvmet_req_complete(req, status);
return;

--
2.1.4

2017-04-13 22:11:40

by Logan Gunthorpe

Subject: [PATCH 05/22] drm/i915: Make use of the new sg_map helper function

This is a single straightforward conversion from kmap to sg_map.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/gpu/drm/i915/i915_gem.c | 27 ++++++++++++++++-----------
1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 67b1fc5..1b1b91a 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2188,6 +2188,15 @@ static void __i915_gem_object_reset_page_iter(struct drm_i915_gem_object *obj)
radix_tree_delete(&obj->mm.get_page.radix, iter.index);
}

+static void i915_gem_object_unmap(const struct drm_i915_gem_object *obj,
+ void *ptr)
+{
+ if (is_vmalloc_addr(ptr))
+ vunmap(ptr);
+ else
+ sg_unmap(obj->mm.pages->sgl, ptr, SG_KMAP);
+}
+
void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
enum i915_mm_subclass subclass)
{
@@ -2215,10 +2224,7 @@ void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
void *ptr;

ptr = ptr_mask_bits(obj->mm.mapping);
- if (is_vmalloc_addr(ptr))
- vunmap(ptr);
- else
- kunmap(kmap_to_page(ptr));
+ i915_gem_object_unmap(obj, ptr);

obj->mm.mapping = NULL;
}
@@ -2475,8 +2481,11 @@ static void *i915_gem_object_map(const struct drm_i915_gem_object *obj,
void *addr;

/* A single page can always be kmapped */
- if (n_pages == 1 && type == I915_MAP_WB)
- return kmap(sg_page(sgt->sgl));
+ if (n_pages == 1 && type == I915_MAP_WB) {
+ addr = sg_map(sgt->sgl, SG_KMAP);
+ if (IS_ERR(addr))
+ return NULL;
+ return addr;
+ }

if (n_pages > ARRAY_SIZE(stack_pages)) {
/* Too big for stack -- allocate temporary array instead */
@@ -2543,11 +2552,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
goto err_unpin;
}

- if (is_vmalloc_addr(ptr))
- vunmap(ptr);
- else
- kunmap(kmap_to_page(ptr));
-
+ i915_gem_object_unmap(obj, ptr);
ptr = obj->mm.mapping = NULL;
}

--
2.1.4

2017-04-13 22:06:30

by Logan Gunthorpe

Subject: [PATCH 06/22] crypto: hifn_795x: Make use of the new sg_map helper function

Conversion of a couple kmap_atomic instances to the sg_map helper
function.

However, it looks like there was a bug in the original code: the source
scatterlist's offset (t->offset) was passed to ablkcipher_get, which
added it to the destination address. This doesn't make a lot of
sense, but t->offset is likely always zero anyway. So, this patch cleans
up that brokenness.

Also, a change to the error path: if ablkcipher_get failed, everything
seemed to proceed as if it hadn't. Setting 'error' should hopefully
clear that up.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/crypto/hifn_795x.c | 32 +++++++++++++++++++++-----------
1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/hifn_795x.c b/drivers/crypto/hifn_795x.c
index e09d405..8e2c6a9 100644
--- a/drivers/crypto/hifn_795x.c
+++ b/drivers/crypto/hifn_795x.c
@@ -1619,7 +1619,7 @@ static int hifn_start_device(struct hifn_device *dev)
return 0;
}

-static int ablkcipher_get(void *saddr, unsigned int *srestp, unsigned int offset,
+static int ablkcipher_get(void *saddr, unsigned int *srestp,
struct scatterlist *dst, unsigned int size, unsigned int *nbytesp)
{
unsigned int srest = *srestp, nbytes = *nbytesp, copy;
@@ -1632,15 +1632,17 @@ static int ablkcipher_get(void *saddr, unsigned int *srestp, unsigned int offset
while (size) {
copy = min3(srest, dst->length, size);

- daddr = kmap_atomic(sg_page(dst));
- memcpy(daddr + dst->offset + offset, saddr, copy);
- kunmap_atomic(daddr);
+ daddr = sg_map(dst, SG_KMAP_ATOMIC);
+ if (IS_ERR(daddr))
+ return PTR_ERR(daddr);
+
+ memcpy(daddr, saddr, copy);
+ sg_unmap(dst, daddr, SG_KMAP_ATOMIC);

nbytes -= copy;
size -= copy;
srest -= copy;
saddr += copy;
- offset = 0;

pr_debug("%s: copy: %u, size: %u, srest: %u, nbytes: %u.\n",
__func__, copy, size, srest, nbytes);
@@ -1671,11 +1673,12 @@ static inline void hifn_complete_sa(struct hifn_device *dev, int i)

static void hifn_process_ready(struct ablkcipher_request *req, int error)
{
+ int err;
struct hifn_request_context *rctx = ablkcipher_request_ctx(req);

if (rctx->walk.flags & ASYNC_FLAGS_MISALIGNED) {
unsigned int nbytes = req->nbytes;
- int idx = 0, err;
+ int idx = 0;
struct scatterlist *dst, *t;
void *saddr;

@@ -1695,17 +1698,24 @@ static void hifn_process_ready(struct ablkcipher_request *req, int error)
continue;
}

- saddr = kmap_atomic(sg_page(t));
+ saddr = sg_map(t, SG_KMAP_ATOMIC);
+ if (IS_ERR(saddr)) {
+ if (!error)
+ error = PTR_ERR(saddr);
+ break;
+ }
+
+ err = ablkcipher_get(saddr, &t->length,
+ dst, nbytes, &nbytes);
+ sg_unmap(t, saddr, SG_KMAP_ATOMIC);

- err = ablkcipher_get(saddr, &t->length, t->offset,
- dst, nbytes, &nbytes);
if (err < 0) {
- kunmap_atomic(saddr);
+ if (!error)
+ error = err;
break;
}

idx += err;
- kunmap_atomic(saddr);
}

hifn_cipher_walk_exit(&rctx->walk);
--
2.1.4

2017-04-13 22:12:49

by Logan Gunthorpe

Subject: [PATCH 19/22] mmc: tmio: Make use of the new sg_map helper function

Straightforward conversion to the sg_map helper. A couple of paths will
WARN if the memory does not end up being mappable.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/mmc/host/tmio_mmc.h | 12 ++++++++++--
drivers/mmc/host/tmio_mmc_dma.c | 5 +++++
drivers/mmc/host/tmio_mmc_pio.c | 24 ++++++++++++++++++++++++
3 files changed, 39 insertions(+), 2 deletions(-)

diff --git a/drivers/mmc/host/tmio_mmc.h b/drivers/mmc/host/tmio_mmc.h
index 2b349d4..ba68c9fed 100644
--- a/drivers/mmc/host/tmio_mmc.h
+++ b/drivers/mmc/host/tmio_mmc.h
@@ -198,17 +198,25 @@ void tmio_mmc_enable_mmc_irqs(struct tmio_mmc_host *host, u32 i);
void tmio_mmc_disable_mmc_irqs(struct tmio_mmc_host *host, u32 i);
irqreturn_t tmio_mmc_irq(int irq, void *devid);

+/* Note: this function may return an ERR_PTR and must be checked! */
static inline char *tmio_mmc_kmap_atomic(struct scatterlist *sg,
unsigned long *flags)
{
+ void *ret;
+
local_irq_save(*flags);
- return kmap_atomic(sg_page(sg)) + sg->offset;
+ ret = sg_map(sg, SG_KMAP_ATOMIC);
+
+ if (IS_ERR(ret))
+ local_irq_restore(*flags);
+
+ return ret;
}

static inline void tmio_mmc_kunmap_atomic(struct scatterlist *sg,
unsigned long *flags, void *virt)
{
- kunmap_atomic(virt - sg->offset);
+ sg_unmap(sg, virt, SG_KMAP_ATOMIC);
local_irq_restore(*flags);
}

diff --git a/drivers/mmc/host/tmio_mmc_dma.c b/drivers/mmc/host/tmio_mmc_dma.c
index fa8a936..07531f7 100644
--- a/drivers/mmc/host/tmio_mmc_dma.c
+++ b/drivers/mmc/host/tmio_mmc_dma.c
@@ -149,6 +149,11 @@ static void tmio_mmc_start_dma_tx(struct tmio_mmc_host *host)
if (!aligned) {
unsigned long flags;
void *sg_vaddr = tmio_mmc_kmap_atomic(sg, &flags);
+ if (IS_ERR(sg_vaddr)) {
+ ret = PTR_ERR(sg_vaddr);
+ goto pio;
+ }
+
sg_init_one(&host->bounce_sg, host->bounce_buf, sg->length);
memcpy(host->bounce_buf, sg_vaddr, host->bounce_sg.length);
tmio_mmc_kunmap_atomic(sg, &flags, sg_vaddr);
diff --git a/drivers/mmc/host/tmio_mmc_pio.c b/drivers/mmc/host/tmio_mmc_pio.c
index 6b789a7..d6fdbf6 100644
--- a/drivers/mmc/host/tmio_mmc_pio.c
+++ b/drivers/mmc/host/tmio_mmc_pio.c
@@ -479,6 +479,18 @@ static void tmio_mmc_pio_irq(struct tmio_mmc_host *host)
}

sg_virt = tmio_mmc_kmap_atomic(host->sg_ptr, &flags);
+ if (IS_ERR(sg_virt)) {
+ /*
+ * This should really never happen unless
+ * the code is changed to use memory that is
+ * not mappable in the sg. Seeing there doesn't
+ * seem to be any error path out of here,
+ * we can only WARN.
+ */
+ WARN(1, "Non-mappable memory used in sg!");
+ return;
+ }
+
buf = (unsigned short *)(sg_virt + host->sg_off);

count = host->sg_ptr->length - host->sg_off;
@@ -506,6 +518,18 @@ static void tmio_mmc_check_bounce_buffer(struct tmio_mmc_host *host)
if (host->sg_ptr == &host->bounce_sg) {
unsigned long flags;
void *sg_vaddr = tmio_mmc_kmap_atomic(host->sg_orig, &flags);
+ if (IS_ERR(sg_vaddr)) {
+ /*
+ * This should really never happen unless
+ * the code is changed to use memory that is
+ * not mappable in the sg. Seeing there doesn't
+ * seem to be any error path out of here,
+ * we can only WARN.
+ */
+ WARN(1, "Non-mappable memory used in sg!");
+ return;
+ }
+
memcpy(sg_vaddr, host->bounce_buf, host->bounce_sg.length);
tmio_mmc_kunmap_atomic(host->sg_orig, &flags, sg_vaddr);
}
--
2.1.4

2017-04-13 22:12:42

by Logan Gunthorpe

Subject: [PATCH 16/22] xen-blkfront: Make use of the new sg_map helper function

Straightforward conversion to the new helper, except that due to
the lack of an error path, we have to WARN if unmappable memory
is ever present in the sgl.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/block/xen-blkfront.c | 33 +++++++++++++++++++++++++++------
1 file changed, 27 insertions(+), 6 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 5067a0a..7dcf41d 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -807,8 +807,19 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
BUG_ON(sg->offset + sg->length > PAGE_SIZE);

if (setup.need_copy) {
- setup.bvec_off = sg->offset;
- setup.bvec_data = kmap_atomic(sg_page(sg));
+ setup.bvec_off = 0;
+ setup.bvec_data = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(setup.bvec_data)) {
+ /*
+ * This should really never happen unless
+ * the code is changed to use memory that is
+ * not mappable in the sg. Seeing there is a
+ * questionable error path out of here,
+ * we WARN.
+ */
+ WARN(1, "Non-mappable memory used in sg!");
+ return 1;
+ }
}

gnttab_foreach_grant_in_range(sg_page(sg),
@@ -818,7 +829,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
&setup);

if (setup.need_copy)
- kunmap_atomic(setup.bvec_data);
+ sg_unmap(sg, setup.bvec_data, SG_KMAP_ATOMIC);
}
if (setup.segments)
kunmap_atomic(setup.segments);
@@ -1468,8 +1479,18 @@ static bool blkif_completion(unsigned long *id,
for_each_sg(s->sg, sg, num_sg, i) {
BUG_ON(sg->offset + sg->length > PAGE_SIZE);

- data.bvec_offset = sg->offset;
- data.bvec_data = kmap_atomic(sg_page(sg));
+ data.bvec_offset = 0;
+ data.bvec_data = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(data.bvec_data)) {
+ /*
+ * This should really never happen unless
+ * the code is changed to use memory that is
+ * not mappable in the sg. Seeing there is no
+ * clear error path, we WARN.
+ */
+ WARN(1, "Non-mappable memory used in sg!");
+ return 1;
+ }

gnttab_foreach_grant_in_range(sg_page(sg),
sg->offset,
@@ -1477,7 +1498,7 @@ static bool blkif_completion(unsigned long *id,
blkif_copy_from_grant,
&data);

- kunmap_atomic(data.bvec_data);
+ sg_unmap(sg, data.bvec_data, SG_KMAP_ATOMIC);
}
}
/* Add the persistent grant into the list of free grants */
--
2.1.4

2017-04-13 22:13:39

by Logan Gunthorpe

Subject: [PATCH 04/22] target: Make use of the new sg_map function at 16 call sites

Fairly straightforward conversions in all spots. In a couple of cases,
any error is propagated up should sg_map fail. In other
cases a warning is issued if the mapping fails, seeing there's no
clear error path. This should not be an issue until someone tries to
use unmappable memory in the sgl with this driver.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/target/iscsi/iscsi_target.c | 27 +++++---
drivers/target/target_core_rd.c | 3 +-
drivers/target/target_core_sbc.c | 122 +++++++++++++++++++++++----------
drivers/target/target_core_transport.c | 18 +++--
drivers/target/target_core_user.c | 43 ++++++++----
include/target/target_core_backend.h | 4 +-
6 files changed, 149 insertions(+), 68 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index a918024..e3e0d8f 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -579,7 +579,7 @@ iscsit_xmit_nondatain_pdu(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
}

static int iscsit_map_iovec(struct iscsi_cmd *, struct kvec *, u32, u32);
-static void iscsit_unmap_iovec(struct iscsi_cmd *);
+static void iscsit_unmap_iovec(struct iscsi_cmd *, struct kvec *);
static u32 iscsit_do_crypto_hash_sg(struct ahash_request *, struct iscsi_cmd *,
u32, u32, u32, u8 *);
static int
@@ -646,7 +646,7 @@ iscsit_xmit_datain_pdu(struct iscsi_conn *conn, struct iscsi_cmd *cmd,

ret = iscsit_fe_sendpage_sg(cmd, conn);

- iscsit_unmap_iovec(cmd);
+ iscsit_unmap_iovec(cmd, &cmd->iov_data[1]);

if (ret < 0) {
iscsit_tx_thread_wait_for_tcp(conn);
@@ -925,7 +925,10 @@ static int iscsit_map_iovec(
while (data_length) {
u32 cur_len = min_t(u32, data_length, sg->length - page_off);

- iov[i].iov_base = kmap(sg_page(sg)) + sg->offset + page_off;
+ iov[i].iov_base = sg_map_offset(sg, page_off, SG_KMAP);
+ if (IS_ERR(iov[i].iov_base))
+ goto map_err;
+
iov[i].iov_len = cur_len;

data_length -= cur_len;
@@ -937,17 +940,25 @@ static int iscsit_map_iovec(
cmd->kmapped_nents = i;

return i;
+
+map_err:
+ cmd->kmapped_nents = i;
+ iscsit_unmap_iovec(cmd, iov);
+ return -1;
}

-static void iscsit_unmap_iovec(struct iscsi_cmd *cmd)
+static void iscsit_unmap_iovec(struct iscsi_cmd *cmd, struct kvec *iov)
{
u32 i;
struct scatterlist *sg;
+ unsigned int page_off = cmd->first_data_sg_off;

sg = cmd->first_data_sg;

- for (i = 0; i < cmd->kmapped_nents; i++)
- kunmap(sg_page(&sg[i]));
+ for (i = 0; i < cmd->kmapped_nents; i++) {
+ sg_unmap_offset(&sg[i], iov[i].iov_base, page_off, SG_KMAP);
+ page_off = 0;
+ }
}

static void iscsit_ack_from_expstatsn(struct iscsi_conn *conn, u32 exp_statsn)
@@ -1610,7 +1621,7 @@ iscsit_get_dataout(struct iscsi_conn *conn, struct iscsi_cmd *cmd,

rx_got = rx_data(conn, &cmd->iov_data[0], iov_count, rx_size);

- iscsit_unmap_iovec(cmd);
+ iscsit_unmap_iovec(cmd, iov);

if (rx_got != rx_size)
return -1;
@@ -2626,7 +2637,7 @@ static int iscsit_handle_immediate_data(

rx_got = rx_data(conn, &cmd->iov_data[0], iov_count, rx_size);

- iscsit_unmap_iovec(cmd);
+ iscsit_unmap_iovec(cmd, cmd->iov_data);

if (rx_got != rx_size) {
iscsit_rx_thread_wait_for_tcp(conn);
diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c
index ddc216c..22c5ad5 100644
--- a/drivers/target/target_core_rd.c
+++ b/drivers/target/target_core_rd.c
@@ -431,7 +431,8 @@ static sense_reason_t rd_do_prot_rw(struct se_cmd *cmd, bool is_read)
cmd->t_prot_sg, 0);

if (!rc)
- sbc_dif_copy_prot(cmd, sectors, is_read, prot_sg, prot_offset);
+ rc = sbc_dif_copy_prot(cmd, sectors, is_read, prot_sg,
+ prot_offset);

return rc;
}
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index c194063..67cb420 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -420,17 +420,17 @@ static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd, bool success,

offset = 0;
for_each_sg(cmd->t_bidi_data_sg, sg, cmd->t_bidi_data_nents, count) {
- addr = kmap_atomic(sg_page(sg));
- if (!addr) {
+ addr = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(addr)) {
ret = TCM_OUT_OF_RESOURCES;
goto out;
}

for (i = 0; i < sg->length; i++)
- *(addr + sg->offset + i) ^= *(buf + offset + i);
+ *(addr + i) ^= *(buf + offset + i);

offset += sg->length;
- kunmap_atomic(addr);
+ sg_unmap(sg, addr, SG_KMAP_ATOMIC);
}

out:
@@ -541,8 +541,8 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool succes
* Compare against SCSI READ payload against verify payload
*/
for_each_sg(cmd->t_bidi_data_sg, sg, cmd->t_bidi_data_nents, i) {
- addr = (unsigned char *)kmap_atomic(sg_page(sg));
- if (!addr) {
+ addr = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(addr)) {
ret = TCM_OUT_OF_RESOURCES;
goto out;
}
@@ -552,10 +552,10 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool succes
if (memcmp(addr, buf + offset, len)) {
pr_warn("Detected MISCOMPARE for addr: %p buf: %p\n",
addr, buf + offset);
- kunmap_atomic(addr);
+ sg_unmap(sg, addr, SG_KMAP_ATOMIC);
goto miscompare;
}
- kunmap_atomic(addr);
+ sg_unmap(sg, addr, SG_KMAP_ATOMIC);

offset += len;
compare_len -= len;
@@ -1262,8 +1262,14 @@ sbc_dif_generate(struct se_cmd *cmd)
unsigned int block_size = dev->dev_attrib.block_size;

for_each_sg(cmd->t_prot_sg, psg, cmd->t_prot_nents, i) {
- paddr = kmap_atomic(sg_page(psg)) + psg->offset;
- daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+ paddr = sg_map(psg, SG_KMAP_ATOMIC);
+ if (IS_ERR(paddr))
+ goto sg_map_err;
+
+ daddr = sg_map(dsg, SG_KMAP_ATOMIC);
+ if (IS_ERR(daddr))
+ goto sg_map_err;
+

for (j = 0; j < psg->length;
j += sizeof(*sdt)) {
@@ -1272,26 +1278,32 @@ sbc_dif_generate(struct se_cmd *cmd)

if (offset >= dsg->length) {
offset -= dsg->length;
- kunmap_atomic(daddr - dsg->offset);
+ sg_unmap(dsg, daddr, SG_KMAP_ATOMIC);
dsg = sg_next(dsg);
if (!dsg) {
- kunmap_atomic(paddr - psg->offset);
+ sg_unmap(psg, paddr, SG_KMAP_ATOMIC);
return;
}
- daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+ daddr = sg_map(dsg, SG_KMAP_ATOMIC);
+ if (IS_ERR(daddr))
+ goto sg_map_err;
}

sdt = paddr + j;
avail = min(block_size, dsg->length - offset);
crc = crc_t10dif(daddr + offset, avail);
if (avail < block_size) {
- kunmap_atomic(daddr - dsg->offset);
+ sg_unmap(dsg, daddr, SG_KMAP_ATOMIC);
dsg = sg_next(dsg);
if (!dsg) {
- kunmap_atomic(paddr - psg->offset);
+ sg_unmap(psg, paddr, SG_KMAP_ATOMIC);
return;
}
- daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+
+ daddr = sg_map(dsg, SG_KMAP_ATOMIC);
+ if (IS_ERR(daddr))
+ goto sg_map_err;
+
offset = block_size - avail;
crc = crc_t10dif_update(crc, daddr, offset);
} else {
@@ -1313,9 +1325,24 @@ sbc_dif_generate(struct se_cmd *cmd)
sector++;
}

- kunmap_atomic(daddr - dsg->offset);
- kunmap_atomic(paddr - psg->offset);
+ sg_unmap(dsg, daddr, SG_KMAP_ATOMIC);
+ sg_unmap(psg, paddr, SG_KMAP_ATOMIC);
}
+
+ return;
+
+sg_map_err:
+ if (!IS_ERR_OR_NULL(paddr))
+ sg_unmap(psg, paddr, SG_KMAP_ATOMIC);
+
+ /*
+ * This should really never happen unless
+ * the code is changed to use memory that is
+ * not mappable in the sg. Seeing there doesn't
+ * seem to be any error path out of here,
+ * we can only WARN.
+ */
+ WARN(1, "Non-mappable memory used in sg!");
}

static sense_reason_t
@@ -1359,8 +1386,8 @@ sbc_dif_v1_verify(struct se_cmd *cmd, struct t10_pi_tuple *sdt,
return 0;
}

-void sbc_dif_copy_prot(struct se_cmd *cmd, unsigned int sectors, bool read,
- struct scatterlist *sg, int sg_off)
+int sbc_dif_copy_prot(struct se_cmd *cmd, unsigned int sectors, bool read,
+ struct scatterlist *sg, int sg_off)
{
struct se_device *dev = cmd->se_dev;
struct scatterlist *psg;
@@ -1369,18 +1396,24 @@ void sbc_dif_copy_prot(struct se_cmd *cmd, unsigned int sectors, bool read,
unsigned int offset = sg_off;

if (!sg)
- return;
+ return 0;

left = sectors * dev->prot_length;

for_each_sg(cmd->t_prot_sg, psg, cmd->t_prot_nents, i) {
unsigned int psg_len, copied = 0;

- paddr = kmap_atomic(sg_page(psg)) + psg->offset;
+ paddr = sg_map(psg, SG_KMAP_ATOMIC);
+ if (IS_ERR(paddr))
+ return TCM_OUT_OF_RESOURCES;
+
psg_len = min(left, psg->length);
while (psg_len) {
len = min(psg_len, sg->length - offset);
- addr = kmap_atomic(sg_page(sg)) + sg->offset + offset;
+ addr = sg_map_offset(sg, offset, SG_KMAP_ATOMIC);
+
+ if (IS_ERR(addr))
+ return TCM_OUT_OF_RESOURCES;

if (read)
memcpy(paddr + copied, addr, len);
@@ -1392,15 +1425,17 @@ void sbc_dif_copy_prot(struct se_cmd *cmd, unsigned int sectors, bool read,
copied += len;
psg_len -= len;

- kunmap_atomic(addr - sg->offset - offset);
+ sg_unmap_offset(sg, addr, offset, SG_KMAP_ATOMIC);

if (offset >= sg->length) {
sg = sg_next(sg);
offset = 0;
}
}
- kunmap_atomic(paddr - psg->offset);
+ sg_unmap(psg, paddr, SG_KMAP_ATOMIC);
}
+
+ return 0;
}
EXPORT_SYMBOL(sbc_dif_copy_prot);

@@ -1419,8 +1454,15 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
unsigned int block_size = dev->dev_attrib.block_size;

for (; psg && sector < start + sectors; psg = sg_next(psg)) {
- paddr = kmap_atomic(sg_page(psg)) + psg->offset;
- daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+ paddr = sg_map(psg, SG_KMAP_ATOMIC);
+ if (IS_ERR(paddr))
+ goto sg_map_err;
+
+ daddr = sg_map(dsg, SG_KMAP_ATOMIC);
+ if (IS_ERR(daddr)) {
+ sg_unmap(psg, paddr, SG_KMAP_ATOMIC);
+ goto sg_map_err;
+ }

for (i = psg_off; i < psg->length &&
sector < start + sectors;
@@ -1430,13 +1472,13 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,

if (dsg_off >= dsg->length) {
dsg_off -= dsg->length;
- kunmap_atomic(daddr - dsg->offset);
+ sg_unmap(dsg, daddr, SG_KMAP_ATOMIC);
dsg = sg_next(dsg);
if (!dsg) {
- kunmap_atomic(paddr - psg->offset);
+ sg_unmap(psg, paddr, SG_KMAP_ATOMIC);
return 0;
}
- daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+ daddr = sg_map(dsg, SG_KMAP_ATOMIC);
}

sdt = paddr + i;
@@ -1454,13 +1496,13 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
avail = min(block_size, dsg->length - dsg_off);
crc = crc_t10dif(daddr + dsg_off, avail);
if (avail < block_size) {
- kunmap_atomic(daddr - dsg->offset);
+ sg_unmap(dsg, daddr, SG_KMAP_ATOMIC);
dsg = sg_next(dsg);
if (!dsg) {
- kunmap_atomic(paddr - psg->offset);
+ sg_unmap(psg, paddr, SG_KMAP_ATOMIC);
return 0;
}
- daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+ daddr = sg_map(dsg, SG_KMAP_ATOMIC);
dsg_off = block_size - avail;
crc = crc_t10dif_update(crc, daddr, dsg_off);
} else {
@@ -1469,8 +1511,8 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,

rc = sbc_dif_v1_verify(cmd, sdt, crc, sector, ei_lba);
if (rc) {
- kunmap_atomic(daddr - dsg->offset);
- kunmap_atomic(paddr - psg->offset);
+ sg_unmap(dsg, daddr, SG_KMAP_ATOMIC);
+ sg_unmap(psg, paddr, SG_KMAP_ATOMIC);
cmd->bad_sector = sector;
return rc;
}
@@ -1480,10 +1522,16 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
}

psg_off = 0;
- kunmap_atomic(daddr - dsg->offset);
- kunmap_atomic(paddr - psg->offset);
+ sg_unmap(dsg, daddr, SG_KMAP_ATOMIC);
+ sg_unmap(psg, paddr, SG_KMAP_ATOMIC);
}

return 0;
+
+sg_map_err:
+ if (!IS_ERR_OR_NULL(paddr))
+ sg_unmap(psg, paddr, SG_KMAP_ATOMIC);
+
+ return TCM_OUT_OF_RESOURCES;
}
EXPORT_SYMBOL(sbc_dif_verify);
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index b1a3cdb..6899ef9 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -1504,11 +1504,11 @@ int target_submit_cmd_map_sgls(struct se_cmd *se_cmd, struct se_session *se_sess
unsigned char *buf = NULL;

if (sgl)
- buf = kmap(sg_page(sgl)) + sgl->offset;
+ buf = sg_map(sgl, SG_KMAP);

- if (buf) {
+ if (buf && !IS_ERR(buf)) {
memset(buf, 0, sgl->length);
- kunmap(sg_page(sgl));
+ sg_unmap(sgl, buf, SG_KMAP);
}
}

@@ -2276,8 +2276,14 @@ void *transport_kmap_data_sg(struct se_cmd *cmd)
return NULL;

BUG_ON(!sg);
- if (cmd->t_data_nents == 1)
- return kmap(sg_page(sg)) + sg->offset;
+ if (cmd->t_data_nents == 1) {
+ cmd->t_data_vmap = sg_map(sg, SG_KMAP);
+ if (IS_ERR(cmd->t_data_vmap)) {
+ cmd->t_data_vmap = NULL;
+ return NULL;
+ }
+ return cmd->t_data_vmap;
+ }

/* >1 page. use vmap */
pages = kmalloc(sizeof(*pages) * cmd->t_data_nents, GFP_KERNEL);
@@ -2303,7 +2309,7 @@ void transport_kunmap_data_sg(struct se_cmd *cmd)
if (!cmd->t_data_nents) {
return;
} else if (cmd->t_data_nents == 1) {
- kunmap(sg_page(cmd->t_data_sg));
+ sg_unmap(cmd->t_data_sg, cmd->t_data_vmap, SG_KMAP);
return;
}

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index c6874c3..319fef5 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -260,7 +260,7 @@ static inline size_t iov_tail(struct tcmu_dev *udev, struct iovec *iov)
return (size_t)iov->iov_base + iov->iov_len;
}

-static void alloc_and_scatter_data_area(struct tcmu_dev *udev,
+static int alloc_and_scatter_data_area(struct tcmu_dev *udev,
struct scatterlist *data_sg, unsigned int data_nents,
struct iovec **iov, int *iov_cnt, bool copy_data)
{
@@ -272,7 +272,10 @@ static void alloc_and_scatter_data_area(struct tcmu_dev *udev,

for_each_sg(data_sg, sg, data_nents, i) {
int sg_remaining = sg->length;
- from = kmap_atomic(sg_page(sg)) + sg->offset;
+ from = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(from))
+ return PTR_ERR(from);
+
while (sg_remaining > 0) {
if (block_remaining == 0) {
block = find_first_zero_bit(udev->data_bitmap,
@@ -301,8 +304,10 @@ static void alloc_and_scatter_data_area(struct tcmu_dev *udev,
sg_remaining -= copy_bytes;
block_remaining -= copy_bytes;
}
- kunmap_atomic(from - sg->offset);
+ sg_unmap(sg, from, SG_KMAP_ATOMIC);
}
+
+ return 0;
}

static void free_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd)
@@ -311,7 +316,7 @@ static void free_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd)
DATA_BLOCK_BITS);
}

-static void gather_data_area(struct tcmu_dev *udev, unsigned long *cmd_bitmap,
+static int gather_data_area(struct tcmu_dev *udev, unsigned long *cmd_bitmap,
struct scatterlist *data_sg, unsigned int data_nents)
{
int i, block;
@@ -322,7 +327,10 @@ static void gather_data_area(struct tcmu_dev *udev, unsigned long *cmd_bitmap,

for_each_sg(data_sg, sg, data_nents, i) {
int sg_remaining = sg->length;
- to = kmap_atomic(sg_page(sg)) + sg->offset;
+ to = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(to))
+ return PTR_ERR(to);
+
while (sg_remaining > 0) {
if (block_remaining == 0) {
block = find_first_bit(cmd_bitmap,
@@ -342,8 +350,10 @@ static void gather_data_area(struct tcmu_dev *udev, unsigned long *cmd_bitmap,
sg_remaining -= copy_bytes;
block_remaining -= copy_bytes;
}
- kunmap_atomic(to - sg->offset);
+ sg_unmap(sg, to, SG_KMAP_ATOMIC);
}
+
+ return 0;
}

static inline size_t spc_bitmap_free(unsigned long *bitmap)
@@ -505,15 +515,18 @@ tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
iov_cnt = 0;
copy_to_data_area = (se_cmd->data_direction == DMA_TO_DEVICE
|| se_cmd->se_cmd_flags & SCF_BIDI);
- alloc_and_scatter_data_area(udev, se_cmd->t_data_sg,
- se_cmd->t_data_nents, &iov, &iov_cnt, copy_to_data_area);
+ if (alloc_and_scatter_data_area(udev, se_cmd->t_data_sg,
+ se_cmd->t_data_nents, &iov, &iov_cnt, copy_to_data_area))
+ return TCM_OUT_OF_RESOURCES;
+
entry->req.iov_cnt = iov_cnt;
entry->req.iov_dif_cnt = 0;

/* Handle BIDI commands */
iov_cnt = 0;
- alloc_and_scatter_data_area(udev, se_cmd->t_bidi_data_sg,
- se_cmd->t_bidi_data_nents, &iov, &iov_cnt, false);
+ if (alloc_and_scatter_data_area(udev, se_cmd->t_bidi_data_sg,
+ se_cmd->t_bidi_data_nents, &iov, &iov_cnt, false))
+ return TCM_OUT_OF_RESOURCES;
entry->req.iov_bidi_cnt = iov_cnt;

/* cmd's data_bitmap is what changed in process */
@@ -596,15 +609,17 @@ static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry *

/* Get Data-In buffer before clean up */
bitmap_copy(bitmap, cmd->data_bitmap, DATA_BLOCK_BITS);
- gather_data_area(udev, bitmap,
- se_cmd->t_bidi_data_sg, se_cmd->t_bidi_data_nents);
+ if (gather_data_area(udev, bitmap,
+ se_cmd->t_bidi_data_sg, se_cmd->t_bidi_data_nents))
+ entry->rsp.scsi_status = SAM_STAT_CHECK_CONDITION;
free_data_area(udev, cmd);
} else if (se_cmd->data_direction == DMA_FROM_DEVICE) {
DECLARE_BITMAP(bitmap, DATA_BLOCK_BITS);

bitmap_copy(bitmap, cmd->data_bitmap, DATA_BLOCK_BITS);
- gather_data_area(udev, bitmap,
- se_cmd->t_data_sg, se_cmd->t_data_nents);
+ if (gather_data_area(udev, bitmap,
+ se_cmd->t_data_sg, se_cmd->t_data_nents))
+ entry->rsp.scsi_status = SAM_STAT_CHECK_CONDITION;
free_data_area(udev, cmd);
} else if (se_cmd->data_direction == DMA_TO_DEVICE) {
free_data_area(udev, cmd);
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index 1b0f447..c39ecd9 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -82,8 +82,8 @@ sector_t sbc_get_write_same_sectors(struct se_cmd *cmd);
void sbc_dif_generate(struct se_cmd *);
sense_reason_t sbc_dif_verify(struct se_cmd *, sector_t, unsigned int,
unsigned int, struct scatterlist *, int);
-void sbc_dif_copy_prot(struct se_cmd *, unsigned int, bool,
- struct scatterlist *, int);
+int sbc_dif_copy_prot(struct se_cmd *, unsigned int, bool,
+ struct scatterlist *, int);
void transport_set_vpd_proto_id(struct t10_vpd *, unsigned char *);
int transport_set_vpd_assoc(struct t10_vpd *, unsigned char *);
int transport_set_vpd_ident_type(struct t10_vpd *, unsigned char *);
--
2.1.4

2017-04-13 22:13:50

by Logan Gunthorpe

Subject: [PATCH 22/22] memstick: Make use of the new sg_map helper function

Straightforward conversion, but we have to WARN if unmappable
memory finds its way into the sgl.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/memstick/host/jmb38x_ms.c | 23 ++++++++++++++++++-----
drivers/memstick/host/tifm_ms.c | 22 +++++++++++++++++-----
2 files changed, 35 insertions(+), 10 deletions(-)

diff --git a/drivers/memstick/host/jmb38x_ms.c b/drivers/memstick/host/jmb38x_ms.c
index 48db922..256cf41 100644
--- a/drivers/memstick/host/jmb38x_ms.c
+++ b/drivers/memstick/host/jmb38x_ms.c
@@ -303,7 +303,6 @@ static int jmb38x_ms_transfer_data(struct jmb38x_ms_host *host)
unsigned int off;
unsigned int t_size, p_cnt;
unsigned char *buf;
- struct page *pg;
unsigned long flags = 0;

if (host->req->long_data) {
@@ -318,14 +317,26 @@ static int jmb38x_ms_transfer_data(struct jmb38x_ms_host *host)
unsigned int uninitialized_var(p_off);

if (host->req->long_data) {
- pg = nth_page(sg_page(&host->req->sg),
- off >> PAGE_SHIFT);
p_off = offset_in_page(off);
p_cnt = PAGE_SIZE - p_off;
p_cnt = min(p_cnt, length);

local_irq_save(flags);
- buf = kmap_atomic(pg) + p_off;
+ buf = sg_map_offset(&host->req->sg,
+ off - host->req->sg.offset,
+ SG_KMAP_ATOMIC);
+ if (IS_ERR(buf)) {
+ /*
+ * This should really never happen unless
+ * the code is changed to use memory that is
+ * not mappable in the sg. Seeing there doesn't
+ * seem to be any error path out of here,
+ * we can only WARN.
+ */
+ WARN(1, "Non-mappable memory used in sg!");
+ break;
+ }
+
} else {
buf = host->req->data + host->block_pos;
p_cnt = host->req->data_len - host->block_pos;
@@ -341,7 +352,9 @@ static int jmb38x_ms_transfer_data(struct jmb38x_ms_host *host)
: jmb38x_ms_read_reg_data(host, buf, p_cnt);

if (host->req->long_data) {
- kunmap_atomic(buf - p_off);
+ sg_unmap_offset(&host->req->sg, buf,
+ off - host->req->sg.offset,
+ SG_KMAP_ATOMIC);
local_irq_restore(flags);
}

diff --git a/drivers/memstick/host/tifm_ms.c b/drivers/memstick/host/tifm_ms.c
index 7bafa72..c0bc40e 100644
--- a/drivers/memstick/host/tifm_ms.c
+++ b/drivers/memstick/host/tifm_ms.c
@@ -186,7 +186,6 @@ static unsigned int tifm_ms_transfer_data(struct tifm_ms *host)
unsigned int off;
unsigned int t_size, p_cnt;
unsigned char *buf;
- struct page *pg;
unsigned long flags = 0;

if (host->req->long_data) {
@@ -203,14 +202,25 @@ static unsigned int tifm_ms_transfer_data(struct tifm_ms *host)
unsigned int uninitialized_var(p_off);

if (host->req->long_data) {
- pg = nth_page(sg_page(&host->req->sg),
- off >> PAGE_SHIFT);
p_off = offset_in_page(off);
p_cnt = PAGE_SIZE - p_off;
p_cnt = min(p_cnt, length);

local_irq_save(flags);
- buf = kmap_atomic(pg) + p_off;
+ buf = sg_map_offset(&host->req->sg,
+ off - host->req->sg.offset,
+ SG_KMAP_ATOMIC);
+ if (IS_ERR(buf)) {
+ /*
+ * This should really never happen unless
+ * the code is changed to use memory that is
+ * not mappable in the sg. Seeing there doesn't
+ * seem to be any error path out of here,
+ * we can only WARN.
+ */
+ WARN(1, "Non-mappable memory used in sg!");
+ break;
+ }
} else {
buf = host->req->data + host->block_pos;
p_cnt = host->req->data_len - host->block_pos;
@@ -221,7 +231,9 @@ static unsigned int tifm_ms_transfer_data(struct tifm_ms *host)
: tifm_ms_read_data(host, buf, p_cnt);

if (host->req->long_data) {
- kunmap_atomic(buf - p_off);
+ sg_unmap_offset(&host->req->sg, buf,
+ off - host->req->sg.offset,
+ SG_KMAP_ATOMIC);
local_irq_restore(flags);
}

--
2.1.4

2017-04-13 22:13:45

by Logan Gunthorpe

[permalink] [raw]
Subject: [PATCH 18/22] mmc: spi: Make use of the new sg_map helper function

We use the sg_map helper, but the conversion is slightly more
complicated here: the error is only checked when the mapping actually
gets used, so if the mapping failed but was never needed, no error
occurs.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/mmc/host/mmc_spi.c | 26 +++++++++++++++++++-------
1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c
index e77d79c..82f786d 100644
--- a/drivers/mmc/host/mmc_spi.c
+++ b/drivers/mmc/host/mmc_spi.c
@@ -676,9 +676,15 @@ mmc_spi_writeblock(struct mmc_spi_host *host, struct spi_transfer *t,
struct scratch *scratch = host->data;
u32 pattern;

- if (host->mmc->use_spi_crc)
+ if (host->mmc->use_spi_crc) {
+ if (IS_ERR(t->tx_buf))
+ return PTR_ERR(t->tx_buf);
+
scratch->crc_val = cpu_to_be16(
crc_itu_t(0, t->tx_buf, t->len));
+ t->tx_buf += t->len;
+ }
+
if (host->dma_dev)
dma_sync_single_for_device(host->dma_dev,
host->data_dma, sizeof(*scratch),
@@ -743,7 +749,6 @@ mmc_spi_writeblock(struct mmc_spi_host *host, struct spi_transfer *t,
return status;
}

- t->tx_buf += t->len;
if (host->dma_dev)
t->tx_dma += t->len;

@@ -809,6 +814,11 @@ mmc_spi_readblock(struct mmc_spi_host *host, struct spi_transfer *t,
}
leftover = status << 1;

+ if (bitshift || host->mmc->use_spi_crc) {
+ if (IS_ERR(t->rx_buf))
+ return PTR_ERR(t->rx_buf);
+ }
+
if (host->dma_dev) {
dma_sync_single_for_device(host->dma_dev,
host->data_dma, sizeof(*scratch),
@@ -860,9 +870,10 @@ mmc_spi_readblock(struct mmc_spi_host *host, struct spi_transfer *t,
scratch->crc_val, crc, t->len);
return -EILSEQ;
}
+
+ t->rx_buf += t->len;
}

- t->rx_buf += t->len;
if (host->dma_dev)
t->rx_dma += t->len;

@@ -936,11 +947,11 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
}

/* allow pio too; we don't allow highmem */
- kmap_addr = kmap(sg_page(sg));
+ kmap_addr = sg_map(sg, SG_KMAP);
if (direction == DMA_TO_DEVICE)
- t->tx_buf = kmap_addr + sg->offset;
+ t->tx_buf = kmap_addr;
else
- t->rx_buf = kmap_addr + sg->offset;
+ t->rx_buf = kmap_addr;

/* transfer each block, and update request status */
while (length) {
@@ -970,7 +981,8 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
/* discard mappings */
if (direction == DMA_FROM_DEVICE)
flush_kernel_dcache_page(sg_page(sg));
- kunmap(sg_page(sg));
+ if (!IS_ERR(kmap_addr))
+ sg_unmap(sg, kmap_addr, SG_KMAP);
if (dma_dev)
dma_unmap_page(dma_dev, dma_addr, PAGE_SIZE, dir);

--
2.1.4

2017-04-13 22:15:19

by Logan Gunthorpe

[permalink] [raw]
Subject: [PATCH 03/22] libiscsi: Make use of the new sg_map helper function

Convert the kmap and kmap_atomic uses to the sg_map function. We now
store the flags for the kmap instead of a boolean to indicate
atomicity. We also propagate a possible kmap error down and create
a new ISCSI_TCP_INTERNAL_ERR error type for this.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/scsi/cxgbi/libcxgbi.c | 5 +++++
drivers/scsi/libiscsi_tcp.c | 32 ++++++++++++++++++++------------
include/scsi/libiscsi_tcp.h | 3 ++-
3 files changed, 27 insertions(+), 13 deletions(-)

diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c
index bd7d39e..e38d0c1 100644
--- a/drivers/scsi/cxgbi/libcxgbi.c
+++ b/drivers/scsi/cxgbi/libcxgbi.c
@@ -1556,6 +1556,11 @@ static inline int read_pdu_skb(struct iscsi_conn *conn,
*/
iscsi_conn_printk(KERN_ERR, conn, "Invalid pdu or skb.");
return -EFAULT;
+ case ISCSI_TCP_INTERNAL_ERR:
+ pr_info("skb 0x%p, off %u, %d, TCP_INTERNAL_ERR.\n",
+ skb, offset, offloaded);
+ iscsi_conn_printk(KERN_ERR, conn, "Internal error.");
+ return -EFAULT;
case ISCSI_TCP_SEGMENT_DONE:
log_debug(1 << CXGBI_DBG_PDU_RX,
"skb 0x%p, off %u, %d, TCP_SEG_DONE, rc %d.\n",
diff --git a/drivers/scsi/libiscsi_tcp.c b/drivers/scsi/libiscsi_tcp.c
index 63a1d69..a2427699 100644
--- a/drivers/scsi/libiscsi_tcp.c
+++ b/drivers/scsi/libiscsi_tcp.c
@@ -133,25 +133,23 @@ static void iscsi_tcp_segment_map(struct iscsi_segment *segment, int recv)
if (page_count(sg_page(sg)) >= 1 && !recv)
return;

- if (recv) {
- segment->atomic_mapped = true;
- segment->sg_mapped = kmap_atomic(sg_page(sg));
- } else {
- segment->atomic_mapped = false;
- /* the xmit path can sleep with the page mapped so use kmap */
- segment->sg_mapped = kmap(sg_page(sg));
+ /* the xmit path can sleep with the page mapped so don't use atomic */
+ segment->sg_map_flags = recv ? SG_KMAP_ATOMIC : SG_KMAP;
+ segment->sg_mapped = sg_map(sg, segment->sg_map_flags);
+
+ if (IS_ERR(segment->sg_mapped)) {
+ segment->sg_mapped = NULL;
+ return;
}

- segment->data = segment->sg_mapped + sg->offset + segment->sg_offset;
+ segment->data = segment->sg_mapped + segment->sg_offset;
}

void iscsi_tcp_segment_unmap(struct iscsi_segment *segment)
{
if (segment->sg_mapped) {
- if (segment->atomic_mapped)
- kunmap_atomic(segment->sg_mapped);
- else
- kunmap(sg_page(segment->sg));
+ sg_unmap(segment->sg, segment->sg_mapped,
+ segment->sg_map_flags);
segment->sg_mapped = NULL;
segment->data = NULL;
}
@@ -304,6 +302,9 @@ iscsi_tcp_segment_recv(struct iscsi_tcp_conn *tcp_conn,
break;
}

+ if (segment->data)
+ return -EFAULT;
+
copy = min(len - copied, segment->size - segment->copied);
ISCSI_DBG_TCP(tcp_conn->iscsi_conn, "copying %d\n", copy);
memcpy(segment->data + segment->copied, ptr + copied, copy);
@@ -927,6 +928,13 @@ int iscsi_tcp_recv_skb(struct iscsi_conn *conn, struct sk_buff *skb,
avail);
rc = iscsi_tcp_segment_recv(tcp_conn, segment, ptr, avail);
BUG_ON(rc == 0);
+ if (rc < 0) {
+ ISCSI_DBG_TCP(conn, "memory fault. Consumed %d\n",
+ consumed);
+ *status = ISCSI_TCP_INTERNAL_ERR;
+ goto skb_done;
+ }
+
consumed += rc;

if (segment->total_copied >= segment->total_size) {
diff --git a/include/scsi/libiscsi_tcp.h b/include/scsi/libiscsi_tcp.h
index 30520d5..58c79af 100644
--- a/include/scsi/libiscsi_tcp.h
+++ b/include/scsi/libiscsi_tcp.h
@@ -47,7 +47,7 @@ struct iscsi_segment {
struct scatterlist *sg;
void *sg_mapped;
unsigned int sg_offset;
- bool atomic_mapped;
+ int sg_map_flags;

iscsi_segment_done_fn_t *done;
};
@@ -92,6 +92,7 @@ enum {
ISCSI_TCP_SKB_DONE, /* skb is out of data */
ISCSI_TCP_CONN_ERR, /* iscsi layer has fired a conn err */
ISCSI_TCP_SUSPENDED, /* conn is suspended */
+ ISCSI_TCP_INTERNAL_ERR, /* an internal error occurred */
};

extern void iscsi_tcp_hdr_recv_prep(struct iscsi_tcp_conn *tcp_conn);
--
2.1.4

2017-04-13 22:15:25

by Logan Gunthorpe

[permalink] [raw]
Subject: [PATCH 07/22] crypto: shash, caam: Make use of the new sg_map helper function

Very straightforward conversion to the new function in two crypto
drivers.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
crypto/shash.c | 9 ++++++---
drivers/crypto/caam/caamalg.c | 8 +++-----
2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/crypto/shash.c b/crypto/shash.c
index 5e31c8d..2b7de94 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -283,10 +283,13 @@ int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
if (nbytes < min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset)) {
void *data;

- data = kmap_atomic(sg_page(sg));
- err = crypto_shash_digest(desc, data + offset, nbytes,
+ data = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(data))
+ return PTR_ERR(data);
+
+ err = crypto_shash_digest(desc, data, nbytes,
req->result);
- kunmap_atomic(data);
+ sg_unmap(sg, data, SG_KMAP_ATOMIC);
crypto_yield(desc->flags);
} else
err = crypto_shash_init(desc) ?:
diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 9bc80eb..76b97de 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -89,7 +89,6 @@ static void dbg_dump_sg(const char *level, const char *prefix_str,
struct scatterlist *sg, size_t tlen, bool ascii)
{
struct scatterlist *it;
- void *it_page;
size_t len;
void *buf;

@@ -98,19 +97,18 @@ static void dbg_dump_sg(const char *level, const char *prefix_str,
* make sure the scatterlist's page
* has a valid virtual memory mapping
*/
- it_page = kmap_atomic(sg_page(it));
- if (unlikely(!it_page)) {
+ buf = sg_map(it, SG_KMAP_ATOMIC);
+ if (IS_ERR(buf)) {
printk(KERN_ERR "dbg_dump_sg: kmap failed\n");
return;
}

- buf = it_page + it->offset;
len = min_t(size_t, tlen, it->length);
print_hex_dump(level, prefix_str, prefix_type, rowsize,
groupsize, buf, len, ascii);
tlen -= len;

- kunmap_atomic(it_page);
+ sg_unmap(it, buf, SG_KMAP_ATOMIC);
}
}
#endif
--
2.1.4

2017-04-13 22:15:29

by Logan Gunthorpe

[permalink] [raw]
Subject: [PATCH 12/22] scsi: ipr, pmcraid, isci: Make use of the new sg_map helper in 4 call sites

Very straightforward conversion of three SCSI drivers.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/scsi/ipr.c | 27 ++++++++++++++-------------
drivers/scsi/isci/request.c | 42 +++++++++++++++++++++++++-----------------
drivers/scsi/pmcraid.c | 19 ++++++++++++-------
3 files changed, 51 insertions(+), 37 deletions(-)

diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
index b29afaf..f98f251 100644
--- a/drivers/scsi/ipr.c
+++ b/drivers/scsi/ipr.c
@@ -3853,7 +3853,7 @@ static void ipr_free_ucode_buffer(struct ipr_sglist *sglist)
static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist,
u8 *buffer, u32 len)
{
- int bsize_elem, i, result = 0;
+ int bsize_elem, i;
struct scatterlist *scatterlist;
void *kaddr;

@@ -3863,32 +3863,33 @@ static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist,
scatterlist = sglist->scatterlist;

for (i = 0; i < (len / bsize_elem); i++, buffer += bsize_elem) {
- struct page *page = sg_page(&scatterlist[i]);
+ kaddr = sg_map(&scatterlist[i], SG_KMAP);
+ if (IS_ERR(kaddr)) {
+ ipr_trace;
+ return PTR_ERR(kaddr);
+ }

- kaddr = kmap(page);
memcpy(kaddr, buffer, bsize_elem);
- kunmap(page);
+ sg_unmap(&scatterlist[i], kaddr, SG_KMAP);

scatterlist[i].length = bsize_elem;
-
- if (result != 0) {
- ipr_trace;
- return result;
- }
}

if (len % bsize_elem) {
- struct page *page = sg_page(&scatterlist[i]);
+ kaddr = sg_map(&scatterlist[i], SG_KMAP);
+ if (IS_ERR(kaddr)) {
+ ipr_trace;
+ return PTR_ERR(kaddr);
+ }

- kaddr = kmap(page);
memcpy(kaddr, buffer, len % bsize_elem);
- kunmap(page);
+ sg_unmap(&scatterlist[i], kaddr, SG_KMAP);

scatterlist[i].length = len % bsize_elem;
}

sglist->buffer_len = len;
- return result;
+ return 0;
}

/**
diff --git a/drivers/scsi/isci/request.c b/drivers/scsi/isci/request.c
index 47f66e9..66d6596 100644
--- a/drivers/scsi/isci/request.c
+++ b/drivers/scsi/isci/request.c
@@ -1424,12 +1424,14 @@ sci_stp_request_pio_data_in_copy_data_buffer(struct isci_stp_request *stp_req,
sg = task->scatter;

while (total_len > 0) {
- struct page *page = sg_page(sg);
-
copy_len = min_t(int, total_len, sg_dma_len(sg));
- kaddr = kmap_atomic(page);
- memcpy(kaddr + sg->offset, src_addr, copy_len);
- kunmap_atomic(kaddr);
+ kaddr = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(kaddr))
+ return SCI_FAILURE;
+
+ memcpy(kaddr, src_addr, copy_len);
+ sg_unmap(sg, kaddr, SG_KMAP_ATOMIC);
+
total_len -= copy_len;
src_addr += copy_len;
sg = sg_next(sg);
@@ -1771,14 +1773,16 @@ sci_io_request_frame_handler(struct isci_request *ireq,
case SCI_REQ_SMP_WAIT_RESP: {
struct sas_task *task = isci_request_access_task(ireq);
struct scatterlist *sg = &task->smp_task.smp_resp;
- void *frame_header, *kaddr;
+ void *frame_header;
u8 *rsp;

sci_unsolicited_frame_control_get_header(&ihost->uf_control,
frame_index,
&frame_header);
- kaddr = kmap_atomic(sg_page(sg));
- rsp = kaddr + sg->offset;
+ rsp = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(rsp))
+ return SCI_FAILURE;
+
sci_swab32_cpy(rsp, frame_header, 1);

if (rsp[0] == SMP_RESPONSE) {
@@ -1814,7 +1818,7 @@ sci_io_request_frame_handler(struct isci_request *ireq,
ireq->sci_status = SCI_FAILURE_CONTROLLER_SPECIFIC_IO_ERR;
sci_change_state(&ireq->sm, SCI_REQ_COMPLETED);
}
- kunmap_atomic(kaddr);
+ sg_unmap(sg, rsp, SG_KMAP_ATOMIC);

sci_controller_release_frame(ihost, frame_index);

@@ -2919,15 +2923,18 @@ static void isci_request_io_request_complete(struct isci_host *ihost,
case SAS_PROTOCOL_SMP: {
struct scatterlist *sg = &task->smp_task.smp_req;
struct smp_req *smp_req;
- void *kaddr;

dma_unmap_sg(&ihost->pdev->dev, sg, 1, DMA_TO_DEVICE);

/* need to swab it back in case the command buffer is re-used */
- kaddr = kmap_atomic(sg_page(sg));
- smp_req = kaddr + sg->offset;
+ smp_req = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(smp_req)) {
+ status = SAS_ABORTED_TASK;
+ break;
+ }
+
sci_swab32_cpy(smp_req, smp_req, sg->length / sizeof(u32));
- kunmap_atomic(kaddr);
+ sg_unmap(sg, smp_req, SG_KMAP_ATOMIC);
break;
}
default:
@@ -3190,12 +3197,13 @@ sci_io_request_construct_smp(struct device *dev,
struct scu_task_context *task_context;
struct isci_port *iport;
struct smp_req *smp_req;
- void *kaddr;
u8 req_len;
u32 cmd;

- kaddr = kmap_atomic(sg_page(sg));
- smp_req = kaddr + sg->offset;
+ smp_req = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(smp_req))
+ return SCI_FAILURE;
+
/*
* Look at the SMP requests' header fields; for certain SAS 1.x SMP
* functions under SAS 2.0, a zero request length really indicates
@@ -3220,7 +3228,7 @@ sci_io_request_construct_smp(struct device *dev,
req_len = smp_req->req_len;
sci_swab32_cpy(smp_req, smp_req, sg->length / sizeof(u32));
cmd = *(u32 *) smp_req;
- kunmap_atomic(kaddr);
+ sg_unmap(sg, smp_req, SG_KMAP_ATOMIC);

if (!dma_map_sg(dev, sg, 1, DMA_TO_DEVICE))
return SCI_FAILURE;
diff --git a/drivers/scsi/pmcraid.c b/drivers/scsi/pmcraid.c
index 49e70a3..af1903e 100644
--- a/drivers/scsi/pmcraid.c
+++ b/drivers/scsi/pmcraid.c
@@ -3342,9 +3342,12 @@ static int pmcraid_copy_sglist(
scatterlist = sglist->scatterlist;

for (i = 0; i < (len / bsize_elem); i++, buffer += bsize_elem) {
- struct page *page = sg_page(&scatterlist[i]);
+ kaddr = sg_map(&scatterlist[i], SG_KMAP);
+ if (IS_ERR(kaddr)) {
+ pmcraid_err("failed to copy user data into sg list\n");
+ return PTR_ERR(kaddr);
+ }

- kaddr = kmap(page);
if (direction == DMA_TO_DEVICE)
rc = __copy_from_user(kaddr,
(void *)buffer,
@@ -3352,7 +3355,7 @@ static int pmcraid_copy_sglist(
else
rc = __copy_to_user((void *)buffer, kaddr, bsize_elem);

- kunmap(page);
+ sg_unmap(&scatterlist[i], kaddr, SG_KMAP);

if (rc) {
pmcraid_err("failed to copy user data into sg list\n");
@@ -3363,9 +3366,11 @@ static int pmcraid_copy_sglist(
}

if (len % bsize_elem) {
- struct page *page = sg_page(&scatterlist[i]);
-
- kaddr = kmap(page);
+ kaddr = sg_map(&scatterlist[i], SG_KMAP);
+ if (IS_ERR(kaddr)) {
+ pmcraid_err("failed to copy user data into sg list\n");
+ return PTR_ERR(kaddr);
+ }

if (direction == DMA_TO_DEVICE)
rc = __copy_from_user(kaddr,
@@ -3376,7 +3381,7 @@ static int pmcraid_copy_sglist(
kaddr,
len % bsize_elem);

- kunmap(page);
+ sg_unmap(&scatterlist[i], kaddr, SG_KMAP);

scatterlist[i].length = len % bsize_elem;
}
--
2.1.4

2017-04-13 22:15:12

by Logan Gunthorpe

[permalink] [raw]
Subject: [PATCH 10/22] staging: unisys: visorbus: Make use of the new sg_map helper function

Straightforward conversion to the new function.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/staging/unisys/visorhba/visorhba_main.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/staging/unisys/visorhba/visorhba_main.c b/drivers/staging/unisys/visorhba/visorhba_main.c
index 0ce92c8..2d8c8bc 100644
--- a/drivers/staging/unisys/visorhba/visorhba_main.c
+++ b/drivers/staging/unisys/visorhba/visorhba_main.c
@@ -842,7 +842,6 @@ do_scsi_nolinuxstat(struct uiscmdrsp *cmdrsp, struct scsi_cmnd *scsicmd)
struct scatterlist *sg;
unsigned int i;
char *this_page;
- char *this_page_orig;
int bufind = 0;
struct visordisk_info *vdisk;
struct visorhba_devdata *devdata;
@@ -869,11 +868,14 @@ do_scsi_nolinuxstat(struct uiscmdrsp *cmdrsp, struct scsi_cmnd *scsicmd)

sg = scsi_sglist(scsicmd);
for (i = 0; i < scsi_sg_count(scsicmd); i++) {
- this_page_orig = kmap_atomic(sg_page(sg + i));
- this_page = (void *)((unsigned long)this_page_orig |
- sg[i].offset);
+ this_page = sg_map(sg + i, SG_KMAP_ATOMIC);
+ if (IS_ERR(this_page)) {
+ scsicmd->result = DID_ERROR << 16;
+ return;
+ }
+
memcpy(this_page, buf + bufind, sg[i].length);
- kunmap_atomic(this_page_orig);
+ sg_unmap(sg + i, this_page, SG_KMAP_ATOMIC);
}
} else {
devdata = (struct visorhba_devdata *)scsidev->host->hostdata;
--
2.1.4

2017-04-13 22:17:06

by Logan Gunthorpe

[permalink] [raw]
Subject: [PATCH 15/22] scsi: libfc, csiostor: Change to sg_copy_buffer in two drivers

These two drivers appear to duplicate the functionality of
sg_copy_buffer. So we clean them up to use the common code.

This helps us remove a couple of instances that would otherwise be
slightly tricky sg_map usages.

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/scsi/csiostor/csio_scsi.c | 54 +++------------------------------------
drivers/scsi/libfc/fc_libfc.c | 49 ++++++++---------------------------
2 files changed, 14 insertions(+), 89 deletions(-)

diff --git a/drivers/scsi/csiostor/csio_scsi.c b/drivers/scsi/csiostor/csio_scsi.c
index a1ff75f..bd9d062 100644
--- a/drivers/scsi/csiostor/csio_scsi.c
+++ b/drivers/scsi/csiostor/csio_scsi.c
@@ -1489,60 +1489,14 @@ static inline uint32_t
csio_scsi_copy_to_sgl(struct csio_hw *hw, struct csio_ioreq *req)
{
struct scsi_cmnd *scmnd = (struct scsi_cmnd *)csio_scsi_cmnd(req);
- struct scatterlist *sg;
- uint32_t bytes_left;
- uint32_t bytes_copy;
- uint32_t buf_off = 0;
- uint32_t start_off = 0;
- uint32_t sg_off = 0;
- void *sg_addr;
- void *buf_addr;
struct csio_dma_buf *dma_buf;
+ size_t copied;

- bytes_left = scsi_bufflen(scmnd);
- sg = scsi_sglist(scmnd);
dma_buf = (struct csio_dma_buf *)csio_list_next(&req->gen_list);
+ copied = sg_copy_from_buffer(scsi_sglist(scmnd), scsi_sg_count(scmnd),
+ dma_buf->vaddr, scsi_bufflen(scmnd));

- /* Copy data from driver buffer to SGs of SCSI CMD */
- while (bytes_left > 0 && sg && dma_buf) {
- if (buf_off >= dma_buf->len) {
- buf_off = 0;
- dma_buf = (struct csio_dma_buf *)
- csio_list_next(dma_buf);
- continue;
- }
-
- if (start_off >= sg->length) {
- start_off -= sg->length;
- sg = sg_next(sg);
- continue;
- }
-
- buf_addr = dma_buf->vaddr + buf_off;
- sg_off = sg->offset + start_off;
- bytes_copy = min((dma_buf->len - buf_off),
- sg->length - start_off);
- bytes_copy = min((uint32_t)(PAGE_SIZE - (sg_off & ~PAGE_MASK)),
- bytes_copy);
-
- sg_addr = kmap_atomic(sg_page(sg) + (sg_off >> PAGE_SHIFT));
- if (!sg_addr) {
- csio_err(hw, "failed to kmap sg:%p of ioreq:%p\n",
- sg, req);
- break;
- }
-
- csio_dbg(hw, "copy_to_sgl:sg_addr %p sg_off %d buf %p len %d\n",
- sg_addr, sg_off, buf_addr, bytes_copy);
- memcpy(sg_addr + (sg_off & ~PAGE_MASK), buf_addr, bytes_copy);
- kunmap_atomic(sg_addr);
-
- start_off += bytes_copy;
- buf_off += bytes_copy;
- bytes_left -= bytes_copy;
- }
-
- if (bytes_left > 0)
+ if (copied != scsi_bufflen(scmnd))
return DID_ERROR;
else
return DID_OK;
diff --git a/drivers/scsi/libfc/fc_libfc.c b/drivers/scsi/libfc/fc_libfc.c
index d623d08..ce0805a 100644
--- a/drivers/scsi/libfc/fc_libfc.c
+++ b/drivers/scsi/libfc/fc_libfc.c
@@ -113,45 +113,16 @@ u32 fc_copy_buffer_to_sglist(void *buf, size_t len,
u32 *nents, size_t *offset,
u32 *crc)
{
- size_t remaining = len;
- u32 copy_len = 0;
-
- while (remaining > 0 && sg) {
- size_t off, sg_bytes;
- void *page_addr;
-
- if (*offset >= sg->length) {
- /*
- * Check for end and drop resources
- * from the last iteration.
- */
- if (!(*nents))
- break;
- --(*nents);
- *offset -= sg->length;
- sg = sg_next(sg);
- continue;
- }
- sg_bytes = min(remaining, sg->length - *offset);
-
- /*
- * The scatterlist item may be bigger than PAGE_SIZE,
- * but we are limited to mapping PAGE_SIZE at a time.
- */
- off = *offset + sg->offset;
- sg_bytes = min(sg_bytes,
- (size_t)(PAGE_SIZE - (off & ~PAGE_MASK)));
- page_addr = kmap_atomic(sg_page(sg) + (off >> PAGE_SHIFT));
- if (crc)
- *crc = crc32(*crc, buf, sg_bytes);
- memcpy((char *)page_addr + (off & ~PAGE_MASK), buf, sg_bytes);
- kunmap_atomic(page_addr);
- buf += sg_bytes;
- *offset += sg_bytes;
- remaining -= sg_bytes;
- copy_len += sg_bytes;
- }
- return copy_len;
+ size_t copied;
+
+ copied = sg_pcopy_from_buffer(sg, sg_nents(sg),
+ buf, len, *offset);
+
+ *offset += copied;
+ if (crc)
+ *crc = crc32(*crc, buf, copied);
+
+ return copied;
}

/**
--
2.1.4

2017-04-14 04:59:58

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 02/22] nvmet: Make use of the new sg_map helper function

On Thu, Apr 13, 2017 at 04:05:15PM -0600, Logan Gunthorpe wrote:
> This is a straightforward conversion in two places. Should kmap fail,
> the code will return an INVALID_DATA error in the completion.

It really should be using nvmet_copy_from_sgl to make things safer,
as we don't want to rely on any particular SG list layout. In fact
I'm pretty sure I did the conversion at some point, but it must never
have made it upstream.

2017-04-14 05:06:41

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 02/22] nvmet: Make use of the new sg_map helper function



On 13/04/17 10:59 PM, Christoph Hellwig wrote:
> On Thu, Apr 13, 2017 at 04:05:15PM -0600, Logan Gunthorpe wrote:
>> This is a straightforward conversion in two places. Should kmap fail,
>> the code will return an INVALID_DATA error in the completion.
>
> It really should be using nvmet_copy_from_sgl to make things safer,
> as we don't want to rely on any particular SG list layout. In fact
> I'm pretty sure I did the conversion at some point, but it must never
> have made it upstream.

Ha, I did the conversion too a couple times for my RFC series. I can
change this patch to do that. Or maybe I'll just send a patch for that
separately seeing it doesn't depend on anything and is pretty simple. I
can do that next week.

Thanks,

Logan

2017-04-14 05:18:55

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 02/22] nvmet: Make use of the new sg_map helper function

On Thu, Apr 13, 2017 at 11:06:16PM -0600, Logan Gunthorpe wrote:
> Or maybe I'll just send a patch for that
> separately seeing it doesn't depend on anything and is pretty simple. I
> can do that next week.

Yes, please just send that patch linux-nvme, we should be able to get
it into 4.12.

2017-04-14 08:35:25

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 01/22] scatterlist: Introduce sg_map helper functions

> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 0007b79..b95934b 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -37,6 +37,9 @@
>
> #include <uapi/linux/dma-buf.h>
>
> +/* Prevent the highmem.h macro from aliasing ops->kunmap_atomic */
> +#undef kunmap_atomic
> +
> static inline int is_dma_buf_file(struct file *);
>
> struct dma_buf_list {

I think the right fix here is to rename the operation to unmap_atomic
and send out a little patch for that ASAP.

> + * Flags can be any of:
> + * * SG_KMAP - Use kmap to create the mapping
> + * SG_KMAP_ATOMIC - Use kmap_atomic to map the page atomically.
> + * Thus, the rules of that function apply: the cpu
> + * may not sleep until it is unmapped.
> + *
> + * Also, consider carefully whether this function is appropriate. It is
> + * largely not recommended for new code and if the sgl came from another
> + * subsystem and you don't know what kind of memory might be in the list
> + * then you definitely should not call it. Non-mappable memory may be in
> + * the sgl and thus this function may fail unexpectedly.
> + **/
> +static inline void *sg_map_offset(struct scatterlist *sg, size_t offset,
> + int flags)

I'd rather have separate functions for kmap vs kmap_atomic instead of
the flags parameter. And while you're at it just always pass the 0
offset parameter instead of adding a wrapper..

Otherwise this looks good to me.

2017-04-14 08:36:14

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 03/22] libiscsi: Make use of the new sg_map helper function

On Thu, Apr 13, 2017 at 04:05:16PM -0600, Logan Gunthorpe wrote:
> Convert the kmap and kmap_atomic uses to the sg_map function. We now
> store the flags for the kmap instead of a boolean to indicate
> atomicity. We also propagate a possible kmap error down and create
> a new ISCSI_TCP_INTERNAL_ERR error type for this.

Can you split out the new error handling into a separate prep patch
which should go to the iscsi maintainers ASAP?

2017-04-14 08:39:29

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 09/22] dm-crypt: Make use of the new sg_map helper in 4 call sites

On Thu, Apr 13, 2017 at 04:05:22PM -0600, Logan Gunthorpe wrote:
> Very straightforward conversion to the new function in all four spots.

I think the right fix here is to switch dm-crypt to the ahash API
that takes a scatterlist.

2017-04-14 15:35:00

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 01/22] scatterlist: Introduce sg_map helper functions



On 14/04/17 02:35 AM, Christoph Hellwig wrote:
>> +
>> static inline int is_dma_buf_file(struct file *);
>>
>> struct dma_buf_list {
>
> I think the right fix here is to rename the operation to unmap_atomic
> and send out a little patch for that ASAP.

Ok, I can do that next week.

> I'd rather have separate functions for kmap vs kmap_atomic instead of
> the flags parameter. And while you're at it just always pass the 0
> offset parameter instead of adding a wrapper..
>
> Otherwise this looks good to me.

I settled on the flags because I thought the interface could be expanded
to do more things like automatically copy iomem to a bounce buffer (with
a flag). It'd also be possible to add things like vmap and
physical_address to the interface which would cover even more sg_page
users. All the implementations would then share the common offset
calculations, and switching between them becomes a matter of changing a
couple flags.

If you're still not convinced by the above arguments then I'll change
it but I did have reasons for choosing to do it this way.

I am fine with removing the offset versions. I will make that change.

Thanks,

Logan

2017-04-14 15:37:43

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 03/22] libiscsi: Make use of the new sg_map helper function



On 14/04/17 02:36 AM, Christoph Hellwig wrote:
> On Thu, Apr 13, 2017 at 04:05:16PM -0600, Logan Gunthorpe wrote:
>> Convert the kmap and kmap_atomic uses to the sg_map function. We now
>> store the flags for the kmap instead of a boolean to indicate
>> atomicity. We also propagate a possible kmap error down and create
>> a new ISCSI_TCP_INTERNAL_ERR error type for this.
>
> Can you split out the new error handling into a separate prep patch
> which should go to the iscsi maintainers ASAP?
>

Yes, I can do that. I'd just have thought they'd want to see the use
case for the new error before accepting a patch like that...

Logan

2017-04-14 16:04:05

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 09/22] dm-crypt: Make use of the new sg_map helper in 4 call sites



On 14/04/17 02:39 AM, Christoph Hellwig wrote:
> On Thu, Apr 13, 2017 at 04:05:22PM -0600, Logan Gunthorpe wrote:
>> Very straightforward conversion to the new function in all four spots.
>
> I think the right fix here is to switch dm-crypt to the ahash API
> that takes a scatterlist.

Hmm, well I'm not sure I understand the code enough to make that
conversion, but I was looking at it. One tricky bit seems to be that
crypt_iv_lmk_one adds a seed, skips the first 16 bytes in the page and
then hashes another 16 bytes from other data. What would you do:
construct a new sgl for it and pass that to the ahash API?

The other thing is crypt_iv_lmk_post also seems to modify the page after
the hash with a crypto_xor so you'd still need at least one kmap in there.

Logan

2017-04-14 16:07:28

by Kershner, David A

[permalink] [raw]
Subject: RE: [PATCH 10/22] staging: unisys: visorbus: Make use of the new sg_map helper function

> -----Original Message-----
> From: Logan Gunthorpe [mailto:[email protected]]
...
> Subject: [PATCH 10/22] staging: unisys: visorbus: Make use of the new
> sg_map helper function
>
> Straightforward conversion to the new function.
>
> Signed-off-by: Logan Gunthorpe <[email protected]>

Can you add Acked-by for this patch?

Acked-by: David Kershner <[email protected]>

Tested on s-Par and no problems.

Thanks,
David Kershner

> ---
> drivers/staging/unisys/visorhba/visorhba_main.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/staging/unisys/visorhba/visorhba_main.c
> b/drivers/staging/unisys/visorhba/visorhba_main.c
> index 0ce92c8..2d8c8bc 100644
> --- a/drivers/staging/unisys/visorhba/visorhba_main.c
> +++ b/drivers/staging/unisys/visorhba/visorhba_main.c
> @@ -842,7 +842,6 @@ do_scsi_nolinuxstat(struct uiscmdrsp *cmdrsp, struct
> scsi_cmnd *scsicmd)
> struct scatterlist *sg;
> unsigned int i;
> char *this_page;
> - char *this_page_orig;
> int bufind = 0;
> struct visordisk_info *vdisk;
> struct visorhba_devdata *devdata;
> @@ -869,11 +868,14 @@ do_scsi_nolinuxstat(struct uiscmdrsp *cmdrsp,
> struct scsi_cmnd *scsicmd)
>
> sg = scsi_sglist(scsicmd);
> for (i = 0; i < scsi_sg_count(scsicmd); i++) {
> - this_page_orig = kmap_atomic(sg_page(sg + i));
> - this_page = (void *)((unsigned long)this_page_orig |
> - sg[i].offset);
> + this_page = sg_map(sg + i, SG_KMAP_ATOMIC);
> + if (IS_ERR(this_page)) {
> + scsicmd->result = DID_ERROR << 16;
> + return;
> + }
> +
> memcpy(this_page, buf + bufind, sg[i].length);
> - kunmap_atomic(this_page_orig);
> + sg_unmap(sg + i, this_page, SG_KMAP_ATOMIC);
> }
> } else {
> devdata = (struct visorhba_devdata *)scsidev->host-
> >hostdata;
> --
> 2.1.4


2017-04-14 16:12:54

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 10/22] staging: unisys: visorbus: Make use of the new sg_map helper function

Great, thanks!

Logan

On 14/04/17 10:07 AM, Kershner, David A wrote:
> Can you add Acked-by for this patch?
>
> Acked-by: David Kershner <[email protected]>
>
> Tested on s-Par and no problems.
>
> Thanks,
> David Kershner

2017-04-15 04:53:39

by Harsh Jain

[permalink] [raw]
Subject: Re: [PATCH 08/22] crypto: chcr: Make use of the new sg_map helper function

On Fri, Apr 14, 2017 at 3:35 AM, Logan Gunthorpe <[email protected]> wrote:
> The get_page in this area looks *highly* suspect due to there being no
> corresponding put_page. However, I've left that as is to avoid breaking
> things.
The chcr driver will post the request to the LLD driver cxgb4, and
put_page is implemented there, so it will do no harm. In any case, we
have removed the below code from the driver:

http://www.mail-archive.com/[email protected]/msg24561.html

After this is merged, you can ignore your patch. Thanks

>
> I've also removed the KMAP_ATOMIC_ARGS check as it appears to be dead
> code that dates back to when it was first committed...


>
> Signed-off-by: Logan Gunthorpe <[email protected]>
> ---
> drivers/crypto/chelsio/chcr_algo.c | 28 +++++++++++++++-------------
> 1 file changed, 15 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
> index 41bc7f4..a993d1d 100644
> --- a/drivers/crypto/chelsio/chcr_algo.c
> +++ b/drivers/crypto/chelsio/chcr_algo.c
> @@ -1489,22 +1489,21 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
> return ERR_PTR(-EINVAL);
> }
>
> -static void aes_gcm_empty_pld_pad(struct scatterlist *sg,
> - unsigned short offset)
> +static int aes_gcm_empty_pld_pad(struct scatterlist *sg,
> + unsigned short offset)
> {
> - struct page *spage;
> unsigned char *addr;
>
> - spage = sg_page(sg);
> - get_page(spage); /* so that it is not freed by NIC */
> -#ifdef KMAP_ATOMIC_ARGS
> - addr = kmap_atomic(spage, KM_SOFTIRQ0);
> -#else
> - addr = kmap_atomic(spage);
> -#endif
> - memset(addr + sg->offset, 0, offset + 1);
> + get_page(sg_page(sg)); /* so that it is not freed by NIC */
> +
> + addr = sg_map(sg, SG_KMAP_ATOMIC);
> + if (IS_ERR(addr))
> + return PTR_ERR(addr);
> +
> + memset(addr, 0, offset + 1);
> + sg_unmap(sg, addr, SG_KMAP_ATOMIC);
>
> - kunmap_atomic(addr);
> + return 0;
> }
>
> static int set_msg_len(u8 *block, unsigned int msglen, int csize)
> @@ -1940,7 +1939,10 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req,
> if (req->cryptlen) {
> write_sg_to_skb(skb, &frags, src, req->cryptlen);
> } else {
> - aes_gcm_empty_pld_pad(req->dst, authsize - 1);
> + err = aes_gcm_empty_pld_pad(req->dst, authsize - 1);
> + if (err)
> + goto dstmap_fail;
> +
> write_sg_to_skb(skb, &frags, reqctx->dst, crypt_len);
>
> }
> --
> 2.1.4
>

2017-04-18 06:44:41

by Daniel Vetter

[permalink] [raw]
Subject: Re: [PATCH 05/22] drm/i915: Make use of the new sg_map helper function

On Thu, Apr 13, 2017 at 04:05:18PM -0600, Logan Gunthorpe wrote:
> This is a single straightforward conversion from kmap to sg_map.
>
> Signed-off-by: Logan Gunthorpe <[email protected]>

Acked-by: Daniel Vetter <[email protected]>

Probably makes sense to merge through some other tree, but please be aware
of the considerable churn rate in i915 (i.e. make sure your tree is in
linux-next before you send a pull request for this). Plan B would be to
get the prep patch in first and then merge the i915 conversion one kernel
release later.
-Daniel

> ---
> drivers/gpu/drm/i915/i915_gem.c | 27 ++++++++++++++++-----------
> 1 file changed, 16 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 67b1fc5..1b1b91a 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2188,6 +2188,15 @@ static void __i915_gem_object_reset_page_iter(struct drm_i915_gem_object *obj)
> radix_tree_delete(&obj->mm.get_page.radix, iter.index);
> }
>
> +static void i915_gem_object_unmap(const struct drm_i915_gem_object *obj,
> + void *ptr)
> +{
> + if (is_vmalloc_addr(ptr))
> + vunmap(ptr);
> + else
> + sg_unmap(obj->mm.pages->sgl, ptr, SG_KMAP);
> +}
> +
> void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
> enum i915_mm_subclass subclass)
> {
> @@ -2215,10 +2224,7 @@ void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
> void *ptr;
>
> ptr = ptr_mask_bits(obj->mm.mapping);
> - if (is_vmalloc_addr(ptr))
> - vunmap(ptr);
> - else
> - kunmap(kmap_to_page(ptr));
> + i915_gem_object_unmap(obj, ptr);
>
> obj->mm.mapping = NULL;
> }
> @@ -2475,8 +2481,11 @@ static void *i915_gem_object_map(const struct drm_i915_gem_object *obj,
> void *addr;
>
> /* A single page can always be kmapped */
> - if (n_pages == 1 && type == I915_MAP_WB)
> - return kmap(sg_page(sgt->sgl));
> + if (n_pages == 1 && type == I915_MAP_WB) {
> + addr = sg_map(sgt->sgl, SG_KMAP);
> + if (IS_ERR(addr))
> + return NULL;
> + }
>
> if (n_pages > ARRAY_SIZE(stack_pages)) {
> /* Too big for stack -- allocate temporary array instead */
> @@ -2543,11 +2552,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
> goto err_unpin;
> }
>
> - if (is_vmalloc_addr(ptr))
> - vunmap(ptr);
> - else
> - kunmap(kmap_to_page(ptr));
> -
> + i915_gem_object_unmap(obj, ptr);
> ptr = obj->mm.mapping = NULL;
> }
>
> --
> 2.1.4
>

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2017-04-18 14:14:20

by David Laight

[permalink] [raw]
Subject: RE: [PATCH 16/22] xen-blkfront: Make use of the new sg_map helper function

From: Logan Gunthorpe
> Sent: 13 April 2017 23:05
> Straightforward conversion to the new helper, except due to
> the lack of an error path, we have to warn if unmappable memory
> is ever present in the sgl.
>
> Signed-off-by: Logan Gunthorpe <[email protected]>
> ---
> drivers/block/xen-blkfront.c | 33 +++++++++++++++++++++++++++------
> 1 file changed, 27 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 5067a0a..7dcf41d 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -807,8 +807,19 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
> BUG_ON(sg->offset + sg->length > PAGE_SIZE);
>
> if (setup.need_copy) {
> - setup.bvec_off = sg->offset;
> - setup.bvec_data = kmap_atomic(sg_page(sg));
> + setup.bvec_off = 0;
> + setup.bvec_data = sg_map(sg, SG_KMAP_ATOMIC);
> + if (IS_ERR(setup.bvec_data)) {
> + /*
> + * This should really never happen unless
> + * the code is changed to use memory that is
> + * not mappable in the sg. Seeing there is a
> + * questionable error path out of here,
> + * we WARN.
> + */
> + WARN(1, "Non-mappable memory used in sg!");
> + return 1;
> + }
...

Perhaps add a flag to mark failure as 'unexpected' and trace (and panic?)
inside sg_map().

David


2017-04-18 14:29:59

by Konrad Rzeszutek Wilk

[permalink] [raw]
Subject: Re: [PATCH 16/22] xen-blkfront: Make use of the new sg_map helper function

On Tue, Apr 18, 2017 at 02:13:59PM +0000, David Laight wrote:
> From: Logan Gunthorpe
> > Sent: 13 April 2017 23:05
> > Straightforward conversion to the new helper, except due to
> > the lack of an error path, we have to warn if unmappable memory
> > is ever present in the sgl.

Interesting that you didn't CC any of the maintainers. Could you
do that in the future please?

> ...
>
> Perhaps add a flag to mark failure as 'unexpected' and trace (and panic?)
> inside sg_map().
>
> David
>
>

2017-04-18 15:42:49

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 16/22] xen-blkfront: Make use of the new sg_map helper function



On 18/04/17 08:27 AM, Konrad Rzeszutek Wilk wrote:
> Interesting that you didn't CC any of the maintainers. Could you
> do that in the future please?

Please read the cover letter. The distribution list for the patchset
would have been way too large to cc every maintainer (even as limited as
it was, I had mailing lists yelling at me). My plan was to get buy-in
for the first patch, get it merged, and then resend the rest independently
to their respective maintainers. That said, I'd be open to other
suggestions.

>> ...
>>
>> Perhaps add a flag to mark failure as 'unexpected' and trace (and panic?)
>> inside sg_map().

Thanks, that's a good suggestion. I'll make the change for v2.

Logan

2017-04-18 15:45:04

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 05/22] drm/i915: Make use of the new sg_map helper function



On 18/04/17 12:44 AM, Daniel Vetter wrote:
> On Thu, Apr 13, 2017 at 04:05:18PM -0600, Logan Gunthorpe wrote:
>> This is a single straightforward conversion from kmap to sg_map.
>>
>> Signed-off-by: Logan Gunthorpe <[email protected]>
>
> Acked-by: Daniel Vetter <[email protected]>
>
> Probably makes sense to merge through some other tree, but please be aware
> of the considerable churn rate in i915 (i.e. make sure your tree is in
> linux-next before you send a pull request for this). Plan B would be to
> get the prep patch in first and then merge the i915 conversion one kernel
> release later.

Yes, as per what I said in my cover letter, I was leaning towards a
"Plan B" style approach.

Logan

2017-04-18 15:51:43

by Konrad Rzeszutek Wilk

[permalink] [raw]
Subject: Re: [PATCH 16/22] xen-blkfront: Make use of the new sg_map helper function

On Tue, Apr 18, 2017 at 09:42:20AM -0600, Logan Gunthorpe wrote:
>
>
> On 18/04/17 08:27 AM, Konrad Rzeszutek Wilk wrote:
> > Interesting that you didn't CC any of the maintainers. Could you
> > do that in the future please?
>
> Please read the cover letter. The distribution list for the patchset
> would have been way too large to cc every maintainer (even as limited as
> it was, I had mailing lists yelling at me). My plan was to get buy in

I am not sure if you know, but you can add the respective maintainer to
each patch via a 'Cc:' tag. That way you can have certain maintainers
CCed only on the subsystems they cover. You put it after (or before) your
SoB and git send-email happily picks it up.

It does mean that for every patch you have to run something like this:

$ more add_cc
#!/bin/bash

git diff HEAD^.. > /tmp/a
echo "---"
scripts/get_maintainer.pl --no-l /tmp/a | while read file
do
	echo "Cc: $file"
done

Or such.


> for the first patch, get it merged and resend the rest independently to
> their respective maintainers. Of course, though, I'd be open to other
> suggestions.
>
> >> ...
> >>
> >> Perhaps add a flag to mark failure as 'unexpected' and trace (and panic?)
> >> inside sg_map().
>
> Thanks, that's a good suggestion. I'll make the change for v2.
>
> Logan

2017-04-18 15:59:33

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 16/22] xen-blkfront: Make use of the new sg_map helper function



On 18/04/17 09:50 AM, Konrad Rzeszutek Wilk wrote:
> I am not sure if you know, but you can add on each patch the respective
> maintainer via 'CC'. That way you can have certain maintainers CCed only
> on the subsystems they cover. You put it after (or before) your SoB and
> git send-email happilly picks it up.

Yes, but I've seen some maintainers complain when they receive a patch
with no context (i.e. without the cover letter and the first patch), so I
chose to do it this way. I expect that in this situation, no matter what
you do, someone is going to complain about the approach chosen.

Thanks anyway for the tip.

Logan