Straightforward conversion to the new helper, except that due to
the lack of an error path we have to WARN if unmappable memory
is ever present in the sgl.
Signed-off-by: Logan Gunthorpe <logang-OTvnGxWRz7hWk0Htik3J/[email protected]>
---
drivers/block/xen-blkfront.c | 33 +++++++++++++++++++++++++++------
1 file changed, 27 insertions(+), 6 deletions(-)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 5067a0a..7dcf41d 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -807,8 +807,19 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
BUG_ON(sg->offset + sg->length > PAGE_SIZE);
if (setup.need_copy) {
- setup.bvec_off = sg->offset;
- setup.bvec_data = kmap_atomic(sg_page(sg));
+ setup.bvec_off = 0;
+ setup.bvec_data = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(setup.bvec_data)) {
+ /*
+ * This should really never happen unless
+ * the code is changed to use memory that is
+ * not mappable in the sg. Seeing there is a
+ * questionable error path out of here,
+ * we WARN.
+ */
+ WARN(1, "Non-mappable memory used in sg!");
+ return 1;
+ }
}
gnttab_foreach_grant_in_range(sg_page(sg),
@@ -818,7 +829,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
&setup);
if (setup.need_copy)
- kunmap_atomic(setup.bvec_data);
+ sg_unmap(sg, setup.bvec_data, SG_KMAP_ATOMIC);
}
if (setup.segments)
kunmap_atomic(setup.segments);
@@ -1468,8 +1479,18 @@ static bool blkif_completion(unsigned long *id,
for_each_sg(s->sg, sg, num_sg, i) {
BUG_ON(sg->offset + sg->length > PAGE_SIZE);
- data.bvec_offset = sg->offset;
- data.bvec_data = kmap_atomic(sg_page(sg));
+ data.bvec_offset = 0;
+ data.bvec_data = sg_map(sg, SG_KMAP_ATOMIC);
+ if (IS_ERR(data.bvec_data)) {
+ /*
+ * This should really never happen unless
+ * the code is changed to use memory that is
+ * not mappable in the sg. Seeing there is no
+ * clear error path, we WARN.
+ */
+ WARN(1, "Non-mappable memory used in sg!");
+ return 1;
+ }
gnttab_foreach_grant_in_range(sg_page(sg),
sg->offset,
@@ -1477,7 +1498,7 @@ static bool blkif_completion(unsigned long *id,
blkif_copy_from_grant,
&data);
- kunmap_atomic(data.bvec_data);
+ sg_unmap(sg, data.bvec_data, SG_KMAP_ATOMIC);
}
}
/* Add the persistent grant into the list of free grants */
--
2.1.4
From: Logan Gunthorpe
> Sent: 13 April 2017 23:05
> Straightforward conversion to the new helper, except that due to
> the lack of an error path we have to WARN if unmappable memory
> is ever present in the sgl.
>
> Signed-off-by: Logan Gunthorpe <logang-OTvnGxWRz7hWk0Htik3J/[email protected]>
> ---
> drivers/block/xen-blkfront.c | 33 +++++++++++++++++++++++++++------
> 1 file changed, 27 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 5067a0a..7dcf41d 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -807,8 +807,19 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
> BUG_ON(sg->offset + sg->length > PAGE_SIZE);
>
> if (setup.need_copy) {
> - setup.bvec_off = sg->offset;
> - setup.bvec_data = kmap_atomic(sg_page(sg));
> + setup.bvec_off = 0;
> + setup.bvec_data = sg_map(sg, SG_KMAP_ATOMIC);
> + if (IS_ERR(setup.bvec_data)) {
> + /*
> + * This should really never happen unless
> + * the code is changed to use memory that is
> + * not mappable in the sg. Seeing there is a
> + * questionable error path out of here,
> + * we WARN.
> + */
> + WARN(1, "Non-mappable memory used in sg!");
> + return 1;
> + }
...
Perhaps add a flag to mark failure as 'unexpected' and trace (and panic?)
inside sg_map().
David
On Tue, Apr 18, 2017 at 02:13:59PM +0000, David Laight wrote:
> From: Logan Gunthorpe
> > Sent: 13 April 2017 23:05
> > Straightforward conversion to the new helper, except that due to
> > the lack of an error path we have to WARN if unmappable memory
> > is ever present in the sgl.
Interesting that you didn't CC any of the maintainers. Could you
do that in the future please?
On 18/04/17 08:27 AM, Konrad Rzeszutek Wilk wrote:
> Interesting that you didn't CC any of the maintainers. Could you
> do that in the future please?
Please read the cover letter. The distribution list for the patchset
would have been way too large to cc every maintainer (even as limited as
it was, I had mailing lists yelling at me). My plan was to get buy-in
for the first patch, get it merged, and resend the rest independently to
their respective maintainers. Of course, though, I'd be open to other
suggestions.
>> ...
>>
>> Perhaps add a flag to mark failure as 'unexpected' and trace (and panic?)
>> inside sg_map().
Thanks, that's a good suggestion. I'll make the change for v2.
Logan
On Tue, Apr 18, 2017 at 09:42:20AM -0600, Logan Gunthorpe wrote:
>
>
> On 18/04/17 08:27 AM, Konrad Rzeszutek Wilk wrote:
> > Interesting that you didn't CC any of the maintainers. Could you
> > do that in the future please?
>
> Please read the cover letter. The distribution list for the patchset
> would have been way too large to cc every maintainer (even as limited as
> it was, I had mailing lists yelling at me). My plan was to get buy in
I am not sure if you know, but you can add on each patch the respective
maintainer via 'CC'. That way you can have certain maintainers CCed only
on the subsystems they cover. You put it after (or before) your SoB and
git send-email happily picks it up.
It does mean that for every patch you have to run something like this:
$ more add_cc
#!/bin/bash
git diff HEAD^.. > /tmp/a
echo "---"
scripts/get_maintainer.pl --no-l /tmp/a | while read file
do
echo "Cc: $file"
done
Or such.
> for the first patch, get it merged and resend the rest independently to
> their respective maintainers. Of course, though, I'd be open to other
> suggestions.
>
On 18/04/17 09:50 AM, Konrad Rzeszutek Wilk wrote:
> I am not sure if you know, but you can add on each patch the respective
> maintainer via 'CC'. That way you can have certain maintainers CCed only
> on the subsystems they cover. You put it after (or before) your SoB and
> git send-email happilly picks it up.
Yes, but I've seen some maintainers complain when they receive a patch
with no context (i.e. without the cover letter and first patch). So I
chose to do it this way. I expect that in this situation, no matter what
you do, someone will complain about the approach chosen.
Thanks anyway for the tip.
Logan
_______________________________________________
Intel-gfx mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/intel-gfx