On 07/18/2014 07:04 PM, John Utz wrote:
>> On 07/18/2014 05:31 AM, John Utz wrote:
>>> Thank you very much for the exhaustive answer! I forwarded it on to
>>> my project peers because I don't think any of us were aware of the
>>> existing infrastructure.
>>>
>>> Of course, said infrastructure would have to be taught about ZAC,
>>> but it seems like it would be a nice place to start testing from....
>>>
>> ZAC is a different beast altogether; I've posted an initial set of
>> patches a while back on linux-scsi.
>> But I don't think multipath needs to be changed for that.
>> Other areas of device-mapper most certainly do.
>
> Pretty sure John is working on a new ZAC-oriented DM target.
>
> YUP.
>
> Per Ted Ts'o's suggestion several months ago, the goal is to create
> a new DM target that implements the ZAC/ZBC command set and the SMR
> write pointer architecture so that FS folks can try their hand at
> porting their stuff to it.
>
> It's in the very early stages so there is nothing to show yet, but
> development is ongoing. There are a few unknowns about how to surface
> some specific behaviors (new verbs and errors, particularly errors
> with sense codes that return a write pointer), but I have not gotten
> far enough along in development to be able to construct succinct and
> specific questions on the topic, so that will have to wait for a bit.
>
I was pondering the 'best' ZAC implementation, too, and found the
'report zones' command _very_ cumbersome to use.
Especially the fact that in theory each zone could have a different
size _and_ plenty of zones could be present will make zone lookup
hellish.
However: it seems to me that we might benefit from a generic
'block boundaries' implementation.
Reasoning here is that several subsystems (RAID, ZAC/ZBC, and things
like referrals) impose I/O scheduling boundaries which must not be
crossed when assembling requests.
Seeing that we already have some block limitations I was wondering
if we couldn't have some set of 'I/O scheduling boundaries' as part
of the request_queue structure.
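To make this a bit more concrete, here is a rough user-space sketch of
the kind of helper I have in mind; the structure and names are invented
for illustration and do not correspond to any existing request_queue or
queue_limits fields:

/*
 * Purely a sketch: invented types, not existing block-layer code.
 */
#include <stdint.h>

struct io_boundary_limits {
	uint64_t boundary_sectors;	/* distance between boundaries; 0 = none */
	uint64_t boundary_offset;	/* LBA of the first boundary */
};

/*
 * Maximum number of sectors that can be issued at 'lba' without
 * crossing the next scheduling boundary.
 */
static uint64_t sectors_to_next_boundary(const struct io_boundary_limits *l,
					 uint64_t lba)
{
	uint64_t rel, next;

	if (!l->boundary_sectors)
		return UINT64_MAX;	/* no boundaries defined */

	if (lba < l->boundary_offset)
		return l->boundary_offset - lba;

	rel = lba - l->boundary_offset;
	next = l->boundary_offset +
	       ((rel / l->boundary_sectors) + 1) * l->boundary_sectors;
	return next - lba;
}

The I/O schedulers and the bio merging/splitting code could then clamp
requests to that value, much like they honour the existing queue limits.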
Kent, Jens; comments here?
Cheers,
Hannes
--
Dr. Hannes Reinecke zSeries & Storage
[email protected] +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)
On Mon, Jul 21, 2014 at 04:23:41PM +0200, Hannes Reinecke wrote:
> On 07/18/2014 07:04 PM, John Utz wrote:
> >>On 07/18/2014 05:31 AM, John Utz wrote:
> >>>Thank you very much for the exhaustive answer! I forwarded it on to
> >>>my project peers because I don't think any of us were aware of the
> >>>existing infrastructure.
> >>>
> >>>Of course, said infrastructure would have to be taught about ZAC,
> >>>but it seems like it would be a nice place to start testing from....
> >>>
> >>ZAC is a different beast altogether; I've posted an initial set of
> >>patches a while back on linux-scsi.
> >>But I don't think multipath needs to be changed for that.
> >>Other areas of device-mapper most certainly do.
> >
> >Pretty sure John is working on a new ZAC-oriented DM target.
> >
> >YUP.
> >
> >Per Ted Ts'o's suggestion several months ago, the goal is to create
> > a new DM target that implements the ZAC/ZBC command set and the SMR
> > write pointer architecture so that FS folks can try their hand at
> > porting their stuff to it.
> >
> >It's in the very early stages so there is nothing to show yet, but
> > development is ongoing. There are a few unknowns about how to surface
> > some specific behaviors (new verbs and errors, particularly errors
> > with sense codes that return a write pointer), but I have not gotten
> > far enough along in development to be able to construct succinct and
> > specific questions on the topic, so that will have to wait for a bit.
> >
> I was pondering the 'best' ZAC implementation, too, and found the
> 'report zones' command _very_ cumbersome to use.
> Especially the fact that in theory each zone could have a different size
> _and_ plenty of zones could be present will make zone lookup hellish.
>
> However: it seems to me that we might benefit from a generic
> 'block boundaries' implementation.
> Reasoning here is that several subsystems (RAID, ZAC/ZBC, and things like
> referrals) impose I/O scheduling boundaries which must not be crossed when
> assembling requests.
Wasn't Ted working on such a thing?
> Seeing that we already have some block limitations I was wondering if we
> couldn't have some set of 'I/O scheduling boundaries' as part
> of the request_queue structure.
I'd prefer not to dump yet more crap in request_queue, but that's a fairly minor
quibble :)
I also tend to think having different-size zones is crazy and I would avoid
making any effort to support that in practice, but OTOH there's good reason for
wanting one or two "normal" zones and the rest append-only, so the interface is
going to have to accommodate some differences between zones.
Also, depending on the approach, supporting different-size zones might not
actually be problematic. If you're starting with something that's pure COW and
you're just plugging in this "ZAC allocation" stuff (which I think is what I'm
going to do in bcache), then it might not actually be an issue.
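To sketch what I mean (plain user-space C, everything here invented for
illustration, not actual bcache or kernel code): if allocation is purely
append-at-the-write-pointer, the zone size only shows up in the "does it
still fit" check, so variable sizes mostly fall out for free.

#include <stdint.h>
#include <stddef.h>

/* Invented for illustration; not actual bcache or kernel structures. */
struct zone_desc {
	uint64_t start;		/* first LBA of the zone */
	uint64_t len;		/* zone length in sectors (may differ per zone) */
	uint64_t wp;		/* write pointer, relative to 'start' */
};

struct zoned_alloc {
	struct zone_desc *zones;
	size_t nr_zones;
	size_t cur;		/* zone currently being filled */
};

/*
 * Append-only (COW-style) allocation: hand out space at the write
 * pointer of the current zone, move on when it no longer fits.
 * Returns the LBA to write at, or UINT64_MAX when the device is full.
 */
static uint64_t zoned_alloc_sectors(struct zoned_alloc *a, uint64_t nr_sectors)
{
	while (a->cur < a->nr_zones) {
		struct zone_desc *z = &a->zones[a->cur];

		if (z->wp + nr_sectors <= z->len) {
			uint64_t lba = z->start + z->wp;

			z->wp += nr_sectors;
			return lba;
		}
		a->cur++;	/* current zone full (or too small); try the next */
	}
	return UINT64_MAX;
}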
On 07/21/2014 09:28 PM, Kent Overstreet wrote:
> On Mon, Jul 21, 2014 at 04:23:41PM +0200, Hannes Reinecke wrote:
>> On 07/18/2014 07:04 PM, John Utz wrote:
>>>> On 07/18/2014 05:31 AM, John Utz wrote:
>>>>> Thank you very much for the exhaustive answer! I forwarded it on to
>>>>> my project peers because I don't think any of us were aware of the
>>>>> existing infrastructure.
>>>>>
>>>>> Of course, said infrastructure would have to be taught about ZAC,
>>>>> but it seems like it would be a nice place to start testing from....
>>>>>
>>>> ZAC is a different beast altogether; I've posted an initial set of
>>>> patches a while back on linux-scsi.
>>>> But I don't think multipath needs to be changed for that.
>>>> Other areas of device-mapper most certainly do.
>>>
>>> Pretty sure John is working on a new ZAC-oriented DM target.
>>>
>>> YUP.
>>>
>>> Per Ted Ts'o's suggestion several months ago, the goal is to create
>>> a new DM target that implements the ZAC/ZBC command set and the SMR
>>> write pointer architecture so that FS folks can try their hand at
>>> porting their stuff to it.
>>>
>>> It's in the very early stages so there is nothing to show yet, but
>>> development is ongoing. There are a few unknowns about how to surface
>>> some specific behaviors (new verbs and errors, particularly errors
>>> with sense codes that return a write pointer), but I have not gotten
>>> far enough along in development to be able to construct succinct and
>>> specific questions on the topic, so that will have to wait for a bit.
>>>
>> I was pondering the 'best' ZAC implementation, too, and found the
>> 'report zones' command _very_ cumbersome to use.
>> Especially the fact that in theory each zone could have a different size
>> _and_ plenty of zones could be present will make zone lookup hellish.
>>
>> However: it seems to me that we might benefit from a generic
>> 'block boundaries' implementation.
>> Reasoning here is that several subsystems (RAID, ZAC/ZBC, and things like
>> referrals) impose I/O scheduling boundaries which must not be crossed when
>> assembling requests.
>
> Wasn't Ted working on such a thing?
>
>> Seeing that we already have some block limitations I was wondering if we
>> couldn't have some set of 'I/O scheduling boundaries' as part
>> of the request_queue structure.
>
> I'd prefer not to dump yet more crap in request_queue, but that's a fairly minor
> quibble :)
>
> I also tend to think having different-size zones is crazy and I would avoid
> making any effort to support that in practice, but OTOH there's good reason for
> wanting one or two "normal" zones and the rest append-only, so the interface is
> going to have to accommodate some differences between zones.
>
> Also, depending on the approach, supporting different-size zones might not
> actually be problematic. If you're starting with something that's pure COW and
> you're just plugging in this "ZAC allocation" stuff (which I think is what I'm
> going to do in bcache), then it might not actually be an issue.
>
No, what I was suggesting is to introduce 'I/O scheduling barriers'.
Some devices, like RAID or indeed ZAC, have internal boundaries which
cannot be crossed by any I/O. So either the I/O has to be split up
or the I/O scheduler has to be made aware of these boundaries.
I have had this issue several times now (once with implementing
Referrals, now with ZAC) so I was wondering whether we can have some
sort of generic implementation in the block layer.
And as we already have request queue limits, this might fall
quite naturally into them. Or so I thought.
Hmm. Guess I should start coding here.
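As a first stab, the splitting side could look roughly like this; again
just an illustrative user-space sketch with invented names, assuming for
simplicity that boundaries come at a fixed interval, which Referrals in
particular would not guarantee:

#include <stdint.h>

/*
 * Illustrative sketch only: issue an I/O of 'len' sectors starting at
 * 'lba', chopped into pieces that never cross a boundary placed every
 * 'boundary_sectors' sectors.  submit_fn() stands in for whatever
 * actually sends each piece down.
 */
static void submit_split_at_boundaries(uint64_t boundary_sectors,
				       uint64_t lba, uint64_t len,
				       void (*submit_fn)(uint64_t lba,
							 uint64_t len))
{
	if (!boundary_sectors) {	/* no boundaries: pass through */
		submit_fn(lba, len);
		return;
	}

	while (len) {
		uint64_t to_boundary = boundary_sectors - (lba % boundary_sectors);
		uint64_t chunk = (len < to_boundary) ? len : to_boundary;

		submit_fn(lba, chunk);
		lba += chunk;
		len -= chunk;
	}
}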
Cheers,
Hannes
--
Dr. Hannes Reinecke zSeries & Storage
[email protected] +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)
On 07/22/2014 07:46 AM, Hannes Reinecke wrote:
> On 07/21/2014 09:28 PM, Kent Overstreet wrote:
>> On Mon, Jul 21, 2014 at 04:23:41PM +0200, Hannes Reinecke wrote:
>>> On 07/18/2014 07:04 PM, John Utz wrote:
>>>>> On 07/18/2014 05:31 AM, John Utz wrote:
>>>>>> Thank you very much for the exhaustive answer! I forwarded it on to
>>>>>> my project peers because I don't think any of us were aware of the
>>>>>> existing infrastructure.
>>>>>>
>>>>>> Of course, said infrastructure would have to be taught about ZAC,
>>>>>> but it seems like it would be a nice place to start testing from....
>>>>>>
>>>>> ZAC is a different beast altogether; I've posted an initial set of
>>>>> patches a while back on linux-scsi.
>>>>> But I don't think multipath needs to be changed for that.
>>>>> Other areas of device-mapper most certainly do.
>>>>
>>>> Pretty sure John is working on a new ZAC-oriented DM target.
>>>>
>>>> YUP.
>>>>
>>>> Per Ted Ts'o's suggestion several months ago, the goal is to create
>>>> a new DM target that implements the ZAC/ZBC command set and the SMR
>>>> write pointer architecture so that FS folks can try their hand at
>>>> porting their stuff to it.
>>>>
>>>> It's in the very early stages so there is nothing to show yet, but
>>>> development is ongoing. There are a few unknowns about how to surface
>>>> some specific behaviors (new verbs and errors, particularly errors
>>>> with sense codes that return a write pointer), but I have not gotten
>>>> far enough along in development to be able to construct succinct and
>>>> specific questions on the topic, so that will have to wait for a bit.
>>>>
>>> I was pondering the 'best' ZAC implementation, too, and found the
>>> 'report zones' command _very_ cumbersome to use.
>>> Especially the fact that in theory each zone could have a different
>>> size _and_ plenty of zones could be present will make zone lookup
>>> hellish.
>>>
>>> However: it seems to me that we might benefit from a generic
>>> 'block boundaries' implementation.
>>> Reasoning here is that several subsystems (RAID, ZAC/ZBC, and things
>>> like referrals) impose I/O scheduling boundaries which must not be
>>> crossed when assembling requests.
>>
>> Wasn't Ted working on such a thing?
>>
>>> Seeing that we already have some block limitations I was wondering if we
>>> couldn't have some set of 'I/O scheduling boundaries' as part
>>> of the request_queue structure.
>>
>> I'd prefer not to dump yet more crap in request_queue, but that's a
>> fairly minor quibble :)
>>
>> I also tend to think having different-size zones is crazy and I would
>> avoid making any effort to support that in practice, but OTOH there's
>> good reason for wanting one or two "normal" zones and the rest
>> append-only, so the interface is going to have to accommodate some
>> differences between zones.
>>
>> Also, depending on the approach, supporting different-size zones might
>> not actually be problematic. If you're starting with something that's
>> pure COW and you're just plugging in this "ZAC allocation" stuff (which
>> I think is what I'm going to do in bcache), then it might not actually
>> be an issue.
>>
> No, what I was suggesting is to introduce 'I/O scheduling barriers'.
> Some devices, like RAID or indeed ZAC, have internal boundaries which
> cannot be crossed by any I/O. So either the I/O has to be split up or
> the I/O scheduler has to be made aware of these boundaries.
>
> I have had this issue several times now (once with implementing
> Referrals, now with ZAC) so I was wondering whether we can have some
> sort of generic implementation in the block layer.
Would it make any sense to put the whole block allocation strategy within
the block layer (or just on top of the device driver) and then let
MD/FS users hook into this for allocating new blocks?
That would allow multiple implementations to use the same block address space.
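Roughly what I am picturing (an invented sketch only; no such interface
exists in the block layer today):

#include <stdint.h>

/*
 * Invented for illustration: a per-device allocation hook that the block
 * layer (or a thin layer just above the driver) could export, and that MD
 * or a filesystem would call instead of picking LBAs itself.
 */
struct blk_alloc_ops {
	/* Allocate 'nr_sectors' of sequential space; returns the start
	 * LBA, or UINT64_MAX if nothing suitable is available. */
	uint64_t (*alloc)(void *dev_private, uint64_t nr_sectors);

	/* Tell the allocator that a previously allocated extent is free. */
	void (*free)(void *dev_private, uint64_t lba, uint64_t nr_sectors);
};

struct blk_allocator {
	const struct blk_alloc_ops *ops;
	void *dev_private;	/* e.g. per-device zone / write-pointer state */
};

static uint64_t blk_alloc_sectors(struct blk_allocator *a, uint64_t nr_sectors)
{
	return a->ops->alloc(a->dev_private, nr_sectors);
}

A ZAC-aware implementation could then back the alloc hook with per-zone
write pointers, while MD or a filesystem would simply consume whatever
LBAs it hands out.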
>
> And as we already have request queue limits, this might fall quite
> naturally into them. Or so I thought.
>
> Hmm. Guess I should start coding here.
>
> Cheers,
>
> Hannes