2015-03-17 11:20:16

by Andrejs Hanins

[permalink] [raw]
Subject: core/gatt-db: Long writes into external GATT characteristics issue

Hi,

To begin with, there is an issue with long reads/writes of external GATT characteristics registered via GattManager1. The related ATT ops are ReadBlob, PrepareWrite, ExecuteWrite etc. The read issue is easily fixed by a patch I sent yesterday, but the write case is more tricky. Currently, a long write into an external chrc (D-Bus method "WriteValue") is executed in a "chunked" fashion. That is, each ATT PrepareWrite op eventually causes an invocation of "WriteValue" with the appropriate buffer (see exec_next_prep_write()). I see no easy way for an external D-Bus GATT service to distinguish between full and partial writes, so there is no way to tell when the chrc has been fully written.

I see two solutions for the issue:

1. Add a new method to GattCharacteristic1 called "WriteValuePartial" with a single buffer argument, so that len(buffer)==0 denotes that all chunks have been written and the service can assemble the whole buffer for further processing. Or the existing "WriteValue" method could be extended with a parameter like op_type = ["Single", "Partial", "End"]. I'm not a BlueZ design expert, but IMO this approach stinks a bit... the D-Bus level API should be simple, and I doubt an external service should carry a burden related to ATT MTU fragmentation.

2. Add some logic to the BlueZ daemon so that "WriteValue" is always executed with the fully assembled data from multiple PrepareWrite ops. Such an approach would not require any D-Bus API changes, which is good. The problem is that I personally don't see a clean and easy way to implement it in the code. Attributes are written using gatt_db_attribute_write(), which takes an ATT opcode and a write offset, but this info is not enough to tell when the last chunk of an attribute value is being written, so the operation cannot be completed in a single step. In the case of an external chrc, gatt_db_attribute::write_func equals chrc_write_cb(), which immediately sends D-Bus calls to "WriteValue"; that is what does the "chunking". If there were a way to tell when the call to gatt_db_attribute::write_func is for the last chunk of data, then chrc_write_cb() could do the assembly and invoke "WriteValue" only once. This could be accomplished by adding an additional argument to the gatt_db_write_t functor definition telling whether the write is chunked and whether the chunk is the last one. Not sure whether that looks like a "brutal hack" to the BlueZ gurus :)
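To make the idea in option 2 concrete, here is a minimal sketch of what a chunk-aware write callback plus a fragment assembler could look like. All names and signatures here are illustrative assumptions, not actual BlueZ API:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* Hypothetical chunk-aware variant of the write callback type: the
 * extra 'chunked'/'last' flags would tell the callback whether this is
 * a fragment of a prepared write and whether it is the final one, so
 * it can assemble the full value and invoke WriteValue only once. */
typedef void (*gatt_db_write_chunked_t)(uint16_t offset,
					const uint8_t *value, size_t len,
					bool chunked, bool last,
					void *user_data);

/* Illustrative assembler such a callback could use: it appends each
 * fragment to a growing buffer and marks completion on 'last'. */
struct assembler {
	uint8_t *buf;
	size_t len;
	bool complete;
};

static void assemble_chunk(struct assembler *a, const uint8_t *value,
			   size_t len, bool last)
{
	uint8_t *nbuf = realloc(a->buf, a->len + len);

	if (!nbuf)
		return;

	memcpy(nbuf + a->len, value, len);
	a->buf = nbuf;
	a->len += len;

	if (last)
		a->complete = true; /* a single WriteValue can go out now */
}
```

With this, chrc_write_cb() would feed fragments into the assembler and only issue the D-Bus call once 'complete' is set.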

Maybe someone with better BlueZ design knowledge can suggest how the second solution could be implemented, or propose something completely different.

BR, Andrey


2015-03-18 11:38:01

by Luiz Augusto von Dentz

[permalink] [raw]
Subject: Re: core/gatt-db: Long writes into external GATT characteristics issue

Hi Andrejs,

On Tue, Mar 17, 2015 at 9:05 PM, Andrejs Hanins <[email protected]> wrote:
> Hi,
>
> On 2015.03.17. 20:30, Luiz Augusto von Dentz wrote:
>> Hi Arman,
>>
>> On Tue, Mar 17, 2015 at 8:15 PM, Arman Uguray <[email protected]> wrote:
>>> Hi Luiz & Andrejs,
>>>
>>> On Tue, Mar 17, 2015 at 9:49 AM, Luiz Augusto von Dentz
>>> <[email protected]> wrote:
>>>> Hi Andrejs,
>>>>
>>>> On Tue, Mar 17, 2015 at 6:04 PM, Andrejs Hanins <[email protected]> wrote:
>>>>> Hi Luiz,
>>>>>
>>>>> On 2015.03.17. 17:15, Luiz Augusto von Dentz wrote:
>>>>>> Hi Andrejs,
>>>>>>
>>>>>> On Tue, Mar 17, 2015 at 1:20 PM, Andrejs Hanins
>>>>>> <[email protected]> wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> To begin with, there is an issue with long reading/writing of
>>>>>>> external GATT characteristics registered via GattManager1. Related
>>>>>>> ATT ops are ReadBlob, PrepareWrite, ExecuteWrite etc. Reading issue
>>>>>>> is easily fixed in a patch I sent yesterday. But the write case is
>>>>>>> more tricky. Currently, long write into external chrc (D-Bus method
>>>>>>> "WriteValue") is executed in a "chunked" fashion. Thats is, each
>>>>>>> ATT PrepareWrite op eventually causes an invocation of "WriteValue"
>>>>>>> with appropriate buffer (see exec_next_prep_write()). I see no easy
>>>>>>> way for external D-Bus GATT service to distinguish between full and
>>>>>>> partial writes, so no way to tell when the chrc is fully written.
>>>>>>>
>>>>>>> I see two solutions for the issue:
>>>>>>>
>>>>>>> 1. Add a new method to GattCharacteristic1 called
>>>>>>> "WriteValuePartial" with a single buffer argument, so that
>>>>>>> len(buffer)==0 denotes that all chunks have been written and
>>>>>>> service can assemble the whole buffer for further processing. Or
>>>>>>> the existing "WriteValue" method can be extended with a parameter
>>>>>>> like op_type = ["Single", "Partial", "End"]. I'm not a BlueZ
>>>>>>> design expert, but IMO this approach stinks a bit... D-Bus level
>>>>>>> API should be simple. I doubt that external service should have an
>>>>>>> burden related to ATT MTU fragmentation.
>>>>>>>
>>>>>>> 2. Add some logic to BlueZ daemon so that "WriteValue" is always
>>>>>>> executed with fully assembled data from multiple PrepareWrite ops.
>>>>>>> Such approach will not require any D-Bus API changes, which is
>>>>>>> good. The problem is that I personally don't see a clean and easy
>>>>>>> way how to implement it in the code. Attributes are written using
>>>>>>> gatt_db_attribute_write() which has ATT opcode and write offset,
>>>>>>> but this info is not enough to understand when the last chunk of
>>>>>>> an attribute value is being written to complete the operation in a
>>>>>>> single step. In case of external chrc, the
>>>>>>> gatt_db_attribute::write_func equals to chrc_write_cb() which
>>>>>>> immediately sends D-Bus calls to "WriteValue", thats it doing the
>>>>>>> "chunking". If there would be a possibility to tell when the call
>>>>>>> to gatt_db_attribute::write_func is for the last chunk of data,
>>>>>>> then chrc_write_cb() could do the assembly and invoke "WriteValue"
>>>>>>> only once. The task can be accomplished by adding an additional
>>>>>>> argument to gatt_db_write_t functor definition telling that write
>>>>>>> is chunked and if the chunk is last or not. Not sure, does it look
>>>>>>> like a "brutal hack" or not for BlueZ gurus :)
>>>>>>
>>>>>> I guess what we should do is to queue the prepare writes and wait
>>>>>> execute to actually do the write in gatt_db,
>>>>>
>>>>> This is exactly what is happening now. ExecuteWrite triggers all queued chunks to be written one by one into gatt_db, but not each PrepareWrite. PrepareWrite does only queuing.
>>>>>
>>>>>> I would probably leave this to bt_gatt_server since it should be
>>>>>> doing permission checking, etc, it could handle the prepare queue.
>>>>>
>>>>> It looks wrong to me, because the API to write into gatt_db (gatt_db_attribute_write()) is supposed to carry the knowledge about partial writes based on the 'offset' argument. There will be no reason to have 'offset' argument if bt_gatt_server would always squash all 'prepare write' chunks into one.
>>>>
>>>> Not really, the offset is still useful to tell where it start, but if
>>>> prepare writes are in sequence, like it should be for long write, than
>>>> there is no need to write multiple sets instead we should just use
>>>> realloc and reuse the same buffer, this is actually cheaper since we
>>>> don't have to allocate more elements in the queue and cause overall
>>>> less calls. Now if you have different areas to write to we still going
>>>> to do multiple writes, but I guess this less likely to happen in
>>>> practice.
>>>>
>>>>>> Anyway, anything that write on prepare is probably wrong since with
>>>>>> prepare there is the possibility to cancel the queue so we should
>>>>>> never call WriteValue in the first place.
>>>>>
>>>>> As said, it is already like this.
>>>>>
>>>>>>
>>>>>>> Maybe someone with better BlueZ design knowledge can suggest how
>>>>>>> the 2-nd solution can be implemented or maybe propose something
>>>>>>> completely different.
>>>>>>>
>>>
>>> bt_gatt_server already correctly handles long writes (and multiple
>>> writes) according to the spec; it maintains a prepare queue and
>>> executes them only when it receives an ExecuteWrite. The problem here
>>> is that a remote client may actually want to only partially modify a
>>> characteristic value; this is why the offset parameter exists in the
>>> first place. So this is a problem at the D-Bus API layer, not at the
>>> shared/gatt layer.
>>
>> Yes, but still my point about squashing fragments together if they are
>> continuous should be taken care of, we don't want to cause several
>> D-Bus calls to WriteValue on a row for long write that is for sure.
>
> I also think that squashing contiguous fragments into one WriteValue is right thing, especially if first offset=0, because multiple fragments committed by an ExecuteWrite should be treated as an atomic op anyway. But there is one potential use case which complicates the matter - what if PrepareWrite fragments do not start with offset=0 and have gaps in between? It means, ExecuteWrite should pass all fragments to external service at once, like an array of buffers each having its own length and offset. Kinda crazy, not sure whether it happens in practice.
>
>>
>>> IMO, extending the WriteValue with an offset parameter is very much
>>> reasonable and it's something that every platform API already allows.
>>> I think this is useful for both client and server cases, for an
>>> application to be able to specify an offset and even things like
>>> Signed Write, WriteWithoutResponse, and so on. Adding a parameter
>>> dictionary would be clean way to achieve this without breaking
>>> existing apps (which shouldn't technically matter since the API is
>>> still in experimental). This way, we also wouldn't have to do all the
>>> magic guesswork in the client code as to which type of write procedure
>>> should be used.
>>
>> offset should be needed, but things like signature are not even
>> possible to the client since they have no access to CSRK, by design,
>> also WriteWithoutResponse is basically ignoring the D-Bus reply.
>>
>
> So, by using offset and squashing contiguous fragments into one buffer it will be possible to issue a single WriteValue D-Bus call as an atomic op. But it will not be enough to support scattered fragments as I noted earlier. But do we really need it?

Yep, scattered is not supported, so it would cause multiple WriteValue
calls to the same attribute, which is unfortunate, but I guess this
type of operation will be very rare: usually prepare is used for a
long write, or to write to multiple attributes at the same time, but
even the latter I believe is very rare and actually sometimes
inconvenient, since the spec has this statement regarding failure:

'The state of the attributes that were to be written
from the prepare queue is not defined in this case.'

So it is up to the implementation, so in case of error the client
would probably have to read the values back to know what the effect
was. For example, in the case of D-Bus I think waiting for each reply
would consume too much time, so we would probably do the WriteValue
calls in parallel and not bother reverting on error, because that
could fail as well. In other words, implementing an atomic multiple
write is probably not possible.

Given these problems I'm considering supporting prepare writes only
for long writes, so we would only queue if the write is for the same
handle and at a sequential/non-scattered offset. This should cover 99%
of the cases, and the remaining cases are probably better implemented
as plugins, where it can be done atomically either locally or using a
dedicated D-Bus interface.
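That queue-only-if-sequential policy can be sketched as follows. This is a simplified illustration of the described behaviour, not the actual bt_gatt_server code; struct and function names are made up:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* One squashed prepare-write buffer: fragments for the same handle at
 * sequential offsets are appended via realloc instead of queued as
 * separate entries, so ExecuteWrite results in a single write. */
struct prep_entry {
	uint16_t handle;
	uint16_t offset;	/* offset of the first fragment */
	uint8_t *value;
	size_t len;
};

static bool prep_write_queue(struct prep_entry *e, uint16_t handle,
			     uint16_t offset, const uint8_t *value,
			     size_t len)
{
	uint8_t *nval;

	if (e->value) {
		/* Accept only sequential, non-scattered fragments for
		 * the same handle; anything else is rejected here. */
		if (handle != e->handle || offset != e->offset + e->len)
			return false;
	} else {
		e->handle = handle;
		e->offset = offset;
	}

	nval = realloc(e->value, e->len + len);
	if (!nval)
		return false;

	memcpy(nval + e->len, value, len);
	e->value = nval;
	e->len += len;

	return true;
}
```

On ExecuteWrite the daemon would then issue one WriteValue with e->value/e->len (and e->offset, once the D-Bus API carries an offset).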

--
Luiz Augusto von Dentz

2015-03-17 19:05:51

by Andrejs Hanins

[permalink] [raw]
Subject: Re: core/gatt-db: Long writes into external GATT characteristics issue

Hi,

On 2015.03.17. 20:30, Luiz Augusto von Dentz wrote:
> Hi Arman,
>
> On Tue, Mar 17, 2015 at 8:15 PM, Arman Uguray <[email protected]> wrote:
>> Hi Luiz & Andrejs,
>>
>> On Tue, Mar 17, 2015 at 9:49 AM, Luiz Augusto von Dentz
>> <[email protected]> wrote:
>>> Hi Andrejs,
>>>
>>> On Tue, Mar 17, 2015 at 6:04 PM, Andrejs Hanins <[email protected]> wrote:
>>>> Hi Luiz,
>>>>
>>>> On 2015.03.17. 17:15, Luiz Augusto von Dentz wrote:
>>>>> Hi Andrejs,
>>>>>
>>>>> On Tue, Mar 17, 2015 at 1:20 PM, Andrejs Hanins
>>>>> <[email protected]> wrote:
>>>>>> Hi,
>>>>>>
>>>>>> To begin with, there is an issue with long reading/writing of
>>>>>> external GATT characteristics registered via GattManager1. Related
>>>>>> ATT ops are ReadBlob, PrepareWrite, ExecuteWrite etc. Reading issue
>>>>>> is easily fixed in a patch I sent yesterday. But the write case is
>>>>>> more tricky. Currently, long write into external chrc (D-Bus method
>>>>>> "WriteValue") is executed in a "chunked" fashion. Thats is, each
>>>>>> ATT PrepareWrite op eventually causes an invocation of "WriteValue"
>>>>>> with appropriate buffer (see exec_next_prep_write()). I see no easy
>>>>>> way for external D-Bus GATT service to distinguish between full and
>>>>>> partial writes, so no way to tell when the chrc is fully written.
>>>>>>
>>>>>> I see two solutions for the issue:
>>>>>>
>>>>>> 1. Add a new method to GattCharacteristic1 called
>>>>>> "WriteValuePartial" with a single buffer argument, so that
>>>>>> len(buffer)==0 denotes that all chunks have been written and
>>>>>> service can assemble the whole buffer for further processing. Or
>>>>>> the existing "WriteValue" method can be extended with a parameter
>>>>>> like op_type = ["Single", "Partial", "End"]. I'm not a BlueZ
>>>>>> design expert, but IMO this approach stinks a bit... D-Bus level
>>>>>> API should be simple. I doubt that external service should have an
>>>>>> burden related to ATT MTU fragmentation.
>>>>>>
>>>>>> 2. Add some logic to BlueZ daemon so that "WriteValue" is always
>>>>>> executed with fully assembled data from multiple PrepareWrite ops.
>>>>>> Such approach will not require any D-Bus API changes, which is
>>>>>> good. The problem is that I personally don't see a clean and easy
>>>>>> way how to implement it in the code. Attributes are written using
>>>>>> gatt_db_attribute_write() which has ATT opcode and write offset,
>>>>>> but this info is not enough to understand when the last chunk of
>>>>>> an attribute value is being written to complete the operation in a
>>>>>> single step. In case of external chrc, the
>>>>>> gatt_db_attribute::write_func equals to chrc_write_cb() which
>>>>>> immediately sends D-Bus calls to "WriteValue", thats it doing the
>>>>>> "chunking". If there would be a possibility to tell when the call
>>>>>> to gatt_db_attribute::write_func is for the last chunk of data,
>>>>>> then chrc_write_cb() could do the assembly and invoke "WriteValue"
>>>>>> only once. The task can be accomplished by adding an additional
>>>>>> argument to gatt_db_write_t functor definition telling that write
>>>>>> is chunked and if the chunk is last or not. Not sure, does it look
>>>>>> like a "brutal hack" or not for BlueZ gurus :)
>>>>>
>>>>> I guess what we should do is to queue the prepare writes and wait
>>>>> execute to actually do the write in gatt_db,
>>>>
>>>> This is exactly what is happening now. ExecuteWrite triggers all queued chunks to be written one by one into gatt_db, but not each PrepareWrite. PrepareWrite does only queuing.
>>>>
>>>>> I would probably leave this to bt_gatt_server since it should be
>>>>> doing permission checking, etc, it could handle the prepare queue.
>>>>
>>>> It looks wrong to me, because the API to write into gatt_db (gatt_db_attribute_write()) is supposed to carry the knowledge about partial writes based on the 'offset' argument. There will be no reason to have 'offset' argument if bt_gatt_server would always squash all 'prepare write' chunks into one.
>>>
>>> Not really, the offset is still useful to tell where it start, but if
>>> prepare writes are in sequence, like it should be for long write, than
>>> there is no need to write multiple sets instead we should just use
>>> realloc and reuse the same buffer, this is actually cheaper since we
>>> don't have to allocate more elements in the queue and cause overall
>>> less calls. Now if you have different areas to write to we still going
>>> to do multiple writes, but I guess this less likely to happen in
>>> practice.
>>>
>>>>> Anyway, anything that write on prepare is probably wrong since with
>>>>> prepare there is the possibility to cancel the queue so we should
>>>>> never call WriteValue in the first place.
>>>>
>>>> As said, it is already like this.
>>>>
>>>>>
>>>>>> Maybe someone with better BlueZ design knowledge can suggest how
>>>>>> the 2-nd solution can be implemented or maybe propose something
>>>>>> completely different.
>>>>>>
>>
>> bt_gatt_server already correctly handles long writes (and multiple
>> writes) according to the spec; it maintains a prepare queue and
>> executes them only when it receives an ExecuteWrite. The problem here
>> is that a remote client may actually want to only partially modify a
>> characteristic value; this is why the offset parameter exists in the
>> first place. So this is a problem at the D-Bus API layer, not at the
>> shared/gatt layer.
>
> Yes, but still my point about squashing fragments together if they are
> continuous should be taken care of, we don't want to cause several
> D-Bus calls to WriteValue on a row for long write that is for sure.

I also think that squashing contiguous fragments into one WriteValue is the right thing, especially if the first offset=0, because multiple fragments committed by an ExecuteWrite should be treated as an atomic op anyway. But there is one potential use case which complicates the matter: what if the PrepareWrite fragments do not start at offset=0 and have gaps in between? It would mean ExecuteWrite should pass all fragments to the external service at once, like an array of buffers, each having its own length and offset. Kinda crazy, not sure whether it happens in practice.
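The gap case above could be detected with a simple contiguity check over the queued fragments; a minimal sketch under assumed names (not actual gatt-db code):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>
#include <assert.h>

/* One queued PrepareWrite fragment (illustrative). */
struct frag {
	uint16_t offset;
	size_t len;
};

/* Returns true when the fragments form one contiguous range with no
 * gaps, i.e. they can be squashed into a single WriteValue buffer;
 * false means the scattered case that would need per-fragment calls. */
static bool frags_contiguous(const struct frag *f, size_t n)
{
	size_t i;

	for (i = 1; i < n; i++) {
		if (f[i].offset != f[i - 1].offset + f[i - 1].len)
			return false;
	}

	return true;
}
```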

>
>> IMO, extending the WriteValue with an offset parameter is very much
>> reasonable and it's something that every platform API already allows.
>> I think this is useful for both client and server cases, for an
>> application to be able to specify an offset and even things like
>> Signed Write, WriteWithoutResponse, and so on. Adding a parameter
>> dictionary would be clean way to achieve this without breaking
>> existing apps (which shouldn't technically matter since the API is
>> still in experimental). This way, we also wouldn't have to do all the
>> magic guesswork in the client code as to which type of write procedure
>> should be used.
>
> offset should be needed, but things like signature are not even
> possible to the client since they have no access to CSRK, by design,
> also WriteWithoutResponse is basically ignoring the D-Bus reply.
>

So, by using the offset and squashing contiguous fragments into one buffer, it will be possible to issue a single WriteValue D-Bus call as an atomic op. But it will not be enough to support scattered fragments, as I noted earlier. But do we really need that?

2015-03-17 18:30:22

by Luiz Augusto von Dentz

[permalink] [raw]
Subject: Re: core/gatt-db: Long writes into external GATT characteristics issue

Hi Arman,

On Tue, Mar 17, 2015 at 8:15 PM, Arman Uguray <[email protected]> wrote:
> Hi Luiz & Andrejs,
>
> On Tue, Mar 17, 2015 at 9:49 AM, Luiz Augusto von Dentz
> <[email protected]> wrote:
>> Hi Andrejs,
>>
>> On Tue, Mar 17, 2015 at 6:04 PM, Andrejs Hanins <[email protected]> wrote:
>>> Hi Luiz,
>>>
>>> On 2015.03.17. 17:15, Luiz Augusto von Dentz wrote:
>>>> Hi Andrejs,
>>>>
>>>> On Tue, Mar 17, 2015 at 1:20 PM, Andrejs Hanins
>>>> <[email protected]> wrote:
>>>>> Hi,
>>>>>
>>>>> To begin with, there is an issue with long reading/writing of
>>>>> external GATT characteristics registered via GattManager1. Related
>>>>> ATT ops are ReadBlob, PrepareWrite, ExecuteWrite etc. Reading issue
>>>>> is easily fixed in a patch I sent yesterday. But the write case is
>>>>> more tricky. Currently, long write into external chrc (D-Bus method
>>>>> "WriteValue") is executed in a "chunked" fashion. Thats is, each
>>>>> ATT PrepareWrite op eventually causes an invocation of "WriteValue"
>>>>> with appropriate buffer (see exec_next_prep_write()). I see no easy
>>>>> way for external D-Bus GATT service to distinguish between full and
>>>>> partial writes, so no way to tell when the chrc is fully written.
>>>>>
>>>>> I see two solutions for the issue:
>>>>>
>>>>> 1. Add a new method to GattCharacteristic1 called
>>>>> "WriteValuePartial" with a single buffer argument, so that
>>>>> len(buffer)==0 denotes that all chunks have been written and
>>>>> service can assemble the whole buffer for further processing. Or
>>>>> the existing "WriteValue" method can be extended with a parameter
>>>>> like op_type = ["Single", "Partial", "End"]. I'm not a BlueZ
>>>>> design expert, but IMO this approach stinks a bit... D-Bus level
>>>>> API should be simple. I doubt that external service should have an
>>>>> burden related to ATT MTU fragmentation.
>>>>>
>>>>> 2. Add some logic to BlueZ daemon so that "WriteValue" is always
>>>>> executed with fully assembled data from multiple PrepareWrite ops.
>>>>> Such approach will not require any D-Bus API changes, which is
>>>>> good. The problem is that I personally don't see a clean and easy
>>>>> way how to implement it in the code. Attributes are written using
>>>>> gatt_db_attribute_write() which has ATT opcode and write offset,
>>>>> but this info is not enough to understand when the last chunk of
>>>>> an attribute value is being written to complete the operation in a
>>>>> single step. In case of external chrc, the
>>>>> gatt_db_attribute::write_func equals to chrc_write_cb() which
>>>>> immediately sends D-Bus calls to "WriteValue", thats it doing the
>>>>> "chunking". If there would be a possibility to tell when the call
>>>>> to gatt_db_attribute::write_func is for the last chunk of data,
>>>>> then chrc_write_cb() could do the assembly and invoke "WriteValue"
>>>>> only once. The task can be accomplished by adding an additional
>>>>> argument to gatt_db_write_t functor definition telling that write
>>>>> is chunked and if the chunk is last or not. Not sure, does it look
>>>>> like a "brutal hack" or not for BlueZ gurus :)
>>>>
>>>> I guess what we should do is to queue the prepare writes and wait
>>>> execute to actually do the write in gatt_db,
>>>
>>> This is exactly what is happening now. ExecuteWrite triggers all queued chunks to be written one by one into gatt_db, but not each PrepareWrite. PrepareWrite does only queuing.
>>>
>>>> I would probably leave this to bt_gatt_server since it should be
>>>> doing permission checking, etc, it could handle the prepare queue.
>>>
>>> It looks wrong to me, because the API to write into gatt_db (gatt_db_attribute_write()) is supposed to carry the knowledge about partial writes based on the 'offset' argument. There will be no reason to have 'offset' argument if bt_gatt_server would always squash all 'prepare write' chunks into one.
>>
>> Not really, the offset is still useful to tell where it start, but if
>> prepare writes are in sequence, like it should be for long write, than
>> there is no need to write multiple sets instead we should just use
>> realloc and reuse the same buffer, this is actually cheaper since we
>> don't have to allocate more elements in the queue and cause overall
>> less calls. Now if you have different areas to write to we still going
>> to do multiple writes, but I guess this less likely to happen in
>> practice.
>>
>>>> Anyway, anything that write on prepare is probably wrong since with
>>>> prepare there is the possibility to cancel the queue so we should
>>>> never call WriteValue in the first place.
>>>
>>> As said, it is already like this.
>>>
>>>>
>>>>> Maybe someone with better BlueZ design knowledge can suggest how
>>>>> the 2-nd solution can be implemented or maybe propose something
>>>>> completely different.
>>>>>
>
> bt_gatt_server already correctly handles long writes (and multiple
> writes) according to the spec; it maintains a prepare queue and
> executes them only when it receives an ExecuteWrite. The problem here
> is that a remote client may actually want to only partially modify a
> characteristic value; this is why the offset parameter exists in the
> first place. So this is a problem at the D-Bus API layer, not at the
> shared/gatt layer.

Yes, but still my point about squashing fragments together if they are
contiguous should be taken care of; we don't want to cause several
D-Bus calls to WriteValue in a row for a long write, that is for sure.

> IMO, extending the WriteValue with an offset parameter is very much
> reasonable and it's something that every platform API already allows.
> I think this is useful for both client and server cases, for an
> application to be able to specify an offset and even things like
> Signed Write, WriteWithoutResponse, and so on. Adding a parameter
> dictionary would be clean way to achieve this without breaking
> existing apps (which shouldn't technically matter since the API is
> still in experimental). This way, we also wouldn't have to do all the
> magic guesswork in the client code as to which type of write procedure
> should be used.

An offset should be needed, but things like the signature are not even
possible for the client, since it has no access to the CSRK, by design;
also, WriteWithoutResponse basically means ignoring the D-Bus reply.

--
Luiz Augusto von Dentz

2015-03-17 18:15:12

by Arman Uguray

[permalink] [raw]
Subject: Re: core/gatt-db: Long writes into external GATT characteristics issue

Hi Luiz & Andrejs,

On Tue, Mar 17, 2015 at 9:49 AM, Luiz Augusto von Dentz
<[email protected]> wrote:
> Hi Andrejs,
>
> On Tue, Mar 17, 2015 at 6:04 PM, Andrejs Hanins <[email protected]> wrote:
>> Hi Luiz,
>>
>> On 2015.03.17. 17:15, Luiz Augusto von Dentz wrote:
>>> Hi Andrejs,
>>>
>>> On Tue, Mar 17, 2015 at 1:20 PM, Andrejs Hanins
>>> <[email protected]> wrote:
>>>> Hi,
>>>>
>>>> To begin with, there is an issue with long reading/writing of
>>>> external GATT characteristics registered via GattManager1. Related
>>>> ATT ops are ReadBlob, PrepareWrite, ExecuteWrite etc. Reading issue
>>>> is easily fixed in a patch I sent yesterday. But the write case is
>>>> more tricky. Currently, long write into external chrc (D-Bus method
>>>> "WriteValue") is executed in a "chunked" fashion. Thats is, each
>>>> ATT PrepareWrite op eventually causes an invocation of "WriteValue"
>>>> with appropriate buffer (see exec_next_prep_write()). I see no easy
>>>> way for external D-Bus GATT service to distinguish between full and
>>>> partial writes, so no way to tell when the chrc is fully written.
>>>>
>>>> I see two solutions for the issue:
>>>>
>>>> 1. Add a new method to GattCharacteristic1 called
>>>> "WriteValuePartial" with a single buffer argument, so that
>>>> len(buffer)==0 denotes that all chunks have been written and
>>>> service can assemble the whole buffer for further processing. Or
>>>> the existing "WriteValue" method can be extended with a parameter
>>>> like op_type = ["Single", "Partial", "End"]. I'm not a BlueZ
>>>> design expert, but IMO this approach stinks a bit... D-Bus level
>>>> API should be simple. I doubt that external service should have an
>>>> burden related to ATT MTU fragmentation.
>>>>
>>>> 2. Add some logic to BlueZ daemon so that "WriteValue" is always
>>>> executed with fully assembled data from multiple PrepareWrite ops.
>>>> Such approach will not require any D-Bus API changes, which is
>>>> good. The problem is that I personally don't see a clean and easy
>>>> way how to implement it in the code. Attributes are written using
>>>> gatt_db_attribute_write() which has ATT opcode and write offset,
>>>> but this info is not enough to understand when the last chunk of
>>>> an attribute value is being written to complete the operation in a
>>>> single step. In case of external chrc, the
>>>> gatt_db_attribute::write_func equals to chrc_write_cb() which
>>>> immediately sends D-Bus calls to "WriteValue", thats it doing the
>>>> "chunking". If there would be a possibility to tell when the call
>>>> to gatt_db_attribute::write_func is for the last chunk of data,
>>>> then chrc_write_cb() could do the assembly and invoke "WriteValue"
>>>> only once. The task can be accomplished by adding an additional
>>>> argument to gatt_db_write_t functor definition telling that write
>>>> is chunked and if the chunk is last or not. Not sure, does it look
>>>> like a "brutal hack" or not for BlueZ gurus :)
>>>
>>> I guess what we should do is to queue the prepare writes and wait
>>> execute to actually do the write in gatt_db,
>>
>> This is exactly what is happening now. ExecuteWrite triggers all queued chunks to be written one by one into gatt_db, but not each PrepareWrite. PrepareWrite does only queuing.
>>
>>> I would probably leave this to bt_gatt_server since it should be
>>> doing permission checking, etc, it could handle the prepare queue.
>>
>> It looks wrong to me, because the API to write into gatt_db (gatt_db_attribute_write()) is supposed to carry the knowledge about partial writes based on the 'offset' argument. There will be no reason to have 'offset' argument if bt_gatt_server would always squash all 'prepare write' chunks into one.
>
> Not really, the offset is still useful to tell where it start, but if
> prepare writes are in sequence, like it should be for long write, than
> there is no need to write multiple sets instead we should just use
> realloc and reuse the same buffer, this is actually cheaper since we
> don't have to allocate more elements in the queue and cause overall
> less calls. Now if you have different areas to write to we still going
> to do multiple writes, but I guess this less likely to happen in
> practice.
>
>>> Anyway, anything that write on prepare is probably wrong since with
>>> prepare there is the possibility to cancel the queue so we should
>>> never call WriteValue in the first place.
>>
>> As said, it is already like this.
>>
>>>
>>>> Maybe someone with better BlueZ design knowledge can suggest how
>>>> the 2-nd solution can be implemented or maybe propose something
>>>> completely different.
>>>>

bt_gatt_server already correctly handles long writes (and multiple
writes) according to the spec; it maintains a prepare queue and
executes them only when it receives an ExecuteWrite. The problem here
is that a remote client may actually want to only partially modify a
characteristic value; this is why the offset parameter exists in the
first place. So this is a problem at the D-Bus API layer, not at the
shared/gatt layer.

IMO, extending WriteValue with an offset parameter is very much
reasonable, and it's something that every platform API already allows.
I think this is useful for both the client and server cases, for an
application to be able to specify an offset and even things like
Signed Write, WriteWithoutResponse, and so on. Adding a parameter
dictionary would be a clean way to achieve this without breaking
existing apps (which shouldn't technically matter since the API is
still experimental). This way, we also wouldn't have to do all the
magic guesswork in the client code as to which type of write procedure
should be used.
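To illustrate the parameter-dictionary idea, here is a rough sketch of the options such a dictionary could carry and how the procedure choice might follow from them. Everything here is hypothetical (option names, the enum, the helper); only the MTU-3 threshold reflects the usual ATT rule that a value not fitting a single Write Request needs the Prepare/Execute long-write procedure:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Hypothetical decoded form of a WriteValue options dictionary,
 * e.g. "offset" and "type" entries (names are assumptions). */
enum write_type { WRITE_REQUEST, WRITE_COMMAND, WRITE_SIGNED };

struct write_options {
	uint16_t offset;	/* defaults to 0 when absent */
	enum write_type type;
};

/* Pick the ATT write procedure from explicit options instead of
 * guessing in the client code; returns a descriptive name. */
static const char *att_procedure(const struct write_options *opts,
				 size_t len, uint16_t mtu)
{
	if (opts->type == WRITE_COMMAND)
		return "Write Without Response";

	if (opts->type == WRITE_SIGNED)
		return "Signed Write";

	/* A Write Request payload is limited to MTU - 3 bytes; an
	 * offset or a longer value forces the long-write procedure. */
	if (opts->offset || len > (size_t)(mtu - 3))
		return "Prepare/Execute Write (long write)";

	return "Write Request";
}
```

With options made explicit like this, the daemon no longer has to infer the procedure from the value length alone.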

Thanks,
Arman

2015-03-17 16:49:59

by Luiz Augusto von Dentz

[permalink] [raw]
Subject: Re: core/gatt-db: Long writes into external GATT characteristics issue

Hi Andrejs,

On Tue, Mar 17, 2015 at 6:04 PM, Andrejs Hanins <[email protected]> wrote:
> Hi Luiz,
>
> On 2015.03.17. 17:15, Luiz Augusto von Dentz wrote:
>> Hi Andrejs,
>>
>> On Tue, Mar 17, 2015 at 1:20 PM, Andrejs Hanins
>> <[email protected]> wrote:
>>> Hi,
>>>
>>> To begin with, there is an issue with long reading/writing of
>>> external GATT characteristics registered via GattManager1. Related
>>> ATT ops are ReadBlob, PrepareWrite, ExecuteWrite etc. Reading issue
>>> is easily fixed in a patch I sent yesterday. But the write case is
>>> more tricky. Currently, long write into external chrc (D-Bus method
>>> "WriteValue") is executed in a "chunked" fashion. That is, each
>>> ATT PrepareWrite op eventually causes an invocation of "WriteValue"
>>> with an appropriate buffer (see exec_next_prep_write()). I see no
>>> easy way for an external D-Bus GATT service to distinguish between
>>> full and partial writes, so no way to tell when the chrc is fully
>>> written.
>>>
>>> I see two solutions for the issue:
>>>
>>> 1. Add a new method to GattCharacteristic1 called
>>> "WriteValuePartial" with a single buffer argument, so that
>>> len(buffer)==0 denotes that all chunks have been written and the
>>> service can assemble the whole buffer for further processing. Or
>>> the existing "WriteValue" method can be extended with a parameter
>>> like op_type = ["Single", "Partial", "End"]. I'm not a BlueZ
>>> design expert, but IMO this approach stinks a bit... a D-Bus-level
>>> API should be simple. I doubt that an external service should have
>>> to carry a burden related to ATT MTU fragmentation.
>>>
>>> 2. Add some logic to the BlueZ daemon so that "WriteValue" is
>>> always executed with fully assembled data from multiple
>>> PrepareWrite ops. Such an approach will not require any D-Bus API
>>> changes, which is good. The problem is that I personally don't see
>>> a clean and easy way to implement it in the code. Attributes are
>>> written using gatt_db_attribute_write(), which takes an ATT opcode
>>> and a write offset, but this info is not enough to understand when
>>> the last chunk of an attribute value is being written, to complete
>>> the operation in a single step. In the case of an external chrc,
>>> gatt_db_attribute::write_func equals chrc_write_cb(), which
>>> immediately sends D-Bus calls to "WriteValue", so it is the one
>>> doing the "chunking". If there were a way to tell when the call
>>> to gatt_db_attribute::write_func is for the last chunk of data,
>>> then chrc_write_cb() could do the assembly and invoke "WriteValue"
>>> only once. The task can be accomplished by adding an additional
>>> argument to the gatt_db_write_t functor definition, telling
>>> whether the write is chunked and whether the chunk is the last
>>> one. Not sure whether it looks like a "brutal hack" or not to
>>> BlueZ gurus :)
>>
>> I guess what we should do is to queue the prepare writes and wait
>> for execute to actually do the write in gatt_db,
>
> This is exactly what happens now. ExecuteWrite triggers all queued chunks to be written one by one into gatt_db; PrepareWrite only does the queuing.
>
>> I would probably leave this to bt_gatt_server, since it should be
>> doing permission checking, etc.; it could handle the prepare queue.
>
> It looks wrong to me, because the API to write into gatt_db (gatt_db_attribute_write()) is supposed to carry the knowledge about partial writes via the 'offset' argument. There would be no reason to have the 'offset' argument if bt_gatt_server always squashed all 'prepare write' chunks into one.

Not really, the offset is still useful to tell where the write starts,
but if the prepare writes are sequential, as they should be for a long
write, then there is no need to perform multiple writes; instead we
should just use realloc and reuse the same buffer. This is actually
cheaper, since we don't have to allocate more elements in the queue,
and it results in fewer calls overall. If the writes target different
areas, we are still going to do multiple writes, but I guess that is
less likely to happen in practice.
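The realloc-based coalescing described here might look roughly like the following sketch (all names are invented; this is not actual bt_gatt_server code): a new prepare write that starts exactly where the previously queued entry ends is merged into that entry's buffer instead of being queued separately.

```c
/* Rough sketch of coalescing sequential prepare writes: if a new
 * chunk starts exactly where the queued tail entry ends, grow that
 * entry with realloc instead of allocating a new queue element.
 * Invented names, illustrative only. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct prep_write {
	uint16_t handle;
	uint16_t offset;
	uint8_t *data;
	size_t len;
	struct prep_write *next;
};

/* Try to merge a new prepare write into the tail of the queue.
 * Returns true if merged, false if a new entry must be queued. */
static bool prep_queue_try_merge(struct prep_write *tail, uint16_t handle,
				uint16_t offset, const uint8_t *buf,
				size_t len)
{
	uint8_t *tmp;

	if (!tail || tail->handle != handle)
		return false;

	/* Only strictly sequential chunks can share a buffer */
	if (offset != tail->offset + tail->len)
		return false;

	tmp = realloc(tail->data, tail->len + len);
	if (!tmp)
		return false;

	memcpy(tmp + tail->len, buf, len);
	tail->data = tmp;
	tail->len += len;

	return true;
}
```

A write to a different handle, or to a non-adjacent offset, falls back to queueing a separate entry, matching the "different areas" case mentioned above.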

>> Anyway, anything that writes on prepare is probably wrong, since
>> with prepare there is the possibility to cancel the queue, so we
>> should never call WriteValue in the first place.
>
> As said, it is already like this.
>
>>
>>> Maybe someone with better BlueZ design knowledge can suggest how
>>> the second solution can be implemented or maybe propose something
>>> completely different.
>>>
>>> BR, Andrey -- To unsubscribe from this list: send the line
>>> "unsubscribe linux-bluetooth" in the body of a message to
>>> [email protected] More majordomo info at
>>> http://vger.kernel.org/majordomo-info.html
>>
>>
>>



--
Luiz Augusto von Dentz

2015-03-17 16:04:33

by Andrejs Hanins

[permalink] [raw]
Subject: Re: core/gatt-db: Long writes into external GATT characteristics issue

Hi Luiz,

On 2015.03.17. 17:15, Luiz Augusto von Dentz wrote:
> Hi Andrejs,
>
> On Tue, Mar 17, 2015 at 1:20 PM, Andrejs Hanins
> <[email protected]> wrote:
>> Hi,
>>
>> To begin with, there is an issue with long reading/writing of
>> external GATT characteristics registered via GattManager1. Related
>> ATT ops are ReadBlob, PrepareWrite, ExecuteWrite etc. Reading issue
>> is easily fixed in a patch I sent yesterday. But the write case is
>> more tricky. Currently, long write into external chrc (D-Bus method
>> "WriteValue") is executed in a "chunked" fashion. That is, each
>> ATT PrepareWrite op eventually causes an invocation of "WriteValue"
>> with an appropriate buffer (see exec_next_prep_write()). I see no
>> easy way for an external D-Bus GATT service to distinguish between
>> full and partial writes, so no way to tell when the chrc is fully
>> written.
>>
>> I see two solutions for the issue:
>>
>> 1. Add a new method to GattCharacteristic1 called
>> "WriteValuePartial" with a single buffer argument, so that
>> len(buffer)==0 denotes that all chunks have been written and the
>> service can assemble the whole buffer for further processing. Or
>> the existing "WriteValue" method can be extended with a parameter
>> like op_type = ["Single", "Partial", "End"]. I'm not a BlueZ
>> design expert, but IMO this approach stinks a bit... a D-Bus-level
>> API should be simple. I doubt that an external service should have
>> to carry a burden related to ATT MTU fragmentation.
>>
>> 2. Add some logic to the BlueZ daemon so that "WriteValue" is
>> always executed with fully assembled data from multiple
>> PrepareWrite ops. Such an approach will not require any D-Bus API
>> changes, which is good. The problem is that I personally don't see
>> a clean and easy way to implement it in the code. Attributes are
>> written using gatt_db_attribute_write(), which takes an ATT opcode
>> and a write offset, but this info is not enough to understand when
>> the last chunk of an attribute value is being written, to complete
>> the operation in a single step. In the case of an external chrc,
>> gatt_db_attribute::write_func equals chrc_write_cb(), which
>> immediately sends D-Bus calls to "WriteValue", so it is the one
>> doing the "chunking". If there were a way to tell when the call
>> to gatt_db_attribute::write_func is for the last chunk of data,
>> then chrc_write_cb() could do the assembly and invoke "WriteValue"
>> only once. The task can be accomplished by adding an additional
>> argument to the gatt_db_write_t functor definition, telling
>> whether the write is chunked and whether the chunk is the last
>> one. Not sure whether it looks like a "brutal hack" or not to
>> BlueZ gurus :)
>
> I guess what we should do is to queue the prepare writes and wait
> for execute to actually do the write in gatt_db,

This is exactly what happens now. ExecuteWrite triggers all queued chunks to be written one by one into gatt_db; PrepareWrite only does the queuing.

> I would probably leave this to bt_gatt_server, since it should be
> doing permission checking, etc.; it could handle the prepare queue.

It looks wrong to me, because the API to write into gatt_db (gatt_db_attribute_write()) is supposed to carry the knowledge about partial writes via the 'offset' argument. There would be no reason to have the 'offset' argument if bt_gatt_server always squashed all 'prepare write' chunks into one.

> Anyway, anything that writes on prepare is probably wrong, since
> with prepare there is the possibility to cancel the queue, so we
> should never call WriteValue in the first place.

As said, it is already like this.

>
>> Maybe someone with better BlueZ design knowledge can suggest how
>> the second solution can be implemented or maybe propose something
>> completely different.
>>
>> BR, Andrey
>
>
>

2015-03-17 15:15:44

by Luiz Augusto von Dentz

[permalink] [raw]
Subject: Re: core/gatt-db: Long writes into external GATT characteristics issue

Hi Andrejs,

On Tue, Mar 17, 2015 at 1:20 PM, Andrejs Hanins <[email protected]> wrote:
> Hi,
>
> To begin with, there is an issue with long reading/writing of external GATT characteristics registered via GattManager1. Related ATT ops are ReadBlob, PrepareWrite, ExecuteWrite etc. The reading issue is easily fixed in a patch I sent yesterday, but the write case is more tricky. Currently, a long write into an external chrc (D-Bus method "WriteValue") is executed in a "chunked" fashion. That is, each ATT PrepareWrite op eventually causes an invocation of "WriteValue" with an appropriate buffer (see exec_next_prep_write()). I see no easy way for an external D-Bus GATT service to distinguish between full and partial writes, so no way to tell when the chrc is fully written.
>
> I see two solutions for the issue:
>
> 1. Add a new method to GattCharacteristic1 called "WriteValuePartial" with a single buffer argument, so that len(buffer)==0 denotes that all chunks have been written and the service can assemble the whole buffer for further processing. Or the existing "WriteValue" method can be extended with a parameter like op_type = ["Single", "Partial", "End"]. I'm not a BlueZ design expert, but IMO this approach stinks a bit... a D-Bus-level API should be simple. I doubt that an external service should have to carry a burden related to ATT MTU fragmentation.
>
> 2. Add some logic to the BlueZ daemon so that "WriteValue" is always executed with fully assembled data from multiple PrepareWrite ops. Such an approach will not require any D-Bus API changes, which is good. The problem is that I personally don't see a clean and easy way to implement it in the code. Attributes are written using gatt_db_attribute_write(), which takes an ATT opcode and a write offset, but this info is not enough to understand when the last chunk of an attribute value is being written, to complete the operation in a single step. In the case of an external chrc, gatt_db_attribute::write_func equals chrc_write_cb(), which immediately sends D-Bus calls to "WriteValue", so it is the one doing the "chunking". If there were a way to tell when the call to gatt_db_attribute::write_func is for the last chunk of data, then chrc_write_cb() could do the assembly and invoke "WriteValue" only once. The task can be accomplished by adding an additional argument to the gatt_db_write_t functor definition, telling whether the write is chunked and whether the chunk is the last one. Not sure whether it looks like a "brutal hack" or not to BlueZ gurus :)

I guess what we should do is to queue the prepare writes and wait for
execute to actually do the write in gatt_db. I would probably leave
this to bt_gatt_server, since it should be doing permission checking,
etc.; it could handle the prepare queue. Anyway, anything that writes
on prepare is probably wrong, since with prepare there is the
possibility to cancel the queue, so we should never call WriteValue in
the first place.
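The queue-on-prepare, write-on-execute flow could be sketched like this (plain C, invented names, not the actual bt_gatt_server implementation): PrepareWrite only queues a chunk, and ExecuteWrite either applies the whole queue to the value or discards it, so a cancelled queue never results in a write.

```c
/* Sketch of queue-on-prepare, write-on-execute. The ATT Execute
 * Write flags follow the spec: 0x00 cancels, 0x01 executes. Names
 * prep_entry/prep_queue/prepare_write/execute_write are invented
 * for illustration. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct prep_entry {
	uint16_t offset;
	uint8_t *data;
	size_t len;
	struct prep_entry *next;
};

struct prep_queue {
	struct prep_entry *head;
	struct prep_entry *tail;
};

/* ATT Prepare Write Request handler: only queue, never write */
static bool prepare_write(struct prep_queue *q, uint16_t offset,
					const uint8_t *buf, size_t len)
{
	struct prep_entry *e = malloc(sizeof(*e));

	if (!e)
		return false;

	e->data = malloc(len);
	if (!e->data) {
		free(e);
		return false;
	}

	memcpy(e->data, buf, len);
	e->offset = offset;
	e->len = len;
	e->next = NULL;

	if (q->tail)
		q->tail->next = e;
	else
		q->head = e;
	q->tail = e;

	return true;
}

/* ATT Execute Write Request handler: apply (0x01) or cancel (0x00)
 * the queue. Returns the end offset of the written data, 0 if the
 * queue was cancelled; the queue is always emptied. */
static size_t execute_write(struct prep_queue *q, uint8_t flags,
						uint8_t *value, size_t size)
{
	struct prep_entry *e = q->head;
	size_t end = 0;

	while (e) {
		struct prep_entry *next = e->next;

		if (flags == 0x01 && e->offset + e->len <= size) {
			memcpy(value + e->offset, e->data, e->len);
			if (e->offset + e->len > end)
				end = e->offset + e->len;
		}

		free(e->data);
		free(e);
		e = next;
	}

	q->head = q->tail = NULL;

	return end;
}
```

In this scheme any WriteValue-style callback would be invoked only from execute_write(), never from prepare_write(), which is the point being made above.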

> Maybe someone with better BlueZ design knowledge can suggest how the second solution can be implemented or maybe propose something completely different.
>
> BR, Andrey
> --



--
Luiz Augusto von Dentz