On 12/1/2017 2:33 AM, Saeed Mahameed wrote:
> On Mon, Nov 27, 2017 at 2:05 AM, Sagar Arun Kamble
> <[email protected]> wrote:
>>
>> On 11/24/2017 7:01 PM, Thomas Gleixner wrote:
>>> On Fri, 24 Nov 2017, Sagar Arun Kamble wrote:
>>>> On 11/24/2017 12:29 AM, Thomas Gleixner wrote:
>>>>> On Thu, 23 Nov 2017, Sagar Arun Kamble wrote:
>>>>>> We need inputs on possible optimizations to the timecounter/cyclecounter
>>>>>> structures and their usage.
>>>>>> This mail is in response to the review of patch
>>>>>> https://patchwork.freedesktop.org/patch/188448/.
>>>>>>
>>>>>> As per Chris's observation below, about a dozen timecounter users in
>>>>>> the kernel define the following structures individually:
>>>>>>
>>>>>> spinlock_t lock;
>>>>>> struct cyclecounter cc;
>>>>>> struct timecounter tc;
>>>>>>
>>>>>> Can we move the lock and cc into tc? That way it will be more convenient.
>>>>>> It would also allow unifying the locking/overflow watchdog handling
>>>>>> across all drivers.
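
For reference, a rough sketch of what that consolidation could look like
(illustration only, not the actual patch; the field names follow
include/linux/timecounter.h, but embedding the cyclecounter and a lock
directly in struct timecounter is an assumption, not what mainline
defines today):

/* Sketch only: cyclecounter and lock folded into struct timecounter so
 * drivers no longer declare all three members separately.
 */
struct timecounter {
        spinlock_t lock;                /* would protect the fields below */
        struct cyclecounter cc;         /* embedded instead of a pointer */
        u64 cycle_last;
        u64 nsec;
        u64 mask;
        u64 frac;
};
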
>>>>> Looks like none of the timecounter usage sites has a real need to
>>>>> separate timecounter and cyclecounter.
>>>> Yes. I will share a patch for this change.
>>>>
>>>>> The lock is a different question. The locking of the various drivers
>>>>> differs and I have no idea how you want to handle that. Just sticking
>>>>> the lock into the data structure, then not making use of it in the
>>>>> timecounter code and leaving it to the call sites, does not make sense.
>>>> Most of the locks are held around timecounter_read. In some instances the
>>>> lock is held when the cyclecounter is updated standalone or along with
>>>> timecounter calls. I was thinking that if we move the lock into the
>>>> timecounter functions, drivers would only have to do locking around their
>>>> own operations on the cyclecounter. But another problem I see is that
>>>> there are variations of the locking calls, like lock_irqsave, lock_bh and
>>>> write_lock_irqsave (some using rwlock_t). Should all this locking be left
>>>> to the drivers only, then?
>>> You could have the lock in the struct and protect the inner workings
>>> in the related core functions.
>>>
>>> That might remove locking requirements from some of the callers and the
>>> others still have their own thing around it.
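
A rough sketch of that direction, assuming the embedded lock from the
sketch above (illustration only; timecounter_read() is the existing
helper, the locked wrapper is hypothetical):

/* Hypothetical wrapper: the core takes the embedded lock itself, so callers
 * that only forward the time would no longer need their own locking.
 */
u64 timecounter_read_locked(struct timecounter *tc)
{
        unsigned long flags;
        u64 nsec;

        spin_lock_irqsave(&tc->lock, flags);
        nsec = timecounter_read(tc);    /* existing, unlocked helper */
        spin_unlock_irqrestore(&tc->lock, flags);

        return nsec;
}
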
>>
>> For drivers with a static/fixed cyclecounter, we could rely only on the
>> lock inside the timecounter. Most of the network drivers update the
>> cyclecounter at runtime, and they would have to rely on two locks if we
>> add one to the timecounter. This may not be efficient for them. Also, the
>> lock in the timecounter would have to be less restrictive (maybe a
>> seqlock), I guess.
>>
>> I have Cc'd the Mellanox list for inputs on this.
>>
>> I have started to feel that the current approach of the drivers managing
>> the locks is the right one, so it is better to leave the lock out of the
>> timecounter.
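
To illustrate the "less restrictive" case above: a seqlock-based
conversion could look roughly like this (sketch only; the seqlock member
in struct timecounter is assumed, and it only helps for
timecounter_cyc2time(), which does not modify the timecounter --
timecounter_read() and timecounter_adjtime() update internal state and
would still need the write side):

/* Sketch: the read side retries instead of blocking writers. Assumes a
 * seqlock_t "seq" were added to struct timecounter.
 */
u64 timecounter_cyc2time_seq(struct timecounter *tc, u64 cycles)
{
        unsigned int seq;
        u64 nsec;

        do {
                seq = read_seqbegin(&tc->seq);
                nsec = timecounter_cyc2time(tc, cycles);
        } while (read_seqretry(&tc->seq, seq));

        return nsec;
}
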
>>
> I agree here,
>
> In mlx5 we rely on our own read/write lock to serialize access to the
> mlx5_clock struct (the mlx5 timecounter and cyclecounter).
> The access is not as simple as
> lock()
> call timecounter API
> unlock()
>
> Sometimes we also explicitly update/adjust the cycle counters with
> mlx5-specific calculations after we read the timecounter, all inside
> our lock, e.g. in mlx5_ptp_adjfreq():
>
> write_lock_irqsave(&clock->lock, flags);
> timecounter_read(&clock->tc);
> clock->cycles.mult = neg_adj ? clock->nominal_c_mult - diff :
>                                clock->nominal_c_mult + diff;
> write_unlock_irqrestore(&clock->lock, flags);
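
For completeness, the read side of such a scheme typically looks like the
sketch below (illustrative only; the helper name is made up and the actual
mlx5 code may differ, but the field names mirror the snippet above):

/* Multiple readers can convert raw hardware cycles to nanoseconds
 * concurrently under the rwlock read side, because timecounter_cyc2time()
 * does not modify the timecounter.
 */
static u64 clock_cyc2time(struct mlx5_clock *clock, u64 cycles)
{
        unsigned long flags;
        u64 nsec;

        read_lock_irqsave(&clock->lock, flags);
        nsec = timecounter_cyc2time(&clock->tc, cycles);
        read_unlock_irqrestore(&clock->lock, flags);

        return nsec;
}
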
>
> So I don't think it will be a simple task to have a generic thread-safe
> timecounter API without the need to specifically adjust it for all
> driver use-cases.
> Also, as said above, at runtime it is not obvious in which context the
> timecounter will be accessed (irq/softirq/user).
>
> Let's keep it as is, and let the driver decide which locking scheme is
> most suitable for it.
Yes. Thanks for your inputs, Saeed.
Regards
Sagar
>
> Thanks,
> Saeed.
>
>>> Thanks,
>>>
>>> tglx
>>