On Wed, Jun 09, 2021 at 11:59:38AM +0300, Dan Carpenter wrote:
> On Wed, Jun 09, 2021 at 05:28:10PM +0900, William Breathitt Gray wrote:
> > On Wed, Jun 09, 2021 at 11:07:08AM +0300, Dan Carpenter wrote:
> > > On Wed, Jun 09, 2021 at 10:31:29AM +0900, William Breathitt Gray wrote:
> > > > +static int counter_set_event_node(struct counter_device *const counter,
> > > > + struct counter_watch *const watch,
> > > > + const struct counter_comp_node *const cfg)
> > > > +{
> > > > + struct counter_event_node *event_node;
> > > > + struct counter_comp_node *comp_node;
> > > > +
> > >
> > > The caller should be holding the counter->events_list_lock lock but it's
> > > not.
> >
> > Hi Dan,
> >
> > The counter_set_event_node() function doesn't access or modify
> > counter->events_list (it works on counter->next_events_list) so holding
> > the counter->events_list_lock here isn't necessary.
> >
>
> There needs to be some sort of locking or this function can race with
> itself. (Two threads add the same event at exactly the same time). It
> looks like it can also race with counter_disable_events() leading to a
> use after free.

All right, I'll add a lock to protect this function so that it can't
race with itself or with counter_disable_events().

> > > > + /* Search for event in the list */
> > > > + list_for_each_entry(event_node, &counter->next_events_list, l)
> > > > + if (event_node->event == watch->event &&
> > > > + event_node->channel == watch->channel)
> > > > + break;
> > > > +
> > > > + /* If event is not already in the list */
> > > > + if (&event_node->l == &counter->next_events_list) {
> > > > + /* Allocate new event node */
> > > > + event_node = kmalloc(sizeof(*event_node), GFP_ATOMIC);
>
> Btw, say we decided that we can add/remove events locklessly, then these
> GFP_ATOMICs can be changed to GFP_KERNEL.

Because I'll be using a lock, I'll keep these as GFP_ATOMIC after all.

Thanks,

William Breathitt Gray