Date: Fri, 10 Aug 2018 13:54:31 +0200
From: Jiri Olsa
To: Stephane Eranian
Cc: LKML, Arnaldo Carvalho de Melo, Peter Zijlstra, mingo@elte.hu
Subject: Re: [PATCH v2] perf ordered_events: fix crash in free_dup_event()
Message-ID: <20180810115431.GA4162@krava>
References: <1533767600-7794-1-git-send-email-eranian@google.com> <20180809080721.GB19243@krava>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Aug 10, 2018 at 01:21:18AM -0700, Stephane Eranian wrote:
> On Thu, Aug 9, 2018 at 1:07 AM Jiri Olsa wrote:
> >
> > On Wed, Aug 08, 2018 at 03:33:20PM -0700, Stephane Eranian wrote:
> > > This patch fixes a bug in ordered_event.c:alloc_event().
> > > An ordered_event struct was not initialized properly, potentially
> > > causing crashes later on in free_dup_event() depending on the
> > > content of the memory. If it was NULL, then it would work fine;
> > > otherwise, it could cause crashes such as:
> >
> > I'm now a little puzzled what we use this first event for..
> > I can't see anything special about it, other than it's added
> > on the list uninitialized ;-)
> >
> > it seems to work properly when we ditch it.. might be some
> > prehistoric leftover or I'm terribly missing something
> >
> You need to keep track of the buffers to free. You do not free the
> ordered_event structs individually. For each oe->buffer, you need one
> free(). Each buffer is put on the to_free list. But to link it into
> the list it needs a list_head. This is what buffer[0] is used for.
> But the logic is broken in ordered_events__free(). It does not free
> individual ordered_event structs, but a buffer with many. Yet, it is
> missing freeing all of the duped events.
>
> void ordered_events__free(struct ordered_events *oe)
> {
>         while (!list_empty(&oe->to_free)) {
>                 struct ordered_event *buffer;
>
>                 buffer = list_entry(oe->to_free.next, struct ordered_event, list);
>                 list_del(&buffer->list);
> ---->           free_dup_event(oe, buffer->event);
>                 free(buffer);
>         }
> }
>
> This only frees the dup_event of buffer[0], which we know is NULL (well, now).
> It needs to walk all the entries in buffer[] to free buffer[x].event.

yes.. if there's copy_on_queue set, we need to do that,
otherwise we're leaking all the events

> I think the goal was likely to avoid adding another list_head field to
> each ordered_event and instead use one per allocated buffer.
> This is very convoluted and error prone, as we are seeing right now.
> This should be cleaned up. So either you add a list_head to each
> ordered_event, or you walk buffer[x] in ordered_events__free().
>
> At this point, this is my understanding.
> Do you agree?

yea, I see it now.. thanks for pointing this out

how about something like below? haven't tested properly yet

jirka

---
diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c
index bad9e0296e9a..5c0d85e90a18 100644
--- a/tools/perf/util/ordered-events.c
+++ b/tools/perf/util/ordered-events.c
@@ -80,14 +80,20 @@ static union perf_event *dup_event(struct ordered_events *oe,
 	return oe->copy_on_queue ? __dup_event(oe, event) : event;
 }
 
-static void free_dup_event(struct ordered_events *oe, union perf_event *event)
+static void __free_dup_event(struct ordered_events *oe, union perf_event *event)
 {
-	if (event && oe->copy_on_queue) {
+	if (event) {
 		oe->cur_alloc_size -= event->header.size;
 		free(event);
 	}
 }
 
+static void free_dup_event(struct ordered_events *oe, union perf_event *event)
+{
+	if (oe->copy_on_queue)
+		__free_dup_event(oe, event);
+}
+
 #define MAX_SAMPLE_BUFFER (64 * 1024 / sizeof(struct ordered_event))
 
 static struct ordered_event *alloc_event(struct ordered_events *oe,
 					 union perf_event *event)
@@ -104,11 +110,11 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 		new = list_entry(cache->next, struct ordered_event, list);
 		list_del(&new->list);
 	} else if (oe->buffer) {
-		new = oe->buffer + oe->buffer_idx;
+		new = &oe->buffer->event[oe->buffer_idx];
 		if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
 			oe->buffer = NULL;
 	} else if (oe->cur_alloc_size < oe->max_alloc_size) {
-		size_t size = MAX_SAMPLE_BUFFER * sizeof(*new);
+		size_t size = sizeof(*oe->buffer) + MAX_SAMPLE_BUFFER * sizeof(*new);
 
 		oe->buffer = malloc(size);
 		if (!oe->buffer) {
@@ -122,9 +128,8 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 		oe->cur_alloc_size += size;
 		list_add(&oe->buffer->list, &oe->to_free);
 
-		/* First entry is abused to maintain the to_free list. */
-		oe->buffer_idx = 2;
-		new = oe->buffer + 1;
+		oe->buffer_idx = 1;
+		new = &oe->buffer->event[0];
 	} else {
 		pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
 	}
@@ -300,15 +305,27 @@ void ordered_events__init(struct ordered_events *oe, ordered_events__deliver_t d
 	oe->deliver = deliver;
 }
 
+static void
+ordered_events_buffer__free(struct ordered_events_buffer *buffer,
+			    struct ordered_events *oe)
+{
+	if (oe->copy_on_queue) {
+		unsigned int i;
+
+		for (i = 0; i < MAX_SAMPLE_BUFFER; i++)
+			__free_dup_event(oe, buffer->event[i].event);
+	}
+
+	free(buffer);
+}
+
 void ordered_events__free(struct ordered_events *oe)
 {
-	while (!list_empty(&oe->to_free)) {
-		struct ordered_event *event;
+	struct ordered_events_buffer *buffer, *tmp;
 
-		event = list_entry(oe->to_free.next, struct ordered_event, list);
-		list_del(&event->list);
-		free_dup_event(oe, event->event);
-		free(event);
+	list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
+		list_del(&buffer->list);
+		ordered_events_buffer__free(buffer, oe);
 	}
 }
 
diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h
index 8c7a2948593e..1338d5c345dc 100644
--- a/tools/perf/util/ordered-events.h
+++ b/tools/perf/util/ordered-events.h
@@ -25,23 +25,28 @@ struct ordered_events;
 typedef int (*ordered_events__deliver_t)(struct ordered_events *oe,
 					 struct ordered_event *event);
 
+struct ordered_events_buffer {
+	struct list_head	list;
+	struct ordered_event	event[0];
+};
+
 struct ordered_events {
-	u64			last_flush;
-	u64			next_flush;
-	u64			max_timestamp;
-	u64			max_alloc_size;
-	u64			cur_alloc_size;
-	struct list_head	events;
-	struct list_head	cache;
-	struct list_head	to_free;
-	struct ordered_event	*buffer;
-	struct ordered_event	*last;
-	ordered_events__deliver_t deliver;
-	int			buffer_idx;
-	unsigned int		nr_events;
-	enum oe_flush		last_flush_type;
-	u32			nr_unordered_events;
-	bool			copy_on_queue;
+	u64				 last_flush;
+	u64				 next_flush;
+	u64				 max_timestamp;
+	u64				 max_alloc_size;
+	u64				 cur_alloc_size;
+	struct list_head		 events;
+	struct list_head		 cache;
+	struct list_head		 to_free;
+	struct ordered_events_buffer	*buffer;
+	struct ordered_event		*last;
+	ordered_events__deliver_t	 deliver;
+	int				 buffer_idx;
+	unsigned int			 nr_events;
+	enum oe_flush			 last_flush_type;
+	u32				 nr_unordered_events;
+	bool				 copy_on_queue;
 };
 
 int ordered_events__queue(struct ordered_events *oe, union perf_event *event,