Date: Mon, 27 May 2024 19:43:56 -0400
From: Steven Rostedt
To: Petr Pavlu
Cc: Masami Hiramatsu, Mathieu Desnoyers, linux-trace-kernel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-rt-users
Subject: Re: [PATCH 2/2] ring-buffer: Fix a race between readers and resize checks
Message-ID: <20240527194356.5078b56f@rorschach.local.home>
In-Reply-To: <2b920bab-23a2-4a8d-95c2-b69472d38373@suse.com>
References: <20240517134008.24529-1-petr.pavlu@suse.com>
 <20240517134008.24529-3-petr.pavlu@suse.com>
 <20240520095037.33a7fde6@gandalf.local.home>
 <2b920bab-23a2-4a8d-95c2-b69472d38373@suse.com>

On Mon, 27 May 2024 11:36:55 +0200
Petr Pavlu wrote:

> >>  static void rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
> >>  {
> >> @@ -2200,8 +2205,13 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
> >>  	 */
> >>  	synchronize_rcu();
> >>  	for_each_buffer_cpu(buffer, cpu) {
> >> +		unsigned long flags;
> >> +
> >>  		cpu_buffer = buffer->buffers[cpu];
> >> +		raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
> >>  		rb_check_pages(cpu_buffer);
> >> +		raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock,
> >> +					   flags);
> >
> > Putting my RT hat on, I really don't like the above fix. The
> > rb_check_pages() iterates all subbuffers, which makes the time that
> > interrupts are disabled non-deterministic.
>
> I see, this applies also to the same rb_check_pages() validation invoked
> from ring_buffer_read_finish().
>
> >
> > Instead, I would rather have something where we disable readers while
> > we do the check, and re-enable them.
> >
> > 	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
> > 	cpu_buffer->read_disabled++;
> > 	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
> >
> > 	// Also, don't put flags on a new line. We are allowed to go 100 characters now.
>
> Noted.
>
> > 	rb_check_pages(cpu_buffer);
> >
> > 	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
> > 	cpu_buffer->read_disabled--;
> > 	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
> >
> > Or something like that. Yes, that also requires creating a new
> > "read_disabled" field in the ring_buffer_per_cpu code.
>
> I think this would work, but I'm personally not immediately sold on this
> approach. If I understand the idea correctly, readers should then check
> whether cpu_buffer->read_disabled is set and bail out with some error if
> that is the case. The rb_check_pages() function is only self-check code,
> and as such, I feel it doesn't justify disrupting readers with a new
> error condition and adding more complex locking.

Honestly, this code was never made for more than one reader per
cpu_buffer. I'm perfectly fine if check_pages() causes other readers of
the same per_cpu buffer to get -EBUSY.

Do you really see this being a problem? What use case is there for
hitting check_pages() and reading the same cpu buffer at the same time?

-- Steve
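
For readers following along, below is a minimal, self-contained sketch of the
read_disabled scheme discussed in the thread. It is an illustration under
stated assumptions, not the actual ring-buffer code: the struct name
cpu_buffer_sketch and the helpers reader_begin(), check_pages_locked_out(),
and rb_check_pages_sketch() are hypothetical stand-ins. The point it shows is
that reader_lock is only held for the flag updates, so the irqs-off window
stays short no matter how many sub-buffers the validator walks, while
concurrent readers simply see -EBUSY.

#include <linux/spinlock.h>
#include <linux/errno.h>

/* Hypothetical stand-in for struct ring_buffer_per_cpu (sketch only). */
struct cpu_buffer_sketch {
	raw_spinlock_t	reader_lock;
	unsigned int	read_disabled;	/* readers bail out while non-zero */
};

/* Stand-in for rb_check_pages(): would walk and validate the sub-buffer list. */
static void rb_check_pages_sketch(struct cpu_buffer_sketch *cpu_buffer)
{
}

/* Reader side: fail fast with -EBUSY instead of racing with the validator. */
static int reader_begin(struct cpu_buffer_sketch *cpu_buffer)
{
	unsigned long flags;
	int ret = 0;

	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
	if (cpu_buffer->read_disabled)
		ret = -EBUSY;
	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

	return ret;
}

/*
 * Validator side: lock out new readers, run the (potentially long) check with
 * interrupts enabled, then let readers back in. Only the counter updates are
 * done under reader_lock, keeping the irqs-off time bounded and small.
 */
static void check_pages_locked_out(struct cpu_buffer_sketch *cpu_buffer)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
	cpu_buffer->read_disabled++;
	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

	rb_check_pages_sketch(cpu_buffer);

	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
	cpu_buffer->read_disabled--;
	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
}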