From: Vlastimil Babka
To: Axel Rasmussen
Cc: Steven Rostedt, Ingo Molnar, Andrew Morton, Michel Lespinasse,
 Daniel Jordan, Jann Horn, Chinwen Chang, Davidlohr Bueso,
 David Rientjes, Yafang Shao, LKML, Linux MM
Subject: Re: [PATCH v4 1/1] mmap_lock: add tracepoints around lock acquisition
Date: Fri, 23 Oct 2020 19:56:49 +0200
References: <20201020184746.300555-1-axelrasmussen@google.com>
 <20201020184746.300555-2-axelrasmussen@google.com>

On 10/23/20 7:38 PM, Axel Rasmussen wrote:
> On Fri, Oct 23, 2020 at 7:00 AM Vlastimil Babka wrote:
>>
>> On 10/20/20 8:47 PM, Axel Rasmussen wrote:
>> > The goal of these tracepoints is to be able to debug lock contention
>> > issues. This lock is acquired on most (all?) mmap / munmap / page fault
>> > operations, so a multi-threaded process which does a lot of these can
>> > experience significant contention.
>> >
>> > We trace just before we start acquisition, when the acquisition returns
>> > (whether it succeeded or not), and when the lock is released (or
>> > downgraded). The events are broken out by lock type (read / write).
>> >
>> > The events are also broken out by memcg path. For container-based
>> > workloads, users often think of several processes in a memcg as a single
>> > logical "task", so collecting statistics at this level is useful.
>> >
>> > The end goal is to get latency information. This isn't directly included
>> > in the trace events. Instead, users are expected to compute the time
>> > between "start locking" and "acquire returned", using e.g. synthetic
>> > events or BPF. The benefit we get from this is simpler code.
>> >
>> > Because we use tracepoint_enabled() to decide whether or not to trace,
>> > this patch has effectively no overhead unless tracepoints are enabled at
>> > runtime. If tracepoints are enabled, there is a performance impact, but
>> > how much depends on exactly what e.g. the BPF program does.
>> >
>> > Reviewed-by: Michel Lespinasse
>> > Acked-by: Yafang Shao
>> > Acked-by: David Rientjes
>> > Signed-off-by: Axel Rasmussen
>>
>> All seem fine to me, except I started to wonder..
>>
>> > +
>> > +#ifdef CONFIG_MEMCG
>> > +
>> > +DEFINE_PER_CPU(char[MAX_FILTER_STR_VAL], trace_memcg_path);
>> > +
>> > +/*
>> > + * Write the given mm_struct's memcg path to a percpu buffer, and return a
>> > + * pointer to it. If the path cannot be determined, the buffer will contain the
>> > + * empty string.
>> > + *
>> > + * Note: buffers are allocated per-cpu to avoid locking, so preemption must be
>> > + * disabled by the caller before calling us, and re-enabled only after the
>> > + * caller is done with the pointer.
>>
>> Is this enough?
>> What if we fill the buffer and then an interrupt comes and the handler
>> calls here again? We overwrite the buffer and potentially report a wrong
>> cgroup after the execution resumes?
>> If nothing worse can happen (are interrupts disabled while the ftrace
>> code is copying from the buffer?), then it's probably ok?
>
> I think you're right, get_cpu()/put_cpu() only deals with preemption,
> not interrupts.
>
> I'm somewhat sure this code can be called in interrupt context, so I
> don't think we can use locks to prevent this situation. I think it
> works like this: say we acquire the lock, an interrupt happens, and
> then we try to acquire again on the same CPU; we can't sleep, so we're
> stuck.

Yes, we could perhaps trylock() and if it fails, give up on the memcg path.

> I think we can't kmalloc here (instead of a percpu buffer) either,
> since I would guess that kmalloc may also acquire mmap_lock itself?

The overhead is not worth it anyway, for a tracepoint.

> Is adding local_irq_save()/local_irq_restore() in addition to
> get_cpu()/put_cpu() sufficient?

If you do that, then I guess you don't need get_cpu()/put_cpu() anymore.
But it's more costly. But it sounds like we are solving something that the
tracing subsystem has to solve as well to store the trace event data, so
maybe Steven has some better idea?

>>
>> > + */
>> > +static const char *get_mm_memcg_path(struct mm_struct *mm)
>> > +{
>> > +	struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
>> > +
>> > +	if (memcg != NULL && likely(memcg->css.cgroup != NULL)) {
>> > +		char *buf = this_cpu_ptr(trace_memcg_path);
>> > +
>> > +		cgroup_path(memcg->css.cgroup, buf, MAX_FILTER_STR_VAL);
>> > +		return buf;
>> > +	}
>> > +	return "";
>> > +}
>> > +
>> > +#define TRACE_MMAP_LOCK_EVENT(type, mm, ...)				\
>> > +	do {								\
>> > +		get_cpu();						\
>> > +		trace_mmap_lock_##type(mm, get_mm_memcg_path(mm),	\
>> > +				       ##__VA_ARGS__);			\
>> > +		put_cpu();						\
>> > +	} while (0)
>> > +
>> > +#else /* !CONFIG_MEMCG */
>> > +
>> > +#define TRACE_MMAP_LOCK_EVENT(type, mm, ...)				\
>> > +	trace_mmap_lock_##type(mm, "", ##__VA_ARGS__)
>> > +
>> > +#endif /* CONFIG_MEMCG */
>> > +
>> > +/*
>> > + * Trace calls must be in a separate file, as otherwise there's a circular
>> > + * dependency between linux/mmap_lock.h and trace/events/mmap_lock.h.
>> > + */
>> > +
>> > +void __mmap_lock_do_trace_start_locking(struct mm_struct *mm, bool write)
>> > +{
>> > +	TRACE_MMAP_LOCK_EVENT(start_locking, mm, write);
>> > +}
>> > +EXPORT_SYMBOL(__mmap_lock_do_trace_start_locking);
>> > +
>> > +void __mmap_lock_do_trace_acquire_returned(struct mm_struct *mm, bool write,
>> > +					   bool success)
>> > +{
>> > +	TRACE_MMAP_LOCK_EVENT(acquire_returned, mm, write, success);
>> > +}
>> > +EXPORT_SYMBOL(__mmap_lock_do_trace_acquire_returned);
>> > +
>> > +void __mmap_lock_do_trace_released(struct mm_struct *mm, bool write)
>> > +{
>> > +	TRACE_MMAP_LOCK_EVENT(released, mm, write);
>> > +}
>> > +EXPORT_SYMBOL(__mmap_lock_do_trace_released);