Date: Tue, 28 Mar 2023 17:20:49 -0400
From: Steven Rostedt
To: Beau Belgrave
Cc: mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
 dcook@linux.microsoft.com, alanau@linux.microsoft.com, brauner@kernel.org,
 akpm@linux-foundation.org, ebiederm@xmission.com, keescook@chromium.org,
 tglx@linutronix.de, linux-trace-devel@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: Re: [PATCH v8 04/11] tracing/user_events: Fixup enable faults asyncly
Message-ID: <20230328172049.10061257@gandalf.local.home>
In-Reply-To: <20230221211143.574-5-beaub@linux.microsoft.com>
References: <20230221211143.574-1-beaub@linux.microsoft.com>
 <20230221211143.574-5-beaub@linux.microsoft.com>

On Tue, 21 Feb 2023 13:11:36 -0800
Beau Belgrave wrote:

> @@ -263,7 +277,85 @@ static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr)
>  }
>  
>  static int user_event_enabler_write(struct user_event_mm *mm,
> -                                    struct user_event_enabler *enabler)
> +                                    struct user_event_enabler *enabler,
> +                                    bool fixup_fault);
> +
> +static void user_event_enabler_fault_fixup(struct work_struct *work)
> +{
> +        struct user_event_enabler_fault *fault = container_of(
> +                work, struct user_event_enabler_fault, work);
> +        struct user_event_enabler *enabler = fault->enabler;
> +        struct user_event_mm *mm = fault->mm;
> +        unsigned long uaddr = enabler->addr;
> +        int ret;
> +
> +        ret = user_event_mm_fault_in(mm, uaddr);
> +
> +        if (ret && ret != -ENOENT) {
> +                struct user_event *user = enabler->event;
> +
> +                pr_warn("user_events: Fault for mm: 0x%pK @ 0x%llx event: %s\n",
> +                        mm->mm, (unsigned long long)uaddr, EVENT_NAME(user));
> +        }
> +
> +        /* Prevent state changes from racing */
> +        mutex_lock(&event_mutex);
> +
> +        /*
> +         * If we managed to get the page, re-issue the write. We do not
> +         * want to get into a possible infinite loop, which is why we only
> +         * attempt again directly if the page came in. If we couldn't get
> +         * the page here, then we will try again the next time the event is
> +         * enabled/disabled.
> +         */

In what case would we not get the page? A bad page mapping? User space
doing something silly? Or something else? And in that case, how could it
go into an infinite loop? Can that only happen if user space is doing
something mischievous?
-- Steve

> +        clear_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler));
> +
> +        if (!ret) {
> +                mmap_read_lock(mm->mm);
> +                user_event_enabler_write(mm, enabler, true);
> +                mmap_read_unlock(mm->mm);
> +        }
> +
> +        mutex_unlock(&event_mutex);
> +
> +        /* In all cases we no longer need the mm or fault */
> +        user_event_mm_put(mm);
> +        kmem_cache_free(fault_cache, fault);
> +}
> +
> +static bool user_event_enabler_queue_fault(struct user_event_mm *mm,
> +                                           struct user_event_enabler *enabler)
> +{
> +        struct user_event_enabler_fault *fault;
> +
> +        fault = kmem_cache_zalloc(fault_cache, GFP_NOWAIT | __GFP_NOWARN);
> +
> +        if (!fault)
> +                return false;
> +
> +        INIT_WORK(&fault->work, user_event_enabler_fault_fixup);
> +        fault->mm = user_event_mm_get(mm);
> +        fault->enabler = enabler;
> +
> +        /* Don't try to queue in again while we have a pending fault */
> +        set_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler));
> +
> +        if (!schedule_work(&fault->work)) {
> +                /* Allow another attempt later */
> +                clear_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler));
> +
> +                user_event_mm_put(mm);
> +                kmem_cache_free(fault_cache, fault);
> +
> +                return false;
> +        }
> +
> +        return true;
> +}
> +
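
For readers following the thread without the tree handy, the quoted hunk follows a
common deferred-retry pattern: a write that faults marks the enabler as "faulting",
queues work to fault the page in, retries the write exactly once from that work, and
otherwise waits for the next enable/disable to try again. Below is a minimal,
userspace-only sketch of that pattern. All names in it (struct enabler, fake_fault_in,
queue_fault_fixup) are hypothetical stand-ins, and a pthread stands in for the kernel
workqueue; it is not the user_events implementation itself.

/*
 * Standalone sketch of the deferred fault-fixup pattern in the quoted patch.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct enabler {
        atomic_bool faulting;   /* illustrative stand-in for ENABLE_VAL_FAULTING_BIT */
        int value;              /* stands in for the user-space enable word */
};

/* Pretend to fault the page in; return nonzero to exercise the failure path. */
static int fake_fault_in(struct enabler *e)
{
        (void)e;
        return 0;               /* 0 == page is now present */
}

/* Retry the write; the kernel analogue is user_event_enabler_write(). */
static void enabler_write(struct enabler *e, int val)
{
        e->value = val;
}

static void *fault_fixup(void *arg)
{
        struct enabler *e = arg;
        int ret = fake_fault_in(e);

        if (ret)
                fprintf(stderr, "fault-in failed: %d\n", ret);

        /* Allow future attempts, then retry at most once: no retry loop. */
        atomic_store(&e->faulting, false);

        if (!ret)
                enabler_write(e, 1);

        return NULL;
}

/* Queue the fixup unless one is already pending for this enabler. */
static bool queue_fault_fixup(struct enabler *e, pthread_t *worker)
{
        bool expected = false;

        if (!atomic_compare_exchange_strong(&e->faulting, &expected, true))
                return false;   /* a fixup is already in flight */

        if (pthread_create(worker, NULL, fault_fixup, e) != 0) {
                /* Could not queue: allow another attempt later. */
                atomic_store(&e->faulting, false);
                return false;
        }

        return true;
}

int main(void)
{
        struct enabler e = { .faulting = false, .value = 0 };
        pthread_t worker;

        if (queue_fault_fixup(&e, &worker))
                pthread_join(worker, NULL);

        printf("enabler value after fixup: %d\n", e.value);
        return 0;
}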