Date: Tue, 11 Apr 2023 13:03:44 +0200
From: Peter Zijlstra
To: Mathieu Desnoyers
Cc: linux-kernel@vger.kernel.org, Aaron Lu, Olivier Dion,
	michael.christie@oracle.com, npiggin@gmail.com
Subject: Re: [RFC PATCH v3] sched: Fix performance regression introduced by mm_cid
Message-ID: <20230411110344.GC576825@hirez.programming.kicks-ass.net>
References: <20230405162635.225245-1-mathieu.desnoyers@efficios.com>
 <386a6e32-a746-9eb1-d5ae-e5bedaa8fc75@efficios.com>
 <20230406095122.GF386572@hirez.programming.kicks-ass.net>
 <3b4684ea-5c0d-376b-19cf-195684ec4e0e@efficios.com>
In-Reply-To: <3b4684ea-5c0d-376b-19cf-195684ec4e0e@efficios.com>

On Fri, Apr 07, 2023 at 09:14:36PM -0400, Mathieu Desnoyers wrote:

> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> index 2a243616f222..f20fc0600fcc 100644
> --- a/include/linux/sched/mm.h
> +++ b/include/linux/sched/mm.h
> @@ -37,6 +37,11 @@ static inline void mmgrab(struct mm_struct *mm)
>  	atomic_inc(&mm->mm_count);
>  }
>  
> +static inline void smp_mb__after_mmgrab(void)
> +{
> +	smp_mb__after_atomic();
> +}
> +
>  extern void __mmdrop(struct mm_struct *mm);
>  
>  static inline void mmdrop(struct mm_struct *mm)
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 9e0fa4193499..8d410c0dcb39 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5117,7 +5117,6 @@ prepare_task_switch(struct rq *rq, struct task_struct *prev,
>  	sched_info_switch(rq, prev, next);
>  	perf_event_task_sched_out(prev, next);
>  	rseq_preempt(prev);
> -	switch_mm_cid(prev, next);
>  	fire_sched_out_preempt_notifiers(prev, next);
>  	kmap_local_sched_out();
>  	prepare_task(next);
> @@ -5273,6 +5272,9 @@ context_switch(struct rq *rq, struct task_struct *prev,
>  	 *
>  	 * kernel ->   user   switch + mmdrop() active
>  	 *   user ->   user   switch
> +	 *
> +	 * switch_mm_cid() needs to be updated if the barriers provided
> +	 * by context_switch() are modified.
>  	 */
>  	if (!next->mm) {                                // to kernel
>  		enter_lazy_tlb(prev->active_mm, next);
> @@ -5302,6 +5304,9 @@ context_switch(struct rq *rq, struct task_struct *prev,
>  		}
>  	}
>  
> +	/* switch_mm_cid() requires the memory barriers above. */
> +	switch_mm_cid(prev, next);
> +
>  	rq->clock_update_flags &= ~(RQCF_ACT_SKIP|RQCF_REQ_SKIP);
>  
>  	prepare_lock_switch(rq, next, rf);
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index bc0e1cd0d6ac..f3e7dc2cd1cc 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -3354,6 +3354,37 @@ static inline int mm_cid_get(struct mm_struct *mm)
>  
>  static inline void switch_mm_cid(struct task_struct *prev, struct task_struct *next)
>  {
> +	/*
> +	 * Provide a memory barrier between rq->curr store and load of
> +	 * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
> +	 *
> +	 * Should be adapted if context_switch() is modified.
> +	 */
> +	if (!next->mm) {                                // to kernel
> +		/*
> +		 * user -> kernel transition does not guarantee a barrier, but
> +		 * we can use the fact that it performs an atomic operation in
> +		 * mmgrab().
> +		 */
> +		if (prev->mm)                           // from user
> +			smp_mb__after_mmgrab();
> +		/*
> +		 * kernel -> kernel transition does not change rq->curr->mm
> +		 * state. It stays NULL.
> +		 */
> +	} else {                                        // to user
> +		/*
> +		 * kernel -> user transition does not provide a barrier
> +		 * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
> +		 * Provide it here.
> +		 */
> +		if (!prev->mm)                          // from kernel
> +			smp_mb();
> +		/*
> +		 * user -> user transition guarantees a memory barrier through
> +		 * switch_mm().
> +		 */
> +	}
>  	if (prev->mm_cid_active) {
>  		mm_cid_put_lazy(prev);
>  		prev->mm_cid = -1;

This is going to be a pain wrt.:

  https://lkml.kernel.org/r/20230203071837.1136453-3-npiggin@gmail.com

which is already in -next.

Also, I reckon Nick isn't going to be too happy -- although I reckon
smp_mb() is better than an atomic op on Power. But still. Urgh...

For Nick: the TL;DR is that we need an smp_mb() after setting rq->curr
and before calling switch_mm_cid() *IFF* rq->curr->mm changes.
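Spelled out as pseudo-code -- this is only a sketch of the requirement,
not the actual __schedule()/context_switch() code flow:

	rq->curr = next;                  /* A: publish the new task */

	if (prev->mm != next->mm)         /* i.e. rq->curr->mm changes */
		smp_mb();                 /* order A before B */

	switch_mm_cid(prev, next);        /* B: loads {prev,next}->mm->pcpu_cid[cpu] */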
Normally that barrier is provided by switch_mm() itself per actually
changing the address space, except for the whole active_mm/lazy swizzle
nonsense, which still leaves a few holes.

The very much longer explanation is upthread here:

  https://lkml.kernel.org/r/fdaa7242-4ddd-fbe2-bc0e-6c62054dbde8@efficios.com
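And in case it helps to see why a *full* barrier is what's needed, below
is a tiny userspace C11 analogue of the store-buffering shape involved.
Everything in it is made up for illustration -- the names only loosely
mirror the rq->curr publish on one side and a remote reader of the
per-cpu cid state on the other -- it is not kernel code:

/* sb_sketch.c - illustration only; build with: cc -O2 -pthread sb_sketch.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int curr;       /* stand-in for the rq->curr publish  */
static atomic_int cid_state;  /* stand-in for the per-cpu cid state */
static int r0, r1;

static void *switch_side(void *arg)   /* "context switch" CPU */
{
	(void)arg;
	atomic_store_explicit(&curr, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);  /* the smp_mb() in question */
	r0 = atomic_load_explicit(&cid_state, memory_order_relaxed);
	return NULL;
}

static void *remote_side(void *arg)   /* remote observer CPU */
{
	(void)arg;
	atomic_store_explicit(&cid_state, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);
	r1 = atomic_load_explicit(&curr, memory_order_relaxed);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, switch_side, NULL);
	pthread_create(&b, NULL, remote_side, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/*
	 * With both fences the outcome r0 == 0 && r1 == 0 is forbidden;
	 * drop either fence and it becomes possible on weakly ordered
	 * hardware -- each side misses the other's store, which is the
	 * kind of hole a missing smp_mb() leaves.
	 */
	printf("r0=%d r1=%d\n", r0, r1);
	return 0;
}

A single run obviously won't demonstrate the relaxed outcome; it's only
meant to show the shape of the pairing.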