From: Suren Baghdasaryan
Date: Tue, 6 Jul 2021 19:42:37 -0700
Subject: Re: [PATCH v2 1/1] psi: stop relying on timer_pending for poll_work rescheduling
To: Peter Zijlstra
Cc: Johannes Weiner, Ingo Molnar, Juri Lelli, Vincent Guittot,
 Dietmar Eggemann, Steven Rostedt, Benjamin Segall, Mel Gorman,
 Daniel Bristot de Oliveira, matthias.bgg@gmail.com, Minchan Kim,
 Tim Murray, YT Chang, Wenju Xu (许文举), Jonathan JMChen (陳家明),
 LKML, linux-arm-kernel@lists.infradead.org,
 linux-mediatek@lists.infradead.org, kernel-team, SH Chen
References: <20210630205151.137001-1-surenb@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Jul 2, 2021 at 8:49 AM Suren Baghdasaryan wrote:
>
> On Fri, Jul 2, 2021 at 2:28 AM Peter Zijlstra wrote:
> >
> > On Thu, Jul 01, 2021 at 09:28:04AM -0700, Suren Baghdasaryan wrote:
> > > On Thu, Jul 1, 2021 at 9:12 AM Peter Zijlstra wrote:
> > > >
> > > > On Thu, Jul 01, 2021 at 09:09:25AM -0700, Suren Baghdasaryan wrote:
> > > > > On Thu, Jul 1, 2021 at 1:59 AM Peter Zijlstra wrote:
> > > > > >
> > > > > > On Wed, Jun 30, 2021 at 01:51:51PM -0700, Suren Baghdasaryan wrote:
> > > > > > > +	/* cmpxchg should be called even when !force to set poll_scheduled */
> > > > > > > +	if (atomic_cmpxchg(&group->poll_scheduled, 0, 1) && !force)
> > > > > > > 		return;
> > > > > >
> > > > > > Why is that a cmpxchg() ?
> > > > >
> > > > > We want to set poll_scheduled and proceed with rescheduling the timer
> > > > > unless it's already scheduled, so cmpxchg helps us make that decision
> > > > > atomically. Or did I misunderstand your question?
> > > >
> > > > What's wrong with: atomic_xchg(&group->poll_scheduled, 1) ?
> > >
> > > Yes, since poll_scheduled can only be 0 or 1, atomic_xchg should work
> > > fine here. Functionally equivalent, but I assume atomic_xchg() is more
> > > efficient due to having no comparison.
> >
> > Mostly conceptually simpler; the cmpxchg-on-0 means that you have to
> > check whether there's ever any state outside of {0,1}. The xchg() thing
> > is the classical test-and-set pattern.
> >
> > On top of all that, the cmpxchg() can fail, which brings ordering
> > issues.
>
> Oh, I see. That was my mistake. I was wrongly assuming that all RMW
> atomic operations are fully ordered, but indeed the documentation states
> that:
> ```
> - RMW operations that have a return value are fully ordered;
> - RMW operations that are conditional are unordered on FAILURE,
>   otherwise the above rules apply.
> ```
> So that's the actual functional difference here. Thanks for catching
> this and educating me!
>
> > Typically, I think, you want to ensure that everything that happens
> > before psi_schedule_poll_work() is visible to the work when it runs
> > (also see Johannes' email).
>
> Correct, and I think I now understand the concern Johannes expressed.
>
> > In case poll_scheduled is already 1, the cmpxchg will fail and *NOT*
> > provide that ordering. Meaning the work might not observe the latest
> > changes. xchg() doesn't have this subtlety.
>
> Got it.
> So I think the modifications needed in this patch are:
> 1. replacing atomic_cmpxchg(&group->poll_scheduled, 0, 1) with
>    atomic_xchg(&group->poll_scheduled, 1)
> 2. an explicit smp_mb() barrier right after
>    atomic_set(&group->poll_scheduled, 0) in psi_poll_work().
>
> I think that should ensure the correct ordering here.
> If you folks agree I'll respin v3 with these changes (or maybe I
> should respin and we continue the discussion with that version?).

To keep things moving I posted v3
(https://lore.kernel.org/patchwork/patch/1454547) with the changes I
mentioned above. Let's keep discussing it there. Thanks!