From: "Rafael J. Wysocki"
Date: Tue, 24 May 2022 13:53:31 +0200
Subject: Re: [PATCH v3] cpufreq: fix race on cpufreq online
To: Viresh Kumar
Cc: Schspa Shi, Linux Kernel Mailing List, Linux PM
References: <20220511043515.fn2gz6q3kcpdai5p@vireshk-i7> <20220511122114.wccgyur6g3qs6fps@vireshk-i7> <20220512065623.q4aa6y52pst3zpxu@vireshk-i7> <20220513042705.nbnd6vccuiu6lb7a@vireshk-i7> <20220524111456.hw4qugsvt4bm7reh@vireshk-i7> <20220524112917.apcvvvblksg7jdu4@vireshk-i7>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, May 24, 2022 at 1:48 PM Rafael J. Wysocki wrote:
>
> On Tue, May 24, 2022 at 1:29 PM Viresh Kumar wrote:
> >
> > On 24-05-22, 13:22, Rafael J. Wysocki wrote:
> > > On Tue, May 24, 2022 at 1:15 PM Viresh Kumar wrote:
> > > >
> > > > On 13-05-22, 09:57, Viresh Kumar wrote:
> > > > > On 12-05-22, 12:49, Rafael J. Wysocki wrote:
> > > > > > > > Moreover, I'm not sure why the locking dance in store() is necessary.
> > > > > > >
> > > > > > > commit fdd320da84c6 ("cpufreq: Lock CPU online/offline in cpufreq_register_driver()")
> > > > > >
> > > > > > I get that, but I'm wondering if locking CPU hotplug from store() is
> > > > > > needed at all. I mean, if we are in store(), we are holding an active
> > > > > > reference to the policy kobject, so the policy cannot go away until we
> > > > > > are done anyway. Thus it should be sufficient to use the policy rwsem
> > > > > > for synchronization.
> > > > >
> > > > > I think after the current patchset is applied and we have the inactive
> > > > > policy check in store(), we won't require the dance after all.
> > > >
> > > > I was writing a patch for this and then thought I should look at the
> > > > mails from around the time you sent the patch, and found the reason why
> > > > we need the locking dance :)
> > > >
> > > > https://lore.kernel.org/lkml/20150729091136.GN7557@n2100.arm.linux.org.uk/
> >
> > Actually no, this is for the lock in cpufreq_driver_register().
> > > Well, again, if we are in store(), we are holding a reference to the
> > > policy kobject, so this is not initialization time.
> >
> > This is the commit which made the change.
> >
> > commit 4f750c930822 ("cpufreq: Synchronize the cpufreq store_*() routines with CPU hotplug")
>
> So this was done before the entire CPU hotplug rework and it was
> useful at that time.
>
> The current code always runs cpufreq_set_policy() under policy->rwsem
> and governors are stopped under policy->rwsem, so this particular race
> cannot happen AFAICS.
>
> Locking CPU hotplug prevents CPUs from going away while store() is
> running, but in order to run store(), the caller must hold an active
> reference to the policy kobject. That prevents the policy from being
> freed and so policy->rwsem can be acquired. After policy->rwsem has
> been acquired, policy->cpus can be checked to determine whether or not
> there are any online CPUs for the given policy (there may be none),
> because policy->cpus is only manipulated under policy->rwsem.
>
> If a CPU that belongs to the given policy is going away,
> cpufreq_offline() has to remove it from policy->cpus under
> policy->rwsem, so either it has to wait for store() to release
> policy->rwsem, or store() will acquire policy->rwsem after it and will
> find that policy->cpus is empty.
Moreover, locking CPU hotplug doesn't actually prevent cpufreq_remove_dev() from running, which can happen when the cpufreq driver is unregistered, for example.