Date: Thu, 17 Jun 2021 16:49:36 +0530
From: Viresh Kumar
To: Ionela Voinescu
Cc: Rafael Wysocki, Sudeep Holla, Ingo Molnar, Peter Zijlstra,
    Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
    Ben Segall, Mel Gorman, Daniel Bristot de Oliveira,
    linux-pm@vger.kernel.org, Qian Cai, linux-acpi@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH V2 3/3] cpufreq: CPPC: Add support for frequency invariance
Message-ID: <20210617111936.cfjzoh6g5zvolaf5@vireshk-i7>
References: <20210616124806.GA6495@arm.com>
    <20210617032416.r2gfp25xxvhc5t4x@vireshk-i7>
    <20210617103415.GA29877@arm.com>
In-Reply-To: <20210617103415.GA29877@arm.com>

On 17-06-21, 11:34, Ionela Voinescu wrote:
> I might be missing something, but when you offline a single CPU in a
> policy, the worst that can happen is that a last call to
> cppc_scale_freq_tick() would have sneaked in before irqs and the tick
> are disabled. But even if we have a last call to
> cppc_scale_freq_workfn(), the counter read methods would know how to
> cope with hotplug, and the cppc_cpudata structure would still be
> allocated and have valid desired_perf and highest_perf values.

Okay, I had somehow assumed that cppc_scale_freq_workfn() needs to run
on the local CPU, while it can actually land anywhere. My fault.

But the irq-work queued here is per-cpu, and it gets queued on the
local CPU where the tick occurred. I am not sure what happens to that
irq-work in the hotplug case. From a quick look, pending irq-work items
are processed first on a tick and the scheduler's tick handler runs
afterwards, so the tick will queue the cppc irq-work again.
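To make the sequence concrete, here is a rough sketch of the pattern
(simplified and not the actual driver code; cppc_scale_freq_tick() and
cppc_scale_freq_workfn() are real names from the patch, while the
per-cpu variable and the init helper below are only illustrative):

#include <linux/irq_work.h>
#include <linux/percpu.h>

/* Illustrative per-cpu item; the driver wraps it in its own struct. */
static DEFINE_PER_CPU(struct irq_work, cppc_irq_work);

static void cppc_irq_workfn(struct irq_work *work)
{
	/* Eventually kicks cppc_scale_freq_workfn() to read the counters. */
}

static void cppc_fie_init_sketch(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		init_irq_work(per_cpu_ptr(&cppc_irq_work, cpu),
			      cppc_irq_workfn);
}

/* Called from scheduler_tick() -> arch_scale_freq_tick(). */
void cppc_scale_freq_tick(void)
{
	/*
	 * irq_work_queue() queues on the local CPU, i.e. the CPU that
	 * took the tick. If that CPU goes offline with the item still
	 * pending, it is not obvious what runs or cancels it.
	 */
	irq_work_queue(this_cpu_ptr(&cppc_irq_work));
}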
What happens if this races with CPU hotplug? I am not sure I understand
that part; there may or may not be side effects.

Let's assume the work item is left in the queue as is and no tick
happens after that because the CPU gets offlined. So far we are good.
But if we now unload the cpufreq driver, the driver will call
irq_work_sync(), which may end up in a while loop, won't it? There is
no irq_work_cancel() API.
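For reference, irq_work_sync() is roughly the following today (quoted
from memory of kernel/irq_work.c, so treat it as a sketch rather than
the exact code):

/* Approximate body of irq_work_sync() in kernel/irq_work.c. */
void irq_work_sync(struct irq_work *work)
{
	lockdep_assert_irqs_enabled();

	while (irq_work_is_busy(work))
		cpu_relax();
}

i.e. a plain busy-wait until the item stops being flagged busy, with
nothing available to cancel an already-queued item.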
Peter: Can you help here? Let me try to explain the problem: we queue
an irq-work (in the cppc cpufreq driver) from
scheduler_tick()->arch_scale_freq_tick(). What happens if the driver
doesn't take care of CPU hotplug explicitly and doesn't make sure this
work isn't queued again from the next tick? Is it important for the
user (the driver here) to make sure it gets rid of the irq-work during
hotplug?

> Worst case, the last scale factor set for the CPU will be meaningless,
> but it's already meaningless as the CPU is going down.
>
> When you are referring to the issue reported by Qian I suppose you are
> referring to this [1]. I think this is the case where you hotplug the
> last CPU in a policy and free cppc_cpudata.
>
> [1] https://lore.kernel.org/linux-pm/41f5195e-0e5f-fdfe-ba37-34e1fd8e4064@quicinc.com/

Yes, I was talking about that report, though I am no longer sure I
understand what actually happened there :)

Ionela: I have skipped replying to the rest of your email; I will get
back to it once we have clarity on the above.

Thanks a lot for your reviews, always on time :)

--
viresh