Subject: Re: [PATCH V2] cpufreq: reinitialize new policy min/max when writing scaling_(max|min)_freq
To: Viresh Kumar
CC: , , , ,
References: <1527144234-96396-1-git-send-email-kevin.wangtao@hisilicon.com> <1527319008-66663-1-git-send-email-kevin.wangtao@hisilicon.com> <20180529102638.ikmw2xjf523sf4kf@vireshk-i7>
From: "Wangtao (Kevin, Kirin)"
Message-ID: <46d37d2b-f410-63de-721c-27843d25a326@hisilicon.com>
Date: Wed, 30 May 2018 16:03:37 +0800
In-Reply-To: <20180529102638.ikmw2xjf523sf4kf@vireshk-i7>
On 2018/5/29 18:26, Viresh Kumar wrote:
> On 26-05-18, 15:16, Kevin Wangtao wrote:
>> Consider this situation: the current user_policy.min is 1000000
>> and the current user_policy.max is 1200000. In cpufreq_set_policy,
>> another driver may update policy.min to 1200000 and policy.max to
>> 1300000. After that, if we run "echo 1300000 > scaling_min_freq",
>> user_policy.min becomes 1300000 while user_policy.max is still
>> 1200000, because the input value is checked against policy.max,
>> not user_policy.max. If all related CPUs then go offline and come
>> back online, cpufreq_init_policy fails because user_policy.min is
>> higher than user_policy.max.
>>
>> The solution is that when user space writes scaling_(max|min)_freq,
>> the min/max of new_policy should be reinitialized from the min/max
>> of user_policy, as cpufreq_update_policy does.
>>
>> Signed-off-by: Kevin Wangtao
>> ---
>>  drivers/cpufreq/cpufreq.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
>> index b79c532..82123a1 100644
>> --- a/drivers/cpufreq/cpufreq.c
>> +++ b/drivers/cpufreq/cpufreq.c
>> @@ -697,6 +697,8 @@ static ssize_t store_##file_name \
>>  	struct cpufreq_policy new_policy;			\
>>  								\
>>  	memcpy(&new_policy, policy, sizeof(*policy));		\
>
> Maybe add a comment here on why this is required ?

OK

>
>> +	new_policy.min = policy->user_policy.min;		\
>> +	new_policy.max = policy->user_policy.max;		\
>>  								\
>>  	ret = sscanf(buf, "%u", &new_policy.object);		\
>>  	if (ret != 1)						\
>
> Acked-by: Viresh Kumar
>
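For illustration, here is a minimal user-space sketch of the failure mode the changelog describes. The struct and function names below are simplified stand-ins invented for this example, not the kernel's real cpufreq API: `user_min`/`user_max` mirror `policy->user_policy.{min,max}`, and the two store functions contrast validation against `policy.max` (before the patch) with validation against the user_policy limits (after the patch).

```c
#include <assert.h>

/* Hypothetical, simplified stand-in for struct cpufreq_policy:
 * min/max are the currently enforced limits, user_min/user_max
 * mirror the limits last requested via sysfs. */
struct policy {
	unsigned int min, max;
	unsigned int user_min, user_max;
};

/* Before the patch: the value written to scaling_min_freq is
 * validated against the current policy.max, which another driver
 * may have raised above user_max. */
static void store_min_before(struct policy *p, unsigned int val)
{
	if (val <= p->max) {
		p->min = val;
		p->user_min = val;	/* may now exceed user_max */
	}
}

/* After the patch: new_policy starts from the user_policy limits,
 * so the write is validated against user_max instead and an
 * out-of-range value is rejected. */
static void store_min_after(struct policy *p, unsigned int val)
{
	if (val <= p->user_max) {
		p->min = val;
		p->user_min = val;
	}
}
```

With the changelog's numbers (user limits 1000000..1200000, driver-updated policy limits 1200000..1300000), writing 1300000 through `store_min_before` leaves `user_min > user_max` — the inconsistent state that later makes cpufreq_init_policy fail — while `store_min_after` rejects it.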