From: "Rafael J. Wysocki"
To: "Wangtao (Kevin, Kirin)"
Cc: "Rafael J. Wysocki", Viresh Kumar, Linux PM, Linux Kernel Mailing List, gengyanping@hisilicon.com, sunzhaosheng@hisilicon.com
Subject: Re: [PATCH] cpufreq: reinitialize new policy min/max when writing scaling_(max|min)_freq
Date: Tue, 29 May 2018 10:47:55 +0200
Message-ID: <2309223.3p98OPSZcO@aspire.rjw.lan>
References: <1527144234-96396-1-git-send-email-kevin.wangtao@hisilicon.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Saturday, May 26, 2018 8:50:46 AM CEST Wangtao (Kevin, Kirin) wrote:
> On 2018/5/24 15:45, Rafael J. Wysocki wrote:
> > On Thu, May 24, 2018 at 8:43 AM, Kevin Wangtao wrote:
> >> Consider this situation: the current user_policy.min is 1000000 and
> >> the current user_policy.max is 1200000. In cpufreq_set_policy,
> >> another driver may update policy.min to 1200000 and policy.max to
> >> 1300000. After that, if we run "echo 1300000 > scaling_min_freq",
> >> user_policy.min becomes 1300000 while user_policy.max is still
> >> 1200000, because the input value is checked against policy.max,
> >> not user_policy.max. If we then take all related CPUs offline and
> >> online again, cpufreq_init_policy fails because user_policy.min is
> >> higher than user_policy.max.
> >
> > How do you reproduce this, exactly?
>
> I can also reproduce this issue with upstream code: write the max
> frequency to both scaling_max_freq and scaling_min_freq, then run a
> benchmark so that CPU cooling kicks in and clips the frequency, and
> finally write the clipped frequency to scaling_max_freq. At that
> point user_policy.min is still the max frequency, but user_policy.max
> is the clipped frequency, which is lower than the max frequency.

OK, this is a bit more convincing.

It looks like a bad interaction between cpufreq_update_policy() and
updates of the limits via sysfs.
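The inconsistency described in the thread can be sketched in a minimal, self-contained model. This is not kernel code: the `Policy` class and `store_scaling_min_freq` function below are illustrative stand-ins for `struct cpufreq_policy` and the sysfs store path, written only to show why validating the new value against `policy.max` instead of `user_policy.max` lets `user_policy.min` end up above `user_policy.max`.

```python
# Hypothetical, simplified model of the behaviour described in the mail;
# the names below are illustrative, not actual kernel structures or APIs.
from dataclasses import dataclass


@dataclass
class Policy:
    min: int
    max: int


def store_scaling_min_freq(policy: Policy, user_policy: Policy, val: int) -> bool:
    """Model of the sysfs store path: the new value is validated against
    policy.max (the currently enforced limit), not user_policy.max, and is
    then recorded into user_policy.min."""
    if val > policy.max:
        return False
    user_policy.min = val
    return True


# Initial state from the mail: user limits are 1000000..1200000.
policy = Policy(1000000, 1200000)
user_policy = Policy(1000000, 1200000)

# Another driver update (e.g. via cpufreq_set_policy) raises the
# enforced limits to 1200000..1300000; user_policy is untouched.
policy.min, policy.max = 1200000, 1300000

# "echo 1300000 > scaling_min_freq" is accepted, since 1300000 <= policy.max.
store_scaling_min_freq(policy, user_policy, 1300000)

# user_policy is now inconsistent: min (1300000) > max (1200000), which
# is the state that makes cpufreq_init_policy fail after the related
# CPUs go offline and back online.
print(user_policy.min, user_policy.max, user_policy.min > user_policy.max)
```

In this model the fix proposed by the patch subject amounts to re-deriving the new policy limits from `user_policy` (or validating against it) when the sysfs attributes are written, so the two structures cannot drift apart.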