Date: Tue, 7 May 2019 15:02:33 +0100
From: Quentin Perret
To: Vincent Guittot
Cc: Luca Abeni, linux-kernel, Greg Kroah-Hartman, "Rafael J. Wysocki",
 Ingo Molnar, Peter Zijlstra, "Paul E. McKenney", Joel Fernandes,
 Luc Van Oostenryck, Morten Rasmussen, Juri Lelli,
 Daniel Bristot de Oliveira, Patrick Bellasi, Tommaso Cucinotta
Subject: Re: [RFC PATCH 1/6] sched/dl: Improve deadline admission control for asymmetric CPU capacities
Message-ID: <20190507140231.5hglz2d64stadbhm@queper01-lin>
References: <20190506044836.2914-1-luca.abeni@santannapisa.it>
 <20190506044836.2914-2-luca.abeni@santannapisa.it>
 <20190507134850.yreebscc3zigfmtd@queper01-lin>
User-Agent: NeoMutt/20171215

On Tuesday 07 May 2019 at 15:55:37 (+0200), Vincent Guittot wrote:
> On Tue, 7 May 2019 at 15:48, Quentin Perret wrote:
> >
> > Hi Luca,
> >
> > On Monday 06 May 2019 at 06:48:31 (+0200), Luca Abeni wrote:
> > > diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> > > index edfcf8d982e4..646d6d349d53 100644
> > > --- a/drivers/base/arch_topology.c
> > > +++ b/drivers/base/arch_topology.c
> > > @@ -36,6 +36,7 @@ DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
> > >
> > >  void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity)
> > >  {
> > > +	topology_update_cpu_capacity(cpu, per_cpu(cpu_scale, cpu), capacity);
> >
> > Why is that one needed? Don't you end up re-building the sched domains
> > after this anyway?
>
> I was looking at the same point.
> Also, this doesn't take into account whether the CPU is offline.
>
> Do we also need the line below in set_rq_online?
>
> +	rq->rd->rd_capacity += arch_scale_cpu_capacity(NULL, cpu_of(rq));
>
> Building the sched_domain seems a better place to set rq->rd->rd_capacity.

Perhaps this could hook directly into rq_attach_root()? We don't really
need the decrement part, do we? That is, in case of hotplug the old rd
should be destroyed anyway.

Thanks,
Quentin

> > >  	per_cpu(cpu_scale, cpu) = capacity;
> > >  }
> >
> > Thanks,
> > Quentin
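
Purely as an illustration, a minimal sketch of what the rq_attach_root()
approach mentioned above could look like. This is not code from the series:
rd_capacity is the root_domain field introduced by this RFC, the placement
of the increment is an assumption, and the existing detach logic of
rq_attach_root() is abbreviated.

/*
 * Sketch only: one possible way to account each CPU's capacity in its
 * root domain from rq_attach_root(), instead of doing it in
 * set_rq_online(). rd_capacity is the field added by this RFC series.
 */
void rq_attach_root(struct rq *rq, struct root_domain *rd)
{
        struct root_domain *old_rd = NULL;
        unsigned long flags;

        raw_spin_lock_irqsave(&rq->lock, flags);

        if (rq->rd) {
                old_rd = rq->rd;
                /*
                 * Existing code: take the CPU offline in old_rd and clear
                 * it from old_rd->span (abbreviated here). The old rd is
                 * freed via RCU once its refcount drops, so no rd_capacity
                 * decrement is done on this path.
                 */
                if (!atomic_dec_and_test(&old_rd->refcount))
                        old_rd = NULL;
        }

        atomic_inc(&rd->refcount);
        rq->rd = rd;

        /* Account this CPU's capacity into the new root domain. */
        rd->rd_capacity += arch_scale_cpu_capacity(NULL, cpu_of(rq));

        cpumask_set_cpu(rq->cpu, rd->span);
        if (cpumask_test_cpu(rq->cpu, cpu_active_mask))
                set_rq_online(rq);

        raw_spin_unlock_irqrestore(&rq->lock, flags);

        if (old_rd)
                call_rcu(&old_rd->rcu, free_rootdomain);
}

With something along these lines, set_rq_online() would not need to touch
rd_capacity at all, and on hotplug the old rd simply goes away with its
stale sum, which is the "no decrement needed" point raised above.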