Subject: Re: [PATCH v2] sched: Fix out-of-bound access in uclamp
From: Dietmar Eggemann
To: Vincent Guittot, Quentin Perret
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Steven Rostedt, Ben Segall,
    Mel Gorman, Daniel Bristot de Oliveira, Qais Yousef, Android Kernel Team,
    linux-kernel, Patrick Bellasi
Date: Fri, 30 Apr 2021 17:04:31 +0200
Message-ID: <98a96d35-a6ae-f913-13f9-b5c17689039c@arm.com>
References: <20210429152656.4118460-1-qperret@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
On 30/04/2021 16:16, Vincent Guittot wrote:
> On Fri, 30 Apr 2021 at 15:14, Quentin Perret wrote:
>>
>> On Friday 30 Apr 2021 at 15:00:00 (+0200), Dietmar Eggemann wrote:
>>> On 30/04/2021 14:03, Vincent Guittot wrote:

[...]

>>> Looks like this will fix a lot of possible configs:
>>>
>>> nbr buckets 1-4, 7-8, 10-12, 14-17, *20*, 26, 29-32 ...
>>>
>>> We would still introduce larger last buckets, right?
>>
>> Indeed. The only better alternative I could see was to 'spread' the
>> error across multiple buckets (e.g. make the last few buckets a bit
>> bigger instead of having all of it accumulated on the last one), but
>> not sure it is worth the overhead.
>
> I don't think it's worth the overhead.

Me neither.

[...]
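
For reference, the bucket math above can be reproduced in user space. The
sketch below assumes SCHED_CAPACITY_SCALE = 1024 and the DIV_ROUND_CLOSEST()
delta from kernel/sched/core.c; the clamp-to-last-bucket shape is an
assumption inferred from the "larger last buckets" remark in this thread,
not a quote of the v2 patch itself.

/*
 * Minimal user-space sketch of the uclamp bucket math discussed above.
 * Names mirror kernel/sched/core.c; the clamp in bucket_id() models the
 * fix under discussion and is an assumption, not the actual patch body.
 */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024
/* Kernel-style rounding division for positive operands. */
#define DIV_ROUND_CLOSEST(x, d)	(((x) + (d) / 2) / (d))

static unsigned int bucket_id(unsigned int clamp_value, unsigned int nbr_buckets)
{
	unsigned int delta = DIV_ROUND_CLOSEST(SCHED_CAPACITY_SCALE, nbr_buckets);
	unsigned int id = clamp_value / delta;

	/* Clamp to the last bucket; it absorbs the rounding error. */
	return id < nbr_buckets ? id : nbr_buckets - 1;
}

int main(void)
{
	/* Show which bucket counts would index out of bounds without the clamp. */
	for (unsigned int n = 1; n <= 32; n++) {
		unsigned int delta = DIV_ROUND_CLOSEST(SCHED_CAPACITY_SCALE, n);
		unsigned int raw = SCHED_CAPACITY_SCALE / delta;

		if (raw >= n)
			printf("nbr buckets %2u: raw id %2u out of bounds, clamped to %2u\n",
			       n, raw, bucket_id(SCHED_CAPACITY_SCALE, n));
	}
	return 0;
}

Running it prints exactly the affected bucket counts listed above
(1-4, 7-8, 10-12, 14-17, 20, 26, 29-32), since for those values the
computed delta leaves clamp_value = 1024 mapping past the last bucket.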