Subject: Re: [PATCH 0/2] /proc/stat: Reduce irqs counting performance overhead
From: Waiman Long
To: Dave Chinner
Cc: Andrew Morton, Alexey Dobriyan, Luis Chamberlain, Kees Cook, Jonathan Corbet, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, Davidlohr Bueso, Miklos Szeredi, Daniel Colascione, Randy Dunlap
References: <1546873978-27797-1-git-send-email-longman@redhat.com> <20190107223214.GZ6311@dastard> <9b4208b7-f97b-047c-4dab-15bd3791e7de@redhat.com> <20190108020422.GA27534@dastard>
Organization: Red Hat
Message-ID: <56954b42-4258-7268-53b5-ddca28758193@redhat.com>
Date: Tue, 8 Jan 2019 11:58:26 -0500
In-Reply-To: <20190108020422.GA27534@dastard>

On 01/07/2019 09:04 PM, Dave Chinner wrote:
> On Mon, Jan 07, 2019 at 05:41:39PM -0500, Waiman Long wrote:
>> On 01/07/2019 05:32 PM, Dave Chinner wrote:
>>> On Mon, Jan 07, 2019 at 10:12:56AM -0500, Waiman Long wrote:
>>>> As newer systems have more and more IRQs and CPUs available in their
>>>> system, the performance of reading /proc/stat frequently is getting
>>>> worse and worse.
>>> Because the "roll-your-own" per-cpu counter implementation has been
>>> optimised for the lowest possible addition overhead on the premise that
>>> summing the counters is rare and isn't a performance issue. This
>>> patchset is a direct indication that this "summing is rare and can
>>> be slow" premise is now invalid.
>>>
>>> We have percpu counter infrastructure that trades off a small amount
>>> of addition overhead for zero-cost reading of the counter value.
>>> i.e. why not just convert this whole mess to percpu_counters and
>>> then just use percpu_counter_read_positive()? Then we just don't
>>> care how often userspace reads the /proc file because there is no
>>> summing involved at all...
>>>
>>> Cheers,
>>>
>>> Dave.
>> Yes, percpu_counter_read_positive() is cheap. However, you still need to
>> pay the price somewhere. In the case of percpu_counter, the update is
>> more expensive.
> Ummm, that's exactly what I just said. It's a percpu counter that
> solves the "sum is expensive and frequent" problem, just like you
> are encountering here. I do not need basic scalability algorithms
> explained to me.

What I am trying to say is that the "sum is expensive and frequent"
case is only true for a very small percentage of applications; it is
not true for most of them. I am hesitant to add latency to the
interrupt path that would affect all applications.

>> I would say the percentage of applications that will hit this problem is
>> small. But for them, this problem has some significant performance overhead.
> Well, duh!
>
> What I was suggesting is that you change the per-cpu counter
> implementation to the /generic infrastructure/ that solves this
> problem, and then determine if the extra update overhead is at all
> measurable. If you can't measure any difference in update overhead,
> then slapping complexity on the existing counter to attempt to
> mitigate the summing overhead is the wrong solution.
>
> Indeed, it may be that you need to use a custom batch scaling curve
> for the generic per-cpu counter infrastructure to mitigate the
> update overhead, but the fact is we already have generic
> infrastructure that solves your problem and so the solution should
> be "use the generic infrastructure" until it can be proven not to
> work.
>
> i.e. prove the generic infrastructure is not fit for purpose and
> cannot be improved sufficiently to work for this use case before
> implementing a complex, one-off snowflake counter implementation...

I see your point. I like the deferred summation approach that I am
currently using. If I have to modify the current per-cpu counter
implementation to support that, I would probably also need to add
counter grouping support to amortize the overhead, and that could be
a major undertaking. This is not a high priority item for me at the
moment, so I may have to wait until I have some spare time.

Thanks,
Longman