Date: Wed, 20 May 2020 18:08:00 +0100
From: Catalin Marinas
To: Zhenyu Ye
Cc: linux-arch@vger.kernel.org, suzuki.poulose@arm.com, maz@kernel.org,
	linux-kernel@vger.kernel.org, xiexiangyou@huawei.com,
	steven.price@arm.com, zhangshaokun@hisilicon.com, linux-mm@kvack.org,
	arm@kernel.org, prime.zeng@hisilicon.com, guohanjun@huawei.com,
	olof@lixom.net, kuhn.chenqun@huawei.com, will@kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [RFC PATCH v3 2/2] arm64: tlb: Use the TLBI RANGE feature in arm64
Message-ID: <20200520170759.GE18302@gaia>
References: <20200414112835.1121-1-yezhenyu2@huawei.com>
	<20200414112835.1121-3-yezhenyu2@huawei.com>
	<20200514152840.GC1907@gaia>
	<54468aae-dbb1-66bd-c633-82fc75936206@huawei.com>
In-Reply-To:
 <54468aae-dbb1-66bd-c633-82fc75936206@huawei.com>

On Mon, May 18, 2020 at 08:21:02PM +0800, Zhenyu Ye wrote:
> On 2020/5/14 23:28, Catalin Marinas wrote:
> > On Tue, Apr 14, 2020 at 07:28:35PM +0800, Zhenyu Ye wrote:
> >> +		}
> >> +		scale++;
> >> +		range_size >>= TLB_RANGE_MASK_SHIFT;
> >> +	}
> >
> > So, you start from scale 0 and increment it until you reach the
> > maximum. I think (I haven't done the maths on paper) you could also
> > start from the top with something like scale = ilog2(range_size) / 5.
> > Not sure it's significantly better though; maybe it avoids three loop
> > iterations if your range is 2MB (which happens with huge pages).
>
> This optimization is only effective when the range is a multiple of
> 256KB (when the page size is 4KB), and I'm worried about the
> performance of ilog2(). I traced __flush_tlb_range() last year and
> found that in most cases the range is less than 256KB (see details
> in [1]).

THP or hugetlbfs would exercise bigger strides, but I guess it depends
on the use-case. ilog2() should reduce to a few instructions on arm64
AFAICT (I haven't tried, but it should use the CLZ instruction).

> > Anyway, I think it would be more efficient if we combined
> > __flush_tlb_range() and the _directly one into the same function
> > with a single loop for both. For example, if the stride is 2MB
> > already, we can handle this with a single classic TLBI without all
> > the calculations for the range operation. The hardware may also
> > handle this better, since the software already told it there can be
> > only one entry in that 2MB range. So each loop iteration could
> > figure out which operation to use based on cpucaps, TLBI range ops
> > and stride, and reduce range_size accordingly.
>
> To summarize your suggestion in one sentence: use the 'stride' to
> optimize the performance of TLBI.
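As an aside, the scale shortcut discussed above can be sketched in user
space. This is a hypothetical model, not the kernel patch itself: the
helper names and the assumption that TLB_RANGE_MASK_SHIFT is 5 (one
scale step per 5-bit shift of range_size) are mine, taken from the
quoted hunk.

```c
#include <assert.h>

#define TLB_RANGE_MASK_SHIFT	5	/* assumed: bits of range shifted per scale step */

/* ilog2() as the kernel would compute it for a non-zero 32-bit value;
 * __builtin_clz() maps to a single CLZ instruction on arm64. */
static inline int ilog2_u32(unsigned int x)
{
	return 31 - __builtin_clz(x);
}

/* Start from the top scale directly instead of incrementing from 0,
 * as suggested above: scale = ilog2(range_size) / 5. */
static inline int tlbi_start_scale(unsigned int range_pages)
{
	return ilog2_u32(range_pages) / TLB_RANGE_MASK_SHIFT;
}
```

For a 2MB range with 4KB pages (512 pages), this yields a starting
scale of 1 rather than looping up from 0.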
> This can also be done by dividing it into two functions, and this
> should indeed be taken into account in the TLBI RANGE feature.
>
> But if we figure out which operation to use based on cpucaps in each
> loop iteration, then cpus_have_const_cap() will be called frequently,
> which may affect the performance of TLBI. In my opinion, we should do
> as few checks as possible in the loop, so judging the cpucaps outside
> the loop may be a good choice.

cpus_have_const_cap() is a static label, so it should be patched with a
branch or nop. My point was that in the classic __flush_tlb_range()
loop, instead of an addr += stride we could have something more dynamic,
depending on whether the CPU supports range TLBI ops or not. But we
would indeed have more (static) branches in the loop, so possibly some
performance degradation. If the code looks ok, I'd favour this and we
can look at the optimisation later. But I can't really tell how the code
would look without attempting to merge the two.

Anyway, a first step would be to add the range and stride to the
decision (i.e. (end - start) / stride > 1) before jumping to the range
operations. You can avoid the additional checks in the new TLBI
functions since we know we have at least two (huge)pages.

-- 
Catalin
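The decision being proposed could look something like the following
user-space model. This is only a sketch of the suggested pre-check, not
kernel code; the function name and the boolean standing in for
cpus_have_const_cap() are placeholders.

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the suggested first step: jump to the TLBI range operations
 * only when the CPU has them AND the range spans more than one stride;
 * a single (huge)page is cheaper with one classic TLBI, and the new
 * range functions can then assume at least two pages. */
static bool use_range_tlbi(unsigned long start, unsigned long end,
			   unsigned long stride, bool cpu_has_range_ops)
{
	if (!cpu_has_range_ops)
		return false;
	return (end - start) / stride > 1;
}
```

With a 2MB stride and a single 2MB huge page, (end - start) / stride is
1, so the classic TLBI path is taken even on CPUs with range ops.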