From: Nanyong Sun
Subject: [PATCH -next 3/4] riscv: mm: add param stride for __sbi_tlb_flush_range
Date: Fri, 30 Apr 2021 16:28:49 +0800
Message-ID: <20210430082850.462609-4-sunnanyong@huawei.com>
In-Reply-To: <20210430082850.462609-1-sunnanyong@huawei.com>
References: <20210430082850.462609-1-sunnanyong@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Add a parameter, stride, to __sbi_tlb_flush_range(), representing the page stride between start and end.
Normally the stride is PAGE_SIZE, but when flushing a huge page address the stride can be the huge page size, such as PMD_SIZE; then only one TLB entry needs to be flushed if the address range fits within PMD_SIZE.

Signed-off-by: Nanyong Sun
---
 arch/riscv/mm/tlbflush.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 720b443c4..382781abf 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -15,7 +15,7 @@ void flush_tlb_all(void)
  * Kernel may panic if cmask is NULL.
  */
 static void __sbi_tlb_flush_range(struct cpumask *cmask, unsigned long start,
-				  unsigned long size)
+				  unsigned long size, unsigned long stride)
 {
 	struct cpumask hmask;
 	unsigned int cpuid;
@@ -27,7 +27,7 @@ static void __sbi_tlb_flush_range(struct cpumask *cmask, unsigned long start,
 
 	if (cpumask_any_but(cmask, cpuid) >= nr_cpu_ids) {
 		/* local cpu is the only cpu present in cpumask */
-		if (size <= PAGE_SIZE)
+		if (size <= stride)
 			local_flush_tlb_page(start);
 		else
 			local_flush_tlb_all();
@@ -41,16 +41,16 @@ static void __sbi_tlb_flush_range(struct cpumask *cmask, unsigned long start,
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	__sbi_tlb_flush_range(mm_cpumask(mm), 0, -1);
+	__sbi_tlb_flush_range(mm_cpumask(mm), 0, -1, PAGE_SIZE);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE);
+	__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE, PAGE_SIZE);
 }
 
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), start, end - start);
+	__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), start, end - start, PAGE_SIZE);
 }
-- 
2.25.1