From: Shameer Kolothum
To:
Cc:
Subject: [PATCH v4 09/16] arm64/mm: Split the function check_and_switch_context in 3 parts
Date: Wed, 14 Apr 2021 12:23:05 +0100
Message-ID: <20210414112312.13704-10-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>
References: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>
X-Mailer: git-send-email 2.12.0.windows.1
X-Mailing-List: linux-kernel@vger.kernel.org
From: Julien Grall

The function check_and_switch_context is used to:
    1) Check whether the ASID is still valid
    2) Generate a new one if it is not valid
    3) Switch the context

While the latter is specific to the MM subsystem, the rest could be part
of the generic ASID allocator.

After this patch, the function is split into 3 parts, corresponding to
those uses:
    1) asid_check_context: Check if the ASID is still valid
    2) asid_new_context: Generate a new ASID for the context
    3) check_and_switch_context: Call 1) and 2) and switch the context

1) and 2) have not been merged into a single function because we want to
avoid adding a branch when the ASID is still valid. This will matter when
the code is moved into a separate file later on, as 1) will reside in the
header as a static inline function.

Signed-off-by: Julien Grall
Signed-off-by: Shameer Kolothum
---
v3 comment:
  Will wants to avoid adding a branch when the ASID is still valid, so
  1) and 2) are kept in separate functions. The former will move to a new
  header and be made static inline.
---
 arch/arm64/mm/context.c | 70 ++++++++++++++++++++++++++++-------------
 1 file changed, 48 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 041c3c5e0216..40ef013c90c3 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -222,17 +222,49 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid,
 	return idx2asid(info, asid) | generation;
 }
 
-void check_and_switch_context(struct mm_struct *mm)
+/*
+ * Generate a new ASID for the context.
+ *
+ * @pasid: Pointer to the current ASID batch allocated. It will be updated
+ * with the new ASID batch.
+ * @pinned: refcount if asid is pinned.
+ * Caller needs to make sure preempt is disabled before calling this function.
+ */
+static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
+			     refcount_t *pinned)
 {
 	unsigned long flags;
-	unsigned int cpu;
-	u64 asid, old_active_asid;
-	struct asid_info *info = &asid_info;
+	u64 asid;
+	unsigned int cpu = smp_processor_id();
 
-	if (system_supports_cnp())
-		cpu_set_reserved_ttbr0();
+	raw_spin_lock_irqsave(&info->lock, flags);
+	/* Check that our ASID belongs to the current generation. */
+	asid = atomic64_read(pasid);
+	if (!asid_gen_match(asid, info)) {
+		asid = new_context(info, pasid, pinned);
+		atomic64_set(pasid, asid);
+	}
 
-	asid = atomic64_read(&mm->context.id);
+	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
+		local_flush_tlb_all();
+
+	atomic64_set(&active_asid(info, cpu), asid);
+	raw_spin_unlock_irqrestore(&info->lock, flags);
+}
+
+/*
+ * Check the ASID is still valid for the context. If not generate a new ASID.
+ *
+ * @pasid: Pointer to the current ASID batch
+ * @pinned: refcount if asid is pinned
+ * Caller needs to make sure preempt is disabled before calling this function.
+ */
+static void asid_check_context(struct asid_info *info, atomic64_t *pasid,
+			       refcount_t *pinned)
+{
+	u64 asid, old_active_asid;
+
+	asid = atomic64_read(pasid);
 
 	/*
 	 * The memory ordering here is subtle.
@@ -252,24 +284,18 @@ void check_and_switch_context(struct mm_struct *mm)
 	if (old_active_asid && asid_gen_match(asid, info) &&
 	    atomic64_cmpxchg_relaxed(this_cpu_ptr(info->active),
 				     old_active_asid, asid))
-		goto switch_mm_fastpath;
-
-	raw_spin_lock_irqsave(&info->lock, flags);
-	/* Check that our ASID belongs to the current generation. */
-	asid = atomic64_read(&mm->context.id);
-	if (!asid_gen_match(asid, info)) {
-		asid = new_context(info, &mm->context.id, &mm->context.pinned);
-		atomic64_set(&mm->context.id, asid);
-	}
+		return;
 
-	cpu = smp_processor_id();
-	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
-		local_flush_tlb_all();
+	asid_new_context(info, pasid, pinned);
+}
 
-	atomic64_set(&active_asid(info, cpu), asid);
-	raw_spin_unlock_irqrestore(&info->lock, flags);
+void check_and_switch_context(struct mm_struct *mm)
+{
+	if (system_supports_cnp())
+		cpu_set_reserved_ttbr0();
 
-switch_mm_fastpath:
+	asid_check_context(&asid_info, &mm->context.id,
+			   &mm->context.pinned);
 
 	arm64_apply_bp_hardening();
 
-- 
2.17.1
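
[Not part of the patch: a minimal, standalone sketch of the fast-path/slow-path
split this change sets up, using C11 atomics in place of the kernel's
atomic64/percpu/lock machinery. Function names mirror the patch; the simplified
generation encoding, the single-CPU active_asid variable and the main() harness
are illustrative assumptions, not kernel code.]

/*
 * Illustrative only -- not kernel code. asid_check_context() is the
 * lock-free fast path and only falls back to asid_new_context() (the
 * locked slow path in the kernel) when the ASID's generation no longer
 * matches the current one.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define GEN_SHIFT 16	/* assumed ASID width for this sketch */

static _Atomic uint64_t asid_generation = (uint64_t)1 << GEN_SHIFT;
static _Atomic uint64_t active_asid;	/* stand-in for this CPU's active ASID */
static uint64_t next_asid = 1;		/* stand-in for the allocator bitmap */

static int asid_gen_match(uint64_t asid)
{
	return (asid >> GEN_SHIFT) ==
	       (atomic_load(&asid_generation) >> GEN_SHIFT);
}

/* Slow path: allocate a fresh ASID for the context (locking elided here). */
static void asid_new_context(_Atomic uint64_t *pasid)
{
	uint64_t asid = atomic_load(&asid_generation) | next_asid++;

	atomic_store(pasid, asid);
	atomic_store(&active_asid, asid);
}

/* Fast path: keep the current ASID if its generation is still live. */
static void asid_check_context(_Atomic uint64_t *pasid)
{
	uint64_t asid = atomic_load(pasid);
	uint64_t old = atomic_load_explicit(&active_asid,
					    memory_order_relaxed);

	if (old && asid_gen_match(asid) &&
	    atomic_compare_exchange_strong_explicit(&active_asid, &old, asid,
						    memory_order_relaxed,
						    memory_order_relaxed))
		return;		/* no branch into the allocation code */

	asid_new_context(pasid);
}

int main(void)
{
	_Atomic uint64_t ctx_id = 0;	/* plays the role of mm->context.id */

	asid_check_context(&ctx_id);	/* first call: slow path allocates */
	asid_check_context(&ctx_id);	/* second call: fast path, no lock */
	printf("asid %#llx\n", (unsigned long long)atomic_load(&ctx_id));
	return 0;
}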