From: wanghongzhe
Subject: [PATCH v3] seccomp: Improve performance by optimizing rmb()
Date: Wed, 24 Feb 2021 16:49:45 +0800
Message-ID: <1614156585-18842-1-git-send-email-wanghongzhe@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Kees has already accepted the v2 patch as a381b70a1, which simply
replaced rmb() with smp_rmb(). This patch builds on that and moves the
smp_rmb() to the correct position.

As the original comment states (and indeed as it should be):

	/*
	 * Make sure that any changes to mode from another thread have
	 * been seen after SYSCALL_WORK_SECCOMP was seen.
	 */

the smp_rmb() should sit between the read of SYSCALL_WORK_SECCOMP and
the read of seccomp.mode, so that any changes to mode from another
thread (the TSYNC case) are seen once SYSCALL_WORK_SECCOMP has been
seen. However, it is currently misplaced between the reads of
seccomp.mode and seccomp->filter. This issue appears to have been
introduced by commit 13aa72f0fd0a9f98a41cefb662487269e2f1ad65, which
refactored the filter callback and the API. So let's move the
smp_rmb() to the correct position. A follow-up optimization patch will
be provided if this adjustment is appropriate.

v2 -> v3:
- move the smp_rmb() to the correct position

v1 -> v2:
- only replace rmb() with smp_rmb()
- provide the performance test numbers

RFC -> v1:
- replace rmb() with smp_rmb()
- move the smp_rmb() logic to the middle between TIF_SECCOMP and mode

Signed-off-by: wanghongzhe
---
 kernel/seccomp.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 1d60fc2c9987..64b236cb8a7f 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -1160,12 +1160,6 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
 	int data;
 	struct seccomp_data sd_local;
 
-	/*
-	 * Make sure that any changes to mode from another thread have
-	 * been seen after SYSCALL_WORK_SECCOMP was seen.
-	 */
-	smp_rmb();
-
 	if (!sd) {
 		populate_seccomp_data(&sd_local);
 		sd = &sd_local;
@@ -1291,7 +1285,6 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
 
 int __secure_computing(const struct seccomp_data *sd)
 {
-	int mode = current->seccomp.mode;
 	int this_syscall;
 
 	if (IS_ENABLED(CONFIG_CHECKPOINT_RESTORE) &&
@@ -1301,7 +1294,13 @@ int __secure_computing(const struct seccomp_data *sd)
 	this_syscall = sd ? sd->nr :
 		syscall_get_nr(current, current_pt_regs());
 
-	switch (mode) {
+	/*
+	 * Make sure that any changes to mode from another thread have
+	 * been seen after SYSCALL_WORK_SECCOMP was seen.
+	 */
+	smp_rmb();
+
+	switch (current->seccomp.mode) {
 	case SECCOMP_MODE_STRICT:
 		__secure_computing_strict(this_syscall);	/* may call do_exit */
 		return 0;
-- 
2.19.1