Subject: Re: [PATCH V6] perf: Reset the dirty counter to prevent the leak for an RDPMC task
From: "Liang, Kan"
To: Peter Zijlstra
Cc: Rob Herring, Ingo Molnar, linux-kernel@vger.kernel.org, Andi Kleen,
    Arnaldo Carvalho de Melo, Mark Rutland, Andy Lutomirski,
    Stephane Eranian, Namhyung Kim
Date: Wed, 12 May 2021 10:09:59 -0400
Message-ID: <03fff406-3050-57dc-1f17-0f5630e810af@linux.intel.com>
References: <1619115952-155809-1-git-send-email-kan.liang@linux.intel.com>
 <20210510191811.GA21560@worktop.programming.kicks-ass.net>

On 5/12/2021 3:35 AM, Peter Zijlstra wrote:
> On Tue, May 11, 2021 at 05:42:54PM -0400, Liang, Kan wrote:
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 1574b70..8216acc 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -3851,7 +3851,7 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
>>  	cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
>>  	perf_event_sched_in(cpuctx, ctx, task);
>>
>> -	if (cpuctx->sched_cb_usage && pmu->sched_task)
>> +	if (pmu->sched_task && (cpuctx->sched_cb_usage || atomic_read(&pmu->sched_cb_usages)))
>>  		pmu->sched_task(cpuctx->task_ctx, true);
>
> Aside from the obvious whitespace issues; I think this should work.

Thanks. The whitespace issues must have been introduced by the copy/paste. I will fix them in V7.

I did more tests. In some cases I can still observe a dirty counter on the first RDPMC read, so I think we still have to clear the dirty counters in x86_pmu_event_mapped() for that first read. Interrupts have to be disabled around the clearing to prevent preemption.

 static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
 {
+	unsigned long flags;
+
 	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
 		return;

 	/*
+	 * Enable sched_task() for the RDPMC task,
+	 * and clear the existing dirty counters.
+	 */
+	if (x86_pmu.sched_task && event->hw.target) {
+		atomic_inc(&event->pmu->sched_cb_usages);
+		local_irq_save(flags);
+		x86_pmu_clear_dirty_counters();
+		local_irq_restore(flags);
+	}
+
+	/*
 	 * This function relies on not being called concurrently in two
 	 * tasks in the same mm. Otherwise one task could observe
 	 * perf_rdpmc_allowed > 1 and return all the way back to

Thanks,
Kan
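(A note for readers of the archive: x86_pmu_clear_dirty_counters() itself is never quoted in this thread. The sketch below only illustrates the idea, assuming the per-CPU 'dirty' bitmap this patch series adds to cpu_hw_events, with a bit set whenever a counter is released, e.g. in x86_pmu_del(); the field names, the pseudo-index caveat, and the fixed-counter MSR arithmetic here are assumptions, not the actual patch.)

/*
 * Sketch only: wipe counters that a previous RDPMC user left behind.
 * Assumes cpuc->dirty is the bitmap added by this series, and that the
 * caller (x86_pmu_event_mapped() above, or the sched_task() callback)
 * runs with IRQs disabled so counter assignments cannot change
 * underneath us.
 */
static void x86_pmu_clear_dirty_counters(void)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	int i;

	/* Counters still assigned to live events are not stale. */
	for (i = 0; i < cpuc->n_events; i++)
		__clear_bit(cpuc->assign[i], cpuc->dirty);

	if (bitmap_empty(cpuc->dirty, X86_PMC_IDX_MAX))
		return;

	for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
		/*
		 * A real implementation must also skip pseudo indices
		 * (metrics, the fake VLBR event) that have no counter MSR.
		 */
		if (i >= INTEL_PMC_IDX_FIXED)
			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 +
			       (i - INTEL_PMC_IDX_FIXED), 0);
		else
			wrmsrl(x86_pmu_event_addr(i), 0);
	}

	bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
}

Clearing only the dirty-and-unassigned counters keeps the wipe proportional to what actually leaked, which is why the bitmap bookkeeping is worth the trouble compared to unconditionally zeroing every counter on each map or context switch.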
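(Similarly, to make "the first RDPMC read" concrete: a self-monitoring task reads its counter through the mmap'd event page roughly as below. This is the standard protocol from the perf_event_open(2) man page, not code from the patch, and error handling is trimmed. The very first such read after the event is mapped, i.e. right after x86_pmu_event_mapped() runs, is where a stale value from the counter's previous user could otherwise be observed.)

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/mman.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static uint64_t rdpmc(uint32_t counter)
{
	uint32_t lo, hi;

	asm volatile("rdpmc" : "=a" (lo), "=d" (hi) : "c" (counter));
	return (uint64_t)hi << 32 | lo;
}

int main(void)
{
	struct perf_event_attr attr;
	struct perf_event_mmap_page *pc;
	uint64_t count, offset;
	uint32_t seq, idx;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_INSTRUCTIONS;
	attr.exclude_kernel = 1;	/* works at the default paranoid level */

	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	pc = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);

	/* Seqlock-style retry loop against concurrent kernel updates. */
	do {
		seq = pc->lock;
		__sync_synchronize();
		idx = pc->index;	/* 0 means RDPMC is not available */
		offset = pc->offset;
		count = idx ? offset + rdpmc(idx - 1) : offset;
		__sync_synchronize();
	} while (pc->lock != seq);

	printf("first RDPMC read: %llu\n", (unsigned long long)count);
	return 0;
}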