From: Xiang Zheng <zhengxiang9@huawei.com>
Subject: [PATCH] KVM: ARM64: Update perf event when setting PMU count value
Date: Sun, 19 May 2019 18:05:59 +0800
Message-ID: <20190519100559.7188-1-zhengxiang9@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

The guest adjusts the sample period and sets the PMU counter value when
it takes a long time to handle PMU interrupts. However, there is no
corresponding change on the virtual PMU, which is emulated via a perf
event.
This can cause a large number of PMU interrupts to be injected into the
guest, and the guest may hang while handling them. To avoid this, update
the sample_period of the perf event whenever the counter value changes.

Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com>
---
 virt/kvm/arm/pmu.c | 54 +++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 45 insertions(+), 9 deletions(-)

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 1c5b76c..cbad3ec 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -24,6 +24,11 @@
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
 
+static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
+static struct perf_event *kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu,
+						    struct kvm_pmc *pmc,
+						    struct perf_event_attr *attr);
+
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
  * @vcpu: The vcpu pointer
@@ -57,11 +62,29 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
  */
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 {
-	u64 reg;
+	u64 reg, counter, old_sample_period;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	struct perf_event *event;
+	struct perf_event_attr attr;
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
 	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
+
+	if (pmc->perf_event) {
+		attr = pmc->perf_event->attr;
+		old_sample_period = attr.sample_period;
+		counter = kvm_pmu_get_counter_value(vcpu, select_idx);
+		attr.sample_period = (-counter) & pmc->bitmask;
+		if (attr.sample_period == old_sample_period)
+			return;
+
+		kvm_pmu_stop_counter(vcpu, pmc);
+		event = kvm_pmu_create_perf_event(vcpu, pmc, &attr);
+		if (event)
+			pmc->perf_event = event;
+	}
 }
 
 /**
@@ -303,6 +326,24 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 	}
 }
 
+static struct perf_event *kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu,
+						    struct kvm_pmc *pmc,
+						    struct perf_event_attr *attr)
+{
+	struct perf_event *event;
+
+	event = perf_event_create_kernel_counter(attr, -1, current,
+						 kvm_pmu_perf_overflow, pmc);
+
+	if (IS_ERR(event)) {
+		pr_err_once("kvm: pmu event creation failed %ld\n",
+			    PTR_ERR(event));
+		return NULL;
+	}
+
+	return event;
+}
+
 /**
  * kvm_pmu_software_increment - do software increment
  * @vcpu: The vcpu pointer
@@ -416,15 +457,10 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	/* The initial sample period (overflow count) of an event. */
 	attr.sample_period = (-counter) & pmc->bitmask;
 
-	event = perf_event_create_kernel_counter(&attr, -1, current,
-						 kvm_pmu_perf_overflow, pmc);
-	if (IS_ERR(event)) {
-		pr_err_once("kvm: pmu event creation failed %ld\n",
-			    PTR_ERR(event));
-		return;
-	}
+	event = kvm_pmu_create_perf_event(vcpu, pmc, &attr);
 
-	pmc->perf_event = event;
+	if (event)
+		pmc->perf_event = event;
 }
 
 bool kvm_arm_support_pmu_v3(void)
-- 
1.8.3.1
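
P.S. A standalone illustration (not part of the patch) of the sample
period arithmetic used above: the virtual counter counts up and
overflows when it wraps past pmc->bitmask, so the perf event has to be
reprogrammed to fire after (-counter) & bitmask further increments.
The helper name period_for() and the example values are invented for
this sketch; it builds as ordinary userspace C:

/*
 * Sketch of attr.sample_period = (-counter) & pmc->bitmask.
 * The counter overflows after bitmask + 1 increments, so the period
 * is the remaining distance from the current value to the overflow.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t period_for(uint64_t counter, uint64_t bitmask)
{
	/* Distance from the current counter value to the next overflow. */
	return (-counter) & bitmask;
}

int main(void)
{
	const uint64_t bitmask = 0xffffffffULL;	/* 32-bit ARMv8 event counter */

	/* Guest programs the counter close to overflow: 16 events remain. */
	printf("%llu\n", (unsigned long long)period_for(0xfffffff0ULL, bitmask));
	/* Guest rewinds the counter to 1: a nearly full period remains. */
	printf("%llu\n", (unsigned long long)period_for(1, bitmask));
	return 0;
}

With a 32-bit counter, writing 0xfffffff0 leaves 16 events until the
next overflow while writing 1 leaves 0xffffffff, which is why
kvm_pmu_set_counter_value() must recompute the period instead of
keeping the stale one in the existing perf event.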