To: LKML, linux-rt-users
From: Chris Friesen
Subject: [RT] should pm_qos_resume_latency_us on one CPU affect latency on another?
Message-ID: <4b3bf6d8-7e1a-138b-048d-b3c1f5f65297@windriver.com>
Date: Tue, 13 Aug 2019 15:04:39 -0600

Hi all,

Just wondering if what I'm seeing is expected.

I'm using the CentOS 7 RT kernel with boot args of "skew_tick=1 irqaffinity=0 rcu_nocbs=1-27 nohz_full=1-27" among others.

Normally if I run cyclictest it sets /dev/cpu_dma_latency to zero.  This gives a worst-case latency of around 6 usec.

If I set /dev/cpu_dma_latency to something large and then set /sys/devices/system/cpu/cpu${num}/power/pm_qos_resume_latency_us to "2" for the CPUs that cyclictest is running on, then the worst-case latency jumps to more like 16 usec.  If I set pm_qos_resume_latency_us to "2" for all CPUs on the system, then the worst-case latency comes back down.  It's not sufficient to set it for all CPUs on the same socket as cyclictest.

Setting cpuset.sched_load_balance to zero for the cpuset containing cyclictest does not seem to make any difference to the worst-case latency.  (All cpusets but one have cpuset.sched_load_balance set to zero, and that one doesn't include the CPUs that cyclictest runs on.)

Looking at the latency traces, there does not appear to be any single culprit.  I've seen cases where the extra time appears to be spent in migrate_task_rq_fair(), tick_do_update_jiffies64(), rcu_irq_enter(), and enqueue_entity().

I'm trying to dynamically isolate CPUs from the system for running RT tasks, but it seems like the rest of the system still affects the isolated CPUs.

Any comments/suggestions would be appreciated.

Thanks,
Chris
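
P.S. For concreteness, here is a minimal sketch of how the two knobs above can be driven.  The 2000 usec "large" global value and the cpu 1-27 range are just placeholders (not my exact tooling), and error handling is omitted:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/*
	 * Global constraint: write a 32-bit latency value (in usec) to
	 * /dev/cpu_dma_latency and keep the fd open for as long as the
	 * constraint should apply.  cyclictest writes 0 here; this sketch
	 * writes a deliberately large value instead.
	 */
	int32_t global_us = 2000;	/* placeholder for "something large" */
	int qos_fd = open("/dev/cpu_dma_latency", O_WRONLY);
	write(qos_fd, &global_us, sizeof(global_us));

	/*
	 * Per-CPU constraint: write the value (usec, as text) into the
	 * per-CPU sysfs file for each CPU of interest.
	 */
	for (int cpu = 1; cpu <= 27; cpu++) {	/* placeholder CPU range */
		char path[128];
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/power/pm_qos_resume_latency_us",
			 cpu);
		int fd = open(path, O_WRONLY);
		write(fd, "2", 1);
		close(fd);
	}

	pause();	/* the /dev/cpu_dma_latency constraint lasts only while the fd is open */
	close(qos_fd);
	return 0;
}

The /dev/cpu_dma_latency constraint is dropped as soon as the fd is closed, which is why cyclictest keeps it open for the duration of the run.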