Date: Mon, 14 Aug 2017 12:16:18 -0700 (PDT)
From: Shivappa Vikas
To: Thomas Gleixner
cc: Vikas Shivappa, vikas.shivappa@intel.com, x86@kernel.org,
    linux-kernel@vger.kernel.org, hpa@zytor.com, peterz@infradead.org,
    ravi.v.shankar@intel.com, tony.luck@intel.com, fenghua.yu@intel.com,
    eranian@google.com, davidcc@google.com, ak@linux.intel.com,
    sai.praneeth.prakhya@intel.com
Subject: Re: [PATCH 3/3] x86/intel_rdt/cqm: Improve limbo list processing

On Mon, 14 Aug 2017, Thomas Gleixner wrote:

> On Wed, 9 Aug 2017, Vikas Shivappa wrote:
>
>> @@ -426,6 +426,9 @@ static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
>>                                GFP_KERNEL);
>>              if (!d->rmid_busy_llc)
>>                      return -ENOMEM;
>> +            INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo);
>> +            if (has_busy_rmid(r, d))
>> +                    cqm_setup_limbo_handler(d);
>
> This is beyond silly. d->rmid_busy_llc is allocated a few lines above. How
> would a bit be set here?

If we logically offline all CPUs in a package and bring them back, the
worker needs to be scheduled on that package if it had busy RMIDs.
Otherwise those RMIDs never get freed, because their rmid->busy stays 1.

I need to scan the limbo list and set the bits for all limbo RMIDs after
the alloc and before doing the 'has_busy_rmid' check. Will fix.

>
>>      }
>>      if (is_mbm_total_enabled()) {
>>              tsize = sizeof(*d->mbm_total);
>> @@ -536,11 +539,25 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
>>              list_del(&d->list);
>>              if (is_mbm_enabled())
>>                      cancel_delayed_work(&d->mbm_over);
>> +
>> +            if (is_llc_occupancy_enabled() &&
>> +                has_busy_rmid(r, d))
>
> What is that line break helping here and why can't you just unconditionally
> cancel the work?

Will fix the line break. The has_busy_rmid() check ensures the worker was
actually scheduled, so that we only cancel a worker which is pending.

>
>> +                    cancel_delayed_work(&d->cqm_limbo);
>> +
>>              kfree(d);
>> -    } else if (r == &rdt_resources_all[RDT_RESOURCE_L3] &&
>> -               cpu == d->mbm_work_cpu && is_mbm_enabled()) {
>> -            cancel_delayed_work(&d->mbm_over);
>> -            mbm_setup_overflow_handler(d);
>> +            return;
>> +    }
>> +
>> +    if (r == &rdt_resources_all[RDT_RESOURCE_L3]) {
>> +            if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
>> +                    cancel_delayed_work(&d->mbm_over);
>> +                    mbm_setup_overflow_handler(d);
>
> I think this is the wrong approach. If the timer is about to fire you
> essentially double the interval. So you better flush the work, which will
> reschedule it if needed.

Ok, will fix. We can flush (set up and run it immediately) the work here
on the new CPU.
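
Something like this, perhaps (untested sketch; passing a delay to the
setup helper is an assumption on my part, not the current signature):

        /*
         * Let the caller pass the delay: on CPU removal the pending work
         * is moved to a surviving CPU in the domain and run immediately,
         * and the handler then re-arms itself with the full interval.
         */
        void mbm_setup_overflow_handler(struct rdt_domain *dom,
                                        unsigned long delay_ms)
        {
                unsigned long delay = msecs_to_jiffies(delay_ms);
                int cpu = cpumask_any(&dom->cpu_mask);

                dom->mbm_work_cpu = cpu;
                schedule_delayed_work_on(cpu, &dom->mbm_over, delay);
        }

        /* In domain_remove_cpu(): */
        if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
                cancel_delayed_work(&d->mbm_over);
                mbm_setup_overflow_handler(d, 0);
        }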
>
>> +            }
>> +            if (is_llc_occupancy_enabled() && cpu == d->mbm_work_cpu &&
>
> That wants to be d->cbm_work_cpu, right?

Correct, thanks for pointing that out. Will fix.

>
>> +                has_busy_rmid(r, d)) {
>> +                    cancel_delayed_work(&d->cqm_limbo);
>> +                    cqm_setup_limbo_handler(d);
>
> See above.

For cqm the 1s interval is not a hard requirement, but we can flush the
work like mbm to keep it uniform.

Thanks,
Vikas

>
> Thanks,
>
>       tglx
>
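
P.S. An untested sketch of the limbo list rescan mentioned above for
domain_setup_mon_state(); the 'rmid_limbo_lru' list name is an assumption
about where the limbo entries live:

        struct rmid_entry *entry;

        /*
         * Rebuild the per-domain busy bitmap from the limbo entries after
         * (re)allocating it, so a package which went offline and came back
         * restarts its limbo worker for the RMIDs still in limbo.
         */
        list_for_each_entry(entry, &rmid_limbo_lru, list)
                set_bit(entry->rmid, d->rmid_busy_llc);

        INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo);
        if (has_busy_rmid(r, d))
                cqm_setup_limbo_handler(d);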