From: Fenghua Yu <fenghua.yu@intel.com>
To: Vinod Koul <vkoul@kernel.org>, Dave Jiang <dave.jiang@intel.com>
Cc: dmaengine@vger.kernel.org, linux-kernel <linux-kernel@vger.kernel.org>,
	Tony Zhu <tony.zhu@intel.com>, Fenghua Yu <fenghua.yu@intel.com>
Subject: [PATCH v4 12/16] dmaengine: idxd: add per file user counters for completion record faults
Date: Fri, 7 Apr 2023 13:31:39 -0700
Message-Id: <20230407203143.2189681-13-fenghua.yu@intel.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230407203143.2189681-1-fenghua.yu@intel.com>
References: <20230407203143.2189681-1-fenghua.yu@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Dave Jiang <dave.jiang@intel.com>

Add counters per opened file for the char device in order to keep track of
how many completion record faults occurred and how many of those faults
failed the writeback by the driver after attempting to fault in the page.
The counters are managed by an xarray that associates the PASID with the
struct idxd_user_context.
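For illustration only (not part of the patch), a minimal sketch of the
PASID-keyed pattern the counters rely on; demo_register(), demo_count_fault()
and demo_unregister() are hypothetical stand-ins for the cdev open/release
and fault paths, while wq->upasid_xa, wq->uc_lock and the COUNTER_* indices
are the fields this series actually uses:

	/*
	 * Hypothetical sketch: illustrative helper names, not driver code.
	 * wq->upasid_xa, wq->uc_lock and COUNTER_MAX come from this series.
	 */
	struct demo_user_ctx {
		u64 counters[COUNTER_MAX];
	};

	/* open: remember the per-file context under its PASID */
	static int demo_register(struct idxd_wq *wq, u32 pasid,
				 struct demo_user_ctx *ctx)
	{
		int rc;

		mutex_lock(&wq->uc_lock);
		rc = xa_insert(&wq->upasid_xa, pasid, ctx, GFP_KERNEL);
		mutex_unlock(&wq->uc_lock);
		return rc;
	}

	/* fault handling: look the context up by PASID and bump a counter */
	static void demo_count_fault(struct idxd_wq *wq, u32 pasid, int index)
	{
		struct demo_user_ctx *ctx;

		mutex_lock(&wq->uc_lock);
		ctx = xa_load(&wq->upasid_xa, pasid);
		if (ctx && index < COUNTER_MAX)
			ctx->counters[index]++;
		mutex_unlock(&wq->uc_lock);
	}

	/* release: drop the PASID association */
	static void demo_unregister(struct idxd_wq *wq, u32 pasid)
	{
		mutex_lock(&wq->uc_lock);
		xa_erase(&wq->upasid_xa, pasid);
		mutex_unlock(&wq->uc_lock);
	}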
Tested-by: Tony Zhu <tony.zhu@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Co-developed-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
---
v3:
- Move the majority of the xarray code to patch 8, which implements the new
  function idxd_copy_cr(), since that function needs the xarray to maintain
  and find the mm by wq and PASID. Only keep the user counter related xarray
  code here.

 drivers/dma/idxd/cdev.c | 18 ++++++++++++++++++
 drivers/dma/idxd/idxd.h |  7 +++++++
 drivers/dma/idxd/irq.c  |  4 ++++
 3 files changed, 29 insertions(+)

diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
index 0a51c33198f6..5c8e964e671b 100644
--- a/drivers/dma/idxd/cdev.c
+++ b/drivers/dma/idxd/cdev.c
@@ -39,6 +39,7 @@ struct idxd_user_context {
 	struct mm_struct *mm;
 	unsigned int flags;
 	struct iommu_sva *sva;
+	u64 counters[COUNTER_MAX];
 };
 
 static void idxd_cdev_dev_release(struct device *dev)
@@ -84,6 +85,23 @@ static void idxd_xa_pasid_remove(struct idxd_user_context *ctx)
 	mutex_unlock(&wq->uc_lock);
 }
 
+void idxd_user_counter_increment(struct idxd_wq *wq, u32 pasid, int index)
+{
+	struct idxd_user_context *ctx;
+
+	if (index >= COUNTER_MAX)
+		return;
+
+	mutex_lock(&wq->uc_lock);
+	ctx = xa_load(&wq->upasid_xa, pasid);
+	if (!ctx) {
+		mutex_unlock(&wq->uc_lock);
+		return;
+	}
+	ctx->counters[index]++;
+	mutex_unlock(&wq->uc_lock);
+}
+
 static int idxd_cdev_open(struct inode *inode, struct file *filp)
 {
 	struct idxd_user_context *ctx;
diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
index 4c4baa80c731..9fb26d017285 100644
--- a/drivers/dma/idxd/idxd.h
+++ b/drivers/dma/idxd/idxd.h
@@ -127,6 +127,12 @@ struct idxd_pmu {
 
 #define IDXD_MAX_PRIORITY	0xf
 
+enum {
+	COUNTER_FAULTS = 0,
+	COUNTER_FAULT_FAILS,
+	COUNTER_MAX
+};
+
 enum idxd_wq_state {
 	IDXD_WQ_DISABLED = 0,
 	IDXD_WQ_ENABLED,
@@ -713,6 +719,7 @@ int idxd_wq_add_cdev(struct idxd_wq *wq);
 void idxd_wq_del_cdev(struct idxd_wq *wq);
 int idxd_copy_cr(struct idxd_wq *wq, ioasid_t pasid, unsigned long addr,
 		 void *buf, int len);
+void idxd_user_counter_increment(struct idxd_wq *wq, u32 pasid, int index);
 
 /* perfmon */
 #if IS_ENABLED(CONFIG_INTEL_IDXD_PERFMON)
diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
index c660d63a3eb8..f4b0f59c95ba 100644
--- a/drivers/dma/idxd/irq.c
+++ b/drivers/dma/idxd/irq.c
@@ -240,6 +240,7 @@ static void idxd_evl_fault_work(struct work_struct *work)
 			evl->batch_fail[entry_head->batch_id] = false;
 
 		copy_size = cr_size;
+		idxd_user_counter_increment(wq, entry_head->pasid, COUNTER_FAULTS);
 		break;
 	case DSA_COMP_BATCH_EVL_ERR:
 		bf = &evl->batch_fail[entry_head->batch_id];
@@ -251,6 +252,7 @@ static void idxd_evl_fault_work(struct work_struct *work)
 			*result = 1;
 			*bf = false;
 		}
+		idxd_user_counter_increment(wq, entry_head->pasid, COUNTER_FAULTS);
 		break;
 	case DSA_COMP_DRAIN_EVL:
 		copy_size = cr_size;
@@ -282,6 +284,7 @@ static void idxd_evl_fault_work(struct work_struct *work)
 	switch (fault->status) {
 	case DSA_COMP_CRA_XLAT:
 		if (copied != copy_size) {
+			idxd_user_counter_increment(wq, entry_head->pasid, COUNTER_FAULT_FAILS);
 			dev_dbg_ratelimited(dev, "Failed to write to completion record: (%d:%d)\n",
 					    copy_size, copied);
 			if (entry_head->batch)
@@ -290,6 +293,7 @@ static void idxd_evl_fault_work(struct work_struct *work)
 		break;
 	case DSA_COMP_BATCH_EVL_ERR:
 		if (copied != copy_size) {
+			idxd_user_counter_increment(wq, entry_head->pasid, COUNTER_FAULT_FAILS);
 			dev_dbg_ratelimited(dev, "Failed to write to batch completion record: (%d:%d)\n",
 					    copy_size, copied);
 		}
-- 
2.37.1