From: Giovanni Cabiddu
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Kanchana Velusamy, Marco Chiappero, Giovanni Cabiddu, Fiona Trahe
Subject: [PATCH 12/20] crypto: qat - protect interrupt mask CSRs with a spinlock
Date: Thu, 12 Aug 2021 21:21:21 +0100
Message-Id: <20210812202129.18831-13-giovanni.cabiddu@intel.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210812202129.18831-1-giovanni.cabiddu@intel.com>
References: <20210812202129.18831-1-giovanni.cabiddu@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Organization: Intel Research and Development Ireland Ltd - Co. Reg. #308263 - Collinstown Industrial Park, Leixlip, County Kildare - Ireland
X-Mailing-List: linux-crypto@vger.kernel.org

From: Kanchana Velusamy

In the PF interrupt handler, the interrupt is disabled for a set of VFs
by writing to the interrupt source mask register, ERRMSK. The interrupt
is then re-enabled in the bottom half handler by writing to the same
CSR. This is done through the functions enable_vf2pf_interrupts() and
disable_vf2pf_interrupts(), which perform a read-modify-write operation
on the ERRMSK registers to mask and unmask the interrupt sources.

There can be a race condition where the top half handler for one VF
interrupt runs just as the bottom half for another VF is about to
re-enable the interrupt. Depending on whether the top or the bottom half
updates the CSR first, this results either in a spurious interrupt or in
the interrupt not being re-enabled.

This patch protects accesses to the ERRMSK registers with a spinlock.
The functions adf_enable_vf2pf_interrupts() and
adf_disable_vf2pf_interrupts() have been changed to acquire a spinlock
before accessing and modifying the ERRMSK registers. These functions use
spin_lock_irqsave() to disable IRQs and avoid potential deadlocks. In
addition, the function adf_disable_vf2pf_interrupts_irq() has been
added. This uses spin_lock() and is meant to be used in the top half
only.

Signed-off-by: Kanchana Velusamy
Co-developed-by: Marco Chiappero
Signed-off-by: Marco Chiappero
Co-developed-by: Giovanni Cabiddu
Signed-off-by: Giovanni Cabiddu
Reviewed-by: Fiona Trahe
---
 .../crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c  |  2 ++
 .../crypto/qat/qat_c62x/adf_c62x_hw_data.c    |  2 ++
 .../crypto/qat/qat_common/adf_accel_devices.h |  2 ++
 .../crypto/qat/qat_common/adf_common_drv.h    |  2 ++
 drivers/crypto/qat/qat_common/adf_isr.c       |  2 +-
 drivers/crypto/qat/qat_common/adf_pf2vf_msg.c | 33 +++++++++++++++++--
 .../qat/qat_dh895xcc/adf_dh895xcc_hw_data.c   |  2 ++
 7 files changed, 41 insertions(+), 4 deletions(-)
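Note for reviewers: below is a minimal, self-contained sketch of the locking
pattern the commit message describes. It is illustrative only and not part of
the diff that follows; every identifier in it (demo_dev, demo_mask_lock,
__demo_update_mask and friends) is hypothetical. The idea: an unlocked helper
performs the read-modify-write of the mask; callers from process or bottom-half
context take the lock with spin_lock_irqsave() because the same lock is also
taken in hard-IRQ context; the top-half caller uses plain spin_lock() since
local interrupts are already disabled there.

/*
 * Illustrative sketch only -- not part of this patch. All identifiers
 * (demo_dev, demo_mask_lock, __demo_update_mask, ...) are hypothetical.
 */
#include <linux/spinlock.h>
#include <linux/types.h>

struct demo_dev {
	spinlock_t mask_lock;	/* serializes read-modify-write of the mask */
	u32 shadow_mask;	/* stands in for an ERRMSK-style CSR */
};

static void demo_dev_init(struct demo_dev *d)
{
	spin_lock_init(&d->mask_lock);
	d->shadow_mask = 0;
}

/* Unlocked helper: the read-modify-write that must not be interleaved */
static void __demo_update_mask(struct demo_dev *d, u32 src_mask, bool mask)
{
	u32 val = d->shadow_mask;	/* read */

	if (mask)
		val |= src_mask;	/* modify: mask (disable) these sources */
	else
		val &= ~src_mask;	/* modify: unmask (enable) these sources */

	d->shadow_mask = val;		/* write back */
}

/*
 * Process/bottom-half context: the lock is also taken from the hard IRQ
 * handler, so local interrupts must be disabled while it is held to avoid
 * a self-deadlock on this CPU.
 */
static void demo_unmask_sources(struct demo_dev *d, u32 src_mask)
{
	unsigned long flags;

	spin_lock_irqsave(&d->mask_lock, flags);
	__demo_update_mask(d, src_mask, false);
	spin_unlock_irqrestore(&d->mask_lock, flags);
}

/*
 * Hard IRQ (top-half) context: interrupts are already disabled locally,
 * so the cheaper plain spin_lock()/spin_unlock() pair is sufficient.
 */
static void demo_mask_sources_irq(struct demo_dev *d, u32 src_mask)
{
	spin_lock(&d->mask_lock);
	__demo_update_mask(d, src_mask, true);
	spin_unlock(&d->mask_lock);
}

In the patch itself the same roles are played by
__adf_enable_vf2pf_interrupts()/__adf_disable_vf2pf_interrupts() (unlocked
helpers), adf_enable_vf2pf_interrupts()/adf_disable_vf2pf_interrupts()
(irqsave variants) and adf_disable_vf2pf_interrupts_irq(), which is called
from the top half in adf_msix_isr_ae().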
diff --git a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c
index 1c7f6a6f6f2d..912e84ecb9a3 100644
--- a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c
+++ b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c
@@ -161,6 +161,8 @@ static void adf_enable_ints(struct adf_accel_dev *accel_dev)
 
 static int adf_pf_enable_vf2pf_comms(struct adf_accel_dev *accel_dev)
 {
+	spin_lock_init(&accel_dev->pf.vf2pf_ints_lock);
+
 	return 0;
 }
 
diff --git a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c
index a202f912820c..069b5d6857e8 100644
--- a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c
+++ b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c
@@ -163,6 +163,8 @@ static void adf_enable_ints(struct adf_accel_dev *accel_dev)
 
 static int adf_pf_enable_vf2pf_comms(struct adf_accel_dev *accel_dev)
 {
+	spin_lock_init(&accel_dev->pf.vf2pf_ints_lock);
+
 	return 0;
 }
 
diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h
index 180c7dba3ff2..8250cf856e07 100644
--- a/drivers/crypto/qat/qat_common/adf_accel_devices.h
+++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h
@@ -246,6 +246,8 @@ struct adf_accel_dev {
 	struct adf_accel_pci accel_pci_dev;
 	union {
 		struct {
+			/* protects VF2PF interrupts access */
+			spinlock_t vf2pf_ints_lock;
 			/* vf_info is non-zero when SR-IOV is init'ed */
 			struct adf_accel_vf_info *vf_info;
 		} pf;
diff --git a/drivers/crypto/qat/qat_common/adf_common_drv.h b/drivers/crypto/qat/qat_common/adf_common_drv.h
index bd76b8a86b86..3f6277347278 100644
--- a/drivers/crypto/qat/qat_common/adf_common_drv.h
+++ b/drivers/crypto/qat/qat_common/adf_common_drv.h
@@ -193,6 +193,8 @@ int adf_sriov_configure(struct pci_dev *pdev, int numvfs);
 void adf_disable_sriov(struct adf_accel_dev *accel_dev);
 void adf_disable_vf2pf_interrupts(struct adf_accel_dev *accel_dev,
 				  u32 vf_mask);
+void adf_disable_vf2pf_interrupts_irq(struct adf_accel_dev *accel_dev,
+				      u32 vf_mask);
 void adf_enable_vf2pf_interrupts(struct adf_accel_dev *accel_dev,
 				 u32 vf_mask);
 void adf_enable_pf2vf_interrupts(struct adf_accel_dev *accel_dev);
diff --git a/drivers/crypto/qat/qat_common/adf_isr.c b/drivers/crypto/qat/qat_common/adf_isr.c
index 2302e43dfaf4..c678d5c531aa 100644
--- a/drivers/crypto/qat/qat_common/adf_isr.c
+++ b/drivers/crypto/qat/qat_common/adf_isr.c
@@ -103,7 +103,7 @@ static irqreturn_t adf_msix_isr_ae(int irq, void *dev_ptr)
 			int i;
 
 			/* Disable VF2PF interrupts for VFs with pending ints */
-			adf_disable_vf2pf_interrupts(accel_dev, vf_mask);
+			adf_disable_vf2pf_interrupts_irq(accel_dev, vf_mask);
 
 			/*
 			 * Handle VF2PF interrupt unless the VF is malicious and
diff --git a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
index 0a927ed91b19..1b0df4a5b8b7 100644
--- a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+++ b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
@@ -11,8 +11,8 @@
 #define ADF_DH895XCC_ERRMSK5	(ADF_DH895XCC_EP_OFFSET + 0xDC)
 #define ADF_DH895XCC_ERRMSK5_VF2PF_U_MASK(vf_mask)	(vf_mask >> 16)
 
-void adf_enable_vf2pf_interrupts(struct adf_accel_dev *accel_dev,
-				 u32 vf_mask)
+static void __adf_enable_vf2pf_interrupts(struct adf_accel_dev *accel_dev,
+					  u32 vf_mask)
 {
 	struct adf_hw_device_data *hw_data = accel_dev->hw_device;
 	struct adf_bar *pmisc =
@@ -35,7 +35,17 @@ void adf_enable_vf2pf_interrupts(struct adf_accel_dev *accel_dev,
 	}
 }
 
-void adf_disable_vf2pf_interrupts(struct adf_accel_dev *accel_dev, u32 vf_mask)
+void adf_enable_vf2pf_interrupts(struct adf_accel_dev *accel_dev, u32 vf_mask)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&accel_dev->pf.vf2pf_ints_lock, flags);
+	__adf_enable_vf2pf_interrupts(accel_dev, vf_mask);
+	spin_unlock_irqrestore(&accel_dev->pf.vf2pf_ints_lock, flags);
+}
+
+static void __adf_disable_vf2pf_interrupts(struct adf_accel_dev *accel_dev,
+					   u32 vf_mask)
 {
 	struct adf_hw_device_data *hw_data = accel_dev->hw_device;
 	struct adf_bar *pmisc =
@@ -58,6 +68,22 @@ void adf_disable_vf2pf_interrupts(struct adf_accel_dev *accel_dev, u32 vf_mask)
 	}
 }
 
+void adf_disable_vf2pf_interrupts(struct adf_accel_dev *accel_dev, u32 vf_mask)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&accel_dev->pf.vf2pf_ints_lock, flags);
+	__adf_disable_vf2pf_interrupts(accel_dev, vf_mask);
+	spin_unlock_irqrestore(&accel_dev->pf.vf2pf_ints_lock, flags);
+}
+
+void adf_disable_vf2pf_interrupts_irq(struct adf_accel_dev *accel_dev, u32 vf_mask)
+{
+	spin_lock(&accel_dev->pf.vf2pf_ints_lock);
+	__adf_disable_vf2pf_interrupts(accel_dev, vf_mask);
+	spin_unlock(&accel_dev->pf.vf2pf_ints_lock);
+}
+
 static int __adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
 {
 	struct adf_accel_pci *pci_info = &accel_dev->accel_pci_dev;
@@ -264,6 +290,7 @@ void adf_vf2pf_req_hndl(struct adf_accel_vf_info *vf_info)
 
 	/* re-enable interrupt on PF from this VF */
 	adf_enable_vf2pf_interrupts(accel_dev, (1 << vf_nr));
+
 	return;
 err:
 	dev_dbg(&GET_DEV(accel_dev), "Unknown message from VF%d (0x%x);\n",
diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
index dced2426edc1..07e7ba5c057d 100644
--- a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
+++ b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
@@ -182,6 +182,8 @@ static void adf_enable_ints(struct adf_accel_dev *accel_dev)
 
 static int adf_pf_enable_vf2pf_comms(struct adf_accel_dev *accel_dev)
 {
+	spin_lock_init(&accel_dev->pf.vf2pf_ints_lock);
+
 	return 0;
 }
 
-- 
2.31.1