From: Giovanni Cabiddu
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Vlad Dronov,
	Giovanni Cabiddu, stable@vger.kernel.org, Mikulas Patocka,
	Marco Chiappero, Wojciech Ziemba
Subject: [PATCH v3 01/10] crypto: qat - use pre-allocated buffers in datapath
Date: Mon, 9 May 2022 14:34:08 +0100
Message-Id: <20220509133417.56043-2-giovanni.cabiddu@intel.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220509133417.56043-1-giovanni.cabiddu@intel.com>
References: <20220509133417.56043-1-giovanni.cabiddu@intel.com>
MIME-Version: 1.0
Organization: Intel Research and Development Ireland Ltd - Co. Reg. #308263 -
	Collinstown Industrial Park, Leixlip, County Kildare - Ireland

In order to do DMAs, the QAT device requires that the scatterlist
structures are mapped and translated into a format that the firmware can
understand. This is defined as the composition of a scatter-gather list
(SGL) descriptor header, the struct qat_alg_buf_list, plus a variable
number of flat buffer descriptors, the struct qat_alg_buf.

The allocation and mapping of these data structures are done each time a
request is received from the skcipher and aead APIs. In an OOM situation,
this behaviour might lead to a deadlock if an allocation fails.

Based on the conversation in [1], increase the size of the aead and
skcipher request contexts to include an SGL descriptor that can handle
a maximum of 4 flat buffers. If a request exceeds 4 entries, memory is
allocated dynamically.

[1] https://lore.kernel.org/linux-crypto/20200722072932.GA27544@gondor.apana.org.au/

Cc: stable@vger.kernel.org
Fixes: d370cec32194 ("crypto: qat - Intel(R) QAT crypto interface")
Reported-by: Mikulas Patocka
Signed-off-by: Giovanni Cabiddu
Reviewed-by: Marco Chiappero
Reviewed-by: Wojciech Ziemba
---
 drivers/crypto/qat/qat_common/qat_algs.c   | 64 +++++++++++++---------
 drivers/crypto/qat/qat_common/qat_crypto.h | 24 ++++++++
 2 files changed, 61 insertions(+), 27 deletions(-)
diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index f998ed58457c..ec635fe44c1f 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -46,19 +46,6 @@
 static DEFINE_MUTEX(algs_lock);
 static unsigned int active_devs;
 
-struct qat_alg_buf {
-	u32 len;
-	u32 resrvd;
-	u64 addr;
-} __packed;
-
-struct qat_alg_buf_list {
-	u64 resrvd;
-	u32 num_bufs;
-	u32 num_mapped_bufs;
-	struct qat_alg_buf bufers[];
-} __packed __aligned(64);
-
 /* Common content descriptor */
 struct qat_alg_cd {
 	union {
@@ -693,7 +680,10 @@ static void qat_alg_free_bufl(struct qat_crypto_instance *inst,
 				 bl->bufers[i].len, DMA_BIDIRECTIONAL);
 
 	dma_unmap_single(dev, blp, sz, DMA_TO_DEVICE);
-	kfree(bl);
+
+	if (!qat_req->buf.sgl_src_valid)
+		kfree(bl);
+
 	if (blp != blpout) {
 		/* If out of place operation dma unmap only data */
 		int bufless = blout->num_bufs - blout->num_mapped_bufs;
@@ -704,7 +694,9 @@ static void qat_alg_free_bufl(struct qat_crypto_instance *inst,
 					 DMA_BIDIRECTIONAL);
 		}
 		dma_unmap_single(dev, blpout, sz_out, DMA_TO_DEVICE);
-		kfree(blout);
+
+		if (!qat_req->buf.sgl_dst_valid)
+			kfree(blout);
 	}
 }
 
@@ -721,15 +713,24 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 	dma_addr_t blp = DMA_MAPPING_ERROR;
 	dma_addr_t bloutp = DMA_MAPPING_ERROR;
 	struct scatterlist *sg;
-	size_t sz_out, sz = struct_size(bufl, bufers, n + 1);
+	size_t sz_out, sz = struct_size(bufl, bufers, n);
+	int node = dev_to_node(&GET_DEV(inst->accel_dev));
 
 	if (unlikely(!n))
 		return -EINVAL;
 
-	bufl = kzalloc_node(sz, GFP_ATOMIC,
-			    dev_to_node(&GET_DEV(inst->accel_dev)));
-	if (unlikely(!bufl))
-		return -ENOMEM;
+	qat_req->buf.sgl_src_valid = false;
+	qat_req->buf.sgl_dst_valid = false;
+
+	if (n > QAT_MAX_BUFF_DESC) {
+		bufl = kzalloc_node(sz, GFP_ATOMIC, node);
+		if (unlikely(!bufl))
+			return -ENOMEM;
+	} else {
+		bufl = &qat_req->buf.sgl_src.sgl_hdr;
+		memset(bufl, 0, sizeof(struct qat_alg_buf_list));
+		qat_req->buf.sgl_src_valid = true;
+	}
 
 	for_each_sg(sgl, sg, n, i)
 		bufl->bufers[i].addr = DMA_MAPPING_ERROR;
@@ -760,12 +761,18 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 		struct qat_alg_buf *bufers;
 
 		n = sg_nents(sglout);
-		sz_out = struct_size(buflout, bufers, n + 1);
+		sz_out = struct_size(buflout, bufers, n);
 		sg_nctr = 0;
-		buflout = kzalloc_node(sz_out, GFP_ATOMIC,
-				       dev_to_node(&GET_DEV(inst->accel_dev)));
-		if (unlikely(!buflout))
-			goto err_in;
+
+		if (n > QAT_MAX_BUFF_DESC) {
+			buflout = kzalloc_node(sz_out, GFP_ATOMIC, node);
+			if (unlikely(!buflout))
+				goto err_in;
+		} else {
+			buflout = &qat_req->buf.sgl_dst.sgl_hdr;
+			memset(buflout, 0, sizeof(struct qat_alg_buf_list));
+			qat_req->buf.sgl_dst_valid = true;
+		}
 
 		bufers = buflout->bufers;
 		for_each_sg(sglout, sg, n, i)
@@ -810,7 +817,9 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 			dma_unmap_single(dev, buflout->bufers[i].addr,
 					 buflout->bufers[i].len,
 					 DMA_BIDIRECTIONAL);
-	kfree(buflout);
+
+	if (!qat_req->buf.sgl_dst_valid)
+		kfree(buflout);
 
 err_in:
 	if (!dma_mapping_error(dev, blp))
@@ -823,7 +832,8 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 					 bufl->bufers[i].len,
 					 DMA_BIDIRECTIONAL);
 
-	kfree(bufl);
+	if (!qat_req->buf.sgl_src_valid)
+		kfree(bufl);
 
 	dev_err(dev, "Failed to map buf for dma\n");
 	return -ENOMEM;
diff --git a/drivers/crypto/qat/qat_common/qat_crypto.h b/drivers/crypto/qat/qat_common/qat_crypto.h
index b6a4c95ae003..0928f159ea99 100644
--- a/drivers/crypto/qat/qat_common/qat_crypto.h
+++ b/drivers/crypto/qat/qat_common/qat_crypto.h
@@ -21,6 +21,26 @@ struct qat_crypto_instance {
 	atomic_t refctr;
 };
 
+#define QAT_MAX_BUFF_DESC	4
+
+struct qat_alg_buf {
+	u32 len;
+	u32 resrvd;
+	u64 addr;
+} __packed;
+
+struct qat_alg_buf_list {
+	u64 resrvd;
+	u32 num_bufs;
+	u32 num_mapped_bufs;
+	struct qat_alg_buf bufers[];
+} __packed;
+
+struct qat_alg_fixed_buf_list {
+	struct qat_alg_buf_list sgl_hdr;
+	struct qat_alg_buf descriptors[QAT_MAX_BUFF_DESC];
+} __packed __aligned(64);
+
 struct qat_crypto_request_buffs {
 	struct qat_alg_buf_list *bl;
 	dma_addr_t blp;
@@ -28,6 +48,10 @@ struct qat_crypto_request_buffs {
 	dma_addr_t bloutp;
 	size_t sz;
 	size_t sz_out;
+	bool sgl_src_valid;
+	bool sgl_dst_valid;
+	struct qat_alg_fixed_buf_list sgl_src;
+	struct qat_alg_fixed_buf_list sgl_dst;
 };
 
 struct qat_crypto_request;
-- 
2.35.1
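
For illustration only (not part of the patch), here is a minimal, self-contained
user-space sketch of the pre-allocation scheme the commit message describes. The
names (buf_desc, buf_list, fixed_buf_list, get_bufl, put_bufl, request_ctx) are
hypothetical and only loosely mirror the driver's structs; the point is the idea of
embedding an SGL descriptor with room for 4 flat buffers in the request context and
falling back to a dynamic allocation for larger requests, with a flag that tells the
free path which case it is in.

/*
 * Sketch of the pre-allocated SGL descriptor scheme (illustrative names,
 * not the driver code). Build with gcc or clang; like the kernel structs,
 * fixed_buf_list relies on the GNU C extension that permits nesting a
 * struct with a flexible array member.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define QAT_MAX_BUFF_DESC 4

struct buf_desc {			/* analogue of struct qat_alg_buf */
	uint32_t len;
	uint32_t resrvd;
	uint64_t addr;
};

struct buf_list {			/* analogue of struct qat_alg_buf_list */
	uint64_t resrvd;
	uint32_t num_bufs;
	uint32_t num_mapped_bufs;
	struct buf_desc bufers[];
};

struct fixed_buf_list {			/* analogue of qat_alg_fixed_buf_list */
	struct buf_list sgl_hdr;
	struct buf_desc descriptors[QAT_MAX_BUFF_DESC];
};

struct request_ctx {			/* analogue of qat_crypto_request_buffs */
	bool sgl_src_valid;		/* true when the embedded SGL is in use */
	struct fixed_buf_list sgl_src;
};

/* Use the embedded descriptor when it is big enough, else allocate. */
static struct buf_list *get_bufl(struct request_ctx *req, unsigned int n)
{
	struct buf_list *bl;

	req->sgl_src_valid = false;
	if (n > QAT_MAX_BUFF_DESC)
		return calloc(1, sizeof(*bl) + n * sizeof(struct buf_desc));

	bl = &req->sgl_src.sgl_hdr;
	memset(bl, 0, sizeof(*bl));
	req->sgl_src_valid = true;
	return bl;
}

/* Free only what was dynamically allocated. */
static void put_bufl(struct request_ctx *req, struct buf_list *bl)
{
	if (!req->sgl_src_valid)
		free(bl);
}

int main(void)
{
	struct request_ctx req;
	struct buf_list *bl;

	bl = get_bufl(&req, 3);		/* fits: embedded SGL, no allocation */
	printf("3 entries -> embedded SGL used: %d\n", req.sgl_src_valid);
	put_bufl(&req, bl);

	bl = get_bufl(&req, 8);		/* exceeds 4: dynamic allocation */
	printf("8 entries -> embedded SGL used: %d\n", req.sgl_src_valid);
	put_bufl(&req, bl);
	return 0;
}

The sgl_src_valid/sgl_dst_valid flags are what let the free path tell the embedded
descriptor apart from a dynamic allocation, which is exactly the role of the checks
the patch adds around kfree() in qat_alg_free_bufl() and in the error paths of
qat_alg_sgl_to_bufl().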