From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Shaul Triebitz, Luca Coelho, Sasha Levin
Subject: [PATCH 4.14 053/222] iwlwifi: pcie: avoid empty free RB queue
Date: Sun, 11 Nov 2018 14:22:30 -0800
Message-Id: <20181111221652.832743353@linuxfoundation.org>
In-Reply-To: <20181111221647.665769131@linuxfoundation.org>
References: <20181111221647.665769131@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
X-Mailing-List: linux-kernel@vger.kernel.org

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Shaul Triebitz

[ Upstream commit 868a1e863f95183f00809363fefba6d4f5bcd116 ]

If all free RB queues are empty, the driver will never restock the
free RB queue.  That's because the restocking happens in the Rx flow,
and if the free queue is empty there will be no Rx.

Although there's a background worker (a.k.a. allocator) allocating
memory for RBs so that the Rx handler can restock them, the worker may
run only after the free queue has become empty (and then it is too
late for restocking, as explained above).

There is a solution for that called 'emergency': if the number of used
RBs reaches half the amount of all RBs, the Rx handler will not wait
for the allocator but immediately allocate memory for the used RBs and
restock the free queue.

But since the used-RB count is kept per queue, it may happen that the
used RBs are spread across the queues such that the emergency check
fails for each queue individually, and the driver still runs out of
RBs, causing the symptom described above.

To fix it, move to emergency mode if the sum of *all* used RBs (for
all Rx queues) reaches half the amount of all RBs.

Signed-off-by: Shaul Triebitz
Signed-off-by: Luca Coelho
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 drivers/net/wireless/intel/iwlwifi/pcie/rx.c | 32 +++++++++++++++++----------
 1 file changed, 21 insertions(+), 11 deletions(-)

--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
@@ -1049,6 +1049,14 @@ void iwl_pcie_rx_free(struct iwl_trans *
 	kfree(trans_pcie->rxq);
 }
 
+static void iwl_pcie_rx_move_to_allocator(struct iwl_rxq *rxq,
+					  struct iwl_rb_allocator *rba)
+{
+	spin_lock(&rba->lock);
+	list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
+	spin_unlock(&rba->lock);
+}
+
 /*
  * iwl_pcie_rx_reuse_rbd - Recycle used RBDs
  *
@@ -1080,9 +1088,7 @@ static void iwl_pcie_rx_reuse_rbd(struct
 	if ((rxq->used_count % RX_CLAIM_REQ_ALLOC) == RX_POST_REQ_ALLOC) {
 		/* Move the 2 RBDs to the allocator ownership.
		 Allocator has another 6 from pool for the request completion*/
-		spin_lock(&rba->lock);
-		list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
-		spin_unlock(&rba->lock);
+		iwl_pcie_rx_move_to_allocator(rxq, rba);
 
 		atomic_inc(&rba->req_pending);
 		queue_work(rba->alloc_wq, &rba->rx_alloc);
@@ -1260,10 +1266,18 @@ restart:
 	IWL_DEBUG_RX(trans, "Q %d: HW = SW = %d\n", rxq->id, r);
 
 	while (i != r) {
+		struct iwl_rb_allocator *rba = &trans_pcie->rba;
 		struct iwl_rx_mem_buffer *rxb;
-
-		if (unlikely(rxq->used_count == rxq->queue_size / 2))
+		/* number of RBDs still waiting for page allocation */
+		u32 rb_pending_alloc =
+			atomic_read(&trans_pcie->rba.req_pending) *
+			RX_CLAIM_REQ_ALLOC;
+
+		if (unlikely(rb_pending_alloc >= rxq->queue_size / 2 &&
+			     !emergency)) {
+			iwl_pcie_rx_move_to_allocator(rxq, rba);
 			emergency = true;
+		}
 
 		if (trans->cfg->mq_rx_supported) {
 			/*
@@ -1306,17 +1320,13 @@ restart:
 			iwl_pcie_rx_allocator_get(trans, rxq);
 
 		if (rxq->used_count % RX_CLAIM_REQ_ALLOC == 0 && !emergency) {
-			struct iwl_rb_allocator *rba = &trans_pcie->rba;
-
 			/* Add the remaining empty RBDs for allocator use */
-			spin_lock(&rba->lock);
-			list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
-			spin_unlock(&rba->lock);
+			iwl_pcie_rx_move_to_allocator(rxq, rba);
 		} else if (emergency) {
 			count++;
 			if (count == 8) {
 				count = 0;
-				if (rxq->used_count < rxq->queue_size / 3)
+				if (rb_pending_alloc < rxq->queue_size / 3)
 					emergency = false;
 
 				rxq->read = i;
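[ Editor's illustration, not part of the patch: the following is a minimal
standalone C sketch of the failure mode the commit message describes.  All
names, queue sizes, and constants below are made up for illustration; only
the shape of the check mirrors the driver.  The old code compared a single
queue's used count against half the queue size (with `==` in the actual
driver; `>=` is used here for clarity), while the patched code derives a
global figure from the shared allocator's pending requests
(req_pending * RX_CLAIM_REQ_ALLOC) and compares that instead. ]

/*
 * Sketch: used RBs spread evenly across queues defeat a per-queue
 * emergency check, while a global pending-allocation count catches it.
 * Hypothetical standalone program; not iwlwifi code.
 */
#include <stdbool.h>
#include <stdio.h>

#define NUM_QUEUES      4
#define QUEUE_SIZE      256   /* RBs per queue (made-up value) */
#define CLAIM_REQ_ALLOC 8     /* RBDs handed to the allocator per request */

struct rxq {
	unsigned int used_count;   /* used RBs on this queue only */
};

/* Old behaviour: each queue looks only at its own used count. */
static bool emergency_per_queue(const struct rxq *q)
{
	return q->used_count >= QUEUE_SIZE / 2;
}

/* New behaviour: RBDs still waiting for pages, summed across all
 * queues via the shared allocator's request counter, compared
 * against half of one queue's size. */
static bool emergency_global(unsigned int req_pending,
			     unsigned int queue_size)
{
	unsigned int rb_pending_alloc = req_pending * CLAIM_REQ_ALLOC;

	return rb_pending_alloc >= queue_size / 2;
}

int main(void)
{
	/* 160 used RBs spread evenly: no single queue reaches the
	 * per-queue threshold of 128, yet RBs are running out. */
	struct rxq queues[NUM_QUEUES] = { { 40 }, { 40 }, { 40 }, { 40 } };
	unsigned int req_pending = 160 / CLAIM_REQ_ALLOC; /* 20 requests */

	for (int i = 0; i < NUM_QUEUES; i++)
		printf("queue %d per-queue emergency: %s\n", i,
		       emergency_per_queue(&queues[i]) ? "yes" : "no");

	printf("global emergency: %s\n",
	       emergency_global(req_pending, QUEUE_SIZE) ? "yes" : "no");
	return 0;
}

[ The sketch prints "no" for every per-queue check but "yes" for the global
one.  Deriving the global count from req_pending, as the patch does, also
avoids walking every Rx queue in the hot Rx path. ]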