From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Shaul Triebitz, Luca Coelho, Sasha Levin
Subject: [PATCH 4.19 073/361] iwlwifi: pcie: avoid empty free RB queue
Date: Sun, 11 Nov 2018 14:17:00 -0800
Message-Id: <20181111221630.228724642@linuxfoundation.org>
In-Reply-To: <20181111221619.915519183@linuxfoundation.org>
References: <20181111221619.915519183@linuxfoundation.org>
X-stable: review

4.19-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Shaul Triebitz

[ Upstream commit 868a1e863f95183f00809363fefba6d4f5bcd116 ]

If all free RB queues are empty, the driver will never restock the
free RB queue.  That's because the restocking happens in the Rx flow,
and if the free queue is empty there will be no Rx.

Although there's a background worker (a.k.a. allocator) allocating
memory for RBs so that the Rx handler can restock them, the worker may
run only after the free queue has become empty (and then it is too
late for restocking, as explained above).

There is a solution for that called 'emergency': if the number of used
RBs reaches half the amount of all RBs, the Rx handler will not wait
for the allocator but immediately allocate memory for the used RBs and
restock the free queue.

But since the used-RB count is kept per queue, the used RBs may be
spread across the queues such that the emergency check fails for every
individual queue while the driver still runs out of RBs, causing the
symptom described above.

To fix this, move to emergency mode if the sum of used RBs across
*all* Rx queues reaches half the amount of all RBs.

Signed-off-by: Shaul Triebitz
Signed-off-by: Luca Coelho
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 drivers/net/wireless/intel/iwlwifi/pcie/rx.c | 32 +++++++++++++++++----------
 1 file changed, 21 insertions(+), 11 deletions(-)

--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
@@ -1144,6 +1144,14 @@ void iwl_pcie_rx_free(struct iwl_trans *
 	kfree(trans_pcie->rxq);
 }
 
+static void iwl_pcie_rx_move_to_allocator(struct iwl_rxq *rxq,
+					  struct iwl_rb_allocator *rba)
+{
+	spin_lock(&rba->lock);
+	list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
+	spin_unlock(&rba->lock);
+}
+
 /*
  * iwl_pcie_rx_reuse_rbd - Recycle used RBDs
  *
@@ -1175,9 +1183,7 @@ static void iwl_pcie_rx_reuse_rbd(struct
 	if ((rxq->used_count % RX_CLAIM_REQ_ALLOC) == RX_POST_REQ_ALLOC) {
 		/* Move the 2 RBDs to the allocator ownership.
		 Allocator has another 6 from pool for the request completion*/
-		spin_lock(&rba->lock);
-		list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
-		spin_unlock(&rba->lock);
+		iwl_pcie_rx_move_to_allocator(rxq, rba);
 
 		atomic_inc(&rba->req_pending);
 		queue_work(rba->alloc_wq, &rba->rx_alloc);
@@ -1396,10 +1402,18 @@ restart:
 	IWL_DEBUG_RX(trans, "Q %d: HW = SW = %d\n", rxq->id, r);
 
 	while (i != r) {
+		struct iwl_rb_allocator *rba = &trans_pcie->rba;
 		struct iwl_rx_mem_buffer *rxb;
-
-		if (unlikely(rxq->used_count == rxq->queue_size / 2))
+		/* number of RBDs still waiting for page allocation */
+		u32 rb_pending_alloc =
+			atomic_read(&trans_pcie->rba.req_pending) *
+			RX_CLAIM_REQ_ALLOC;
+
+		if (unlikely(rb_pending_alloc >= rxq->queue_size / 2 &&
+			     !emergency)) {
+			iwl_pcie_rx_move_to_allocator(rxq, rba);
 			emergency = true;
+		}
 
 		rxb = iwl_pcie_get_rxb(trans, rxq, i);
 		if (!rxb)
@@ -1421,17 +1435,13 @@ restart:
 			iwl_pcie_rx_allocator_get(trans, rxq);
 
 		if (rxq->used_count % RX_CLAIM_REQ_ALLOC == 0 && !emergency) {
-			struct iwl_rb_allocator *rba = &trans_pcie->rba;
-
 			/* Add the remaining empty RBDs for allocator use */
-			spin_lock(&rba->lock);
-			list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
-			spin_unlock(&rba->lock);
+			iwl_pcie_rx_move_to_allocator(rxq, rba);
 		} else if (emergency) {
 			count++;
 			if (count == 8) {
 				count = 0;
-				if (rxq->used_count < rxq->queue_size / 3)
+				if (rb_pending_alloc < rxq->queue_size / 3)
 					emergency = false;
 
 				rxq->read = i;
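
For readers following along, below is a minimal standalone sketch (userspace,
not driver code) of the failure mode the commit message describes: each Rx
queue's used-RB count stays below the per-queue emergency threshold, so the
old equality check never fires, yet the global allocator backlog is already
past the threshold the new check uses.  The queue count, queue size, and the
per-queue spread are hypothetical; the batch size of 8 matches the "2 + 6"
RBDs per allocator request noted in the driver comment in the diff above.

/* Illustrative sketch only -- all numbers below are hypothetical. */
#include <stdio.h>

#define NUM_QUEUES	   4
#define QUEUE_SIZE	   256	/* RBs in one Rx queue (hypothetical) */
#define RX_CLAIM_REQ_ALLOC 8	/* RBDs handed to the allocator per request */

int main(void)
{
	/* Pending allocator requests issued by each Rx queue */
	int req_pending[NUM_QUEUES] = { 5, 6, 5, 6 };
	int total_pending = 0;

	for (int q = 0; q < NUM_QUEUES; q++) {
		int used = req_pending[q] * RX_CLAIM_REQ_ALLOC;

		/* Old per-queue check: 40-48 used RBs never equals 128 */
		if (used == QUEUE_SIZE / 2)
			printf("queue %d: emergency\n", q);

		total_pending += req_pending[q];
	}

	/* New global check: 22 * 8 = 176 >= 128, so emergency fires */
	int rb_pending_alloc = total_pending * RX_CLAIM_REQ_ALLOC;

	if (rb_pending_alloc >= QUEUE_SIZE / 2)
		printf("emergency: %d RBDs awaiting pages (>= %d)\n",
		       rb_pending_alloc, QUEUE_SIZE / 2);

	return 0;
}

Run as-is, the per-queue branch never prints while the global branch does,
which is exactly the gap the patch closes by summing pending allocations
across all queues before comparing against the emergency threshold.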