From: Ben Hutchings <ben@decadent.org.uk>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: akpm@linux-foundation.org, "David S. Miller", "Julian Wiedmann"
Date: Sun, 11 Nov 2018 19:49:05 +0000
Subject: [PATCH 3.16 214/366] s390/qeth: don't clobber buffer on async TX completion
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

3.16.61-rc1 review patch.
If anyone has any objections, please let me know.

------------------

From: Julian Wiedmann

commit ce28867fd20c23cd769e78b4d619c4755bf71a1c upstream.

If qeth_qdio_output_handler() detects that a transmit requires async
completion, it replaces the pending buffer's metadata object
(qeth_qdio_out_buffer) so that this queue buffer can be re-used while
the data is pending completion.

Later when the CQ indicates async completion of such a metadata object,
qeth_qdio_cq_handler() tries to free any data associated with this
object (since HW has now completed the transfer). By calling
qeth_clear_output_buffer(), it erroneously operates on the queue buffer
that _previously_ belonged to this transfer ... but which has been
potentially re-used several times by now.
This results in double-frees of the buffer's data, and failing
transmits as the buffer descriptor is scrubbed in mid-air.

The correct way of handling this situation is to
1. scrub the queue buffer when it is prepared for re-use, and
2. later obtain the data addresses from the async-completion notifier
   (ie. the AOB), instead of the queue buffer.

All this only affects qeth devices used for af_iucv HiperTransport.

Fixes: 0da9581ddb0f ("qeth: exploit asynchronous delivery of storage blocks")
Signed-off-by: Julian Wiedmann
Signed-off-by: David S. Miller
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings
---
 drivers/s390/net/qeth_core.h      | 11 +++++++++++
 drivers/s390/net/qeth_core_main.c | 22 ++++++++++++++++------
 2 files changed, 27 insertions(+), 6 deletions(-)

--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -844,6 +844,17 @@ struct qeth_trap_id {
 /*some helper functions*/
 #define QETH_CARD_IFNAME(card) (((card)->dev)? (card)->dev->name : "")
 
+static inline void qeth_scrub_qdio_buffer(struct qdio_buffer *buf,
+					  unsigned int elements)
+{
+	unsigned int i;
+
+	for (i = 0; i < elements; i++)
+		memset(&buf->element[i], 0,
+		       sizeof(struct qdio_buffer_element));
+	buf->element[14].sflags = 0;
+	buf->element[15].sflags = 0;
+}
+
 static inline struct qeth_card *CARD_FROM_CDEV(struct ccw_device *cdev)
 {
 	struct qeth_card *card = dev_get_drvdata(&((struct ccwgroup_device *)
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -65,9 +65,6 @@ static void qeth_notify_skbs(struct qeth
 		struct qeth_qdio_out_buffer *buf,
 		enum iucv_tx_notify notification);
 static void qeth_release_skbs(struct qeth_qdio_out_buffer *buf);
-static void qeth_clear_output_buffer(struct qeth_qdio_out_q *queue,
-		struct qeth_qdio_out_buffer *buf,
-		enum qeth_qdio_buffer_states newbufstate);
 static int qeth_init_qdio_out_buf(struct qeth_qdio_out_q *, int);
 
 struct workqueue_struct *qeth_wq;
@@ -451,6 +448,7 @@ static inline void qeth_qdio_handle_aob(
 	struct qaob *aob;
 	struct qeth_qdio_out_buffer *buffer;
 	enum iucv_tx_notify notification;
+	unsigned int i;
 
 	aob = (struct qaob *) phys_to_virt(phys_aob_addr);
 	QETH_CARD_TEXT(card, 5, "haob");
@@ -475,10 +473,18 @@ static inline void qeth_qdio_handle_aob(
 	qeth_notify_skbs(buffer->q, buffer, notification);
 
 	buffer->aob = NULL;
-	qeth_clear_output_buffer(buffer->q, buffer,
-				 QETH_QDIO_BUF_HANDLED_DELAYED);
+	/* Free dangling allocations. The attached skbs are handled by
+	 * qeth_cleanup_handled_pending().
+	 */
+	for (i = 0;
+	     i < aob->sb_count && i < QETH_MAX_BUFFER_ELEMENTS(card);
+	     i++) {
+		if (aob->sba[i] && buffer->is_header[i])
+			kmem_cache_free(qeth_core_header_cache,
+					(void *) aob->sba[i]);
+	}
+	atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED);
 
-	/* from here on: do not touch buffer anymore */
 	qdio_release_aob(aob);
 }
 
@@ -3635,6 +3641,10 @@ void qeth_qdio_output_handler(struct ccw
 			QETH_CARD_TEXT(queue->card, 5, "aob");
 			QETH_CARD_TEXT_(queue->card, 5, "%lx",
 					virt_to_phys(buffer->aob));
+
+			/* prepare the queue slot for re-use: */
+			qeth_scrub_qdio_buffer(buffer->buffer,
+					       QETH_MAX_BUFFER_ELEMENTS(card));
 			if (qeth_init_qdio_out_buf(queue, bidx)) {
 				QETH_CARD_TEXT(card, 2, "outofbuf");
 				qeth_schedule_recovery(card);