From: Marek Marczykowski-Górecki
To: xen-devel@lists.xenproject.org
Cc: Marek Marczykowski-Górecki, stable@vger.kernel.org, Konrad Rzeszutek Wilk, Roger Pau Monné, Boris Ostrovsky, Juergen Gross, Jens Axboe, linux-block@vger.kernel.org (open list:BLOCK LAYER), linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 6/6] xen-blkfront: prepare request locally, only then put it on the shared ring
Date: Mon, 30 Apr 2018 23:01:50 +0200
Message-Id: <951a221b0e655b3077d1f96ac365194320bc8809.1525122026.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.13.6

Do not reuse data which might theoretically have already been modified
by the backend. This is mostly about the private copy of the request
(info->shadow[id].req): make sure the request saved there is really the
one that was just filled in, not whatever the backend may have written
into the shared ring slot in the meantime. This is complementary to
XSA-155.
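
In rough outline (a simplified sketch, not the driver code itself;
fill_request() here stands in for the real setup done in
blkif_queue_discard_req()/blkif_queue_rw_req() below), the pattern this
patch moves to is:

	struct blkif_request ring_req = { 0 };	/* frontend-private memory */

	fill_request(&ring_req);		/* prepare entirely locally */

	/* Publish to the shared page only once fully prepared... */
	*RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt) = ring_req;
	wmb();		/* body must be visible before the producer moves */
	rinfo->ring.req_prod_pvt++;

	/* ...and take the shadow copy from the LOCAL struct.  Reading the
	 * slot back through RING_GET_REQUEST() would trust memory the
	 * backend can rewrite at any time.
	 */
	rinfo->shadow[id].req = ring_req;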
CC: stable@vger.kernel.org
Signed-off-by: Marek Marczykowski-Górecki
---
 drivers/block/xen-blkfront.c | 76 +++++++++++++++++++++----------------
 1 file changed, 44 insertions(+), 32 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 3926811..b100b55 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -525,19 +525,16 @@ static int blkif_ioctl(struct block_device *bdev, fmode_t mode,
 
 static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
 					    struct request *req,
-					    struct blkif_request **ring_req)
+					    struct blkif_request *ring_req)
 {
 	unsigned long id;
 
-	*ring_req = RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt);
-	rinfo->ring.req_prod_pvt++;
-
 	id = get_id_from_freelist(rinfo);
 	rinfo->shadow[id].request = req;
 	rinfo->shadow[id].status = REQ_WAITING;
 	rinfo->shadow[id].associated_id = NO_ASSOCIATED_ID;
 
-	(*ring_req)->u.rw.id = id;
+	ring_req->u.rw.id = id;
 
 	return id;
 }
@@ -545,23 +542,28 @@ static int blkif_queue_discard_req(struct request *req,
 				   struct blkfront_ring_info *rinfo)
 {
 	struct blkfront_info *info = rinfo->dev_info;
-	struct blkif_request *ring_req;
+	struct blkif_request ring_req = { 0 };
 	unsigned long id;
 
 	/* Fill out a communications ring structure. */
 	id = blkif_ring_get_request(rinfo, req, &ring_req);
 
-	ring_req->operation = BLKIF_OP_DISCARD;
-	ring_req->u.discard.nr_sectors = blk_rq_sectors(req);
-	ring_req->u.discard.id = id;
-	ring_req->u.discard.sector_number = (blkif_sector_t)blk_rq_pos(req);
+	ring_req.operation = BLKIF_OP_DISCARD;
+	ring_req.u.discard.nr_sectors = blk_rq_sectors(req);
+	ring_req.u.discard.id = id;
+	ring_req.u.discard.sector_number = (blkif_sector_t)blk_rq_pos(req);
 	if (req_op(req) == REQ_OP_SECURE_ERASE && info->feature_secdiscard)
-		ring_req->u.discard.flag = BLKIF_DISCARD_SECURE;
+		ring_req.u.discard.flag = BLKIF_DISCARD_SECURE;
 	else
-		ring_req->u.discard.flag = 0;
+		ring_req.u.discard.flag = 0;
+
+	/* make the request available to the backend */
+	*RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt) = ring_req;
+	wmb();
+	rinfo->ring.req_prod_pvt++;
 
 	/* Keep a private copy so we can reissue requests when recovering. */
-	rinfo->shadow[id].req = *ring_req;
+	rinfo->shadow[id].req = ring_req;
 
 	return 0;
 }
@@ -693,7 +695,7 @@ static void blkif_setup_extra_req(struct blkif_request *first,
 static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *rinfo)
 {
 	struct blkfront_info *info = rinfo->dev_info;
-	struct blkif_request *ring_req, *extra_ring_req = NULL;
+	struct blkif_request ring_req = { 0 }, extra_ring_req = { 0 };
 	unsigned long id, extra_id = NO_ASSOCIATED_ID;
 	bool require_extra_req = false;
 	int i;
@@ -758,16 +760,16 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 		 *  BLKIF_OP_WRITE
 		 */
 		BUG_ON(req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA);
-		ring_req->operation = BLKIF_OP_INDIRECT;
-		ring_req->u.indirect.indirect_op = rq_data_dir(req) ?
+		ring_req.operation = BLKIF_OP_INDIRECT;
+		ring_req.u.indirect.indirect_op = rq_data_dir(req) ?
 			BLKIF_OP_WRITE : BLKIF_OP_READ;
-		ring_req->u.indirect.sector_number = (blkif_sector_t)blk_rq_pos(req);
-		ring_req->u.indirect.handle = info->handle;
-		ring_req->u.indirect.nr_segments = num_grant;
+		ring_req.u.indirect.sector_number = (blkif_sector_t)blk_rq_pos(req);
+		ring_req.u.indirect.handle = info->handle;
+		ring_req.u.indirect.nr_segments = num_grant;
 	} else {
-		ring_req->u.rw.sector_number = (blkif_sector_t)blk_rq_pos(req);
-		ring_req->u.rw.handle = info->handle;
-		ring_req->operation = rq_data_dir(req) ?
+		ring_req.u.rw.sector_number = (blkif_sector_t)blk_rq_pos(req);
+		ring_req.u.rw.handle = info->handle;
+		ring_req.operation = rq_data_dir(req) ?
 			BLKIF_OP_WRITE : BLKIF_OP_READ;
 		if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
 			/*
@@ -778,15 +780,15 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 			 * since it is guaranteed ordered WRT previous writes.)
 			 */
 			if (info->feature_flush && info->feature_fua)
-				ring_req->operation =
+				ring_req.operation =
 					BLKIF_OP_WRITE_BARRIER;
 			else if (info->feature_flush)
-				ring_req->operation =
+				ring_req.operation =
 					BLKIF_OP_FLUSH_DISKCACHE;
 			else
-				ring_req->operation = 0;
+				ring_req.operation = 0;
 		}
-		ring_req->u.rw.nr_segments = num_grant;
+		ring_req.u.rw.nr_segments = num_grant;
 		if (unlikely(require_extra_req)) {
 			extra_id = blkif_ring_get_request(rinfo, req,
 							  &extra_ring_req);
@@ -796,7 +798,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 		 */
 		rinfo->shadow[extra_id].num_sg = 0;
 
-		blkif_setup_extra_req(ring_req, extra_ring_req);
+		blkif_setup_extra_req(&ring_req, &extra_ring_req);
 
 		/* Link the 2 requests together */
 		rinfo->shadow[extra_id].associated_id = id;
@@ -804,12 +806,12 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 		}
 	}
 
-	setup.ring_req = ring_req;
+	setup.ring_req = &ring_req;
 	setup.id = id;
 	setup.require_extra_req = require_extra_req;
 
 	if (unlikely(require_extra_req))
-		setup.extra_ring_req = extra_ring_req;
+		setup.extra_ring_req = &extra_ring_req;
 
 	for_each_sg(rinfo->shadow[id].sg, sg, num_sg, i) {
 		BUG_ON(sg->offset + sg->length > PAGE_SIZE);
@@ -831,10 +833,20 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 	if (setup.segments)
 		kunmap_atomic(setup.segments);
 
+	/* make the request available to the backend */
+	*RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt) = ring_req;
+	wmb();
+	rinfo->ring.req_prod_pvt++;
 	/* Keep a private copy so we can reissue requests when recovering. */
-	rinfo->shadow[id].req = *ring_req;
-	if (unlikely(require_extra_req))
-		rinfo->shadow[extra_id].req = *extra_ring_req;
+	rinfo->shadow[id].req = ring_req;
+
+	if (unlikely(require_extra_req)) {
+		*RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt) = extra_ring_req;
+		wmb();
+		rinfo->ring.req_prod_pvt++;
+		/* Keep a private copy so we can reissue requests when recovering. */
+		rinfo->shadow[extra_id].req = extra_ring_req;
+	}
 
 	if (new_persistent_gnts)
 		gnttab_free_grant_references(setup.gref_head);
-- 
git-series 0.9.1
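
P.S. A note on the wmb() before each producer bump (a simplified sketch,
not part of the patch): the backend trusts every slot up to the producer
index it observes, so the request body must be visible in the shared page
before the index can move past it. Roughly, assuming a back-ring pointer
ring (cf. how xen-blkback consumes the ring) and a hypothetical
process_request() handler:

	RING_IDX rp = ring->sring->req_prod;	/* producer index published by frontend */
	rmb();	/* pairs with the frontend's wmb(): see request bodies up to 'rp' */
	while (ring->req_cons != rp) {
		/* copy the slot out of shared memory before using it */
		struct blkif_request req = *RING_GET_REQUEST(ring, ring->req_cons);
		ring->req_cons++;
		process_request(&req);
	}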