Date: Wed, 28 Jun 2017 09:34:59 +0100
From: Roger Pau Monné
To: Dongli Zhang
Subject: Re: [PATCH 1/1] xen/blkfront: always allocate grants first from per-queue persistent grants
Message-ID: <20170628083459.aodudybeq2zcvcsg@dhcp-3-128.uk.xensource.com>
In-Reply-To: <1498624983-6293-1-git-send-email-dongli.zhang@oracle.com>

On Wed, Jun 28, 2017 at 12:43:03PM +0800, Dongli Zhang wrote:
> This patch partially reverts 3df0e50 ("xen/blkfront: pseudo support for
> multi hardware queues/rings"). The xen-blkfront queue/ring might hang due
> to grant allocation failure when gnttab_free_head is almost empty while
> many persistent grants are reserved for this queue/ring.
> 
> Since persistent grant management has been per-queue since 73716df
> ("xen/blkfront: make persistent grants pool per-queue"), we should always
> allocate from persistent grants first.
> 
> Signed-off-by: Dongli Zhang
> ---
>  drivers/block/xen-blkfront.c | 17 ++++++++++-------
>  1 file changed, 10 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 3945963..d2b759f 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -713,6 +713,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>  	 * existing persistent grants, or if we have to get new grants,
>  	 * as there are not sufficiently many free.
>  	 */
> +	bool new_persistent_gnts = false;
>  	struct scatterlist *sg;
>  	int num_sg, max_grefs, num_grant;
> 
> @@ -724,12 +725,13 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>  	 */
>  	max_grefs += INDIRECT_GREFS(max_grefs);
> 
> -	/*
> -	 * We have to reserve 'max_grefs' grants because persistent
> -	 * grants are shared by all rings.
> -	 */
> -	if (max_grefs > 0)
> -		if (gnttab_alloc_grant_references(max_grefs, &setup.gref_head) < 0) {
> +	/* Check if we have enough persistent grants to allocate a request */
> +	if (rinfo->persistent_gnts_c < max_grefs) {
> +		new_persistent_gnts = true;
> +
> +		if (gnttab_alloc_grant_references(
> +		    max_grefs - rinfo->persistent_gnts_c,
> +		    &setup.gref_head) < 0) {
>  			gnttab_request_free_callback(
>  				&rinfo->callback,
>  				blkif_restart_queue_callback,

AFAICT you should also change the call to gnttab_request_free_callback to
request max_grefs - rinfo->persistent_gnts_c grants. In any case, the number
of persistent grants is not going to decrease now that the buffer is
per-queue, so the only thing that can happen is that requests complete and
the number of persistent grants increases.

Roger.
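
For reference, a minimal sketch of how the allocation path might look with
that suggestion applied (hypothetical; the identifiers and the return-1
retry path are taken from the quoted patch, not from a committed fix):

	/* Allocate only the shortfall beyond this queue's persistent
	 * grants, and ask the free-grant callback for the same amount. */
	if (rinfo->persistent_gnts_c < max_grefs) {
		new_persistent_gnts = true;

		if (gnttab_alloc_grant_references(
		    max_grefs - rinfo->persistent_gnts_c,
		    &setup.gref_head) < 0) {
			gnttab_request_free_callback(
				&rinfo->callback,
				blkif_restart_queue_callback,
				rinfo,
				/* was max_grefs; request just the shortfall */
				max_grefs - rinfo->persistent_gnts_c);
			return 1;
		}
	}

This way the queue is only re-woken once enough free grants exist to cover
the part of the request that cannot be served from the queue's own
persistent grants.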