Date: Fri, 24 May 2019 13:58:50 -0700 (PDT)
From: Stefano Stabellini
To: Sergey Dyasli
Cc: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
    linux-kernel@vger.kernel.org, Konrad Rzeszutek Wilk, Boris Ostrovsky,
    Juergen Gross, Stefano Stabellini, Paul Durrant
Subject: Re: [PATCH v1] xen/swiotlb: rework early repeat code
In-Reply-To: <20190524144250.5102-1-sergey.dyasli@citrix.com>
References: <20190524144250.5102-1-sergey.dyasli@citrix.com>
List-ID: <linux-kernel.vger.kernel.org>

On Fri, 24 May 2019, Sergey Dyasli wrote:
> Current repeat code is plain broken for the early=true case. Xen exchanges
> all DMA (<4GB) pages that it can on the first xen_swiotlb_fixup() attempt.
> All further attempts with a halved region will fail immediately because
> all DMA pages already belong to Dom0.
>
> Introduce contig_pages param for xen_swiotlb_fixup() to track the number
> of pages that were made contiguous in MFN space and use the same bootmem
> region while halving the memory requirements.
>
> Signed-off-by: Sergey Dyasli

Just FYI I am touching the same code to fix another unrelated bug, see:
https://marc.info/?l=xen-devel&m=155856767022893

> ---
> CC: Konrad Rzeszutek Wilk
> CC: Boris Ostrovsky
> CC: Juergen Gross
> CC: Stefano Stabellini
> CC: Paul Durrant
> ---
>  drivers/xen/swiotlb-xen.c | 36 ++++++++++++++++++++++++++++++------
>  1 file changed, 30 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 5dcb06fe9667..d2aba804d06c 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -142,7 +142,8 @@ static int is_xen_swiotlb_buffer(dma_addr_t dma_addr)
>  static int max_dma_bits = 32;
>
>  static int
> -xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
> +xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs,
> +                  unsigned long *contig_pages)
>  {
>          int i, rc;
>          int dma_bits;
> @@ -156,10 +157,13 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
>                  int slabs = min(nslabs - i, (unsigned long)IO_TLB_SEGSIZE);
>
>                  do {
> +                        unsigned int order = get_order(slabs << IO_TLB_SHIFT);
>                          rc = xen_create_contiguous_region(
>                                  p + (i << IO_TLB_SHIFT),
> -                                get_order(slabs << IO_TLB_SHIFT),
> +                                order,
>                                  dma_bits, &dma_handle);
> +                        if (rc == 0)
> +                                *contig_pages += 1 << order;
>                  } while (rc && dma_bits++ < max_dma_bits);
>                  if (rc)
>                          return rc;
> @@ -202,7 +206,7 @@ static const char *xen_swiotlb_error(enum xen_swiotlb_err err)
>  }
>  int __ref xen_swiotlb_init(int verbose, bool early)
>  {
> -        unsigned long bytes, order;
> +        unsigned long bytes, order, contig_pages;
>          int rc = -ENOMEM;
>          enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN;
>          unsigned int repeat = 3;
> @@ -244,13 +248,32 @@ int __ref xen_swiotlb_init(int verbose, bool early)
>          /*
>           * And replace that memory with pages under 4GB.
>           */
> +        contig_pages = 0;
>          rc = xen_swiotlb_fixup(xen_io_tlb_start,
>                                 bytes,
> -                               xen_io_tlb_nslabs);
> +                               xen_io_tlb_nslabs,
> +                               &contig_pages);
>          if (rc) {
> -                if (early)
> +                if (early) {
> +                        unsigned long orig_bytes = bytes;
> +                        while (repeat-- > 0) {
> +                                xen_io_tlb_nslabs = max(1024UL, /* Min is 2MB */
> +                                                        (xen_io_tlb_nslabs >> 1));
> +                                pr_info("Lowering to %luMB\n",
> +                                        (xen_io_tlb_nslabs << IO_TLB_SHIFT) >> 20);
> +                                bytes = xen_set_nslabs(xen_io_tlb_nslabs);
> +                                order = get_order(xen_io_tlb_nslabs << IO_TLB_SHIFT);
> +                                xen_io_tlb_end = xen_io_tlb_start + bytes;
> +                                if (contig_pages >= (1ul << order)) {
> +                                        /* Enough pages were made contiguous */
> +                                        memblock_free(__pa(xen_io_tlb_start + bytes),
> +                                                      PAGE_ALIGN(orig_bytes - bytes));
> +                                        goto fixup_done;
> +                                }
> +                        }
>                          memblock_free(__pa(xen_io_tlb_start),
>                                        PAGE_ALIGN(bytes));
> +                }
>                  else {
>                          free_pages((unsigned long)xen_io_tlb_start, order);
>                          xen_io_tlb_start = NULL;
> @@ -258,6 +281,7 @@ int __ref xen_swiotlb_init(int verbose, bool early)
>                  m_ret = XEN_SWIOTLB_EFIXUP;
>                  goto error;
>          }
> +fixup_done:
>          start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
>          if (early) {
>                  if (swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs,
> @@ -272,7 +296,7 @@ int __ref xen_swiotlb_init(int verbose, bool early)
>
>          return rc;
>  error:
> -        if (repeat--) {
> +        if (repeat-- > 0) {
>                  xen_io_tlb_nslabs = max(1024UL, /* Min is 2MB */
>                                          (xen_io_tlb_nslabs >> 1));
>                  pr_info("Lowering to %luMB\n",
> --
> 2.17.1
>
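
As a side note, here is a minimal, standalone userspace sketch of the
bookkeeping idea the commit message describes: remember how many pages the
first xen_swiotlb_fixup() attempt already made contiguous, and on each
halving retry simply check whether that already-exchanged prefix of the
original buffer covers the smaller request instead of issuing a new exchange
that is bound to fail. This is not the kernel code from the patch:
IO_TLB_SHIFT and the "Min is 2MB" clamp are taken from it, but fake_fixup(),
PAGE_SHIFT, the partial-success ratio and the plain page-count comparison
(the patch compares against 1 << get_order(bytes)) are invented stand-ins
for illustration only.

/* sketch.c - illustrative model only, compile with: cc -Wall sketch.c */
#include <stdio.h>

#define IO_TLB_SHIFT 11   /* 2 KiB swiotlb slabs, as in the kernel */
#define PAGE_SHIFT   12   /* 4 KiB pages, assumed for this sketch */

/*
 * Stand-in for xen_swiotlb_fixup(): pretend the hypervisor could only
 * make a third of the requested pages contiguous before running out of
 * exchangeable DMA pages.  Returns 0 on full success, -1 otherwise.
 */
static int fake_fixup(unsigned long nslabs, unsigned long *contig_pages)
{
	unsigned long want_pages = (nslabs << IO_TLB_SHIFT) >> PAGE_SHIFT;

	*contig_pages = want_pages / 3;
	return (*contig_pages == want_pages) ? 0 : -1;
}

int main(void)
{
	unsigned long nslabs = 1UL << 16;   /* initial request: 128 MiB worth of slabs */
	unsigned long contig_pages = 0;
	int repeat = 3;

	if (fake_fixup(nslabs, &contig_pages) == 0) {
		printf("first attempt succeeded\n");
		return 0;
	}

	/* Halve the request; never retry the exchange, just reuse what we got. */
	while (repeat-- > 0) {
		unsigned long need_pages;

		nslabs >>= 1;
		if (nslabs < 1024)          /* Min is 2MB, as in the kernel clamp */
			nslabs = 1024;

		need_pages = (nslabs << IO_TLB_SHIFT) >> PAGE_SHIFT;
		printf("lowering to %lu slabs (%lu pages needed, %lu contiguous)\n",
		       nslabs, need_pages, contig_pages);

		if (contig_pages >= need_pages) {
			/* The already-contiguous prefix covers the smaller
			 * buffer: keep it, free the unused tail, done. */
			printf("reusing first %lu pages of the original buffer\n",
			       need_pages);
			return 0;
		}
	}

	printf("giving up\n");
	return 1;
}

Running it shows one failed halving step followed by a successful reuse of
the prefix, which is the case the original repeat code could never reach.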