From: Christoph Hellwig
To: Robin Murphy
Cc: Lu Baolu, Christoph Hellwig, David Woodhouse, Joerg Roedel,
    ashok.raj@intel.com, jacob.jun.pan@intel.com, alan.cox@intel.com,
    kevin.tian@intel.com, mika.westerberg@linux.intel.com,
    pengfei.xu@intel.com, Konrad Rzeszutek Wilk, Marek Szyprowski,
    iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 02/10] swiotlb: Factor out slot allocation and free
Date: Mon, 29 Apr 2019 13:44:01 +0200
Message-ID: <20190429114401.GA30333@lst.de>
In-Reply-To: <90182d27-5764-7676-8ca6-b2773a40cfe1@arm.com>
References: <20190421011719.14909-3-baolu.lu@linux.intel.com>
 <20190422164555.GA31181@lst.de>
 <0c6e5983-312b-0d6b-92f5-64861cd6804d@linux.intel.com>
 <20190423061232.GB12762@lst.de>
 <20190424144532.GA21480@lst.de>
 <20190426150433.GA19930@lst.de>
 <93b3d627-782d-cae0-2175-77a5a8b3fe6e@linux.intel.com>
 <90182d27-5764-7676-8ca6-b2773a40cfe1@arm.com>
On Mon, Apr 29, 2019 at 12:06:52PM +0100, Robin Murphy wrote:
> From the reply up-thread I guess you're trying to include an optimisation
> to only copy the head and tail of the buffer if it spans multiple pages,
> and directly map the ones in the middle, but AFAICS that's going to tie you
> to also using strict mode for TLB maintenance, which may not be a win
> overall depending on the balance between invalidation bandwidth vs. memcpy
> bandwidth.  At least if we use standard SWIOTLB logic to always copy the
> whole thing, we should be able to release the bounce pages via the flush
> queue to allow 'safe' lazy unmaps.

Oh.  The head and tail optimization is what I missed.  Yes, for that
we'd need the offset.

> Either way I think it would be worth just implementing the straightforward
> version first, then coming back to consider optimisations later.

Agreed, let's start simple.  Especially as large DMA mappings or
allocations should usually be properly aligned anyway, and if not we
should fix that for multiple reasons.
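
For illustration, here is a minimal userspace sketch of the head/tail
split discussed above.  It is not swiotlb code; the identifiers
(struct bounce_plan, plan_bounce) and the fixed 4K page size are made
up for the example.  It only shows why the intra-page offset matters:
the unaligned head and tail of a multi-page buffer would go through
bounce pages, while the page-aligned middle could be mapped in place.

#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

struct bounce_plan {
	size_t head_copy;	/* unaligned head, copied via bounce pages */
	size_t direct_map;	/* page-aligned middle, mapped in place */
	size_t tail_copy;	/* unaligned tail, copied via bounce pages */
};

static struct bounce_plan plan_bounce(unsigned long addr, size_t len)
{
	struct bounce_plan p = { 0, 0, 0 };
	unsigned long offset = addr & (PAGE_SIZE - 1);

	if (len < PAGE_SIZE || offset + len <= PAGE_SIZE) {
		/* small or single-page buffer: bounce the whole thing */
		p.head_copy = len;
		return p;
	}

	p.head_copy = offset ? PAGE_SIZE - offset : 0;
	p.tail_copy = (addr + len) & (PAGE_SIZE - 1);
	p.direct_map = len - p.head_copy - p.tail_copy;
	return p;
}

int main(void)
{
	/* a three-page transfer starting 512 bytes into a page */
	struct bounce_plan p = plan_bounce(0x1200, 3 * PAGE_SIZE);

	printf("head copy:  %zu bytes\n", p.head_copy);
	printf("direct map: %zu bytes\n", p.direct_map);
	printf("tail copy:  %zu bytes\n", p.tail_copy);
	return 0;
}

As I read Robin's point, the directly mapped middle is what forces
strict TLB invalidation in this scheme, whereas full bouncing never
exposes the real buffer and so can tolerate lazy unmaps via the flush
queue.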