Date: Wed, 23 Jan 2019 13:14:05 -0800
From: "hch@infradead.org"
To: Stefano Stabellini
Cc: "hch@infradead.org", Peng Fan, "mst@redhat.com", "jasowang@redhat.com",
    "xen-devel@lists.xenproject.org", "linux-remoteproc@vger.kernel.org",
    "linux-kernel@vger.kernel.org",
    "virtualization@lists.linux-foundation.org", luto@kernel.org,
    jgross@suse.com, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain
Message-ID: <20190123211405.GA4971@infradead.org>
References: <20190121050056.14325-1-peng.fan@nxp.com>
 <20190123071232.GA20526@infradead.org>

On Wed, Jan 23, 2019 at 01:04:33PM -0800, Stefano Stabellini wrote:
> If vring_use_dma_api is actually supposed to return true when
> dma_dev->dma_mem is set, then both Peng's patch and the patch I wrote
> are not fixing the real issue here.
>
> I don't know enough about remoteproc to know where the problem actually
> lies though.

The problem is the following: Devices can declare a specific memory
region that they want to use when the driver calls dma_alloc_coherent
for the device.  This is done using the shared-dma-pool DT binding,
which comes in two variants that would be a little too much to explain
here.
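Roughly, and glossing over the two pool variants and all the details,
the driver side of that flow looks something like the sketch below.
foo_probe and the 64k allocation are made up for illustration; the
of_reserved_mem_* and dma_alloc_coherent calls are the actual
interfaces involved:

#include <linux/dma-mapping.h>
#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>

static int foo_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	dma_addr_t dma;
	void *va;
	int ret;

	/*
	 * Wire up the reserved-memory region referenced by the device's
	 * memory-region property as its per-device coherent pool.
	 */
	ret = of_reserved_mem_device_init(dev);
	if (ret)
		return ret;

	/* This now allocates from the shared-dma-pool, not the normal path. */
	va = dma_alloc_coherent(dev, SZ_64K, &dma, GFP_KERNEL);
	if (!va) {
		of_reserved_mem_device_release(dev);
		return -ENOMEM;
	}

	/* hand va/dma to the hardware from here on */
	return 0;
}

of_reserved_mem_device_release() undoes the assignment again on the
error and remove paths.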
remoteproc makes use of that because apparently the device can only
communicate using that region.  But it then feeds memory obtained with
dma_alloc_coherent back into the virtio code.  For that it calls
vmalloc_to_page on the dma_alloc_coherent result, which is a huge no-go
for the DMA API and only worked accidentally on a few platforms, and
apparently arm64 just changed a few internals that made it stop working
for remoteproc.

The right answer is to not use the DMA API to allocate memory from a
device-specific region, but to tie the driver directly into the DT
reserved memory API in a way that allows it to easily obtain a struct
device for it (rough sketch below).

This is orthogonal to another issue, and that is that hardware virtio
devices really always need to use the DMA API, otherwise we'll bypass
features such as device-specific DMA pools, DMA offsets, cache
flushing, etc, etc.
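To make the reserved-memory suggestion a little more concrete, here is
a rough, hypothetical sketch of how a driver could get at its region
directly through the DT reserved memory API instead of going through
dma_alloc_coherent.  foo_map_reserved_region is made up, and whether
the region is page-backed depends on how it is declared in the DT:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_reserved_mem.h>
#include <linux/types.h>

/* Hypothetical helper, not existing remoteproc code. */
static void *foo_map_reserved_region(struct device *dev, phys_addr_t *pa,
				     size_t *size)
{
	struct device_node *np;
	struct reserved_mem *rmem;
	void *va;

	/* "memory-region" is the standard property pointing at the pool. */
	np = of_parse_phandle(dev->of_node, "memory-region", 0);
	if (!np)
		return ERR_PTR(-ENODEV);

	rmem = of_reserved_mem_lookup(np);
	of_node_put(np);
	if (!rmem)
		return ERR_PTR(-ENODEV);

	va = memremap(rmem->base, rmem->size, MEMREMAP_WC);
	if (!va)
		return ERR_PTR(-ENOMEM);

	*pa = rmem->base;
	*size = rmem->size;

	/*
	 * The driver now knows the physical address of the region, so
	 * (assuming the region is page-backed) the virtio side can use
	 * pfn_to_page(PHYS_PFN(rmem->base + offset)) instead of calling
	 * vmalloc_to_page on a pointer whose backing the DMA API does
	 * not define.
	 */
	return va;
}

The point is just that the driver then fully owns the region and knows
its physical address, instead of relying on undefined behaviour of
pointers returned from dma_alloc_coherent.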