Date: Fri, 25 Jan 2019 11:18:18 -0800 (PST)
From: Stefano Stabellini
To: Peng Fan
Cc: hch@infradead.org, Stefano Stabellini, mst@redhat.com,
    jasowang@redhat.com, xen-devel@lists.xenproject.org,
    linux-remoteproc@vger.kernel.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, luto@kernel.org,
    jgross@suse.com, boris.ostrovsky@oracle.com, Andy Duan,
    bjorn.andersson@linaro.org, ohad@wizery.com
Subject: RE: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain
In-Reply-To:
Message-ID:
References: <20190121050056.14325-1-peng.fan@nxp.com>
 <20190123071232.GA20526@infradead.org>
 <20190123211405.GA4971@infradead.org>

On Fri, 25 Jan 2019, Peng Fan wrote:
> > On Wed, Jan 23, 2019 at 01:04:33PM -0800, Stefano Stabellini wrote:
> > > If vring_use_dma_api is actually supposed to return true when
> > > dma_dev->dma_mem is set, then both Peng's patch and the patch I wrote
> > > are not fixing the real issue here.
> > >
> > > I don't know enough about remoteproc to know where the problem
> > > actually lies though.
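As a point of reference for readers joining here: the check in question
is vring_use_dma_api() in drivers/virtio/virtio_ring.c, and the DMA
device it would inspect is vdev->dev.parent. A minimal sketch of the
hypothesis above, as a paraphrase rather than the actual RFC patch,
might read:

    /* Sketch only; dev->dma_mem is the per-device coherent pool and
     * exists when CONFIG_HAVE_GENERIC_DMA_COHERENT is enabled. */
    static bool vring_use_dma_api(struct virtio_device *vdev)
    {
            struct device *dma_dev = vdev->dev.parent;

            /* Hypothesis: a device with a per-device coherent pool
             * always goes through the DMA API... */
            if (dma_dev->dma_mem)
                    return true;

            /* ...and Xen guests keep using it unconditionally. */
            if (xen_domain())
                    return true;

            return false;
    }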
> >
> > The problem is the following:
> >
> > Devices can declare a specific memory region that they want to use
> > when the driver calls dma_alloc_coherent for the device; this is done
> > using the shared-dma-pool DT attribute, which comes in two variants
> > that would be a little too much to explain here.
> >
> > remoteproc makes use of that because apparently the device can only
> > communicate using that region. But it then feeds memory obtained with
> > dma_alloc_coherent back into the virtio code. For that it calls
> > vmalloc_to_page on the dma_alloc_coherent result, which is a huge
> > no-go for the DMA API and only worked accidentally on a few
> > platforms, and apparently arm64 just changed a few internals that
> > made it stop working for remoteproc.
> >
> > The right answer is to not use the DMA API to allocate memory from a
> > device-specific region, but to tie the driver directly into the DT
> > reserved memory API in a way that allows it to easily obtain a struct
> > device for it.
>
> I just have a question:
>
> Since vmalloc_to_page is OK for the CMA area, there is no need to take
> CMA and per-device CMA into consideration, right?
>
> We only need to implement a piece of code to handle the per-device
> specific region using RESERVEDMEM_OF_DECLARE, just like:
>
>     RESERVEDMEM_OF_DECLARE(rpmsg_dma, "rpmsg-dma-pool",
>                            rmem_rpmsg_dma_setup);
>
> then implement the device_init callback and build a map between pages
> and physical addresses. The scatterlist in the rpmsg driver could then
> use struct page directly, with no need for vmalloc_to_page for
> per-device DMA.
>
> Is this the right way?

I CC'ed the rpmsg maintainers; you'll want to keep them in the loop on
this.
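To make the problem described above concrete, this is the pattern being
objected to, boiled down to its essence (broken_sg_setup and its
arguments are placeholders for illustration, not code lifted from
remoteproc):

    #include <linux/dma-mapping.h>
    #include <linux/mm.h>
    #include <linux/scatterlist.h>
    #include <linux/vmalloc.h>

    /* Memory returned by dma_alloc_coherent() is not guaranteed to
     * have a kernel-linear or vmalloc mapping at all, so deriving
     * struct pages from the returned pointer is outside the DMA API
     * contract. */
    static int broken_sg_setup(struct device *dev,
                               struct scatterlist *sg, size_t size)
    {
            dma_addr_t dma;
            void *va = dma_alloc_coherent(dev, size, &dma, GFP_KERNEL);

            if (!va)
                    return -ENOMEM;

            sg_init_table(sg, 1);
            /* The no-go: treating a DMA cookie as vmalloc memory. */
            sg_set_page(sg, vmalloc_to_page(va), size,
                        offset_in_page(va));
            return 0;
    }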
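For reference, the skeleton Peng describes would look roughly like the
following; rmem_rpmsg_dma_setup and the "rpmsg-dma-pool" compatible
string are taken from his example, while the rest is a guess at one
possible shape rather than existing kernel code:

    #include <linux/device.h>
    #include <linux/of_reserved_mem.h>

    static int rpmsg_dma_device_init(struct reserved_mem *rmem,
                                     struct device *dev)
    {
            /* A real driver would record rmem->base and rmem->size in
             * its per-device state here; a physical address inside the
             * region can then be turned into a struct page with
             * pfn_to_page(PHYS_PFN(phys)) and fed to a scatterlist
             * directly, with no vmalloc_to_page() involved. */
            dev_info(dev, "assigned reserved region %pa..%pa\n",
                     &rmem->base, &rmem->size);
            return 0;
    }

    static const struct reserved_mem_ops rpmsg_dma_ops = {
            .device_init = rpmsg_dma_device_init,
    };

    static int __init rmem_rpmsg_dma_setup(struct reserved_mem *rmem)
    {
            rmem->ops = &rpmsg_dma_ops;
            return 0;
    }

    RESERVEDMEM_OF_DECLARE(rpmsg_dma, "rpmsg-dma-pool",
                           rmem_rpmsg_dma_setup);

The driver side would then call of_reserved_mem_device_init() on its
struct device at probe time, so that the device_init callback runs for
the region referenced by the device's memory-region property.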