Subject: Re: [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory
To: Liam Mark
From: "Andrew F. Davis"
Date: Fri, 18 Jan 2019 14:20:41 -0600
Message-ID: <69b18f39-8ce0-3c4d-3528-dfab8399f24f@ti.com>
In-Reply-To: <1547836667-13695-3-git-send-email-lmark@codeaurora.org>
References: <1547836667-13695-1-git-send-email-lmark@codeaurora.org>
 <1547836667-13695-3-git-send-email-lmark@codeaurora.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 1/18/19 12:37 PM, Liam Mark wrote:
> The ION begin_cpu_access and end_cpu_access functions use the
> dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> maintenance.
> 
> Currently it is possible to apply cache maintenance, via the
> begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> dma mapped.
> 
> The dma sync sg APIs should not be called on sg lists which have not
> been dma mapped, as this can result in cache maintenance being applied
> to the wrong address. If an sg list has not been dma mapped then its
> dma_address field has not been populated; some dma ops, such as the
> swiotlb_dma_ops, use the dma_address field to calculate the address
> onto which to apply cache maintenance.
> 
> Also I don’t think we want CMOs to be applied to a buffer which is not
> dma mapped, as the memory should already be coherent for access from
> the CPU. Any CMOs required for device access are taken care of in the
> dma_buf_map_attachment and dma_buf_unmap_attachment calls.
> So really it only makes sense for begin_cpu_access and end_cpu_access
> to apply CMOs if the buffer is dma mapped.
> 
> Fix the ION begin_cpu_access and end_cpu_access functions to only apply
> cache maintenance to buffers which are dma mapped.
> 
> Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and mapping")
> Signed-off-by: Liam Mark
> ---
>  drivers/staging/android/ion/ion.c | 26 +++++++++++++++++++++-----
>  1 file changed, 21 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
> index 6f5afab7c1a1..1fe633a7fdba 100644
> --- a/drivers/staging/android/ion/ion.c
> +++ b/drivers/staging/android/ion/ion.c
> @@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
>  	struct device *dev;
>  	struct sg_table *table;
>  	struct list_head list;
> +	bool dma_mapped;
>  };
>  
>  static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> @@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
>  
>  	a->table = table;
>  	a->dev = attachment->dev;
> +	a->dma_mapped = false;
>  	INIT_LIST_HEAD(&a->list);
>  
>  	attachment->priv = a;
> @@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
>  {
>  	struct ion_dma_buf_attachment *a = attachment->priv;
>  	struct sg_table *table;
> +	struct ion_buffer *buffer = attachment->dmabuf->priv;
>  
>  	table = a->table;
>  
> +	mutex_lock(&buffer->lock);
>  	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> -			direction))
> +			direction)) {
> +		mutex_unlock(&buffer->lock);
>  		return ERR_PTR(-ENOMEM);
> +	}
> +	a->dma_mapped = true;
> +	mutex_unlock(&buffer->lock);
>  
>  	return table;
>  }
> @@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
>  			      struct sg_table *table,
>  			      enum dma_data_direction direction)
>  {
> +	struct ion_dma_buf_attachment *a = attachment->priv;
> +	struct ion_buffer *buffer = attachment->dmabuf->priv;
> +
> +	mutex_lock(&buffer->lock);
>  	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> +	a->dma_mapped = false;
> +	mutex_unlock(&buffer->lock);
>  }
>  
>  static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> @@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>  
>  	mutex_lock(&buffer->lock);
>  	list_for_each_entry(a, &buffer->attachments, list) {

When no devices are attached then buffer->attachments is empty and the
loop below does not run, so if I understand this patch correctly, what
you are protecting against is CPU access in the window after
dma_buf_attach but before dma_buf_map.

This is the kind of thing that again makes me think a couple more
ordering requirements on DMA-BUF ops are needed.
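To make that window concrete, here is a hypothetical importer sequence
(the dma-buf calls are the real kernel APIs; the driver code around
them is made up for illustration):

#include <linux/dma-buf.h>

/* Hypothetical importer, sketching the attached-but-not-yet-mapped window. */
static int cpu_access_before_map(struct device *dev, int fd)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *att;
	struct sg_table *sgt;

	dmabuf = dma_buf_get(fd);		/* buffer exported by ION */
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	att = dma_buf_attach(dmabuf, dev);	/* attached, not yet mapped */
	if (IS_ERR(att)) {
		dma_buf_put(dmabuf);
		return PTR_ERR(att);
	}

	/*
	 * The window: dma_map_sg() has not run, so the attachment's sg
	 * list carries no valid dma_address.  Before this patch,
	 * begin_cpu_access still issued dma_sync_sg_for_cpu() here, and
	 * ops like swiotlb could apply maintenance to the wrong address.
	 */
	dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	/* ... CPU reads/writes the buffer ... */
	dma_buf_end_cpu_access(dmabuf, DMA_TO_DEVICE);

	/* Only now do the sg entries get valid dma_address values. */
	sgt = dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);
	if (!IS_ERR(sgt))
		dma_buf_unmap_attachment(att, sgt, DMA_BIDIRECTIONAL);

	dma_buf_detach(dmabuf, att);
	dma_buf_put(dmabuf);
	return 0;
}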
DMA-BUFs do not require the backing memory to be allocated until map
time; this is why the dma_address field would still be null, as you
note in the commit message. So why should the CPU be performing
accesses on a buffer that is not actually backed yet?

I can think of two solutions:

1) Only allow CPU access (mmap, kmap, {begin,end}_cpu_access) while at
least one device is mapped.

2) Treat the CPU access request like a device map request and trigger
the allocation of backing memory just like if a device map had come in.

I know the current Ion heaps (and most other DMA-BUF exporters) all do
the allocation up front so the memory is already there, but DMA-BUF was
designed with late allocation in mind. I have a use-case I'm working on
that finally exercises this DMA-BUF functionality and I would like to
have it export through ION. This patch doesn't prevent that, but it
seems like it is endorsing the idea that buffers always need to be
backed, even before device attach/map has occurred.

Either of the above two solutions would need to target the DMA-BUF
framework. Sumit, any comment? (A rough sketch of what option 1) could
look like in ION follows at the end of this mail.)

Thanks,
Andrew

> -		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
> -				    direction);
> +		if (a->dma_mapped)
> +			dma_sync_sg_for_cpu(a->dev, a->table->sgl,
> +					    a->table->nents, direction);
>  	}
>  
>  unlock:
> @@ -369,8 +384,9 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>  
>  	mutex_lock(&buffer->lock);
>  	list_for_each_entry(a, &buffer->attachments, list) {
> -		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
> -				       direction);
> +		if (a->dma_mapped)
> +			dma_sync_sg_for_device(a->dev, a->table->sgl,
> +					       a->table->nents, direction);
>  	}
>  	mutex_unlock(&buffer->lock);
> 
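As a strawman, here is a minimal sketch of what option 1) could look
like in the ION exporter, reusing the dma_mapped flag this patch adds
(the helper name and the error code are made up, not part of the patch
under review):

/*
 * Hypothetical helper: true if any attachment currently holds a DMA
 * mapping.  Caller must hold buffer->lock.
 */
static bool ion_buffer_has_dma_mapping(struct ion_buffer *buffer)
{
	struct ion_dma_buf_attachment *a;

	list_for_each_entry(a, &buffer->attachments, list)
		if (a->dma_mapped)
			return true;

	return false;
}

ion_dma_buf_begin_cpu_access() and ion_dma_buf_end_cpu_access() could
then return, say, -EINVAL when this is false, instead of silently
skipping the sync as the patch does now.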