Subject: Re: [PATCH] staging: android: ion: move map_kernel to ion_dma_buf_kmap
To: Laura Abbott
From: "Xiaqing (A)"
Date: Fri, 18 Jan 2019 09:51:36 +0800
Message-ID: <958e5a0a-533e-b0e0-6f0d-0a0ecfcf4e91@hisilicon.com>
In-Reply-To: <1f507556-890e-e7f4-f80e-49cf7084d78a@redhat.com>
References: <1545639585-24785-1-git-send-email-saberlily.xia@hisilicon.com> <1f507556-890e-e7f4-f80e-49cf7084d78a@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org
On 2019/1/3 6:36, Laura Abbott wrote:
> On 12/24/18 12:19 AM, Qing Xia wrote:
>> Now, per Google's user guide, if userspace needs to clean an ION
>> buffer's cache, it should call ioctl(fd, DMA_BUF_IOCTL_SYNC, sync).
>> We found that ion_dma_buf_begin_cpu_access/ion_dma_buf_end_cpu_access
>> also do the ION buffer's map_kernel, which is not necessary. And if
>> userspace only needs to clean the cache, it will reach
>> ion_dma_buf_end_cpu_access through dmabuf's ioctl, leaving the ION
>> buffer's kmap_cnt at the wrong value of -1, so drivers can no longer
>> get the right kernel vaddr from dma_buf_kmap.
>>
> The problem is that this subtly violates how dma_buf is supposed
> to work: all setup is supposed to happen in begin_cpu_access.
> I agree calling map_kernel each time isn't great, but I think
> this needs to be worked out with dma_buf.
>
> Thanks,
> Laura

Thanks for your explanation.

>> Signed-off-by: Qing Xia
>> ---
>>  drivers/staging/android/ion/ion.c | 46 ++++++++++++++++++---------------------
>>  1 file changed, 21 insertions(+), 25 deletions(-)
>>
>> diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
>> index 9907332..f7e2812 100644
>> --- a/drivers/staging/android/ion/ion.c
>> +++ b/drivers/staging/android/ion/ion.c
>> @@ -303,45 +303,47 @@ static void ion_dma_buf_release(struct dma_buf *dmabuf)
>>  static void *ion_dma_buf_kmap(struct dma_buf *dmabuf, unsigned long offset)
>>  {
>>      struct ion_buffer *buffer = dmabuf->priv;
>> +    void *vaddr;
>>
>> -    return buffer->vaddr + offset * PAGE_SIZE;
>> +    if (buffer->heap->ops->map_kernel) {
>> +        mutex_lock(&buffer->lock);
>> +        vaddr = ion_buffer_kmap_get(buffer);
>> +        mutex_unlock(&buffer->lock);
>> +        if (IS_ERR(vaddr))
>> +            return vaddr;
>> +
>> +        return vaddr + offset * PAGE_SIZE;
>> +    }
>> +
>> +    return NULL;
>>  }
>>
>>  static void ion_dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long offset,
>>                     void *ptr)
>>  {
>> +    struct ion_buffer *buffer = dmabuf->priv;
>> +
>> +    if (buffer->heap->ops->map_kernel) {
>> +        mutex_lock(&buffer->lock);
>> +        ion_buffer_kmap_put(buffer);
>> +        mutex_unlock(&buffer->lock);
>> +    }
>>  }
>>
>>  static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>                      enum dma_data_direction direction)
>>  {
>>      struct ion_buffer *buffer = dmabuf->priv;
>> -    void *vaddr;
>>      struct ion_dma_buf_attachment *a;
>> -    int ret = 0;
>> -
>> -    /*
>> -     * TODO: Move this elsewhere because we don't always need a vaddr
>> -     */
>> -    if (buffer->heap->ops->map_kernel) {
>> -        mutex_lock(&buffer->lock);
>> -        vaddr = ion_buffer_kmap_get(buffer);
>> -        if (IS_ERR(vaddr)) {
>> -            ret = PTR_ERR(vaddr);
>> -            goto unlock;
>> -        }
>> -        mutex_unlock(&buffer->lock);
>> -    }
>>
>>      mutex_lock(&buffer->lock);
>>      list_for_each_entry(a, &buffer->attachments, list) {
>>          dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
>>                      direction);
>>      }
>> -
>> -unlock:
>>      mutex_unlock(&buffer->lock);
>> -    return ret;
>> +
>> +    return 0;
>>  }
>>
>>  static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>> @@ -350,12 +352,6 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>>      struct ion_buffer *buffer = dmabuf->priv;
>>      struct ion_dma_buf_attachment *a;
>>
>> -    if (buffer->heap->ops->map_kernel) {
>> -        mutex_lock(&buffer->lock);
>> -        ion_buffer_kmap_put(buffer);
>> -        mutex_unlock(&buffer->lock);
>> -    }
>> -
>>      mutex_lock(&buffer->lock);
>>      list_for_each_entry(a, &buffer->attachments, list) {
>>          dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
>>                      direction);
>>      }