Date: Wed, 31 Jul 2019 15:27:35 +0530
From: Bharata B Rao
To: Christoph Hellwig
Cc: Jérôme Glisse, Jason Gunthorpe, Ben Skeggs, Ralph Campbell,
	Andrew Morton, linux-mm@kvack.org, nouveau@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 5/9] nouveau: simplify nouveau_dmem_migrate_to_ram
Reply-To: bharata@linux.ibm.com
References: <20190729142843.22320-1-hch@lst.de> <20190729142843.22320-6-hch@lst.de>
In-Reply-To: <20190729142843.22320-6-hch@lst.de>
Message-Id: <20190731095735.GB18807@in.ibm.com>

On Mon, Jul 29, 2019 at 05:28:39PM +0300, Christoph Hellwig wrote:
> Factor the main copy page to ram routine out into a helper that acts on
> a single page and which doesn't require the nouveau_dmem_fault
> structure for argument passing. Also remove the loop over multiple
> pages as we only handle one at the moment, although the structure of
> the main worker function makes it relatively easy to add multi page
> support back if needed in the future. But at least for now this avoids
> the need to dynamically allocate memory for the dma addresses in
> what is essentially the page fault path.
> 
> Signed-off-by: Christoph Hellwig
> ---
>  drivers/gpu/drm/nouveau/nouveau_dmem.c | 158 ++++++-------------------
>  1 file changed, 39 insertions(+), 119 deletions(-)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> index 21052a4aaf69..036e6c07d489 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> @@ -86,13 +86,6 @@ static inline struct nouveau_dmem *page_to_dmem(struct page *page)
>  	return container_of(page->pgmap, struct nouveau_dmem, pagemap);
>  }
>  
> -struct nouveau_dmem_fault {
> -	struct nouveau_drm *drm;
> -	struct nouveau_fence *fence;
> -	dma_addr_t *dma;
> -	unsigned long npages;
> -};
> -
>  struct nouveau_migrate {
>  	struct vm_area_struct *vma;
>  	struct nouveau_drm *drm;
> @@ -146,130 +139,55 @@ static void nouveau_dmem_fence_done(struct nouveau_fence **fence)
>  	}
>  }
>  
> -static void
> -nouveau_dmem_fault_alloc_and_copy(struct vm_area_struct *vma,
> -				  const unsigned long *src_pfns,
> -				  unsigned long *dst_pfns,
> -				  unsigned long start,
> -				  unsigned long end,
> -				  struct nouveau_dmem_fault *fault)
> +static vm_fault_t nouveau_dmem_fault_copy_one(struct nouveau_drm *drm,
> +		struct vm_area_struct *vma, unsigned long addr,
> +		unsigned long src, unsigned long *dst, dma_addr_t *dma_addr)
>  {
> -	struct nouveau_drm *drm = fault->drm;
>  	struct device *dev = drm->dev->dev;
> -	unsigned long addr, i, npages = 0;
> -	nouveau_migrate_copy_t copy;
> -	int ret;
> -
> -
> -	/* First allocate new memory */
> -	for (addr = start, i = 0; addr < end; addr += PAGE_SIZE, i++) {
> -		struct page *dpage, *spage;
> -
> -		dst_pfns[i] = 0;
> -		spage = migrate_pfn_to_page(src_pfns[i]);
> -		if (!spage || !(src_pfns[i] & MIGRATE_PFN_MIGRATE))
> -			continue;
> +	struct page *dpage, *spage;
>  
> -		dpage = alloc_page_vma(GFP_HIGHUSER, vma, addr);
> -		if (!dpage) {
> -			dst_pfns[i] = MIGRATE_PFN_ERROR;
> -			continue;
> -		}
> -		lock_page(dpage);
> -
> -		dst_pfns[i] = migrate_pfn(page_to_pfn(dpage)) |
> -			      MIGRATE_PFN_LOCKED;
> -		npages++;
> -	}
> +	spage = migrate_pfn_to_page(src);
> +	if (!spage || !(src & MIGRATE_PFN_MIGRATE))
> +		return 0;
>  
> -	/* Allocate storage for DMA addresses, so we can unmap later. */
> -	fault->dma = kmalloc(sizeof(*fault->dma) * npages, GFP_KERNEL);
> -	if (!fault->dma)
> +	dpage = alloc_page_vma(GFP_HIGHUSER, args->vma, addr);
> +	if (!dpage)
>  		goto error;
> +	lock_page(dpage);
>  
> -	/* Copy things over */
> -	copy = drm->dmem->migrate.copy_func;
> -	for (addr = start, i = 0; addr < end; addr += PAGE_SIZE, i++) {
> -		struct page *spage, *dpage;
> -
> -		dpage = migrate_pfn_to_page(dst_pfns[i]);
> -		if (!dpage || dst_pfns[i] == MIGRATE_PFN_ERROR)
> -			continue;
> -
> -		spage = migrate_pfn_to_page(src_pfns[i]);
> -		if (!spage || !(src_pfns[i] & MIGRATE_PFN_MIGRATE)) {
> -			dst_pfns[i] = MIGRATE_PFN_ERROR;
> -			__free_page(dpage);
> -			continue;
> -		}
> -
> -		fault->dma[fault->npages] =
> -			dma_map_page_attrs(dev, dpage, 0, PAGE_SIZE,
> -					   PCI_DMA_BIDIRECTIONAL,
> -					   DMA_ATTR_SKIP_CPU_SYNC);
> -		if (dma_mapping_error(dev, fault->dma[fault->npages])) {
> -			dst_pfns[i] = MIGRATE_PFN_ERROR;
> -			__free_page(dpage);
> -			continue;
> -		}
> -
> -		ret = copy(drm, 1, NOUVEAU_APER_HOST,
> -			   fault->dma[fault->npages++],
> -			   NOUVEAU_APER_VRAM,
> -			   nouveau_dmem_page_addr(spage));
> -		if (ret) {
> -			dst_pfns[i] = MIGRATE_PFN_ERROR;
> -			__free_page(dpage);
> -			continue;
> -		}
> -	}
> +	*dma_addr = dma_map_page(dev, dpage, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
> +	if (dma_mapping_error(dev, *dma_addr))
> +		goto error_free_page;
>  
> -	nouveau_fence_new(drm->dmem->migrate.chan, false, &fault->fence);
> +	if (drm->dmem->migrate.copy_func(drm, 1, NOUVEAU_APER_HOST, *dma_addr,
> +			NOUVEAU_APER_VRAM, nouveau_dmem_page_addr(spage)))
> +		goto error_dma_unmap;
>  
> -	return;
> +	*dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
>  
> +error_dma_unmap:
> +	dma_unmap_page(dev, *dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
> +error_free_page:
> +	__free_page(dpage);
>  error:
> -	for (addr = start, i = 0; addr < end; addr += PAGE_SIZE, ++i) {
> -		struct page *page;
> -
> -		if (!dst_pfns[i] || dst_pfns[i] == MIGRATE_PFN_ERROR)
> -			continue;
> -
> -		page = migrate_pfn_to_page(dst_pfns[i]);
> -		dst_pfns[i] = MIGRATE_PFN_ERROR;
> -		if (page == NULL)
> -			continue;
> -
> -		__free_page(page);
> -	}
> -}
> -
> -static void
> -nouveau_dmem_fault_finalize_and_map(struct nouveau_dmem_fault *fault)
> -{
> -	struct nouveau_drm *drm = fault->drm;
> -
> -	nouveau_dmem_fence_done(&fault->fence);
> -
> -	while (fault->npages--) {
> -		dma_unmap_page(drm->dev->dev, fault->dma[fault->npages],
> -			       PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
> -	}
> -	kfree(fault->dma);
> +	return VM_FAULT_SIGBUS;

Looks like nouveau_dmem_fault_copy_one() now returns VM_FAULT_SIGBUS even in
the success case, since after setting *dst the success path falls through
into the error labels. Is this expected?
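If that fall-through isn't intended, I guess the success path needs an
explicit return before the error labels, something along these lines
(untested, just to illustrate what I mean; the only change over your patch
is the added "return 0;"):

	*dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
	return 0;	/* success: don't fall into the error unwind below */

error_dma_unmap:
	dma_unmap_page(dev, *dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
error_free_page:
	__free_page(dpage);
error:
	return VM_FAULT_SIGBUS;

Regards,
Bharata.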