From: Daniel Mentz
Date: Wed, 27 Jan 2021 23:04:15 -0800
Subject: Re: [PATCH v2 2/3] dma-buf: system_heap: Add pagepool support to system heap
To: John Stultz
Cc: lkml, Daniel Vetter, Sumit Semwal, Liam Mark, Chris Goldsworthy,
    Laura Abbott, Brian Starkey, Hridya Valsaraju, Suren Baghdasaryan,
    Sandeep Patil, Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser,
    James Jones, linux-media, dri-devel
References: <20210123034655.102813-1-john.stultz@linaro.org> <20210123034655.102813-2-john.stultz@linaro.org>
List-ID: linux-kernel@vger.kernel.org

On Wed, Jan 27, 2021 at 9:10 PM John Stultz wrote:
>
> On Wed, Jan 27, 2021 at 12:21 PM Daniel Mentz wrote:
> >
> > On Fri, Jan 22, 2021 at 7:47 PM John Stultz wrote:
> > > +static int system_heap_clear_pages(struct page **pages, int num, pgprot_t pgprot)
> > > +{
> > > +       void *addr = vmap(pages, num, VM_MAP, pgprot);
> > > +
> > > +       if (!addr)
> > > +               return -ENOMEM;
> > > +       memset(addr, 0, PAGE_SIZE * num);
> > > +       vunmap(addr);
> > > +       return 0;
> > > +}
> >
> > I thought that vmap/vunmap are expensive, and I am wondering if
> > there's a faster way that avoids vmap.
> >
> > How about lifting this code from lib/iov_iter.c:
> >
> > static void memzero_page(struct page *page, size_t offset, size_t len)
> > {
> >         char *addr = kmap_atomic(page);
> >         memset(addr + offset, 0, len);
> >         kunmap_atomic(addr);
> > }
> >
> > Or what about lifting that code from the old ion_cma_heap.c:
> >
> > if (PageHighMem(pages)) {
> >         unsigned long nr_clear_pages = nr_pages;
> >         struct page *page = pages;
> >
> >         while (nr_clear_pages > 0) {
> >                 void *vaddr = kmap_atomic(page);
> >
> >                 memset(vaddr, 0, PAGE_SIZE);
> >                 kunmap_atomic(vaddr);
> >                 page++;
> >                 nr_clear_pages--;
> >         }
> > } else {
> >         memset(page_address(pages), 0, size);
> > }
>
> Though, this last memset only works since CMA is contiguous, so it
> probably needs to always do the kmap_atomic for each page, right?

Yeah, but with the system heap page pool, some of these pages might be
64KB or 1MB large. kmap_atomic(page) just maps to page_address(page) in
most cases. I think iterating over all pages individually in this
manner might still be faster than using vmap.

>
> I'm still a little worried if this is right, as the current
> implementation with the vmap comes from the old ion_heap_sglist_zero
> logic, which similarly tries to batch the vmaps 32 pages at a time,
> but I'll give it a try.
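[Editor's note: for illustration, the per-page approach being discussed
might look roughly like the sketch below. This is an uncompiled sketch,
not code from the patch under review; the name system_heap_zero_pages
and its signature are hypothetical.]

/*
 * Sketch: clear each page through a short-lived kmap_atomic() mapping
 * instead of batching pages into one vmap(). On !CONFIG_HIGHMEM
 * configurations kmap_atomic() resolves to page_address(), so the
 * loop degenerates to plain per-page memsets with no mapping cost.
 */
static void system_heap_zero_pages(struct page **pages, int num)
{
        int i;

        for (i = 0; i < num; i++) {
                void *vaddr = kmap_atomic(pages[i]);

                memset(vaddr, 0, PAGE_SIZE);
                kunmap_atomic(vaddr);
        }
}

Note that for higher-order allocations (the 64KB/1MB pages mentioned
above), each struct page in the array would cover multiple base pages,
so the loop would additionally need to walk the subpages of each
compound page before clearing.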