From: Suren Baghdasaryan
Date: Wed, 3 Feb 2021 11:53:38 -0800
Subject: Re: [PATCH v2 2/2] dma-buf: heaps: Map system heap pages as managed by linux vm
To: Christian König
Cc: Minchan Kim, Sumit Semwal, Andrew Morton, Christoph Hellwig, Liam Mark,
    Laura Abbott, Brian Starkey, John Stultz, Chris Goldsworthy, Ørjan Eide,
    Robin Murphy, James Jones, Hridya Valsaraju, Sandeep Patil, linux-media,
    DRI mailing list, "moderated list:DMA BUFFER SHARING FRAMEWORK",
    linux-mm, LKML, kernel-team
In-Reply-To: <1ea3d79a-2413-bba5-147e-e24df3f91ce0@amd.com>
References: <20210203003134.2422308-1-surenb@google.com>
 <20210203003134.2422308-2-surenb@google.com>
 <1ea3d79a-2413-bba5-147e-e24df3f91ce0@amd.com>
On Wed, Feb 3, 2021 at 12:06 AM Christian König wrote:
>
> On 03.02.21 at 03:02, Suren Baghdasaryan wrote:
> > On Tue, Feb 2, 2021 at 5:39 PM Minchan Kim wrote:
> >> On Tue, Feb 02, 2021 at 04:31:34PM -0800, Suren Baghdasaryan wrote:
> >>> Currently the system heap maps its buffers with the VM_PFNMAP flag
> >>> using remap_pfn_range. This results in such buffers not being
> >>> accounted for in PSS calculations because the vm treats this memory
> >>> as having no page structs. Without page structs there are no
> >>> counters representing how many processes are mapping a page, and
> >>> therefore PSS calculation is impossible.
> >>> Historically, the ION driver used to map its buffers as VM_PFNMAP
> >>> areas due to memory carveouts that did not have page structs [1].
> >>> That is not the case anymore, and it seems there was a desire to
> >>> move away from remap_pfn_range [2].
> >>> The dmabuf system heap design inherits this ION behavior and maps
> >>> its pages using remap_pfn_range even though the allocated pages are
> >>> backed by page structs.
> >>> Replace remap_pfn_range with vm_insert_page, following Laura's
> >>> suggestion in [1]. This would allow correct PSS calculation for
> >>> dmabufs.
> >>>
> >>> [1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
> >>> [2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
> >>> (sorry, could not find lore links for these discussions)
> >>>
> >>> Suggested-by: Laura Abbott
> >>> Signed-off-by: Suren Baghdasaryan
> >>
> >> Reviewed-by: Minchan Kim
> >>
> >> A note: this patch makes dmabuf system heap memory accounted as PSS,
> >> so if someone relies on the size, they will see the bloat.
> >> IIRC, there was some debate whether PSS accounting for their
> >> buffer is correct or not. If it turns out to be a problem, we need to
> >> discuss how to solve it (maybe via vma->vm_flags, and reintroduce
> >> remap_pfn_range for them to be respected).
> >
> > I did not see debates about not including *mapped* dmabufs in the PSS
> > calculation. I remember people were discussing how to account dmabufs
> > referred to only by an FD, but that is a different discussion. If the
> > buffer is mapped into the address space of a process, then IMHO
> > including it in the PSS of that process is not controversial.
>
> Well, I think it is. And to be honest this doesn't look like a good
> idea to me, since it will eventually lead to double accounting of
> system heap DMA-bufs.

Thanks for the comment! Could you please expand on this double accounting
issue? Do you mean userspace could double account dmabufs because it
expects dmabufs not to be part of PSS, or is there some in-kernel
accounting mechanism that would be broken by this?

>
> As discussed multiple times, it is illegal to use the struct page of a
> DMA-buf. This case here is a bit special since it is the owner of the
> pages which does that, but I'm not sure this won't cause problems
> elsewhere as well.

I would be happy to keep things as they are, but calculating the dmabuf
contribution to PSS without struct pages is extremely inefficient, and it
becomes a real pain once we consider partial mappings, where not the
entire dmabuf is mapped. Calculating this would require parsing
/proc/pid/maps for the process, finding the dmabuf mappings and the size
of each one, then parsing /proc/pid/maps for ALL processes in the system
to see whether the same dmabufs are used by other processes, and only
then calculating the PSS. I hope that explains the desire to use the
already existing struct pages to obtain PSS in a much more efficient way.
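For readers following the thread, the change being debated boils down to
the difference between the two mmap strategies sketched below. This is a
minimal illustration, not the actual patch: the buffer structure and the
helper names (my_heap_buffer, my_heap_mmap_*) are made up and most error
and flag handling is elided; only remap_pfn_range(), vm_insert_page() and
page_to_pfn() are real kernel interfaces.

#include <linux/mm.h>

/* Illustrative stand-in for a heap buffer backed by order-0 pages. */
struct my_heap_buffer {
	struct page **pages;
	unsigned long pagecount;
};

/*
 * Old approach: map raw PFNs. remap_pfn_range() marks the vma VM_PFNMAP,
 * so the mapped memory has no per-page mapping counts and stays invisible
 * to the PSS accounting in /proc/<pid>/smaps.
 */
static int my_heap_mmap_pfnmap(struct my_heap_buffer *buf,
			       struct vm_area_struct *vma)
{
	unsigned long addr = vma->vm_start;
	unsigned long i;
	int ret;

	for (i = vma->vm_pgoff; i < buf->pagecount && addr < vma->vm_end; i++) {
		ret = remap_pfn_range(vma, addr, page_to_pfn(buf->pages[i]),
				      PAGE_SIZE, vma->vm_page_prot);
		if (ret)
			return ret;
		addr += PAGE_SIZE;
	}
	return 0;
}

/*
 * New approach: insert the struct pages themselves. The vm then keeps the
 * usual mapcount bookkeeping for them, which is what per-process PSS is
 * derived from.
 */
static int my_heap_mmap_insert(struct my_heap_buffer *buf,
			       struct vm_area_struct *vma)
{
	unsigned long addr = vma->vm_start;
	unsigned long i;
	int ret;

	for (i = vma->vm_pgoff; i < buf->pagecount && addr < vma->vm_end; i++) {
		ret = vm_insert_page(vma, addr, buf->pages[i]);
		if (ret)
			return ret;
		addr += PAGE_SIZE;
	}
	return 0;
}

Note that vm_insert_page() requires the pages to have struct pages, which
is exactly the property the commit message says the old carveout-era ION
buffers lacked.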
>
> A more appropriate solution would be to hold processes accountable for
> resources they have allocated through device drivers.

Are you suggesting some new kernel mechanism to account resources
allocated by a process via a driver? If so, any details?

> Regards,
> Christian.
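For completeness, the kind of cross-process scan described earlier in this
mail (walking /proc/<pid>/maps for every process just to find dmabuf
mappings) would look roughly like the userspace sketch below. It shows
only the skeleton of the walk: how a dmabuf mapping is named in the maps
file varies with kernel version, so the "dmabuf" string match is an
assumption, and the hard parts (matching the same buffer across processes
and pro-rating partially mapped ranges) are not shown.

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Print every maps line that looks like a dmabuf mapping for one pid. */
static void scan_pid_maps(const char *pid)
{
	char path[64], line[1024];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%s/maps", pid);
	f = fopen(path, "r");
	if (!f)
		return;

	while (fgets(line, sizeof(line), f)) {
		/* Assumed: the mapping's name mentions "dmabuf". */
		if (strstr(line, "dmabuf"))
			printf("%s: %s", pid, line);
	}
	fclose(f);
}

int main(void)
{
	DIR *proc = opendir("/proc");
	struct dirent *de;

	if (!proc)
		return 1;

	/* Every numeric directory under /proc is a process to scan. */
	while ((de = readdir(proc)) != NULL) {
		if (strspn(de->d_name, "0123456789") == strlen(de->d_name))
			scan_pid_maps(de->d_name);
	}
	closedir(proc);
	return 0;
}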