From: Chris Wilson <chris@chris-wilson.co.uk>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	Chris Wilson, Andrew Morton, Jan Kara, Jérôme Glisse,
	John Hubbard, Claudio Imbrenda, "Kirill A. Shutemov",
	Jason Gunthorpe
Subject: [PATCH] mm: Skip opportunistic reclaim for dma pinned pages
Date: Wed, 24 Jun 2020 20:14:17 +0100
Message-Id: <20200624191417.16735-1-chris@chris-wilson.co.uk>

A general rule of thumb is that shrinkers should be fast and effective.
They are called from direct reclaim at the most inconvenient of times,
when the caller is waiting for a page. If we attempt to reclaim a page
that is pinned for active dma [pin_user_pages()], we will incur far
greater latency than for a normal anonymous page mapped multiple times.
Worse, the page may be in use indefinitely by the HW and unable to be
reclaimed in a timely manner.

A side effect of the LRU shrinker not being dma-aware is that we will
often attempt to perform direct reclaim on the persistent group of dma
pages while continuing to use the dma HW (an issue as the HW may
already be actively waiting for the next user request), and even
attempt to reclaim a partially allocated dma object in order to satisfy
pinning the next user page for that object.

It is to be expected that such pages are made available for reclaim at
the end of the dma operation [unpin_user_pages()], and that truly
longterm pins are proactively recovered via device-specific shrinkers
[i.e. stop the HW, allow the pages to be returned to the system, and
then compete again for the memory].

Signed-off-by: Chris Wilson
Cc: Andrew Morton
Cc: Jan Kara
Cc: Jérôme Glisse
Cc: John Hubbard
Cc: Claudio Imbrenda
Cc: Kirill A. Shutemov
Cc: Jason Gunthorpe
---
This seems perhaps a little devious and overzealous. Is there a more
appropriate TTU flag? Would there be a way to limit its effect to, say,
FOLL_LONGTERM? Doing the migration first would seem sensible if we
disable opportunistic migration for the duration of the pin.
---
 mm/rmap.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/rmap.c b/mm/rmap.c
index 5fe2dedce1fc..374c6e65551b 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1393,6 +1393,22 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	    is_zone_device_page(page) && !is_device_private_page(page))
 		return true;
 
+	/*
+	 * Try and fail early to revoke a costly DMA pinned page.
+	 *
+	 * Reclaiming an active DMA page requires stopping the hardware
+	 * and flushing access. [Hardware that does support pagefaulting,
+	 * and so can quickly revoke DMA pages at any time, does not need
+	 * to pin the DMA page.] At worst, the page may be indefinitely in
+	 * use by the hardware. Even at best it will take far longer to
+	 * revoke the access via the mmu notifier, forcing that latency
+	 * onto our callers rather than the consumer of the HW. As we are
+	 * called during opportunistic direct reclaim, declare the
+	 * opportunity cost too high and ignore the page.
+	 */
+	if (page_maybe_dma_pinned(page))
+		return true;
+
 	if (flags & TTU_SPLIT_HUGE_PMD) {
 		split_huge_pmd_address(vma, address,
 				       flags & TTU_SPLIT_FREEZE, page);
-- 
2.20.1
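
[Illustrative note, not part of the patch.] For reference, the
driver-side pin/unpin lifecycle that the commit message assumes looks
roughly like the sketch below. The struct and the my_dma_*() helpers
are hypothetical; only pin_user_pages_fast(), unpin_user_pages() and
page_maybe_dma_pinned() are real kernel interfaces referenced above.
While the pages are pinned, page_maybe_dma_pinned() reports true and,
with this patch, try_to_unmap_one() leaves them alone during
opportunistic direct reclaim; once unpin_user_pages() runs, they become
ordinary reclaim candidates again.

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/slab.h>

struct my_dma_buffer {		/* hypothetical driver bookkeeping */
	struct page **pages;
	int nr_pages;
};

/* Pin a user buffer before handing its pages to the HW for dma. */
static int my_dma_pin_user_buffer(struct my_dma_buffer *buf,
				  unsigned long uaddr, int nr_pages)
{
	int pinned;

	buf->pages = kvmalloc_array(nr_pages, sizeof(*buf->pages),
				    GFP_KERNEL);
	if (!buf->pages)
		return -ENOMEM;

	/*
	 * From here until unpin, page_maybe_dma_pinned() is true for
	 * these pages, so opportunistic reclaim skips them.
	 */
	pinned = pin_user_pages_fast(uaddr, nr_pages,
				     FOLL_WRITE | FOLL_LONGTERM,
				     buf->pages);
	if (pinned < 0) {
		kvfree(buf->pages);
		return pinned;
	}
	if (pinned < nr_pages) {
		unpin_user_pages(buf->pages, pinned);
		kvfree(buf->pages);
		return -EFAULT;
	}

	buf->nr_pages = pinned;
	return 0;
}

/* Called once the dma operation has completed and the HW is idle. */
static void my_dma_unpin_user_buffer(struct my_dma_buffer *buf)
{
	/* Dropping the pin makes the pages eligible for reclaim again. */
	unpin_user_pages(buf->pages, buf->nr_pages);
	kvfree(buf->pages);
	buf->pages = NULL;
	buf->nr_pages = 0;
}

A device-specific shrinker, as suggested above, would sit on top of
this: stop the HW, call the unpin path, and let the pages return to the
system before competing for the memory again.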