Date: Thu, 28 Jun 2018 11:17:43 +0200
From: Jan Kara
To: John Hubbard
Cc: Jan Kara, Jason Gunthorpe, Michal Hocko, Dan Williams,
	Christoph Hellwig, John Hubbard, Matthew Wilcox,
	Christopher Lameter, Linux MM, LKML, linux-rdma
Subject: Re: [PATCH 2/2] mm: set PG_dma_pinned on get_user_pages*()
Message-ID: <20180628091743.khhta7nafuwstd3m@quack2.suse.cz>
On Wed 27-06-18 19:42:01, John Hubbard wrote:
> On 06/27/2018 10:02 AM, Jan Kara wrote:
> > On Wed 27-06-18 08:57:18, Jason Gunthorpe wrote:
> >> On Wed, Jun 27, 2018 at 02:42:55PM +0200, Jan Kara wrote:
> >>> On Wed 27-06-18 13:59:27, Michal Hocko wrote:
> >>>> On Wed 27-06-18 13:53:49, Jan Kara wrote:
> >>>>> On Wed 27-06-18 13:32:21, Michal Hocko wrote:
> >>>> [...]
> >>>>>> Apart from that, do we really care about 32b here? Big DIO and IB
> >>>>>> users seem to be 64b only AFAIU.
> >>>>>
> >>>>> IMO it is a bad habit to leave an unprivileged-user-triggerable oops
> >>>>> in the kernel, even for uncommon platforms...
> >>>>
> >>>> Absolutely agreed! I didn't mean to keep the blow-up for 32b. I just
> >>>> wanted to say that we can stay with a simple solution for 32b. I
> >>>> thought g-u-p-longterm had plugged the most obvious breakage already.
> >>>> But maybe I just misunderstood.
> >>>
> >>> Mostly yes, but if you try hard enough, you can still trigger the oops,
> >>> e.g. with appropriately set up direct IO racing with writeback /
> >>> reclaim.
> >>
> >> gup longterm is only different from normal gup if you have DAX, and few
> >> people do, which really means it doesn't help at all... AFAIK?
> > 
> > Right, what I wrote works only for DAX. For the non-DAX situation,
> > g-u-p longterm does not currently help at all. Sorry for the confusion.
> 
> OK, I've got an early version of this up and running, reusing the
> page->lru fields. I'll clean it up, do some heavier testing, and post it
> as a PATCH v2.

Cool.
> One question though: I'm still vague on the best actions to take in the
> following functions:
> 
>     page_mkclean_one
>     try_to_unmap_one
> 
> At the moment, they are both just doing an evil little early-out:
> 
>     if (PageDmaPinned(page))
>         return false;
> 
> ...but we talked about maybe waiting for the condition to clear, instead?
> Thoughts?

What needs to happen in page_mkclean() depends on the caller. Most of the
callers really need to be sure the page is write-protected once
page_mkclean() returns. Those are:

  pagecache_isize_extended()
  fb_deferred_io_work()
  clear_page_dirty_for_io(), if called for data-integrity writeback - which
    is currently known only in its caller (e.g. write_cache_pages()), where
    it can be determined as wbc->sync_mode == WB_SYNC_ALL. Getting this
    information into page_mkclean() will require some plumbing, and
    clear_page_dirty_for_io() has some 50 callers, but it's doable.

clear_page_dirty_for_io() for cleaning writeback (wbc->sync_mode !=
WB_SYNC_ALL) can just skip pinned pages, and we probably need to do that,
as otherwise memory cleaning would get stuck on pinned pages until RDMA
drivers release their pins.

> And if so, does it sound reasonable to refactor wait_on_page_bit_common(),
> so that it learns how to wait for a bit that, while inside struct page, is
> not within page->flags?

wait_on_page_bit_common() and the associated wait queue handling are a fast
path, so we should not make them slower for such a special case as waiting
for a DMA pin. OTOH we could probably refactor most of the infrastructure
to take a pointer to the word of flags instead of a pointer to the page.
I'm not sure what the result would look like, but it's probably worth a
try.

								Honza
-- 
Jan Kara
SUSE Labs, CR
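P.S. The data-integrity vs. cleaning-writeback split above can be modeled
as a tiny userspace sketch. This is only an illustration of the decision
being discussed, not the kernel's actual API: struct page_model,
should_skip_writeback(), and the dma_pinned field are hypothetical
stand-ins; WB_SYNC_ALL / WB_SYNC_NONE and PG_dma_pinned are the names from
the thread.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for kernel types discussed in the thread. */
enum wb_sync_mode { WB_SYNC_NONE, WB_SYNC_ALL };

struct page_model {
	bool dirty;
	bool dma_pinned;	/* models PG_dma_pinned set by get_user_pages*() */
};

/*
 * The decision sketched above: data-integrity writeback (WB_SYNC_ALL)
 * must not skip the page - it has to be write-protected before we
 * return. Cleaning writeback (WB_SYNC_NONE) may simply skip pinned
 * pages, so memory cleaning does not get stuck behind long-term RDMA
 * pins.
 */
bool should_skip_writeback(const struct page_model *page,
			   enum wb_sync_mode sync_mode)
{
	if (page->dma_pinned && sync_mode == WB_SYNC_NONE)
		return true;	/* cleaning writeback: skip pinned page */
	return false;		/* data-integrity callers must not skip */
}
```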
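P.P.S. The refactoring idea - wait helpers that take a pointer to an
arbitrary word of flags rather than assuming page->flags - could look
roughly like the sketch below. All names here are hypothetical; a
sched_yield() spin stands in for the kernel's hashed wait queues, which
would sleep and be woken instead.

```c
#include <sched.h>
#include <stdatomic.h>

/* Hypothetical helpers parameterized by a flags word, not a page. */
static int test_flags_bit(_Atomic unsigned long *flags, int bit)
{
	return (atomic_load(flags) >> bit) & 1;
}

/*
 * Wait until the given bit in *flags clears. The real kernel code
 * would sleep on a wait queue keyed by the word's address; spinning
 * with sched_yield() is just a userspace stand-in.
 */
void wait_on_flags_bit(_Atomic unsigned long *flags, int bit)
{
	while (test_flags_bit(flags, bit))
		sched_yield();
}

/* Clear the bit; the kernel version would wake_up() waiters here. */
void clear_flags_bit(_Atomic unsigned long *flags, int bit)
{
	atomic_fetch_and(flags, ~(1UL << bit));
}
```

Because the helpers only see a `flags` pointer and a bit number, the same
code would serve page->flags and a DMA-pin word kept elsewhere in struct
page.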