From: Dan Williams
Date: Sun, 30 Jun 2019 01:01:04 -0700
Subject: Re: [PATCH] filesystem-dax: Disable PMD support
To: Matthew Wilcox
Cc: Seema Pandit, linux-nvdimm, Linux Kernel Mailing List, stable,
    Robert Barror, linux-fsdevel, Jan Kara
References: <156159454541.2964018.7466991316059381921.stgit@dwillia2-desk3.amr.corp.intel.com>
    <20190627123415.GA4286@bombadil.infradead.org>
    <20190627195948.GB4286@bombadil.infradead.org>
    <20190629160336.GB1180@bombadil.infradead.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, Jun 30, 2019 at 12:27 AM Dan Williams wrote:
>
> On Sat, Jun 29, 2019 at 9:03 AM Matthew Wilcox wrote:
> >
> > On Thu, Jun 27, 2019 at 07:39:37PM -0700, Dan Williams wrote:
> > > On Thu, Jun 27, 2019 at 12:59 PM Matthew Wilcox wrote:
> > > >
> > > > On Thu, Jun 27, 2019 at 12:09:29PM -0700, Dan Williams wrote:
> > > > > > This bug feels like we failed to unlock, or unlocked the wrong entry,
> > > > > > and this hunk in the bisected commit looks suspect to me. Why do we
> > > > > > still need to drop the lock now that the radix_tree_preload() calls
> > > > > > are gone?
> > > > >
> > > > > Nevermind, unmap_mapping_pages() takes a sleeping lock, but then I
> > > > > wonder why we don't restart the lookup like the old implementation.
> > > >
> > > > We have the entry locked:
> > > >
> > > >         /*
> > > >          * Make sure 'entry' remains valid while we drop
> > > >          * the i_pages lock.
> > > >          */
> > > >         dax_lock_entry(xas, entry);
> > > >
> > > >         /*
> > > >          * Besides huge zero pages the only other thing that gets
> > > >          * downgraded are empty entries which don't need to be
> > > >          * unmapped.
> > > >          */
> > > >         if (dax_is_zero_entry(entry)) {
> > > >                 xas_unlock_irq(xas);
> > > >                 unmap_mapping_pages(mapping,
> > > >                                 xas->xa_index & ~PG_PMD_COLOUR,
> > > >                                 PG_PMD_NR, false);
> > > >                 xas_reset(xas);
> > > >                 xas_lock_irq(xas);
> > > >         }
> > > >
> > > > If something can remove a locked entry, then that would seem like the
> > > > real bug. Might be worth inserting a lookup there to make sure that it
> > > > hasn't happened, I suppose?
> > >
> > > Nope, added a check, we do in fact get the same locked entry back
> > > after dropping the lock.
> > >
> > > The deadlock revolves around the mmap_sem. One thread holds it for
> > > read and then gets stuck indefinitely in get_unlocked_entry(). Once
> > > that happens, another rocksdb thread tries to mmap and gets stuck
> > > trying to take the mmap_sem for write. All new readers, including
> > > ps and top that try to access a remote vma, then get queued behind
> > > that write.
> > >
> > > It could also be the case that we're missing a wake up.
> >
> > OK, I have a theory.
> >
> > get_unlocked_entry() doesn't check the size of the entry being waited for.
> > So dax_iomap_pmd_fault() can end up sleeping waiting for a PTE entry,
> > which is (a) foolish, because we know it's going to fall back, and (b)
> > can lead to a missed wakeup because it's going to sleep waiting for
> > the PMD entry to come unlocked. Which it won't, unless there's a happy
> > accident that happens to map to the same hash bucket.
> >
> > Let's see if I can steal some time this weekend to whip up a patch.
>
> The theory seems to have some evidence... I instrumented fs/dax.c to track
> outstanding 'lock' entries and 'wait' events. At the time of the hang
> we see no locks held and the waiter is waiting on a pmd entry:
>
> [ 4001.354334] fs/dax locked entries: 0
> [ 4001.358425] fs/dax wait entries: 1
> [ 4001.362227] db_bench/2445 index: 0x0 shift: 6
> [ 4001.367099] grab_mapping_entry+0x17a/0x260
> [ 4001.371773] dax_iomap_pmd_fault.isra.43+0x168/0x7a0
> [ 4001.377316] ext4_dax_huge_fault+0x16f/0x1f0
> [ 4001.382086] __handle_mm_fault+0x411/0x1390
> [ 4001.386756] handle_mm_fault+0x172/0x360

In fact, this naive fix is holding up so far:

@@ -215,7 +216,7 @@ static wait_queue_head_t *dax_entry_waitqueue(struct xa_state *xas,
          * queue to the start of that PMD. This ensures that all offsets in
          * the range covered by the PMD map to the same bit lock.
          */
-        if (dax_is_pmd_entry(entry))
+        //if (dax_is_pmd_entry(entry))
                 index &= ~PG_PMD_COLOUR;
         key->xa = xas->xa;
         key->entry_start = index;
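
For reference, here is a sketch of the whole dax_entry_waitqueue() helper that
the hunk above touches, paraphrased from memory of fs/dax.c around this kernel;
the table size, hash step, and struct exceptional_entry_key layout are
reconstructed and may differ in detail from any given tree:

/*
 * Sketch of the wait-queue selection in fs/dax.c -- for discussion only,
 * not a verbatim copy of any particular kernel version.
 */
#define DAX_WAIT_TABLE_BITS     12
#define DAX_WAIT_TABLE          (1 << DAX_WAIT_TABLE_BITS)

static wait_queue_head_t wait_table[DAX_WAIT_TABLE];

struct exceptional_entry_key {
        struct xarray *xa;
        pgoff_t entry_start;
};

static wait_queue_head_t *dax_entry_waitqueue(struct xa_state *xas,
                void *entry, struct exceptional_entry_key *key)
{
        unsigned long hash;
        unsigned long index = xas->xa_index;

        /*
         * If 'entry' is a PMD, align the 'index' that we use for the wait
         * queue to the start of that PMD. This ensures that all offsets in
         * the range covered by the PMD map to the same bit lock.
         */
        if (dax_is_pmd_entry(entry))
                index &= ~PG_PMD_COLOUR;
        key->xa = xas->xa;
        key->entry_start = index;

        /*
         * Both the sleeper (get_unlocked_entry()) and the waker
         * (dax_wake_entry() on unlock) derive the bucket and the wake key
         * from this function, so they only find each other if both sides
         * compute the same entry_start and hash.
         */
        hash = hash_long((unsigned long)xas->xa + index, DAX_WAIT_TABLE_BITS);
        return wait_table + hash;
}

If that sketch is right, it lines up with the missed-wakeup theory above: the
PMD fault path and the thread that eventually unlocks the conflicting entry can
compute different entry_start values (and hence different buckets) for the same
conflict, so the wake_up only reaches the sleeper on a hash collision. The
"naive fix" of masking unconditionally forces both sides onto the same key,
which would explain why it avoids the hang.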