From: Reinette Chatre
To: tglx@linutronix.de, fenghua.yu@intel.com, tony.luck@intel.com, vikas.shivappa@linux.intel.com
Cc: gavin.hindman@intel.com, jithu.joseph@intel.com, mingo@redhat.com, hpa@zytor.com, x86@kernel.org, linux-kernel@vger.kernel.org, Reinette Chatre
Subject: [PATCH] x86/intel_rdt: Fix possible circular lock dependency
Date: Wed, 11 Jul 2018 13:06:07 -0700

Lockdep is reporting a possible circular locking dependency:
======================================================
WARNING: possible circular locking dependency detected
4.18.0-rc1-test-test+ #4 Not tainted
------------------------------------------------------
user_example/766 is trying to acquire lock:
0000000073479a0f (rdtgroup_mutex){+.+.}, at: pseudo_lock_dev_mmap

but task is already holding lock:
000000001ef7a35b (&mm->mmap_sem){++++}, at: vm_mmap_pgoff+0x9f/0x

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&mm->mmap_sem){++++}:
       _copy_to_user+0x1e/0x70
       filldir+0x91/0x100
       dcache_readdir+0x54/0x160
       iterate_dir+0x142/0x190
       __x64_sys_getdents+0xb9/0x170
       do_syscall_64+0x86/0x200
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #1 (&sb->s_type->i_mutex_key#3){++++}:
       start_creating+0x60/0x100
       debugfs_create_dir+0xc/0xc0
       rdtgroup_pseudo_lock_create+0x217/0x4d0
       rdtgroup_schemata_write+0x313/0x3d0
       kernfs_fop_write+0xf0/0x1a0
       __vfs_write+0x36/0x190
       vfs_write+0xb7/0x190
       ksys_write+0x52/0xc0
       do_syscall_64+0x86/0x200
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #0 (rdtgroup_mutex){+.+.}:
       __mutex_lock+0x80/0x9b0
       pseudo_lock_dev_mmap+0x2f/0x170
       mmap_region+0x3d6/0x610
       do_mmap+0x387/0x580
       vm_mmap_pgoff+0xcf/0x110
       ksys_mmap_pgoff+0x170/0x1f0
       do_syscall_64+0x86/0x200
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

Chain exists of:
  rdtgroup_mutex --> &sb->s_type->i_mutex_key#3 --> &mm->mmap_sem

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&mm->mmap_sem);
                               lock(&sb->s_type->i_mutex_key#3);
                               lock(&mm->mmap_sem);
  lock(rdtgroup_mutex);

 *** DEADLOCK ***

1 lock held by user_example/766:
 #0: 000000001ef7a35b (&mm->mmap_sem){++++}, at: vm_mmap_pgoff+0x9f/0x110

rdtgroup_mutex is already being released temporarily during pseudo-lock
region creation to prevent the potential deadlock between rdtgroup_mutex
and mm->mmap_sem that is obtained during device_create(). Move the
debugfs creation into this area to avoid the same circular dependency.
Signed-off-by: Reinette Chatre
---
 arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c | 29 ++++++++++-----------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c b/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c
index 751c78f9992f..f80c58f8adc3 100644
--- a/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c
+++ b/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c
@@ -1254,19 +1254,10 @@ int rdtgroup_pseudo_lock_create(struct rdtgroup *rdtgrp)
 		goto out_cstates;
 	}
 
-	if (!IS_ERR_OR_NULL(debugfs_resctrl)) {
-		plr->debugfs_dir = debugfs_create_dir(rdtgrp->kn->name,
-						      debugfs_resctrl);
-		if (!IS_ERR_OR_NULL(plr->debugfs_dir))
-			debugfs_create_file("pseudo_lock_measure", 0200,
-					    plr->debugfs_dir, rdtgrp,
-					    &pseudo_measure_fops);
-	}
-
 	ret = pseudo_lock_minor_get(&new_minor);
 	if (ret < 0) {
 		rdt_last_cmd_puts("unable to obtain a new minor number\n");
-		goto out_debugfs;
+		goto out_cstates;
 	}
 
 	/*
@@ -1275,11 +1266,20 @@ int rdtgroup_pseudo_lock_create(struct rdtgroup *rdtgrp)
 	 *
 	 * The mutex has to be released temporarily to avoid a potential
 	 * deadlock with the mm->mmap_sem semaphore which is obtained in
-	 * the device_create() callpath below as well as before the mmap()
-	 * callback is called.
+	 * the device_create() and debugfs_create_dir() callpath below
+	 * as well as before the mmap() callback is called.
 	 */
 	mutex_unlock(&rdtgroup_mutex);
 
+	if (!IS_ERR_OR_NULL(debugfs_resctrl)) {
+		plr->debugfs_dir = debugfs_create_dir(rdtgrp->kn->name,
+						      debugfs_resctrl);
+		if (!IS_ERR_OR_NULL(plr->debugfs_dir))
+			debugfs_create_file("pseudo_lock_measure", 0200,
+					    plr->debugfs_dir, rdtgrp,
+					    &pseudo_measure_fops);
+	}
+
 	dev = device_create(pseudo_lock_class, NULL,
 			    MKDEV(pseudo_lock_major, new_minor),
 			    rdtgrp, "%s", rdtgrp->kn->name);
@@ -1290,7 +1290,7 @@ int rdtgroup_pseudo_lock_create(struct rdtgroup *rdtgrp)
 		ret = PTR_ERR(dev);
 		rdt_last_cmd_printf("failed to create character device: %d\n",
 				    ret);
-		goto out_minor;
+		goto out_debugfs;
 	}
 
 	/* We released the mutex - check if group was removed while we did so */
@@ -1311,10 +1311,9 @@ int rdtgroup_pseudo_lock_create(struct rdtgroup *rdtgrp)
 out_device:
 	device_destroy(pseudo_lock_class, MKDEV(pseudo_lock_major, new_minor));
-out_minor:
-	pseudo_lock_minor_release(new_minor);
 out_debugfs:
 	debugfs_remove_recursive(plr->debugfs_dir);
+	pseudo_lock_minor_release(new_minor);
 out_cstates:
 	pseudo_lock_cstates_relax(plr);
 out_region:
-- 
2.17.0