From: Waiman Long <longman@redhat.com>
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng,
    "Paul E. McKenney", Davidlohr Bueso
Cc: linux-kernel@vger.kernel.org, Juri Lelli, Waiman Long
Subject: [PATCH-tip 5/5] locking/locktorture: Fix incorrect use of ww_acquire_ctx in ww_mutex test
Date: Thu, 18 Mar 2021 13:28:14 -0400
Message-Id: <20210318172814.4400-6-longman@redhat.com>
In-Reply-To: <20210318172814.4400-1-longman@redhat.com>
References: <20210318172814.4400-1-longman@redhat.com>

The ww_acquire_ctx structure used with a ww_mutex needs to persist for a
complete lock/unlock cycle. In the ww_mutex test in locktorture, however,
both ww_acquire_init() and ww_acquire_fini() are called within the lock
function only. This causes a lockdep splat of "WARNING: Nested lock was
not taken" when lockdep is enabled in the kernel.

Fix this problem by moving the ww_acquire_fini() call after the
ww_mutex_unlock() calls in torture_ww_mutex_unlock(). This is done by
allocating a global array of ww_acquire_ctx structures, one per locking
thread. Each locking thread is associated with its own ww_acquire_ctx
via its unique thread id, so that both the lock and unlock functions
access the same ww_acquire_ctx structure.
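For reference, a minimal sketch of the intended ww_acquire_ctx lifecycle
(illustrative only, not part of the patch; it reuses the existing
torture_ww_class and torture_ww_mutex_0 definitions, takes a single
mutex, and omits the -EDEADLK backoff handling):

	struct ww_acquire_ctx ctx;

	ww_acquire_init(&ctx, &torture_ww_class);	/* begin acquire phase */
	ww_mutex_lock(&torture_ww_mutex_0, &ctx);	/* lock under the context */
	/* ... critical section ... */
	ww_mutex_unlock(&torture_ww_mutex_0);		/* unlock phase */
	ww_acquire_fini(&ctx);	/* only after every lock taken with ctx is unlocked */
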
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/locktorture.c | 39 +++++++++++++++++++++++++++++++------------
 1 file changed, 27 insertions(+), 12 deletions(-)

diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index 90a975a95a13..b3adb40549bf 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -374,15 +374,27 @@ static struct lock_torture_ops mutex_lock_ops = {
  */
 static DEFINE_WD_CLASS(torture_ww_class);
 static struct ww_mutex torture_ww_mutex_0, torture_ww_mutex_1, torture_ww_mutex_2;
+static struct ww_acquire_ctx *ww_acquire_ctxs;
 
 static void torture_ww_mutex_init(void)
 {
 	ww_mutex_init(&torture_ww_mutex_0, &torture_ww_class);
 	ww_mutex_init(&torture_ww_mutex_1, &torture_ww_class);
 	ww_mutex_init(&torture_ww_mutex_2, &torture_ww_class);
+
+	ww_acquire_ctxs = kmalloc_array(cxt.nrealwriters_stress,
+					sizeof(*ww_acquire_ctxs),
+					GFP_KERNEL);
+	if (!ww_acquire_ctxs)
+		VERBOSE_TOROUT_STRING("ww_acquire_ctx: Out of memory");
+}
+
+static void torture_ww_mutex_exit(void)
+{
+	kfree(ww_acquire_ctxs);
 }
 
-static int torture_ww_mutex_lock(int tid __maybe_unused)
+static int torture_ww_mutex_lock(int tid)
 __acquires(torture_ww_mutex_0)
 __acquires(torture_ww_mutex_1)
 __acquires(torture_ww_mutex_2)
@@ -392,7 +404,7 @@ __acquires(torture_ww_mutex_2)
 		struct list_head link;
 		struct ww_mutex *lock;
 	} locks[3], *ll, *ln;
-	struct ww_acquire_ctx ctx;
+	struct ww_acquire_ctx *ctx = &ww_acquire_ctxs[tid];
 
 	locks[0].lock = &torture_ww_mutex_0;
 	list_add(&locks[0].link, &list);
@@ -403,12 +415,12 @@ __acquires(torture_ww_mutex_2)
 	locks[2].lock = &torture_ww_mutex_2;
 	list_add(&locks[2].link, &list);
 
-	ww_acquire_init(&ctx, &torture_ww_class);
+	ww_acquire_init(ctx, &torture_ww_class);
 
 	list_for_each_entry(ll, &list, link) {
 		int err;
 
-		err = ww_mutex_lock(ll->lock, &ctx);
+		err = ww_mutex_lock(ll->lock, ctx);
 		if (!err)
 			continue;
 
@@ -419,26 +431,29 @@ __acquires(torture_ww_mutex_2)
 		if (err != -EDEADLK)
 			return err;
 
-		ww_mutex_lock_slow(ll->lock, &ctx);
+		ww_mutex_lock_slow(ll->lock, ctx);
 		list_move(&ll->link, &list);
 	}
 
-	ww_acquire_fini(&ctx);
 	return 0;
 }
 
-static void torture_ww_mutex_unlock(int tid __maybe_unused)
+static void torture_ww_mutex_unlock(int tid)
 __releases(torture_ww_mutex_0)
 __releases(torture_ww_mutex_1)
 __releases(torture_ww_mutex_2)
 {
+	struct ww_acquire_ctx *ctx = &ww_acquire_ctxs[tid];
+
 	ww_mutex_unlock(&torture_ww_mutex_0);
 	ww_mutex_unlock(&torture_ww_mutex_1);
 	ww_mutex_unlock(&torture_ww_mutex_2);
+	ww_acquire_fini(ctx);
 }
 
 static struct lock_torture_ops ww_mutex_lock_ops = {
 	.init		= torture_ww_mutex_init,
+	.exit		= torture_ww_mutex_exit,
 	.writelock	= torture_ww_mutex_lock,
 	.write_delay	= torture_mutex_delay,
 	.task_boost	= torture_boost_dummy,
@@ -924,16 +939,16 @@ static int __init lock_torture_init(void)
 		goto unwind;
 	}
 
-	if (cxt.cur_ops->init) {
-		cxt.cur_ops->init();
-		cxt.init_called = true;
-	}
-
 	if (nwriters_stress >= 0)
 		cxt.nrealwriters_stress = nwriters_stress;
 	else
 		cxt.nrealwriters_stress = 2 * num_online_cpus();
 
+	if (cxt.cur_ops->init) {
+		cxt.cur_ops->init();
+		cxt.init_called = true;
+	}
+
 #ifdef CONFIG_DEBUG_MUTEXES
 	if (str_has_prefix(torture_type, "mutex"))
 		cxt.debug_lock = true;
-- 
2.18.1