From: Brendan Higgins
Date: Fri, 28 Jun 2019 01:09:44 -0700
Subject: Re: [PATCH v5 01/18] kunit: test: add KUnit test runner core
To: Stephen Boyd
Cc: Frank Rowand, Greg KH, Josh Poimboeuf, Kees Cook, Kieran Bingham,
    Luis Chamberlain, Peter Zijlstra, Rob Herring, shuah, "Theodore Ts'o",
    Masahiro Yamada, devicetree, dri-devel, kunit-dev@googlegroups.com,
    "open list:DOCUMENTATION", linux-fsdevel@vger.kernel.org, linux-kbuild,
    Linux Kernel Mailing List, "open list:KERNEL SELFTEST FRAMEWORK",
    linux-nvdimm, linux-um@lists.infradead.org, Sasha Levin, "Bird, Timothy",
    Amir Goldstein, Dan Carpenter, Daniel Vetter, Jeff Dike, Joel Stanley,
    Julia Lawall, Kevin Hilman, Knut Omang,
    Logan Gunthorpe, Michael Ellerman, Petr Mladek, Randy Dunlap,
    Richard Weinberger, David Rientjes, Steven Rostedt, wfg@linux.intel.com

On Thu, Jun 27, 2019 at 11:16 AM Stephen Boyd wrote:
>
> Quoting Brendan Higgins (2019-06-26 16:00:40)
> > On Tue, Jun 25, 2019 at 8:41 PM Stephen Boyd wrote:
> > >
> > > scenario like below, but where it is a problem. There could be three
> > > CPUs, or even one CPU and three threads if you want to describe the
> > > extra thread scenario.
> > >
> > > Here's my scenario where it isn't needed:
> > >
> > > CPU0                              CPU1
> > > ----                              ----
> > > kunit_run_test(&test)
> > >                                   test_case_func()
> > >                                   ....
> > >                                   [mock hardirq]
> > >                                   kunit_set_success(&test)
> > >                                   [hardirq ends]
> > >                                   ...
> > >                                   complete(&test_done)
> > > wait_for_completion(&test_done)
> > > kunit_get_success(&test)
> > >
> > > We don't need to care about having locking here because success or
> > > failure only happens in one place and it's synchronized with the
> > > completion.
> >
> > Here is the scenario I am concerned about:
> >
> > CPU0                      CPU1                      CPU2
> > ----                      ----                      ----
> > kunit_run_test(&test)
> >                           test_case_func()
> >                           ....
> >                           schedule_work(foo_func)
> >                           [mock hardirq]            foo_func()
> >                           ...                       ...
> >                           kunit_set_success(false)  kunit_set_success(false)
> >                           [hardirq ends]            ...
> >                           ...
> >                           complete(&test_done)
> > wait_for_completion(...)
> > kunit_get_success(&test)
> >
> > In my scenario, both CPU1 and CPU2 update the success status of the
> > test simultaneously, even though they are setting it to the same
> > value. If my understanding is correct, this could result in a
> > write-tear on some architectures in some circumstances. I suppose we
> > could just make it an atomic boolean, but I figured locking is also
> > fine, and generally preferred.
>
> This is what we have WRITE_ONCE() and READ_ONCE() for. Maybe you could
> just use that in the getter and setters and remove the lock if it isn't
> used for anything else.
>
> It may also be a good idea to have a kunit_fail_test() API that fails
> the test passed in with a WRITE_ONCE(false). Otherwise, the test is
> assumed successful and it isn't even possible for a test to change the
> state from failure to success due to a logical error because the API
> isn't available. Then we don't really need to have a generic
> kunit_set_success() function at all. We could have a kunit_test_failed()
> function too that replaces the kunit_get_success() function. That would
> read better in an if condition.

You know what, I think you are right. Sorry for not realizing this
earlier; I think you mentioned something along these lines a long time
ago. Thanks for your patience! (A sketch of what that API could look
like is appended at the end of this mail.)

> > Also, to be clear, I am onboard with dropping the IRQ stuff for now.
> > I am fine moving to a mutex for the time being.
>
> Ok.

Thanks!
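
For concreteness, here is a rough sketch of the API Stephen is
describing. This is illustrative only, not the code from the patch
series: the struct layout and the exact helper signatures are
assumptions based on the discussion above.

	#include <linux/compiler.h>	/* READ_ONCE() / WRITE_ONCE() */
	#include <linux/types.h>	/* bool */

	struct kunit {
		/*
		 * Starts out true and only ever transitions to false, so
		 * no lock is needed; all accesses go through the *_ONCE()
		 * helpers below, which keep the compiler from tearing,
		 * fusing, or caching the loads and stores.
		 */
		bool success;
		/* ... */
	};

	/*
	 * One-way "fail" setter replacing a generic kunit_set_success().
	 * A test cannot flip itself back to success by mistake, and
	 * calling this concurrently from the test thread, a mock hardirq,
	 * and a workqueue (the three-CPU scenario above) is harmless
	 * because every caller stores the same value.
	 */
	static inline void kunit_fail_test(struct kunit *test)
	{
		WRITE_ONCE(test->success, false);
	}

	/* Replaces kunit_get_success(); reads better in an if condition. */
	static inline bool kunit_test_failed(struct kunit *test)
	{
		return !READ_ONCE(test->success);
	}

The check at the end of kunit_run_test() would then read something
like:

	if (kunit_test_failed(test))
		/* ...report the failure... */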