Date: Mon, 29 Jul 2019 16:33:19 +0100
From: Catalin Marinas
To: Valentin Schneider
Cc: Nikolay Borisov, linux-btrfs@vger.kernel.org, paulmck@linux.ibm.com,
    andrea.parri@amarulasolutions.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 0/2] Refactor snapshot vs nocow writers locking
Message-ID: <20190729153319.GH2368@arrakis.emea.arm.com>
References: <20190719083949.5351-1-nborisov@suse.com>

Some nitpicking below:

On Mon, Jul 29, 2019 at 03:13:42PM +0100, Valentin Schneider wrote:
> specs.tla:
>
> ---- MODULE specs ----
> EXTENDS Integers, Sequences, TLC
>
> CONSTANTS
>     NR_WRITERS,
>     NR_READERS,
>     WRITER_TASK,
>     READER_TASK
>
> WRITERS == {WRITER_TASK} \X (1..NR_WRITERS)
> READERS == {READER_TASK} \X (1..NR_READERS)
> THREADS == WRITERS \union READERS

Recommendation: use symbolic values for WRITERS and READERS (defined in
.cfg: e.g. r1, r2, r3, w1, w2, w3). It allows you to do symmetry
optimisations (a minimal sketch is appended below). We've also hit a TLC
bug in the past with process values made up of a Cartesian product
(though it may have been fixed since).

> macro ReadLock(tid)
> {
>     if (lock_state = "idle" \/ lock_state = "read_locked") {
>         lock_state := "read_locked";
>         threads[tid] := "read_locked";
>     } else {
>         assert lock_state = "write_locked";
>         \* waiting for writers to finish
>         threads[tid] := "write_waiting";
>         await lock_state = "" \/ lock_state = "read_locked";

lock_state = "idle"?

> macro WriteLock(tid)
> {
>     if (lock_state = "idle" \/ lock_state = "write_locked") {
>         lock_state := "write_locked";
>         threads[tid] := "write_locked";
>     } else {
>         assert lock_state = "read_locked";
>         \* waiting for readers to finish
>         threads[tid] := "read_waiting";
>         await lock_state = "idle" \/ lock_state = "write_locked";
>     };
> }

I'd say that's one of the pitfalls of PlusCal. The above is executed
atomically, so you'd have the lock_state read and updated in the same
action. Looking at the C patches, there is an atomic_read(&lock->readers)
followed by a percpu_counter_inc(&lock->writers). Between these two, you
can have "readers" becoming non-zero via a different CPU.

My suggestion would be to use procedures with labels to express the
non-atomicity of such sequences (a rough sketch is appended below).

> macro ReadUnlock(tid) {
>     if (threads[tid] = "read_locked") {
>         threads[tid] := "idle";
>         if (\A thread \in THREADS: threads[thread] # "read_locked") {
>             \* we were the last read holder, everyone else should be waiting, unlock the lock
>             lock_state := "idle";
>         };
>     };
> }

I'd make this close to the proposed C code with atomic counters (again,
see the sketch appended below). You'd not be able to check each thread
atomically in practice anyway.

--
Catalin
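
To make the symbolic-values suggestion concrete, here is a minimal
sketch, assuming WRITERS and READERS are turned into CONSTANTS assigned
from model values in the .cfg; the value names (r1..r3, w1..w3), the
Symm operator and the Spec name are illustrative, not taken from the
actual spec:

In specs.tla:

    CONSTANTS WRITERS, READERS      \* sets of model values, assigned in specs.cfg

    THREADS == WRITERS \union READERS

    \* Permutations comes from the TLC module, which the spec already EXTENDS
    Symm == Permutations(WRITERS) \union Permutations(READERS)

In specs.cfg:

    SPECIFICATION Spec
    CONSTANTS
        WRITERS = {w1, w2, w3}
        READERS = {r1, r2, r3}
    SYMMETRY Symm

The process sets can then range directly over WRITERS and READERS
instead of over Cartesian products, which also sidesteps the TLC issue
mentioned above. Keep in mind TLC's caveat that symmetry sets should not
be relied on when checking liveness properties.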
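
For the procedures-with-labels point, a rough sketch of the write-side
sequence; it is not a full model of the lock, it only shows how labels
split the readers check and the writers increment into two actions that
TLC can interleave with other processes. The names write_lock, wl_check,
wl_inc and the readers/writers globals are made up for the example:

    \* assumes globals declared in the algorithm, e.g.:
    \*   variables readers = 0, writers = 0;

    procedure write_lock()
    {
    wl_check:                       \* one action: atomic_read(&lock->readers)
        await readers = 0;
    wl_inc:                         \* a separate action: percpu_counter_inc(&lock->writers)
        writers := writers + 1;     \* a reader may have bumped readers between
                                    \* wl_check and wl_inc -- exactly the window
                                    \* the atomic macro version cannot exhibit
        return;
    }

A writer process would then do "call write_lock();" (with a label on the
statement following the call), and TLC will explore the interleaving
where a reader slips in between wl_check and wl_inc.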
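
Similarly, the read-unlock path could be modelled with the same counter
instead of quantifying over every thread's state; again just a sketch
with made-up names:

    procedure read_unlock()
    {
    ru_dec:                         \* one action, mirroring an atomic decrement
        readers := readers - 1;     \* of lock->readers; no need to inspect the
                                    \* other threads, blocked writers simply
                                    \* await readers = 0
        return;
    }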