Date: Mon, 6 Mar 2023 20:00:35 -0500
From: Peter Xu
To: Axel Rasmussen
Cc: Alexander Viro, Andrew Morton, Hugh Dickins, Jan Kara,
	"Liam R. Howlett", Matthew Wilcox, Mike Kravetz, Mike Rapoport,
	Muchun Song, Nadav Amit, Shuah Khan, James Houghton,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-kselftest@vger.kernel.org
Subject: Re: [PATCH v3 3/5] mm: userfaultfd: combine 'mode' and 'wp_copy' arguments
References: <20230306225024.264858-1-axelrasmussen@google.com>
	<20230306225024.264858-4-axelrasmussen@google.com>
In-Reply-To: <20230306225024.264858-4-axelrasmussen@google.com>

On Mon, Mar 06, 2023 at 02:50:22PM -0800, Axel Rasmussen wrote:
> Many userfaultfd ioctl functions take both a 'mode' and a 'wp_copy'
> argument. In future commits we plan to plumb the flags through to more
> places, so we'd be proliferating the very long argument list even
> further.
>
> Let's take the time to simplify the argument list. Combine the two
> arguments into one - and generalize, so when we add more flags in the
> future, it doesn't imply more function arguments.
>
> Since the modes (copy, zeropage, continue) are mutually exclusive, store
> them as an integer value (0, 1, 2) in the low bits. Place combine-able
> flag bits in the high bits.
>
> This is quite similar to an earlier patch proposed by Nadav Amit
> ("userfaultfd: introduce uffd_flags" - for some reason Lore no longer
> has a copy of the patch). The main difference is that patch only handled

Lore has it. :)

https://lore.kernel.org/all/20220619233449.181323-2-namit@vmware.com

And btw, sorry for the late review.

> flags, whereas this patch *also* combines the "mode" argument into the
> same type to shorten the argument list.
>
> Acked-by: James Houghton
> Signed-off-by: Axel Rasmussen

Mostly good to me; a few nitpicks below.

[...]

> +/* A combined operation mode + behavior flags. */
> +typedef unsigned int __bitwise uffd_flags_t;
> +
> +/* Mutually exclusive modes of operation. */
> +enum mfill_atomic_mode {
> +	MFILL_ATOMIC_COPY = (__force uffd_flags_t) 0,
> +	MFILL_ATOMIC_ZEROPAGE = (__force uffd_flags_t) 1,
> +	MFILL_ATOMIC_CONTINUE = (__force uffd_flags_t) 2,
> +	NR_MFILL_ATOMIC_MODES,
> };

I've never used an enum like this. I had a feeling that this would
enforce the type on the enum entries, but would the enforcement apply to
later assignments too?  I'm not sure.  I did a quick test, and sparse
actually complains about calculating the last enum entry:

---8<---
$ cat a.c
typedef unsigned int __attribute__((bitwise)) flags_t;

enum {
    FLAG1 = (__attribute__((force)) flags_t) 0,
    FLAG_NUM,
};

void main(void)
{
    flags_t flags = FLAG1;
}
$ sparse a.c
a.c:5:5: error: can't increment the last enum member
---8<---

Maybe just use simple "#define"s?

>
> +#define MFILL_ATOMIC_MODE_BITS (const_ilog2(NR_MFILL_ATOMIC_MODES - 1) + 1)

Here IIUC it should be "const_ilog2(NR_MFILL_ATOMIC_MODES) + 1", but
maybe we don't bother and just define every bit explicitly?

> +#define MFILL_ATOMIC_BIT(nr) ((__force uffd_flags_t) BIT(MFILL_ATOMIC_MODE_BITS + (nr)))
> +#define MFILL_ATOMIC_MODE_MASK (MFILL_ATOMIC_BIT(0) - 1)
> +
> +/* Flags controlling behavior. */
> +#define MFILL_ATOMIC_WP MFILL_ATOMIC_BIT(0)

[...]

> @@ -312,9 +312,9 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> 					      unsigned long dst_start,
> 					      unsigned long src_start,
> 					      unsigned long len,
> -					      enum mcopy_atomic_mode mode,
> -					      bool wp_copy)
> +					      uffd_flags_t flags)
> {
> +	int mode = flags & MFILL_ATOMIC_MODE_MASK;
> 	struct mm_struct *dst_mm = dst_vma->vm_mm;
> 	int vm_shared = dst_vma->vm_flags & VM_SHARED;
> 	ssize_t err;
> @@ -333,7 +333,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> 	 * by THP. Since we can not reliably insert a zero page, this
> 	 * feature is not supported.
> 	 */
> -	if (mode == MCOPY_ATOMIC_ZEROPAGE) {
> +	if (mode == MFILL_ATOMIC_ZEROPAGE) {

The mode comes from "& MFILL_ATOMIC_MODE_MASK", but that doesn't
immediately tell the reader whether the mask involves a shift.  Would it
look better if we had a helper to fetch the mode?  The function name
makes it clear that whatever it returns must be the mode:

	if (uffd_flags_get_mode(flags) == MFILL_ATOMIC_ZEROPAGE)

We'd also avoid quite a few "mode" variables.  All the remaining bits
are fine to test with "flags & FLAG1" since each is a boolean (only this
"mode" is slightly tricky).

What do you think?

Thanks,

-- 
Peter Xu