Date: Tue, 7 May 2024 03:44:52 -0700
From: Breno Leitao
To: Jens Axboe
Cc: Pavel Begunkov, leit@meta.com, "open list:IO_URING", open list
Subject: Re: [PATCH] io_uring/io-wq: Use set_bit() and test_bit() at worker->flags
References: <20240503173711.2211911-1-leitao@debian.org>

On Fri, May 03, 2024 at 12:32:38PM -0600, Jens Axboe wrote:
> On 5/3/24 11:37 AM, Breno Leitao wrote:
> > Utilize set_bit() and test_bit() on worker->flags within io_uring/io-wq
> > to address potential data races.
> >
> > The structure io_worker->flags may be accessed through parallel data
> > paths, leading to concurrency issues. When KCSAN is enabled, it reveals
> > data races occurring in io_worker_handle_work and
> > io_wq_activate_free_worker functions.
> >
> > BUG: KCSAN: data-race in io_worker_handle_work / io_wq_activate_free_worker
> >
> > write to 0xffff8885c4246404 of 4 bytes by task 49071 on cpu 28:
> >   io_worker_handle_work (io_uring/io-wq.c:434 io_uring/io-wq.c:569)
> >   io_wq_worker (io_uring/io-wq.c:?)
> >
> > read to 0xffff8885c4246404 of 4 bytes by task 49024 on cpu 5:
> >   io_wq_activate_free_worker (io_uring/io-wq.c:? io_uring/io-wq.c:285)
> >   io_wq_enqueue (io_uring/io-wq.c:947)
> >   io_queue_iowq (io_uring/io_uring.c:524)
> >   io_req_task_submit (io_uring/io_uring.c:1511)
> >   io_handle_tw_list (io_uring/io_uring.c:1198)
> >
> > Line numbers against commit 18daea77cca6 ("Merge tag 'for-linus' of
> > git://git.kernel.org/pub/scm/virt/kvm/kvm").
> >
> > These races involve writes and reads to the same memory location by
> > different tasks running on different CPUs. To mitigate this, refactor
> > the code to use atomic operations such as set_bit(), test_bit(), and
> > clear_bit() instead of basic "and" and "or" operations. This ensures
> > thread-safe manipulation of worker flags.
>
> Looks good, a few comments for v2:
>
> > diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
> > index 522196dfb0ff..6712d70d1f18 100644
> > --- a/io_uring/io-wq.c
> > +++ b/io_uring/io-wq.c
> > @@ -44,7 +44,7 @@ enum {
> >   */
> >  struct io_worker {
> >  	refcount_t ref;
> > -	unsigned flags;
> > +	unsigned long flags;
> >  	struct hlist_nulls_node nulls_node;
> >  	struct list_head all_list;
> >  	struct task_struct *task;
>
> This now creates a hole in the struct, maybe move 'lock' up after ref so
> that it gets filled and the current hole after 'lock' gets removed as
> well?

I am not sure I can see it. From my tests, we got the same hole, and the
struct size is the same.
This is what I got with the change:

struct io_worker {
	refcount_t                 ref;                  /*     0     4 */

	/* XXX 4 bytes hole, try to pack */

	raw_spinlock_t             lock;                 /*     8    64 */
	/* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
	...

	/* size: 336, cachelines: 6, members: 14 */
	/* sum members: 328, holes: 2, sum holes: 8 */
	/* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
	/* last cacheline: 16 bytes */
} __attribute__((__aligned__(8)));

This is what this current patch returns:

struct io_worker {
	refcount_t                 ref;                  /*     0     4 */

	/* XXX 4 bytes hole, try to pack */

	long unsigned int          flags;                /*     8     8 */
	...

	/* size: 336, cachelines: 6, members: 14 */
	/* sum members: 328, holes: 2, sum holes: 8 */
	/* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
	/* last cacheline: 16 bytes */
} __attribute__((__aligned__(8)));

A possible suggestion is to move `create_index` after `ref`. Then we can
get a more packed structure:

struct io_worker {
	refcount_t                 ref;                  /*     0     4 */
	int                        create_index;         /*     4     4 */
	long unsigned int          flags;                /*     8     8 */
	struct hlist_nulls_node    nulls_node;           /*    16    16 */
	struct list_head           all_list;             /*    32    16 */
	struct task_struct *       task;                 /*    48     8 */
	struct io_wq *             wq;                   /*    56     8 */
	/* --- cacheline 1 boundary (64 bytes) --- */
	struct io_wq_work *        cur_work;             /*    64     8 */
	struct io_wq_work *        next_work;            /*    72     8 */
	raw_spinlock_t             lock;                 /*    80    64 */
	/* --- cacheline 2 boundary (128 bytes) was 16 bytes ago --- */
	struct completion          ref_done;             /*   144    88 */
	/* --- cacheline 3 boundary (192 bytes) was 40 bytes ago --- */
	long unsigned int          create_state;         /*   232     8 */
	struct callback_head       create_work __attribute__((__aligned__(8))); /*   240    16 */
	/* --- cacheline 4 boundary (256 bytes) --- */
	union {
		struct callback_head rcu __attribute__((__aligned__(8))); /*   256    16 */
		struct work_struct work;                  /*   256    72 */
	} __attribute__((__aligned__(8)));                /*   256    72 */

	/* size: 328, cachelines: 6, members: 14 */
	/* forced alignments: 2 */
	/* last cacheline: 8 bytes */
} __attribute__((__aligned__(8)));

How does it sound?
> And then I'd renumber the flags, they take bit offsets, not
> masks/values. Otherwise it's a bit confusing for someone reading the
> code, using masks with test/set bit functions.

Good point. What about something like this?

enum {
	IO_WORKER_F_UP		= 0,	/* up and active */
	IO_WORKER_F_RUNNING	= 1,	/* account as running */
	IO_WORKER_F_FREE	= 2,	/* worker on free list */
	IO_WORKER_F_BOUND	= 3,	/* is doing bounded work */
};

Since we are now using WRITE_ONCE() in io_wq_worker, I am wondering if
this is what we want to do?

	WRITE_ONCE(worker->flags, (IO_WORKER_F_UP | IO_WORKER_F_RUNNING) << 1);

Thanks