Date: Mon, 26 Feb 2024 11:10:42 +1100
From: Dave Chinner <david@fromorbit.com>
To: Luis Chamberlain
Cc: lsf-pc@lists.linux-foundation.org, John Garry, Ted Ts'o,
	"Martin K. Petersen", Pankaj Raghav, Daniel Gomez, Matthew Wilcox,
	Keith Busch, Bart Van Assche, hch@lst.de, djwong@kernel.org,
	viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
	chandan.babu@oracle.com, linux-kernel@vger.kernel.org,
	linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-block@vger.kernel.org, jbongio@google.com,
	ojaswin@linux.ibm.com
Subject: Re: [LSF/MM/BPF TOPIC] no tears atomics & LBS

On Thu, Feb 22, 2024 at 01:59:32PM -0800, Luis Chamberlain wrote:
> At last year's LSFMM we learned through Ted Ts'o about the interest by
> cloud providers in large atomics [0]. It is a good example where cloud
> providers innovated in an area perhaps before storage vendors were
> providing hardware support for such features. An example use case was
> databases. In short, with large atomics, databases can disable their
> own version of journaling so as to increase TPS. Large atomics let you
> disable things like MySQL innodb_doublewrite. The feature that allows
> you to disable this and use large atomics is known as torn write
> prevention [1]. At least for MySQL the default page size for the
> database (used for columns) is 16k, so enabling, for example, a 16k
> atomic write lets you take advantage of this. It was also mentioned
> that PostgreSQL only supports buffered IO, so it would be desirable
> for a solution to support buffered IO with large atomics as well. The
> way cloud providers enable torn write protection today is with direct
> IO.
>
> John Garry has been working on adding an API for atomic writes; some
> folks refer to this as the no-tears atomic API. It consists of two
> parts, one for the block layer [2] and another set of changes for XFS
> [3]. It enables direct IO support with large atomics. It includes a
> userspace API which lets you peg a FS_XFLAG_ATOMICWRITES flag onto a
> file, and you then create an XFS filesystem using the XFS realtime
> subvolume with an extent alignment. The current user of this API seems
> to be SCSI, but obviously this can grow to support others. A neat
> feature of this effort is that you can have two separate directories
> with separate alignment requirements. There is no generic filesystem
> solution yet.
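
For concreteness, this is roughly how userspace drives the proposed
API: query the advertised untorn write limits with statx(), then issue
a direct IO write with the proposed RWF_ATOMIC flag. A minimal sketch
follows; RWF_ATOMIC, STATX_WRITE_ATOMIC and the stx_atomic_write_unit_*
statx fields are taken from the posted series and are not stable ABI,
so the names may change before anything is merged.

/*
 * Sketch only: RWF_ATOMIC, STATX_WRITE_ATOMIC and the
 * stx_atomic_write_unit_* statx fields are from the posted patch
 * series and are not yet stable ABI; names may change before merge.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/uio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct statx stx;
	struct iovec iov;
	void *buf;
	int fd;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_WRONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* What untorn write sizes does this file support? */
	if (statx(AT_FDCWD, argv[1], 0, STATX_WRITE_ATOMIC, &stx) < 0) {
		perror("statx");
		return 1;
	}
	printf("atomic write unit min/max: %u/%u\n",
	       stx.stx_atomic_write_unit_min,
	       stx.stx_atomic_write_unit_max);

	/* One 16k database page, aligned as direct IO requires. */
	if (posix_memalign(&buf, 16384, 16384))
		return 1;
	memset(buf, 0xab, 16384);
	iov.iov_base = buf;
	iov.iov_len = 16384;

	/* Untorn on power failure; the write fails rather than tears. */
	if (pwritev2(fd, &iov, 1, 0, RWF_ATOMIC) < 0)
		perror("pwritev2(RWF_ATOMIC)");
	close(fd);
	return 0;
}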
> Meanwhile we're now at a v2 RFC for LBS support [4]. Although the LBS
> effort was originally completely orthogonal to large atomics, it
> would seem there is now a direct relationship here worth discussing.
> In short, LBS enables buffered-IO large atomic support if the
> hardware supports it. Because LBS is built on large folios, we get
> the alignment constraints guaranteed and contiguous memory for the
> DMA of the IOs as well. We expect NVMe drives which support large
> atomics can easily profit from this without any userspace
> modification other than at filesystem creation time.

If we combine atomic writes with buffered writeback then we create a
major IO constraint: *all* writes must be atomic in this sort of setup
because we cannot allow multi-sector writes to be torn randomly in the
middle of *any* sector. i.e. the driver needs to tell the block device
that its maximum IO size is limited by the maximum atomic write size
the device supports.

With that constraint in place, I don't see how the page cache or
filesystem needs to care about how the underlying storage device
provides its atomic sector-sized IO. If the underlying device uses
atomic writes, then it needs to set up all its published IO
constraints that are used by filesystems to build bios around the
limitations of atomic writes. And that bleeds into userspace as well -
it needs to know the sector sizes so it can set up the filesystem
correctly in the first place.

Hence I think there is -zero- overlap between LBS and atomic writes.
Yes, a device can provide a larger sector size via atomic write
support, but that's orthogonal to LBS infrastructure. All the device
needs to do is to set all of the device limits to be based on atomic
write constraints. Nothing else in the kernel or userspace needs to
care, and then the driver can simply add the REQ_ATOMIC flag to all
the write IOs itself....

Note that I'm not talking about IOCB_ATOMIC here: the page cache
doesn't give any guarantees about atomic write semantics. e.g. reads
are allowed to race with writes to the same folio, and "atomic" user
writes that span folios can be written back independently (even whilst
the write() is in progress!), breaking the atomicity that userspace
specified. Hence if we want IOCB_ATOMIC for buffered writes, the first
problem that needs to be solved is providing guaranteed stable atomic
write semantics through the page cache right down to the async
writeback code.....
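
To make that race concrete, here is a small illustrative test (not
from the thread, just a sketch; the filename is arbitrary): a writer
flips a 16k file region between two byte patterns using ordinary
buffered writes while a reader polls the same region. Because the page
cache gives no cross-folio atomicity guarantee, the reader can observe
a mix of both patterns within a single 16k read.

/*
 * Illustrative only: demonstrates that a 16k buffered write is not
 * atomic with respect to a concurrent buffered read. Build with
 * -lpthread.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SZ	16384
static int fd;

static void *writer(void *arg)
{
	char buf[SZ];

	for (int i = 0; ; i++) {
		/* Whole-buffer write: all 'A's or all 'B's. */
		memset(buf, (i & 1) ? 'B' : 'A', SZ);
		pwrite(fd, buf, SZ, 0);
	}
	return NULL;
}

int main(void)
{
	char buf[SZ];
	pthread_t t;

	fd = open("tearing-test", O_RDWR | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(buf, 'A', SZ);
	pwrite(fd, buf, SZ, 0);
	pthread_create(&t, NULL, writer, NULL);

	for (;;) {
		pread(fd, buf, SZ, 0);
		/* A torn read sees both patterns in one 16k region. */
		if (memchr(buf, 'A', SZ) && memchr(buf, 'B', SZ)) {
			puts("torn 16k buffered read observed");
			return 0;
		}
	}
}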
> We reviewed the possible intersection of both efforts at our last LBS
> cabal with LBS-interested folks, Martin Petersen and John Garry. It
> is somewhat unclear exactly how to follow up on some aspects of the
> no-tear API [5], but there was agreement about the possible
> intersection of both efforts, and that we should discuss this at
> LSFMM. The goal would be to try to reach consensus on how the no-tear
> API and LBS could help those interested in leveraging large atomics.
>
> Some things to evaluate or for us to discuss:
>
> * no-tear API: But I like to cry.
>   - allows directories to have separate alignment requirements
>   - this might be useful for folks who want to use large IOs with
>     large atomics for some workloads but smaller IOs for another
>     directory on the same drive. Is this a viable option for users of
>     large atomics who are concerned about being forced to use only
>     large writes with LBS?

We can already do that with extent size hints in XFS.
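
For reference, that existing interface is the stable FS_IOC_FSSETXATTR
ioctl, which is effectively what "xfs_io -c 'extsize 16k' <dir>" does
under the covers. A minimal sketch, with a made-up directory path:

/*
 * Put a 16k inheritable extent size hint on a directory so that files
 * created inside it get 16k-aligned, 16k-granular allocation.
 * FS_IOC_FSSETXATTR is existing stable ABI, independent of the atomic
 * write series; the path is hypothetical.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>

int main(void)
{
	struct fsxattr fsx;
	int fd = open("/data/db-16k", O_RDONLY | O_DIRECTORY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, FS_IOC_FSGETXATTR, &fsx) < 0) {
		perror("FS_IOC_FSGETXATTR");
		return 1;
	}
	fsx.fsx_xflags |= FS_XFLAG_EXTSZINHERIT;  /* inherit on create */
	fsx.fsx_extsize = 16384;                  /* allocation granule */
	if (ioctl(fd, FS_IOC_FSSETXATTR, &fsx) < 0) {
		perror("FS_IOC_FSSETXATTR");
		return 1;
	}
	close(fd);
	return 0;
}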
>   - statx is modified so as to display the new alignment
>     considerations
>   - atomics are a power of 2
>   - there seems to be some interest in supporting a no-hardware-accel
>     atomic solution, i.e. a software-implemented atomic solution.
>     Could someone clarify if that's accurate? How is the double write
>     avoided? What are the use cases? Do databases use that today?

Christoph's proposal for XFS involves using existing internal
copy-on-write infrastructure for IOCB_ATOMIC writes. i.e. it uses the
filesystem journal to do the atomic swap of the new data extent in
place of the old one.

>   - How do we generalize a solution per file? Would extending a min
>     order per file be desirable? Is that even tenable?

AFAIA, this is already the plan with XFS via a FORCE_ALIGN inode flag
in conjunction with extent size hints.

> * LBS:
>   - stat will return the block size set, so userspace applications
>     using stat / statx will use the larger block size to ensure
>     alignment
>   - a drive with support for a large atomic but supporting smaller
>     logical block sizes will still allow writes at the logical block
>     size. If a block driver has a "preference" (in NVMe this would be
>     the NPWG for the IU) to write above the logical block size, do we
>     want the option to lift the logical block size? In retrospect I
>     don't think this is needed given Jan Kara's patches to prevent
>     writes to mounted devices [4]; those should ensure that if a
>     filesystem takes advantage of a larger physical block size and
>     creates a filesystem with it as the sector size, userspace won't
>     be mucking around with lower IOs to the drive while it is
>     mounted. But are there any applications which would get the block
>     device logical block size instead for DIO?
>   - LBS is transparent to userspace applications
>   - We've verified *most* IOs are aligned if you use a 16k block size
>     but a smaller sector size; the lower IOs were verified to come
>     from the XFS buffer cache. If your drive supports a large atomic
>     you can avoid these, as you can lift the sector size: the
>     physical block size will be larger than the logical block size.
>     For NVMe today this is possible for drives with a large NPWG (the
>     IU) and NAWUPF (the large atomic), for example.

This is just how the page cache and filesystems behave according to
sector and block size constraints defined by the block device and
mkfs. I'm not sure what you're asking that we comment on or discuss
here...

> Tooling:
>
> - Both efforts stand to gain from a shared verification set of tools
>   for alignment and atomic use
> - We have a block layer eBPF alignment tool written by Daniel Gomez
>   [6], however there is a lack of interested parties to help review a
>   simpler version of this tool so we can merge it [7]; we could
>   benefit from more eyeballs from experienced eBPF / block layer
>   folks.

Running and maintaining eBPF tools on development systems running
custom kernels is a PITA in my experience. Wouldn't it be better just
to add block tracepoint analysis filters to things like trace-cmd? We
already have tracepoints that expose all the IO operations like
queuing, merging, dispatch, etc. that users are familiar with and have
scripts and tooling written for. Adding a filter that calculates IO
alignment for traces during report generation would be much more
useful for IO analysis in general, as understanding these behaviours
is not specific to atomic writes.

-Dave.
-- 
Dave Chinner
david@fromorbit.com