Subject: Re: [dm-devel] RAID4 with no striping mode request
From: Roger Heflin
Date: Tue, 14 Feb 2023 16:28:36 -0600
To: Heinz Mauelshagen
Cc: Kyle Sanderson, linux-raid@vger.kernel.org, Song Liu, device-mapper development, John Stoffel, Linux-Kernel

On Tue, Feb 14, 2023 at 3:27 PM Heinz Mauelshagen wrote:
>
> ...which is RAID1 plus a parity disk, which seems superfluous as you
> achieve (N-1) resilience against single device failures already
> without the latter.
>
> What would you need such a parity disk for?
>
> Heinz
>

I thought that at first too, but threw that idea out as it did not make
much sense.  What he appears to want is 8 linear non-striped data disks
+ a parity disk.
Such that you can lose any one data disk and parity can rebuild that
disk, and if you lose several data disks you still have intact
non-striped data on the remaining disks.

It would almost seem that you would need to put a separate filesystem
on each data disk/section (or have a filesystem redundant enough to
survive), since otherwise losing an entire data disk would leave the
filesystem in a mess.  So: N filesystems + a parity disk covering the
data on the N separate filesystems.

Each write then needs you to read the old data from the disk you are
writing to, read the old parity, recalculate the new parity, and write
out both the new data and the new parity (see the sketch below).

If the parity disk were an SSD it would be fast enough, but I would
expect it to get used up/burned out, since parity is rewritten for
every write on every data disk, unless you bought an expensive
high-write SSD.

The only advantage of the setup is that if you lose too many disks you
still have some of the data.  It is not clear to me that it would be
any cheaper, if parity needs to be a normal SSD (SSDs are about 4x the
price/GB, and high-write ones are even more), than a classic bunch of
mirrors, or even say a 4-disk RAID6 where you can lose any 2 disks and
still have all the data.
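For what it's worth, the parity math in both directions is the plain
XOR that RAID4/5 already use.  A minimal userspace sketch of the two
operations (hypothetical fixed block size and function names, nothing
like the actual md driver code):

#include <stddef.h>
#include <stdint.h>

enum { BLK = 4096 };  /* hypothetical block size, for illustration */

/*
 * Read-modify-write parity update for one block:
 * new_parity = old_parity ^ old_data ^ new_data.
 */
void parity_update(uint8_t *parity, const uint8_t *old_data,
                   const uint8_t *new_data)
{
        for (size_t i = 0; i < BLK; i++)
                parity[i] ^= old_data[i] ^ new_data[i];
}

/*
 * Rebuild one block of a failed data disk: XOR the parity block
 * with the same block from every surviving data disk.
 */
void rebuild_block(uint8_t *out, const uint8_t *parity,
                   const uint8_t *const survivors[], int nsurvivors)
{
        for (size_t i = 0; i < BLK; i++) {
                uint8_t v = parity[i];
                for (int d = 0; d < nsurvivors; d++)
                        v ^= survivors[d][i];
                out[i] = v;
        }
}

The cost is in the I/O rather than the XOR: the update path is two
reads plus two writes per logical write, and the parity half of that
traffic all lands on the single parity disk, which is where the SSD
wear concern above comes from.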