Subject: Re: When will Linux support M2 on RAID ?
From: "Austin S. Hemmelgarn"
To: Christoph Hellwig
Cc: "David F.", linux-kernel, "linux-raid@vger.kernel.org"
Date: Tue, 7 Mar 2017 10:54:54 -0500
In-Reply-To: <20170307151528.GA16216@infradead.org>

On 2017-03-07 10:15, Christoph Hellwig wrote:
> On Tue, Mar 07, 2017 at 09:50:22AM -0500, Austin S. Hemmelgarn wrote:
>> He's referring to the RAID mode most modern Intel chipsets have, which
>> (last I checked) Linux does not support completely, and which many OEMs
>> are enabling by default on new systems because it apparently provides
>> better performance than AHCI even for a single device.
>
> It actually provides worse performance.  What it does is that it shoves
> up to three NVMe device BARs into the BAR of an AHCI device, and
> requires the OS to handle them all using a single driver.  The monkeys
> on crack at Intel decided to do that to provide their "valuable" RSTe
> IP (which is a Windows ATA + RAID driver in a blob, and which has now
> also grown an NVMe driver).  The only remotely sane thing is to disable
> it in the BIOS, and burn all the people involved with it.  The next
> best thing is to provide a fake PCIe root port driver untangling this
> before it hits the driver, but unfortunately Intel is unwilling to
> either do this on their own or at least provide enough documentation
> for others to do it.
>
For NVMe, yeah, it hurts performance horribly.  For SATA devices,
though, it's hit or miss: some setups perform better, some perform
worse.  It does have one advantage, though: it lets you put the C drive
for a Windows install on a soft-RAID array insanely easily compared to
trying to do so through Windows itself (although still significantly
less easily than doing the equivalent on Linux...).

The cynic in me is tempted to believe that the OEMs who are turning it
on by default are trying to either:
1. Make their low-end systems look even crappier in terms of performance
while adding to their marketing checklist (of the systems I've seen that
have this on by default, most were cheap ones with really low specs).
2. Actively make it harder to run anything but Windows on their
hardware.
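
For anyone who wants to see what the remapping Christoph describes looks
like from the software side, here is a minimal, purely illustrative
user-space sketch that counts NVMe devices hidden behind an AHCI
controller's MMIO BAR.  The register layout it assumes (a vendor-specific
capability dword at 0xa4, a remap bitmap at 0x800, one 16 KiB window per
remapped device with the device class code mirrored near the end of each
window) is based on public descriptions of Intel's scheme, not on
anything in this thread, so treat every offset below as an assumption
rather than documentation.

/*
 * rst-remap-check.c: hypothetical sketch, not a reference implementation.
 * Counts NVMe devices that an Intel RST-mode AHCI controller appears to
 * have remapped into its own BAR.  All register offsets are assumptions.
 *
 * Usage (as root; path and PCI address are examples only):
 *   ./rst-remap-check /sys/bus/pci/devices/0000:00:17.0/resource5
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define VS_CAP_OFF    0xa4			/* assumed vendor-specific capability */
#define REMAP_CAP_OFF 0x800			/* assumed bitmap of remapped devices */
#define REMAP_BASE(i) (0x4000UL * ((i) + 1))	/* assumed 16 KiB window per device */
#define MAX_REMAP     3				/* "up to three" per the mail above */
#define CLASS_NVME    0x010802			/* PCI class code for NVMe */

static uint32_t mmio_read32(volatile uint8_t *base, unsigned long off)
{
	return *(volatile uint32_t *)(base + off);
}

int main(int argc, char **argv)
{
	struct stat st;
	int fd, i, found = 0;
	volatile uint8_t *mmio;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <ahci-bar-resource-file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(argv[1]);
		return 1;
	}
	/* The remap windows need room beyond the normal AHCI registers. */
	if ((unsigned long)st.st_size < REMAP_BASE(MAX_REMAP)) {
		fprintf(stderr, "BAR too small to hold remapped devices\n");
		return 1;
	}
	mmio = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	if (mmio == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* Assumed: bit 0 of the vendor capability means remapping is active. */
	if (!(mmio_read32(mmio, VS_CAP_OFF) & 1)) {
		printf("no NVMe remapping advertised\n");
		return 0;
	}
	for (i = 0; i < MAX_REMAP; i++) {
		if (!(mmio_read32(mmio, REMAP_CAP_OFF) & (1u << i)))
			continue;
		/* Assumed: each window mirrors the remapped device's class code. */
		if (mmio_read32(mmio, REMAP_BASE(i) + 0x3ffc) == CLASS_NVME)
			found++;
	}
	printf("%d remapped NVMe device(s) hidden behind the AHCI BAR\n", found);
	munmap((void *)mmio, st.st_size);
	close(fd);
	return 0;
}

The point of the sketch is only to make the problem concrete: when the
firmware enables this mode, the NVMe devices never appear as separate
PCI functions at all, which is why the normal nvme driver cannot bind to
them and why something like the fake root port driver mentioned above
would be needed to untangle them.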