Date: Tue, 25 Sep 2018 08:16:16 +0800
From: Ming Lei
To: Matthew Wilcox
Cc: Bart Van Assche, Andrey Ryabinin, Vitaly Kuznetsov, Christoph Hellwig,
	Ming Lei, linux-block, linux-mm, Linux FS Devel,
	"open list:XFS FILESYSTEM", Dave Chinner, Linux Kernel Mailing List,
	Jens Axboe, Christoph Lameter, Linus Torvalds, Greg Kroah-Hartman
Subject: Re: block: DMA alignment of IO buffer allocated from slab
Message-ID: <20180925001615.GA14386@ming.t460p>
References: <20180923224206.GA13618@ming.t460p>
	<38c03920-0fd0-0a39-2a6e-70cd8cb4ef34@virtuozzo.com>
	<20a20568-5089-541d-3cee-546e549a0bc8@acm.org>
	<12eee877-affa-c822-c9d5-fda3aa0a50da@virtuozzo.com>
	<1537801706.195115.7.camel@acm.org>
	<1537804720.195115.9.camel@acm.org>
	<10c706fd-2252-f11b-312e-ae0d97d9a538@virtuozzo.com>
	<1537805984.195115.14.camel@acm.org>
	<20180924185753.GA32269@bombadil.infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
In-Reply-To: <20180924185753.GA32269@bombadil.infradead.org>
User-Agent: Mutt/1.9.1 (2017-09-22)

On Mon, Sep 24, 2018 at 11:57:53AM -0700, Matthew Wilcox wrote:
> On Mon, Sep 24, 2018 at 09:19:44AM -0700, Bart Van Assche wrote:
> > That means that two buffers allocated with kmalloc() may share a cache
> > line on x86-64. Since it is allowed to use a buffer allocated by
> > kmalloc() for DMA, can this lead to data corruption, e.g. if the CPU
> > writes into one buffer allocated with kmalloc() and a device performs
> > a DMA write to another kmalloc() buffer and both write operations
> > affect the same cache line?
>
> You're not supposed to use kmalloc memory for DMA. This is why we have
> dma_alloc_coherent() and friends. Also, from DMA-API.txt:

Please take a look at USB drivers, or storage drivers, or the SCSI layer.
Lots of DMA buffers are allocated via kmalloc().

Also see the following description in DMA-API-HOWTO.txt:

	If the device supports DMA, the driver sets up a buffer using kmalloc()
	or a similar interface, which returns a virtual address (X). The
	virtual memory system maps X to a physical address (Y) in system RAM.
	The driver can use virtual address X to access the buffer, but the
	device itself cannot because DMA doesn't go through the CPU virtual
	memory system.
Also from DMA-API-HOWTO.txt:

	Types of DMA mappings
	=====================

	There are two types of DMA mappings:

	- Consistent DMA mappings which are usually mapped at driver
	  initialization, unmapped at the end and for which the hardware
	  should guarantee that the device and the CPU can access the data
	  in parallel and will see updates made by each other without any
	  explicit software flushing.

	  Think of "consistent" as "synchronous" or "coherent".

	- Streaming DMA mappings which are usually mapped for one DMA
	  transfer, unmapped right after it (unless you use dma_sync_* below)
	  and for which hardware can optimize for sequential accesses.

Thanks,
Ming