Date: Tue, 18 Apr 2017 09:03:19 +0900
From: Minchan Kim <minchan@kernel.org>
To: Christoph Lameter
Cc: Sergey Senozhatsky, Joonsoo Kim, Andrew Morton, Michal Hocko, Vlastimil Babka, Sergey Senozhatsky
Subject: Re: copy_page() on a kmalloc-ed page with DEBUG_SLAB enabled (was "zram: do not use copy_page with non-page aligned address")
Message-ID: <20170418000319.GC21354@bbox>
References: <20170417014803.GC518@jagdpanzerIV.localdomain>

On Mon, Apr 17, 2017 at 10:20:42AM -0500, Christoph Lameter wrote:
> On Mon, 17 Apr 2017, Sergey Senozhatsky wrote:
>
> > Minchan reported that doing copy_page() on a kmalloc(PAGE_SIZE) page
> > with DEBUG_SLAB enabled can cause a memory corruption (See below or
> > lkml.kernel.org/r/1492042622-12074-2-git-send-email-minchan@kernel.org )
>
> Yes the alignment guarantees do not require alignment on a page boundary.
>
> The alignment for kmalloc allocations is controlled by KMALLOC_MIN_ALIGN.
> Usually this is either double word aligned or cache line aligned.
>
> > that's an interesting problem. arm64 copy_page(), for instance, wants
> > src and dst to be page aligned, which is reasonable, while generic
> > copy_page(), on the contrary, simply does memcpy(). there are, probably,
> > other callpaths that do copy_page() on kmalloc-ed pages and I'm wondering
> > if there is some sort of a generic fix to the problem.
>
> Simple solution is to not allocate pages via the slab allocator but use
> the page allocator for this. The page allocator provides proper alignment.
>
> There is a reason it is called the page allocator because if you want a
> page you use the proper allocator for it.

It would be better if the API worked with struct page rather than a raw
address, but I can imagine there are many cases where we don't have the
struct page itself, and a kmap/kunmap round trip would be redundant.

Another approach is for the API to do the normal, byte-wise copy for the
non-aligned prefix and tail and the fast copy for the aligned middle.

Otherwise, I would be happy if the API at least had a WARN_ON for
addresses that are not PAGE_SIZE aligned.
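
To make the page-allocator suggestion above concrete, here is a minimal
sketch; the helper names alloc_scratch_page()/free_scratch_page() are made
up for illustration and are not existing zram code. __get_free_page()
always returns a page-aligned address, so arch copy_page() implementations
that require page alignment are safe on it, unlike kmalloc(PAGE_SIZE),
whose pointer can be shifted by the DEBUG_SLAB red zone.

/*
 * Sketch only, with made-up helper names: get a page-sized buffer from
 * the page allocator instead of kmalloc(PAGE_SIZE).  __get_free_page()
 * returns a page-aligned kernel virtual address, so copy_page() on the
 * result is safe even with slab debugging enabled.
 */
#include <linux/gfp.h>

static void *alloc_scratch_page(gfp_t gfp)
{
	return (void *)__get_free_page(gfp);
}

static void free_scratch_page(void *addr)
{
	free_page((unsigned long)addr);
}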
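
And a rough sketch of the prefix/tail idea; copy_page_unaligned() is a
hypothetical helper, not an existing kernel interface. It does a byte-wise
memcpy() for the unaligned head and tail, uses copy_page() only for the
page-aligned, page-sized middle, and falls back to plain memcpy() when src
and dst are misaligned differently.

/*
 * Hypothetical helper sketching the idea above; not an existing
 * kernel interface.  Byte-wise copy for the unaligned head and tail,
 * copy_page() for every page-aligned, page-sized chunk in between.
 */
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/string.h>

static void copy_page_unaligned(void *dst, const void *src, size_t len)
{
	/* bytes until src reaches the next page boundary (0 if aligned) */
	size_t head = (-(unsigned long)src) & ~PAGE_MASK;

	/* the fast path needs src and dst to be misaligned identically */
	if (((unsigned long)src ^ (unsigned long)dst) & ~PAGE_MASK) {
		memcpy(dst, src, len);
		return;
	}

	/* slow, byte-wise copy of the unaligned head */
	head = min(head, len);
	memcpy(dst, src, head);
	dst += head;
	src += head;
	len -= head;

	/* fast copy of the aligned, page-sized middle */
	while (len >= PAGE_SIZE) {
		copy_page(dst, (void *)src);
		dst += PAGE_SIZE;
		src += PAGE_SIZE;
		len -= PAGE_SIZE;
	}

	/* slow, byte-wise copy of the unaligned tail */
	memcpy(dst, src, len);
}

If we only want to catch offenders instead, something like
WARN_ON_ONCE(!PAGE_ALIGNED(src) || !PAGE_ALIGNED(dst)) at the top of such
a helper could flag the non-page-aligned callers Sergey mentioned.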