From: Henry Burns
Date: Wed, 3 Jul 2019 10:18:37 -0700
Subject: Re: [PATCH] mm/z3fold: Fix z3fold_buddy_slots use after free
To: Vitaly Wool
Cc: Andrew Morton, Vitaly Vul, Mike Rapoport, Xidong Wang, Shakeel Butt, Jonathan Adams, Linux-MM, LKML
References: <20190701173042.221453-1-henryburns@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

> > On Tue, Jul 2, 2019 at 12:45 AM Vitaly Wool wrote:
> > >
> > > Hi Henry,
> > >
> > > On Mon, Jul 1, 2019 at 8:31 PM Henry Burns wrote:
> > > >
> > > > Running z3fold stress testing with address sanitization
> > > > showed zhdr->slots was being used after it was freed.
> > > >
> > > > z3fold_free(z3fold_pool, handle)
> > > >   free_handle(handle)
> > > >     kmem_cache_free(pool->c_handle, zhdr->slots)
> > > >   release_z3fold_page_locked_list(kref)
> > > >     __release_z3fold_page(zhdr, true)
> > > >       zhdr_to_pool(zhdr)
> > > >         slots_to_pool(zhdr->slots)  *BOOM*
> > >
> > > Thanks for looking into this. I'm not entirely sure I'm all for
> > > splitting free_handle(), but let me think about it.
> > >
> > > > Instead we split free_handle() into two functions, release_handle()
> > > > and free_slots(). We use release_handle() in place of free_handle(),
> > > > and use free_slots() to call kmem_cache_free() after
> > > > __release_z3fold_page() is done.
> > >
> > > A little less intrusive solution would be to move the backlink to the
> > > pool from slots back to z3fold_header. Looks like it was a bad idea
> > > from the start.
> > >
> > > Best regards,
> > >    Vitaly
> >
> > We still want z3fold pages to be movable, though. Wouldn't moving
> > the backlink to the pool from slots to z3fold_header prevent us from
> > enabling migration?
>
> That is a valid point, but we can just add the pool pointer back to
> z3fold_header. The thing here is, there's another patch in the
> pipeline that allows for better (inter-page) compaction, and it will
> somewhat complicate things, because sometimes slots will have to be
> released after the z3fold page is released (they will hold a
> handle to another z3fold page). I would prefer that we just added
> pool back to z3fold_header, changed zhdr_to_pool() to just return
> zhdr->pool, made the compaction patch valid again, and then came
> back to size optimization.

I see your point; patch incoming.