From: Shakeel Butt
Date: Wed, 3 Jul 2019 13:14:34 -0700
Subject: Re: [PATCH] mm/z3fold: Fix z3fold_buddy_slots use after free
To: Vitaly Wool
Cc: Henry Burns, Andrew Morton, Vitaly Vul, Mike Rapoport, Xidong Wang,
    Jonathan Adams, Linux-MM, LKML
References: <20190701173042.221453-1-henryburns@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jul 2, 2019 at 11:03 PM Vitaly Wool wrote:
>
> On Tue, Jul 2, 2019 at 6:57 PM Henry Burns wrote:
> >
> > On Tue, Jul 2, 2019 at 12:45 AM Vitaly Wool wrote:
> > >
> > > Hi Henry,
> > >
> > > On Mon, Jul 1, 2019 at 8:31 PM Henry Burns wrote:
> > > >
> > > > Running z3fold stress testing with address sanitization
> > > > showed zhdr->slots was being used after it was freed.
> > > >
> > > > z3fold_free(z3fold_pool, handle)
> > > >   free_handle(handle)
> > > >     kmem_cache_free(pool->c_handle, zhdr->slots)
> > > >   release_z3fold_page_locked_list(kref)
> > > >     __release_z3fold_page(zhdr, true)
> > > >       zhdr_to_pool(zhdr)
> > > >         slots_to_pool(zhdr->slots)  *BOOM*
> > >
> > > Thanks for looking into this. I'm not entirely sure I'm all for
> > > splitting free_handle() but let me think about it.
> > >
> > > > Instead we split free_handle() into two functions, release_handle()
> > > > and free_slots(). We use release_handle() in place of free_handle(),
> > > > and use free_slots() to call kmem_cache_free() after
> > > > __release_z3fold_page() is done.
> > >
> > > A little less intrusive solution would be to move the backlink to the
> > > pool from slots back to z3fold_header. Looks like it was a bad idea
> > > from the start.
> > >
> > > Best regards,
> > >    Vitaly
> >
> > We still want z3fold pages to be movable, though. Wouldn't moving
> > the backlink to the pool from slots to z3fold_header prevent us from
> > enabling migration?
>
> That is a valid point, but we can just add the pool pointer back to
> z3fold_header. The thing here is, there's another patch in the
> pipeline that allows for better (inter-page) compaction, and it will
> somewhat complicate things, because sometimes slots will have to be
> released after the z3fold page is released (because they will hold a
> handle to another z3fold page). I would prefer that we just added the
> pool pointer back to z3fold_header and changed zhdr_to_pool() to just
> return zhdr->pool, then made the compaction patch valid again, and
> then we could come back to the size optimization.

By adding the pool pointer back to z3fold_header, will we still be able
to move/migrate/compact the z3fold pages?