From: Pavel Tatashin
Date: Fri, 26 Mar 2021 17:41:01 -0400
Subject: Re: [PATCH v2] loop: call __loop_clr_fd() with lo_mutex locked to avoid autoclear race
To: qiang.zhang@windriver.com
Cc: Jens Axboe, linux-block@vger.kernel.org, LKML
In-Reply-To: <20210326090057.30499-1-qiang.zhang@windriver.com>

On Fri, Mar 26, 2021 at 5:00 AM <qiang.zhang@windriver.com> wrote:
>
> From: Zqiang
>
> lo->lo_refcnt = 0
>
> CPU0                                  CPU1
> lo_open()                             lo_open()
>  mutex_lock(&lo->lo_mutex)
>  atomic_inc(&lo->lo_refcnt)
>  lo_refcnt == 1
>  mutex_unlock(&lo->lo_mutex)
>                                        mutex_lock(&lo->lo_mutex)
>                                        atomic_inc(&lo->lo_refcnt)
>                                        lo_refcnt == 2
>                                        mutex_unlock(&lo->lo_mutex)
> loop_clr_fd()
>  mutex_lock(&lo->lo_mutex)
>  atomic_read(&lo->lo_refcnt) > 1
>  lo->lo_flags |= LO_FLAGS_AUTOCLEAR
>  mutex_unlock(&lo->lo_mutex)          lo_release()
>  return                                mutex_lock(&lo->lo_mutex)
>                                        atomic_dec_return(&lo->lo_refcnt)
>                                        lo_refcnt == 1
>                                        mutex_unlock(&lo->lo_mutex)
>                                        return
>
> lo_release()
>  mutex_lock(&lo->lo_mutex)
>  atomic_dec_return(&lo->lo_refcnt)
>  lo_refcnt == 0
>  lo->lo_flags & LO_FLAGS_AUTOCLEAR
>   == true
>  mutex_unlock(&lo->lo_mutex)          loop_control_ioctl()
>                                        case LOOP_CTL_REMOVE:
>                                         mutex_lock(&lo->lo_mutex)
>                                         atomic_read(&lo->lo_refcnt) == 0
>  __loop_clr_fd(lo, true)                mutex_unlock(&lo->lo_mutex)
>   mutex_lock(&lo->lo_mutex)             loop_remove(lo)
>                                          mutex_destroy(&lo->lo_mutex)
>   ......                                 kfree(lo)
>   data race
>
> When different tasks on two CPUs perform the above operations on the
> same loop device, a data race may occur. Do not drop lo->lo_mutex
> before calling __loop_clr_fd(), so that the refcnt and
> LO_FLAGS_AUTOCLEAR checks in lo_release() stay in sync.

There is a race with the autoclear logic where a use-after-free may
occur, as shown in the scenario above. Do not drop lo->lo_mutex before
calling __loop_clr_fd(), so that the refcnt and LO_FLAGS_AUTOCLEAR
checks in lo_release() stay in sync.

Reviewed-by: Pavel Tatashin

>
> Fixes: 6cc8e7430801 ("loop: scale loop device by introducing per device lock")
> Signed-off-by: Zqiang
> ---
> v1->v2:
> Modify the title and commit message.
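To make the failure mode concrete, here is a minimal user-space
reduction of the anti-pattern shown in the scenario above: the lock is
dropped after the last-reference check and re-taken after the object
may already have been freed. All names here (struct dev,
release_then_clear, remove_dev) are hypothetical; this is a sketch of
the bug's shape, not the driver code:

#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct dev {
        pthread_mutex_t lock;   /* stands in for lo->lo_mutex  */
        atomic_int refcnt;      /* stands in for lo->lo_refcnt */
};

/* Thread A: mimics the old lo_release() -> __loop_clr_fd() split,
 * where the lock is dropped between the refcnt check and teardown. */
static void release_then_clear(struct dev *d)
{
        pthread_mutex_lock(&d->lock);
        int last = (atomic_fetch_sub(&d->refcnt, 1) == 1);
        pthread_mutex_unlock(&d->lock);   /* race window opens here */

        if (last) {
                /* remove_dev() may free *d inside the window above,
                 * so this relock is a potential use-after-free. */
                pthread_mutex_lock(&d->lock);
                /* ... teardown work ... */
                pthread_mutex_unlock(&d->lock);
        }
}

/* Thread B: mimics loop_control_ioctl(LOOP_CTL_REMOVE). */
static void remove_dev(struct dev *d)
{
        pthread_mutex_lock(&d->lock);
        int idle = (atomic_load(&d->refcnt) == 0);
        pthread_mutex_unlock(&d->lock);

        if (idle) {
                pthread_mutex_destroy(&d->lock);  /* mutex_destroy() */
                free(d);                          /* kfree(lo)       */
        }
}

If thread B runs to completion between thread A's unlock and relock,
thread A locks freed memory, which is exactly the "data race" point in
the diagram.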
>
>  drivers/block/loop.c | 11 ++++-------
>  1 file changed, 4 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index d58d68f3c7cd..5712f1698a66 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -1201,7 +1201,6 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
>         bool partscan = false;
>         int lo_number;
>
> -       mutex_lock(&lo->lo_mutex);
>         if (WARN_ON_ONCE(lo->lo_state != Lo_rundown)) {
>                 err = -ENXIO;
>                 goto out_unlock;
> @@ -1257,7 +1256,6 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
>         lo_number = lo->lo_number;
>         loop_unprepare_queue(lo);
>  out_unlock:
> -       mutex_unlock(&lo->lo_mutex);
>         if (partscan) {
>                 /*
>                  * bd_mutex has been held already in release path, so don't
> @@ -1288,12 +1286,11 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
>          * protects us from all the other places trying to change the 'lo'
>          * device.
>          */
> -       mutex_lock(&lo->lo_mutex);
> +
>         lo->lo_flags = 0;
>         if (!part_shift)
>                 lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN;
>         lo->lo_state = Lo_unbound;
> -       mutex_unlock(&lo->lo_mutex);
>
>         /*
>          * Need not hold lo_mutex to fput backing file. Calling fput holding
> @@ -1332,9 +1329,10 @@ static int loop_clr_fd(struct loop_device *lo)
>                 return 0;
>         }
>         lo->lo_state = Lo_rundown;
> +       err = __loop_clr_fd(lo, false);
>         mutex_unlock(&lo->lo_mutex);
>
> -       return __loop_clr_fd(lo, false);
> +       return err;
>  }
>
>  static int
> @@ -1916,13 +1914,12 @@ static void lo_release(struct gendisk *disk, fmode_t mode)
>                 if (lo->lo_state != Lo_bound)
>                         goto out_unlock;
>                 lo->lo_state = Lo_rundown;
> -               mutex_unlock(&lo->lo_mutex);
>                 /*
>                  * In autoclear mode, stop the loop thread
>                  * and remove configuration after last close.
>                  */
>                 __loop_clr_fd(lo, true);
> -               return;
> +               goto out_unlock;
>         } else if (lo->lo_state == Lo_bound) {
>                 /*
>                  * Otherwise keep thread (if running) and config,
> --
> 2.17.1
>
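Under the same hypothetical names as the sketch earlier in this mail,
the effect of the patch is to keep the last-reference check and the
teardown inside one critical section, so the remove path can never
interleave between them:

/* Fixed shape: teardown happens before the unlock, so remove_dev()
 * cannot observe refcnt == 0 and free the device while teardown is
 * still pending on another CPU. */
static void release_then_clear_fixed(struct dev *d)
{
        pthread_mutex_lock(&d->lock);
        if (atomic_fetch_sub(&d->refcnt, 1) == 1) {
                /* ... teardown work (the __loop_clr_fd() analogue) ... */
        }
        pthread_mutex_unlock(&d->lock);
}

The cost is that __loop_clr_fd() now runs its whole teardown with
lo_mutex held, which is why the patch moves the locking out of
__loop_clr_fd() and makes every caller responsible for holding the
mutex across the call.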