Date: Mon, 25 Jun 2018 14:55:40 +0200
From: Jan Kara
To: Paul Moore
Cc: jack@suse.cz, willy@infradead.org, baijiaju1990@gmail.com,
    Eric Paris, amir73il@gmail.com, linux-audit@redhat.com,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH] kernel: audit_tree: Fix a sleep-in-atomic-context bug
Message-ID: <20180625125540.gmhupkrkjrhwtj2p@quack2.suse.cz>
References: <20180621033245.10754-1-baijiaju1990@gmail.com>
 <20180621042912.GA4967@bombadil.infradead.org>
 <20180622092340.dzl2ea7tdkjdkdhg@quack2.suse.cz>
 <20180625092257.kyqnmn4ki7cuqkat@quack2.suse.cz>
In-Reply-To: <20180625092257.kyqnmn4ki7cuqkat@quack2.suse.cz>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon 25-06-18 11:22:57, Jan Kara wrote:
> On Fri 22-06-18 14:56:09, Paul Moore wrote:
> > On Fri, Jun 22, 2018 at 5:23 AM Jan Kara wrote:
> > > On Wed 20-06-18 21:29:12, Matthew Wilcox wrote:
> > > > On Thu, Jun 21, 2018 at 11:32:45AM +0800, Jia-Ju Bai wrote:
> > > > > The kernel may sleep while holding a spinlock.
> > > > > The function call paths (from bottom to top) in Linux-4.16.7 are:
> > > > >
> > > > > [FUNC] kmem_cache_alloc(GFP_KERNEL)
> > > > > fs/notify/mark.c, 439:
> > > > >     kmem_cache_alloc in fsnotify_attach_connector_to_object
> > > > > fs/notify/mark.c, 520:
> > > > >     fsnotify_attach_connector_to_object in fsnotify_add_mark_list
> > > > > fs/notify/mark.c, 590:
> > > > >     fsnotify_add_mark_list in fsnotify_add_mark_locked
> > > > > kernel/audit_tree.c, 437:
> > > > >     fsnotify_add_mark_locked in tag_chunk
> > > > > kernel/audit_tree.c, 423:
> > > > >     spin_lock in tag_chunk
> > > >
> > > > There are several locks here; your report would be improved by saying
> > > > which one is the problem. I'm assuming it's old_entry->lock.
> > > >
> > > >     spin_lock(&old_entry->lock);
> > > > ...
> > > >     if (fsnotify_add_inode_mark_locked(chunk_entry,
> > > >                                        old_entry->connector->inode, 1)) {
> > > > ...
> > > >     return fsnotify_add_mark_locked(mark, inode, NULL, allow_dups);
> > > > ...
> > > >     ret = fsnotify_add_mark_list(mark, inode, mnt, allow_dups);
> > > > ...
> > > >     if (inode)
> > > >         connp = &inode->i_fsnotify_marks;
> > > >     conn = fsnotify_grab_connector(connp);
> > > >     if (!conn) {
> > > >         err = fsnotify_attach_connector_to_object(connp, inode, mnt);
> > > >
> > > > It seems to me that this is safe because old_entry is looked up from
> > > > fsnotify_find_mark, and it can't be removed while its lock is held.
> > > > Therefore there's always a 'conn' returned from fsnotify_grab_connector(),
> > > > and so this path will never be taken.
> > > >
> > > > But this code path is confusing to me, and I could be wrong. Jan, please
> > > > confirm my analysis is correct?
> > >
> > > Yes, you are correct. The presence of another mark in the list (and the
> > > fact we pin it there using refcount & mark_mutex) guarantees we won't need
> > > to allocate the connector. I agree the audit code's use of fsnotify would
> > > deserve some cleanup.
> >
> > I'm always open to suggestions and patches (hint, hint) from the
> > fsnotify experts ;)
>
> Yeah, I was looking into it on Friday and today :). Currently I've got a
> bit stuck because I think I've found some races in the audit_tree code and
> I haven't yet decided how to fix them. E.g. am I right that the following
> can happen?
>
> CPU1                                      CPU2
> tag_chunk(inode, tree1)                   tag_chunk(inode, tree2)
>   old_entry = fsnotify_find_mark();         old_entry = fsnotify_find_mark();
>   old = container_of(old_entry);            old = container_of(old_entry);
>   chunk = alloc_chunk(old->count + 1);      chunk = alloc_chunk(old->count + 1);
>   mutex_lock(&group->mark_mutex);
>   adds new mark
>   replaces chunk
>   old->dead = 1;
>   mutex_unlock(&group->mark_mutex);
>                                             mutex_lock(&group->mark_mutex);
>                                             if (!(old_entry->flags &
>                                                   FSNOTIFY_MARK_FLAG_ATTACHED)) {
>                                             check fails as old_entry is
>                                               not yet destroyed
>                                             adds new mark
>                                             replaces old chunk again ->
>                                               list corruption, lost refs, ...
>                                             mutex_unlock(&group->mark_mutex);
>
> Generally there's a bigger problem: the audit_tree code can have multiple
> marks attached to one inode, but only one of them is the "valid" one (i.e.,
> the one embedded in the latest chunk). This is only a temporary state until
> fsnotify_destroy_mark() detaches the mark and then, on the last reference
> drop, we really remove the mark from the inode's list, but during that
> window it is undefined which mark is returned from fsnotify_find_mark()...
>
> So am I right that the above can really happen, or is there some
> higher-level synchronization I'm missing?
If this can really happen, I think I'll need to rework the code so that
audit_tree has just one mark attached per inode, and probably have it point
to the current chunk.

Also, am I right to assume that if two tag_chunk() calls race, both try to
add a new fsnotify mark in create_chunk(), and one of them fails, then the
resulting ENOSPC error from create_chunk() is actually a bug? Because from
looking at the code, it seems the desired behavior is for tag_chunk() to
add 'tree' to the chunk, expanding the chunk as needed.

								Honza
--
Jan Kara
SUSE Labs, CR