Message-ID: <557B5EBF.6010105@hp.com>
Date: Fri, 12 Jun 2015 18:35:43 -0400
From: Waiman Long
To: Stephen Smalley
CC: Raghavendra K T, Paul Moore, Eric Paris, James Morris,
    "Serge E. Hallyn", selinux@tycho.nsa.gov, linux-kernel@vger.kernel.org,
    linux-security-module@vger.kernel.org, Scott J Norton, Douglas Hatch
Subject: Re: [PATCH v2] selinux: reduce locking overhead in inode_free_security()
References: <1434058284-56634-1-git-send-email-Waiman.Long@hp.com>
 <557A7B91.4000502@linux.vnet.ibm.com> <557AD10D.6060200@tycho.nsa.gov>
In-Reply-To: <557AD10D.6060200@tycho.nsa.gov>

On 06/12/2015 08:31 AM, Stephen Smalley wrote:
> On 06/12/2015 02:26 AM, Raghavendra K T wrote:
>> On 06/12/2015 03:01 AM, Waiman Long wrote:
>>> The inode_free_security() function currently takes the superblock's
>>> isec_lock before checking and trying to remove the inode security
>>> struct from the linked list. In many cases the list is empty, so
>>> taking the lock is wasteful as no useful work is done. On
>>> multi-socket systems with a large number of CPUs, there can also be
>>> a fair amount of spinlock contention on the isec_lock if many tasks
>>> are exiting at the same time.
>>>
>>> This patch changes the code to check the state of the list before
>>> taking the lock and attempting to dequeue the entry. As this function
>>> is called indirectly from __destroy_inode(), there can't be another
>>> instance of inode_free_security() running on the same inode.
>>>
>>> Signed-off-by: Waiman Long
>>> ---
>>>  security/selinux/hooks.c | 15 ++++++++++++---
>>>  1 files changed, 12 insertions(+), 3 deletions(-)
>>>
>>> v1->v2:
>>>  - Take out the second list_empty() test inside the lock.
>>>
>>> diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
>>> index 7dade28..e5cdad7 100644
>>> --- a/security/selinux/hooks.c
>>> +++ b/security/selinux/hooks.c
>>> @@ -254,10 +254,19 @@ static void inode_free_security(struct inode *inode)
>>>  	struct inode_security_struct *isec = inode->i_security;
>>>  	struct superblock_security_struct *sbsec = inode->i_sb->s_security;
>>>
>>> -	spin_lock(&sbsec->isec_lock);
>>> -	if (!list_empty(&isec->list))
>>> +	/*
>>> +	 * As not all inode security structures are in a list, we check
>>> +	 * for an empty list outside of the lock to make sure that we
>>> +	 * won't waste time taking a lock and doing nothing. As
>>> +	 * inode_free_security() is called indirectly from
>>> +	 * __destroy_inode(), there is no way there can be two or more
>>> +	 * concurrent calls. So doing the list_empty() test outside the
>>> +	 * lock should be safe.
>>> +	 */
>>> +	if (!list_empty(&isec->list)) {
>>> +		spin_lock(&sbsec->isec_lock);
>>>  		list_del_init(&isec->list);
>> Stupid question,
>>
>> I need to take a look at the list_del_init() code, but it could happen
>> that if the !list_empty() checks run simultaneously, two
>> list_del_init() calls can then happen serially.
>>
>> Is that not a problem?
> Hmm...I suppose that's possible (sb_finish_set_opts and
> inode_free_security could both perform the list_del_init). OK, we'll
> stay with the first version.

Actually, list_del_init() can be applied twice with no harm done. The
first list_del_init() will set list->next = list->prev = list. The
second one will do the same thing, so it should be safe.
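To illustrate, here is a minimal standalone userspace sketch of the
relevant helpers, simplified from include/linux/list.h (the in-kernel
versions are essentially the same stores, modulo things like
WRITE_ONCE() in newer trees). The second list_del_init() just rewrites
the same self-referencing pointers:

	#include <assert.h>
	#include <stdio.h>

	struct list_head {
		struct list_head *next, *prev;
	};

	static void INIT_LIST_HEAD(struct list_head *list)
	{
		list->next = list;
		list->prev = list;
	}

	static void __list_del(struct list_head *prev, struct list_head *next)
	{
		next->prev = prev;
		prev->next = next;
	}

	/* Unlink the entry, then leave it pointing at itself. */
	static void list_del_init(struct list_head *entry)
	{
		__list_del(entry->prev, entry->next);
		INIT_LIST_HEAD(entry);
	}

	static void list_add(struct list_head *entry, struct list_head *head)
	{
		entry->next = head->next;
		entry->prev = head;
		head->next->prev = entry;
		head->next = entry;
	}

	int main(void)
	{
		struct list_head head, node;

		INIT_LIST_HEAD(&head);
		list_add(&node, &head);

		list_del_init(&node);	/* unlinks node and self-links it */
		list_del_init(&node);	/* __list_del(&node, &node): same stores */

		/* Both calls leave node self-linked and the list intact. */
		assert(node.next == &node && node.prev == &node);
		assert(head.next == &head && head.prev == &head);
		printf("second list_del_init() is effectively a no-op\n");
		return 0;
	}

After the first call the node points only at itself, so the second
call's __list_del(&node, &node) stores the very same pointer values
again before re-initializing the entry.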
Cheers,
Longman