Message-ID: <31d6e8d6-0747-a282-746b-5c144a9970bb@canonical.com>
Date: Mon, 26 Jun 2023 17:31:37 -0700
Subject: Re: [PATCH v3] apparmor: global buffers spin lock may get contended
From: John Johansen
Organization: Canonical
To: Anil Altinay
Cc: Sebastian Andrzej Siewior, LKLM, Sergey Senozhatsky, Peter Zijlstra, Tomasz Figa, linux-security-module@vger.kernel.org
References: <4595e7b4-ea31-5b01-f636-259e84737dfc@canonical.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 6/26/23 16:33, Anil Altinay wrote:
> Hi John,
>
> I was wondering if you got a chance to work on patch v4. Please let me
> know if you need help with testing.
>
yeah, testing help is always much appreciated. I have a v4, and I am
working on 3 alternate versions to compare against, to help give a better
sense of whether we can get away with simplifying or tweaking the scaling.
I should be able to post them out some time tonight.

> Best,
> Anil
>
> On Tue, Feb 21, 2023 at 1:27 PM Anil Altinay wrote:
>
> I can test the patch with 5.10 and 5.15 kernels on different machines.
> Just let me know which machine types you would like me to test.
>
> > On Mon, Feb 20, 2023 at 12:42 AM John Johansen wrote:
> >
> > On 2/17/23 02:44, Sebastian Andrzej Siewior wrote:
> > > On 2023-02-16 16:08:10 [-0800], John Johansen wrote:
> > >> --- a/security/apparmor/lsm.c
> > >> +++ b/security/apparmor/lsm.c
> > >> @@ -49,12 +49,19 @@ union aa_buffer {
> > >>      char buffer[1];
> > >>  };
> > >> +struct aa_local_cache {
> > >> +    unsigned int contention;
> > >> +    unsigned int hold;
> > >> +    struct list_head head;
> > >> +};
> > >
> > > if you stick a local_lock_t into that struct, then you could replace
> > >       cache = get_cpu_ptr(&aa_local_buffers);
> > > with
> > >       local_lock(&aa_local_buffers.lock);
> > >       cache = this_cpu_ptr(&aa_local_buffers);
> > >
> > > You would get the preempt_disable() based locking for the per-CPU
> > > variable (as with get_cpu_ptr()) and additionally some lockdep
> > > validation which would warn if it is used outside of task context (IRQ).
> > >
> > I did look at local_locks and there was a reason I didn't use them. I
> > can't recall, as the original iteration of this is over a year old now.
> > I will have to dig into it again.
> >
> > > I didn't completely parse the hold/contention logic but it seems to work ;)
> > > You check "cache->count >= 2" twice but I don't see an inc/dec of it,
> > > nor is it part of aa_local_cache.
> > >
> > Sadly I messed up the reordering of this and the debug patch. This will be
> > fixed in v4.
> >
> > > I can't parse how many items can end up on the local list if the global
> > > list is locked. My guess would be more than 2 due to the ->hold parameter.
> > >
> > So this iteration forces pushing back to the global list if there are already
> > two on the local list. The hold parameter just affects how long the
> > buffers remain on the local list before trying to place them back on
> > the global list.
> >
> > Originally, before the count was added, more than 2 buffers could end up
> > on the local list, and having too many local buffers is a waste of
> > memory. The count was added to address this. The value of 2 (which should
> > be switched to a define) was chosen because no mediation routine currently
> > uses more than 2 buffers.
> >
> > Note that this doesn't mean that more than two buffers can be allocated
> > to a task on a CPU. It is possible in some cases for a task to have
> > allocated buffers and still have buffers on the local cache list.
> >
> > > Do you have any numbers on the machine and performance it improved? It
> > > sure will be a good selling point.
> > >
> > I can include some supporting info for a 16-core machine. But it will
> > take some time for me to get access to a bigger machine, where this
> > is much more important. Hence the call for some of the other people
> > on this thread to test.
> >
> > thanks for the feedback
> >
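The put-back policy John describes above can be sketched as a small userspace model. This is a hypothetical illustration, not the actual kernel code: the names, the singly linked lists, and the missing locking are all simplifications (the real patch also tracks contention and a hold time, and the global list sits behind the spin lock the patch is trying to avoid). It only shows the cap-at-two behavior:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_LOCAL 2  /* no mediation routine currently uses more than 2 buffers */

struct buffer {
	struct buffer *next;
};

struct local_cache {      /* stands in for the per-CPU aa_local_cache */
	unsigned int count;   /* free buffers currently held on the local list */
	struct buffer *head;
};

struct global_pool {      /* stands in for the spin-lock-protected global list */
	struct buffer *head;
};

/* Free a buffer: keep it on the lock-free local list unless that list
 * already holds MAX_LOCAL buffers, in which case push it back to the
 * global list (the slow path that would take the spin lock). */
static void put_buffer(struct local_cache *cache, struct global_pool *pool,
		       struct buffer *buf)
{
	if (cache->count >= MAX_LOCAL) {
		buf->next = pool->head;   /* slow path: back to the global list */
		pool->head = buf;
	} else {
		buf->next = cache->head;  /* fast path: per-CPU, no lock needed */
		cache->head = buf;
		cache->count++;
	}
}
```

Freeing three buffers in a row leaves two on the local list and pushes the third to the global pool, matching the description that the local list is never allowed to grow past two.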