Subject: Re: [PATCH v8 03/10] mm: thp: Introduce multi-size THP sysfs interface
From: Ryan Roberts <ryan.roberts@arm.com>
Date: Thu, 7 Dec 2023 11:22:59 +0000
Message-ID: <4aa520f0-7c84-4e93-88bf-aee6d8d3ea70@arm.com>
In-Reply-To: <378afc6b-f93a-48ad-8aa6-ab171f3b9613@redhat.com>
References: <20231204102027.57185-1-ryan.roberts@arm.com>
 <20231204102027.57185-4-ryan.roberts@arm.com>
 <004ed23d-5571-4474-b7fe-7bc08817c165@redhat.com>
 <378afc6b-f93a-48ad-8aa6-ab171f3b9613@redhat.com>
To: David Hildenbrand, Andrew Morton, Matthew Wilcox, Yin Fengwei, Yu Zhao,
 Catalin Marinas, Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan,
 Luis Chamberlain, Itaru Kitayama, "Kirill A. Shutemov", John Hubbard,
 David Rientjes, Vlastimil Babka, Hugh Dickins, Kefeng Wang,
 Barry Song <21cnbao@gmail.com>, Alistair Popple
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org
Shutemov" , John Hubbard , David Rientjes , Vlastimil Babka , Hugh Dickins , Kefeng Wang , Barry Song <21cnbao@gmail.com>, Alistair Popple Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org References: <20231204102027.57185-1-ryan.roberts@arm.com> <20231204102027.57185-4-ryan.roberts@arm.com> <004ed23d-5571-4474-b7fe-7bc08817c165@redhat.com> <378afc6b-f93a-48ad-8aa6-ab171f3b9613@redhat.com> From: Ryan Roberts In-Reply-To: <378afc6b-f93a-48ad-8aa6-ab171f3b9613@redhat.com> Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (snail.vger.email [0.0.0.0]); Thu, 07 Dec 2023 03:23:12 -0800 (PST) On 07/12/2023 11:13, David Hildenbrand wrote: >>> >>>> + >>>>        if (!vma->vm_mm)        /* vdso */ >>>> -        return false; >>>> +        return 0; >>>>          /* >>>>         * Explicitly disabled through madvise or prctl, or some >>>> @@ -88,16 +141,16 @@ bool hugepage_vma_check(struct vm_area_struct *vma, >>>> unsigned long vm_flags, >>>>         * */ >>>>        if ((vm_flags & VM_NOHUGEPAGE) || >>>>            test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)) >>>> -        return false; >>>> +        return 0; >>>>        /* >>>>         * If the hardware/firmware marked hugepage support disabled. >>>>         */ >>>>        if (transparent_hugepage_flags & (1 << >>>> TRANSPARENT_HUGEPAGE_UNSUPPORTED)) >>>> -        return false; >>>> +        return 0; >>>>          /* khugepaged doesn't collapse DAX vma, but page fault is fine. */ >>>>        if (vma_is_dax(vma)) >>>> -        return in_pf; >>>> +        return in_pf ? orders : 0; >>>>          /* >>>>         * khugepaged special VMA and hugetlb VMA. >>>> @@ -105,17 +158,29 @@ bool hugepage_vma_check(struct vm_area_struct *vma, >>>> unsigned long vm_flags, >>>>         * VM_MIXEDMAP set. >>>>         */ >>>>        if (!in_pf && !smaps && (vm_flags & VM_NO_KHUGEPAGED)) >>>> -        return false; >>>> +        return 0; >>>>          /* >>>> -     * Check alignment for file vma and size for both file and anon vma. >>>> +     * Check alignment for file vma and size for both file and anon vma by >>>> +     * filtering out the unsuitable orders. >>>>         * >>>>         * Skip the check for page fault. Huge fault does the check in fault >>>> -     * handlers. And this check is not suitable for huge PUD fault. >>>> +     * handlers. >>>>         */ >>>> -    if (!in_pf && >>>> -        !transhuge_vma_suitable(vma, (vma->vm_end - HPAGE_PMD_SIZE))) >>>> -        return false; >>>> +    if (!in_pf) { >>>> +        int order = first_order(orders); >>>> +        unsigned long addr; >>>> + >>>> +        while (orders) { >>>> +            addr = vma->vm_end - (PAGE_SIZE << order); >>>> +            if (thp_vma_suitable_orders(vma, addr, BIT(order))) >>>> +                break; >>> >>> Comment: you'd want a "thp_vma_suitable_order" helper here. But maybe the >>> compiler is smart enough to optimize the loop and everyything else out. 
>>
>> I'm happy to refactor so that thp_vma_suitable_order() is the basic primitive,
>> then make thp_vma_suitable_orders() a loop that calls thp_vma_suitable_order()
>> (that's basically how it is laid out already, just all in one function). Is
>> that what you are requesting?
>
> You got the spirit, yes.
>
>>>
>>> [...]
>>>
>>>> +
>>>> +static ssize_t thpsize_enabled_store(struct kobject *kobj,
>>>> +                     struct kobj_attribute *attr,
>>>> +                     const char *buf, size_t count)
>>>> +{
>>>> +    int order = to_thpsize(kobj)->order;
>>>> +    ssize_t ret = count;
>>>> +
>>>> +    if (sysfs_streq(buf, "always")) {
>>>> +        set_bit(order, &huge_anon_orders_always);
>>>> +        clear_bit(order, &huge_anon_orders_inherit);
>>>> +        clear_bit(order, &huge_anon_orders_madvise);
>>>> +    } else if (sysfs_streq(buf, "inherit")) {
>>>> +        set_bit(order, &huge_anon_orders_inherit);
>>>> +        clear_bit(order, &huge_anon_orders_always);
>>>> +        clear_bit(order, &huge_anon_orders_madvise);
>>>> +    } else if (sysfs_streq(buf, "madvise")) {
>>>> +        set_bit(order, &huge_anon_orders_madvise);
>>>> +        clear_bit(order, &huge_anon_orders_always);
>>>> +        clear_bit(order, &huge_anon_orders_inherit);
>>>> +    } else if (sysfs_streq(buf, "never")) {
>>>> +        clear_bit(order, &huge_anon_orders_always);
>>>> +        clear_bit(order, &huge_anon_orders_inherit);
>>>> +        clear_bit(order, &huge_anon_orders_madvise);
>>>
>>> Note: I was wondering for a second if some concurrent games could lead to an
>>> inconsistent state. I think in the worst case we'll simply end up with "never"
>>> on races.
>>
>> You mean if different threads try to write different values to this file
>> concurrently? Or if there is a concurrent fault that tries to read the flags
>> while they are being modified?
>
> I thought about what you said first, but what you said last might also apply.
> As long as "nothing breaks", all good.
>
>>
>> I thought about this for a long time too and wasn't sure what was best. The
>> existing global enabled store implementation clears the bits first, then sets
>> the bit. With this approach you can end up with multiple bits set if there is
>> a race to set different values, and you can end up with a faulting thread
>> seeing "never" if it reads the bits after they have been cleared but before
>> they are set.
>
> Right, but user space is playing stupid games and can win stupid prizes. As
> long as nothing breaks, we're good.
>
>>
>> I decided to set the new bit before clearing the old bits, which is different;
>> a racing fault will never see "never", but, as you say, a race to set the file
>> could result in "never" being set.
>>
>> On reflection, it's probably best to set the bit *last*, like the global
>> control does?
>
> We could probably just slap a simple spinlock in there, so at least the writer
> side is completely serialized. Then you can just set the bit last. It's
> unlikely that readers will actually run into issues, and if they ever would,
> we could use some RCU magic to let them read a consistent state.

I'd prefer to leave it as it is now: clear first, set last, without any explicit
serialization. I've convinced myself that nothing breaks, and it's the same
pattern used by the global control, so it's consistent. Unless you're insisting
on the spinlock?
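To make the refactor discussed above concrete, this is roughly the shape I have
in mind (untested sketch only; the body of the order check and the first_order()
fallback here are illustrative assumptions, not the actual patch):

/* Illustrative assumption; the patch defines its own first_order(). */
#define first_order(orders)	(fls_long(orders) - 1)

/* The basic primitive: can a THP of this order be mapped at addr? */
static inline bool thp_vma_suitable_order(struct vm_area_struct *vma,
					  unsigned long addr, int order)
{
	unsigned long hpage_size = PAGE_SIZE << order;
	unsigned long haddr = ALIGN_DOWN(addr, hpage_size);

	/* The aligned huge page must lie entirely within the VMA. */
	return haddr >= vma->vm_start && haddr + hpage_size <= vma->vm_end;
}

/* The orders variant is just a loop over the primitive. */
static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
						    unsigned long addr,
						    unsigned long orders)
{
	/*
	 * Drop orders from the bitfield, highest first, until one fits.
	 * Every lower order has strictly weaker alignment and size
	 * requirements, so it fits wherever a higher one does and the
	 * remaining bits can be returned unchecked.
	 */
	while (orders) {
		int order = first_order(orders);

		if (thp_vma_suitable_order(vma, addr, order))
			break;
		orders &= ~BIT(order);
	}

	return orders;
}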
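And for reference, if you do insist, my understanding of the spinlock variant
would be something like the below (sketch only; huge_anon_orders_lock is
hypothetical, and the -EINVAL fallback is assumed since the quoted hunk ends
before the final else). With writers serialized and the new bit set last, a
racing fault can transiently observe "never", but never a mix of two modes:

/* Hypothetical lock serializing writers; not in the current patch. */
static DEFINE_SPINLOCK(huge_anon_orders_lock);

static ssize_t thpsize_enabled_store(struct kobject *kobj,
				     struct kobj_attribute *attr,
				     const char *buf, size_t count)
{
	int order = to_thpsize(kobj)->order;
	ssize_t ret = count;

	spin_lock(&huge_anon_orders_lock);

	if (sysfs_streq(buf, "always")) {
		/* Clear the other modes first, set the new mode last. */
		clear_bit(order, &huge_anon_orders_inherit);
		clear_bit(order, &huge_anon_orders_madvise);
		set_bit(order, &huge_anon_orders_always);
	} else if (sysfs_streq(buf, "inherit")) {
		clear_bit(order, &huge_anon_orders_always);
		clear_bit(order, &huge_anon_orders_madvise);
		set_bit(order, &huge_anon_orders_inherit);
	} else if (sysfs_streq(buf, "madvise")) {
		clear_bit(order, &huge_anon_orders_always);
		clear_bit(order, &huge_anon_orders_inherit);
		set_bit(order, &huge_anon_orders_madvise);
	} else if (sysfs_streq(buf, "never")) {
		clear_bit(order, &huge_anon_orders_always);
		clear_bit(order, &huge_anon_orders_inherit);
		clear_bit(order, &huge_anon_orders_madvise);
	} else {
		ret = -EINVAL;	/* assumed; not visible in the quoted hunk */
	}

	spin_unlock(&huge_anon_orders_lock);
	return ret;
}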