Subject: Re: [PATCH v5 2/2] iommu/iova: Free global iova rcache on iova alloc failure
To: Robin Murphy
References: <1601451864-5956-1-git-send-email-vjitta@codeaurora.org>
 <1601451864-5956-2-git-send-email-vjitta@codeaurora.org>
From: John Garry
Date: Tue, 3 Nov 2020 14:31:20 +0000
X-Mailing-List: linux-kernel@vger.kernel.org

On 03/11/2020 12:35, Robin Murphy wrote:
> On 2020-09-30 08:44, vjitta@codeaurora.org wrote:
>> From: Vijayanand Jitta
>>
>> Whenever an iova alloc request fails, we free the iova ranges present
>> in the percpu iova rcaches and then retry, but the global iova rcache
>> is not freed. As a result, we could still see iova alloc failures even
>> after the retry, because the global rcache is holding the IOVAs, which
>> can cause fragmentation. So, free the global iova rcache as well and
>> then go for the retry.

If we do clear all the CPU rcaches, it would be nice to have something
immediately available to replenish them, i.e. use the global rcache
instead of flushing it, if that is not required... (a rough, untested
sketch of that idea is at the end of this mail).

> This looks reasonable to me - it's mildly annoying that we end up with
> so many similar-looking functions,

Well, I did add a function to clear all CPU rcaches here, if you would
like to check:

https://lore.kernel.org/linux-iommu/1603733501-211004-2-git-send-email-john.garry@huawei.com/

> but the necessary differences are
> right down in the middle of the loops so nothing can reasonably be
> factored out :(
>
> Reviewed-by: Robin Murphy
>
>> Signed-off-by: Vijayanand Jitta
>> ---
>>   drivers/iommu/iova.c | 23 +++++++++++++++++++++++
>>   1 file changed, 23 insertions(+)
>>
>> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
>> index c3a1a8e..faf9b13 100644
>> --- a/drivers/iommu/iova.c
>> +++ b/drivers/iommu/iova.c
>> @@ -25,6 +25,7 @@ static void init_iova_rcaches(struct iova_domain *iovad);
>>   static void free_iova_rcaches(struct iova_domain *iovad);
>>   static void fq_destroy_all_entries(struct iova_domain *iovad);
>>   static void fq_flush_timeout(struct timer_list *t);
>> +static void free_global_cached_iovas(struct iova_domain *iovad);

A thought: it would be great if the file could be rearranged at some
point so that we don't require so many forward declarations.

>>   void
>>   init_iova_domain(struct iova_domain *iovad, unsigned long granule,
>> @@ -442,6 +443,7 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
>>           flush_rcache = false;
>>           for_each_online_cpu(cpu)
>>               free_cpu_cached_iovas(cpu, iovad);
>> +        free_global_cached_iovas(iovad);
>>           goto retry;
>>       }
>> @@ -1057,5 +1059,26 @@ void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad)
>>       }
>>   }
>> +/*
>> + * free all the IOVA ranges of global cache
>> + */
>> +static void free_global_cached_iovas(struct iova_domain *iovad)
>> +{
>> +    struct iova_rcache *rcache;
>> +    unsigned long flags;
>> +    int i, j;
>> +
>> +    for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
>> +        rcache = &iovad->rcaches[i];
>> +        spin_lock_irqsave(&rcache->lock, flags);
>> +        for (j = 0; j < rcache->depot_size; ++j) {
>> +            iova_magazine_free_pfns(rcache->depot[j], iovad);
>> +            iova_magazine_free(rcache->depot[j]);
>> +            rcache->depot[j] = NULL;

I don't think that NULLify is strictly necessary.

>> +        }
>> +        rcache->depot_size = 0;
>> +        spin_unlock_irqrestore(&rcache->lock, flags);
>> +    }
>> +}
>>   MODULE_AUTHOR("Anil S Keshavamurthy ");
>>   MODULE_LICENSE("GPL");
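
As an aside, here is a very rough and completely untested sketch of the
"replenish instead of flush" idea mentioned above, just to show the shape
of it. The function name replenish_cpu_rcache_from_depot() is made up, and
it assumes the per-CPU cache layout currently in drivers/iommu/iova.c
(rcache->cpu_rcaches with a 'loaded' magazine and its own lock), which is
not visible in this patch:

/*
 * Rough sketch (untested): after the per-CPU rcaches have been flushed on
 * allocation failure, hand a depot magazine to this CPU's rcache instead
 * of returning its IOVAs to the rbtree, so the retry has something cheap
 * to allocate from.
 */
static void replenish_cpu_rcache_from_depot(struct iova_domain *iovad)
{
	struct iova_cpu_rcache *cpu_rcache;
	struct iova_rcache *rcache;
	unsigned long flags;
	int i;

	for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
		rcache = &iovad->rcaches[i];
		cpu_rcache = raw_cpu_ptr(rcache->cpu_rcaches);

		/* Same lock order as the existing fast path: CPU lock, then depot lock */
		spin_lock_irqsave(&cpu_rcache->lock, flags);
		spin_lock(&rcache->lock);
		if (rcache->depot_size && iova_magazine_empty(cpu_rcache->loaded)) {
			/* Swap the (now empty) loaded magazine for a full one from the depot */
			iova_magazine_free(cpu_rcache->loaded);
			cpu_rcache->loaded = rcache->depot[--rcache->depot_size];
		}
		spin_unlock(&rcache->lock);
		spin_unlock_irqrestore(&cpu_rcache->lock, flags);
	}
}

Whether keeping those IOVAs cached actually helps depends on whether the
retry needs them back in the tree to satisfy a large contiguous request,
so treat it purely as an illustration of the idea.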