Subject: Re: [PATCH v2 1/2] iommu/iova: Retry from last rb tree node if iova search fails
To: vjitta@codeaurora.org, joro@8bytes.org, iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Cc: vinmenon@codeaurora.org, kernel-team@android.com
From: Robin Murphy
Message-ID: <07270e29-c9d4-ae8c-a236-eb6fefccbf6c@arm.com>
Date: Fri, 18 Sep 2020 15:18:04 +0100
In-Reply-To: <1597927761-24441-1-git-send-email-vjitta@codeaurora.org>
References: <1597927761-24441-1-git-send-email-vjitta@codeaurora.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2020-08-20 13:49, vjitta@codeaurora.org wrote:
> From: Vijayanand Jitta
>
> Whenever a new iova alloc request comes in, the iova is always searched
> from the cached node and the nodes previous to the cached node. So even
> if there is free iova space available in the nodes next to the cached
> node, iova allocation can still fail because of this approach.
>
> Consider the following sequence of iova allocs and frees on
> 1GB of iova space:
>
> 1) alloc - 500MB
> 2) alloc - 12MB
> 3) alloc - 499MB
> 4) free  - 12MB which was allocated in step 2
> 5) alloc - 13MB
>
> After the above sequence we will have 12MB of free iova space, and the
> cached node will be pointing to the iova pfn of the last alloc of 13MB,
> which is the lowest iova pfn of that iova space. Now if we get an
> alloc request of 2MB, we just search from the cached node and then look
> for lower iova pfns for free iova, and as there aren't any, the iova
> alloc fails even though there is 12MB of free iova space.
>
> To avoid such iova search failures, do a retry from the last rb tree
> node when the iova search fails; this will search the entire tree and
> get an iova if it is available.
>
> Signed-off-by: Vijayanand Jitta
> ---
>  drivers/iommu/iova.c | 23 +++++++++++++++++------
>  1 file changed, 17 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 49fc01f..4e77116 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -184,8 +184,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>  	struct rb_node *curr, *prev;
>  	struct iova *curr_iova;
>  	unsigned long flags;
> -	unsigned long new_pfn;
> +	unsigned long new_pfn, low_pfn_new;
>  	unsigned long align_mask = ~0UL;
> +	unsigned long high_pfn = limit_pfn, low_pfn = iovad->start_pfn;
>
>  	if (size_aligned)
>  		align_mask <<= fls_long(size - 1);
> @@ -198,15 +199,25 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>
>  	curr = __get_cached_rbnode(iovad, limit_pfn);
>  	curr_iova = rb_entry(curr, struct iova, node);
> +	low_pfn_new = curr_iova->pfn_hi + 1;

Could we call "low_pfn_new" something like "retry_pfn" instead? This code
already has unavoidable readability struggles with so many different
"pfn"s in play, so having two different meanings of "new" really doesn't
help.
Other than that, I think this looks OK (IIRC it's basically what I
originally suggested), so with the naming tweaked,

Reviewed-by: Robin Murphy

> +
> +retry:
>  	do {
> -		limit_pfn = min(limit_pfn, curr_iova->pfn_lo);
> -		new_pfn = (limit_pfn - size) & align_mask;
> +		high_pfn = min(high_pfn, curr_iova->pfn_lo);
> +		new_pfn = (high_pfn - size) & align_mask;
>  		prev = curr;
>  		curr = rb_prev(curr);
>  		curr_iova = rb_entry(curr, struct iova, node);
> -	} while (curr && new_pfn <= curr_iova->pfn_hi);
> -
> -	if (limit_pfn < size || new_pfn < iovad->start_pfn) {
> +	} while (curr && new_pfn <= curr_iova->pfn_hi && new_pfn >= low_pfn);
> +
> +	if (high_pfn < size || new_pfn < low_pfn) {
> +		if (low_pfn == iovad->start_pfn && low_pfn_new < limit_pfn) {
> +			high_pfn = limit_pfn;
> +			low_pfn = low_pfn_new;
> +			curr = &iovad->anchor.node;
> +			curr_iova = rb_entry(curr, struct iova, node);
> +			goto retry;
> +		}
>  		iovad->max32_alloc_size = size;
>  		goto iova32_full;
>  	}
>