Date: Mon, 10 May 2021 17:34:07 +1000
From: Alexey Kardashevskiy
To: Leonardo Bras, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
 Joel Stanley, Christophe Leroy, Nicolin Chen, Niklas Schnelle
Cc: linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 02/11] powerpc/kernel/iommu: Add new iommu_table_in_use() helper
In-Reply-To: <20210430163145.146984-3-leobras.c@gmail.com>
References: <20210430163145.146984-1-leobras.c@gmail.com>
 <20210430163145.146984-3-leobras.c@gmail.com>

On 5/1/21 02:31, Leonardo Bras wrote:
> Having a function to check if the iommu table has any allocation helps
> deciding if a tbl can be reset for using a new DMA window.
>
> It should be enough to replace all instances of !bitmap_empty(tbl...).
>
> iommu_table_in_use() skips reserved memory, so we don't need to worry about
> releasing it before testing. This causes iommu_table_release_pages() to
> become unnecessary, given it is only used to remove reserved memory for
> testing.
>
> Also, only allow storing reserved memory values in tbl if they are valid
> in the table, so there is no need to check it in the new helper.
>
> Signed-off-by: Leonardo Bras
> ---
>  arch/powerpc/include/asm/iommu.h |  1 +
>  arch/powerpc/kernel/iommu.c      | 65 ++++++++++++++++----------------
>  2 files changed, 34 insertions(+), 32 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/iommu.h b/arch/powerpc/include/asm/iommu.h
> index deef7c94d7b6..bf3b84128525 100644
> --- a/arch/powerpc/include/asm/iommu.h
> +++ b/arch/powerpc/include/asm/iommu.h
> @@ -154,6 +154,7 @@ extern int iommu_tce_table_put(struct iommu_table *tbl);
>   */
>  extern struct iommu_table *iommu_init_table(struct iommu_table *tbl,
>  		int nid, unsigned long res_start, unsigned long res_end);
> +bool iommu_table_in_use(struct iommu_table *tbl);
>
>  #define IOMMU_TABLE_GROUP_MAX_TABLES 2
>
> diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
> index ad82dda81640..5e168bd91401 100644
> --- a/arch/powerpc/kernel/iommu.c
> +++ b/arch/powerpc/kernel/iommu.c
> @@ -691,32 +691,24 @@ static void iommu_table_reserve_pages(struct iommu_table *tbl,
>  	if (tbl->it_offset == 0)
>  		set_bit(0, tbl->it_map);
>
> -	tbl->it_reserved_start = res_start;
> -	tbl->it_reserved_end = res_end;
> -
> -	/* Check if res_start..res_end isn't empty and overlaps the table */
> -	if (res_start && res_end &&
> -			(tbl->it_offset + tbl->it_size < res_start ||
> -			 res_end < tbl->it_offset))
> -		return;
> +	if (res_start < tbl->it_offset)
> +		res_start = tbl->it_offset;
>
> -	for (i = tbl->it_reserved_start; i < tbl->it_reserved_end; ++i)
> -		set_bit(i - tbl->it_offset, tbl->it_map);
> -}
> +	if (res_end > (tbl->it_offset + tbl->it_size))
> +		res_end = tbl->it_offset + tbl->it_size;
>
> -static void iommu_table_release_pages(struct iommu_table *tbl)
> -{
> -	int i;
> +	/* Check if res_start..res_end is a valid range in the table */
> +	if (res_start >= res_end) {
> +		tbl->it_reserved_start = tbl->it_offset;
> +		tbl->it_reserved_end = tbl->it_offset;
> +		return;
> +	}
>
> -	/*
> -	 * In case we have reserved the first bit, we should not emit
> -	 * the warning below.
> -	 */
> -	if (tbl->it_offset == 0)
> -		clear_bit(0, tbl->it_map);
> +	tbl->it_reserved_start = res_start;
> +	tbl->it_reserved_end = res_end;
>
>  	for (i = tbl->it_reserved_start; i < tbl->it_reserved_end; ++i)
> -		clear_bit(i - tbl->it_offset, tbl->it_map);
> +		set_bit(i - tbl->it_offset, tbl->it_map);

git produced a messy chunk here. The new logic is:

static void iommu_table_reserve_pages(struct iommu_table *tbl,
		unsigned long res_start, unsigned long res_end)
{
	int i;

	WARN_ON_ONCE(res_end < res_start);
	/*
	 * Reserve page 0 so it will not be used for any mappings.
	 * This avoids buggy drivers that consider page 0 to be invalid
	 * to crash the machine or even lose data.
	 */
	if (tbl->it_offset == 0)
		set_bit(0, tbl->it_map);

	if (res_start < tbl->it_offset)
		res_start = tbl->it_offset;

	if (res_end > (tbl->it_offset + tbl->it_size))
		res_end = tbl->it_offset + tbl->it_size;

	/* Check if res_start..res_end is a valid range in the table */
	if (res_start >= res_end) {
		tbl->it_reserved_start = tbl->it_offset;
		tbl->it_reserved_end = tbl->it_offset;
		return;
	}

It is just hard to read: a code reviewer would assume res_end >= res_start (as there is a WARN_ON), yet later the code still allows res_end to be less than res_start. But maybe it is just me :)

Otherwise looks good.
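For what it is worth, here is a quick standalone userspace sketch I would use to convince myself of the clamping behaviour; the struct, the values and the printfs are made up for illustration and are not the kernel code:

/*
 * Userspace sketch of the new clamping logic only (hypothetical values,
 * simplified struct; not the kernel's iommu_table).
 */
#include <stdio.h>

struct tbl {
	unsigned long it_offset;         /* first entry covered by the table */
	unsigned long it_size;           /* number of entries */
	unsigned long it_reserved_start;
	unsigned long it_reserved_end;
};

/* Mirrors the clamping part of the new iommu_table_reserve_pages() */
static void clamp_reserved(struct tbl *tbl,
			   unsigned long res_start, unsigned long res_end)
{
	/* Clip the requested window to the table boundaries */
	if (res_start < tbl->it_offset)
		res_start = tbl->it_offset;

	if (res_end > (tbl->it_offset + tbl->it_size))
		res_end = tbl->it_offset + tbl->it_size;

	/* Empty or fully-outside window collapses to an empty reservation */
	if (res_start >= res_end) {
		tbl->it_reserved_start = tbl->it_offset;
		tbl->it_reserved_end = tbl->it_offset;
		return;
	}

	tbl->it_reserved_start = res_start;
	tbl->it_reserved_end = res_end;
}

int main(void)
{
	struct tbl t = { .it_offset = 0x100, .it_size = 0x100 };

	clamp_reserved(&t, 0x80, 0x180);   /* partial overlap */
	printf("%lx..%lx\n", t.it_reserved_start, t.it_reserved_end); /* 100..180 */

	clamp_reserved(&t, 0x300, 0x400);  /* no overlap */
	printf("%lx..%lx\n", t.it_reserved_start, t.it_reserved_end); /* 100..100 */

	return 0;
}

With these inputs, a partially overlapping window gets clipped to the table and a non-overlapping one collapses to an empty reservation, which matches my reading above.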
Reviewed-by: Alexey Kardashevskiy

> }
>
>  /*
> @@ -781,6 +773,22 @@ struct iommu_table *iommu_init_table(struct iommu_table *tbl, int nid,
>  	return tbl;
>  }
>
> +bool iommu_table_in_use(struct iommu_table *tbl)
> +{
> +	unsigned long start = 0, end;
> +
> +	/* ignore reserved bit0 */
> +	if (tbl->it_offset == 0)
> +		start = 1;
> +	end = tbl->it_reserved_start - tbl->it_offset;
> +	if (find_next_bit(tbl->it_map, end, start) != end)
> +		return true;
> +
> +	start = tbl->it_reserved_end - tbl->it_offset;
> +	end = tbl->it_size;
> +	return find_next_bit(tbl->it_map, end, start) != end;
> +}
> +
>  static void iommu_table_free(struct kref *kref)
>  {
>  	unsigned long bitmap_sz;
> @@ -799,10 +807,8 @@ static void iommu_table_free(struct kref *kref)
>
>  	iommu_debugfs_del(tbl);
>
> -	iommu_table_release_pages(tbl);
> -
>  	/* verify that table contains no entries */
> -	if (!bitmap_empty(tbl->it_map, tbl->it_size))
> +	if (iommu_table_in_use(tbl))
>  		pr_warn("%s: Unexpected TCEs\n", __func__);
>
>  	/* calculate bitmap size in bytes */
> @@ -1108,18 +1114,13 @@ int iommu_take_ownership(struct iommu_table *tbl)
>  	for (i = 0; i < tbl->nr_pools; i++)
>  		spin_lock(&tbl->pools[i].lock);
>
> -	iommu_table_release_pages(tbl);
> -
> -	if (!bitmap_empty(tbl->it_map, tbl->it_size)) {
> +	if (iommu_table_in_use(tbl)) {
>  		pr_err("iommu_tce: it_map is not empty");
>  		ret = -EBUSY;
> -		/* Undo iommu_table_release_pages, i.e. restore bit#0, etc */
> -		iommu_table_reserve_pages(tbl, tbl->it_reserved_start,
> -				tbl->it_reserved_end);
> -	} else {
> -		memset(tbl->it_map, 0xff, sz);
>  	}
>
> +	memset(tbl->it_map, 0xff, sz);
> +
>  	for (i = 0; i < tbl->nr_pools; i++)
>  		spin_unlock(&tbl->pools[i].lock);
>  	spin_unlock_irqrestore(&tbl->large_pool.lock, flags);
>

--
Alexey