From: Kemeng Shi
To: linux-kernel@vger.kernel.org
Subject: [PATCH] libnvdimm, badrange: replace div_u64_rem with DIV_ROUND_UP
Date: Sat, 26 Jun 2021 11:29:51 +0800

__add_badblock_range() uses div_u64_rem() to round end_sector up, which
introduces an unnecessary 'rem' variable and a costly '%' operation.
Clean this up by using DIV_ROUND_UP() instead.

Signed-off-by: Kemeng Shi
---
 drivers/nvdimm/badrange.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/nvdimm/badrange.c b/drivers/nvdimm/badrange.c
index aaf6e215a8c6..28e73506d85e 100644
--- a/drivers/nvdimm/badrange.c
+++ b/drivers/nvdimm/badrange.c
@@ -187,12 +187,9 @@ static void __add_badblock_range(struct badblocks *bb, u64 ns_offset, u64 len)
 	const unsigned int sector_size = 512;
 	sector_t start_sector, end_sector;
 	u64 num_sectors;
-	u32 rem;
 
 	start_sector = div_u64(ns_offset, sector_size);
-	end_sector = div_u64_rem(ns_offset + len, sector_size, &rem);
-	if (rem)
-		end_sector++;
+	end_sector = DIV_ROUND_UP(ns_offset + len, sector_size);
 	num_sectors = end_sector - start_sector;
 
 	if (unlikely(num_sectors > (u64)INT_MAX)) {
-- 
2.23.0

-- 
Best wishes
Kemeng Shi