From: Miaohe Lin
To:
Cc:
Subject: [PATCH v3 1/5] mm/huge_memory.c: remove dedicated macro HPAGE_CACHE_INDEX_MASK
Date: Tue, 11 May 2021 21:48:53 +0800
Message-ID: <20210511134857.1581273-2-linmiaohe@huawei.com>
In-Reply-To: <20210511134857.1581273-1-linmiaohe@huawei.com>
References: <20210511134857.1581273-1-linmiaohe@huawei.com>
X-Mailer: git-send-email 2.23.0
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7BIT
X-Mailing-List: linux-kernel@vger.kernel.org

Rewrite the pgoff checking logic to remove the macro HPAGE_CACHE_INDEX_MASK,
which is only used here, and to simplify the code.

Reviewed-by: Yang Shi
Reviewed-by: Anshuman Khandual
Reviewed-by: David Hildenbrand
Signed-off-by: Miaohe Lin
---
 include/linux/huge_mm.h | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 9626fda5efce..0a526f211fec 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -152,15 +152,13 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
 
 bool transparent_hugepage_enabled(struct vm_area_struct *vma);
 
-#define HPAGE_CACHE_INDEX_MASK (HPAGE_PMD_NR - 1)
-
 static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
 		unsigned long haddr)
 {
 	/* Don't have to check pgoff for anonymous vma */
 	if (!vma_is_anonymous(vma)) {
-		if (((vma->vm_start >> PAGE_SHIFT) & HPAGE_CACHE_INDEX_MASK) !=
-			(vma->vm_pgoff & HPAGE_CACHE_INDEX_MASK))
+		if (!IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
+				HPAGE_PMD_NR))
 			return false;
 	}
 
-- 
2.23.0
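
[Editor's note, not part of the patch: the new check relies on the fact that two
page indices land at the same offset within a PMD-sized page if and only if their
difference is a multiple of HPAGE_PMD_NR, which is a power of two. Below is a
minimal userspace sketch of that equivalence; it locally re-defines the removed
macro and a simplified power-of-two IS_ALIGNED(), and uses 512 as a stand-in
value for HPAGE_PMD_NR, so it is only an illustrative sanity check, not kernel
code.]

/*
 * Sketch: verify that the old mask comparison and the new alignment check
 * agree for all page-index pairs in a small range.  The stand-in value 512
 * matches HPAGE_PMD_NR on a 4K-page x86_64 build, but any power of two works.
 */
#include <assert.h>
#include <stdio.h>

#define HPAGE_PMD_NR		512UL			/* stand-in value */
#define HPAGE_CACHE_INDEX_MASK	(HPAGE_PMD_NR - 1)	/* the removed macro */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0) /* power-of-two case only */

int main(void)
{
	unsigned long start_idx, pgoff;

	for (start_idx = 0; start_idx < 2 * HPAGE_PMD_NR; start_idx++) {
		for (pgoff = 0; pgoff < 2 * HPAGE_PMD_NR; pgoff++) {
			/* old check: offsets within the PMD page differ */
			int old_check = (start_idx & HPAGE_CACHE_INDEX_MASK) !=
					(pgoff & HPAGE_CACHE_INDEX_MASK);
			/* new check: difference is not PMD-aligned */
			int new_check = !IS_ALIGNED(start_idx - pgoff,
						    HPAGE_PMD_NR);

			assert(old_check == new_check);
		}
	}
	printf("old and new pgoff checks agree\n");
	return 0;
}

[The equivalence holds even when pgoff exceeds the start index, because
unsigned subtraction wraps modulo a power of two that HPAGE_PMD_NR divides.]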