From: Miaohe Lin <linmiaohe@huawei.com>
To:
Cc:
Subject: [PATCH v2 1/5] mm/huge_memory.c: remove dedicated macro HPAGE_CACHE_INDEX_MASK
Date: Thu, 29 Apr 2021 21:26:44 +0800
Message-ID: <20210429132648.305447-2-linmiaohe@huawei.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20210429132648.305447-1-linmiaohe@huawei.com>
References: <20210429132648.305447-1-linmiaohe@huawei.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 7BIT
Content-Type: text/plain; charset=US-ASCII
X-Mailing-List: linux-kernel@vger.kernel.org

Rewrite the
pgoff checking logic to remove the macro HPAGE_CACHE_INDEX_MASK, which is
only used here, and thereby simplify the code.

Reviewed-by: Yang Shi
Reviewed-by: Anshuman Khandual
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 include/linux/huge_mm.h | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 9626fda5efce..0a526f211fec 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -152,15 +152,13 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
 
 bool transparent_hugepage_enabled(struct vm_area_struct *vma);
 
-#define HPAGE_CACHE_INDEX_MASK (HPAGE_PMD_NR - 1)
-
 static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
 		unsigned long haddr)
 {
 	/* Don't have to check pgoff for anonymous vma */
 	if (!vma_is_anonymous(vma)) {
-		if (((vma->vm_start >> PAGE_SHIFT) & HPAGE_CACHE_INDEX_MASK) !=
-			(vma->vm_pgoff & HPAGE_CACHE_INDEX_MASK))
+		if (!IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
+				HPAGE_PMD_NR))
 			return false;
 	}
 
-- 
2.23.0
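
[Editorial note, not part of the patch: the sketch below is a standalone userspace illustration of why the old mask comparison and the new IS_ALIGNED() test are equivalent when HPAGE_PMD_NR is a power of two. The HPAGE_PMD_NR value of 512 (a typical x86-64 configuration with 4K pages) and the local IS_ALIGNED stand-in are assumptions introduced only for this example.]

/* Standalone sketch (not kernel code): compares the old mask-based pgoff
 * check with the new IS_ALIGNED()-based one for a power-of-two HPAGE_PMD_NR. */
#include <assert.h>
#include <stdio.h>

#define HPAGE_PMD_NR            512UL                    /* assumed: 2M / 4K pages */
#define HPAGE_CACHE_INDEX_MASK  (HPAGE_PMD_NR - 1)       /* the macro removed by the patch */
#define IS_ALIGNED(x, a)        (((x) & ((a) - 1)) == 0) /* stand-in for the kernel helper */

/* Old form: VMA start index and pgoff must share the same offset within a PMD-sized page. */
static int old_check(unsigned long vm_start_index, unsigned long vm_pgoff)
{
	return (vm_start_index & HPAGE_CACHE_INDEX_MASK) ==
	       (vm_pgoff & HPAGE_CACHE_INDEX_MASK);
}

/* New form: their difference must be a multiple of HPAGE_PMD_NR. */
static int new_check(unsigned long vm_start_index, unsigned long vm_pgoff)
{
	return IS_ALIGNED(vm_start_index - vm_pgoff, HPAGE_PMD_NR);
}

int main(void)
{
	/* Exhaustively compare both forms over a small range of indices. */
	for (unsigned long start = 0; start < 2 * HPAGE_PMD_NR; start++)
		for (unsigned long pgoff = 0; pgoff < 2 * HPAGE_PMD_NR; pgoff++)
			assert(old_check(start, pgoff) == new_check(start, pgoff));

	printf("old and new pgoff checks agree\n");
	return 0;
}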