From: "Aneesh Kumar K.V"
To: akpm@linux-foundation.org, Michal Hocko, Alexey Kardashevskiy, mpe@ellerman.id.au, paulus@samba.org, David Gibson
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, "Aneesh Kumar K.V"
Subject: [PATCH V5 2/3] powerpc/mm/iommu: Allow migration of cma allocated pages during mm_iommu_get
Date: Wed, 19 Dec 2018 09:10:46 +0530
Message-Id: <20181219034047.16305-3-aneesh.kumar@linux.ibm.com>
In-Reply-To: <20181219034047.16305-1-aneesh.kumar@linux.ibm.com>
References: <20181219034047.16305-1-aneesh.kumar@linux.ibm.com>
X-Mailer: git-send-email 2.19.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

The current code does not do page migration if the allocated page is a
compound page. With HugeTLB migration support, we can end up allocating
hugetlb pages from the CMA region; THP pages can also be allocated from
the CMA region. This patch updates the code to handle compound pages
correctly.

This uses the new helper get_user_pages_cma_migrate, which does a single
get_user_pages call with the right count instead of one get_user_pages
call per page. That avoids walking the page tables multiple times.

The patch also converts the hpas member of mm_iommu_table_group_mem_t to
a union, so the same storage location can temporarily hold pointers to
struct page. We cannot switch all code paths to use struct page *,
because we access hpas in real mode, where the struct page * to pfn
conversion cannot be done.
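The helper itself is introduced in patch 1/3 of this series and is not shown
here. As a rough illustration of the behaviour described above (one pin of
the whole range, with CMA-backed pages migrated out before the long-term pin
is kept), it can be pictured as in the sketch below. The body, the
new_non_cma_page() allocator and the exact signature are assumptions for
illustration only, not the actual implementation:

/*
 * Illustrative sketch only -- NOT the implementation from patch 1/3.
 * It mirrors the old per-page logic removed below, but over the whole
 * range at once: pin everything with one call, then migrate any page
 * that came from the CMA region before keeping a long-term pin.
 */
#include <linux/mm.h>
#include <linux/migrate.h>
#include <linux/hugetlb.h>
#include <linux/swap.h>

/* Placeholder allocator: GFP_USER (not movable) keeps us out of CMA. */
static struct page *new_non_cma_page(struct page *page, unsigned long private)
{
	return alloc_page(GFP_USER | __GFP_NORETRY | __GFP_NOWARN);
}

long get_user_pages_cma_migrate(unsigned long start, unsigned long nr_pages,
				int write, struct page **pages)
{
	long i, ret;
	LIST_HEAD(cma_page_list);

	/* One page table walk for the whole range. */
	ret = get_user_pages_fast(start, nr_pages, write, pages);
	if (ret <= 0)
		return ret;

	/* Collect the pages that were allocated from the CMA region. */
	for (i = 0; i < ret; i++) {
		struct page *head = compound_head(pages[i]);

		if (!is_migrate_cma_page(pages[i]))
			continue;
		if (PageHuge(head))
			isolate_huge_page(head, &cma_page_list);
		else if (!isolate_lru_page(head))
			list_add_tail(&head->lru, &cma_page_list);
	}

	if (!list_empty(&cma_page_list)) {
		/* Drop the pins, migrate, then pin the migrated copies. */
		for (i = 0; i < ret; i++)
			put_page(pages[i]);

		if (migrate_pages(&cma_page_list, new_non_cma_page, NULL, 0,
				  MIGRATE_SYNC, MR_CONTIG_RANGE))
			putback_movable_pages(&cma_page_list);

		ret = get_user_pages_fast(start, nr_pages, write, pages);
	}

	return ret;
}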
Signed-off-by: Aneesh Kumar K.V
---
 arch/powerpc/mm/mmu_context_iommu.c | 120 ++++++++--------------------
 1 file changed, 35 insertions(+), 85 deletions(-)

diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
index 56c2234cc6ae..1d5161f93ce6 100644
--- a/arch/powerpc/mm/mmu_context_iommu.c
+++ b/arch/powerpc/mm/mmu_context_iommu.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 
 static DEFINE_MUTEX(mem_list_mutex);
 
@@ -34,8 +35,18 @@ struct mm_iommu_table_group_mem_t {
 	atomic64_t mapped;
 	unsigned int pageshift;
 	u64 ua;			/* userspace address */
-	u64 entries;		/* number of entries in hpas[] */
-	u64 *hpas;		/* vmalloc'ed */
+	u64 entries;		/* number of entries in hpages[] */
+	/*
+	 * In mm_iommu_get we temporarily use this to store
+	 * the struct page addresses.
+	 *
+	 * We need to convert ua to hpa in real mode. Make it
+	 * simpler by storing the physical address.
+	 */
+	union {
+		struct page **hpages;	/* vmalloc'ed */
+		phys_addr_t *hpas;
+	};
 };
 
 static long mm_iommu_adjust_locked_vm(struct mm_struct *mm,
@@ -78,63 +89,14 @@ bool mm_iommu_preregistered(struct mm_struct *mm)
 }
 EXPORT_SYMBOL_GPL(mm_iommu_preregistered);
 
-/*
- * Taken from alloc_migrate_target with changes to remove CMA allocations
- */
-struct page *new_iommu_non_cma_page(struct page *page, unsigned long private)
-{
-	gfp_t gfp_mask = GFP_USER;
-	struct page *new_page;
-
-	if (PageCompound(page))
-		return NULL;
-
-	if (PageHighMem(page))
-		gfp_mask |= __GFP_HIGHMEM;
-
-	/*
-	 * We don't want the allocation to force an OOM if possibe
-	 */
-	new_page = alloc_page(gfp_mask | __GFP_NORETRY | __GFP_NOWARN);
-	return new_page;
-}
-
-static int mm_iommu_move_page_from_cma(struct page *page)
-{
-	int ret = 0;
-	LIST_HEAD(cma_migrate_pages);
-
-	/* Ignore huge pages for now */
-	if (PageCompound(page))
-		return -EBUSY;
-
-	lru_add_drain();
-	ret = isolate_lru_page(page);
-	if (ret)
-		return ret;
-
-	list_add(&page->lru, &cma_migrate_pages);
-	put_page(page); /* Drop the gup reference */
-
-	ret = migrate_pages(&cma_migrate_pages, new_iommu_non_cma_page,
-				NULL, 0, MIGRATE_SYNC, MR_CONTIG_RANGE);
-	if (ret) {
-		if (!list_empty(&cma_migrate_pages))
-			putback_movable_pages(&cma_migrate_pages);
-	}
-
-	return 0;
-}
-
 long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 		struct mm_iommu_table_group_mem_t **pmem)
 {
 	struct mm_iommu_table_group_mem_t *mem;
-	long i, j, ret = 0, locked_entries = 0;
+	long i, ret = 0, locked_entries = 0;
 	unsigned int pageshift;
 	unsigned long flags;
 	unsigned long cur_ua;
-	struct page *page = NULL;
 
 	mutex_lock(&mem_list_mutex);
 
@@ -181,41 +143,24 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 		goto unlock_exit;
 	}
 
+	ret = get_user_pages_cma_migrate(ua, entries, 1, mem->hpages);
+	if (ret != entries) {
+		/* free the references taken */
+		for (i = 0; i < ret; i++)
+			put_page(mem->hpages[i]);
+
+		vfree(mem->hpas);
+		kfree(mem);
+		ret = -EFAULT;
+		goto unlock_exit;
+	} else
+		ret = 0;
+
+	pageshift = PAGE_SHIFT;
 	for (i = 0; i < entries; ++i) {
+		struct page *page = mem->hpages[i];
 		cur_ua = ua + (i << PAGE_SHIFT);
-		if (1 != get_user_pages_fast(cur_ua,
-					1/* pages */, 1/* iswrite */, &page)) {
-			ret = -EFAULT;
-			for (j = 0; j < i; ++j)
-				put_page(pfn_to_page(mem->hpas[j] >>
-						PAGE_SHIFT));
-			vfree(mem->hpas);
-			kfree(mem);
-			goto unlock_exit;
-		}
-		/*
-		 * If we get a page from the CMA zone, since we are going to
-		 * be pinning these entries, we might as well move them out
-		 * of the CMA zone if possible. NOTE: faulting in + migration
-		 * can be expensive. Batching can be considered later
-		 */
-		if (is_migrate_cma_page(page)) {
-			if (mm_iommu_move_page_from_cma(page))
-				goto populate;
-			if (1 != get_user_pages_fast(cur_ua,
-						1/* pages */, 1/* iswrite */,
-						&page)) {
-				ret = -EFAULT;
-				for (j = 0; j < i; ++j)
-					put_page(pfn_to_page(mem->hpas[j] >>
-								PAGE_SHIFT));
-				vfree(mem->hpas);
-				kfree(mem);
-				goto unlock_exit;
-			}
-		}
-populate:
-		pageshift = PAGE_SHIFT;
+
 		if (mem->pageshift > PAGE_SHIFT && PageCompound(page)) {
 			pte_t *pte;
 			struct page *head = compound_head(page);
@@ -233,7 +178,12 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 			local_irq_restore(flags);
 		}
 		mem->pageshift = min(mem->pageshift, pageshift);
+		/*
+		 * We don't need the struct page reference any more, switch
+		 * to the physical address.
+		 */
 		mem->hpas[i] = page_to_pfn(page) << PAGE_SHIFT;
+
 	}
 
 	atomic64_set(&mem->mapped, 1);
-- 
2.19.2
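For context on why hpas[] ends up holding physical addresses, the lookup side
consumes the array roughly as in the sketch below. The function name and
checks are illustrative, not part of this patch; the real-mode lookup has to
do the same arithmetic without any struct page helpers, which is why struct
page pointers cannot stay in the array once mm_iommu_get() is done.

/*
 * Illustrative sketch (not part of this patch): how a ua -> hpa lookup
 * can use hpas[] once mm_iommu_get() has replaced the struct page
 * pointers with physical addresses.
 */
static long iommu_ua_to_hpa_sketch(struct mm_iommu_table_group_mem_t *mem,
				   unsigned long ua, unsigned long *hpa)
{
	const long entry = (ua - mem->ua) >> PAGE_SHIFT;

	if (entry < 0 || entry >= mem->entries)
		return -EFAULT;

	/* hpas[] already holds a physical address, no pfn conversion. */
	*hpa = (mem->hpas[entry] & PAGE_MASK) | (ua & ~PAGE_MASK);
	return 0;
}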