From: Alistair Popple
To: linux-mm@kvack.org, Andrew Morton
Cc: Michael Ellerman, Nicholas Piggin, Felix Kuehling, Alex Deucher,
    Christian König, "Pan, Xinhui", David Airlie, Daniel Vetter,
    Ben Skeggs, Karol Herbst, Lyude Paul, Ralph Campbell,
    "Matthew Wilcox (Oracle)", Alex Sierra, John Hubbard,
    linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
    amd-gfx@lists.freedesktop.org, nouveau@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Jason Gunthorpe, Dan Williams,
    Alistair Popple
Subject: [PATCH 7/7] hmm-tests: Add test for migrate_device_range()
Date: Mon, 26 Sep 2022 16:03:11 +1000
Message-Id: <749b224f6a35bcaa733733ab35dd8593bd555546.1664171943.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To:
References:
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
MIME-Version: 1.0

Signed-off-by: Alistair Popple
---
 lib/test_hmm.c                         | 119 +++++++++++++++++++++-----
 lib/test_hmm_uapi.h                    |   1 +-
 tools/testing/selftests/vm/hmm-tests.c |  49 +++++++++++-
 3 files changed, 148 insertions(+), 21 deletions(-)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 2bd3a67..d2821dd 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -100,6 +100,7 @@ struct dmirror {
 struct dmirror_chunk {
 	struct dev_pagemap	pagemap;
 	struct dmirror_device	*mdevice;
+	bool remove;
 };
 
 /*
@@ -192,11 +193,15 @@ static int dmirror_fops_release(struct inode *inode, struct file *filp)
 	return 0;
 }
 
+static struct dmirror_chunk *dmirror_page_to_chunk(struct page *page)
+{
+	return container_of(page->pgmap, struct dmirror_chunk, pagemap);
+}
+
 static struct dmirror_device *dmirror_page_to_device(struct page *page)
 
 {
-	return container_of(page->pgmap, struct dmirror_chunk,
-			    pagemap)->mdevice;
+	return dmirror_page_to_chunk(page)->mdevice;
 }
 
 static int dmirror_do_fault(struct dmirror *dmirror, struct hmm_range *range)
@@ -1219,6 +1224,84 @@ static int dmirror_snapshot(struct dmirror *dmirror,
 	return ret;
 }
 
+static void dmirror_device_evict_chunk(struct dmirror_chunk *chunk)
+{
+	unsigned long start_pfn = chunk->pagemap.range.start >> PAGE_SHIFT;
+	unsigned long end_pfn = chunk->pagemap.range.end >> PAGE_SHIFT;
+	unsigned long npages = end_pfn - start_pfn + 1;
+	unsigned long i;
+	unsigned long *src_pfns;
+	unsigned long *dst_pfns;
+
+	src_pfns = kcalloc(npages, sizeof(*src_pfns), GFP_KERNEL);
+	dst_pfns = kcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL);
+
+	migrate_device_range(src_pfns, start_pfn, npages);
+	for (i = 0; i < npages; i++) {
+		struct page *dpage, *spage;
+
+		spage = migrate_pfn_to_page(src_pfns[i]);
+		if (!spage || !(src_pfns[i] & MIGRATE_PFN_MIGRATE))
+			continue;
+
+		if (WARN_ON(!is_device_private_page(spage) &&
+			    !is_device_coherent_page(spage)))
+			continue;
+		spage = BACKING_PAGE(spage);
+		dpage = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NOFAIL);
+		lock_page(dpage);
+		copy_highpage(dpage, spage);
+		dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
+		if (src_pfns[i] & MIGRATE_PFN_WRITE)
+			dst_pfns[i] |= MIGRATE_PFN_WRITE;
+	}
+	migrate_device_pages(src_pfns, dst_pfns, npages);
+	migrate_device_finalize(src_pfns, dst_pfns, npages);
+	kfree(src_pfns);
+	kfree(dst_pfns);
+}
+
+/* Removes free pages from the free list so they can't be re-allocated */
+static void dmirror_remove_free_pages(struct dmirror_chunk *devmem)
+{
+	struct dmirror_device *mdevice = devmem->mdevice;
+	struct page *page;
+
+	for (page = mdevice->free_pages; page; page = page->zone_device_data)
+		if (dmirror_page_to_chunk(page) == devmem)
+			mdevice->free_pages = page->zone_device_data;
+}
+
+static void dmirror_device_remove_chunks(struct dmirror_device *mdevice)
+{
+	unsigned int i;
+
+	mutex_lock(&mdevice->devmem_lock);
+	if (mdevice->devmem_chunks) {
+		for (i = 0; i < mdevice->devmem_count; i++) {
+			struct dmirror_chunk *devmem =
+				mdevice->devmem_chunks[i];
+
+			spin_lock(&mdevice->lock);
+			devmem->remove = true;
+			dmirror_remove_free_pages(devmem);
+			spin_unlock(&mdevice->lock);
+
+			dmirror_device_evict_chunk(devmem);
+			memunmap_pages(&devmem->pagemap);
+			if (devmem->pagemap.type == MEMORY_DEVICE_PRIVATE)
+				release_mem_region(devmem->pagemap.range.start,
+						   range_len(&devmem->pagemap.range));
+			kfree(devmem);
+		}
+		mdevice->devmem_count = 0;
+		mdevice->devmem_capacity = 0;
+		mdevice->free_pages = NULL;
+		kfree(mdevice->devmem_chunks);
+	}
+	mutex_unlock(&mdevice->devmem_lock);
+}
+
 static long dmirror_fops_unlocked_ioctl(struct file *filp,
 					unsigned int command,
 					unsigned long arg)
@@ -1273,6 +1356,11 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp,
 		ret = dmirror_snapshot(dmirror, &cmd);
 		break;
 
+	case HMM_DMIRROR_RELEASE:
+		dmirror_device_remove_chunks(dmirror->mdevice);
+		ret = 0;
+		break;
+
 	default:
 		return -EINVAL;
 	}
@@ -1327,9 +1415,13 @@ static void dmirror_devmem_free(struct page *page)
 
 	mdevice = dmirror_page_to_device(page);
 	spin_lock(&mdevice->lock);
-	mdevice->cfree++;
-	page->zone_device_data = mdevice->free_pages;
-	mdevice->free_pages = page;
+
+	/* Return page to our allocator if not freeing the chunk */
+	if (!dmirror_page_to_chunk(page)->remove) {
+		mdevice->cfree++;
+		page->zone_device_data = mdevice->free_pages;
+		mdevice->free_pages = page;
+	}
 	spin_unlock(&mdevice->lock);
 }
 
@@ -1402,22 +1494,7 @@ static int dmirror_device_init(struct dmirror_device *mdevice, int id)
 
 static void dmirror_device_remove(struct dmirror_device *mdevice)
 {
-	unsigned int i;
-
-	if (mdevice->devmem_chunks) {
-		for (i = 0; i < mdevice->devmem_count; i++) {
-			struct dmirror_chunk *devmem =
-				mdevice->devmem_chunks[i];
-
-			memunmap_pages(&devmem->pagemap);
-			if (devmem->pagemap.type == MEMORY_DEVICE_PRIVATE)
-				release_mem_region(devmem->pagemap.range.start,
-						   range_len(&devmem->pagemap.range));
-			kfree(devmem);
-		}
-		kfree(mdevice->devmem_chunks);
-	}
-
+	dmirror_device_remove_chunks(mdevice);
 	cdev_del(&mdevice->cdevice);
 }
 
diff --git a/lib/test_hmm_uapi.h b/lib/test_hmm_uapi.h
index e31d58c..8c818a2 100644
--- a/lib/test_hmm_uapi.h
+++ b/lib/test_hmm_uapi.h
@@ -36,6 +36,7 @@ struct hmm_dmirror_cmd {
 #define HMM_DMIRROR_SNAPSHOT		_IOWR('H', 0x04, struct hmm_dmirror_cmd)
 #define HMM_DMIRROR_EXCLUSIVE		_IOWR('H', 0x05, struct hmm_dmirror_cmd)
 #define HMM_DMIRROR_CHECK_EXCLUSIVE	_IOWR('H', 0x06, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_RELEASE		_IOWR('H', 0x07, struct hmm_dmirror_cmd)
 
 /*
  * Values returned in hmm_dmirror_cmd.ptr for HMM_DMIRROR_SNAPSHOT.
diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
index f2c2c97..28232ad 100644
--- a/tools/testing/selftests/vm/hmm-tests.c
+++ b/tools/testing/selftests/vm/hmm-tests.c
@@ -1054,6 +1054,55 @@ TEST_F(hmm, migrate_fault)
 	hmm_buffer_free(buffer);
 }
 
+TEST_F(hmm, migrate_release)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	int *ptr;
+	int ret;
+
+	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+	ASSERT_NE(npages, 0);
+	size = npages << self->page_shift;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+
+	buffer->ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS, buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Migrate memory to device. */
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	/* Release device memory. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_RELEASE, buffer, npages);
+	ASSERT_EQ(ret, 0);
+
+	/* Fault pages back to system memory and check them. */
+	for (i = 0, ptr = buffer->ptr; i < size / (2 * sizeof(*ptr)); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	hmm_buffer_free(buffer);
+}
+
 /*
  * Migrate anonymous shared memory to device private memory.
  */
-- 
git-series 0.9.1
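
As a rough illustration of how the new ioctl can be exercised outside the kselftest
harness, a minimal standalone program might look like the sketch below. The
/dev/hmm_dmirror0 node name, the HMM_DMIRROR_MIGRATE_TO_DEV request and the
struct hmm_dmirror_cmd layout are assumptions taken from lib/test_hmm_uapi.h;
this is only a sketch, not part of the patch itself.

/*
 * Sketch only: the device node name, HMM_DMIRROR_MIGRATE_TO_DEV and the
 * hmm_dmirror_cmd layout are assumed from lib/test_hmm_uapi.h rather than
 * defined by this patch.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include "test_hmm_uapi.h"

int main(void)
{
	long page_size = sysconf(_SC_PAGE_SIZE);
	size_t size = 16 * page_size;
	struct hmm_dmirror_cmd cmd = { 0 };
	unsigned char *buf, *mirror;
	int fd, ret;

	/* Opening the dmirror device registers the current mm with it. */
	fd = open("/dev/hmm_dmirror0", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	mirror = malloc(size);
	if (buf == MAP_FAILED || !mirror)
		return 1;
	memset(buf, 0xa5, size);

	/* Migrate the anonymous range into device private memory. */
	cmd.addr = (uintptr_t)buf;
	cmd.ptr = (uintptr_t)mirror;
	cmd.npages = size / page_size;
	ret = ioctl(fd, HMM_DMIRROR_MIGRATE_TO_DEV, &cmd);
	if (ret)
		perror("migrate to device");

	/*
	 * Ask the driver to release its device memory chunks. This forces
	 * the just-migrated pages back to system memory via
	 * migrate_device_range(), which is what the new selftest checks.
	 */
	ret = ioctl(fd, HMM_DMIRROR_RELEASE, &cmd);
	if (ret)
		perror("release");

	/* Faulting the pages back in should still see the original data. */
	printf("buf[0] after release: 0x%x\n", buf[0]);

	munmap(buf, size);
	free(mirror);
	close(fd);
	return 0;
}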