From: "Aneesh Kumar K.V"
To: John Hubbard, Jérôme Glisse
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [HMM v13 00/18] HMM (Heterogeneous Memory Management) v13
References: <1479493107-982-1-git-send-email-jglisse@redhat.com>
Date: Sat, 19 Nov 2016 20:20:35 +0530
Message-Id: <8760njmslg.fsf@linux.vnet.ibm.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

John Hubbard writes:

> On Fri, 18 Nov 2016, Jérôme Glisse wrote:
>
>> Cliff notes:
>> HMM offers two things, each standing on its own. First, it allows
>> device memory to be used transparently inside any process, without
>> any modification to the program. Second, it allows a process
>> address space to be mirrored on a device.
>>
>> The change since v12 is the use of struct page for device memory,
>> even when that memory is not accessible by the CPU (because of
>> limitations imposed by the bus between the CPU and the device).
>>
>> Using struct page means there are minimal changes to core mm
>> code. HMM builds on top of ZONE_DEVICE to provide struct page, and
>> it adds new features to ZONE_DEVICE. The first 7 patches implement
>> those changes.
>>
>> The rest of the patchset is divided into three features that can
>> each be used independently of one another. The first is process
>> address space mirroring (patches 9 to 13), which allows snapshotting
>> the CPU page table and keeping the device page table synchronized
>> with the CPU one.
>>
>> The second is a new memory migration helper which allows migration
>> of a range of virtual addresses of a process. It also allows devices
>> to use their own DMA engine to perform the copy between the source
>> memory and the destination memory. This can be useful in many use
>> cases even outside the HMM context.
>>
>> The third part of the patchset (patches 17-18) is a set of helpers
>> to register and manage a ZONE_DEVICE node. It is meant as a
>> convenience so that device drivers do not each have to reimplement
>> the same boilerplate code over and over.
>>
>>
>> I am hoping that this can now be considered for inclusion upstream.
>> The bottom line is that without HMM we cannot support some of the
>> new hardware features on x86 PCIe. I believe we need some solution
>> to support those features, or we won't be able to use such hardware
>> from standards like C++17, OpenCL 3.0 and others.
>>
>> I have been working with NVidia to bring up this feature on their
>> Pascal GPUs.
>> There is real hardware that you can buy today that could benefit
>> from HMM. We also intend to leverage this inside the open source
>> nouveau driver.
>>
>
> Hi,
>
> We (NVIDIA engineering) have been working closely with Jerome on this for
> several years now, and I wanted to mention that NVIDIA is committed to
> using HMM. We've done initial testing of this patchset on Pascal GPUs (a
> bit more detail below) and it is looking good.
>

This can also be used on IBM platforms like Minsky
(http://www.tomshardware.com/news/ibm-power8-nvidia-tesla-p100-minsky,32661.html).

There is also discussion around using this for device-accelerated page
migration, which can help with the coherent device memory node work
(https://lkml.kernel.org/r/1477283517-2504-1-git-send-email-khandual@linux.vnet.ibm.com).

-aneesh