Subject: Re: Plumbers 2018 - Performance and Scalability Microconference
To: John Hubbard, Daniel Jordan, linux-kernel@vger.kernel.org, "linux-mm@kvack.org"
Cc: Aaron Lu, alex.kogan@oracle.com, akpm@linux-foundation.org, boqun.feng@gmail.com,
    brouer@redhat.com, dave@stgolabs.net, dave.dice@oracle.com, Dhaval Giani,
    ktkhai@virtuozzo.com, ldufour@linux.vnet.ibm.com, Pavel.Tatashin@microsoft.com,
    paulmck@linux.vnet.ibm.com, shady.issa@oracle.com, tariqt@mellanox.com,
    tglx@linutronix.de, tim.c.chen@intel.com, vbabka@suse.cz, yang.shi@linux.alibaba.com,
    shy828301@gmail.com, Huang Ying, subhra.mazumdar@oracle.com, Steven Sistare,
    jwadams@google.com, ashwinch@google.com, sqazi@google.com, Shakeel Butt,
    walken@google.com, rientjes@google.com, junaids@google.com, Neha Agarwal
References: <1dc80ff6-f53f-ae89-be29-3408bf7d69cc@oracle.com> <35c2c79f-efbe-f6b2-43a6-52da82145638@nvidia.com>
From: Waiman Long
Organization: Red Hat
Message-ID: <55b44432-ade5-f090-bfe7-ea20f3e87285@redhat.com>
Date: Mon, 10 Sep 2018 13:09:36 -0400
In-Reply-To: <35c2c79f-efbe-f6b2-43a6-52da82145638@nvidia.com>

On 09/08/2018 12:13 AM, John Hubbard wrote:
>
> Hi Daniel and all,
>
> I'm interested in the first 3 of those 4 topics, so if it doesn't conflict with HMM topics or
> fix-gup-with-dma topics, I'd like to attend. GPUs generally need to access large chunks of
> memory, and that includes migrating (dma-copying) pages around.
>
> So for example a multi-threaded migration of huge pages between normal RAM and GPU memory is an
> intriguing direction (and I realize that it's a well-known topic, already). Doing that properly
> (how many threads to use?) seems like it requires scheduler interaction.
>
> It's also interesting that there are two main huge page systems (THP and Hugetlbfs), and I sometimes
> wonder the obvious thing to wonder: are these sufficiently different to warrant remaining separate,
> long-term? Yes, I realize they're quite different in some ways, but still, one wonders. :)

One major difference between hugetlbfs and THP is that the former has to be explicitly managed by the applications that use it, whereas the latter is applied automatically, without the applications being aware that THP is in use at all.

Performance-wise, THP may or may not increase application performance depending on the exact memory access pattern, though the chance is usually higher that an application will benefit than suffer from it. If an application knows what it is doing, using hugetlbfs can boost performance more than can ever be achieved with THP. Many large enterprise applications, like Oracle DB, use hugetlbfs and explicitly disable THP. So unless THP performance improves to a level that is comparable to hugetlbfs, I don't see the latter going away.

Cheers,
Longman