Date: Mon, 2 Jul 2018 10:53:43 +0800
From: Baoquan He <bhe@redhat.com>
To: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: Steven Sistare, Daniel Jordan, LKML, Andrew Morton, kirill.shutemov@linux.intel.com, Michal Hocko, Linux Memory Management List, dan.j.williams@intel.com, jack@suse.cz, jglisse@redhat.com, Souptick Joarder, gregkh@linuxfoundation.org, Vlastimil Babka, Wei Yang, dave.hansen@intel.com, rientjes@google.com, mingo@kernel.org, osalvador@techadventures.net
Subject: Re: [PATCH v3 1/2] mm/sparse: add sparse_init_nid()
Message-ID: <20180702025343.GN3223@MiWiFi-R3L-srv>
References: <20180702020417.21281-1-pasha.tatashin@oracle.com> <20180702020417.21281-2-pasha.tatashin@oracle.com> <20180702021121.GL3223@MiWiFi-R3L-srv> <20180702023130.GM3223@MiWiFi-R3L-srv>
List-ID: <linux-kernel.vger.kernel.org>

On 07/01/18 at 10:43pm, Pavel Tatashin wrote:
> On Sun, Jul 1, 2018 at 10:31 PM Baoquan He wrote:
> >
> > On 07/01/18 at 10:18pm, Pavel Tatashin wrote:
> > > > Here, I think it might not be right to jump to 'failed' directly if one
> > > > section of the node fails to populate its memmap. I think the original
> > > > code only skips the section whose memmap failed to populate, by marking
> > > > it as not present with "ms->section_mem_map = 0".
> > >
> > > Hi Baoquan,
> > >
> > > Thank you for a careful review. This is an intended change compared to
> > > the original code. Because we operate per-node now, if we fail to
> > > allocate a single section in this node, it means we will also fail to
> > > allocate all the subsequent sections in the same node, so there is no
> > > need to check them anymore. In the original code we could not simply
> > > bail out, because we might still have valid entries in the following
> > > nodes. Similarly, sparse_init() will call sparse_init_nid() for the
> > > next node even if the previous node failed to set up all its memory.
> >
> > Hmm, say the node we are handling is node5, and there are 100 sections.
> > If you allocate memmap for one section at a time, you may have succeeded
> > for the first 99 sections; now the 100th fails, so you will mark all
> > sections on node5 as not present. And the allocation failure is only for
> > the single-section memmap allocation case.
>
> No, unless I am missing something, that's not how the code works:
>
> 463                 if (!map) {
> 464                         pr_err("%s: memory map backing failed. Some memory will not be available.",
> 465                                __func__);
> 466                         pnum_begin = pnum;
> 467                         goto failed;
> 468                 }
>
> 476 failed:
> 477         /* We failed to allocate, mark all the following pnums as not present */
> 478         for_each_present_section_nr(pnum_begin, pnum) {
>
> We continue from the pnum that failed, since we set pnum_begin to pnum,
> and mark all the subsequent sections as not present.

Ah, yes, I misunderstood it, sorry for that. Then I have only one concern:
for the vmemmap case, if one section fails to populate its memmap, do we
need to skip all the remaining sections in that node?

> The only change compared to the original code is that once we hit a pnum
> whose allocation failed, we stop checking the subsequent pnums in this
> node, as we know they will fail as well, because there is no more memory
> in this node to allocate from.
>