From: Bean Huo
To: miquel.raynal@bootlin.com, richard@nod.at, vigneshr@ti.com, s.hauer@pengutronix.de, boris.brezillon@collabora.com, derosier@gmail.com
Cc: linux-mtd@lists.infradead.org, linux-kernel@vger.kernel.org, huobean@gmail.com, Bean Huo
Subject: [PATCH v6 5/5] mtd: rawnand: micron: Micron SLC NAND filling block
Date: Mon, 25 May 2020 14:18:13 +0200
Message-Id: <20200525121814.31934-6-huobean@gmail.com>
In-Reply-To: <20200525121814.31934-1-huobean@gmail.com>
References: <20200525121814.31934-1-huobean@gmail.com>

From: Bean Huo

On planar (2D) Micron NAND devices, a block erase command occasionally
completes and returns a pass status even though the flash block has not been
completely erased. Subsequent operations on such a block can, in very rare
cases, result in subtle failures or corruption. These occurrences have been
observed on partially written blocks and, although extremely rare, should
nevertheless be handled.

To avoid this, make sure that at least 15 pages have been programmed to a
block before it is erased. If fewer than 15 pages have been programmed,
rewrite the first 15 pages of the block before erasing it.
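For illustration only (the helper names below are made up and not part of this
patch): the bookkeeping boils down to one 16-bit bitmap per eraseblock, where
bit N set means page N has been programmed since the last erase, and bit 14
decides whether the block must be filled before it is erased. A minimal
stand-alone sketch of that idea:

	#include <stdbool.h>
	#include <stdint.h>

	#define SHALLOW_ERASE_MIN_PAGE	15	/* pages 0..14 */

	/* Mark pages [first_page, first_page + npages) of a block as programmed. */
	static void mark_written(uint16_t *map, unsigned int block,
				 unsigned int first_page, unsigned int npages)
	{
		unsigned int p;

		for (p = first_page; p < first_page + npages; p++)
			if (p < SHALLOW_ERASE_MIN_PAGE)
				map[block] |= (uint16_t)(1u << p);
	}

	/* A block needs filling if its 15th page (index 14) was never programmed. */
	static bool needs_fill_before_erase(const uint16_t *map, unsigned int block)
	{
		return !(map[block] & (1u << (SHALLOW_ERASE_MIN_PAGE - 1)));
	}

In the patch itself this role is played by micron->writtenp[],
MICRON_PAGE_MASK_TRIGGER and the BIT(last_page) test in
micron_nand_pre_erase().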
Signed-off-by: Miquel Raynal
Signed-off-by: Bean Huo
---
 drivers/mtd/nand/raw/nand_micron.c | 102 +++++++++++++++++++++++++++++
 1 file changed, 102 insertions(+)

diff --git a/drivers/mtd/nand/raw/nand_micron.c b/drivers/mtd/nand/raw/nand_micron.c
index 4385092a9325..85aa17ddd3fc 100644
--- a/drivers/mtd/nand/raw/nand_micron.c
+++ b/drivers/mtd/nand/raw/nand_micron.c
@@ -36,6 +36,9 @@
 #define NAND_ECC_STATUS_1_3_CORRECTED	BIT(4)
 #define NAND_ECC_STATUS_7_8_CORRECTED	(BIT(4) | BIT(3))
 
+#define MICRON_SHALLOW_ERASE_MIN_PAGE	15
+#define MICRON_PAGE_MASK_TRIGGER	GENMASK(MICRON_SHALLOW_ERASE_MIN_PAGE, 0)
+
 struct nand_onfi_vendor_micron {
 	u8 two_plane_read;
 	u8 read_cache;
@@ -64,6 +67,7 @@ struct micron_on_die_ecc {
 
 struct micron_nand {
 	struct micron_on_die_ecc ecc;
+	u16 *writtenp;
 };
 
 static int micron_nand_setup_read_retry(struct nand_chip *chip, int retry_mode)
@@ -472,6 +476,93 @@ static int micron_supports_on_die_ecc(struct nand_chip *chip)
 	return MICRON_ON_DIE_SUPPORTED;
 }
 
+static int micron_nand_pre_erase(struct nand_chip *chip, u32 eraseblock)
+{
+	struct micron_nand *micron = nand_get_manufacturer_data(chip);
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	u8 last_page = MICRON_SHALLOW_ERASE_MIN_PAGE - 1;
+	u32 page;
+	u8 *data_buf;
+	int ret, i;
+
+	data_buf = nand_get_data_buf(chip);
+	WARN_ON(!data_buf);
+
+	if (likely(micron->writtenp[eraseblock] & BIT(last_page)))
+		return 0;
+
+	page = eraseblock << (chip->phys_erase_shift - chip->page_shift);
+
+	if (unlikely(micron->writtenp[eraseblock] == 0)) {
+		ret = nand_read_page_raw(chip, data_buf, 1, page + last_page);
+		if (ret)
+			return ret; /* Read error */
+		ret = nand_check_is_erased_page(chip, data_buf, true);
+		if (!ret)
+			return 0;
+	}
+
+	memset(data_buf, 0x00, mtd->writesize);
+
+	for (i = 0; i < MICRON_SHALLOW_ERASE_MIN_PAGE; i++) {
+		ret = nand_write_page_raw(chip, data_buf, false, page + i);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int micron_nand_post_erase(struct nand_chip *chip, u32 eraseblock)
+{
+	struct micron_nand *micron = nand_get_manufacturer_data(chip);
+
+	if (!micron)
+		return -EINVAL;
+
+	micron->writtenp[eraseblock] = 0;
+
+	return 0;
+}
+
+static int micron_nand_write_oob(struct nand_chip *chip, loff_t to,
+				 struct mtd_oob_ops *ops)
+{
+	struct micron_nand *micron = nand_get_manufacturer_data(chip);
+	u32 eb_sz = nanddev_eraseblock_size(&chip->base);
+	u32 p_sz = nanddev_page_size(&chip->base);
+	u32 ppeb = nanddev_pages_per_eraseblock(&chip->base);
+	u32 nb_p_tot = ops->len / p_sz;
+	u32 first_eb = DIV_ROUND_DOWN_ULL(to, eb_sz);
+	u32 first_p = DIV_ROUND_UP_ULL(to - (first_eb * eb_sz), p_sz);
+	u32 nb_eb = DIV_ROUND_UP_ULL(first_p + nb_p_tot, ppeb);
+	u32 remaining_p, eb, nb_p;
+	int ret;
+
+	ret = nand_write_oob_nand(chip, to, ops);
+
+	if (ret || ops->len != ops->retlen)
+		return ret;
+
+	/* Mark the last pages of the first erase block to write */
+	nb_p = min(nb_p_tot, ppeb - first_p);
+	micron->writtenp[first_eb] |= GENMASK(first_p + nb_p, 0) &
+				      MICRON_PAGE_MASK_TRIGGER;
+	remaining_p = nb_p_tot - nb_p;
+
+	/* Mark all the pages of all "in-the-middle" erase blocks */
+	for (eb = first_eb + 1; eb < first_eb + nb_eb - 1; eb++) {
+		micron->writtenp[eb] |= MICRON_PAGE_MASK_TRIGGER;
+		remaining_p -= ppeb;
+	}
+
+	/* Mark the first pages of the last erase block to write */
+	if (remaining_p)
+		micron->writtenp[eb] |= GENMASK(remaining_p - 1, 0) &
+					MICRON_PAGE_MASK_TRIGGER;
+	return 0;
+}
+
 static int micron_nand_init(struct nand_chip *chip)
 {
 	struct mtd_info *mtd = nand_to_mtd(chip);
@@ -558,6 +649,17 @@ static int micron_nand_init(struct nand_chip *chip)
 		}
 	}
 
+	if (nand_is_slc(chip)) {
+		micron->writtenp = kcalloc(nanddev_neraseblocks(&chip->base),
+					   sizeof(u16), GFP_KERNEL);
+		if (!micron->writtenp)
+			goto err_free_manuf_data;
+
+		chip->ops.write_oob = micron_nand_write_oob;
+		chip->ops.pre_erase = micron_nand_pre_erase;
+		chip->ops.post_erase = micron_nand_post_erase;
+	}
+
 	return 0;
 
 err_free_manuf_data:
-- 
2.17.1
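
Side note on the accounting in micron_nand_write_oob(): the offset-to-(eraseblock, page)
arithmetic can be checked in isolation. Below is a stand-alone sketch, assuming a purely
hypothetical geometry of 128 KiB eraseblocks and 2 KiB pages (64 pages per block) and a
6-page write at offset 0x21000; none of these numbers come from the patch.

	#include <stdio.h>

	int main(void)
	{
		unsigned long long to = 0x21000;	/* write offset           */
		unsigned long long len = 0x3000;	/* write length (6 pages) */
		unsigned int eb_sz = 0x20000;		/* 128 KiB eraseblock     */
		unsigned int p_sz = 0x800;		/* 2 KiB page             */
		unsigned int ppeb = eb_sz / p_sz;	/* 64 pages per block     */

		unsigned int nb_p_tot = len / p_sz;
		unsigned int first_eb = to / eb_sz;				/* DIV_ROUND_DOWN_ULL */
		unsigned int first_p = (to - (unsigned long long)first_eb * eb_sz
					+ p_sz - 1) / p_sz;			/* DIV_ROUND_UP_ULL   */
		unsigned int nb_eb = (first_p + nb_p_tot + ppeb - 1) / ppeb;	/* DIV_ROUND_UP_ULL   */

		/* Prints: eraseblock 1, first page 2, 6 pages, spanning 1 block(s) */
		printf("eraseblock %u, first page %u, %u pages, spanning %u block(s)\n",
		       first_eb, first_p, nb_p_tot, nb_eb);
		return 0;
	}

With this layout the write sets bits below MICRON_SHALLOW_ERASE_MIN_PAGE in
writtenp[1], but BIT(14) stays clear, so a later erase of that block would
still go through the filling loop in micron_nand_pre_erase().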