From: Bean Huo
To: miquel.raynal@bootlin.com, richard@nod.at, vigneshr@ti.com,
    s.hauer@pengutronix.de, boris.brezillon@collabora.com, derosier@gmail.com
Cc: linux-mtd@lists.infradead.org, linux-kernel@vger.kernel.org,
    huobean@gmail.com, Bean Huo
Subject: [PATCH v4 5/5] mtd: rawnand: micron: Micron SLC NAND filling block
Date: Mon, 18 May 2020 15:59:43 +0200
Message-Id: <20200518135943.11749-6-huobean@gmail.com>
In-Reply-To: <20200518135943.11749-1-huobean@gmail.com>
References: <20200518135943.11749-1-huobean@gmail.com>

From: Bean Huo

On some legacy planar 2D Micron NAND devices, a block erase command may
occasionally complete and return a pass status even though the flash block
is not completely erased. Subsequent operations on such a block can, in
very rare cases, result in subtle failures or corruption. These extremely
rare cases should nevertheless be handled; they have been observed on
partially written blocks.

To avoid this, make sure that at least 15 pages have been programmed in a
block before it is erased. If fewer than 15 pages have been programmed,
rewrite the first 15 pages of the block before issuing the erase.
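A standalone userspace sketch of the accounting described above
(illustration only, not part of the patch; the helper names are invented
for the example): a u16 bitmap per eraseblock records which of the first
16 pages have been programmed, and an erase must be preceded by a raw
fill of pages 0..14 unless page 14 is already marked written.

/*
 * Illustration only: per-eraseblock page accounting. Compile with
 * "cc -o demo demo.c". Constants mirror the patch; helpers are made up.
 */
#include <stdint.h>
#include <stdio.h>

#define SHALLOW_ERASE_MIN_PAGE	15	/* mirrors MICRON_SHALLOW_ERASE_MIN_PAGE */
#define PAGE_MASK_TRIGGER	((1u << (SHALLOW_ERASE_MIN_PAGE + 1)) - 1) /* bits 0..15 */

/*
 * Record that pages [first, first + count) of one block were programmed.
 * As in the patch, the mask starts at bit 0 for simplicity; only the OR
 * accumulation of written pages matters here.
 */
static void mark_written(uint16_t *bitmap, unsigned int first, unsigned int count)
{
	unsigned int end = first + count;
	uint32_t mask = end >= 16 ? PAGE_MASK_TRIGGER : (1u << end) - 1;

	*bitmap |= mask & PAGE_MASK_TRIGGER;
}

/* Does the block still need the 15-page fill before it may be erased? */
static int needs_fill_before_erase(uint16_t bitmap)
{
	return !(bitmap & (1u << (SHALLOW_ERASE_MIN_PAGE - 1)));
}

int main(void)
{
	uint16_t block = 0;

	mark_written(&block, 0, 4);	/* only 4 pages programmed so far */
	printf("fill needed: %d\n", needs_fill_before_erase(block));	/* 1 */

	mark_written(&block, 4, 60);	/* rest of the block programmed */
	printf("fill needed: %d\n", needs_fill_before_erase(block));	/* 0 */

	return 0;
}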
Signed-off-by: Bean Huo
---
 drivers/mtd/nand/raw/nand_micron.c | 102 +++++++++++++++++++++++++++++
 1 file changed, 102 insertions(+)

diff --git a/drivers/mtd/nand/raw/nand_micron.c b/drivers/mtd/nand/raw/nand_micron.c
index b3485b0995ad..c5fd9e60f46d 100644
--- a/drivers/mtd/nand/raw/nand_micron.c
+++ b/drivers/mtd/nand/raw/nand_micron.c
@@ -36,6 +36,9 @@
 #define NAND_ECC_STATUS_1_3_CORRECTED	BIT(4)
 #define NAND_ECC_STATUS_7_8_CORRECTED	(BIT(4) | BIT(3))
 
+#define MICRON_SHALLOW_ERASE_MIN_PAGE	15
+#define MICRON_PAGE_MASK_TRIGGER	GENMASK(MICRON_SHALLOW_ERASE_MIN_PAGE, 0)
+
 struct nand_onfi_vendor_micron {
 	u8 two_plane_read;
 	u8 read_cache;
@@ -64,6 +67,7 @@ struct micron_on_die_ecc {
 
 struct micron_nand {
 	struct micron_on_die_ecc ecc;
+	u16 *writtenp;
 };
 
 static int micron_nand_setup_read_retry(struct nand_chip *chip, int retry_mode)
@@ -429,6 +433,93 @@ static int micron_supports_on_die_ecc(struct nand_chip *chip)
 	return MICRON_ON_DIE_SUPPORTED;
 }
 
+static int micron_nand_pre_erase(struct nand_chip *chip, u32 eraseblock)
+{
+	struct micron_nand *micron = nand_get_manufacturer_data(chip);
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	u8 last_page = MICRON_SHALLOW_ERASE_MIN_PAGE - 1;
+	u32 page;
+	u8 *data_buf;
+	int ret, i;
+
+	data_buf = nand_get_data_buf(chip);
+	WARN_ON(!data_buf);
+
+	if (likely(micron->writtenp[eraseblock] & BIT(last_page)))
+		return 0;
+
+	page = eraseblock << (chip->phys_erase_shift - chip->page_shift);
+
+	if (unlikely(micron->writtenp[eraseblock] == 0)) {
+		ret = nand_read_page_raw(chip, data_buf, 1, page + last_page);
+		if (ret)
+			return ret; /* Read error */
+		ret = nand_check_is_erased_page(chip, data_buf, true);
+		if (!ret)
+			return 0;
+	}
+
+	memset(data_buf, 0x00, mtd->writesize);
+
+	for (i = 0; i < MICRON_SHALLOW_ERASE_MIN_PAGE; i++) {
+		ret = nand_write_page_raw(chip, data_buf, false, page + i);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int micron_nand_post_erase(struct nand_chip *chip, u32 eraseblock)
+{
+	struct micron_nand *micron = nand_get_manufacturer_data(chip);
+
+	if (!micron)
+		return -EINVAL;
+
+	micron->writtenp[eraseblock] = 0;
+
+	return 0;
+}
+
+static int micron_nand_write_oob(struct nand_chip *chip, loff_t to,
+				 struct mtd_oob_ops *ops)
+{
+	struct micron_nand *micron = nand_get_manufacturer_data(chip);
+	u32 eb_sz = nanddev_eraseblock_size(&chip->base);
+	u32 p_sz = nanddev_page_size(&chip->base);
+	u32 ppeb = nanddev_pages_per_eraseblock(&chip->base);
+	u32 nb_p_tot = ops->len / p_sz;
+	u32 first_eb = DIV_ROUND_DOWN_ULL(to, eb_sz);
+	u32 first_p = DIV_ROUND_UP_ULL(to - (first_eb * eb_sz), p_sz);
+	u32 nb_eb = DIV_ROUND_UP_ULL(first_p + nb_p_tot, ppeb);
+	u32 remaining_p, eb, nb_p;
+	int ret;
+
+	ret = nand_write_oob_nand(chip, to, ops);
+
+	if (ret || ops->len != ops->retlen)
+		return ret;
+
+	/* Mark the last pages of the first erase block to write */
+	nb_p = min(nb_p_tot, ppeb - first_p);
+	micron->writtenp[first_eb] |= GENMASK(first_p + nb_p, 0) &
+				      MICRON_PAGE_MASK_TRIGGER;
+	remaining_p = nb_p_tot - nb_p;
+
+	/* Mark all the pages of all "in-the-middle" erase blocks */
+	for (eb = first_eb + 1; eb < first_eb + nb_eb - 1; eb++) {
+		micron->writtenp[eb] |= MICRON_PAGE_MASK_TRIGGER;
+		remaining_p -= ppeb;
+	}
+
+	/* Mark the first pages of the last erase block to write */
+	if (remaining_p)
+		micron->writtenp[eb] |= GENMASK(remaining_p - 1, 0) &
+					MICRON_PAGE_MASK_TRIGGER;
+	return 0;
+}
+
 static int micron_nand_init(struct nand_chip *chip)
 {
 	struct mtd_info *mtd = nand_to_mtd(chip);
@@ -515,6 +606,17 @@ static int micron_nand_init(struct nand_chip *chip)
 		}
 	}
 
+	if (nand_is_slc(chip)) {
+		micron->writtenp = kcalloc(nanddev_neraseblocks(&chip->base),
+					   sizeof(u16), GFP_KERNEL);
+		if (!micron->writtenp)
+			goto err_free_manuf_data;
+
+		chip->ops.write_oob = micron_nand_write_oob;
+		chip->ops.pre_erase = micron_nand_pre_erase;
+		chip->ops.post_erase = micron_nand_post_erase;
+	}
+
 	return 0;
 
 err_free_manuf_data:
-- 
2.17.1