Subject: Re: [PATCH v1 11/11] mtd: new support oops logger based on pstore/blk
To: WeiXiong Liao, Kees Cook, Anton Vorontsov, Colin Cross, Tony Luck,
	Jonathan Corbet, Miquel Raynal, Richard Weinberger,
	Mauro Carvalho Chehab, "David S. Miller", Rob Herring,
	Greg Kroah-Hartman, Jonathan Cameron
References: <1579482233-2672-1-git-send-email-liaoweixiong@allwinnertech.com>
	<1579482233-2672-12-git-send-email-liaoweixiong@allwinnertech.com>
From: Vignesh Raghavendra
Date: Thu, 23 Jan 2020 09:54:09 +0530
In-Reply-To: <1579482233-2672-12-git-send-email-liaoweixiong@allwinnertech.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi,

On 20/01/20 6:33 am, WeiXiong Liao wrote:
[...]
> +static inline int mtdpstore_panic_block_isbad(struct mtdpstore_context *cxt,
> +		loff_t off)
> +{
> +	struct mtd_info *mtd = cxt->mtd;
> +	u64 blknum = div_u64(off, mtd->erasesize);
> +
> +	return test_bit(blknum, cxt->badmap);
> +}
> +
> +static inline void mtdpstore_mark_used(struct mtdpstore_context *cxt,
> +		loff_t off)
> +{
> +	u64 zonenum = div_u64(off, cxt->bo_info.dmesg_size);
> +
> +	pr_debug("mark zone %llu used\n", zonenum);

Please replace pr_*() with dev_*() throughout the patch.
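Something along these lines, for example (untested sketch; struct mtd_info embeds a struct device, so `&mtd->dev` can be passed to the dev_*() helpers):

```c
/* e.g. in mtdpstore_mark_used(); cxt->mtd is the attached MTD device */
struct mtd_info *mtd = cxt->mtd;

dev_dbg(&mtd->dev, "mark zone %llu used\n", zonenum);
```

That way the log lines identify which MTD device they came from.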
Device pointer should be available via struct mtd_info.

Regards
Vignesh

> +	set_bit(zonenum, cxt->usedmap);
> +}
> +
> +static inline void mtdpstore_mark_unused(struct mtdpstore_context *cxt,
> +		loff_t off)
> +{
> +	u64 zonenum = div_u64(off, cxt->bo_info.dmesg_size);
> +
> +	pr_debug("mark zone %llu unused\n", zonenum);
> +	clear_bit(zonenum, cxt->usedmap);
> +}
> +
> +static inline void mtdpstore_block_mark_unused(struct mtdpstore_context *cxt,
> +		loff_t off)
> +{
> +	u64 zonenum = div_u64(off, cxt->bo_info.dmesg_size);
> +	u32 zonecnt = cxt->mtd->erasesize / cxt->bo_info.dmesg_size;
> +
> +	while (zonecnt > 0) {
> +		pr_debug("mark zone %llu unused\n", zonenum);
> +		clear_bit(zonenum, cxt->usedmap);
> +		zonenum++;
> +		zonecnt--;
> +	}
> +}
> +
> +static inline int mtdpstore_is_used(struct mtdpstore_context *cxt, loff_t off)
> +{
> +	u64 zonenum = div_u64(off, cxt->bo_info.dmesg_size);
> +	u64 blknum = div_u64(off, cxt->mtd->erasesize);
> +
> +	if (test_bit(blknum, cxt->badmap))
> +		return true;
> +	return test_bit(zonenum, cxt->usedmap);
> +}
> +
> +static int mtdpstore_block_is_used(struct mtdpstore_context *cxt,
> +		loff_t off)
> +{
> +	u64 zonenum = div_u64(off, cxt->bo_info.dmesg_size);
> +	u32 zonecnt = cxt->mtd->erasesize / cxt->bo_info.dmesg_size;
> +
> +	while (zonecnt > 0) {
> +		if (test_bit(zonenum, cxt->usedmap))
> +			return true;
> +		zonenum++;
> +		zonecnt--;
> +	}
> +	return false;
> +}
> +
> +static int mtdpstore_is_empty(struct mtdpstore_context *cxt, char *buf,
> +		size_t size)
> +{
> +	struct mtd_info *mtd = cxt->mtd;
> +	size_t sz;
> +	int i;
> +
> +	sz = min_t(uint32_t, size, mtd->writesize / 4);
> +	for (i = 0; i < sz; i++) {
> +		if (buf[i] != (char)0xFF)
> +			return false;
> +	}
> +	return true;
> +}
> +
> +static void mtdpstore_mark_removed(struct mtdpstore_context *cxt, loff_t off)
> +{
> +	u64 zonenum = div_u64(off, cxt->bo_info.dmesg_size);
> +
> +	pr_debug("mark zone %llu removed\n", zonenum);
> +	set_bit(zonenum, cxt->rmmap);
> +}
> +
> +static void mtdpstore_block_clear_removed(struct mtdpstore_context *cxt,
> +		loff_t off)
> +{
> +	u64 zonenum = div_u64(off, cxt->bo_info.dmesg_size);
> +	u32 zonecnt = cxt->mtd->erasesize / cxt->bo_info.dmesg_size;
> +
> +	while (zonecnt > 0) {
> +		clear_bit(zonenum, cxt->rmmap);
> +		zonenum++;
> +		zonecnt--;
> +	}
> +}
> +
> +static int mtdpstore_block_is_removed(struct mtdpstore_context *cxt,
> +		loff_t off)
> +{
> +	u64 zonenum = div_u64(off, cxt->bo_info.dmesg_size);
> +	u32 zonecnt = cxt->mtd->erasesize / cxt->bo_info.dmesg_size;
> +
> +	while (zonecnt > 0) {
> +		if (test_bit(zonenum, cxt->rmmap))
> +			return true;
> +		zonenum++;
> +		zonecnt--;
> +	}
> +	return false;
> +}
> +
> +static int mtdpstore_erase_do(struct mtdpstore_context *cxt, loff_t off)
> +{
> +	struct erase_info erase;
> +	int ret;
> +
> +	pr_debug("try to erase off 0x%llx\n", off);
> +	erase.len = cxt->mtd->erasesize;
> +	erase.addr = off;
> +	ret = mtd_erase(cxt->mtd, &erase);
> +	if (!ret)
> +		mtdpstore_block_clear_removed(cxt, off);
> +	else
> +		pr_err("erase of region [0x%llx, 0x%llx] on \"%s\" failed\n",
> +			(unsigned long long)erase.addr,
> +			(unsigned long long)erase.len, cxt->bo_info.device);
> +	return ret;
> +}
> +
> +/*
> + * called while removing file
> + *
> + * Avoiding over erasing, do erase only when all zones are removed or unused.
> + * Ensure to remove when unregister by reading, erasing and wrtiing back.
> + */
> +static ssize_t mtdpstore_erase(size_t size, loff_t off)
> +{
> +	struct mtdpstore_context *cxt = &oops_cxt;
> +
> +	if (mtdpstore_block_isbad(cxt, off))
> +		return -EIO;
> +
> +	mtdpstore_mark_unused(cxt, off);
> +
> +	if (likely(mtdpstore_block_is_used(cxt, off))) {
> +		mtdpstore_mark_removed(cxt, off);
> +		return 0;
> +	}
> +
> +	/* all zones are unused, erase it */
> +	off = ALIGN_DOWN(off, cxt->mtd->erasesize);
> +	return mtdpstore_erase_do(cxt, off);
> +}
> +
> +/*
> + * What is securety for mtdpstore?
> + * As there is no erase for panic case, we should ensure at least one zone
> + * is writable. Otherwise, panic write will be failed.
> + * If zone is used, write operation will return -ENEXT, which means that
> + * pstore/blk will try one by one until get a empty zone. So, it's no need
> + * to ensure next zone is empty, but at least one.
> + */
> +static int mtdpstore_security(struct mtdpstore_context *cxt, loff_t off)
> +{
> +	int ret = 0, i;
> +	u32 zonenum = (u32)div_u64(off, cxt->bo_info.dmesg_size);
> +	u32 zonecnt = (u32)div_u64(cxt->mtd->size, cxt->bo_info.dmesg_size);
> +	u32 blkcnt = (u32)div_u64(cxt->mtd->size, cxt->mtd->erasesize);
> +	u32 erasesize = cxt->mtd->erasesize;
> +
> +	for (i = 0; i < zonecnt; i++) {
> +		u32 num = (zonenum + i) % zonecnt;
> +
> +		/* found empty zone */
> +		if (!test_bit(num, cxt->usedmap))
> +			return 0;
> +	}
> +
> +	/* If there is no any empty zone, we have no way but to do erase */
> +	off = ALIGN_DOWN(off, erasesize);
> +	while (blkcnt--) {
> +		div64_u64_rem(off + erasesize, cxt->mtd->size, (u64 *)&off);
> +
> +		if (mtdpstore_block_isbad(cxt, off))
> +			continue;
> +
> +		ret = mtdpstore_erase_do(cxt, off);
> +		if (!ret) {
> +			mtdpstore_block_mark_unused(cxt, off);
> +			break;
> +		}
> +	}
> +
> +	if (ret)
> +		pr_err("all blocks bad!\n");
> +	pr_debug("end security\n");
> +	return ret;
> +}
> +
> +static ssize_t mtdpstore_write(const char *buf, size_t size, loff_t off)
> +{
> +	struct mtdpstore_context *cxt = &oops_cxt;
> +	size_t retlen;
> +	int ret;
> +
> +	if (mtdpstore_block_isbad(cxt, off))
> +		return -ENEXT;
> +
> +	/* zone is used, please try next one */
> +	if (mtdpstore_is_used(cxt, off))
> +		return -ENEXT;
> +
> +	pr_debug("try to write off 0x%llx size %zu\n", off, size);
> +	ret = mtd_write(cxt->mtd, off, size, &retlen, (u_char *)buf);
> +	if (ret < 0 || retlen != size) {
> +		pr_err("write failure at %lld (%zu of %zu written), err %d\n",
> +			off, retlen, size, ret);
> +		return -EIO;
> +	}
> +
> +	mtdpstore_mark_used(cxt, off);
> +
> +	mtdpstore_security(cxt, off);
> +	return retlen;
> +}
> +
> +/*
> + * All zones will be read as pstore/blk will read zone one by one when do
> + * recover.
> + */
> +static ssize_t mtdpstore_read(char *buf, size_t size, loff_t off)
> +{
> +	struct mtdpstore_context *cxt = &oops_cxt;
> +	size_t retlen;
> +	int ret;
> +
> +	if (mtdpstore_block_isbad(cxt, off))
> +		return -ENEXT;
> +
> +	pr_debug("try to read off 0x%llx size %zu\n", off, size);
> +	ret = mtd_read(cxt->mtd, off, size, &retlen, (u_char *)buf);
> +	if ((ret < 0 && !mtd_is_bitflip(ret)) || size != retlen) {
> +		pr_err("read failure at %lld (%zu of %zu read), err %d\n",
> +			off, retlen, size, ret);
> +		return -EIO;
> +	}
> +
> +	if (mtdpstore_is_empty(cxt, buf, size))
> +		mtdpstore_mark_unused(cxt, off);
> +	else
> +		mtdpstore_mark_used(cxt, off);
> +
> +	mtdpstore_security(cxt, off);
> +	return retlen;
> +}
> +
> +static ssize_t mtdpstore_panic_write(const char *buf, size_t size, loff_t off)
> +{
> +	struct mtdpstore_context *cxt = &oops_cxt;
> +	size_t retlen;
> +	int ret;
> +
> +	if (mtdpstore_panic_block_isbad(cxt, off))
> +		return -ENEXT;
> +
> +	/* zone is used, please try next one */
> +	if (mtdpstore_is_used(cxt, off))
> +		return -ENEXT;
> +
> +	ret = mtd_panic_write(cxt->mtd, off, size, &retlen, (u_char *)buf);
> +	if (ret < 0 || size != retlen) {
> +		pr_err("panic write failure at %lld (%zu of %zu read), err %d\n",
> +			off, retlen, size, ret);
> +		return -EIO;
> +	}
> +	mtdpstore_mark_used(cxt, off);
> +
> +	return retlen;
> +}
> +
> +static void mtdpstore_notify_add(struct mtd_info *mtd)
> +{
> +	int ret;
> +	struct mtdpstore_context *cxt = &oops_cxt;
> +	struct blkoops_info *info = &cxt->bo_info;
> +	unsigned long longcnt;
> +
> +	if (!strcmp(mtd->name, info->device))
> +		cxt->index = mtd->index;
> +
> +	if (mtd->index != cxt->index || cxt->index < 0)
> +		return;
> +
> +	pr_debug("found matching MTD device %s\n", mtd->name);
> +
> +	if (mtd->size < info->dmesg_size * 2) {
> +		pr_err("MTD partition %d not big enough\n", mtd->index);
> +		return;
> +	}
> +	if (mtd->erasesize < info->dmesg_size) {
> +		pr_err("eraseblock size of MTD partition %d too small\n",
> +			mtd->index);
> +		return;
> +	}
> +	if (unlikely(info->dmesg_size % mtd->writesize)) {
> +		pr_err("record size %lu KB must align to write size %d KB\n",
> +			info->dmesg_size / 1024,
> +			mtd->writesize / 1024);
> +		return;
> +	}
> +	if (unlikely(mtd->size > MTDPSTORE_MAX_MTD_SIZE)) {
> +		pr_err("mtd%d is too large (limit is %d MiB)\n",
> +			mtd->index,
> +			MTDPSTORE_MAX_MTD_SIZE / 1024 / 1024);
> +		return;
> +	}
> +
> +	longcnt = BITS_TO_LONGS(div_u64(mtd->size, info->dmesg_size));
> +	cxt->rmmap = kcalloc(longcnt, sizeof(long), GFP_KERNEL);
> +	cxt->usedmap = kcalloc(longcnt, sizeof(long), GFP_KERNEL);
> +
> +	longcnt = BITS_TO_LONGS(div_u64(mtd->size, mtd->erasesize));
> +	cxt->badmap = kcalloc(longcnt, sizeof(long), GFP_KERNEL);
> +
> +	cxt->bo_dev.total_size = mtd->size;
> +	/* just support dmesg right now */
> +	cxt->bo_dev.flags = BLKOOPS_DEV_SUPPORT_DMESG;
> +	cxt->bo_dev.read = mtdpstore_read;
> +	cxt->bo_dev.write = mtdpstore_write;
> +	cxt->bo_dev.erase = mtdpstore_erase;
> +	cxt->bo_dev.panic_write = mtdpstore_panic_write;
> +
> +	ret = blkoops_register_device(&cxt->bo_dev);
> +	if (ret) {
> +		pr_err("mtd%d register to blkoops failed\n", mtd->index);
> +		return;
> +	}
> +	cxt->mtd = mtd;
> +	pr_info("Attached to MTD device %d\n", mtd->index);
> +}
> +
> +static int mtdpstore_flush_removed_do(struct mtdpstore_context *cxt,
> +		loff_t off, size_t size)
> +{
> +	struct mtd_info *mtd = cxt->mtd;
> +	u_char *buf;
> +	int ret;
> +	size_t retlen;
> +	struct erase_info erase;
> +
> +	buf = kmalloc(mtd->erasesize, GFP_KERNEL);
> +	if (!buf)
> +		return -ENOMEM;
> +
> +	/* 1st. read to cache */
> +	ret = mtd_read(mtd, off, mtd->erasesize, &retlen, buf);
> +	if (ret || retlen != mtd->erasesize)
> +		goto free;
> +
> +	/* 2nd. erase block */
> +	erase.len = mtd->erasesize;
> +	erase.addr = off;
> +	ret = mtd_erase(mtd, &erase);
> +	if (ret)
> +		goto free;
> +
> +	/* 3rd. write back */
> +	while (size) {
> +		unsigned int zonesize = cxt->bo_info.dmesg_size;
> +
> +		/* remove must clear used bit */
> +		if (mtdpstore_is_used(cxt, off))
> +			mtd_write(mtd, off, zonesize, &retlen, buf);
> +
> +		off += zonesize;
> +		size -= min_t(unsigned int, zonesize, size);
> +	}
> +
> +free:
> +	kfree(buf);
> +	return ret;
> +}
> +
> +static int mtdpstore_flush_removed(struct mtdpstore_context *cxt)
> +{
> +	struct mtd_info *mtd = cxt->mtd;
> +	int ret;
> +	loff_t off;
> +	u32 blkcnt = (u32)div_u64(mtd->size, mtd->erasesize);
> +
> +	for (off = 0; blkcnt > 0; blkcnt--, off += mtd->erasesize) {
> +		ret = mtdpstore_block_is_removed(cxt, off);
> +		if (!ret) {
> +			off += mtd->erasesize;
> +			continue;
> +		}
> +
> +		ret = mtdpstore_flush_removed_do(cxt, off, mtd->erasesize);
> +		if (ret)
> +			return ret;
> +	}
> +	return 0;
> +}
> +
> +static void mtdpstore_notify_remove(struct mtd_info *mtd)
> +{
> +	struct mtdpstore_context *cxt = &oops_cxt;
> +
> +	if (mtd->index != cxt->index || cxt->index < 0)
> +		return;
> +
> +	mtdpstore_flush_removed(cxt);
> +
> +	blkoops_unregister_device(&cxt->bo_dev);
> +	kfree(cxt->badmap);
> +	kfree(cxt->usedmap);
> +	kfree(cxt->rmmap);
> +	cxt->mtd = NULL;
> +	cxt->index = -1;
> +}
> +
> +static struct mtd_notifier mtdpstore_notifier = {
> +	.add	= mtdpstore_notify_add,
> +	.remove	= mtdpstore_notify_remove,
> +};
> +
> +static int __init mtdpstore_init(void)
> +{
> +	int ret;
> +	struct mtdpstore_context *cxt = &oops_cxt;
> +	struct blkoops_info *info = &cxt->bo_info;
> +
> +	ret = blkoops_info(info);
> +	if (unlikely(ret))
> +		return ret;
> +
> +	if (strlen(info->device) == 0) {
> +		pr_err("mtd device must be supplied\n");
> +		return -EINVAL;
> +	}
> +	if (!info->dmesg_size) {
> +		pr_err("no recorder enabled\n");
> +		return -EINVAL;
> +	}
> +
> +	/* Setup the MTD device to use */
> +	ret = kstrtoint((char *)info->device, 0, &cxt->index);
> +	if (ret)
> +		cxt->index = -1;
> +
> +	register_mtd_user(&mtdpstore_notifier);
> +	return 0;
> +}
> +module_init(mtdpstore_init);
> +
> +static void __exit mtdpstore_exit(void)
> +{
> +	unregister_mtd_user(&mtdpstore_notifier);
> +}
> +module_exit(mtdpstore_exit);
> +
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("WeiXiong Liao ");
> +MODULE_DESCRIPTION("MTD Oops/Panic console logger/driver");

-- 
Regards
Vignesh