From: Minchan Kim <minchan@kernel.org>
To: Andrew Morton
Cc: Sergey Senozhatsky, Minchan Kim
Subject: [PATCH 5/5] zram: introduce zram data accessor
Date: Mon, 3 Apr 2017 14:17:33 +0900
Message-ID: <1491196653-7388-6-git-send-email-minchan@kernel.org>
In-Reply-To: <1491196653-7388-1-git-send-email-minchan@kernel.org>
References: <1491196653-7388-1-git-send-email-minchan@kernel.org>

With the element field, I sometimes got confused between handle access
and element access. It might be my bad, but I think it's time to
introduce accessors to save future idiots like me.

This patch is just a clean-up, so it shouldn't change any behavior.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 drivers/block/zram/zram_drv.c | 33 ++++++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index fdb73222841d..c3171e5aa582 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -57,6 +57,16 @@ static inline struct zram *dev_to_zram(struct device *dev)
 	return (struct zram *)dev_to_disk(dev)->private_data;
 }
 
+static unsigned long zram_get_handle(struct zram *zram, u32 index)
+{
+	return zram->table[index].handle;
+}
+
+static void zram_set_handle(struct zram *zram, u32 index, unsigned long handle)
+{
+	zram->table[index].handle = handle;
+}
+
 /* flag operations require table entry bit_spin_lock() being held */
 static int zram_test_flag(struct zram *zram, u32 index,
 			enum zram_pageflags flag)
@@ -82,9 +92,9 @@ static inline void zram_set_element(struct zram *zram, u32 index,
 	zram->table[index].element = element;
 }
 
-static inline void zram_clear_element(struct zram *zram, u32 index)
+static unsigned long zram_get_element(struct zram *zram, u32 index)
 {
-	zram->table[index].element = 0;
+	return zram->table[index].element;
 }
 
 static size_t zram_get_obj_size(struct zram *zram, u32 index)
@@ -428,13 +438,14 @@ static bool zram_special_page_read(struct zram *zram, u32 index,
 				unsigned int offset, unsigned int len)
 {
 	zram_slot_lock(zram, index);
-	if (unlikely(!zram->table[index].handle) ||
-			zram_test_flag(zram, index, ZRAM_SAME)) {
+	if (unlikely(!zram_get_handle(zram, index) ||
+			zram_test_flag(zram, index, ZRAM_SAME))) {
 		void *mem;
 
 		zram_slot_unlock(zram, index);
 		mem = kmap_atomic(page);
-		zram_fill_page(mem + offset, len, zram->table[index].element);
+		zram_fill_page(mem + offset, len,
+				zram_get_element(zram, index));
 		kunmap_atomic(mem);
 		return true;
 	}
@@ -473,7 +484,7 @@ static void zram_meta_free(struct zram *zram, u64 disksize)
 
 	/* Free all pages that are still in this zram device */
 	for (index = 0; index < num_pages; index++) {
-		unsigned long handle = zram->table[index].handle;
+		unsigned long handle = zram_get_handle(zram, index);
 		/*
 		 * No memory is allocated for same element filled pages.
 		 * Simply clear same page flag.
@@ -513,7 +524,7 @@ static bool zram_meta_alloc(struct zram *zram, u64 disksize)
  */
 static void zram_free_page(struct zram *zram, size_t index)
 {
-	unsigned long handle = zram->table[index].handle;
+	unsigned long handle = zram_get_handle(zram, index);
 
 	/*
 	 * No memory is allocated for same element filled pages.
@@ -521,7 +532,7 @@ static void zram_free_page(struct zram *zram, size_t index)
 	 */
 	if (zram_test_flag(zram, index, ZRAM_SAME)) {
 		zram_clear_flag(zram, index, ZRAM_SAME);
-		zram_clear_element(zram, index);
+		zram_set_element(zram, index, 0);
 		atomic64_dec(&zram->stats.same_pages);
 		return;
 	}
@@ -535,7 +546,7 @@ static void zram_free_page(struct zram *zram, size_t index)
 			&zram->stats.compr_data_size);
 	atomic64_dec(&zram->stats.pages_stored);
 
-	zram->table[index].handle = 0;
+	zram_set_handle(zram, index, 0);
 	zram_set_obj_size(zram, index, 0);
 }
 
@@ -550,7 +561,7 @@ static int zram_decompress_page(struct zram *zram, struct page *page, u32 index)
 		return 0;
 
 	zram_slot_lock(zram, index);
-	handle = zram->table[index].handle;
+	handle = zram_get_handle(zram, index);
 	size = zram_get_obj_size(zram, index);
 
 	src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
@@ -721,7 +732,7 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
 	 */
 	zram_slot_lock(zram, index);
 	zram_free_page(zram, index);
-	zram->table[index].handle = handle;
+	zram_set_handle(zram, index, handle);
 	zram_set_obj_size(zram, index, comp_len);
 	zram_slot_unlock(zram, index);
-- 
2.7.4
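
P.S. For readers new to the pattern: the point of the change is that every
slot access now goes through a single named helper, so a typo can no longer
silently read .element where .handle was meant. Below is a minimal
standalone sketch of the same accessor idea; the struct and function names
are simplified stand-ins for illustration, not the real zram structures,
and it builds as a plain userspace C program:

	#include <stdio.h>

	/* Simplified stand-in for struct zram_table_entry. */
	struct table_entry {
		unsigned long handle;	/* allocator handle for stored data */
		unsigned long element;	/* fill pattern for same-element pages */
	};

	static struct table_entry table[16];

	/* Accessors: the only paths that touch the fields directly. */
	static unsigned long get_handle(unsigned int index)
	{
		return table[index].handle;
	}

	static void set_handle(unsigned int index, unsigned long handle)
	{
		table[index].handle = handle;
	}

	static unsigned long get_element(unsigned int index)
	{
		return table[index].element;
	}

	static void set_element(unsigned int index, unsigned long element)
	{
		table[index].element = element;
	}

	int main(void)
	{
		set_handle(3, 0xabcd);
		set_element(5, 0xffffffff);	/* a "same element" pattern */
		printf("handle=%lx element=%lx\n",
			get_handle(3), get_element(5));
		return 0;
	}

Since the helpers are one-line wrappers, the compiler can inline them, so
the clean-up shouldn't cost anything at runtime.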