Subject: Re: [PATCH 2/5] zram: partial IO refactoring
From: Mika Penttilä
To: Minchan Kim, Andrew Morton
Cc: Sergey Senozhatsky
Date: Mon, 3 Apr 2017 08:52:33 +0300
In-Reply-To: <1491196653-7388-3-git-send-email-minchan@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org
Hi!

On 04/03/2017 08:17 AM, Minchan Kim wrote:
> For architectures with PAGE_SIZE > 4K, zram has supported partial IO.
> However, the mixed code for handling normal and partial IO is too messy
> and error-prone to extend with upcoming features, so this patch aims to
> clean up zram's IO handling functions.
>
> Signed-off-by: Minchan Kim
> ---
>  drivers/block/zram/zram_drv.c | 333 +++++++++++++++++++++++-------------------
>  1 file changed, 184 insertions(+), 149 deletions(-)
>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 28c2836f8c96..7938f4b98b01 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -45,6 +45,8 @@ static const char *default_compressor = "lzo";
>  /* Module params (documentation at end) */
>  static unsigned int num_devices = 1;
>  
> +static void zram_free_page(struct zram *zram, size_t index);
> +
>  static inline bool init_done(struct zram *zram)
>  {
>  	return zram->disksize;
> @@ -98,10 +100,17 @@ static void zram_set_obj_size(struct zram_meta *meta,
>  	meta->table[index].value = (flags << ZRAM_FLAG_SHIFT) | size;
>  }
>  
> +#if PAGE_SIZE != 4096
>  static inline bool is_partial_io(struct bio_vec *bvec)
>  {
>  	return bvec->bv_len != PAGE_SIZE;
>  }
> +#else

For a page size of 4096, bv_len can still be < 4096, so shouldn't partial
pages still be supported (uncompress before write, etc.)?
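To make the concern concrete, here is a small userspace model of the two
is_partial_io() variants (hypothetical code, not part of the patch; struct
bio_vec is reduced to the two fields that matter and PAGE_SIZE is assumed
to be 4096). With the #else stub compiled in, a sub-page bio_vec would be
classified as a full-page overwrite:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096

struct bio_vec {			/* minimal stand-in */
	unsigned int bv_len;
	unsigned int bv_offset;
};

/* Variant compiled when PAGE_SIZE != 4096 in the patch. */
static bool is_partial_io_generic(struct bio_vec *bvec)
{
	return bvec->bv_len != PAGE_SIZE;
}

/* Variant compiled when PAGE_SIZE == 4096 in the patch. */
static bool is_partial_io_stub(struct bio_vec *bvec)
{
	(void)bvec;
	return false;
}

int main(void)
{
	/* A hypothetical sub-page write: 512 bytes at offset 1024. */
	struct bio_vec bv = { .bv_len = 512, .bv_offset = 1024 };

	printf("generic: partial=%d\n", is_partial_io_generic(&bv));	/* 1 */
	printf("stub:    partial=%d\n", is_partial_io_stub(&bv));	/* 0 */
	return 0;
}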
> +static inline bool is_partial_io(struct bio_vec *bvec)
> +{
> +	return false;
> +}
> +#endif
>  
>  static void zram_revalidate_disk(struct zram *zram)
>  {
> @@ -191,18 +200,6 @@ static bool page_same_filled(void *ptr, unsigned long *element)
>  	return true;
>  }
>  
> -static void handle_same_page(struct bio_vec *bvec, unsigned long element)
> -{
> -	struct page *page = bvec->bv_page;
> -	void *user_mem;
> -
> -	user_mem = kmap_atomic(page);
> -	zram_fill_page(user_mem + bvec->bv_offset, bvec->bv_len, element);
> -	kunmap_atomic(user_mem);
> -
> -	flush_dcache_page(page);
> -}
> -
>  static ssize_t initstate_show(struct device *dev,
>  		struct device_attribute *attr, char *buf)
>  {
> @@ -418,6 +415,53 @@ static DEVICE_ATTR_RO(io_stat);
>  static DEVICE_ATTR_RO(mm_stat);
>  static DEVICE_ATTR_RO(debug_stat);
>  
> +static bool zram_special_page_read(struct zram *zram, u32 index,
> +				struct page *page,
> +				unsigned int offset, unsigned int len)
> +{
> +	struct zram_meta *meta = zram->meta;
> +
> +	bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
> +	if (unlikely(!meta->table[index].handle) ||
> +			zram_test_flag(meta, index, ZRAM_SAME)) {
> +		void *mem;
> +
> +		bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
> +		mem = kmap_atomic(page);
> +		zram_fill_page(mem + offset, len, meta->table[index].element);
> +		kunmap_atomic(mem);
> +		return true;
> +	}
> +	bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
> +
> +	return false;
> +}
> +
> +static bool zram_special_page_write(struct zram *zram, u32 index,
> +					struct page *page)
> +{
> +	unsigned long element;
> +	void *mem = kmap_atomic(page);
> +
> +	if (page_same_filled(mem, &element)) {
> +		struct zram_meta *meta = zram->meta;
> +
> +		kunmap_atomic(mem);
> +		/* Free memory associated with this sector now. */
> +		bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
> +		zram_free_page(zram, index);
> +		zram_set_flag(meta, index, ZRAM_SAME);
> +		zram_set_element(meta, index, element);
> +		bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
> +
> +		atomic64_inc(&zram->stats.same_pages);
> +		return true;
> +	}
> +	kunmap_atomic(mem);
> +
> +	return false;
> +}
> +
>  static void zram_meta_free(struct zram_meta *meta, u64 disksize)
>  {
>  	size_t num_pages = disksize >> PAGE_SHIFT;
> @@ -504,169 +548,104 @@ static void zram_free_page(struct zram *zram, size_t index)
>  	zram_set_obj_size(meta, index, 0);
>  }
>  
> -static int zram_decompress_page(struct zram *zram, char *mem, u32 index)
> +static int zram_decompress_page(struct zram *zram, struct page *page, u32 index)
>  {
> -	int ret = 0;
> -	unsigned char *cmem;
> -	struct zram_meta *meta = zram->meta;
> +	int ret;
>  	unsigned long handle;
>  	unsigned int size;
> +	void *src, *dst;
> +	struct zram_meta *meta = zram->meta;
> +
> +	if (zram_special_page_read(zram, index, page, 0, PAGE_SIZE))
> +		return 0;
>  
>  	bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
>  	handle = meta->table[index].handle;
>  	size = zram_get_obj_size(meta, index);
>  
> -	if (!handle || zram_test_flag(meta, index, ZRAM_SAME)) {
> -		bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
> -		zram_fill_page(mem, PAGE_SIZE, meta->table[index].element);
> -		return 0;
> -	}
> -
> -	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
> +	src = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
>  	if (size == PAGE_SIZE) {
> -		copy_page(mem, cmem);
> +		dst = kmap_atomic(page);
> +		copy_page(dst, src);
> +		kunmap_atomic(dst);
> +		ret = 0;
>  	} else {
>  		struct zcomp_strm *zstrm = zcomp_stream_get(zram->comp);
>  
> -		ret = zcomp_decompress(zstrm, cmem, size, mem);
> +		dst = kmap_atomic(page);
> +		ret = zcomp_decompress(zstrm, src, size, dst);
> +		kunmap_atomic(dst);
>  		zcomp_stream_put(zram->comp);
>  	}
>  	zs_unmap_object(meta->mem_pool, handle);
>  	bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
>  
>  	/* Should NEVER happen. Return bio error if it does. */
> -	if (unlikely(ret)) {
> +	if (unlikely(ret))
>  		pr_err("Decompression failed! err=%d, page=%u\n", ret, index);
> -		return ret;
> -	}
>  
> -	return 0;
> +	return ret;
>  }
>  
>  static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
> -			  u32 index, int offset)
> +				u32 index, int offset)
>  {
>  	int ret;
>  	struct page *page;
> -	unsigned char *user_mem, *uncmem = NULL;
> -	struct zram_meta *meta = zram->meta;
> -	page = bvec->bv_page;
>  
> -	bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
> -	if (unlikely(!meta->table[index].handle) ||
> -			zram_test_flag(meta, index, ZRAM_SAME)) {
> -		bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
> -		handle_same_page(bvec, meta->table[index].element);
> +	page = bvec->bv_page;
> +	if (zram_special_page_read(zram, index, page, bvec->bv_offset,
> +				bvec->bv_len))
>  		return 0;
> -	}
> -	bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
> -
> -	if (is_partial_io(bvec))
> -		/* Use a temporary buffer to decompress the page */
> -		uncmem = kmalloc(PAGE_SIZE, GFP_NOIO);
> -
> -	user_mem = kmap_atomic(page);
> -	if (!is_partial_io(bvec))
> -		uncmem = user_mem;
>  
> -	if (!uncmem) {
> -		pr_err("Unable to allocate temp memory\n");
> -		ret = -ENOMEM;
> -		goto out_cleanup;
> +	if (is_partial_io(bvec)) {
> +		/* Use a temporary buffer to decompress the page */
> +		page = alloc_page(GFP_NOIO|__GFP_HIGHMEM);
> +		if (!page)
> +			return -ENOMEM;
>  	}
>  
> -	ret = zram_decompress_page(zram, uncmem, index);
> -	/* Should NEVER happen. Return bio error if it does. */
> +	ret = zram_decompress_page(zram, page, index);
>  	if (unlikely(ret))
> -		goto out_cleanup;
> +		goto out;
>  
> -	if (is_partial_io(bvec))
> -		memcpy(user_mem + bvec->bv_offset, uncmem + offset,
> -			bvec->bv_len);
> +	if (is_partial_io(bvec)) {
> +		void *dst = kmap_atomic(bvec->bv_page);
> +		void *src = kmap_atomic(page);
>  
> -	flush_dcache_page(page);
> -	ret = 0;
> -out_cleanup:
> -	kunmap_atomic(user_mem);
> +		memcpy(dst + bvec->bv_offset, src + offset, bvec->bv_len);
> +		kunmap_atomic(src);
> +		kunmap_atomic(dst);
> +	}
> +out:
>  	if (is_partial_io(bvec))
> -		kfree(uncmem);
> +		__free_page(page);
> +
>  	return ret;
>  }
>  
> -static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> -			   int offset)
> +static int zram_compress(struct zram *zram, struct zcomp_strm **zstrm,
> +			struct page *page,
> +			unsigned long *out_handle, unsigned int *out_comp_len)
>  {
> -	int ret = 0;
> -	unsigned int clen;
> +	int ret;
> +	unsigned int comp_len;
> +	void *src;
>  	unsigned long handle = 0;
> -	struct page *page;
> -	unsigned char *user_mem, *cmem, *src, *uncmem = NULL;
>  	struct zram_meta *meta = zram->meta;
> -	struct zcomp_strm *zstrm = NULL;
> -	unsigned long alloced_pages;
> -	unsigned long element;
> -
> -	page = bvec->bv_page;
> -	if (is_partial_io(bvec)) {
> -		/*
> -		 * This is a partial IO. We need to read the full page
> -		 * before to write the changes.
> -		 */
> -		uncmem = kmalloc(PAGE_SIZE, GFP_NOIO);
> -		if (!uncmem) {
> -			ret = -ENOMEM;
> -			goto out;
> -		}
> -		ret = zram_decompress_page(zram, uncmem, index);
> -		if (ret)
> -			goto out;
> -	}
>  
>  compress_again:
> -	user_mem = kmap_atomic(page);
> -	if (is_partial_io(bvec)) {
> -		memcpy(uncmem + offset, user_mem + bvec->bv_offset,
> -			bvec->bv_len);
> -		kunmap_atomic(user_mem);
> -		user_mem = NULL;
> -	} else {
> -		uncmem = user_mem;
> -	}
> -
> -	if (page_same_filled(uncmem, &element)) {
> -		if (user_mem)
> -			kunmap_atomic(user_mem);
> -		/* Free memory associated with this sector now. */
> -		bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
> -		zram_free_page(zram, index);
> -		zram_set_flag(meta, index, ZRAM_SAME);
> -		zram_set_element(meta, index, element);
> -		bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
> -
> -		atomic64_inc(&zram->stats.same_pages);
> -		ret = 0;
> -		goto out;
> -	}
> -
> -	zstrm = zcomp_stream_get(zram->comp);
> -	ret = zcomp_compress(zstrm, uncmem, &clen);
> -	if (!is_partial_io(bvec)) {
> -		kunmap_atomic(user_mem);
> -		user_mem = NULL;
> -		uncmem = NULL;
> -	}
> +	src = kmap_atomic(page);
> +	ret = zcomp_compress(*zstrm, src, &comp_len);
> +	kunmap_atomic(src);
>  
>  	if (unlikely(ret)) {
>  		pr_err("Compression failed! err=%d\n", ret);
> -		goto out;
> +		return ret;
>  	}
>  
> -	src = zstrm->buffer;
> -	if (unlikely(clen > max_zpage_size)) {
> -		clen = PAGE_SIZE;
> -		if (is_partial_io(bvec))
> -			src = uncmem;
> -	}
> +	if (unlikely(comp_len > max_zpage_size))
> +		comp_len = PAGE_SIZE;
>  
>  	/*
>  	 * handle allocation has 2 paths:
> @@ -682,50 +661,70 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>  	 * from the slow path and handle has already been allocated.
>  	 */
>  	if (!handle)
> -		handle = zs_malloc(meta->mem_pool, clen,
> +		handle = zs_malloc(meta->mem_pool, comp_len,
>  				__GFP_KSWAPD_RECLAIM |
>  				__GFP_NOWARN |
>  				__GFP_HIGHMEM |
>  				__GFP_MOVABLE);
>  	if (!handle) {
>  		zcomp_stream_put(zram->comp);
> -		zstrm = NULL;
> -
>  		atomic64_inc(&zram->stats.writestall);
> -
> -		handle = zs_malloc(meta->mem_pool, clen,
> +		handle = zs_malloc(meta->mem_pool, comp_len,
>  				GFP_NOIO | __GFP_HIGHMEM |
>  				__GFP_MOVABLE);
> +		*zstrm = zcomp_stream_get(zram->comp);
>  		if (handle)
>  			goto compress_again;
> +		return -ENOMEM;
> +	}
>  
> -		pr_err("Error allocating memory for compressed page: %u, size=%u\n",
> -			index, clen);
> -		ret = -ENOMEM;
> -		goto out;
> +	*out_handle = handle;
> +	*out_comp_len = comp_len;
> +	return 0;
> +}
> +
> +static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
> +				u32 index, int offset)
> +{
> +	int ret;
> +	unsigned long handle;
> +	unsigned int comp_len;
> +	void *src, *dst;
> +	struct zcomp_strm *zstrm;
> +	unsigned long alloced_pages;
> +	struct zram_meta *meta = zram->meta;
> +	struct page *page = bvec->bv_page;
> +
> +	if (zram_special_page_write(zram, index, page))
> +		return 0;
> +
> +	zstrm = zcomp_stream_get(zram->comp);
> +	ret = zram_compress(zram, &zstrm, page, &handle, &comp_len);
> +	if (ret) {
> +		zcomp_stream_put(zram->comp);
> +		return ret;
>  	}
>  
>  	alloced_pages = zs_get_total_pages(meta->mem_pool);
>  	update_used_max(zram, alloced_pages);
>  
>  	if (zram->limit_pages && alloced_pages > zram->limit_pages) {
> +		zcomp_stream_put(zram->comp);
>  		zs_free(meta->mem_pool, handle);
> -		ret = -ENOMEM;
> -		goto out;
> +		return -ENOMEM;
>  	}
>  
> -	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_WO);
> +	dst = zs_map_object(meta->mem_pool, handle, ZS_MM_WO);
>  
> -	if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
> +	if (comp_len == PAGE_SIZE) {
>  		src = kmap_atomic(page);
> -		copy_page(cmem, src);
> +		copy_page(dst, src);
>  		kunmap_atomic(src);
>  	} else {
> -		memcpy(cmem, src, clen);
> +		memcpy(dst, zstrm->buffer, comp_len);
>  	}
>  
>  	zcomp_stream_put(zram->comp);
> -	zstrm = NULL;
>  	zs_unmap_object(meta->mem_pool, handle);
>  
>  	/*
> @@ -734,19 +733,54 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>  	 */
>  	bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
>  	zram_free_page(zram, index);
> -
>  	meta->table[index].handle = handle;
> -	zram_set_obj_size(meta, index, clen);
> +	zram_set_obj_size(meta, index, comp_len);
>  	bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
>  
>  	/* Update stats */
> -	atomic64_add(clen, &zram->stats.compr_data_size);
> +	atomic64_add(comp_len, &zram->stats.compr_data_size);
>  	atomic64_inc(&zram->stats.pages_stored);
> +	return 0;
> +}
> +
> +static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
> +				u32 index, int offset)
> +{
> +	int ret;
> +	struct page *page = NULL;
> +	void *src;
> +	struct bio_vec vec;
> +
> +	vec = *bvec;
> +	if (is_partial_io(bvec)) {
> +		void *dst;
> +		/*
> +		 * This is a partial IO. We need to read the full page
> +		 * before to write the changes.
> +		 */
> +		page = alloc_page(GFP_NOIO|__GFP_HIGHMEM);
> +		if (!page)
> +			return -ENOMEM;
> +
> +		ret = zram_decompress_page(zram, page, index);
> +		if (ret)
> +			goto out;
> +
> +		src = kmap_atomic(bvec->bv_page);
> +		dst = kmap_atomic(page);
> +		memcpy(dst + offset, src + bvec->bv_offset, bvec->bv_len);
> +		kunmap_atomic(dst);
> +		kunmap_atomic(src);
> +
> +		vec.bv_page = page;
> +		vec.bv_len = PAGE_SIZE;
> +		vec.bv_offset = 0;
> +	}
> +
> +	ret = __zram_bvec_write(zram, &vec, index, offset);
>  out:
> -	if (zstrm)
> -		zcomp_stream_put(zram->comp);
>  	if (is_partial_io(bvec))
> -		kfree(uncmem);
> +		__free_page(page);
>  	return ret;
>  }
>  
> @@ -802,6 +836,7 @@ static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
>  	if (!is_write) {
>  		atomic64_inc(&zram->stats.num_reads);
>  		ret = zram_bvec_read(zram, bvec, index, offset);
> +		flush_dcache_page(bvec->bv_page);
>  	} else {
>  		atomic64_inc(&zram->stats.num_writes);
>  		ret = zram_bvec_write(zram, bvec, index, offset);
>
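As an aside for readers following the refactoring, below is a hedged
userspace sketch of the read-modify-write scheme the new zram_bvec_write()
uses for partial IO (simplified: memcpy stands in for compression and
decompression, malloc() models alloc_page(), and the store is a single 4K
page). The point is the flow: decompress the whole old page into a bounce
page, splice the sub-page data into it, then write the full page through
the normal path:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

static unsigned char store[PAGE_SIZE];	/* models the stored object */

static void decompress_page(unsigned char *dst)
{
	memcpy(dst, store, PAGE_SIZE);	/* stand-in for zram_decompress_page() */
}

static void write_full_page(const unsigned char *src)
{
	memcpy(store, src, PAGE_SIZE);	/* stand-in for __zram_bvec_write() */
}

/* Partial write: read-modify-write a sub-range of one page. */
static int partial_write(const unsigned char *buf, size_t offset, size_t len)
{
	unsigned char *bounce = malloc(PAGE_SIZE);	/* models alloc_page() */

	if (!bounce)
		return -1;
	decompress_page(bounce);		/* read the full old page */
	memcpy(bounce + offset, buf, len);	/* splice in the new bytes */
	write_full_page(bounce);		/* store the whole page again */
	free(bounce);
	return 0;
}

int main(void)
{
	unsigned char data[512];

	memset(store, 'A', PAGE_SIZE);
	memset(data, 'B', sizeof(data));
	partial_write(data, 1024, sizeof(data));
	printf("%c %c %c\n", store[0], store[1024], store[2048]);	/* A B A */
	return 0;
}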