From: schumaker.anna@gmail.com
X-Google-Original-From: Anna.Schumaker@Netapp.com
To: linux-nfs@vger.kernel.org
Cc: Anna.Schumaker@Netapp.com
Subject: [PATCH v4 09/10] SUNRPC: Add an xdr_align_data() function
Date: Mon, 17 Aug 2020 12:53:26 -0400
Message-Id: <20200817165327.354181-10-Anna.Schumaker@Netapp.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200817165327.354181-1-Anna.Schumaker@Netapp.com>
References: <20200817165327.354181-1-Anna.Schumaker@Netapp.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Anna Schumaker <Anna.Schumaker@Netapp.com>

For now, this function simply aligns the data at the beginning of the pages.
This can eventually be expanded to shift data to the correct offsets when
we're ready.

Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
---
 include/linux/sunrpc/xdr.h |   1 +
 net/sunrpc/xdr.c           | 121 +++++++++++++++++++++++++++++++++++++
 2 files changed, 122 insertions(+)

diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
index de1f301f4864..ebfc562f07a1 100644
--- a/include/linux/sunrpc/xdr.h
+++ b/include/linux/sunrpc/xdr.h
@@ -252,6 +252,7 @@ extern __be32 *xdr_inline_decode(struct xdr_stream *xdr, size_t nbytes);
 extern unsigned int xdr_read_pages(struct xdr_stream *xdr, unsigned int len);
 extern void xdr_enter_page(struct xdr_stream *xdr, unsigned int len);
 extern int xdr_process_buf(struct xdr_buf *buf, unsigned int offset, unsigned int len, int (*actor)(struct scatterlist *, void *), void *data);
+extern uint64_t xdr_align_data(struct xdr_stream *, uint64_t, uint32_t);
 extern uint64_t xdr_expand_hole(struct xdr_stream *, uint64_t, uint64_t);
 
 /**
diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
index 24baf052e6e6..e799cbfe6b5a 100644
--- a/net/sunrpc/xdr.c
+++ b/net/sunrpc/xdr.c
@@ -19,6 +19,9 @@
 #include <linux/bvec.h>
 #include <trace/events/sunrpc.h>
 
+static void _copy_to_pages(struct page **, size_t, const char *, size_t);
+
+
 /*
  * XDR functions for basic NFS types
  */
@@ -201,6 +204,88 @@ EXPORT_SYMBOL_GPL(xdr_inline_pages);
  * Helper routines for doing 'memmove' like operations on a struct xdr_buf
  */
 
+/**
+ * _shift_data_left_pages
+ * @pages: vector of pages containing both the source and dest memory area.
+ * @pgto_base: page vector address of destination
+ * @pgfrom_base: page vector address of source
+ * @len: number of bytes to copy
+ *
+ * Note: the addresses pgto_base and pgfrom_base are both calculated in
+ *       the same way:
+ *            if a memory area starts at byte 'base' in page 'pages[i]',
+ *            then its address is given as (i << PAGE_SHIFT) + base
+ * Also note: pgto_base must be < pgfrom_base, but the memory areas
+ *            they point to may overlap.
+ */
+static void
+_shift_data_left_pages(struct page **pages, size_t pgto_base,
+			size_t pgfrom_base, size_t len)
+{
+	struct page **pgfrom, **pgto;
+	char *vfrom, *vto;
+	size_t copy;
+
+	BUG_ON(pgfrom_base <= pgto_base);
+
+	pgto = pages + (pgto_base >> PAGE_SHIFT);
+	pgfrom = pages + (pgfrom_base >> PAGE_SHIFT);
+
+	pgto_base &= ~PAGE_MASK;
+	pgfrom_base &= ~PAGE_MASK;
+
+	do {
+		if (pgto_base >= PAGE_SIZE) {
+			pgto_base = 0;
+			pgto++;
+		}
+		if (pgfrom_base >= PAGE_SIZE) {
+			pgfrom_base = 0;
+			pgfrom++;
+		}
+
+		copy = len;
+		if (copy > (PAGE_SIZE - pgto_base))
+			copy = PAGE_SIZE - pgto_base;
+		if (copy > (PAGE_SIZE - pgfrom_base))
+			copy = PAGE_SIZE - pgfrom_base;
+
+		vto = kmap_atomic(*pgto);
+		if (*pgto != *pgfrom) {
+			vfrom = kmap_atomic(*pgfrom);
+			memcpy(vto + pgto_base, vfrom + pgfrom_base, copy);
+			kunmap_atomic(vfrom);
+		} else
+			memmove(vto + pgto_base, vto + pgfrom_base, copy);
+		flush_dcache_page(*pgto);
+		kunmap_atomic(vto);
+
+		pgto_base += copy;
+		pgfrom_base += copy;
+
+	} while ((len -= copy) != 0);
+}
+
+static void
+_shift_data_left_tail(struct xdr_buf *buf, unsigned int pgto, size_t len)
+{
+	struct kvec *tail = buf->tail;
+
+	if (len > tail->iov_len)
+		len = tail->iov_len;
+
+	_copy_to_pages(buf->pages,
+		       buf->page_base + pgto,
+		       (char *)tail->iov_base,
+		       len);
+	tail->iov_len -= len;
+
+	if (tail->iov_len > 0)
+		memmove((char *)tail->iov_base,
+			tail->iov_base + len,
+			tail->iov_len);
+}
+
 /**
  * _shift_data_right_pages
  * @pages: vector of pages containing both the source and dest memory area.
@@ -1177,6 +1262,42 @@ unsigned int xdr_read_pages(struct xdr_stream *xdr, unsigned int len)
 }
 EXPORT_SYMBOL_GPL(xdr_read_pages);
 
+uint64_t xdr_align_data(struct xdr_stream *xdr, uint64_t offset, uint32_t length)
+{
+	struct xdr_buf *buf = xdr->buf;
+	unsigned int from, bytes;
+	unsigned int shift = 0;
+
+	if ((offset + length) < offset ||
+	    (offset + length) > buf->page_len)
+		length = buf->page_len - offset;
+
+	xdr_realign_pages(xdr);
+	from = xdr_page_pos(xdr);
+	bytes = xdr->nwords << 2;
+	if (length < bytes)
+		bytes = length;
+
+	/* Move page data to the left */
+	if (from > offset) {
+		shift = min_t(unsigned int, bytes, buf->page_len - from);
+		_shift_data_left_pages(buf->pages,
+				       buf->page_base + offset,
+				       buf->page_base + from,
+				       shift);
+		bytes -= shift;
+
+		/* Move tail data into the pages, if necessary */
+		if (bytes > 0)
+			_shift_data_left_tail(buf, offset + shift, bytes);
+	}
+
+	xdr->nwords -= XDR_QUADLEN(length);
+	xdr_set_page(xdr, from + length, PAGE_SIZE);
+	return length;
+}
+EXPORT_SYMBOL_GPL(xdr_align_data);
+
 uint64_t xdr_expand_hole(struct xdr_stream *xdr, uint64_t offset, uint64_t length)
 {
 	struct xdr_buf *buf = xdr->buf;
-- 
2.28.0
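
[Illustration only, not part of this series: a minimal, hypothetical sketch of how a
READ_PLUS-style decoder might consume xdr_align_data(). The function
example_decode_data_segment(), its read_offset parameter, and the segment layout are
invented for this example; only xdr_align_data(), xdr_inline_decode(), xdr_decode_hyper(),
and be32_to_cpup() are existing interfaces.]

	#include <linux/errno.h>
	#include <linux/sunrpc/xdr.h>

	static int example_decode_data_segment(struct xdr_stream *xdr,
					       u64 read_offset)
	{
		u64 seg_offset;
		u32 seg_count;
		__be32 *p;

		/* Segment header: 64-bit file offset followed by a 32-bit byte count. */
		p = xdr_inline_decode(xdr, 8 + 4);
		if (unlikely(!p))
			return -EIO;
		p = xdr_decode_hyper(p, &seg_offset);
		seg_count = be32_to_cpup(p);

		/*
		 * Shift the payload left so it starts at the segment's offset
		 * relative to the start of the read; the return value is how
		 * many bytes of the segment were actually received.
		 */
		if (xdr_align_data(xdr, seg_offset - read_offset, seg_count) < seg_count)
			return -EAGAIN;
		return 0;
	}

The sketch assumes the caller wants each segment positioned at its file-relative offset
within the reply's page vector, which is the "correct offsets" case the changelog says the
function can later be expanded to cover; for now the shift only moves data toward the start
of the pages.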