From: Jason Wang <jasowang@redhat.com>
To: "Michael S. Tsirkin"
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH net-next 1/8] ptr_ring: introduce batch dequeuing
Date: Thu, 23 Mar 2017 13:33:28 +0800
Message-ID: <516ec084-6d68-fba7-eea1-65d6746bb957@redhat.com>
In-Reply-To: <20170322153638-mutt-send-email-mst@kernel.org>
References: <1490069087-4783-1-git-send-email-jasowang@redhat.com>
 <1490069087-4783-2-git-send-email-jasowang@redhat.com>
 <20170322153638-mutt-send-email-mst@kernel.org>

On 2017-03-22 21:43, Michael S. Tsirkin wrote:
> On Tue, Mar 21, 2017 at 12:04:40PM +0800, Jason Wang wrote:
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>>  include/linux/ptr_ring.h | 65 ++++++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 65 insertions(+)
>>
>> diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
>> index 6c70444..4771ded 100644
>> --- a/include/linux/ptr_ring.h
>> +++ b/include/linux/ptr_ring.h
>> @@ -247,6 +247,22 @@ static inline void *__ptr_ring_consume(struct ptr_ring *r)
>>  	return ptr;
>>  }
>>
>> +static inline int __ptr_ring_consume_batched(struct ptr_ring *r,
>> +					     void **array, int n)
>> +{
>> +	void *ptr;
>> +	int i = 0;
>> +
>> +	while (i < n) {
>> +		ptr = __ptr_ring_consume(r);
>> +		if (!ptr)
>> +			break;
>> +		array[i++] = ptr;
>> +	}
>> +
>> +	return i;
>> +}
>> +
>>  /*
>>   * Note: resize (below) nests producer lock within consumer lock, so if you
>>   * call this in interrupt or BH context, you must disable interrupts/BH when
>
> This ignores the comment above that function:
>
> /* Note: callers invoking this in a loop must use a compiler barrier,
>  * for example cpu_relax().
>  */

Yes, __ptr_ring_swap_queue() ignores this too.

> Also - it looks like it shouldn't matter if reads are reordered but I
> wonder. Thoughts? Including some reasoning about it in commit log
> would be nice.

Yes, I think it doesn't matter in this case; it matters only for
batched producing.
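To make the quoted comment concrete: the barrier it asks for lives on
the caller's side. A busy-polling consumer would look roughly like the
sketch below; the poll loop and the batch size of 16 are made up for
illustration and are not part of this series:

	/* r is the struct ptr_ring * being polled. */
	void *batch[16];
	int n;

	for (;;) {
		n = __ptr_ring_consume_batched(r, batch, 16);
		if (n)
			break;
		/* cpu_relax() includes a compiler barrier, so the ring
		 * slot is re-read on every iteration instead of the
		 * load being hoisted out of the loop.
		 */
		cpu_relax();
	}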
Thanks

>
>> @@ -297,6 +313,55 @@ static inline void *ptr_ring_consume_bh(struct ptr_ring *r)
>>  	return ptr;
>>  }
>>
>> +static inline int ptr_ring_consume_batched(struct ptr_ring *r,
>> +					   void **array, int n)
>> +{
>> +	int ret;
>> +
>> +	spin_lock(&r->consumer_lock);
>> +	ret = __ptr_ring_consume_batched(r, array, n);
>> +	spin_unlock(&r->consumer_lock);
>> +
>> +	return ret;
>> +}
>> +
>> +static inline int ptr_ring_consume_batched_irq(struct ptr_ring *r,
>> +					       void **array, int n)
>> +{
>> +	int ret;
>> +
>> +	spin_lock_irq(&r->consumer_lock);
>> +	ret = __ptr_ring_consume_batched(r, array, n);
>> +	spin_unlock_irq(&r->consumer_lock);
>> +
>> +	return ret;
>> +}
>> +
>> +static inline int ptr_ring_consume_batched_any(struct ptr_ring *r,
>> +					       void **array, int n)
>> +{
>> +	unsigned long flags;
>> +	int ret;
>> +
>> +	spin_lock_irqsave(&r->consumer_lock, flags);
>> +	ret = __ptr_ring_consume_batched(r, array, n);
>> +	spin_unlock_irqrestore(&r->consumer_lock, flags);
>> +
>> +	return ret;
>> +}
>> +
>> +static inline int ptr_ring_consume_batched_bh(struct ptr_ring *r,
>> +					      void **array, int n)
>> +{
>> +	int ret;
>> +
>> +	spin_lock_bh(&r->consumer_lock);
>> +	ret = __ptr_ring_consume_batched(r, array, n);
>> +	spin_unlock_bh(&r->consumer_lock);
>> +
>> +	return ret;
>> +}
>> +
>>  /* Cast to structure type and call a function without discarding from FIFO.
>>   * Function must return a value.
>>   * Callers must take consumer_lock.
>> --
>> 2.7.4
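For reference, a consumer in process context that wants softirqs
blocked would drain the ring along these lines. This is only a usage
sketch: the skb ring, the helper name and the batch size of 64 are
assumptions for illustration, not code from this series:

	static void drain_skb_ring(struct ptr_ring *ring)
	{
		void *batch[64];
		int i, n;

		/* One consumer_lock round trip covers the whole batch,
		 * instead of one lock/unlock per dequeued pointer.
		 */
		n = ptr_ring_consume_batched_bh(ring, batch, 64);
		for (i = 0; i < n; i++)
			kfree_skb(batch[i]);
	}

The win over calling ptr_ring_consume_bh() n times is exactly that
single lock round trip per batch; the per-pointer work inside the lock
is just the existing __ptr_ring_consume() path.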