Date: Tue, 26 Apr 2016 03:57:10 -0400 (EDT)
From: Pankaj Gupta
To: Jason Wang
Cc: mst@redhat.com, kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] vhost: lockless enqueuing
Message-ID: <19697601.46250110.1461657430875.JavaMail.zimbra@redhat.com>
In-Reply-To: <571F1330.7030504@redhat.com>
References: <1461636873-45335-1-git-send-email-jasowang@redhat.com>
            <1461636873-45335-2-git-send-email-jasowang@redhat.com>
            <2033086948.46236145.1461651858100.JavaMail.zimbra@redhat.com>
            <571F1330.7030504@redhat.com>

>
>
>
> On 04/26/2016 02:24 PM, Pankaj Gupta wrote:
> > Hi Jason,
> >
> > Overall the patches look good. Just one doubt I have, below:
> >> We use a spinlock to synchronize the work list now, which may cause
> >> unnecessary contention. So this patch switches to llist to remove
> >> this contention. Pktgen tests show about 5% improvement:
> >>
> >> Before:
> >> ~1300000 pps
> >> After:
> >> ~1370000 pps
> >>
> >> Signed-off-by: Jason Wang
> >> ---
> >>  drivers/vhost/vhost.c | 52 +++++++++++++++++++++++++--------------------------
> >>  drivers/vhost/vhost.h |  7 ++++---
> >>  2 files changed, 29 insertions(+), 30 deletions(-)
> > [...]
>
> >> -	if (work) {
> >> +	node = llist_del_all(&dev->work_list);
> >> +	if (!node)
> >> +		schedule();
> >> +
> >> +	node = llist_reverse_order(node);
> > Can we avoid the llist reverse here?
> >
>
> Probably not. This is because:
>
> - we should process the work in exactly the same order as it was queued,
>   otherwise flush won't work
> - llist can only add a node to the head of the list.

Got it. Thanks,

>
> Thanks
>
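
For anyone reading along, here is a minimal, hypothetical sketch of the pattern being
discussed; it is not the vhost code itself, and the sketch_* names are made up for
illustration. Producers enqueue with llist_add(), which is lock-free head insertion
(LIFO); the worker detaches the whole list in one shot with llist_del_all() and calls
llist_reverse_order() before processing, so entries run in the order they were submitted.

/*
 * Hypothetical sketch of the lock-less work list pattern discussed above.
 * Not the actual vhost code; sketch_* names are invented for illustration.
 */
#include <linux/llist.h>

struct sketch_work {
	struct llist_node node;
	void (*fn)(struct sketch_work *work);
};

static LLIST_HEAD(sketch_work_list);

/* Producer side: lock-less enqueue, pushes to the head of the list. */
static void sketch_queue_work(struct sketch_work *work)
{
	llist_add(&work->node, &sketch_work_list);
}

/* Worker side: detach everything, restore submission order, then run it. */
static void sketch_run_work(void)
{
	struct llist_node *node;
	struct sketch_work *work, *work_next;

	node = llist_del_all(&sketch_work_list);
	if (!node)
		return;		/* nothing queued */

	/* llist_add() inserts at the head, so reverse to get FIFO order. */
	node = llist_reverse_order(node);

	llist_for_each_entry_safe(work, work_next, node, node)
		work->fn(work);
}

The reversal is an O(n) walk over a list the worker already owns privately, so it adds
no contention; that is why it is a reasonable price for keeping the enqueue path
spinlock-free.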