Message-ID: <539B6D1A.3010602@fb.com>
Date: Fri, 13 Jun 2014 15:28:58 -0600
From: Jens Axboe
To: Keith Busch
Cc: Matias Bjørling, Matthew Wilcox, sbradshaw@micron.com, tom.leiming@gmail.com, hch@infradead.org, linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: Re: [PATCH v7] NVMe: conversion to blk-mq

On 06/13/2014 01:22 PM, Keith Busch wrote:
> One performance oddity we observe is that servicing the interrupt on the
> thread sibling of the core that submitted the I/O is the worst performing
> CPU you can choose; it's actually better to use a different core on the
> same node. At least that's true as long as you're not utilizing the CPUs
> for other work, so YMMV.

This doesn't match what I see here. I just ran some test cases, both sync
and at higher queue depth. For sync performance, the submitting core or its
thread sibling is the best choice, with other CPUs next; that is pretty
logical. For a more loaded run, the thread sibling ends up being a better
choice than the submitting core itself, since that core runs out of steam
(255K vs 275K here). And the thread sibling is still a marginally better
choice than some other core on the same node. That pretty much matches my
expectations of what the best mappings would be.
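
FWIW, this sort of placement comparison is straightforward to reproduce by
hand: pin the submitter to one CPU, then steer the queue's interrupt vector
to that core, to its thread sibling, or to another core on the same node,
measuring each case separately. Below is a rough helper sketch, not anything
from the patch under discussion; the IRQ number, CPU number, and NUMA node
are placeholders for whatever the test box actually has, and it assumes the
queue vector has already been looked up in /proc/interrupts.

#!/usr/bin/env python3
# Sketch: enumerate the three interrupt placements for a given submit CPU
# and steer the queue vector there. Needs root, and irqbalance should be
# stopped so it does not rewrite the affinity behind your back.

def read(path):
    with open(path) as f:
        return f.read().strip()

def expand_cpulist(s):
    # Expand a kernel cpulist like "0-3,8,10-11" into a list of ints.
    cpus = []
    for part in s.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

def thread_siblings(cpu):
    # Hyperthread siblings of `cpu`, e.g. "0,32" -> [0, 32].
    path = "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list" % cpu
    return expand_cpulist(read(path))

def node_cpus(node):
    # All CPUs on NUMA node `node`.
    return expand_cpulist(read("/sys/devices/system/node/node%d/cpulist" % node))

def set_irq_affinity(irq, cpu):
    # Steer interrupt `irq` (the nvme queue vector from /proc/interrupts)
    # to a single CPU.
    with open("/proc/irq/%d/smp_affinity_list" % irq, "w") as f:
        f.write(str(cpu))

if __name__ == "__main__":
    submit_cpu, irq, node = 0, 64, 0   # placeholders for the test system
    siblings = [c for c in thread_siblings(submit_cpu) if c != submit_cpu]
    others = [c for c in node_cpus(node)
              if c != submit_cpu and c not in siblings]
    print("submit cpu %d, sibling(s) %s, other same-node cores %s" %
          (submit_cpu, siblings, others))
    # Pick one placement per run, e.g. the thread sibling (assumes HT is on),
    # and drive the I/O from a job pinned to submit_cpu (fio cpus_allowed=0):
    set_irq_affinity(irq, siblings[0])

Run the pinned job once per placement and compare the numbers; those are
essentially the three cases discussed above.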