Date: Mon, 9 Jun 2014 18:40:53 +0800
Subject: Re: [PATCH v6] NVMe: conversion to blk-mq
From: Ming Lei
To: Matias Bjørling
Cc: Matthew Wilcox, Keith Busch, "Sam Bradshaw (sbradshaw)", Jens Axboe, Linux Kernel Mailing List, linux-nvme

On Mon, Jun 9, 2014 at 3:53 PM, Ming Lei wrote:
>
> After pulling from your tree, the problem still persists.
>
> I tested NVMe over qemu, and both the linus and next trees
> work well with the qemu NVMe device.

One problem I found is that rq->start_time isn't set after you
cleared IO_STAT intentionally, so the request can't be requeued
any more. Once the request can be requeued, the problem becomes
a hanging request.

The root cause is that the device returns NVME_INTERNAL_DEV_ERROR
(0x6) with your conversion patch.

Thanks,
--
Ming Lei