Date: Thu, 5 Sep 2019 08:06:27 +0200
From: Christoph Hellwig
To: David Rientjes
Cc: Tom Lendacky, Brijesh Singh, Christoph Hellwig, Jens Axboe, Ming Lei,
	Peter Gonda, Jianxiong Gao, linux-kernel@vger.kernel.org,
	x86@kernel.org, iommu@lists.linux-foundation.org
Subject: Re: [bug] __blk_mq_run_hw_queue suspicious rcu usage
Message-ID: <20190905060627.GA1753@lst.de>

On Wed, Sep 04, 2019 at 02:40:44PM -0700, David Rientjes wrote:
> Hi Christoph, Jens, and Ming,
>
> While booting a 5.2 SEV-enabled guest we have encountered the following
> WARNING that is followed up by a BUG because we are in atomic context
> while trying to call
> set_memory_decrypted:

Well, this really is an x86 / DMA API issue, unfortunately.  Drivers are
allowed to do GFP_ATOMIC DMA allocations under locks / RCU critical
sections and from interrupts, and it seems like the SEV case can't handle
that.  We have some semi-generic code in kernel/dma that keeps a fixed-size
pool for non-coherent platforms with similar issues, which we could try to
wire up, but I wonder if there is a better way to handle this, so I've
added Tom and the x86 maintainers.

Now, independent of that issue, using DMA coherent memory for the nvme
PRPs/SGLs doesn't actually feel very optimal.  We could just do normal
kmalloc allocations and sync them to the device and back.  I wonder if we
should create some general mempool-like helpers for that.
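
To illustrate the first point, here is a completely untested sketch of the
pattern the DMA API allows and that SEV then trips over (struct foo_dev and
foo_alloc_desc_atomic are made-up names, not taken from any real driver):

#include <linux/dma-mapping.h>
#include <linux/spinlock.h>

struct foo_dev {
	struct device	*dev;
	spinlock_t	lock;
};

static void *foo_alloc_desc_atomic(struct foo_dev *fd, size_t size,
				   dma_addr_t *dma)
{
	unsigned long flags;
	void *vaddr;

	spin_lock_irqsave(&fd->lock, flags);
	/* legal per the DMA API: GFP_ATOMIC coherent allocation in atomic context */
	vaddr = dma_alloc_coherent(fd->dev, size, dma, GFP_ATOMIC);
	/*
	 * ... but on an SEV guest this path ends up needing
	 * set_memory_decrypted(), which may sleep, hence the splat above.
	 */
	spin_unlock_irqrestore(&fd->lock, flags);
	return vaddr;
}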
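
And for the second point, an equally untested sketch of what I mean by a
plain kmalloc allocation with explicit syncs for the PRP/SGL pages --
struct prp_buf and these helpers are hypothetical, not an existing API:

#include <linux/dma-mapping.h>
#include <linux/slab.h>

struct prp_buf {
	void		*cpu;	/* kmalloc'ed PRP/SGL list */
	dma_addr_t	dma;	/* bus address handed to the device */
	size_t		len;
};

static int prp_buf_alloc(struct device *dev, struct prp_buf *buf,
			 size_t len, gfp_t gfp)
{
	buf->cpu = kmalloc(len, gfp);
	if (!buf->cpu)
		return -ENOMEM;
	buf->dma = dma_map_single(dev, buf->cpu, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, buf->dma)) {
		kfree(buf->cpu);
		return -ENOMEM;
	}
	buf->len = len;
	return 0;
}

/* after the CPU has filled in the PRP/SGL entries, hand them to the device */
static void prp_buf_sync_for_device(struct device *dev, struct prp_buf *buf)
{
	dma_sync_single_for_device(dev, buf->dma, buf->len, DMA_TO_DEVICE);
}

static void prp_buf_free(struct device *dev, struct prp_buf *buf)
{
	dma_unmap_single(dev, buf->dma, buf->len, DMA_TO_DEVICE);
	kfree(buf->cpu);
}

A mempool-like wrapper around something like this could hide the mapping
and sync details from the individual drivers.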