Date: Thu, 17 Aug 2017 11:17:32 -0600
From: Jason Gunthorpe
To: Sebastian Andrzej Siewior
Cc: Ken Goldman, Haris Okanovic, linux-rt-users@vger.kernel.org, linux-kernel@vger.kernel.org, tpmdd-devel@lists.sourceforge.net, harisokn@gmail.com, julia.cartwright@ni.com, gratian.crisan@ni.com, scott.hartman@ni.com, chris.graf@ni.com, brad.mouring@ni.com, jonathan.david@ni.com, peterhuewe@gmx.de, tpmdd@selhorst.net, jarkko.sakkinen@linux.intel.com, eric.gardiner@ni.com
Subject: Re: [tpmdd-devel] [PATCH v2] tpm_tis: fix stall after iowrite*()s
Message-ID: <20170817171732.GA22792@obsidianresearch.com>
References: <20170804215651.29247-1-haris.okanovic@ni.com> <20170815201308.20024-1-haris.okanovic@ni.com> <13741b28-1b5c-de55-3945-e05911e5a4e2@linux.vnet.ibm.com> <20170817103807.ubrbylnud6wxod3s@linutronix.de>
In-Reply-To: <20170817103807.ubrbylnud6wxod3s@linutronix.de>

On Thu, Aug 17, 2017 at 12:38:07PM +0200, Sebastian Andrzej Siewior wrote:
> > I worry a bit about "appears to fix". It seems odd that the TPM device
> > driver would be the first code to uncover this. Can anyone confirm that
> > the chipset does indeed have this bug?
>
> What Haris says makes sense. It is just that not all architectures
> accumulate/batch writes to HW.

It doesn't seem that odd to me. In modern Intel chipsets the physical
LPC bus is used for very little: maybe some flash, possibly a Winbond
Super I/O at worst, plus the TPM.

I can't confirm what Intel has done, but if writes are posted, then this
is not a 'bug'. It is expected operation for a PCI/LPC bridge device to
have an ordered queue of posted writes, and thus higher latency when
processing reads, due to the ordering requirements.

Other drivers may not see it because most LPC usages are not
write-heavy, or they use I/O instructions, which are not posted.

I can confirm that my ARM systems with a custom PCI-LPC bridge have
exactly the same problem, and that a read-back (readl) is the only
solution (see the sketch at the end of this mail). This is because
writes to LPC are posted over PCI and are buffered in the root complex,
the device end port, and internally in the LPC bridge. Since they are
posted, there is no way for the CPU to know when they complete, and
therefore when it would be 'low latency' to issue a read.

> So powerpc (for instance) has a sync operation after each write to HW.
> I am wondering if we could need something like that on x86.

Even on something like PPC, 'sync' is not defined to globally flush
posted writes, and will not help. wmb() is probably similar.

Jason
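
For reference, the read-back idiom under discussion looks roughly like
the sketch below. A read from the device is non-posted, so it cannot
complete until every posted write queued ahead of it has been flushed
out to the device; issuing one right after the write bounds the latency
of whatever access comes next. This is a minimal sketch, not the actual
patch under review; the function and register names (tpm_write_flushed,
reg) are hypothetical.

    #include <linux/io.h>

    /*
     * Write one byte to a memory-mapped register, then read it (or any
     * register on the same device) back.  The iowrite8() may be posted
     * and buffered anywhere along the PCI/LPC path; the ioread8() is
     * non-posted, so it stalls the CPU until all prior posted writes
     * to this device have actually landed.
     */
    static void tpm_write_flushed(void __iomem *regs, u32 reg, u8 val)
    {
            iowrite8(val, regs + reg);
            (void)ioread8(regs + reg);
    }

The cost is one extra LPC read per write, paid at a point the driver
chooses, rather than an unpredictable stall on the next timing-sensitive
read.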