Subject: Re: [RFC][PATCH] bcmai: introduce AI driver
From: Rafał Miłecki
To: Michael Büsch
Cc: George Kashperko, Arnd Bergmann, Russell King, linux-wireless@vger.kernel.org, linux-kernel@vger.kernel.org, b43-dev@lists.infradead.org, Arend van Spriel, linuxdriverproject, linux-arm-kernel@lists.infradead.org, Larry Finger
Date: Thu, 7 Apr 2011 11:55:03 +0200

On 7 April 2011 09:54, Michael Büsch wrote:
> On Thu, 2011-04-07 at 02:54 +0200, Rafał Miłecki wrote:
>> On 7 April 2011 02:00, George Kashperko wrote:
>> > For a PCI function description, take a look at the PCI specs or the PCI
>> > configuration space description (e.g.
>> > http://en.wikipedia.org/wiki/PCI_configuration_space).
>> >
>> > Sorry for the misleading shorthand: "w11" means the bcm80211 core, and by
>> > "two-head" I mean an SSB/AXI interconnect with two functional cores on the
>> > same interconnect (like w11+w11; not a lot of these exist, I guess). There
>> > were also some b43+b44 devices on a single PCI SSB host, implemented as an
>> > SSB interconnect on a multifunction PCI host, thereby providing a separate
>> > access window for each function.
>> >
>> > I might have misunderstood something (it's late at night here at my place)
>> > when you were talking about core switching involving two drivers, which is
>> > why I remembered those functions. It seems now that you were talking about
>> > ChipCommon and b43 access sharing the same window.
>> >
>> > As for the core-switching requirements of earlier SSB interconnects on PCI
>> > hosts, where there was no direct ChipCommon access, that can be handled
>> > without a spin_lock/mutex for the b43 or b44 cores, given a proper bus
>> > design.
>> >
>> > AXI doesn't need spinlocks/mutexes, as both ChipCommon and the PCI bridge
>> > are directly accessible, and b43 will be the only core requiring window
>> > access.
>>
>> Ah, so while talking about 4 windows, I guess you counted the fixed
>> windows as well. That would be right, matching my knowledge.
>>
>> When asking about the number of cores we may want to use
>> simultaneously, I didn't mean ChipCommon or PCIe. The real
>> problem would be supporting, for example, two 802.11 cores and one
>> Ethernet core at the same time. That gives us 3 cores while we have
>> only 2 sliding windows.
>
> Would that really be a problem? Think about it: this combination
> will only be available on embedded devices. But do we have windows
> on embedded devices? I guess not. If AXI is similar to SSB, the MMIO
> of all cores will always be mapped, so accesses can be done
> without a switch or a lock.
>
> I really do think the engineers at Broadcom are clever enough
> to design hardware that does not require expensive window sliding
> all the time while operating.

I also think so.
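To make the windowed-versus-direct distinction concrete, here is a minimal
sketch of the two access models; every name in it (struct ai_bus,
ai_read32_windowed, AI_BAR0_WIN, and so on) is made up for illustration and is
not the real bcmai or ssb API:

/*
 * Hypothetical sketch only: none of these names exist in the real
 * bcmai/ssb code. It contrasts sliding-window access on a PCI host
 * with direct access on an embedded host where all cores are mapped.
 */
#include <linux/io.h>
#include <linux/pci.h>
#include <linux/spinlock.h>

#define AI_BAR0_WIN	0x80	/* assumed PCI config reg selecting the window */

struct ai_bus {
	void __iomem *mmio;	/* BAR0 window (PCI) or full register space (embedded) */
	spinlock_t win_lock;	/* serializes window moves on PCI hosts */
	u32 mapped_core;	/* base address of the core currently in the window */
	struct pci_dev *pdev;
};

/* PCI host: slide the shared window to the requested core, under a lock. */
static u32 ai_read32_windowed(struct ai_bus *bus, u32 core_base, u16 offset)
{
	unsigned long flags;
	u32 val;

	spin_lock_irqsave(&bus->win_lock, flags);
	if (bus->mapped_core != core_base) {
		pci_write_config_dword(bus->pdev, AI_BAR0_WIN, core_base);
		bus->mapped_core = core_base;
	}
	val = ioread32(bus->mmio + offset);
	spin_unlock_irqrestore(&bus->win_lock, flags);

	return val;
}

/* Embedded host: every core is permanently mapped; no window, no lock. */
static u32 ai_read32_direct(struct ai_bus *bus, u32 core_base, u16 offset)
{
	return ioread32(bus->mmio + core_base + offset);
}

With three active cores and only two sliding windows, every access on a PCI
host may pay for a config-space write plus the lock, while on the embedded
path ai_read32_direct never contends, which is exactly the point above.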
When asking about the number of cores (non-PCIe, non-ChipCommon) that have to
work simultaneously, I had PCIe hosts in mind. I'm not sure we will even meet
an AI board with 2 such cores on a PCIe host, and I don't think we will see
more than 2 of them there.

--
Rafał