Message-ID: <559D2415.1060502@ge.com>
Date: Wed, 8 Jul 2015 14:22:29 +0100
From: Martyn Welch
To: Dmitry Kalinkin
CC: Greg Kroah-Hartman, Manohar Vanga, Igor Alekseev
Subject: Re: [PATCHv3 08/16] staging: vme_user: provide DMA functionality

On 06/07/15 18:24, Dmitry Kalinkin wrote:
>> Some functionality was dropped as it was not good practice (such as
>> receiving VME interrupts in user space; it's not really doable if the
>> slave card is Release On Register Access rather than Release On
>> Acknowledge),
>
> Didn't know about RORA. I wonder how different this is compared to the
> PCI bus case.

Very little, I suspect. What it does mean is that there's no generic
mechanism for clearing down an interrupt, so a device-specific interrupt
routine is required, and that routine needs to live in kernel space.
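To illustrate (just a sketch against the in-kernel vme_* API; the device
struct, register offset and IRQ level/status-ID below are all made up):

#include <linux/vme.h>

#define MYDEV_IRQ_LEVEL		3	/* hypothetical */
#define MYDEV_IRQ_STATID	0x20	/* hypothetical */
#define MYDEV_CLEAR_REG		0x0c	/* hypothetical clear-down register */

static void mydev_irq_handler(int level, int statid, void *priv)
{
	struct mydev *dev = priv;	/* hypothetical per-device struct */
	u32 clear = 1;

	/*
	 * RORA: only an access to the card's registers releases the
	 * interrupt, so the clear-down has to happen here, through the
	 * master window the driver holds, before the level is released.
	 */
	vme_master_write(dev->window, &clear, sizeof(clear),
			 MYDEV_CLEAR_REG);

	/* ...then signal whoever is waiting (waitqueue, completion). */
}

/* In probe(): */
err = vme_irq_request(vdev, MYDEV_IRQ_LEVEL, MYDEV_IRQ_STATID,
		      mydev_irq_handler, dev);

There's nothing device-agnostic that vme_user could do in a handler like
that, which is why user space interrupt delivery was dropped.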
>> so the interface became more of a debug mechanism for me. Others have
>> clearly found it provides enough for them to allow drivers to be
>> written in user space.
>>
>> I was thinking that the opposite might be better: no windows would be
>> mapped at module load, and windows could be allocated and mapped using
>> the control device. This would ensure that unused resources were still
>> available for kernel-based drivers, and would mean the driver wasn't
>> pre-allocating a bunch of fairly substantially sized slave window
>> buffers (the buffers could also be allocated to match the size of the
>> slave window requested). What do you think?
>
> I'm not a VME expert, but it seems that VME windows are a quite limited
> resource no matter how you allocate them. Theoretically we could put up
> to 32 different boards in a single crate, so there won't be enough
> windows for each driver to allocate one. That said, there is no way
> around this when putting together a really heterogeneous VME system. To
> overcome the problem, one could develop a different kernel API that
> would not hand windows out to drivers, but would instead service reads
> and writes by reconfiguring windows on the fly, which in turn would
> introduce more latency. Those who need such an API are welcome to
> develop it :)

The aim of the existing API is to provide a mechanism for allocating
resources. You're right that the resources are limited when scaling to a
32-slot crate. There are a number of ways to share the resources, though
they all tend to have trade-offs. In my experience, the majority of VME
systems don't stretch to 32 cards.

> As for dynamic vme_user device allocation, I don't see the point in
> this. The only existing kernel VME driver allocates windows in advance;
> the user just has to make sure to leave one window free if she wants to
> use that driver. A module parameter for the window count would be
> dynamic enough to handle that.

If vme_user grabs all the VME windows, there are no windows left for any
kernel-level VME drivers to use. Conversely, if a kernel-level driver
loads before vme_user and is allocated a window, and vme_user then
demands eight windows (assuming it doesn't gracefully handle some having
already been allocated, which at the moment it doesn't), vme_user fails
to load. Dynamic allocation would leave "unused" resources available
rather than prospectively hogging them.
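Concretely, the control-device path could claim resources only when
asked. A rough sketch (the ioctl, its argument struct and the reuse of
vme_user's image_desc are invented for illustration; only the
vme_slave_*() and vme_alloc_consistent() calls are the real in-kernel
API):

#include <linux/types.h>
#include <linux/uaccess.h>
#include <linux/vme.h>

struct vme_slave_alloc {		/* hypothetical ioctl argument */
	__u64 vme_base;			/* requested VME base address */
	__u64 size;			/* window/buffer size */
	__u32 aspace;			/* e.g. VME_A24 */
	__u32 cycle;			/* e.g. VME_SCT | VME_USER | VME_DATA */
};

static int vme_user_slave_alloc(struct image_desc *image,
				struct vme_slave_alloc __user *argp)
{
	struct vme_slave_alloc req;

	if (copy_from_user(&req, argp, sizeof(req)))
		return -EFAULT;

	/* Claim a window only now that user space has asked for one,
	 * leaving the rest free for kernel-level drivers. */
	image->resource = vme_slave_request(vme_user_bridge,
					    req.aspace, req.cycle);
	if (!image->resource)
		return -ENOMEM;

	/* Size the buffer to the request rather than a fixed default. */
	image->kern_buf = vme_alloc_consistent(image->resource, req.size,
					       &image->pci_buf);
	if (!image->kern_buf) {
		vme_slave_free(image->resource);
		return -ENOMEM;
	}

	return vme_slave_set(image->resource, 1, req.vme_base, req.size,
			     image->pci_buf, req.aspace, req.cycle);
}

The matching release path would disable the window with vme_slave_set(),
then call vme_free_consistent() and vme_slave_free(), so an idle
vme_user never pins more than it's actually using.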