From: "J. German Rivera"
Subject: [PATCH v4 7/7] staging: fsl-mc: Use DPMCP IRQ and completion var to wait for MC
Date: Tue, 9 Jun 2015 16:59:08 -0500
Message-ID: <1433887148-2310-8-git-send-email-German.Rivera@freescale.com>
In-Reply-To: <1433887148-2310-1-git-send-email-German.Rivera@freescale.com>
References: <1433887148-2310-1-git-send-email-German.Rivera@freescale.com>
X-Mailer: git-send-email 2.3.3
X-OriginatorOrg: freescale.com
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 34073
Lines: 1188

- Refactored the fsl_mc_io object to have a DPMCP object attached to it.

- Created a DPMCP object for the DPRC's built-in portal, so that waiting
  for completion of MC commands sent on the DPRC's built-in portal can be
  done using a DPMCP interrupt and a Linux completion variable. In most
  cases, mc_send_command() now waits on this completion variable instead
  of polling. The completion variable is signaled from the DPMCP IRQ
  handler.

Signed-off-by: J.
German Rivera Reviewed-by: Stuart Yoder --- Changes in v4: - Fixed new checkpatch warnings and checks drivers/staging/fsl-mc/bus/dprc-driver.c | 172 ++++++++++-- drivers/staging/fsl-mc/bus/mc-allocator.c | 111 ++++---- drivers/staging/fsl-mc/bus/mc-bus.c | 19 +- drivers/staging/fsl-mc/bus/mc-sys.c | 422 +++++++++++++++++++++++++--- drivers/staging/fsl-mc/include/mc-private.h | 6 +- drivers/staging/fsl-mc/include/mc-sys.h | 37 ++- 6 files changed, 648 insertions(+), 119 deletions(-) diff --git a/drivers/staging/fsl-mc/bus/dprc-driver.c b/drivers/staging/fsl-mc/bus/dprc-driver.c index dc97681..ade2503 100644 --- a/drivers/staging/fsl-mc/bus/dprc-driver.c +++ b/drivers/staging/fsl-mc/bus/dprc-driver.c @@ -15,6 +15,7 @@ #include #include #include "dprc-cmd.h" +#include "dpmcp.h" struct dprc_child_objs { int child_count; @@ -323,7 +324,6 @@ static int dprc_scan_container(struct fsl_mc_device *mc_bus_dev) int error; unsigned int irq_count; struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_bus_dev); - struct fsl_mc *mc = dev_get_drvdata(fsl_mc_bus_type.dev_root->parent); dprc_init_all_resource_pools(mc_bus_dev); @@ -336,7 +336,7 @@ static int dprc_scan_container(struct fsl_mc_device *mc_bus_dev) if (error < 0) return error; - if (mc->gic_supported && !mc_bus->irq_resources) { + if (fsl_mc_interrupts_supported() && !mc_bus->irq_resources) { irq_count += FSL_MC_IRQ_POOL_MAX_EXTRA_IRQS; error = fsl_mc_populate_irq_pool(mc_bus, irq_count); if (error < 0) @@ -373,7 +373,8 @@ static irqreturn_t dprc_irq0_handler_thread(int irq_num, void *arg) struct fsl_mc_io *mc_io = mc_dev->mc_io; int irq_index = 0; - dev_dbg(dev, "DPRC IRQ %d\n", irq_num); + dev_dbg(dev, "DPRC IRQ %d triggered on CPU %u\n", + irq_num, smp_processor_id()); if (WARN_ON(!(mc_dev->flags & FSL_MC_IS_DPRC))) return IRQ_HANDLED; @@ -445,7 +446,8 @@ static int disable_dprc_irqs(struct fsl_mc_device *mc_dev) error = dprc_set_irq_enable(mc_io, mc_dev->mc_handle, i, 0); if (error < 0) { dev_err(&mc_dev->dev, - 
"dprc_set_irq_enable() failed: %d\n", error); + "Disabling DPRC IRQ %d failed: dprc_set_irq_enable() failed: %d\n", + i, error); return error; } @@ -456,7 +458,8 @@ static int disable_dprc_irqs(struct fsl_mc_device *mc_dev) error = dprc_set_irq_mask(mc_io, mc_dev->mc_handle, i, 0x0); if (error < 0) { dev_err(&mc_dev->dev, - "dprc_set_irq_mask() failed: %d\n", error); + "Disabling DPRC IRQ %d failed: dprc_set_irq_mask() failed: %d\n", + i, error); return error; } @@ -468,8 +471,9 @@ static int disable_dprc_irqs(struct fsl_mc_device *mc_dev) ~0x0U); if (error < 0) { dev_err(&mc_dev->dev, - "dprc_clear_irq_status() failed: %d\n", - error); + "Disabling DPRC IRQ %d failed: dprc_clear_irq_status() failed: %d\n", + i, error); + return error; } } @@ -566,7 +570,8 @@ static int enable_dprc_irqs(struct fsl_mc_device *mc_dev) ~0x0u); if (error < 0) { dev_err(&mc_dev->dev, - "dprc_set_irq_mask() failed: %d\n", error); + "Enabling DPRC IRQ %d failed: dprc_set_irq_mask() failed: %d\n", + i, error); return error; } @@ -579,7 +584,8 @@ static int enable_dprc_irqs(struct fsl_mc_device *mc_dev) i, 1); if (error < 0) { dev_err(&mc_dev->dev, - "dprc_set_irq_enable() failed: %d\n", error); + "Enabling DPRC IRQ %d failed: dprc_set_irq_enable() failed: %d\n", + i, error); return error; } @@ -618,6 +624,95 @@ error_free_irqs: return error; } +/* + * Creates a DPMCP for a DPRC's built-in MC portal + */ +static int dprc_create_dpmcp(struct fsl_mc_device *dprc_dev) +{ + int error; + struct dpmcp_cfg dpmcp_cfg; + u16 dpmcp_handle; + struct dprc_res_req res_req; + struct dpmcp_attr dpmcp_attr; + struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(dprc_dev); + + dpmcp_cfg.portal_id = mc_bus->dprc_attr.portal_id; + error = dpmcp_create(dprc_dev->mc_io, &dpmcp_cfg, &dpmcp_handle); + if (error < 0) { + dev_err(&dprc_dev->dev, "dpmcp_create() failed: %d\n", + error); + return error; + } + + /* + * Set the state of the newly created DPMCP object to be "plugged": + */ + + error = 
dpmcp_get_attributes(dprc_dev->mc_io, dpmcp_handle, + &dpmcp_attr); + if (error < 0) { + dev_err(&dprc_dev->dev, "dpmcp_get_attributes() failed: %d\n", + error); + goto error_destroy_dpmcp; + } + + if (WARN_ON(dpmcp_attr.id != mc_bus->dprc_attr.portal_id)) { + error = -EINVAL; + goto error_destroy_dpmcp; + } + + strcpy(res_req.type, "dpmcp"); + res_req.num = 1; + res_req.options = + (DPRC_RES_REQ_OPT_EXPLICIT | DPRC_RES_REQ_OPT_PLUGGED); + res_req.id_base_align = dpmcp_attr.id; + + error = dprc_assign(dprc_dev->mc_io, + dprc_dev->mc_handle, + dprc_dev->obj_desc.id, + &res_req); + + if (error < 0) { + dev_err(&dprc_dev->dev, "dprc_assign() failed: %d\n", error); + goto error_destroy_dpmcp; + } + + (void)dpmcp_close(dprc_dev->mc_io, dpmcp_handle); + return 0; + +error_destroy_dpmcp: + (void)dpmcp_destroy(dprc_dev->mc_io, dpmcp_handle); + return error; +} + +/* + * Destroys the DPMCP for a DPRC's built-in MC portal + */ +static void dprc_destroy_dpmcp(struct fsl_mc_device *dprc_dev) +{ + int error; + u16 dpmcp_handle; + struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(dprc_dev); + + if (WARN_ON(!dprc_dev->mc_io || dprc_dev->mc_io->dpmcp_dev)) + return; + + error = dpmcp_open(dprc_dev->mc_io, mc_bus->dprc_attr.portal_id, + &dpmcp_handle); + if (error < 0) { + dev_err(&dprc_dev->dev, "dpmcp_open() failed: %d\n", + error); + return; + } + + error = dpmcp_destroy(dprc_dev->mc_io, dpmcp_handle); + if (error < 0) { + dev_err(&dprc_dev->dev, "dpmcp_destroy() failed: %d\n", + error); + return; + } +} + /** * dprc_probe - callback invoked when a DPRC is being bound to this driver * @@ -635,7 +730,6 @@ static int dprc_probe(struct fsl_mc_device *mc_dev) struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_dev); bool mc_io_created = false; bool dev_root_set = false; - struct fsl_mc *mc = dev_get_drvdata(fsl_mc_bus_type.dev_root->parent); if (WARN_ON(strcmp(mc_dev->obj_desc.type, "dprc") != 0)) return -EINVAL; @@ -679,16 +773,55 @@ static int dprc_probe(struct fsl_mc_device *mc_dev) goto 
error_cleanup_mc_io; } + error = dprc_get_attributes(mc_dev->mc_io, mc_dev->mc_handle, + &mc_bus->dprc_attr); + if (error < 0) { + dev_err(&mc_dev->dev, "dprc_get_attributes() failed: %d\n", + error); + goto error_cleanup_open; + } + + if (fsl_mc_interrupts_supported()) { + /* + * Create DPMCP for the DPRC's built-in portal: + */ + error = dprc_create_dpmcp(mc_dev); + if (error < 0) + goto error_cleanup_open; + } + mutex_init(&mc_bus->scan_mutex); /* - * Discover MC objects in DPRC object: + * Discover MC objects in the DPRC object: */ error = dprc_scan_container(mc_dev); if (error < 0) - goto error_cleanup_open; + goto error_destroy_dpmcp; + + if (fsl_mc_interrupts_supported()) { + /* + * The fsl_mc_device object associated with the DPMCP object + * created above was created as part of the + * dprc_scan_container() call above: + */ + if (WARN_ON(!mc_dev->mc_io->dpmcp_dev)) { + error = -EINVAL; + goto error_cleanup_dprc_scan; + } + + /* + * Configure interrupt for the DPMCP object associated with the + * DPRC object's built-in portal: + * + * NOTE: We have to do this after calling dprc_scan_container(), + * since dprc_scan_container() will populate the IRQ pool for + * this DPRC. 
+ */ + error = fsl_mc_io_setup_dpmcp_irq(mc_dev->mc_io); + if (error < 0) + goto error_cleanup_dprc_scan; - if (mc->gic_supported) { /* * Configure interrupts for the DPRC object associated with * this MC bus: @@ -702,10 +835,14 @@ static int dprc_probe(struct fsl_mc_device *mc_dev) return 0; error_cleanup_dprc_scan: + fsl_mc_io_unset_dpmcp(mc_dev->mc_io); device_for_each_child(&mc_dev->dev, NULL, __fsl_mc_device_remove); - if (mc->gic_supported) + if (fsl_mc_interrupts_supported()) fsl_mc_cleanup_irq_pool(mc_bus); +error_destroy_dpmcp: + dprc_destroy_dpmcp(mc_dev); + error_cleanup_open: (void)dprc_close(mc_dev->mc_io, mc_dev->mc_handle); @@ -744,7 +881,6 @@ static int dprc_remove(struct fsl_mc_device *mc_dev) { int error; struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_dev); - struct fsl_mc *mc = dev_get_drvdata(fsl_mc_bus_type.dev_root->parent); if (WARN_ON(strcmp(mc_dev->obj_desc.type, "dprc") != 0)) return -EINVAL; @@ -754,15 +890,17 @@ static int dprc_remove(struct fsl_mc_device *mc_dev) if (WARN_ON(!mc_bus->irq_resources)) return -EINVAL; - if (mc->gic_supported) + if (fsl_mc_interrupts_supported()) dprc_teardown_irqs(mc_dev); + fsl_mc_io_unset_dpmcp(mc_dev->mc_io); device_for_each_child(&mc_dev->dev, NULL, __fsl_mc_device_remove); + dprc_destroy_dpmcp(mc_dev); error = dprc_close(mc_dev->mc_io, mc_dev->mc_handle); if (error < 0) dev_err(&mc_dev->dev, "dprc_close() failed: %d\n", error); - if (mc->gic_supported) + if (fsl_mc_interrupts_supported()) fsl_mc_cleanup_irq_pool(mc_bus); dev_info(&mc_dev->dev, "DPRC device unbound from driver"); diff --git a/drivers/staging/fsl-mc/bus/mc-allocator.c b/drivers/staging/fsl-mc/bus/mc-allocator.c index 3bdfefb..87b3d59 100644 --- a/drivers/staging/fsl-mc/bus/mc-allocator.c +++ b/drivers/staging/fsl-mc/bus/mc-allocator.c @@ -109,7 +109,7 @@ static int __must_check fsl_mc_resource_pool_remove_device(struct fsl_mc_device goto out; resource = mc_dev->resource; - if (WARN_ON(resource->data != mc_dev)) + if (WARN_ON(!resource || 
resource->data != mc_dev)) goto out; mc_bus_dev = to_fsl_mc_device(mc_dev->dev.parent); @@ -281,7 +281,7 @@ int __must_check fsl_mc_portal_allocate(struct fsl_mc_device *mc_dev, struct fsl_mc_bus *mc_bus; phys_addr_t mc_portal_phys_addr; size_t mc_portal_size; - struct fsl_mc_device *mc_adev; + struct fsl_mc_device *dpmcp_dev; int error = -EINVAL; struct fsl_mc_resource *resource = NULL; struct fsl_mc_io *mc_io = NULL; @@ -301,23 +301,24 @@ int __must_check fsl_mc_portal_allocate(struct fsl_mc_device *mc_dev, if (error < 0) return error; - mc_adev = resource->data; - if (WARN_ON(!mc_adev)) + dpmcp_dev = resource->data; + if (WARN_ON(!dpmcp_dev || + strcmp(dpmcp_dev->obj_desc.type, "dpmcp") != 0)) goto error_cleanup_resource; - if (WARN_ON(mc_adev->obj_desc.region_count == 0)) + if (WARN_ON(dpmcp_dev->obj_desc.region_count == 0)) goto error_cleanup_resource; - mc_portal_phys_addr = mc_adev->regions[0].start; - mc_portal_size = mc_adev->regions[0].end - - mc_adev->regions[0].start + 1; + mc_portal_phys_addr = dpmcp_dev->regions[0].start; + mc_portal_size = dpmcp_dev->regions[0].end - + dpmcp_dev->regions[0].start + 1; if (WARN_ON(mc_portal_size != mc_bus_dev->mc_io->portal_size)) goto error_cleanup_resource; error = fsl_create_mc_io(&mc_bus_dev->dev, mc_portal_phys_addr, - mc_portal_size, resource, + mc_portal_size, dpmcp_dev, mc_io_flags, &mc_io); if (error < 0) goto error_cleanup_resource; @@ -339,12 +340,26 @@ EXPORT_SYMBOL_GPL(fsl_mc_portal_allocate); */ void fsl_mc_portal_free(struct fsl_mc_io *mc_io) { + struct fsl_mc_device *dpmcp_dev; struct fsl_mc_resource *resource; - resource = mc_io->resource; - if (WARN_ON(resource->type != FSL_MC_POOL_DPMCP)) + /* + * Every mc_io obtained by calling fsl_mc_portal_allocate() is supposed + * to have a DPMCP object associated with. 
+ */ + dpmcp_dev = mc_io->dpmcp_dev; + if (WARN_ON(!dpmcp_dev)) + return; + if (WARN_ON(strcmp(dpmcp_dev->obj_desc.type, "dpmcp") != 0)) + return; + if (WARN_ON(dpmcp_dev->mc_io != mc_io)) + return; + + resource = dpmcp_dev->resource; + if (WARN_ON(!resource || resource->type != FSL_MC_POOL_DPMCP)) return; - if (WARN_ON(!resource->data)) + + if (WARN_ON(resource->data != dpmcp_dev)) return; fsl_destroy_mc_io(mc_io); @@ -360,31 +375,14 @@ EXPORT_SYMBOL_GPL(fsl_mc_portal_free); int fsl_mc_portal_reset(struct fsl_mc_io *mc_io) { int error; - uint16_t token; - struct fsl_mc_resource *resource = mc_io->resource; - struct fsl_mc_device *mc_dev = resource->data; - - if (WARN_ON(resource->type != FSL_MC_POOL_DPMCP)) - return -EINVAL; + struct fsl_mc_device *dpmcp_dev = mc_io->dpmcp_dev; - if (WARN_ON(!mc_dev)) + if (WARN_ON(!dpmcp_dev)) return -EINVAL; - error = dpmcp_open(mc_io, mc_dev->obj_desc.id, &token); + error = dpmcp_reset(mc_io, dpmcp_dev->mc_handle); if (error < 0) { - dev_err(&mc_dev->dev, "dpmcp_open() failed: %d\n", error); - return error; - } - - error = dpmcp_reset(mc_io, token); - if (error < 0) { - dev_err(&mc_dev->dev, "dpmcp_reset() failed: %d\n", error); - return error; - } - - error = dpmcp_close(mc_io, token); - if (error < 0) { - dev_err(&mc_dev->dev, "dpmcp_close() failed: %d\n", error); + dev_err(&dpmcp_dev->dev, "dpmcp_reset() failed: %d\n", error); return error; } @@ -599,16 +597,31 @@ static int fsl_mc_allocator_probe(struct fsl_mc_device *mc_dev) goto error; mc_bus = to_fsl_mc_bus(mc_bus_dev); - error = object_type_to_pool_type(mc_dev->obj_desc.type, &pool_type); - if (error < 0) - goto error; - error = fsl_mc_resource_pool_add_device(mc_bus, pool_type, mc_dev); - if (error < 0) - goto error; + /* + * If mc_dev is the DPMCP object for the parent DPRC's built-in + * portal, we don't add this DPMCP to the DPMCP object pool, + * but instead allocate it directly to the parent DPRC (mc_bus_dev): + */ + if (strcmp(mc_dev->obj_desc.type, "dpmcp") == 0 
&& + mc_dev->obj_desc.id == mc_bus->dprc_attr.portal_id) { + error = fsl_mc_io_set_dpmcp(mc_bus_dev->mc_io, mc_dev); + if (error < 0) + goto error; + } else { + error = object_type_to_pool_type(mc_dev->obj_desc.type, + &pool_type); + if (error < 0) + goto error; + + error = fsl_mc_resource_pool_add_device(mc_bus, pool_type, + mc_dev); + if (error < 0) + goto error; + } dev_dbg(&mc_dev->dev, - "Allocatable MC object device bound to fsl_mc_allocator driver"); + "Allocatable MC object device bound to fsl_mc_allocator"); return 0; error: @@ -621,20 +634,20 @@ error: */ static int fsl_mc_allocator_remove(struct fsl_mc_device *mc_dev) { - int error = -EINVAL; + int error; if (WARN_ON(!FSL_MC_IS_ALLOCATABLE(mc_dev->obj_desc.type))) - goto out; + return -EINVAL; - error = fsl_mc_resource_pool_remove_device(mc_dev); - if (error < 0) - goto out; + if (mc_dev->resource) { + error = fsl_mc_resource_pool_remove_device(mc_dev); + if (error < 0) + return error; + } dev_dbg(&mc_dev->dev, - "Allocatable MC object device unbound from fsl_mc_allocator driver"); - error = 0; -out: - return error; + "Allocatable MC object device unbound from fsl_mc_allocator"); + return 0; } static const struct fsl_mc_device_match_id match_id_table[] = { diff --git a/drivers/staging/fsl-mc/bus/mc-bus.c b/drivers/staging/fsl-mc/bus/mc-bus.c index 36bfe68..400300d 100644 --- a/drivers/staging/fsl-mc/bus/mc-bus.c +++ b/drivers/staging/fsl-mc/bus/mc-bus.c @@ -591,6 +591,11 @@ void fsl_mc_device_remove(struct fsl_mc_device *mc_dev) if (&mc_dev->dev == fsl_mc_bus_type.dev_root) fsl_mc_bus_type.dev_root = NULL; + } else if (strcmp(mc_dev->obj_desc.type, "dpmcp") == 0) { + if (mc_dev->mc_io) { + fsl_destroy_mc_io(mc_dev->mc_io); + mc_dev->mc_io = NULL; + } } kfree(mc_dev->driver_override); @@ -844,13 +849,9 @@ static int fsl_mc_bus_probe(struct platform_device *pdev) platform_set_drvdata(pdev, mc); error = create_mc_irq_domain(pdev, &mc->irq_domain); - if (error < 0) - return error; - - error = 
create_mc_irq_domain(pdev, &mc->irq_domain); if (error < 0) { dev_warn(&pdev->dev, - "WARNING: MC bus driver will run without interrupt support\n"); + "WARNING: MC bus driver running without interrupt support\n"); } else { mc->gic_supported = true; } @@ -931,7 +932,9 @@ error_cleanup_mc_io: fsl_destroy_mc_io(mc_io); error_cleanup_irq_domain: - irq_domain_remove(mc->irq_domain); + if (mc->gic_supported) + irq_domain_remove(mc->irq_domain); + return error; } @@ -946,7 +949,9 @@ static int fsl_mc_bus_remove(struct platform_device *pdev) if (WARN_ON(&mc->root_mc_bus_dev->dev != fsl_mc_bus_type.dev_root)) return -EINVAL; - irq_domain_remove(mc->irq_domain); + if (mc->gic_supported) + irq_domain_remove(mc->irq_domain); + fsl_mc_device_remove(mc->root_mc_bus_dev); dev_info(&pdev->dev, "Root MC bus device removed"); return 0; diff --git a/drivers/staging/fsl-mc/bus/mc-sys.c b/drivers/staging/fsl-mc/bus/mc-sys.c index 0da7700..151f148 100644 --- a/drivers/staging/fsl-mc/bus/mc-sys.c +++ b/drivers/staging/fsl-mc/bus/mc-sys.c @@ -34,10 +34,13 @@ #include "../include/mc-sys.h" #include "../include/mc-cmd.h" +#include "../include/mc.h" #include #include #include #include +#include +#include "dpmcp.h" /** * Timeout in milliseconds to wait for the completion of an MC command @@ -55,6 +58,230 @@ ((uint16_t)mc_dec((_hdr), MC_CMD_HDR_CMDID_O, MC_CMD_HDR_CMDID_S)) /** + * dpmcp_irq0_handler - Regular ISR for DPMCP interrupt 0 + * + * @irq: IRQ number of the interrupt being handled + * @arg: Pointer to device structure + */ +static irqreturn_t dpmcp_irq0_handler(int irq_num, void *arg) +{ + struct device *dev = (struct device *)arg; + struct fsl_mc_device *dpmcp_dev = to_fsl_mc_device(dev); + + dev_dbg(dev, "DPMCP IRQ %d triggered on CPU %u\n", irq_num, + smp_processor_id()); + + if (WARN_ON(dpmcp_dev->irqs[0]->irq_number != (uint32_t)irq_num)) + goto out; + + if (WARN_ON(!dpmcp_dev->mc_io)) + goto out; + + /* + * NOTE: We cannot invoke MC flib function here + */ + + 
complete(&dpmcp_dev->mc_io->mc_command_done_completion); +out: + return IRQ_HANDLED; +} + +/* + * Disable and clear interrupts for a given DPMCP object + */ +static int disable_dpmcp_irq(struct fsl_mc_device *dpmcp_dev) +{ + int error; + + /* + * Disable generation of the DPMCP interrupt: + */ + error = dpmcp_set_irq_enable(dpmcp_dev->mc_io, + dpmcp_dev->mc_handle, + DPMCP_IRQ_INDEX, 0); + if (error < 0) { + dev_err(&dpmcp_dev->dev, + "dpmcp_set_irq_enable() failed: %d\n", error); + + return error; + } + + /* + * Disable all DPMCP interrupt causes: + */ + error = dpmcp_set_irq_mask(dpmcp_dev->mc_io, dpmcp_dev->mc_handle, + DPMCP_IRQ_INDEX, 0x0); + if (error < 0) { + dev_err(&dpmcp_dev->dev, + "dpmcp_set_irq_mask() failed: %d\n", error); + + return error; + } + + /* + * Clear any leftover interrupts: + */ + error = dpmcp_clear_irq_status(dpmcp_dev->mc_io, dpmcp_dev->mc_handle, + DPMCP_IRQ_INDEX, ~0x0U); + if (error < 0) { + dev_err(&dpmcp_dev->dev, + "dpmcp_clear_irq_status() failed: %d\n", + error); + return error; + } + + return 0; +} + +static void unregister_dpmcp_irq_handler(struct fsl_mc_device *dpmcp_dev) +{ + struct fsl_mc_device_irq *irq = dpmcp_dev->irqs[DPMCP_IRQ_INDEX]; + + devm_free_irq(&dpmcp_dev->dev, irq->irq_number, &dpmcp_dev->dev); +} + +static int register_dpmcp_irq_handler(struct fsl_mc_device *dpmcp_dev) +{ + int error; + struct fsl_mc_device_irq *irq = dpmcp_dev->irqs[DPMCP_IRQ_INDEX]; + + error = devm_request_irq(&dpmcp_dev->dev, + irq->irq_number, + dpmcp_irq0_handler, + IRQF_NO_SUSPEND | IRQF_ONESHOT, + "FSL MC DPMCP irq0", + &dpmcp_dev->dev); + if (error < 0) { + dev_err(&dpmcp_dev->dev, + "devm_request_irq() failed: %d\n", + error); + return error; + } + + error = dpmcp_set_irq(dpmcp_dev->mc_io, + dpmcp_dev->mc_handle, + DPMCP_IRQ_INDEX, + irq->msi_paddr, + irq->msi_value, + irq->irq_number); + if (error < 0) { + dev_err(&dpmcp_dev->dev, + "dpmcp_set_irq() failed: %d\n", error); + goto error_unregister_irq_handler; + } + + return 0; + 
+error_unregister_irq_handler: + devm_free_irq(&dpmcp_dev->dev, irq->irq_number, &dpmcp_dev->dev); + return error; +} + +static int enable_dpmcp_irq(struct fsl_mc_device *dpmcp_dev) +{ + int error; + + /* + * Enable MC command completion event to trigger DPMCP interrupt: + */ + error = dpmcp_set_irq_mask(dpmcp_dev->mc_io, + dpmcp_dev->mc_handle, + DPMCP_IRQ_INDEX, + DPMCP_IRQ_EVENT_CMD_DONE); + if (error < 0) { + dev_err(&dpmcp_dev->dev, + "dpmcp_set_irq_mask() failed: %d\n", error); + + return error; + } + + /* + * Enable generation of the interrupt: + */ + error = dpmcp_set_irq_enable(dpmcp_dev->mc_io, + dpmcp_dev->mc_handle, + DPMCP_IRQ_INDEX, 1); + if (error < 0) { + dev_err(&dpmcp_dev->dev, + "dpmcp_set_irq_enable() failed: %d\n", error); + + return error; + } + + return 0; +} + +/* + * Setup MC command completion interrupt for the DPMCP device associated with a + * given fsl_mc_io object + */ +int fsl_mc_io_setup_dpmcp_irq(struct fsl_mc_io *mc_io) +{ + int error; + struct fsl_mc_device *dpmcp_dev = mc_io->dpmcp_dev; + + if (WARN_ON(!dpmcp_dev)) + return -EINVAL; + + if (WARN_ON(!fsl_mc_interrupts_supported())) + return -EINVAL; + + if (WARN_ON(dpmcp_dev->obj_desc.irq_count != 1)) + return -EINVAL; + + if (WARN_ON(!dpmcp_dev->mc_io)) + return -EINVAL; + + error = fsl_mc_allocate_irqs(dpmcp_dev); + if (error < 0) + return error; + + error = disable_dpmcp_irq(dpmcp_dev); + if (error < 0) + goto error_free_irqs; + + error = register_dpmcp_irq_handler(dpmcp_dev); + if (error < 0) + goto error_free_irqs; + + error = enable_dpmcp_irq(dpmcp_dev); + if (error < 0) + goto error_unregister_irq_handler; + + mc_io->mc_command_done_irq_armed = true; + return 0; + +error_unregister_irq_handler: + unregister_dpmcp_irq_handler(dpmcp_dev); + +error_free_irqs: + fsl_mc_free_irqs(dpmcp_dev); + return error; +} +EXPORT_SYMBOL_GPL(fsl_mc_io_setup_dpmcp_irq); + +/* + * Tear down interrupts for the DPMCP device associated with a given fsl_mc_io + * object + */ +static void 
teardown_dpmcp_irq(struct fsl_mc_io *mc_io) +{ + struct fsl_mc_device *dpmcp_dev = mc_io->dpmcp_dev; + + if (WARN_ON(!dpmcp_dev)) + return; + if (WARN_ON(!fsl_mc_interrupts_supported())) + return; + if (WARN_ON(!dpmcp_dev->irqs)) + return; + + mc_io->mc_command_done_irq_armed = false; + (void)disable_dpmcp_irq(dpmcp_dev); + unregister_dpmcp_irq_handler(dpmcp_dev); + fsl_mc_free_irqs(dpmcp_dev); +} + +/** * Creates an MC I/O object * * @dev: device to be associated with the MC I/O object @@ -70,9 +297,10 @@ int __must_check fsl_create_mc_io(struct device *dev, phys_addr_t mc_portal_phys_addr, uint32_t mc_portal_size, - struct fsl_mc_resource *resource, + struct fsl_mc_device *dpmcp_dev, uint32_t flags, struct fsl_mc_io **new_mc_io) { + int error; struct fsl_mc_io *mc_io; void __iomem *mc_portal_virt_addr; struct resource *res; @@ -85,11 +313,13 @@ int __must_check fsl_create_mc_io(struct device *dev, mc_io->flags = flags; mc_io->portal_phys_addr = mc_portal_phys_addr; mc_io->portal_size = mc_portal_size; - mc_io->resource = resource; - if (flags & FSL_MC_IO_ATOMIC_CONTEXT_PORTAL) + mc_io->mc_command_done_irq_armed = false; + if (flags & FSL_MC_IO_ATOMIC_CONTEXT_PORTAL) { spin_lock_init(&mc_io->spinlock); - else + } else { mutex_init(&mc_io->mutex); + init_completion(&mc_io->mc_command_done_completion); + } res = devm_request_mem_region(dev, mc_portal_phys_addr, @@ -113,8 +343,26 @@ int __must_check fsl_create_mc_io(struct device *dev, } mc_io->portal_virt_addr = mc_portal_virt_addr; + if (dpmcp_dev) { + error = fsl_mc_io_set_dpmcp(mc_io, dpmcp_dev); + if (error < 0) + goto error_destroy_mc_io; + + if (!(flags & FSL_MC_IO_ATOMIC_CONTEXT_PORTAL) && + fsl_mc_interrupts_supported()) { + error = fsl_mc_io_setup_dpmcp_irq(mc_io); + if (error < 0) + goto error_destroy_mc_io; + } + } + *new_mc_io = mc_io; return 0; + +error_destroy_mc_io: + fsl_destroy_mc_io(mc_io); + return error; + } EXPORT_SYMBOL_GPL(fsl_create_mc_io); @@ -125,6 +373,11 @@ 
EXPORT_SYMBOL_GPL(fsl_create_mc_io); */ void fsl_destroy_mc_io(struct fsl_mc_io *mc_io) { + struct fsl_mc_device *dpmcp_dev = mc_io->dpmcp_dev; + + if (dpmcp_dev) + fsl_mc_io_unset_dpmcp(mc_io); + devm_iounmap(mc_io->dev, mc_io->portal_virt_addr); devm_release_mem_region(mc_io->dev, mc_io->portal_phys_addr, @@ -135,6 +388,60 @@ void fsl_destroy_mc_io(struct fsl_mc_io *mc_io) } EXPORT_SYMBOL_GPL(fsl_destroy_mc_io); +int fsl_mc_io_set_dpmcp(struct fsl_mc_io *mc_io, + struct fsl_mc_device *dpmcp_dev) +{ + int error; + + if (WARN_ON(!dpmcp_dev)) + return -EINVAL; + + if (WARN_ON(mc_io->dpmcp_dev)) + return -EINVAL; + + if (WARN_ON(dpmcp_dev->mc_io)) + return -EINVAL; + + if (!(mc_io->flags & FSL_MC_IO_ATOMIC_CONTEXT_PORTAL)) { + error = dpmcp_open(mc_io, dpmcp_dev->obj_desc.id, + &dpmcp_dev->mc_handle); + if (error < 0) + return error; + } + + mc_io->dpmcp_dev = dpmcp_dev; + dpmcp_dev->mc_io = mc_io; + return 0; +} +EXPORT_SYMBOL_GPL(fsl_mc_io_set_dpmcp); + +void fsl_mc_io_unset_dpmcp(struct fsl_mc_io *mc_io) +{ + int error; + struct fsl_mc_device *dpmcp_dev = mc_io->dpmcp_dev; + + if (WARN_ON(!dpmcp_dev)) + return; + + if (WARN_ON(dpmcp_dev->mc_io != mc_io)) + return; + + if (!(mc_io->flags & FSL_MC_IO_ATOMIC_CONTEXT_PORTAL)) { + if (dpmcp_dev->irqs) + teardown_dpmcp_irq(mc_io); + + error = dpmcp_close(mc_io, dpmcp_dev->mc_handle); + if (error < 0) { + dev_err(&dpmcp_dev->dev, "dpmcp_close() failed: %d\n", + error); + } + } + + mc_io->dpmcp_dev = NULL; + dpmcp_dev->mc_io = NULL; +} +EXPORT_SYMBOL_GPL(fsl_mc_io_unset_dpmcp); + static int mc_status_to_error(enum mc_cmd_status status) { static const int mc_status_to_error_map[] = { @@ -228,46 +535,51 @@ static inline enum mc_cmd_status mc_read_response(struct mc_command __iomem * return status; } -/** - * Sends an command to the MC device using the given MC I/O object - * - * @mc_io: MC I/O object to be used - * @cmd: command to be sent - * - * Returns '0' on Success; Error code otherwise. 
- */
-int mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd)
+static int mc_completion_wait(struct fsl_mc_io *mc_io, struct mc_command *cmd,
+			      enum mc_cmd_status *mc_status)
 {
-	int error;
 	enum mc_cmd_status status;
-	unsigned long jiffies_until_timeout =
-	    jiffies + msecs_to_jiffies(MC_CMD_COMPLETION_TIMEOUT_MS);
+	unsigned long jiffies_left;
+	unsigned long timeout_jiffies =
+	    msecs_to_jiffies(MC_CMD_COMPLETION_TIMEOUT_MS);
 
-	if (WARN_ON(in_irq()))
+	if (WARN_ON(!mc_io->dpmcp_dev))
 		return -EINVAL;
 
-	if (mc_io->flags & FSL_MC_IO_ATOMIC_CONTEXT_PORTAL)
-		spin_lock(&mc_io->spinlock);
-	else
-		mutex_lock(&mc_io->mutex);
+	if (WARN_ON(mc_io->flags & FSL_MC_IO_ATOMIC_CONTEXT_PORTAL))
+		return -EINVAL;
 
-	/*
-	 * Send command to the MC hardware:
-	 */
-	mc_write_command(mc_io->portal_virt_addr, cmd);
+	if (WARN_ON(!preemptible()))
+		return -EINVAL;
+
+	for (;;) {
+		status = mc_read_response(mc_io->portal_virt_addr, cmd);
+		if (status != MC_CMD_STATUS_READY)
+			break;
+
+		jiffies_left = wait_for_completion_timeout(
+					&mc_io->mc_command_done_completion,
+					timeout_jiffies);
+		if (jiffies_left == 0)
+			return -ETIMEDOUT;
+	}
+
+	*mc_status = status;
+	return 0;
+}
+
+static int mc_polling_wait(struct fsl_mc_io *mc_io, struct mc_command *cmd,
+			   enum mc_cmd_status *mc_status)
+{
+	enum mc_cmd_status status;
+	unsigned long jiffies_until_timeout =
+	    jiffies + msecs_to_jiffies(MC_CMD_COMPLETION_TIMEOUT_MS);
 
-	/*
-	 * Wait for response from the MC hardware:
-	 */
 	for (;;) {
 		status = mc_read_response(mc_io->portal_virt_addr, cmd);
 		if (status != MC_CMD_STATUS_READY)
 			break;
 
-		/*
-		 * TODO: When MC command completion interrupts are supported
-		 * call wait function here instead of usleep_range()
-		 */
 		if (preemptible()) {
 			usleep_range(MC_CMD_COMPLETION_POLLING_MIN_SLEEP_USECS,
 				     MC_CMD_COMPLETION_POLLING_MAX_SLEEP_USECS);
@@ -283,13 +595,53 @@ int mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd)
 				     (unsigned int)
 				     MC_CMD_HDR_READ_CMDID(cmd->header));
 
-			error = -ETIMEDOUT;
-			goto common_exit;
+			return -ETIMEDOUT;
 		}
 	}
 
+	*mc_status = status;
+	return 0;
+}
+
+/**
+ * Sends a command to the MC device using the given MC I/O object
+ *
+ * @mc_io: MC I/O object to be used
+ * @cmd: command to be sent
+ *
+ * Returns '0' on Success; Error code otherwise.
+ */
+int mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd)
+{
+	int error;
+	enum mc_cmd_status status;
+
+	if (WARN_ON(in_irq()))
+		return -EINVAL;
+
+	if (mc_io->flags & FSL_MC_IO_ATOMIC_CONTEXT_PORTAL)
+		spin_lock(&mc_io->spinlock);
+	else
+		mutex_lock(&mc_io->mutex);
+
+	/*
+	 * Send command to the MC hardware:
+	 */
+	mc_write_command(mc_io->portal_virt_addr, cmd);
+
+	/*
+	 * Wait for response from the MC hardware:
+	 */
+	if (mc_io->mc_command_done_irq_armed)
+		error = mc_completion_wait(mc_io, cmd, &status);
+	else
+		error = mc_polling_wait(mc_io, cmd, &status);
+
+	if (error < 0)
+		goto common_exit;
+
 	if (status != MC_CMD_STATUS_OK) {
-		pr_debug("MC command failed: portal: %#llx, obj handle: %#x, command: %#x, status: %s (%#x)\n",
+		pr_debug("MC cmd failed: portal: %#llx, obj handle: %#x, cmd: %#x, status: %s (%#x)\n",
 			 mc_io->portal_phys_addr,
 			 (unsigned int)MC_CMD_HDR_READ_TOKEN(cmd->header),
 			 (unsigned int)MC_CMD_HDR_READ_CMDID(cmd->header),
diff --git a/drivers/staging/fsl-mc/include/mc-private.h b/drivers/staging/fsl-mc/include/mc-private.h
index af7bd81..ebb5061 100644
--- a/drivers/staging/fsl-mc/include/mc-private.h
+++ b/drivers/staging/fsl-mc/include/mc-private.h
@@ -111,12 +111,14 @@ struct fsl_mc_resource_pool {
  * from the physical DPRC.
  * @irq_resources: Pointer to array of IRQ objects for the IRQ pool.
 * @scan_mutex: Serializes bus scanning
+ * @dprc_attr: DPRC attributes
  */
 struct fsl_mc_bus {
 	struct fsl_mc_device mc_dev;
 	struct fsl_mc_resource_pool resource_pools[FSL_MC_NUM_POOL_TYPES];
 	struct fsl_mc_device_irq *irq_resources;
 	struct mutex scan_mutex;    /* serializes bus scanning */
+	struct dprc_attributes dprc_attr;
 };
 
 #define to_fsl_mc_bus(_mc_dev) \
@@ -134,10 +136,6 @@ int dprc_scan_objects(struct fsl_mc_device *mc_bus_dev,
 		      const char *driver_override,
 		      unsigned int *total_irq_count);
 
-int dprc_lookup_object(struct fsl_mc_device *mc_bus_dev,
-		       struct fsl_mc_device *child_dev,
-		       u32 *child_obj_index);
-
 int __init dprc_driver_init(void);
 
 void __exit dprc_driver_exit(void);
 
diff --git a/drivers/staging/fsl-mc/include/mc-sys.h b/drivers/staging/fsl-mc/include/mc-sys.h
index d2c95831..c426e63 100644
--- a/drivers/staging/fsl-mc/include/mc-sys.h
+++ b/drivers/staging/fsl-mc/include/mc-sys.h
@@ -39,6 +39,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
@@ -57,9 +58,11 @@ struct mc_command;
  * @portal_size: MC command portal size in bytes
  * @portal_phys_addr: MC command portal physical address
  * @portal_virt_addr: MC command portal virtual address
- * @resource: generic resource associated with the MC portal if
- * the MC portal came from a resource pool, or NULL if the MC portal
- * is permanently bound to a device (e.g., a DPRC)
+ * @dpmcp_dev: pointer to the DPMCP device associated with the MC portal.
+ * @mc_command_done_irq_armed: Flag indicating that the MC command done IRQ
+ * is currently armed.
+ * @mc_command_done_completion: Completion variable to be signaled when an MC
+ * command sent to the MC fw is completed.
  * @mutex: Mutex to serialize mc_send_command() calls that use the same MC
  * portal, if the fsl_mc_io object was created with the
  * FSL_MC_IO_ATOMIC_CONTEXT_PORTAL flag off. mc_send_command() calls for this
@@ -75,21 +78,41 @@ struct fsl_mc_io {
 	u16 portal_size;
 	phys_addr_t portal_phys_addr;
 	void __iomem *portal_virt_addr;
-	struct fsl_mc_resource *resource;
+	struct fsl_mc_device *dpmcp_dev;
 
 	union {
-		struct mutex mutex; /* serializes mc_send_command() calls */
-		spinlock_t spinlock;	/* serializes mc_send_command() calls */
+		/*
+		 * These fields are only meaningful if the
+		 * FSL_MC_IO_ATOMIC_CONTEXT_PORTAL flag is not set
+		 */
+		struct {
+			struct mutex mutex; /* serializes mc_send_command() */
+			struct completion mc_command_done_completion;
+			bool mc_command_done_irq_armed;
+		};
+
+		/*
+		 * This field is only meaningful if the
+		 * FSL_MC_IO_ATOMIC_CONTEXT_PORTAL flag is set
+		 */
+		spinlock_t spinlock;	/* serializes mc_send_command() */
 	};
 };
 
 int __must_check fsl_create_mc_io(struct device *dev,
 				  phys_addr_t mc_portal_phys_addr,
 				  uint32_t mc_portal_size,
-				  struct fsl_mc_resource *resource,
+				  struct fsl_mc_device *dpmcp_dev,
 				  uint32_t flags, struct fsl_mc_io **new_mc_io);
 
 void fsl_destroy_mc_io(struct fsl_mc_io *mc_io);
 
+int fsl_mc_io_set_dpmcp(struct fsl_mc_io *mc_io,
+			struct fsl_mc_device *dpmcp_dev);
+
+void fsl_mc_io_unset_dpmcp(struct fsl_mc_io *mc_io);
+
+int fsl_mc_io_setup_dpmcp_irq(struct fsl_mc_io *mc_io);
+
 int mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd);
 
 #endif /* _FSL_MC_SYS_H */
-- 
2.3.3