2006-02-18 00:57:13

by Roland Dreier

Subject: [PATCH 00/22] [RFC] IBM eHCA InfiniBand adapter driver

Here's a series of patches that add an InfiniBand adapter driver
for IBM eHCA hardware. Please look it over with an eye towards issues
that need to be addressed before merging this upstream.

This patch series is somewhat unusual in that I am not the original
author of this driver -- I am just posting it for review on behalf of
the authors, who are apparently unable to post patches themselves due
to internal issues at IBM. However, they are cc'ed and will respond
to comments in this thread.

In fact I have some issues with the code myself that need to be
addressed before this driver is mergeable. I've included most of them
in the individual patches, although I have some general comments too.
However, I would like to get some early feedback for the ehca authors
from the wider community. In particular, I think it's important to run
this past the ppc64 experts, since I'm not sure what the standards for
this sort of pSeries driver are.

Anyway, my general comments:

- The #ifs that test EHCA_USERDRIVER and __KERNEL__ should be killed.
We know that this is kernel code, so there's no reason to include
userspace compatibility junk.
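
  For example (sketch only, using the includes from this very patch),
  a block like the one near the top of hcp_if.h:

```c
#ifdef EHCA_USE_HCALL
#ifndef EHCA_USERDRIVER
#include "hcp_phyp.h"
#else
#include "testbench/hcallbridge.h"
#endif
#endif
```

  should simply become

```c
#include "hcp_phyp.h"
```

  since in-kernel code always takes the kernel branch.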

- Many of the comments look like they are for some automatic
documentation system that is not quite kerneldoc. They should be
fixed to be real kerneldoc comments.
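
  For example, the comment on hipz_h_alloc_resource_eq below could be
  turned into real kerneldoc along these lines (parameter text taken
  from the patch; the exact wording is just a sketch):

```c
/**
 * hipz_h_alloc_resource_eq() - allocate EQ resources in HW and FW
 * @eq_handle:         eq handle for this queue
 * @act_nr_of_entries: actual number of queue entries
 * @act_pages:         actual number of queue pages
 * @eq_ist:            used by the hcp_H_XIRR() call
 *
 * Initializes the resources and creates the empty EQPT (ring).
 */
```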

- In general there is a huge amount of code in large inline functions
in .h files. Things should be reorganized to cut this down to a
sane amount.

Thanks,
Roland


2006-02-18 00:57:36

by Roland Dreier

Subject: [PATCH 02/22] Firmware interface code for IB device.

From: Roland Dreier <[email protected]>

This is a very large file with way too much code for a .h file.
The functions also look too big to be worth inlining. Is there any
way for this code to move to a .c file?
---

drivers/infiniband/hw/ehca/hcp_if.h | 2022 +++++++++++++++++++++++++++++++++++
1 files changed, 2022 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/hcp_if.h b/drivers/infiniband/hw/ehca/hcp_if.h
new file mode 100644
index 0000000..70bf77f
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/hcp_if.h
@@ -0,0 +1,2022 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * Firmware Infiniband Interface code for POWER
+ *
+ * Authors: Gerd Bayer <[email protected]>
+ * Christoph Raisch <[email protected]>
+ * Waleri Fomin <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: hcp_if.h,v 1.62 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef __HCP_IF_H__
+#define __HCP_IF_H__
+
+#include "ehca_tools.h"
+#include "hipz_structs.h"
+#include "ehca_classes.h"
+
+#ifndef EHCA_USE_HCALL
+#include "hcz_queue.h"
+#include "hcz_mrmw.h"
+#include "hcz_emmio.h"
+#include "sim_prom.h"
+#endif
+#include "hipz_fns.h"
+#include "hcp_sense.h"
+#include "ehca_irq.h"
+
+#ifndef CONFIG_PPC64
+#ifndef Z_SERIES
+#warning "included with wrong target, this is a p file"
+#endif
+#endif
+
+#ifdef EHCA_USE_HCALL
+
+#ifndef EHCA_USERDRIVER
+#include "hcp_phyp.h"
+#else
+#include "testbench/hcallbridge.h"
+#endif
+#endif
+
+inline static int hcp_galpas_ctor(struct h_galpas *galpas,
+ u64 paddr_kernel, u64 paddr_user)
+{
+ int rc = 0;
+
+ rc = hcall_map_page(paddr_kernel, &galpas->kernel.fw_handle);
+ if (rc != 0)
+ return (rc);
+
+ galpas->user.fw_handle = paddr_user;
+
+ EDEB(7, "paddr_kernel=%lx paddr_user=%lx galpas->kernel=%lx"
+ " galpas->user=%lx",
+ paddr_kernel, paddr_user, galpas->kernel.fw_handle,
+ galpas->user.fw_handle);
+
+ return (rc);
+}
+
+inline static int hcp_galpas_dtor(struct h_galpas *galpas)
+{
+ int rc = 0;
+
+ if (galpas->kernel.fw_handle != 0)
+ rc = hcall_unmap_page(galpas->kernel.fw_handle);
+
+ if (rc != 0)
+ return (rc);
+
+ galpas->user.fw_handle = galpas->kernel.fw_handle = 0;
+
+ return rc;
+}
+
+/**
+ * hipz_h_alloc_resource_eq - Allocate EQ resources in HW and FW, initialize
+ * resources, create the empty EQPT (ring).
+ *
+ * @eq_handle: eq handle for this queue
+ * @act_nr_of_entries: actual number of queue entries
+ * @act_pages: actual number of queue pages
+ * @eq_ist: used by hcp_H_XIRR() call
+ */
+inline static u64 hipz_h_alloc_resource_eq(const struct
+ ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ehca_pfeq *pfeq,
+ const u32 neq_control,
+ const u32
+ number_of_entries,
+ struct ipz_eq_handle
+ *eq_handle,
+ u32 * act_nr_of_entries,
+ u32 * act_pages,
+ u32 * eq_ist)
+{
+ u64 retcode;
+ u64 dummy;
+ u64 act_nr_of_entries_out = 0;
+ u64 act_pages_out = 0;
+ u64 eq_ist_out = 0;
+ u64 allocate_controls = 0;
+ u32 x = (u64)(&x);
+
+ EDEB_EN(7, "pfeq=%p hcp_adapter_handle=%lx neq_control=%x"
+ " number_of_entries=%x",
+ pfeq, hcp_adapter_handle.handle, neq_control,
+ number_of_entries);
+
+#ifndef EHCA_USE_HCALL
+ retcode = simp_h_alloc_resource_eq(hcp_adapter_handle, pfeq,
+ neq_control,
+ number_of_entries,
+ eq_handle,
+ act_nr_of_entries,
+ act_pages, eq_ist);
+#else
+
+ /* resource type */
+ allocate_controls = 3ULL;
+
+ /* ISN is associated */
+ if (neq_control != 1) {
+ allocate_controls = (1ULL << (63 - 7)) | allocate_controls;
+ }
+
+ /* notification event queue */
+ if (neq_control == 1) {
+ allocate_controls = (1ULL << 63) | allocate_controls;
+ }
+
+ retcode = plpar_hcall_7arg_7ret(H_ALLOC_RESOURCE,
+ hcp_adapter_handle.handle, /* r4 */
+ allocate_controls, /* r5 */
+ number_of_entries, /* r6 */
+ 0, 0, 0, 0,
+ &eq_handle->handle, /* r4 */
+ &dummy, /* r5 */
+ &dummy, /* r6 */
+ &act_nr_of_entries_out, /* r7 */
+ &act_pages_out, /* r8 */
+ &eq_ist_out, /* r9 */
+ &dummy);
+
+ *act_nr_of_entries = (u32) act_nr_of_entries_out;
+ *act_pages = (u32) act_pages_out;
+ *eq_ist = (u32) eq_ist_out;
+
+#endif /* EHCA_USE_HCALL */
+
+ if (retcode == H_NOT_ENOUGH_RESOURCES) {
+ EDEB_ERR(4, "Not enough resource - retcode=%lx ", retcode);
+ }
+
+ EDEB_EX(7, "act_nr_of_entries=%x act_pages=%x eq_ist=%x",
+ *act_nr_of_entries, *act_pages, *eq_ist);
+
+ return retcode;
+}
+
+static inline u64 hipz_h_reset_event(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ipz_eq_handle eq_handle,
+ const u64 event_mask)
+{
+ u64 retcode = 0;
+ u64 dummy;
+
+ EDEB_EN(7, "eq_handle=%lx, adapter_handle=%lx event_mask=%lx",
+ eq_handle.handle, hcp_adapter_handle.handle, event_mask);
+
+#ifndef EHCA_USE_HCALL
+ /* TODO: Not implemented yet */
+#else
+
+ retcode = plpar_hcall_7arg_7ret(H_RESET_EVENTS,
+ hcp_adapter_handle.handle, /* r4 */
+ eq_handle.handle, /* r5 */
+ event_mask, /* r6 */
+ 0, 0, 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+#endif
+ EDEB(7, "retcode=%lx", retcode);
+
+ return retcode;
+}
+
+/**
+ * hipz_h_alloc_resource_cq - Allocate CQ resources in HW and FW, initialize
+ * resources, create the empty CQPT (ring).
+ *
+ * @eq_handle: eq handle to use for this cq
+ * @cq_handle: cq handle for this queue
+ * @act_nr_of_entries: actual number of queue entries
+ * @act_pages: actual number of queue pages
+ * @galpas: contain logical address of priv. storage and
+ * log_user_storage
+ */
+static inline u64 hipz_h_alloc_resource_cq(const struct
+ ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ehca_pfcq *pfcq,
+ const struct ipz_eq_handle
+ eq_handle,
+ const u32 cq_token,
+ const u32
+ number_of_entries,
+ struct ipz_cq_handle
+ *cq_handle,
+ u32 * act_nr_of_entries,
+ u32 * act_pages,
+ struct h_galpas *galpas)
+{
+ u64 retcode = 0;
+ u64 dummy;
+ u64 act_nr_of_entries_out;
+ u64 act_pages_out;
+ u64 g_la_privileged_out;
+ u64 g_la_user_out;
+ /* stack location is a unique identifier for a process from beginning
+ * to end of this frame */
+ u32 x = (u64)(&x);
+
+ EDEB_EN(7, "pfcq=%p hcp_adapter_handle=%lx eq_handle=%lx cq_token=%x"
+ " number_of_entries=%x",
+ pfcq, hcp_adapter_handle.handle, eq_handle.handle,
+ cq_token, number_of_entries);
+
+#ifndef EHCA_USE_HCALL
+ retcode = simp_h_alloc_resource_cq(hcp_adapter_handle,
+ pfcq,
+ eq_handle,
+ cq_token,
+ number_of_entries,
+ cq_handle,
+ act_nr_of_entries,
+ act_pages, galpas);
+#else
+ retcode = plpar_hcall_7arg_7ret(H_ALLOC_RESOURCE,
+ hcp_adapter_handle.handle, /* r4 */
+ 2, /* r5 */
+ eq_handle.handle, /* r6 */
+ cq_token, /* r7 */
+ number_of_entries, /* r8 */
+ 0, 0,
+ &cq_handle->handle, /* r4 */
+ &dummy, /* r5 */
+ &dummy, /* r6 */
+ &act_nr_of_entries_out, /* r7 */
+ &act_pages_out, /* r8 */
+ &g_la_privileged_out, /* r9 */
+ &g_la_user_out); /* r10 */
+
+ *act_nr_of_entries = (u32) act_nr_of_entries_out;
+ *act_pages = (u32) act_pages_out;
+
+ if (retcode == 0) {
+ hcp_galpas_ctor(galpas, g_la_privileged_out, g_la_user_out);
+ }
+#endif /* EHCA_USE_HCALL */
+
+ if (retcode == H_NOT_ENOUGH_RESOURCES) {
+ EDEB_ERR(4, "Not enough resources. retcode=%lx", retcode);
+ }
+
+ EDEB_EX(7, "cq_handle=%lx act_nr_of_entries=%x act_pages=%x",
+ cq_handle->handle, *act_nr_of_entries, *act_pages);
+
+ return retcode;
+}
+
+#define H_ALL_RES_QP_Enhanced_QP_Operations EHCA_BMASK_IBM(9,11)
+#define H_ALL_RES_QP_QP_PTE_Pin EHCA_BMASK_IBM(12,12)
+#define H_ALL_RES_QP_Service_Type EHCA_BMASK_IBM(13,15)
+#define H_ALL_RES_QP_LL_RQ_CQE_Posting EHCA_BMASK_IBM(18,18)
+#define H_ALL_RES_QP_LL_SQ_CQE_Posting EHCA_BMASK_IBM(19,21)
+#define H_ALL_RES_QP_Signalling_Type EHCA_BMASK_IBM(22,23)
+#define H_ALL_RES_QP_UD_Address_Vector_L_Key_Control EHCA_BMASK_IBM(31,31)
+#define H_ALL_RES_QP_Resource_Type EHCA_BMASK_IBM(56,63)
+
+#define H_ALL_RES_QP_Max_Outstanding_Send_Work_Requests EHCA_BMASK_IBM(0,15)
+#define H_ALL_RES_QP_Max_Outstanding_Receive_Work_Requests EHCA_BMASK_IBM(16,31)
+#define H_ALL_RES_QP_Max_Send_SG_Elements EHCA_BMASK_IBM(32,39)
+#define H_ALL_RES_QP_Max_Receive_SG_Elements EHCA_BMASK_IBM(40,47)
+
+#define H_ALL_RES_QP_Act_Outstanding_Send_Work_Requests EHCA_BMASK_IBM(16,31)
+#define H_ALL_RES_QP_Act_Outstanding_Receive_Work_Requests EHCA_BMASK_IBM(48,63)
+#define H_ALL_RES_QP_Act_Send_SG_Elements EHCA_BMASK_IBM(8,15)
+#define H_ALL_RES_QP_Act_Receeive_SG_Elements EHCA_BMASK_IBM(24,31)
+
+#define H_ALL_RES_QP_Send_Queue_Size_pages EHCA_BMASK_IBM(0,31)
+#define H_ALL_RES_QP_Receive_Queue_Size_pages EHCA_BMASK_IBM(32,63)
+
+/* direct access qp controls */
+#define DAQP_CTRL_ENABLE 0x01
+#define DAQP_CTRL_SEND_COMPLETION 0x20
+#define DAQP_CTRL_RECV_COMPLETION 0x40
+
+/**
+ * hipz_h_alloc_resource_qp - Allocate QP resources in HW and FW,
+ * initialize resources, create empty QPPTs (2 rings).
+ *
+ * @h_galpas to access HCA resident QP attributes
+ */
+static inline u64 hipz_h_alloc_resource_qp(const struct
+ ipz_adapter_handle
+ adapter_handle,
+ struct ehca_pfqp *pfqp,
+ const u8 servicetype,
+ const u8 daqp_ctrl,
+ const u8 signalingtype,
+ const u8 ud_av_l_key_ctl,
+ const struct ipz_cq_handle send_cq_handle,
+ const struct ipz_cq_handle receive_cq_handle,
+ const struct ipz_eq_handle async_eq_handle,
+ const u32 qp_token,
+ const struct ipz_pd pd,
+ const u16 max_nr_send_wqes,
+ const u16 max_nr_receive_wqes,
+ const u8 max_nr_send_sges,
+ const u8 max_nr_receive_sges,
+ const u32 ud_av_l_key,
+ struct ipz_qp_handle *qp_handle,
+ u32 * qp_nr,
+ u16 * act_nr_send_wqes,
+ u16 * act_nr_receive_wqes,
+ u8 * act_nr_send_sges,
+ u8 * act_nr_receive_sges,
+ u32 * nr_sq_pages,
+ u32 * nr_rq_pages,
+ struct h_galpas *h_galpas)
+{
+ u64 retcode = H_Success;
+ u64 allocate_controls;
+ u64 max_r10_reg;
+ u64 dummy = 0;
+ u64 qp_nr_out = 0;
+ u64 r6_out = 0;
+ u64 r7_out = 0;
+ u64 r8_out = 0;
+ u64 g_la_user_out = 0;
+ u64 r11_out = 0;
+
+ EDEB_EN(7, "pfqp=%p adapter_handle=%lx servicetype=%x signalingtype=%x"
+ " ud_av_l_key=%x send_cq_handle=%lx receive_cq_handle=%lx"
+ " async_eq_handle=%lx qp_token=%x pd=%x max_nr_send_wqes=%x"
+ " max_nr_receive_wqes=%x max_nr_send_sges=%x"
+ " max_nr_receive_sges=%x ud_av_l_key=%x galpa.pid=%x",
+ pfqp, adapter_handle.handle, servicetype, signalingtype,
+ ud_av_l_key, send_cq_handle.handle,
+ receive_cq_handle.handle, async_eq_handle.handle, qp_token,
+ pd.value, max_nr_send_wqes, max_nr_receive_wqes,
+ max_nr_send_sges, max_nr_receive_sges, ud_av_l_key,
+ h_galpas->pid);
+
+#ifndef EHCA_USE_HCALL
+ retcode = simp_h_alloc_resource_qp(adapter_handle,
+ pfqp,
+ servicetype,
+ signalingtype,
+ ud_av_l_key_ctl,
+ send_cq_handle,
+ receive_cq_handle,
+ async_eq_handle,
+ qp_token,
+ pd,
+ max_nr_send_wqes,
+ max_nr_receive_wqes,
+ max_nr_send_sges,
+ max_nr_receive_sges,
+ ud_av_l_key,
+ qp_handle,
+ qp_nr,
+ act_nr_send_wqes,
+ act_nr_receive_wqes,
+ act_nr_send_sges,
+ act_nr_receive_sges,
+ nr_sq_pages, nr_rq_pages, h_galpas);
+
+#else
+ allocate_controls =
+ EHCA_BMASK_SET(H_ALL_RES_QP_Enhanced_QP_Operations,
+ (daqp_ctrl & DAQP_CTRL_ENABLE) ? 1 : 0)
+ | EHCA_BMASK_SET(H_ALL_RES_QP_QP_PTE_Pin, 0)
+ | EHCA_BMASK_SET(H_ALL_RES_QP_Service_Type, servicetype)
+ | EHCA_BMASK_SET(H_ALL_RES_QP_Signalling_Type, signalingtype)
+ | EHCA_BMASK_SET(H_ALL_RES_QP_LL_RQ_CQE_Posting,
+ (daqp_ctrl & DAQP_CTRL_RECV_COMPLETION) ? 1 : 0)
+ | EHCA_BMASK_SET(H_ALL_RES_QP_LL_SQ_CQE_Posting,
+ (daqp_ctrl & DAQP_CTRL_SEND_COMPLETION) ? 1 : 0)
+ | EHCA_BMASK_SET(H_ALL_RES_QP_UD_Address_Vector_L_Key_Control,
+ ud_av_l_key_ctl)
+ | EHCA_BMASK_SET(H_ALL_RES_QP_Resource_Type, 1);
+
+ max_r10_reg =
+ EHCA_BMASK_SET(H_ALL_RES_QP_Max_Outstanding_Send_Work_Requests,
+ max_nr_send_wqes)
+ | EHCA_BMASK_SET(H_ALL_RES_QP_Max_Outstanding_Receive_Work_Requests,
+ max_nr_receive_wqes)
+ | EHCA_BMASK_SET(H_ALL_RES_QP_Max_Send_SG_Elements,
+ max_nr_send_sges)
+ | EHCA_BMASK_SET(H_ALL_RES_QP_Max_Receive_SG_Elements,
+ max_nr_receive_sges);
+
+
+ retcode = plpar_hcall_9arg_9ret(H_ALLOC_RESOURCE,
+ adapter_handle.handle, /* r4 */
+ allocate_controls, /* r5 */
+ send_cq_handle.handle, /* r6 */
+ receive_cq_handle.handle,/* r7 */
+ async_eq_handle.handle, /* r8 */
+ ((u64) qp_token << 32)
+ | pd.value, /* r9 */
+ max_r10_reg, /* r10 */
+ ud_av_l_key, /* r11 */
+ 0,
+ &qp_handle->handle, /* r4 */
+ &qp_nr_out, /* r5 */
+ &r6_out, /* r6 */
+ &r7_out, /* r7 */
+ &r8_out, /* r8 */
+ &dummy, /* r9 */
+ &g_la_user_out, /* r10 */
+ &r11_out,
+ &dummy);
+
+ /* extract outputs */
+ *qp_nr = (u32) qp_nr_out;
+ *act_nr_send_wqes = (u16)
+ EHCA_BMASK_GET(H_ALL_RES_QP_Act_Outstanding_Send_Work_Requests,
+ r6_out);
+ *act_nr_receive_wqes = (u16)
+ EHCA_BMASK_GET(H_ALL_RES_QP_Act_Outstanding_Receive_Work_Requests,
+ r6_out);
+ *act_nr_send_sges =
+ (u8) EHCA_BMASK_GET(H_ALL_RES_QP_Act_Send_SG_Elements,
+ r7_out);
+ *act_nr_receive_sges =
+ (u8) EHCA_BMASK_GET(H_ALL_RES_QP_Act_Receeive_SG_Elements,
+ r7_out);
+ *nr_sq_pages =
+ (u32) EHCA_BMASK_GET(H_ALL_RES_QP_Send_Queue_Size_pages,
+ r8_out);
+ *nr_rq_pages =
+ (u32) EHCA_BMASK_GET(H_ALL_RES_QP_Receive_Queue_Size_pages,
+ r8_out);
+ if (retcode == 0) {
+ hcp_galpas_ctor(h_galpas, g_la_user_out, g_la_user_out);
+ }
+#endif /* EHCA_USE_HCALL */
+
+ if (retcode == H_NOT_ENOUGH_RESOURCES) {
+ EDEB_ERR(4, "Not enough resources. retcode=%lx",
+ retcode);
+ }
+
+ EDEB_EX(7, "qp_nr=%x act_nr_send_wqes=%x"
+ " act_nr_receive_wqes=%x act_nr_send_sges=%x"
+ " act_nr_receive_sges=%x nr_sq_pages=%x"
+ " nr_rq_pages=%x galpa.user=%lx galpa.kernel=%lx",
+ *qp_nr, *act_nr_send_wqes, *act_nr_receive_wqes,
+ *act_nr_send_sges, *act_nr_receive_sges, *nr_sq_pages,
+ *nr_rq_pages, h_galpas->user.fw_handle,
+ h_galpas->kernel.fw_handle);
+
+ return (retcode);
+}
+
+static inline u64 hipz_h_query_port(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ const u8 port_id,
+ struct query_port_rblock
+ *query_port_response_block)
+{
+ u64 retcode = H_Success;
+ u64 dummy;
+ u64 r_cb;
+
+ EDEB_EN(7, "hcp_adapter_handle=%lx port_id %x",
+ hcp_adapter_handle.handle, port_id);
+
+ if ((((u64)query_port_response_block) & 0xfff) != 0) {
+ EDEB_ERR(4, "response block not page aligned");
+ retcode = H_Parameter;
+ return (retcode);
+ }
+
+#ifndef EHCA_USE_HCALL
+ retcode = 0;
+#else
+ r_cb = ehca_kv_to_g(query_port_response_block);
+
+ retcode = plpar_hcall_7arg_7ret(H_QUERY_PORT,
+ hcp_adapter_handle.handle, /* r4 */
+ port_id, /* r5 */
+ r_cb, /* r6 */
+ 0, 0, 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+#endif /* EHCA_USE_HCALL */
+
+ EDEB(7, "offset0=%x offset1=%x offset2=%x offset3=%x",
+ ((u32 *) query_port_response_block)[0],
+ ((u32 *) query_port_response_block)[1],
+ ((u32 *) query_port_response_block)[2],
+ ((u32 *) query_port_response_block)[3]);
+ EDEB(7, "offset4=%x offset5=%x offset6=%x offset7=%x",
+ ((u32 *) query_port_response_block)[4],
+ ((u32 *) query_port_response_block)[5],
+ ((u32 *) query_port_response_block)[6],
+ ((u32 *) query_port_response_block)[7]);
+ EDEB(7, "offset8=%x offset9=%x offseta=%x offsetb=%x",
+ ((u32 *) query_port_response_block)[8],
+ ((u32 *) query_port_response_block)[9],
+ ((u32 *) query_port_response_block)[10],
+ ((u32 *) query_port_response_block)[11]);
+ EDEB(7, "offsetc=%x offsetd=%x offsete=%x offsetf=%x",
+ ((u32 *) query_port_response_block)[12],
+ ((u32 *) query_port_response_block)[13],
+ ((u32 *) query_port_response_block)[14],
+ ((u32 *) query_port_response_block)[15]);
+ EDEB(7, "offset31=%x offset35=%x offset36=%x",
+ ((u32 *) query_port_response_block)[32],
+ ((u32 *) query_port_response_block)[36],
+ ((u32 *) query_port_response_block)[37]);
+ EDEB(7, "offset200=%x offset201=%x offset202=%x "
+ "offset203=%x",
+ ((u32 *) query_port_response_block)[0x200],
+ ((u32 *) query_port_response_block)[0x201],
+ ((u32 *) query_port_response_block)[0x202],
+ ((u32 *) query_port_response_block)[0x203]);
+
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return retcode;
+}
+
+static inline u64 hipz_h_query_hca(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ struct query_hca_rblock
+ *query_hca_rblock)
+{
+ u64 retcode = 0;
+ u64 dummy;
+ u64 r_cb;
+ EDEB_EN(7, "hcp_adapter_handle=%lx", hcp_adapter_handle.handle);
+
+ if ((((u64)query_hca_rblock) & 0xfff) != 0) {
+ EDEB_ERR(4, "response block not page aligned");
+ retcode = H_Parameter;
+ return (retcode);
+ }
+
+#ifndef EHCA_USE_HCALL
+ retcode = 0;
+#else
+ r_cb = ehca_kv_to_g(query_hca_rblock);
+
+ retcode = plpar_hcall_7arg_7ret(H_QUERY_HCA,
+ hcp_adapter_handle.handle, /* r4 */
+ r_cb, /* r5 */
+ 0, 0, 0, 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+#endif /* EHCA_USE_HCALL */
+
+ EDEB(7, "offset0=%x offset1=%x offset2=%x offset3=%x",
+ ((u32 *) query_hca_rblock)[0],
+ ((u32 *) query_hca_rblock)[1],
+ ((u32 *) query_hca_rblock)[2], ((u32 *) query_hca_rblock)[3]);
+ EDEB(7, "offset4=%x offset5=%x offset6=%x offset7=%x",
+ ((u32 *) query_hca_rblock)[4],
+ ((u32 *) query_hca_rblock)[5],
+ ((u32 *) query_hca_rblock)[6], ((u32 *) query_hca_rblock)[7]);
+ EDEB(7, "offset8=%x offset9=%x offseta=%x offsetb=%x",
+ ((u32 *) query_hca_rblock)[8],
+ ((u32 *) query_hca_rblock)[9],
+ ((u32 *) query_hca_rblock)[10], ((u32 *) query_hca_rblock)[11]);
+ EDEB(7, "offsetc=%x offsetd=%x offsete=%x offsetf=%x",
+ ((u32 *) query_hca_rblock)[12],
+ ((u32 *) query_hca_rblock)[13],
+ ((u32 *) query_hca_rblock)[14], ((u32 *) query_hca_rblock)[15]);
+ EDEB(7, "offset136=%x offset192=%x offset204=%x",
+ ((u32 *) query_hca_rblock)[32],
+ ((u32 *) query_hca_rblock)[48], ((u32 *) query_hca_rblock)[51]);
+ EDEB(7, "offset231=%x offset235=%x",
+ ((u32 *) query_hca_rblock)[57], ((u32 *) query_hca_rblock)[58]);
+ EDEB(7, "offset200=%x offset201=%x offset202=%x offset203=%x",
+ ((u32 *) query_hca_rblock)[0x201],
+ ((u32 *) query_hca_rblock)[0x202],
+ ((u32 *) query_hca_rblock)[0x203],
+ ((u32 *) query_hca_rblock)[0x204]);
+
+ EDEB_EX(7, "retcode=%lx hcp_adapter_handle=%lx",
+ retcode, hcp_adapter_handle.handle);
+
+ return retcode;
+}
+
+/**
+ * hipz_h_register_rpage - hcp_if.h internal function for all
+ * hcp_H_REGISTER_RPAGE calls.
+ *
+ * @logical_address_of_page: kv transformation to GX address in this routine
+ */
+static inline u64 hipz_h_register_rpage(const struct
+ ipz_adapter_handle
+ hcp_adapter_handle,
+ const u8 pagesize,
+ const u8 queue_type,
+ const u64 resource_handle,
+ const u64
+ logical_address_of_page,
+ u64 count)
+{
+ u64 retcode = 0;
+ u64 dummy;
+
+ EDEB_EN(7, "hcp_adapter_handle=%lx pagesize=%x queue_type=%x"
+ " resource_handle=%lx logical_address_of_page=%lx count=%lx",
+ hcp_adapter_handle.handle, pagesize, queue_type,
+ resource_handle, logical_address_of_page, count);
+
+#ifndef EHCA_USE_HCALL
+ EDEB_ERR(4, "Not implemented");
+#else
+ retcode = plpar_hcall_7arg_7ret(H_REGISTER_RPAGES,
+ hcp_adapter_handle.handle, /* r4 */
+ queue_type | pagesize << 8, /* r5 */
+ resource_handle, /* r6 */
+ logical_address_of_page, /* r7 */
+ count, /* r8 */
+ 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+#endif /* EHCA_USE_HCALL */
+
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return retcode;
+}
+
+static inline u64 hipz_h_register_rpage_eq(const struct
+ ipz_adapter_handle
+ hcp_adapter_handle,
+ const struct ipz_eq_handle
+ eq_handle,
+ struct ehca_pfeq *pfeq,
+ const u8 pagesize,
+ const u8 queue_type,
+ const u64
+ logical_address_of_page,
+ const u64 count)
+{
+ u64 retcode = 0;
+
+ EDEB_EN(7, "pfeq=%p hcp_adapter_handle=%lx eq_handle=%lx pagesize=%x"
+ " queue_type=%x logical_address_of_page=%lx count=%lx",
+ pfeq, hcp_adapter_handle.handle, eq_handle.handle, pagesize,
+ queue_type, logical_address_of_page, count);
+
+#ifndef EHCA_USE_HCALL
+ retcode =
+ simp_h_register_rpage_eq(hcp_adapter_handle, eq_handle, pfeq,
+ pagesize, queue_type,
+ logical_address_of_page, count);
+#else
+ if (count != 1) {
+ EDEB_ERR(4, "Page counter=%lx", count);
+ return (H_Parameter);
+ }
+ retcode = hipz_h_register_rpage(hcp_adapter_handle,
+ pagesize,
+ queue_type,
+ eq_handle.handle,
+ logical_address_of_page, count);
+#endif /* EHCA_USE_HCALL */
+
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return retcode;
+}
+
+static inline u32 hipz_request_interrupt(struct ehca_irq_info *irq_info,
+ irqreturn_t(*handler)
+ (int, void *, struct pt_regs *))
+{
+
+ int ret = 0;
+
+ EDEB_EN(7, "ist=0x%x", irq_info->ist);
+
+#ifdef EHCA_USE_HCALL
+#ifndef EHCA_USERDRIVER
+ ret = ibmebus_request_irq(NULL, irq_info->ist, handler,
+ SA_INTERRUPT, "ehca", (void *)irq_info);
+
+ if (ret < 0)
+ EDEB_ERR(4, "Can't map interrupt handler.");
+#else
+ struct hcall_irq_info hirq = {.irq = irq_info->irq,
+ .ist = irq_info->ist,
+ .pid = irq_info->pid};
+
+ hirq = hirq;
+ ret = hcall_reg_eqh(&hirq, ehca_interrupt_eq);
+#endif /* EHCA_USERDRIVER */
+#endif /* EHCA_USE_HCALL */
+
+ EDEB_EX(7, "ret=%x", ret);
+
+ return ret;
+}
+
+static inline void hipz_free_interrupt(struct ehca_irq_info *irq_info)
+{
+#ifdef EHCA_USE_HCALL
+#ifndef EHCA_USERDRIVER
+ ibmebus_free_irq(NULL, irq_info->ist, (void *)irq_info);
+#endif
+#endif
+}
+
+static inline u32 hipz_h_query_int_state(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ehca_irq_info *irq_info)
+{
+ u32 rc = 0;
+ u64 dummy = 0;
+
+ EDEB_EN(7, "ist=0x%x", irq_info->ist);
+
+#ifdef EHCA_USE_HCALL
+#ifdef EHCA_USERDRIVER
+ /* TODO: Not implemented yet */
+#else
+ rc = plpar_hcall_7arg_7ret(H_QUERY_INT_STATE,
+ hcp_adapter_handle.handle, /* r4 */
+ irq_info->ist, /* r5 */
+ 0, 0, 0, 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+
+ if ((rc != H_Success) && (rc != H_Busy))
+ EDEB_ERR(4, "Could not query interrupt state.");
+#endif
+#endif
+ EDEB_EX(7, "interrupt state: %x", rc);
+
+ return rc;
+}
+
+static inline u64 hipz_h_register_rpage_cq(const struct
+ ipz_adapter_handle
+ hcp_adapter_handle,
+ const struct ipz_cq_handle
+ cq_handle,
+ struct ehca_pfcq *pfcq,
+ const u8 pagesize,
+ const u8 queue_type,
+ const u64
+ logical_address_of_page,
+ const u64 count,
+ const struct h_galpa gal)
+{
+ u64 retcode = 0;
+
+ EDEB_EN(7, "pfcq=%p hcp_adapter_handle=%lx cq_handle=%lx pagesize=%x"
+ " queue_type=%x logical_address_of_page=%lx count=%lx",
+ pfcq, hcp_adapter_handle.handle, cq_handle.handle, pagesize,
+ queue_type, logical_address_of_page, count);
+
+#ifndef EHCA_USE_HCALL
+ retcode =
+ simp_h_register_rpage_cq(hcp_adapter_handle, cq_handle, pfcq,
+ pagesize, queue_type,
+ logical_address_of_page, count, gal);
+#else
+ if (count != 1) {
+ EDEB_ERR(4, "Page counter=%lx", count);
+ return (H_Parameter);
+ }
+
+ retcode =
+ hipz_h_register_rpage(hcp_adapter_handle, pagesize, queue_type,
+ cq_handle.handle, logical_address_of_page,
+ count);
+#endif /* EHCA_USE_HCALL */
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return retcode;
+}
+
+static inline u64 hipz_h_register_rpage_qp(const struct
+ ipz_adapter_handle
+ hcp_adapter_handle,
+ const struct ipz_qp_handle
+ qp_handle,
+ struct ehca_pfqp *pfqp,
+ const u8 pagesize,
+ const u8 queue_type,
+ const u64
+ logical_address_of_page,
+ const u64 count,
+ const struct h_galpa
+ galpa)
+{
+ u64 retcode = 0;
+
+ EDEB_EN(7, "pfqp=%p hcp_adapter_handle=%lx qp_handle=%lx pagesize=%x"
+ " queue_type=%x logical_address_of_page=%lx count=%lx",
+ pfqp, hcp_adapter_handle.handle, qp_handle.handle, pagesize,
+ queue_type, logical_address_of_page, count);
+
+#ifndef EHCA_USE_HCALL
+ retcode = simp_h_register_rpage_qp(hcp_adapter_handle,
+ qp_handle,
+ pfqp,
+ pagesize,
+ queue_type,
+ logical_address_of_page,
+ count, galpa);
+#else
+ if (count != 1) {
+ EDEB_ERR(4, "Page counter=%lx", count);
+ return (H_Parameter);
+ }
+
+ retcode = hipz_h_register_rpage(hcp_adapter_handle,
+ pagesize,
+ queue_type,
+ qp_handle.handle,
+ logical_address_of_page, count);
+#endif /* EHCA_USE_HCALL */
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return retcode;
+}
+
+static inline u64 hipz_h_remove_rpt_cq(const struct
+ ipz_adapter_handle
+ hcp_adapter_handle,
+ const struct ipz_cq_handle
+ cq_handle,
+ struct ehca_pfcq *pfcq)
+{
+ u64 retcode = 0;
+
+ EDEB_EN(7, "pfcq=%p hcp_adapter_handle=%lx cq_handle=%lx",
+ pfcq, hcp_adapter_handle.handle, cq_handle.handle);
+
+#ifndef EHCA_USE_HCALL
+ retcode = simp_h_remove_rpt_cq(hcp_adapter_handle, cq_handle, pfcq);
+#else
+ /* TODO: hcall not implemented */
+#endif
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return 0;
+}
+
+static inline u64 hipz_h_remove_rpt_eq(const struct
+ ipz_adapter_handle
+ hcp_adapter_handle,
+ const struct ipz_eq_handle
+ eq_handle,
+ struct ehca_pfeq *pfeq)
+{
+ u64 retcode = 0;
+
+ EDEB_EN(7, "hcp_adapter_handle=%lx eq_handle=%lx",
+ hcp_adapter_handle.handle, eq_handle.handle);
+
+#ifndef EHCA_USE_HCALL
+ retcode = simp_h_remove_rpt_eq(hcp_adapter_handle, eq_handle, pfeq);
+#else
+ /* TODO: hcall not implemented */
+#endif
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return 0;
+}
+
+static inline u64 hipz_h_remove_rpt_qp(const struct
+ ipz_adapter_handle
+ hcp_adapter_handle,
+ const struct ipz_qp_handle
+ qp_handle,
+ struct ehca_pfqp *pfqp)
+{
+ u64 retcode = 0;
+
+ EDEB_EN(7, "pfqp=%p hcp_adapter_handle=%lx qp_handle=%lx",
+ pfqp, hcp_adapter_handle.handle, qp_handle.handle);
+
+#ifndef EHCA_USE_HCALL
+ retcode = simp_h_remove_rpt_qp(hcp_adapter_handle, qp_handle, pfqp);
+#else
+ /* TODO: hcall not implemented */
+#endif
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return 0;
+}
+
+static inline u64 hipz_h_disable_and_get_wqe(const struct
+ ipz_adapter_handle
+ hcp_adapter_handle,
+ const struct
+ ipz_qp_handle qp_handle,
+ struct ehca_pfqp *pfqp,
+ void **log_addr_next_sq_wqe_tb_processed,
+ void **log_addr_next_rq_wqe_tb_processed,
+ int dis_and_get_function_code)
+{
+ u64 retcode = 0;
+ u8 function_code = 1;
+ u64 dummy, dummy1, dummy2;
+
+ EDEB_EN(7, "pfqp=%p hcp_adapter_handle=%lx function=%x qp_handle=%lx",
+ pfqp, hcp_adapter_handle.handle, function_code, qp_handle.handle);
+
+ if (log_addr_next_sq_wqe_tb_processed==NULL) {
+ log_addr_next_sq_wqe_tb_processed = (void**)&dummy1;
+ }
+ if (log_addr_next_rq_wqe_tb_processed==NULL) {
+ log_addr_next_rq_wqe_tb_processed = (void**)&dummy2;
+ }
+#ifndef EHCA_USE_HCALL
+ retcode =
+ simp_h_disable_and_get_wqe(hcp_adapter_handle, qp_handle, pfqp,
+ log_addr_next_sq_wqe_tb_processed,
+ log_addr_next_rq_wqe_tb_processed);
+#else
+
+ retcode = plpar_hcall_7arg_7ret(H_DISABLE_AND_GETC,
+ hcp_adapter_handle.handle, /* r4 */
+ dis_and_get_function_code, /* r5 */
+ /* function code 1-disQP ret
+ * SQ RQ wqe ptr
+ * 2- ret SQ wqe ptr
+ * 3- ret. RQ count */
+ qp_handle.handle, /* r6 */
+ 0, 0, 0, 0,
+ (void*)log_addr_next_sq_wqe_tb_processed, /* r4 */
+ (void*)log_addr_next_rq_wqe_tb_processed, /* r5 */
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+#endif /* EHCA_USE_HCALL */
+ EDEB_EX(7, "retcode=%lx ladr_next_rq_wqe_out=%p"
+ " ladr_next_sq_wqe_out=%p", retcode,
+ *log_addr_next_sq_wqe_tb_processed,
+ *log_addr_next_rq_wqe_tb_processed);
+
+ return retcode;
+}
+
+enum hcall_sigt {
+ HCALL_SIGT_NO_CQE = 0,
+ HCALL_SIGT_BY_WQE = 1,
+ HCALL_SIGT_EVERY = 2
+};
+
+static inline u64 hipz_h_modify_qp(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ const struct ipz_qp_handle
+ qp_handle, struct ehca_pfqp *pfqp,
+ const u64 update_mask,
+ struct hcp_modify_qp_control_block
+ *mqpcb,
+ struct h_galpa gal)
+{
+ u64 retcode = 0;
+ u64 invalid_attribute_identifier = 0;
+ u64 rc_attrib_mask = 0;
+ u64 dummy;
+ u64 r_cb;
+ EDEB_EN(7, "pfqp=%p hcp_adapter_handle=%lx qp_handle=%lx"
+ " update_mask=%lx qp_state=%x mqpcb=%p",
+ pfqp, hcp_adapter_handle.handle, qp_handle.handle,
+ update_mask, mqpcb->qp_state, mqpcb);
+
+#ifndef EHCA_USE_HCALL
+ simp_h_modify_qp(hcp_adapter_handle, qp_handle, pfqp, update_mask,
+ mqpcb, gal);
+#else
+ r_cb = ehca_kv_to_g(mqpcb);
+ retcode = plpar_hcall_7arg_7ret(H_MODIFY_QP,
+ hcp_adapter_handle.handle, /* r4 */
+ qp_handle.handle, /* r5 */
+ update_mask, /* r6 */
+ r_cb, /* r7 */
+ 0, 0, 0,
+ &invalid_attribute_identifier, /* r4 */
+ &dummy, /* r5 */
+ &dummy, /* r6 */
+ &dummy, /* r7 */
+ &dummy, /* r8 */
+ &rc_attrib_mask, /* r9 */
+ &dummy);
+#endif
+ if (retcode == H_NOT_ENOUGH_RESOURCES) {
+ EDEB_ERR(4, "Insufficient resources retcode=%lx", retcode);
+ }
+
+ EDEB_EX(7, "retcode=%lx invalid_attribute_identifier=%lx"
+ " invalid_attribute_MASK=%lx", retcode,
+ invalid_attribute_identifier, rc_attrib_mask);
+
+ return retcode;
+}
+
+static inline u64 hipz_h_query_qp(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ const struct ipz_qp_handle
+ qp_handle, struct ehca_pfqp *pfqp,
+ struct hcp_modify_qp_control_block
+ *qqpcb, struct h_galpa gal)
+{
+ u64 retcode = 0;
+ u64 dummy;
+ u64 r_cb;
+ EDEB_EN(7, "hcp_adapter_handle=%lx qp_handle=%lx",
+ hcp_adapter_handle.handle, qp_handle.handle);
+
+#ifndef EHCA_USE_HCALL
+ simp_h_query_qp(hcp_adapter_handle, qp_handle, qqpcb, gal);
+#else
+ r_cb = ehca_kv_to_g(qqpcb);
+ EDEB(7, "r_cb=%lx", r_cb);
+
+ retcode = plpar_hcall_7arg_7ret(H_QUERY_QP,
+ hcp_adapter_handle.handle, /* r4 */
+ qp_handle.handle, /* r5 */
+ r_cb, /* r6 */
+ 0, 0, 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+
+#endif
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return retcode;
+}
+
+static inline u64 hipz_h_destroy_qp(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ehca_qp *qp)
+{
+ u64 retcode = 0;
+ u64 dummy;
+ u64 ladr_next_sq_wqe_out;
+ u64 ladr_next_rq_wqe_out;
+
+ EDEB_EN(7, "qp=%p ipz_qp_handle=%lx adapter_handle=%lx",
+ qp, qp->ipz_qp_handle.handle, hcp_adapter_handle.handle);
+
+#ifndef EHCA_USE_HCALL
+ retcode =
+ simp_h_destroy_qp(hcp_adapter_handle, qp,
+ qp->ehca_qp_core.galpas.user);
+#else
+
+ retcode = hcp_galpas_dtor(&qp->ehca_qp_core.galpas);
+
+ retcode = plpar_hcall_7arg_7ret(H_DISABLE_AND_GETC,
+ hcp_adapter_handle.handle, /* r4 */
+ /* function code */
+ 1, /* r5 */
+ qp->ipz_qp_handle.handle, /* r6 */
+ 0, 0, 0, 0,
+ &ladr_next_sq_wqe_out, /* r4 */
+ &ladr_next_rq_wqe_out, /* r5 */
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+ if (retcode == H_Hardware) {
+ EDEB_ERR(4, "HCA not operational. retcode=%lx", retcode);
+ }
+
+ retcode = plpar_hcall_7arg_7ret(H_FREE_RESOURCE,
+ hcp_adapter_handle.handle, /* r4 */
+ qp->ipz_qp_handle.handle, /* r5 */
+ 0, 0, 0, 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+#endif /* EHCA_USE_HCALL */
+
+ if (retcode == H_Resource) {
+ EDEB_ERR(4, "Resource still in use. retcode=%lx", retcode);
+ }
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return retcode;
+}
+
+static inline u64 hipz_h_define_aqp0(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ const struct ipz_qp_handle
+ qp_handle, struct h_galpa gal,
+ u32 port)
+{
+ u64 retcode = 0;
+ u64 dummy;
+
+ EDEB_EN(7, "port=%x ipz_qp_handle=%lx adapter_handle=%lx",
+ port, qp_handle.handle, hcp_adapter_handle.handle);
+
+#ifndef EHCA_USE_HCALL
+ /* TODO: not implemented yet */
+#else
+
+ retcode = plpar_hcall_7arg_7ret(H_DEFINE_AQP0,
+ hcp_adapter_handle.handle, /* r4 */
+ qp_handle.handle, /* r5 */
+ port, /* r6 */
+ 0, 0, 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+
+#endif /* EHCA_USE_HCALL */
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return retcode;
+}
+
+static inline u64 hipz_h_define_aqp1(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ const struct ipz_qp_handle
+ qp_handle, struct h_galpa gal,
+ u32 port, u32 * pma_qp_nr,
+ u32 * bma_qp_nr)
+{
+ u64 retcode = 0;
+ u64 dummy;
+ u64 pma_qp_nr_out;
+ u64 bma_qp_nr_out;
+
+ EDEB_EN(7, "port=%x qp_handle=%lx adapter_handle=%lx",
+ port, qp_handle.handle, hcp_adapter_handle.handle);
+
+#ifndef EHCA_USE_HCALL
+ /* TODO: not implemented yet */
+#else
+
+ retcode = plpar_hcall_7arg_7ret(H_DEFINE_AQP1,
+ hcp_adapter_handle.handle, /* r4 */
+ qp_handle.handle, /* r5 */
+ port, /* r6 */
+ 0, 0, 0, 0,
+ &pma_qp_nr_out, /* r4 */
+ &bma_qp_nr_out, /* r5 */
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+ *pma_qp_nr = (u32) pma_qp_nr_out;
+ *bma_qp_nr = (u32) bma_qp_nr_out;
+
+#endif
+ if (retcode == H_ALIAS_EXIST) {
+ EDEB_ERR(4, "AQP1 already exists. retcode=%lx", retcode);
+ }
+
+ EDEB_EX(7, "retcode=%lx pma_qp_nr=%i bma_qp_nr=%i",
+ retcode, (int)*pma_qp_nr, (int)*bma_qp_nr);
+
+ return retcode;
+}
+
+/* TODO: Don't use ib_* types in this file */
+static inline u64 hipz_h_attach_mcqp(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ const struct ipz_qp_handle
+ qp_handle, struct h_galpa gal,
+ u16 mcg_dlid, union ib_gid dgid)
+{
+ u64 retcode = 0;
+ u64 dummy;
+
+ EDEB_EN(7, "qp_handle=%lx adapter_handle=%lx\nMCG_DGID ="
+ " %d.%d.%d.%d.%d.%d.%d.%d."
+ " %d.%d.%d.%d.%d.%d.%d.%d\n",
+ qp_handle.handle, hcp_adapter_handle.handle,
+ dgid.raw[0], dgid.raw[1],
+ dgid.raw[2], dgid.raw[3],
+ dgid.raw[4], dgid.raw[5],
+ dgid.raw[6], dgid.raw[7],
+ dgid.raw[0 + 8], dgid.raw[1 + 8],
+ dgid.raw[2 + 8], dgid.raw[3 + 8],
+ dgid.raw[4 + 8], dgid.raw[5 + 8],
+ dgid.raw[6 + 8], dgid.raw[7 + 8]);
+
+#ifndef EHCA_USE_HCALL
+ /* TODO: not implemented yet */
+#else
+ retcode = plpar_hcall_7arg_7ret(H_ATTACH_MCQP,
+ hcp_adapter_handle.handle, /* r4 */
+ qp_handle.handle, /* r5 */
+ mcg_dlid, /* r6 */
+ dgid.global.interface_id, /* r7 */
+ dgid.global.subnet_prefix, /* r8 */
+ 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+#endif /* EHCA_USE_HCALL */
+ if (retcode == H_NOT_ENOUGH_RESOURCES) {
+ EDEB_ERR(4, "Not enough resources. retcode=%lx", retcode);
+ }
+
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return retcode;
+}
+
+static inline u64 hipz_h_detach_mcqp(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ const struct ipz_qp_handle
+ qp_handle, struct h_galpa gal,
+ u16 mcg_dlid, union ib_gid dgid)
+{
+ u64 retcode = 0;
+ u64 dummy;
+
+ EDEB_EN(7, "qp_handle=%lx adapter_handle=%lx\nMCG_DGID ="
+ " %d.%d.%d.%d.%d.%d.%d.%d."
+ " %d.%d.%d.%d.%d.%d.%d.%d\n",
+ qp_handle.handle, hcp_adapter_handle.handle,
+ dgid.raw[0], dgid.raw[1],
+ dgid.raw[2], dgid.raw[3],
+ dgid.raw[4], dgid.raw[5],
+ dgid.raw[6], dgid.raw[7],
+ dgid.raw[0 + 8], dgid.raw[1 + 8],
+ dgid.raw[2 + 8], dgid.raw[3 + 8],
+ dgid.raw[4 + 8], dgid.raw[5 + 8],
+ dgid.raw[6 + 8], dgid.raw[7 + 8]);
+#ifndef EHCA_USE_HCALL
+ /* TODO: not implemented yet */
+#else
+ retcode = plpar_hcall_7arg_7ret(H_DETACH_MCQP,
+ hcp_adapter_handle.handle, /* r4 */
+ qp_handle.handle, /* r5 */
+ mcg_dlid, /* r6 */
+ dgid.global.interface_id, /* r7 */
+ dgid.global.subnet_prefix, /* r8 */
+ 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+#endif /* EHCA_USE_HCALL */
+ EDEB(7, "retcode=%lx", retcode);
+
+ return retcode;
+}
+
+static inline u64 hipz_h_destroy_cq(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ehca_cq *cq,
+ u8 force_flag)
+{
+ u64 retcode = 0;
+ u64 dummy;
+
+ EDEB_EN(7, "cq->pf=%p cq=%p ipz_cq_handle=%lx adapter_handle=%lx",
+ &cq->pf, cq, cq->ipz_cq_handle.handle, hcp_adapter_handle.handle);
+
+#ifndef EHCA_USE_HCALL
+ simp_h_destroy_cq(hcp_adapter_handle, cq,
+ cq->ehca_cq_core.galpas.kernel);
+#else
+ retcode = hcp_galpas_dtor(&cq->ehca_cq_core.galpas);
+ if (retcode != 0) {
+ EDEB_ERR(4, "Could not destruct cq->galpas");
+ return (H_Resource);
+ }
+
+ retcode = plpar_hcall_7arg_7ret(H_FREE_RESOURCE,
+ hcp_adapter_handle.handle, /* r4 */
+ cq->ipz_cq_handle.handle, /* r5 */
+ force_flag != 0 ? 1L : 0L, /* r6 */
+ 0, 0, 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+#endif
+
+ if (retcode == H_Resource) {
+ EDEB(4, "Resource in use. retcode=%lx", retcode);
+ }
+
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return retcode;
+}
+
+static inline u64 hipz_h_destroy_eq(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ehca_eq *eq)
+{
+ u64 retcode = 0;
+ u64 dummy;
+
+ EDEB_EN(7, "eq->pf=%p eq=%p ipz_eq_handle=%lx adapter_handle=%lx",
+ &eq->pf, eq, eq->ipz_eq_handle.handle,
+ hcp_adapter_handle.handle);
+
+#ifndef EHCA_USE_HCALL
+ /* TODO: not implemented yet */
+#else
+
+ retcode = hcp_galpas_dtor(&eq->galpas);
+ if (retcode != 0) {
+ EDEB_ERR(4, "Could not destruct eq->galpas");
+ return (H_Resource);
+ }
+
+ retcode = plpar_hcall_7arg_7ret(H_FREE_RESOURCE,
+ hcp_adapter_handle.handle, /* r4 */
+ eq->ipz_eq_handle.handle, /* r5 */
+ 0, 0, 0, 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+
+#endif
+ if (retcode == H_Resource) {
+ EDEB_ERR(4, "Resource in use. retcode=%lx ", retcode);
+ }
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return retcode;
+}
+
+/**
+ * hipz_h_alloc_resource_mr - Allocate MR resources in HW and FW, initialize
+ * resources.
+ *
+ * @pfmr: platform specific for MR
+ * @pfshca: platform specific for SHCA
+ * @vaddr: Memory Region I/O Virtual Address
+ * @length: Memory Region Length
+ * @access_ctrl: Memory Region Access Controls
+ * @pd: Protection Domain
+ * @mr_handle: Memory Region Handle
+ */
+static inline u64 hipz_h_alloc_resource_mr(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ehca_pfmr *pfmr,
+ struct ehca_pfshca
+ *pfshca,
+ const u64 vaddr,
+ const u64 length,
+ const u32 access_ctrl,
+ const struct ipz_pd pd,
+ struct ipz_mrmw_handle
+ *mr_handle,
+ u32 * lkey,
+ u32 * rkey)
+{
+ u64 rc = H_Success;
+ u64 dummy;
+ u64 lkey_out;
+ u64 rkey_out;
+
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmr=%p vaddr=%lx length=%lx"
+ " access_ctrl=%x pd=%x pfshca=%p",
+ hcp_adapter_handle.handle, pfmr, vaddr, length, access_ctrl,
+ pd.value, pfshca);
+
+#ifndef EHCA_USE_HCALL
+ rc = simp_hcz_h_alloc_resource_mr(hcp_adapter_handle,
+ pfmr,
+ pfshca,
+ vaddr,
+ length,
+ access_ctrl,
+ pd,
+ (struct hcz_mrmw_handle *)mr_handle,
+ lkey, rkey);
+ EDEB_EX(7, "rc=%lx mr_handle.mrwpte=%p mr_handle.page_index=%x"
+ " lkey=%x rkey=%x",
+ rc, mr_handle->mrwpte, mr_handle->page_index, *lkey, *rkey);
+#else
+
+ rc = plpar_hcall_7arg_7ret(H_ALLOC_RESOURCE,
+ hcp_adapter_handle.handle, /* r4 */
+ 5, /* r5 */
+ vaddr, /* r6 */
+ length, /* r7 */
+ ((((u64) access_ctrl) << 32ULL)), /* r8 */
+ pd.value, /* r9 */
+ 0,
+ &mr_handle->handle, /* r4 */
+ &dummy, /* r5 */
+ &lkey_out, /* r6 */
+ &rkey_out, /* r7 */
+ &dummy,
+ &dummy,
+ &dummy);
+ *lkey = (u32) lkey_out;
+ *rkey = (u32) rkey_out;
+
+ EDEB_EX(7, "rc=%lx mr_handle=%lx lkey=%x rkey=%x",
+ rc, mr_handle->handle, *lkey, *rkey);
+#endif /* EHCA_USE_HCALL */
+
+ return rc;
+}
+
+/**
+ * hipz_h_register_rpage_mr - Register MR resource page in HW and FW .
+ *
+ * @pfmr: platform specific for MR
+ * @pfshca: platform specific for SHCA
+ * @queue_type: must be zero for MR
+ */
+static inline u64 hipz_h_register_rpage_mr(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ const struct ipz_mrmw_handle
+ *mr_handle,
+ struct ehca_pfmr *pfmr,
+ struct ehca_pfshca *pfshca,
+ const u8 pagesize,
+ const u8 queue_type,
+ const u64
+ logical_address_of_page,
+ const u64 count)
+{
+ u64 rc = H_Success;
+
+#ifndef EHCA_USE_HCALL
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmr=%p mr_handle.mrwpte=%p"
+ " mr_handle.page_index=%x pagesize=%x queue_type=%x "
+ " logical_address_of_page=%lx count=%lx pfshca=%p",
+ hcp_adapter_handle.handle, pfmr, mr_handle->mrwpte,
+ mr_handle->page_index, pagesize, queue_type,
+ logical_address_of_page, count, pfshca);
+
+ rc = simp_hcz_h_register_rpage_mr(hcp_adapter_handle,
+ (struct hcz_mrmw_handle *)mr_handle,
+ pfmr,
+ pfshca,
+ pagesize,
+ queue_type,
+ logical_address_of_page, count);
+#else
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmr=%p mr_handle=%lx pagesize=%x"
+ " queue_type=%x logical_address_of_page=%lx count=%lx",
+ hcp_adapter_handle.handle, pfmr, mr_handle->handle, pagesize,
+ queue_type, logical_address_of_page, count);
+
+ if ((count > 1) && (logical_address_of_page & 0xfff)) {
+ ehca_catastrophic("ERROR: logical_address_of_page "
+ "not on a 4k boundary");
+ rc = H_Parameter;
+ } else {
+ rc = hipz_h_register_rpage(hcp_adapter_handle, pagesize,
+ queue_type, mr_handle->handle,
+ logical_address_of_page, count);
+ }
+#endif /* EHCA_USE_HCALL */
+ EDEB_EX(7, "rc=%lx", rc);
+
+ return rc;
+}
+
+/**
+ * hipz_h_query_mr - Query MR in HW and FW.
+ *
+ * @pfmr: platform specific for MR
+ * @mr_handle: Memory Region Handle
+ * @mr_local_length: Local MR Length
+ * @mr_local_vaddr: Local MR I/O Virtual Address
+ * @mr_remote_length: Remote MR Length
+ * @mr_remote_vaddr: Remote MR I/O Virtual Address
+ * @access_ctrl: Memory Region Access Controls
+ * @pd: Protection Domain
+ * @lkey: L_Key
+ * @rkey: R_Key
+ */
+static inline u64 hipz_h_query_mr(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ehca_pfmr *pfmr,
+ const struct ipz_mrmw_handle
+ *mr_handle,
+ u64 * mr_local_length,
+ u64 * mr_local_vaddr,
+ u64 * mr_remote_length,
+ u64 * mr_remote_vaddr,
+ u32 * access_ctrl,
+ struct ipz_pd *pd,
+ u32 * lkey,
+ u32 * rkey)
+{
+ u64 rc = H_Success;
+ u64 dummy;
+ u64 acc_ctrl_pd_out;
+ u64 r9_out;
+
+#ifndef EHCA_USE_HCALL
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmr=%p mr_handle.mrwpte=%p"
+ " mr_handle.page_index=%x",
+ hcp_adapter_handle.handle, pfmr, mr_handle->mrwpte,
+ mr_handle->page_index);
+
+ rc = simp_hcz_h_query_mr(hcp_adapter_handle,
+ pfmr,
+ mr_handle,
+ mr_local_length,
+ mr_local_vaddr,
+ mr_remote_length,
+ mr_remote_vaddr, access_ctrl, pd, lkey, rkey);
+
+ EDEB_EX(7, "rc=%lx mr_local_length=%lx mr_local_vaddr=%lx"
+ " mr_remote_length=%lx mr_remote_vaddr=%lx access_ctrl=%x"
+ " pd=%x lkey=%x rkey=%x",
+ rc, *mr_local_length, *mr_local_vaddr, *mr_remote_length,
+ *mr_remote_vaddr, *access_ctrl, pd->value, *lkey, *rkey);
+#else
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmr=%p mr_handle=%lx",
+ hcp_adapter_handle.handle, pfmr, mr_handle->handle);
+
+
+ rc = plpar_hcall_7arg_7ret(H_QUERY_MR,
+ hcp_adapter_handle.handle, /* r4 */
+ mr_handle->handle, /* r5 */
+ 0, 0, 0, 0, 0,
+ mr_local_length, /* r4 */
+ mr_local_vaddr, /* r5 */
+ mr_remote_length, /* r6 */
+ mr_remote_vaddr, /* r7 */
+ &acc_ctrl_pd_out, /* r8 */
+ &r9_out,
+ &dummy);
+
+ *access_ctrl = acc_ctrl_pd_out >> 32;
+ pd->value = (u32) acc_ctrl_pd_out;
+ *lkey = (u32) (r9_out >> 32);
+ *rkey = (u32) (r9_out & (0xffffffff));
+
+ EDEB_EX(7, "rc=%lx mr_local_length=%lx mr_local_vaddr=%lx"
+ " mr_remote_length=%lx mr_remote_vaddr=%lx access_ctrl=%x"
+ " pd=%x lkey=%x rkey=%x",
+ rc, *mr_local_length, *mr_local_vaddr, *mr_remote_length,
+ *mr_remote_vaddr, *access_ctrl, pd->value, *lkey, *rkey);
+#endif /* EHCA_USE_HCALL */
+
+ return rc;
+}
+
+/**
+ * hipz_h_free_resource_mr - Free MR resources in HW and FW.
+ *
+ * @pfmr: platform specific for MR
+ * @mr_handle: Memory Region Handle
+ */
+static inline u64 hipz_h_free_resource_mr(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ehca_pfmr *pfmr,
+ const struct ipz_mrmw_handle
+ *mr_handle)
+{
+ u64 rc = H_Success;
+ u64 dummy;
+
+#ifndef EHCA_USE_HCALL
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmr=%p mr_handle.mrwpte=%p"
+ " mr_handle.page_index=%x",
+ hcp_adapter_handle.handle, pfmr, mr_handle->mrwpte,
+ mr_handle->page_index);
+
+ rc = simp_hcz_h_free_resource_mr(hcp_adapter_handle, pfmr, mr_handle);
+#else
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmr=%p mr_handle=%lx",
+ hcp_adapter_handle.handle, pfmr, mr_handle->handle);
+
+ rc = plpar_hcall_7arg_7ret(H_FREE_RESOURCE,
+ hcp_adapter_handle.handle, /* r4 */
+ mr_handle->handle, /* r5 */
+ 0, 0, 0, 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+#endif /* EHCA_USE_HCALL */
+ EDEB_EX(7, "rc=%lx", rc);
+
+ return rc;
+}
+
+/**
+ * hipz_h_reregister_pmr - Reregister MR in HW and FW.
+ *
+ * @pfmr: platform specific for MR
+ * @pfshca: platform specific for SHCA
+ * @mr_handle: Memory Region Handle
+ * @vaddr_in: Memory Region I/O Virtual Address
+ * @length: Memory Region Length
+ * @access_ctrl: Memory Region Access Controls
+ * @pd: Protection Domain
+ * @mr_addr_cb: Logical Address of MR Control Block
+ * @vaddr_out: Memory Region I/O Virtual Address
+ * @lkey: L_Key
+ * @rkey: R_Key
+ *
+ */
+static inline u64 hipz_h_reregister_pmr(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ehca_pfmr *pfmr,
+ struct ehca_pfshca *pfshca,
+ const struct ipz_mrmw_handle
+ *mr_handle,
+ const u64 vaddr_in,
+ const u64 length,
+ const u32 access_ctrl,
+ const struct ipz_pd pd,
+ const u64 mr_addr_cb,
+ u64 * vaddr_out,
+ u32 * lkey,
+ u32 * rkey)
+{
+ u64 rc = H_Success;
+ u64 dummy;
+ u64 lkey_out;
+ u64 rkey_out;
+
+#ifndef EHCA_USE_HCALL
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmr=%p pfshca=%p"
+ " mr_handle.mrwpte=%p mr_handle.page_index=%x vaddr_in=%lx"
+ " length=%lx access_ctrl=%x pd=%x mr_addr_cb=%lx",
+ hcp_adapter_handle.handle, pfmr, pfshca, mr_handle->mrwpte,
+ mr_handle->page_index, vaddr_in, length, access_ctrl,
+ pd.value, mr_addr_cb);
+
+ rc = simp_hcz_h_reregister_pmr(hcp_adapter_handle, pfmr, pfshca,
+ mr_handle, vaddr_in, length, access_ctrl,
+ pd, mr_addr_cb, vaddr_out, lkey, rkey);
+#else
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmr=%p pfshca=%p mr_handle=%lx "
+ "vaddr_in=%lx length=%lx access_ctrl=%x pd=%x mr_addr_cb=%lx",
+ hcp_adapter_handle.handle, pfmr, pfshca, mr_handle->handle,
+ vaddr_in, length, access_ctrl, pd.value, mr_addr_cb);
+
+ rc = plpar_hcall_7arg_7ret(H_REREGISTER_PMR,
+ hcp_adapter_handle.handle, /* r4 */
+ mr_handle->handle, /* r5 */
+ vaddr_in, /* r6 */
+ length, /* r7 */
+ /* r8 */
+ ((((u64) access_ctrl) << 32ULL) | pd.value),
+ mr_addr_cb, /* r9 */
+ 0,
+ &dummy, /* r4 */
+ vaddr_out, /* r5 */
+ &lkey_out, /* r6 */
+ &rkey_out, /* r7 */
+ &dummy,
+ &dummy,
+ &dummy);
+ *lkey = (u32) lkey_out;
+ *rkey = (u32) rkey_out;
+#endif /* EHCA_USE_HCALL */
+
+ EDEB_EX(7, "rc=%lx vaddr_out=%lx lkey=%x rkey=%x",
+ rc, *vaddr_out, *lkey, *rkey);
+ return rc;
+}
+
+/**
+ * hipz_h_register_smr - Register shared MR in HW and FW
+ * (as defined in Carol's hcall document).
+ *
+ * @pfmr: platform specific for new shared MR
+ * @orig_pfmr: platform specific for original MR
+ * @pfshca: platform specific for SHCA
+ * @orig_mr_handle: Memory Region Handle of original MR
+ * @vaddr_in: Memory Region I/O Virtual Address of new shared MR
+ * @access_ctrl: Memory Region Access Controls of new shared MR
+ * @pd: Protection Domain of new shared MR
+ * @mr_handle: Memory Region Handle of new shared MR
+ * @lkey: L_Key of new shared MR
+ * @rkey: R_Key of new shared MR
+ */
+static inline u64 hipz_h_register_smr(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ehca_pfmr *pfmr,
+ struct ehca_pfmr *orig_pfmr,
+ struct ehca_pfshca *pfshca,
+ const struct ipz_mrmw_handle
+ *orig_mr_handle,
+ const u64 vaddr_in,
+ const u32 access_ctrl,
+ const struct ipz_pd pd,
+ struct ipz_mrmw_handle
+ *mr_handle,
+ u32 * lkey,
+ u32 * rkey)
+{
+ u64 rc = H_Success;
+ u64 dummy;
+ u64 lkey_out;
+ u64 rkey_out;
+
+#ifndef EHCA_USE_HCALL
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmr=%p orig_pfmr=%p pfshca=%p"
+ " orig_mr_handle.mrwpte=%p orig_mr_handle.page_index=%x"
+ " vaddr_in=%lx access_ctrl=%x pd=%x",
+ hcp_adapter_handle.handle, pfmr, orig_pfmr, pfshca,
+ orig_mr_handle->mrwpte, orig_mr_handle->page_index,
+ vaddr_in, access_ctrl, pd.value);
+
+ rc = simp_hcz_h_register_smr(hcp_adapter_handle, pfmr, orig_pfmr,
+ pfshca, orig_mr_handle, vaddr_in,
+ access_ctrl, pd,
+ (struct hcz_mrmw_handle *)mr_handle, lkey,
+ rkey);
+ EDEB_EX(7, "rc=%lx mr_handle.mrwpte=%p mr_handle.page_index=%x"
+ " lkey=%x rkey=%x",
+ rc, mr_handle->mrwpte, mr_handle->page_index, *lkey, *rkey);
+#else
+ EDEB_EN(7, "hcp_adapter_handle=%lx orig_pfmr=%p pfshca=%p"
+ " orig_mr_handle=%lx vaddr_in=%lx access_ctrl=%x pd=%x",
+ hcp_adapter_handle.handle, orig_pfmr, pfshca,
+ orig_mr_handle->handle, vaddr_in, access_ctrl, pd.value);
+
+
+ rc = plpar_hcall_7arg_7ret(H_REGISTER_SMR,
+ hcp_adapter_handle.handle, /* r4 */
+ orig_mr_handle->handle, /* r5 */
+ vaddr_in, /* r6 */
+ ((((u64) access_ctrl) << 32ULL)), /* r7 */
+ pd.value, /* r8 */
+ 0, 0,
+ &mr_handle->handle, /* r4 */
+ &dummy, /* r5 */
+ &lkey_out, /* r6 */
+ &rkey_out, /* r7 */
+ &dummy,
+ &dummy,
+ &dummy);
+ *lkey = (u32) lkey_out;
+ *rkey = (u32) rkey_out;
+
+ EDEB_EX(7, "rc=%lx mr_handle=%lx lkey=%x rkey=%x",
+ rc, mr_handle->handle, *lkey, *rkey);
+#endif /* EHCA_USE_HCALL */
+
+ return rc;
+}
+
+static inline u64 hipz_h_alloc_resource_mw(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ehca_pfmw *pfmw,
+ struct ehca_pfshca *pfshca,
+ const struct ipz_pd pd,
+ struct ipz_mrmw_handle *mw_handle,
+ u32 * rkey)
+{
+ u64 rc = H_Success;
+ u64 dummy;
+ u64 rkey_out;
+
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmw=%p pd=%x pfshca=%p",
+ hcp_adapter_handle.handle, pfmw, pd.value, pfshca);
+
+#ifndef EHCA_USE_HCALL
+
+ rc = simp_hcz_h_alloc_resource_mw(hcp_adapter_handle, pfmw, pfshca, pd,
+ (struct hcz_mrmw_handle *)mw_handle,
+ rkey);
+ EDEB_EX(7, "rc=%lx mw_handle.mrwpte=%p mw_handle.page_index=%x rkey=%x",
+ rc, mw_handle->mrwpte, mw_handle->page_index, *rkey);
+#else
+ rc = plpar_hcall_7arg_7ret(H_ALLOC_RESOURCE,
+ hcp_adapter_handle.handle, /* r4 */
+ 6, /* r5 */
+ pd.value, /* r6 */
+ 0, 0, 0, 0,
+ &mw_handle->handle, /* r4 */
+ &dummy, /* r5 */
+ &dummy, /* r6 */
+ &rkey_out, /* r7 */
+ &dummy,
+ &dummy,
+ &dummy);
+ *rkey = (u32) rkey_out;
+
+ EDEB_EX(7, "rc=%lx mw_handle=%lx rkey=%x",
+ rc, mw_handle->handle, *rkey);
+#endif /* EHCA_USE_HCALL */
+ return rc;
+}
+
+static inline u64 hipz_h_query_mw(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ehca_pfmw *pfmw,
+ const struct ipz_mrmw_handle
+ *mw_handle,
+ u32 * rkey,
+ struct ipz_pd *pd)
+{
+ u64 rc = H_Success;
+ u64 dummy;
+ u64 pd_out;
+ u64 rkey_out;
+
+#ifndef EHCA_USE_HCALL
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmw=%p mw_handle.mrwpte=%p"
+ " mw_handle.page_index=%x",
+ hcp_adapter_handle.handle, pfmw, mw_handle->mrwpte,
+ mw_handle->page_index);
+
+ rc = simp_hcz_h_query_mw(hcp_adapter_handle, pfmw, mw_handle, rkey, pd);
+
+ EDEB_EX(7, "rc=%lx rkey=%x pd=%x", rc, *rkey, pd->value);
+#else
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmw=%p mw_handle=%lx",
+ hcp_adapter_handle.handle, pfmw, mw_handle->handle);
+
+ rc = plpar_hcall_7arg_7ret(H_QUERY_MW,
+ hcp_adapter_handle.handle, /* r4 */
+ mw_handle->handle, /* r5 */
+ 0, 0, 0, 0, 0,
+ &dummy, /* r4 */
+ &dummy, /* r5 */
+ &dummy, /* r6 */
+ &rkey_out, /* r7 */
+ &pd_out, /* r8 */
+ &dummy,
+ &dummy);
+ *rkey = (u32) rkey_out;
+ pd->value = (u32) pd_out;
+
+ EDEB_EX(7, "rc=%lx rkey=%x pd=%x", rc, *rkey, pd->value);
+#endif /* EHCA_USE_HCALL */
+
+ return rc;
+}
+
+static inline u64 hipz_h_free_resource_mw(const struct ipz_adapter_handle
+ hcp_adapter_handle,
+ struct ehca_pfmw *pfmw,
+ const struct ipz_mrmw_handle
+ *mw_handle)
+{
+ u64 rc = H_Success;
+ u64 dummy;
+
+#ifndef EHCA_USE_HCALL
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmw=%p mw_handle.mrwpte=%p"
+ " mw_handle.page_index=%x",
+ hcp_adapter_handle.handle, pfmw, mw_handle->mrwpte,
+ mw_handle->page_index);
+
+ rc = simp_hcz_h_free_resource_mw(hcp_adapter_handle, pfmw, mw_handle);
+#else
+ EDEB_EN(7, "hcp_adapter_handle=%lx pfmw=%p mw_handle=%lx",
+ hcp_adapter_handle.handle, pfmw, mw_handle->handle);
+
+ rc = plpar_hcall_7arg_7ret(H_FREE_RESOURCE,
+ hcp_adapter_handle.handle, /* r4 */
+ mw_handle->handle, /* r5 */
+ 0, 0, 0, 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+#endif /* EHCA_USE_HCALL */
+ EDEB_EX(7, "rc=%lx", rc);
+
+ return rc;
+}
+
+static inline u64 hipz_h_error_data(const struct ipz_adapter_handle
+ adapter_handle,
+ const u64 resource_handle,
+ void *rblock,
+ unsigned long *byte_count)
+{
+ u64 rc = H_Success;
+ u64 dummy;
+ u64 r_cb;
+
+ EDEB_EN(7, "adapter_handle=%lx resource_handle=%lx rblock=%p",
+ adapter_handle.handle, resource_handle, rblock);
+
+ if ((((u64)rblock) & 0xfff) != 0) {
+ EDEB_ERR(4, "rblock not page aligned.");
+ rc = H_Parameter;
+ return rc;
+ }
+
+ r_cb = ehca_kv_to_g(rblock);
+
+ rc = plpar_hcall_7arg_7ret(H_ERROR_DATA,
+ adapter_handle.handle,
+ resource_handle,
+ r_cb,
+ 0, 0, 0, 0,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy,
+ &dummy);
+
+ EDEB_EX(7, "rc=%lx", rc);
+
+ return rc;
+}
+
+#endif /* __HCP_IF_H__ */

2006-02-18 00:57:12

by Roland Dreier

Subject: [PATCH 01/22] Add powerpc-specific clear_cacheline(), which just compiles to "dcbz".

From: Roland Dreier <[email protected]>

This is horribly non-portable. How much of a performance difference
does it make? How does it do on ppc64 systems where the cacheline
size is not 32?
---

drivers/infiniband/hw/ehca/ehca_asm.h | 58 +++++++++++++++++++++++++++++++++
1 files changed, 58 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_asm.h b/drivers/infiniband/hw/ehca/ehca_asm.h
new file mode 100644
index 0000000..6a09ac5
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_asm.h
@@ -0,0 +1,58 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * Some helper macros with assembler instructions
+ *
+ * Authors: Khadija Souissi <[email protected]>
+ * Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_asm.h,v 1.7 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+
+#ifndef __EHCA_ASM_H__
+#define __EHCA_ASM_H__
+
+#if defined(CONFIG_PPC_PSERIES) || defined (__PPC64__) || defined (__PPC__)
+
+#define clear_cacheline(adr) __asm__ __volatile("dcbz 0,%0"::"r"(adr))
+
+#elif defined(CONFIG_ARCH_S390)
+#error "unsupported yet"
+#else
+#error "invalid platform"
+#endif
+
+#endif /* __EHCA_ASM_H__ */

2006-02-18 00:58:14

by Roland Dreier

Subject: [PATCH 06/22] Queue handling

From: Roland Dreier <[email protected]>

Code like

#ifndef __PPC64__
	void *dummy1;	/* make sure we use the same thing on 32 bit */
#endif

looks _very_ suspicious. Much better to make sure that the
structures are laid out the same no matter what the word size
of the architecture is rather than relying on fragile hacks
like this.
---

drivers/infiniband/hw/ehca/ipz_pt_fn.c | 137 ++++++++++++++++++++++
drivers/infiniband/hw/ehca/ipz_pt_fn.h | 165 +++++++++++++++++++++++++++
drivers/infiniband/hw/ehca/ipz_pt_fn_core.h | 152 +++++++++++++++++++++++++
3 files changed, 454 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ipz_pt_fn.c b/drivers/infiniband/hw/ehca/ipz_pt_fn.c
new file mode 100644
index 0000000..d6c490c
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ipz_pt_fn.c
@@ -0,0 +1,137 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * internal queue handling
+ *
+ * Authors: Waleri Fomin <[email protected]>
+ * Reinhard Ernst <[email protected]>
+ * Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ipz_pt_fn.c,v 1.16 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#define DEB_PREFIX "iptz"
+
+#include "ehca_kernel.h"
+#include "ehca_tools.h"
+#include "ipz_pt_fn.h"
+
+extern int ehca_hwlevel;
+
+void *ipz_QPageit_get_inc(struct ipz_queue *queue)
+{
+ void *retvalue = NULL;
+ u8 *EOF_last_page = queue->queue + queue->queue_length;
+
+ retvalue = queue->current_q_addr;
+ queue->current_q_addr += queue->pagesize;
+ if (queue->current_q_addr > EOF_last_page) {
+ queue->current_q_addr -= queue->pagesize;
+ retvalue = NULL;
+ }
+
+ if ((((u64)retvalue) % EHCA_PAGESIZE) != 0) {
+ EDEB(4, "ERROR: not at page boundary");
+ return (NULL);
+ }
+ EDEB(7, "queue=%p retvalue=%p", queue, retvalue);
+ return (retvalue);
+}
+
+void *ipz_QEit_EQ_get_inc(struct ipz_queue *queue)
+{
+ void *retvalue = NULL;
+ u8 *last_entry_in_q = queue->queue + queue->queue_length
+ - queue->qe_size;
+
+ retvalue = queue->current_q_addr;
+ queue->current_q_addr += queue->qe_size;
+ if (queue->current_q_addr > last_entry_in_q) {
+ queue->current_q_addr = queue->queue;
+ queue->toggle_state = (~queue->toggle_state) & 1;
+ }
+
+ EDEB(7, "queue=%p retvalue=%p new current_q_addr=%p qe_size=%x",
+ queue, retvalue, queue->current_q_addr, queue->qe_size);
+
+ return (retvalue);
+}
+
+int ipz_queue_ctor(struct ipz_queue *queue,
+ const u32 nr_of_pages,
+ const u32 pagesize, const u32 qe_size, const u32 nr_of_sg)
+{
+ EDEB_EN(7, "nr_of_pages=%x pagesize=%x qe_size=%x",
+ nr_of_pages, pagesize, qe_size);
+ queue->queue_length = nr_of_pages * pagesize;
+ queue->queue = vmalloc(queue->queue_length);
+ if (queue->queue == NULL) {
+ EDEB(4, "ERROR: out of memory");
+ return (FALSE);
+ }
+ if ((((u64)queue->queue) & (EHCA_PAGESIZE - 1)) != 0) {
+ EDEB(4, "ERROR: queue doesn't start at "
+ "page boundary");
+ vfree(queue->queue);
+ return (FALSE);
+ }
+
+ memset(queue->queue, 0, queue->queue_length);
+ queue->current_q_addr = queue->queue;
+ queue->qe_size = qe_size;
+ queue->act_nr_of_sg = nr_of_sg;
+ queue->pagesize = pagesize;
+ queue->toggle_state = 1;
+ EDEB_EX(7, "queue_length=%x queue=%p qe_size=%x"
+ " act_nr_of_sg=%x", queue->queue_length, queue->queue,
+ queue->qe_size, queue->act_nr_of_sg);
+ return TRUE;
+}
+
+int ipz_queue_dtor(struct ipz_queue *queue)
+{
+ EDEB_EN(7, "ipz_queue pointer=%p", queue);
+ if (queue == NULL) {
+ return (FALSE);
+ }
+ if (queue->queue == NULL) {
+ return (FALSE);
+ }
+ EDEB(7, "destructing a queue with the following "
+ "properties:\n nr_of_pages=%x pagesize=%x qe_size=%x",
+ queue->act_nr_of_sg, queue->pagesize, queue->qe_size);
+ vfree(queue->queue);
+
+ EDEB_EX(7, "queue freed!");
+ return TRUE;
+}
diff --git a/drivers/infiniband/hw/ehca/ipz_pt_fn.h b/drivers/infiniband/hw/ehca/ipz_pt_fn.h
new file mode 100644
index 0000000..2e197db
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ipz_pt_fn.h
@@ -0,0 +1,165 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * internal queue handling
+ *
+ * Authors: Waleri Fomin <[email protected]>
+ * Reinhard Ernst <[email protected]>
+ * Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ipz_pt_fn.h,v 1.11 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef __IPZ_PT_FN_H__
+#define __IPZ_PT_FN_H__
+
+#include "ipz_pt_fn_core.h"
+#include "ehca_qes.h"
+
+#define EHCA_PAGESIZE 4096UL
+#define EHCA_PT_ENTRIES 512UL
+
+/** @brief generic page table
+ */
+struct ipz_pt {
+ u64 entries[EHCA_PT_ENTRIES];
+};
+
+/** @brief generic page
+ */
+struct ipz_page {
+ u8 entries[EHCA_PAGESIZE];
+};
+
+/** @brief page table for a queue, only to be used in pf
+ */
+struct ipz_qpt {
+ /* queue page tables (kv), use u64 because we know the element length */
+ u64 *qpts;
+ u32 allocated_qpts_entries;
+ u32 nr_of_PTEs; /* number of page table entries (PTE iterators) */
+ u64 *current_pte_addr;
+};
+
+/** @brief constructor for an ipz_queue_t, placement new for ipz_queue_t,
+ new for all dependent data structures
+
+ all QP Tables are the same
+ flow:
+ -# allocate+pin queue
+ @see ipz_qpt_ctor()
+ @returns true if ok, false if out of memory
+ */
+int ipz_queue_ctor(struct ipz_queue *queue, const u32 nr_of_pages,
+ const u32 pagesize,
+ const u32 qe_size, /* queue entry size*/
+ const u32 nr_of_sg);
+
+/** @brief destructor for an ipz_queue_t
+ -# free queue
+ @see ipz_queue_ctor()
+ @returns true if ok, false if queue was NULL-ptr or free failed
+*/
+int ipz_queue_dtor(struct ipz_queue *queue);
+
+/** @brief constructor for an ipz_qpt_t,
+ * placement new for struct ipz_queue, new for all dependent data structures
+ *
+ * all QP Tables are the same,
+ * flow:
+ * -# allocate+pin queue
+ * -# initialise ptcb
+ * -# allocate+pin PTs
+ * -# link PTs to a ring, according to HCA Arch, set bit62 if needed
+ * -# the ring must have room for exactly nr_of_PTEs
+ * @see ipz_qpt_ctor()
+ */
+void ipz_qpt_ctor(struct ipz_qpt *qpt,
+ struct ehca_bridge_handle bridge,
+ const u32 nr_of_QEs,
+ const u32 pagesize,
+ const u32 qe_size,
+ const u8 lowbyte, const u8 toggle,
+ u32 * act_nr_of_QEs,
+ u32 * act_nr_of_pages);
+
+/** @brief return current Queue Entry, increment Queue Entry iterator by one
+ step in struct ipz_queue, will wrap in ringbuffer
+ @returns address (kv) of Queue Entry BEFORE increment
+ @warning don't use in parallel with ipz_QPageit_get_inc()
+ @warning unpredictable results may occur if steps>act_nr_of_queue_entries
+
+ fix EQ page problems
+ */
+void *ipz_QEit_EQ_get_inc(struct ipz_queue *queue);
+
+/** @brief return current Event Queue Entry, increment Queue Entry iterator
+ by one step in struct ipz_queue if valid, will wrap in ringbuffer
+ @returns address (kv) of Queue Entry BEFORE increment
+ @returns 0 and does not increment, if wrong valid state
+ @warning don't use in parallel with ipz_queue_QPageit_get_inc()
+ @warning unpredictable results may occur if steps>act_nr_of_queue_entries
+ */
+inline static void *ipz_QEit_EQ_get_inc_valid(struct ipz_queue *queue)
+{
+ void *retvalue = ipz_QEit_get(queue);
+ u32 qe = *(u8 *) retvalue;
+ EDEB(7, "ipz_QEit_EQ_get_inc_valid qe=%x", qe);
+ if ((qe >> 7) == (queue->toggle_state & 1)) {
+ /* this is a good one */
+ ipz_QEit_EQ_get_inc(queue);
+ } else {
+ retvalue = NULL;
+ }
+ return (retvalue);
+}
+
+/**
+ @returns address (GX) of first queue entry
+ */
+inline static u64 ipz_qpt_get_firstpage(struct ipz_qpt *qpt)
+{
+ return (be64_to_cpu(qpt->qpts[0]));
+}
+
+/**
+ @returns address (kv) of first page of queue page table
+ */
+inline static void *ipz_qpt_get_qpt(struct ipz_qpt *qpt)
+{
+ return (qpt->qpts);
+}
+
+#endif /* __IPZ_PT_FN_H__ */
diff --git a/drivers/infiniband/hw/ehca/ipz_pt_fn_core.h b/drivers/infiniband/hw/ehca/ipz_pt_fn_core.h
new file mode 100644
index 0000000..1b9a114
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ipz_pt_fn_core.h
@@ -0,0 +1,152 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * internal queue handling
+ *
+ * Authors: Waleri Fomin <[email protected]>
+ * Reinhard Ernst <[email protected]>
+ * Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ipz_pt_fn_core.h,v 1.12 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef __IPZ_PT_FN_CORE_H__
+#define __IPZ_PT_FN_CORE_H__
+
+#ifdef __KERNEL__
+#include "ehca_tools.h"
+#else /* some replacements for kernel stuff */
+#include "ehca_utools.h"
+#endif
+
+#include "ehca_qes.h"
+
+/** @brief generic queue in linux kernel virtual memory (kv)
+ */
+struct ipz_queue {
+#ifndef __PPC64__
+ void * dummy1; /* make sure we use the same thing on 32 bit */
+#endif
+ u8 *current_q_addr; /* current queue entry */
+#ifndef __PPC64__
+ void * dummy2;
+#endif
+ u8 *queue; /* points to first queue entry */
+ u32 qe_size; /* queue entry size */
+ u32 act_nr_of_sg;
+ u32 queue_length; /* queue length allocated in bytes */
+ u32 pagesize;
+ u32 toggle_state; /* toggle flag - per page */
+ u32 dummy3; /* 64 bit alignment*/
+};
+
+/** @brief return current Queue Entry
+ @returns address (kv) of Queue Entry
+ */
+static inline void *ipz_QEit_get(struct ipz_queue *queue)
+{
+ return (queue->current_q_addr);
+}
+
+/** @brief return current Queue Page, increment Queue Page iterator from
+ page to page in struct ipz_queue, last increment will return 0! and
+ NOT wrap
+ @returns address (kv) of Queue Page
+ @warning don't use in parallel with ipz_QE_get_inc()
+ */
+void *ipz_QPageit_get_inc(struct ipz_queue *queue);
+
+/** @brief return current Queue Entry, increment Queue Entry iterator by one
+ step in struct ipz_queue, will wrap in ringbuffer
+ @returns address (kv) of Queue Entry BEFORE increment
+ @warning don't use in parallel with ipz_QPageit_get_inc()
+ @warning unpredictable results may occur if steps>act_nr_of_queue_entries
+ */
+static inline void *ipz_QEit_get_inc(struct ipz_queue *queue)
+{
+ void *retvalue = 0;
+ u8 *last_entry_in_q = queue->queue + queue->queue_length
+ - queue->qe_size;
+
+ retvalue = queue->current_q_addr;
+ queue->current_q_addr += queue->qe_size;
+ if (queue->current_q_addr > last_entry_in_q) {
+ queue->current_q_addr = queue->queue;
+ /* toggle the valid flag */
+ queue->toggle_state = (~queue->toggle_state) & 1;
+ }
+
+ EDEB(7, "queue=%p retvalue=%p new current_q_addr=%p qe_size=%x",
+ queue, retvalue, queue->current_q_addr, queue->qe_size);
+
+ return (retvalue);
+}
+
+/** @brief return current Queue Entry, increment Queue Entry iterator by one
+ step in struct ipz_queue, will wrap in ringbuffer
+ @returns address (kv) of Queue Entry BEFORE increment
+ @returns 0 and does not increment, if wrong valid state
+ @warning don't use in parallel with ipz_QPageit_get_inc()
+ @warning unpredictable results may occur if steps>act_nr_of_queue_entries
+ */
+inline static void *ipz_QEit_get_inc_valid(struct ipz_queue *queue)
+{
+ void *retvalue = ipz_QEit_get(queue);
+#ifdef USERSPACE_DRIVER
+
+ u32 qe =
+ ((struct ehca_cqe *)(ehca_ktou((struct ehca_cqe *)retvalue)))->
+ cqe_flags;
+#else
+ u32 qe = ((struct ehca_cqe *)retvalue)->cqe_flags;
+#endif
+ if ((qe >> 7) == (queue->toggle_state & 1)) {
+ /* this is a good one */
+ ipz_QEit_get_inc(queue);
+ } else
+ retvalue = 0;
+ return (retvalue);
+}
+
+/** @brief returns and resets Queue Entry iterator
+ @returns address (kv) of first Queue Entry
+ */
+static inline void *ipz_QEit_reset(struct ipz_queue *queue)
+{
+ queue->current_q_addr = queue->queue;
+ return (queue->queue);
+}
+
+#endif /* __IPZ_PT_FN_CORE_H__ */
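The wrap-and-toggle iterator in ipz_QEit_get_inc() above is easy to check in isolation. Below is a minimal userspace C sketch of the same logic; struct toy_queue, toy_get_inc and the field names are illustrative stand-ins, not the driver's types.

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of the ipz_queue iterator: walk fixed-size entries,
 * wrap to the start when past the last entry, and flip toggle_state
 * once per wrap, mirroring ipz_QEit_get_inc() above. */
struct toy_queue {
	uint8_t *queue;		/* first entry */
	uint8_t *current;	/* iterator position */
	uint32_t qe_size;	/* queue entry size in bytes */
	uint32_t length;	/* total length, a multiple of qe_size */
	uint32_t toggle_state;	/* valid-bit expected in current pass */
};

static void *toy_get_inc(struct toy_queue *q)
{
	void *ret = q->current;	/* address BEFORE increment */

	q->current += q->qe_size;
	if (q->current > q->queue + q->length - q->qe_size) {
		q->current = q->queue;			/* wrap ringbuffer */
		q->toggle_state = (~q->toggle_state) & 1; /* flip per wrap */
	}
	return ret;
}
```

The toggle flip is what lets consumers distinguish freshly written entries from stale ones without a separate head/tail pointer: an entry is valid when its high flag bit matches the queue's current toggle_state.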

2006-02-18 00:58:11

by Roland Dreier

Subject: [PATCH 08/22] Generic ehca headers

From: Roland Dreier <[email protected]>

The defines of TRUE and FALSE look rather useless. Why are they needed?

What is struct ehca_cache for? It doesn't seem to be used anywhere.

ehca_kv_to_g() looks completely horrible. The whole idea of using
vmalloc()ed kernel memory to do DMA seems unacceptable to me.

It's usual to include all <linux/> headers before all <asm/> headers.
---

drivers/infiniband/hw/ehca/ehca_flightrecorder.h | 74 ++++
drivers/infiniband/hw/ehca/ehca_kernel.h | 135 +++++++
drivers/infiniband/hw/ehca/ehca_tools.h | 431 ++++++++++++++++++++++
3 files changed, 640 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_flightrecorder.h b/drivers/infiniband/hw/ehca/ehca_flightrecorder.h
new file mode 100644
index 0000000..7c631ad
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_flightrecorder.h
@@ -0,0 +1,74 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * flightrecorder macros
+ *
+ * Authors: Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_flightrecorder.h,v 1.5 2006/02/06 10:17:34 schickhj Exp $
+ */
+/*****************************************************************************/
+#ifndef EHCA_FLIGHTRECORDER_H
+#define EHCA_FLIGHTRECORDER_H
+
+#define ED_EXTEND1(x,ar1...) \
+ unsigned long __EDEB_R2=(const unsigned long)x-0;ED_EXTEND2(ar1)
+#define ED_EXTEND2(x,ar1...) \
+ unsigned long __EDEB_R3=(const unsigned long)x-0;ED_EXTEND3(ar1)
+#define ED_EXTEND3(x,ar1...) \
+ unsigned long __EDEB_R4=(const unsigned long)x-0;ED_EXTEND4(ar1)
+#define ED_EXTEND4(x,ar1...)
+
+#define EHCA_FLIGHTRECORDER_SIZE 65536
+
+extern atomic_t ehca_flightrecorder_index;
+extern unsigned long ehca_flightrecorder[EHCA_FLIGHTRECORDER_SIZE];
+
+/* Not nice, but -O2 optimized */
+
+#define ED_FLIGHT_LOG(x,ar1...) { \
+ u32 flight_offset = ((u32) \
+ atomic_add_return(4, &ehca_flightrecorder_index)) \
+ % EHCA_FLIGHTRECORDER_SIZE; \
+ unsigned long *flight_trline = &ehca_flightrecorder[flight_offset]; \
+ unsigned long __EDEB_R1 = (unsigned long) x-0; ED_EXTEND1(ar1) \
+ flight_trline[0]=__EDEB_R1,flight_trline[1]=__EDEB_R2, \
+ flight_trline[2]=__EDEB_R3,flight_trline[3]=__EDEB_R4; }
+
+#define EHCA_FLIGHTRECORDER_BACKLOG 60
+
+void ehca_flight_to_printk(void);
+
+#endif
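The ED_FLIGHT_LOG macro above claims a 4-word line in a fixed-size ring by atomically advancing an index and reducing it modulo the buffer size, so concurrent CPUs write disjoint records. A simplified userspace sketch of that scheme (names and sizes are illustrative, and C11 atomic_fetch_add is used in place of the kernel's atomic_add_return, so a line is claimed at the pre-increment index):

```c
#include <stdatomic.h>

/* Toy flight recorder: each record occupies 4 consecutive slots; the
 * atomic index advances by 4 per record and wraps modulo the size,
 * overwriting the oldest records first. */
#define TOY_FLIGHT_SIZE 16	/* must be a multiple of 4 */

static atomic_uint toy_flight_index;
static unsigned long toy_flight[TOY_FLIGHT_SIZE];

static void toy_flight_log(unsigned long a, unsigned long b,
			   unsigned long c, unsigned long d)
{
	/* claim a private 4-word line; safe against concurrent writers */
	unsigned int ofs = atomic_fetch_add(&toy_flight_index, 4)
			   % TOY_FLIGHT_SIZE;
	unsigned long *line = &toy_flight[ofs];

	line[0] = a;
	line[1] = b;
	line[2] = c;
	line[3] = d;
}
```

Note that, as in the macro above, a writer that wraps can race with a dump of the buffer; the scheme only guarantees that two writers never interleave within one 4-word line.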
diff --git a/drivers/infiniband/hw/ehca/ehca_kernel.h b/drivers/infiniband/hw/ehca/ehca_kernel.h
new file mode 100644
index 0000000..f119149
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_kernel.h
@@ -0,0 +1,135 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * generalized functions for code shared between kernel and userspace
+ *
+ * Authors: Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_kernel.h,v 1.39 2006/02/06 11:45:10 schickhj Exp $
+ */
+
+#ifndef _EHCA_KERNEL_H_
+#define _EHCA_KERNEL_H_
+
+#define FALSE (1==0)
+#define TRUE (1==1)
+
+#define big_little_target 0 /* needed for simulation */
+#include <linux/version.h>
+
+#include <linux/types.h>
+#include "ehca_common.h"
+#include "ehca_kernel.h"
+
+/**
+ * Handle to be used for address translation mechanisms, currently a placeholder.
+ */
+struct ehca_bridge_handle {
+ int handle;
+};
+
+inline static int ehca_adr_bad(void *adr)
+{
+ return (adr == 0);
+};
+
+#ifdef EHCA_USERDRIVER
+/* userspace replacement for kernel functions */
+#include "ehca_usermain.h"
+#else /* USERDRIVER */
+/* kernel includes */
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/kernel.h>
+#include <linux/vmalloc.h>
+#include <linux/mm.h>
+#include <asm/current.h>
+#include <asm/io.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/sched.h>
+#include <linux/err.h>
+#include <linux/kthread.h>
+#include <linux/mman.h>
+#include <linux/delay.h>
+#include <asm/processor.h>
+#include <asm/ibmebus.h>
+#include <linux/pci.h>
+#include <linux/idr.h>
+#include <linux/rwsem.h>
+
+struct ehca_cache {
+ kmem_cache_t *cache;
+ int size;
+};
+
+#ifdef __powerpc64__
+#include <linux/spinlock.h>
+#include <asm/abs_addr.h>
+#include <asm/prom.h>
+#else
+#endif
+
+#include <ehca_tools.h>
+
+#include <asm/pgtable.h>
+
+
+/**
+ * ehca_kv_to_g - Converts a kernel virtual address to a globally visible
+ * address (i.e. a physical/absolute address).
+ */
+inline static u64 ehca_kv_to_g(void *adr)
+{
+ u64 raddr;
+#ifndef CONFIG_PPC64
+ raddr = virt_to_phys((u64)adr);
+#else
+ /* we need to find not only the physical address
+ * but the absolute to account for memory segmentation */
+ raddr = virt_to_abs((u64)adr);
+#endif
+ if (((u64)adr & VMALLOC_START) == VMALLOC_START) {
+ raddr = phys_to_abs((page_to_pfn(vmalloc_to_page(adr)) <<
+ PAGE_SHIFT));
+ }
+ return (raddr);
+}
+
+#endif /* USERDRIVER */
+#include <linux/types.h>
+
+
+#endif /* _EHCA_KERNEL_H_ */
diff --git a/drivers/infiniband/hw/ehca/ehca_tools.h b/drivers/infiniband/hw/ehca/ehca_tools.h
new file mode 100644
index 0000000..915a0b7
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_tools.h
@@ -0,0 +1,431 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * auxiliary functions
+ *
+ * Authors: Christoph Raisch <[email protected]>
+ * Khadija Souissi <[email protected]>
+ * Waleri Fomin <[email protected]>
+ * Heiko J Schick <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_tools.h,v 1.43 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+
+#ifndef EHCA_TOOLS_H
+#define EHCA_TOOLS_H
+
+#include "ehca_flightrecorder.h"
+#include "ehca_common.h"
+
+#define flightlog_value() mftb()
+
+#ifndef sizeofmember
+#define sizeofmember(TYPE, MEMBER) (sizeof( ((TYPE *)0)->MEMBER))
+#endif
+
+#define EHCA_EDEB_TRACE_MASK_SIZE 32
+extern u8 ehca_edeb_mask[EHCA_EDEB_TRACE_MASK_SIZE];
+#define EDEB_ID_TO_U32(str4) (str4[3] | (str4[2] << 8) | (str4[1] << 16) | \
+ (str4[0] << 24))
+
+inline static u64 ehca_edeb_filter(const u32 level,
+ const u32 id, const u32 line)
+{
+ u64 ret = 0;
+ u32 filenr = 0;
+ u32 filter_level = 9;
+ u32 dynamic_level = 0;
+ /* This code is written for the gcc -O2 optimizer, which should collapse
+ * it to two single ints. filter_level is the first level kicked out by
+ * the compiler; it means trace everything below 6. */
+ if (id == EDEB_ID_TO_U32("ehav")) {
+ filenr = 0x01;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("clas")) {
+ filenr = 0x02;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("cqeq")) {
+ filenr = 0x03;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("shca")) {
+ filenr = 0x05;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("eirq")) {
+ filenr = 0x06;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("lMad")) {
+ filenr = 0x07;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("mcas")) {
+ filenr = 0x08;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("mrmw")) {
+ filenr = 0x09;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("vpd ")) {
+ filenr = 0x0a;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("e_qp")) {
+ filenr = 0x0b;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("uqes")) {
+ filenr = 0x0c;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("PHYP")) {
+ filenr = 0x0d;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("snse")) {
+ filenr = 0x0e;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("iptz")) {
+ filenr = 0x0f;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("spta")) {
+ filenr = 0x10;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("simp")) {
+ filenr = 0x11;
+ filter_level = 8;
+ }
+ if (id == EDEB_ID_TO_U32("reqs")) {
+ filenr = 0x12;
+ filter_level = 8;
+ }
+
+ if ((filenr - 1) > sizeof(ehca_edeb_mask)) {
+ filenr = 0;
+ }
+
+ if (filenr == 0) {
+ filter_level = 9;
+ } /* default */
+ ret = filenr * 0x10000 + line;
+ if (filter_level <= level) {
+ return (ret | 0x100000000); /* this is the flag to not trace */
+ }
+ dynamic_level = ehca_edeb_mask[filenr];
+ if (likely(dynamic_level <= level)) {
+ ret = ret | 0x100000000;
+ };
+ return ret;
+}
+
+#ifdef EHCA_USE_HCALL_KERNEL
+#ifdef CONFIG_PPC_PSERIES
+
+#include <asm/paca.h>
+
+/**
+ * IS_EDEB_ON - Checks if debug is on for the given level.
+ */
+#define IS_EDEB_ON(level) \
+ ((ehca_edeb_filter(level, EDEB_ID_TO_U32(DEB_PREFIX), __LINE__) & 0x100000000)==0)
+
+#define EDEB_P_GENERIC(level,idstring,format,args...) \
+do { \
+ u64 ehca_edeb_filterresult = \
+ ehca_edeb_filter(level, EDEB_ID_TO_U32(DEB_PREFIX), __LINE__);\
+ if ((ehca_edeb_filterresult & 0x100000000) == 0) \
+ printk("PU%04x %08x:%s " idstring " "format "\n", \
+ get_paca()->paca_index, (u32)(ehca_edeb_filterresult), \
+ __func__, ##args); \
+ if (unlikely(ehca_edeb_mask[0x1e]!=0)) \
+ ED_FLIGHT_LOG((((u64)(get_paca()->paca_index)<< 32) | \
+ ((u64)(ehca_edeb_filterresult & 0xffffffff)) << 40 | \
+ (flightlog_value()&0xffffffff)), args); \
+} while (1==0)
+
+#elif CONFIG_ARCH_S390
+
+#include <asm/smp.h>
+#define EDEB_P_GENERIC(level,idstring,format,args...) \
+do { \
+ u64 ehca_edeb_filterresult = \
+ ehca_edeb_filter(level, EDEB_ID_TO_U32(DEB_PREFIX), __LINE__);\
+ if ((ehca_edeb_filterresult & 0x100000000) == 0) \
+ printk("PU%04x %08x:%s " idstring " "format "\n", \
+ smp_processor_id(), (u32)(ehca_edeb_filterresult), \
+ __func__, ##args); \
+} while (1==0)
+
+#elif REAL_HCALL
+
+#define EDEB_P_GENERIC(level,idstring,format,args...) \
+do { \
+ u64 ehca_edeb_filterresult = \
+ ehca_edeb_filter(level, EDEB_ID_TO_U32(DEB_PREFIX), __LINE__); \
+ if ((ehca_edeb_filterresult & 0x100000000) == 0) \
+ printk("%08x:%s " idstring " "format "\n", \
+ (u32)(ehca_edeb_filterresult), \
+ __func__, ##args); \
+} while (1==0)
+
+#endif
+#else
+
+#define IS_EDEB_ON(level) (1)
+
+#define EDEB_P_GENERIC(level,idstring,format,args...) \
+do { \
+ printk("%s " idstring " "format "\n", \
+ __func__, ##args); \
+} while (1==0)
+
+#endif
+
+/**
+ * EDEB - Trace output macro.
+ * @level tracelevel
+ * @format optional format string, use "" if not desired
+ * @args printf like arguments for trace, use %Lx for u64, %x for u32
+ * %p for pointer
+ */
+#define EDEB(level,format,args...) \
+ EDEB_P_GENERIC(level,"",format,##args)
+#define EDEB_ERR(level,format,args...) \
+ EDEB_P_GENERIC(level,"HCAD_ERROR ",format,##args)
+#define EDEB_EN(level,format,args...) \
+ EDEB_P_GENERIC(level,">>>",format,##args)
+#define EDEB_EX(level,format,args...) \
+ EDEB_P_GENERIC(level,"<<<",format,##args)
+
+/**
+ * EDEB macro to dump a memory block, whose length is n*16 bytes.
+ * Each line has the following layout:
+ * <format string> adr=X ofs=Y <8 bytes hex> <8 bytes hex>
+ */
+
+#define EDEB_DMP(level,adr,len,format,args...) \
+ do { \
+ unsigned int x; \
+ unsigned int l = (unsigned int)(len); \
+ unsigned char *deb = (unsigned char*)(adr); \
+ for (x = 0; x < l; x += 16) { \
+ EDEB(level, format " adr=%p ofs=%04x %016lx %016lx", \
+ ##args, deb, x, *((u64 *)&deb[0]), *((u64 *)&deb[8])); \
+ deb += 16; \
+ } \
+ } while (0)
+
+#define LOCATION __FILE__ " "
+
+/* define a bitmask, little endian version */
+#define EHCA_BMASK(pos,length) (((pos)<<16)+(length))
+/* define a bitmask, the ibm way... */
+#define EHCA_BMASK_IBM(from,to) (((63-to)<<16)+((to)-(from)+1))
+/* internal function, don't use */
+#define EHCA_BMASK_SHIFTPOS(mask) (((mask)>>16)&0xffff)
+/* internal function, don't use */
+#define EHCA_BMASK_MASK(mask) (0xffffffffffffffffULL >> ((64-(mask))&0xffff))
+/* return value shifted and masked by mask\n
+ * variable|=EHCA_BMASK_SET(MY_MASK,0x4711) ORs the bits in variable\n
+ * variable&=~EHCA_BMASK_SET(MY_MASK,-1) clears the bits from the mask
+ * in variable
+ */
+#define EHCA_BMASK_SET(mask,value) \
+ ((EHCA_BMASK_MASK(mask) & ((u64)(value)))<<EHCA_BMASK_SHIFTPOS(mask))
+/* extract a parameter from value by mask\n
+ * param=EHCA_BMASK_GET(MY_MASK,value)
+ */
+#define EHCA_BMASK_GET(mask,value) \
+ ( EHCA_BMASK_MASK(mask)& (((u64)(value))>>EHCA_BMASK_SHIFTPOS(mask)))
+
+/**
+ * ehca_fixme - Dummy function which will be removed in production code
+ * to find all todos by compiler.
+ */
+void ehca_fixme(void);
+
+extern void exit(int);
+inline static void ehca_catastrophic(char *str)
+{
+#ifndef EHCA_USERDRIVER
+ printk(KERN_ERR "HCAD_ERROR %s\n", str);
+ ehca_flight_to_printk();
+#else
+ exit(1);
+#endif
+}
+
+#define PARANOIA_MODE
+#ifdef PARANOIA_MODE
+
+#define EHCA_CHECK_ADR_P(adr) \
+ if (unlikely(adr==0)) { \
+ EDEB_ERR(4, "adr=%p check failed line %i", adr, \
+ __LINE__); \
+ return ERR_PTR(-EFAULT); }
+
+#define EHCA_CHECK_ADR(adr) \
+ if (unlikely(adr==0)) { \
+ EDEB_ERR(4, "adr=%p check failed line %i", adr, \
+ __LINE__); \
+ return -EFAULT; }
+
+#define EHCA_CHECK_DEVICE_P(device) \
+ if (unlikely(device==0)) { \
+ EDEB_ERR(4, "device=%p check failed", device); \
+ return ERR_PTR(-EFAULT); }
+
+#define EHCA_CHECK_DEVICE(device) \
+ if (unlikely(device==0)) { \
+ EDEB_ERR(4, "device=%p check failed", device); \
+ return -EFAULT; }
+
+#define EHCA_CHECK_PD(pd) \
+ if (unlikely(pd==0)) { \
+ EDEB_ERR(4, "pd=%p check failed", pd); \
+ return -EFAULT; }
+
+#define EHCA_CHECK_PD_P(pd) \
+ if (unlikely(pd==0)) { \
+ EDEB_ERR(4, "pd=%p check failed", pd); \
+ return ERR_PTR(-EFAULT); }
+
+#define EHCA_CHECK_AV(av) \
+ if (unlikely(av==0)) { \
+ EDEB_ERR(4, "av=%p check failed", av); \
+ return -EFAULT; }
+
+#define EHCA_CHECK_AV_P(av) \
+ if (unlikely(av==0)) { \
+ EDEB_ERR(4, "av=%p check failed", av); \
+ return ERR_PTR(-EFAULT); }
+
+#define EHCA_CHECK_CQ(cq) \
+ if (unlikely(cq==0)) { \
+ EDEB_ERR(4, "cq=%p check failed", cq); \
+ return -EFAULT; }
+
+#define EHCA_CHECK_CQ_P(cq) \
+ if (unlikely(cq==0)) { \
+ EDEB_ERR(4, "cq=%p check failed", cq); \
+ return ERR_PTR(-EFAULT); }
+
+#define EHCA_CHECK_EQ(eq) \
+ if (unlikely(eq==0)) { \
+ EDEB_ERR(4, "eq=%p check failed", eq); \
+ return -EFAULT; }
+
+#define EHCA_CHECK_EQ_P(eq) \
+ if (unlikely(eq==0)) { \
+ EDEB_ERR(4, "eq=%p check failed", eq); \
+ return ERR_PTR(-EFAULT); }
+
+#define EHCA_CHECK_QP(qp) \
+ if (unlikely(qp==0)) { \
+ EDEB_ERR(4, "qp=%p check failed", qp); \
+ return -EFAULT; }
+
+#define EHCA_CHECK_QP_P(qp) \
+ if (unlikely(qp==0)) { \
+ EDEB_ERR(4, "qp=%p check failed", qp); \
+ return ERR_PTR(-EFAULT); }
+
+#define EHCA_CHECK_MR(mr) \
+ if (unlikely(mr==0)) { \
+ EDEB_ERR(4, "mr=%p check failed", mr); \
+ return -EFAULT; }
+
+#define EHCA_CHECK_MR_P(mr) \
+ if (unlikely(mr==0)) { \
+ EDEB_ERR(4, "mr=%p check failed", mr); \
+ return ERR_PTR(-EFAULT); }
+
+#define EHCA_CHECK_MW(mw) \
+ if (unlikely(mw==0)) { \
+ EDEB_ERR(4, "mw=%p check failed", mw); \
+ return -EFAULT; }
+
+#define EHCA_CHECK_MW_P(mw) \
+ if (unlikely(mw==0)) { \
+ EDEB_ERR(4, "mw=%p check failed", mw); \
+ return ERR_PTR(-EFAULT); }
+
+#define EHCA_CHECK_FMR(fmr) \
+ if (unlikely(fmr==0)) { \
+ EDEB_ERR(4, "fmr=%p check failed", fmr); \
+ return -EFAULT; }
+
+#define EHCA_CHECK_FMR_P(fmr) \
+ if (unlikely(fmr==0)) { \
+ EDEB_ERR(4, "fmr=%p check failed", fmr); \
+ return ERR_PTR(-EFAULT); }
+
+#define EHCA_REGISTER_PD(device,pd)
+#define EHCA_REGISTER_AV(pd,av)
+#define EHCA_DEREGISTER_PD(PD)
+#define EHCA_DEREGISTER_AV(av)
+#else
+#define EHCA_CHECK_DEVICE_P(device)
+
+#define EHCA_CHECK_PD(pd)
+#define EHCA_REGISTER_PD(device,pd)
+#define EHCA_DEREGISTER_PD(PD)
+#endif
+
+/**
+ * ehca2ib_return_code - Returns ib return code corresponding to the given
+ * ehca return code.
+ */
+static inline int ehca2ib_return_code(u64 ehca_rc)
+{
+ switch (ehca_rc) {
+ case H_Success:
+ return 0;
+ case H_Busy:
+ return -EBUSY;
+ case H_NoMem:
+ return -ENOMEM;
+ default:
+ return -EINVAL;
+ }
+}
+
+#endif /* EHCA_TOOLS_H */
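The EHCA_BMASK encoding above (shift position in the upper 16 bits of the mask, field length in the lower 16) is compact but easy to misread, so here it is reproduced verbatim in a standalone form where SET/GET can be checked. MY_FIELD is an illustrative field definition, not one the driver uses.

```c
#include <assert.h>

/* EHCA_BMASK helpers as defined in ehca_tools.h: a "mask" is really
 * (shiftpos << 16) + length, and SET/GET mask a value to the field
 * width and shift it into or out of position in a u64. */
typedef unsigned long long u64;

#define EHCA_BMASK(pos,length) (((pos)<<16)+(length))
#define EHCA_BMASK_IBM(from,to) (((63-to)<<16)+((to)-(from)+1))
#define EHCA_BMASK_SHIFTPOS(mask) (((mask)>>16)&0xffff)
#define EHCA_BMASK_MASK(mask) (0xffffffffffffffffULL >> ((64-(mask))&0xffff))
#define EHCA_BMASK_SET(mask,value) \
	((EHCA_BMASK_MASK(mask) & ((u64)(value)))<<EHCA_BMASK_SHIFTPOS(mask))
#define EHCA_BMASK_GET(mask,value) \
	(EHCA_BMASK_MASK(mask) & (((u64)(value))>>EHCA_BMASK_SHIFTPOS(mask)))

/* hypothetical 8-bit field at IBM bit positions 48..55 (low shift 8) */
#define MY_FIELD EHCA_BMASK_IBM(48, 55)
```

Note the trick in EHCA_BMASK_MASK: `(64-(mask))&0xffff` recovers `64 - length` even though `mask` also carries the shift position in its upper bits, because the subtraction only disturbs bits above the low 16.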

2006-02-18 00:57:38

by Roland Dreier

Subject: [PATCH 09/22] ehca classes

From: Roland Dreier <[email protected]>

The fact that ehca_cq_delete and ehca_qp_delete return an int seems
a little silly, given that the functions can never fail.

The code in ehca_classes.c seems like a misuse of the kmem_cache API;
rather than wrapping kmem_cache_alloc() and doing extra initialization,
why not just use the kmem_cache's constructor to do this?
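The suggestion can be sketched outside the kernel: move per-object setup into a constructor that the cache runs on each new object, instead of open-coding it after every allocation. This is a simplified userspace analogue (toy_cache, toy_cq and the lock fields are illustrative stand-ins for the kmem_cache API and the driver's spinlocks, and a real kmem_cache runs the constructor per slab object rather than per alloc call):

```c
#include <stdlib.h>

/* Toy object cache: the constructor registered at create time runs on
 * every object handed out, so initialization lives in one place. */
struct toy_cache {
	size_t size;
	void (*ctor)(void *obj);	/* per-object initializer */
};

static struct toy_cache toy_cache_create(size_t size, void (*ctor)(void *))
{
	struct toy_cache c = { size, ctor };
	return c;
}

static void *toy_cache_alloc(struct toy_cache *c)
{
	void *obj = calloc(1, c->size);	/* zeroed, like the driver's memset */

	if (obj && c->ctor)
		c->ctor(obj);		/* setup happens here, not at call sites */
	return obj;
}

/* example object mirroring the shape of ehca_cq's two locks */
struct toy_cq {
	int spinlock;
	int cb_lock;
};

static void toy_cq_ctor(void *obj)
{
	struct toy_cq *cq = obj;

	cq->spinlock = 1;	/* stands in for spin_lock_init() */
	cq->cb_lock = 1;
}
```

With this shape, every allocation path gets a fully initialized object and the wrapper functions shrink to (or disappear into) plain cache allocations.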
---

drivers/infiniband/hw/ehca/ehca_classes.c | 191 +++++++++++
drivers/infiniband/hw/ehca/ehca_classes.h | 369 +++++++++++++++++++++
drivers/infiniband/hw/ehca/ehca_classes_core.h | 73 ++++
drivers/infiniband/hw/ehca/ehca_classes_pSeries.h | 256 +++++++++++++++
4 files changed, 889 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_classes.c b/drivers/infiniband/hw/ehca/ehca_classes.c
new file mode 100644
index 0000000..9819788
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_classes.c
@@ -0,0 +1,191 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * struct initialisations and allocation
+ *
+ * Authors: Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_classes.c,v 1.21 2006/02/06 16:20:38 schickhj Exp $
+ */
+
+#define DEB_PREFIX "clas"
+#include "ehca_kernel.h"
+
+#include "ehca_classes.h"
+
+struct ehca_pd *ehca_pd_new(void)
+{
+ extern struct ehca_module ehca_module;
+ struct ehca_pd *me;
+
+ me = kmem_cache_alloc(ehca_module.cache_pd, SLAB_KERNEL);
+ if (me == NULL)
+ return NULL;
+
+ memset(me, 0, sizeof(struct ehca_pd));
+
+ return me;
+}
+
+void ehca_pd_delete(struct ehca_pd *me)
+{
+ extern struct ehca_module ehca_module;
+
+ kmem_cache_free(ehca_module.cache_pd, me);
+}
+
+struct ehca_cq *ehca_cq_new(void)
+{
+ extern struct ehca_module ehca_module;
+ struct ehca_cq *me;
+
+ me = kmem_cache_alloc(ehca_module.cache_cq, SLAB_KERNEL);
+ if (me == NULL)
+ return NULL;
+
+ memset(me, 0, sizeof(struct ehca_cq));
+ spin_lock_init(&me->spinlock);
+ spin_lock_init(&me->cb_lock);
+
+ return me;
+}
+
+int ehca_cq_delete(struct ehca_cq *me)
+{
+ extern struct ehca_module ehca_module;
+
+ kmem_cache_free(ehca_module.cache_cq, me);
+
+ return H_Success;
+}
+
+struct ehca_qp *ehca_qp_new(void)
+{
+ extern struct ehca_module ehca_module;
+ struct ehca_qp *me;
+
+ me = kmem_cache_alloc(ehca_module.cache_qp, SLAB_KERNEL);
+ if (me == NULL)
+ return NULL;
+
+ memset(me, 0, sizeof(struct ehca_qp));
+ spin_lock_init(&me->spinlock_s);
+ spin_lock_init(&me->spinlock_r);
+
+ return me;
+}
+
+int ehca_qp_delete(struct ehca_qp *me)
+{
+ extern struct ehca_module ehca_module;
+
+ kmem_cache_free(ehca_module.cache_qp, me);
+
+ return H_Success;
+}
+
+struct ehca_av *ehca_av_new(void)
+{
+ extern struct ehca_module ehca_module;
+ struct ehca_av *me;
+
+ me = kmem_cache_alloc(ehca_module.cache_av, SLAB_KERNEL);
+ if (me == NULL)
+ return NULL;
+
+ memset(me, 0, sizeof(struct ehca_av));
+
+ return me;
+}
+
+int ehca_av_delete(struct ehca_av *me)
+{
+ extern struct ehca_module ehca_module;
+
+ kmem_cache_free(ehca_module.cache_av, me);
+
+ return H_Success;
+}
+
+struct ehca_mr *ehca_mr_new(void)
+{
+ extern struct ehca_module ehca_module;
+ struct ehca_mr *me;
+
+ me = kmem_cache_alloc(ehca_module.cache_mr, SLAB_KERNEL);
+ if (me) {
+ memset(me, 0, sizeof(struct ehca_mr));
+ spin_lock_init(&me->mrlock);
+ EDEB_EX(7, "ehca_mr=%p sizeof(ehca_mr_t)=%x", me,
+ (u32) sizeof(struct ehca_mr));
+ } else {
+ EDEB_ERR(3, "alloc failed");
+ }
+
+ return me;
+}
+
+void ehca_mr_delete(struct ehca_mr *me)
+{
+ extern struct ehca_module ehca_module;
+
+ kmem_cache_free(ehca_module.cache_mr, me);
+}
+
+struct ehca_mw *ehca_mw_new(void)
+{
+ extern struct ehca_module ehca_module;
+ struct ehca_mw *me;
+
+ me = kmem_cache_alloc(ehca_module.cache_mw, SLAB_KERNEL);
+ if (me) {
+ memset(me, 0, sizeof(struct ehca_mw));
+ spin_lock_init(&me->mwlock);
+ EDEB_EX(7, "ehca_mw=%p sizeof(ehca_mw_t)=%x", me,
+ (u32) sizeof(struct ehca_mw));
+ } else {
+ EDEB_ERR(3, "alloc failed");
+ }
+
+ return me;
+}
+
+void ehca_mw_delete(struct ehca_mw *me)
+{
+ extern struct ehca_module ehca_module;
+
+ kmem_cache_free(ehca_module.cache_mw, me);
+}
+
diff --git a/drivers/infiniband/hw/ehca/ehca_classes.h b/drivers/infiniband/hw/ehca/ehca_classes.h
new file mode 100644
index 0000000..1d72aaf
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_classes.h
@@ -0,0 +1,369 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * struct definitions for hcad internal structures
+ *
+ * Authors: Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_classes.h,v 1.80 2006/02/06 16:20:38 schickhj Exp $
+ */
+
+#ifndef __EHCA_CLASSES_H__
+#define __EHCA_CLASSES_H__
+
+#include "ehca_kernel.h"
+#include "ipz_pt_fn.h"
+
+#include <linux/list.h>
+
+struct ehca_module;
+struct ehca_qp;
+struct ehca_cq;
+struct ehca_eq;
+struct ehca_mr;
+struct ehca_mw;
+struct ehca_pd;
+struct ehca_av;
+
+#ifndef CONFIG_PPC64
+#ifndef Z_SERIES
+#error "no series defined"
+#endif
+#endif
+
+#ifdef CONFIG_PPC64
+#include "ehca_classes_pSeries.h"
+#endif
+
+#ifdef Z_SERIES
+#include "ehca_classes_zSeries.h"
+#endif
+
+#include <rdma/ib_verbs.h>
+#include <rdma/ib_user_verbs.h>
+
+#include "ehca_irq.h"
+
+#include "ehca_classes_core.h"
+
+/** @brief HCAD class
+ *
+ * contains HCAD specific data
+ *
+ */
+struct ehca_module {
+ struct list_head shca_list;
+ spinlock_t shca_lock;
+
+ kmem_cache_t *cache_pd;
+ kmem_cache_t *cache_cq;
+ kmem_cache_t *cache_qp;
+ kmem_cache_t *cache_av;
+ kmem_cache_t *cache_mr;
+ kmem_cache_t *cache_mw;
+
+ struct ehca_pfmodule pf; /* platform specific part of HCA */
+};
+
+/** @brief EQ class
+ */
+struct ehca_eq {
+ u32 length; /* length of EQ */
+ struct ipz_queue ipz_queue; /* EQ in kv */
+ struct ipz_eq_handle ipz_eq_handle;
+ struct ehca_irq_info irq_info;
+ struct work_struct work;
+ struct h_galpas galpas;
+ int is_initialized;
+
+ struct ehca_pfeq pf; /* platform specific part of EQ */
+
+ spinlock_t spinlock;
+};
+
+/** static port
+ */
+struct ehca_sport {
+ struct ib_cq *ibcq_aqp1; /* CQ for AQP1 */
+ struct ib_qp *ibqp_aqp1; /* QP for AQP1 */
+ enum ib_port_state port_state;
+};
+
+/** @brief HCA class "static HCA"
+ *
+ * contains HCA specific data per HCA (or vHCA?)
+ * per instance reported by firmware
+ *
+ */
+struct ehca_shca {
+ struct ib_device ib_device;
+ struct ibmebus_dev *ibmebus_dev;
+ u8 num_ports;
+ int hw_level;
+ struct list_head shca_list;
+ struct ipz_adapter_handle ipz_hca_handle; /* firmware HCA handle */
+ struct ehca_bridge_handle bridge;
+ struct ehca_sport sport[2];
+ struct ehca_eq eq; /* event queue */
+ struct ehca_eq neq; /* notification event queue */
+ struct ehca_mr *maxmr; /* internal max MR (for kernel users) */
+ struct ehca_pd *pd; /* internal pd (for kernel users) */
+ struct ehca_pfshca pf; /* platform specific part of HCA */
+ struct h_galpas galpas;
+};
+
+/** @brief protection domain
+ */
+struct ehca_pd {
+ struct ib_pd ib_pd; /* gen2 qp, must always be first in ehca_pd */
+ struct ipz_pd fw_pd;
+ struct ehca_pfpd pf;
+};
+
+/** @brief QP class
+ */
+struct ehca_qp {
+ struct ib_qp ib_qp; /* gen2 qp, must always be first in ehca_qp */
+ struct ehca_qp_core ehca_qp_core; /* common fields for
+ user/kernel space */
+ u32 token;
+ spinlock_t spinlock_s;
+ spinlock_t spinlock_r;
+ u32 sq_max_inline_data_size; /* max # of inline data bytes that can be sent */
+ struct ipz_qp_handle ipz_qp_handle; /* QP handle for h-calls */
+ struct ehca_pfqp pf; /* platform specific part of QP */
+ struct ib_qp_init_attr init_attr;
+ /* adr mapping for s/r queues and fw handle bw kernel&user space */
+ u64 uspace_squeue;
+ u64 uspace_rqueue;
+ u64 uspace_fwh;
+ struct ehca_cq* send_cq;
+ unsigned int sqerr_purgeflag;
+ struct list_head list_entries;
+};
+
+#define QP_HASHTAB_LEN 7
+/** @brief CQ class
+ */
+struct ehca_cq {
+ struct ib_cq ib_cq; /* gen2 cq, must always be first
+ in ehca_cq */
+ struct ehca_cq_core ehca_cq_core; /* common fields for
+ user/kernel space */
+ spinlock_t spinlock;
+ u32 cq_number;
+ u32 token;
+ u32 nr_of_entries;
+ /* fw specific data common for p+z */
+ struct ipz_cq_handle ipz_cq_handle; /* CQ handle for h-calls */
+ /* pf specific code */
+ struct ehca_pfcq pf; /* platform specific part of CQ */
+ spinlock_t cb_lock; /* completion event handler */
+ /* adr mapping for queue and fw handle bw kernel&user space */
+ u64 uspace_queue;
+ u64 uspace_fwh;
+ struct list_head qp_hashtab[QP_HASHTAB_LEN];
+};
+
+
+/** @brief MR flags
+ */
+enum ehca_mr_flag {
+ EHCA_MR_FLAG_FMR = 0x80000000, /* FMR, created with ehca_alloc_fmr */
+ EHCA_MR_FLAG_MAXMR = 0x40000000, /* max-MR */
+ EHCA_MR_FLAG_USER = 0x20000000 /* user space TODO...necessary????. */
+};
+
+/** @brief MR class
+ */
+struct ehca_mr {
+ union {
+ struct ib_mr ib_mr; /* must always be first in ehca_mr */
+ struct ib_fmr ib_fmr; /* must always be first in ehca_mr */
+ } ib;
+
+ spinlock_t mrlock;
+
+ /* !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+ * !!! ehca_mr_deletenew() memsets from flags to end of structure
+ * !!! DON'T move flags or insert another field before.
+ * !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! */
+
+ enum ehca_mr_flag flags;
+ u32 num_pages; /* number of MR pages */
+ int acl; /* ACL (stored here for usage in reregister) */
+ u64 *start; /* virtual start address (stored here for */
+ /* usage in reregister) */
+ u64 size; /* size (stored here for usage in reregister) */
+ u32 fmr_page_size; /* page size for FMR */
+ u32 fmr_max_pages; /* max pages for FMR */
+ u32 fmr_max_maps; /* max outstanding maps for FMR */
+ u32 fmr_map_cnt; /* map counter for FMR */
+ /* fw specific data */
+ struct ipz_mrmw_handle ipz_mr_handle; /* MR handle for h-calls */
+ struct h_galpas galpas;
+ /* data for userspace bridge */
+ u32 nr_of_pages;
+ void *pagearray;
+
+ struct ehca_pfmr pf; /* platform specific part of MR */
+};
+
+/** @brief MW class
+ */
+struct ehca_mw {
+ struct ib_mw ib_mw; /* gen2 mw, must always be first in ehca_mw */
+ spinlock_t mwlock;
+
+ u8 never_bound; /* indication MW was never bound */
+ struct ipz_mrmw_handle ipz_mw_handle; /* MW handle for h-calls */
+ struct h_galpas galpas;
+
+ struct ehca_pfmw pf; /* platform specific part of MW */
+};
+
+/** @brief MR page info type
+ */
+enum ehca_mr_pgi_type {
+ EHCA_MR_PGI_PHYS = 1, /* type of ehca_reg_phys_mr,
+ * ehca_rereg_phys_mr,
+ * ehca_reg_internal_maxmr */
+ EHCA_MR_PGI_USER = 2, /* type of ehca_reg_user_mr */
+ EHCA_MR_PGI_FMR = 3 /* type of ehca_map_phys_fmr */
+};
+
+/** @brief MR page info
+ */
+struct ehca_mr_pginfo {
+ enum ehca_mr_pgi_type type;
+ u64 num_pages;
+ u64 page_count;
+
+ /* type EHCA_MR_PGI_PHYS section */
+ int num_phys_buf;
+ struct ib_phys_buf *phys_buf_array;
+ u64 next_buf;
+ u64 next_page;
+
+ /* type EHCA_MR_PGI_USER section */
+ struct ib_umem *region;
+ struct ib_umem_chunk *next_chunk;
+ u64 next_nmap;
+
+ /* type EHCA_MR_PGI_FMR section */
+ u64 *page_list;
+ u64 next_listelem;
+};
+
+
+/** @brief address vector suitable for a UD enqueue request
+ */
+struct ehca_av {
+ struct ib_ah ib_ah; /* gen2 ah, must always be first in ehca_ah */
+ struct ehca_ud_av av;
+};
+
+/** @brief user context
+ */
+struct ehca_ucontext {
+ struct ib_ucontext ib_ucontext;
+};
+
+struct ehca_module *ehca_module_new(void);
+
+int ehca_module_delete(struct ehca_module *me);
+
+int ehca_eq_ctor(struct ehca_eq *eq);
+
+int ehca_eq_dtor(struct ehca_eq *eq);
+
+struct ehca_shca *ehca_shca_new(void);
+
+int ehca_shca_delete(struct ehca_shca *me);
+
+struct ehca_sport *ehca_sport_new(struct ehca_shca *anchor); /*anchor?? */
+
+struct ehca_cq *ehca_cq_new(void);
+
+int ehca_cq_delete(struct ehca_cq *me);
+
+struct ehca_av *ehca_av_new(void);
+
+int ehca_av_delete(struct ehca_av *me);
+
+struct ehca_pd *ehca_pd_new(void);
+
+void ehca_pd_delete(struct ehca_pd *me);
+
+struct ehca_qp *ehca_qp_new(void);
+
+int ehca_qp_delete(struct ehca_qp *me);
+
+struct ehca_mr *ehca_mr_new(void);
+
+void ehca_mr_delete(struct ehca_mr *me);
+
+struct ehca_mw *ehca_mw_new(void);
+
+void ehca_mw_delete(struct ehca_mw *me);
+
+extern struct rw_semaphore ehca_qp_idr_sem;
+extern struct rw_semaphore ehca_cq_idr_sem;
+extern struct idr ehca_qp_idr;
+extern struct idr ehca_cq_idr;
+
+/*
+ * resp structs for comm bw user and kernel space
+ */
+struct ehca_create_cq_resp {
+ u32 cq_number;
+ u32 token;
+ struct ehca_cq_core ehca_cq_core;
+};
+
+struct ehca_create_qp_resp {
+ u32 qp_num;
+ u32 token;
+ struct ehca_qp_core ehca_qp_core;
+};
+
+/*
+ * helper funcs to link send cq and qp
+ */
+int ehca_cq_assign_qp(struct ehca_cq *cq, struct ehca_qp *qp);
+int ehca_cq_unassign_qp(struct ehca_cq *cq, unsigned int qp_num);
+struct ehca_qp* ehca_cq_get_qp(struct ehca_cq *cq, int qp_num);
+
+#endif /* __EHCA_CLASSES_H__ */
diff --git a/drivers/infiniband/hw/ehca/ehca_classes_core.h b/drivers/infiniband/hw/ehca/ehca_classes_core.h
new file mode 100644
index 0000000..5e864b3
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_classes_core.h
@@ -0,0 +1,73 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * core struct definitions for hcad internal structures and
+ * to be used/compiled commonly in user and kernel space
+ *
+ * Authors: Christoph Raisch <[email protected]>
+ * Hoang-Nam Nguyen <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_classes_core.h,v 1.12 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef __EHCA_CLASSES_CORE_H__
+#define __EHCA_CLASSES_CORE_H__
+
+#include "ipz_pt_fn_core.h"
+#include "ehca_galpa.h"
+
+/** @brief qp core contains common fields for user/kernel space
+ */
+struct ehca_qp_core {
+ /* kernel space: enum ib_qp_type, user space: enum ibv_qp_type */
+ int qp_type;
+ int dummy1; /* 8 byte alignment */
+ struct ipz_queue ipz_squeue;
+ struct ipz_queue ipz_rqueue;
+ struct h_galpas galpas;
+ unsigned int qkey;
+ int dummy2; /* 8 byte alignment */
+ /* qp_num assigned by ehca: sqp0/1 may have got different numbers */
+ unsigned int real_qp_num;
+};
+
+/** @brief cq core contains common fields for user/kernel space
+ */
+struct ehca_cq_core {
+ struct ipz_queue ipz_queue;
+ struct h_galpas galpas;
+};
+
+#endif /* __EHCA_CLASSES_CORE_H__ */
diff --git a/drivers/infiniband/hw/ehca/ehca_classes_pSeries.h b/drivers/infiniband/hw/ehca/ehca_classes_pSeries.h
new file mode 100644
index 0000000..8f86137
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_classes_pSeries.h
@@ -0,0 +1,256 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * pSeries interface definitions
+ *
+ * Authors: Waleri Fomin <[email protected]>
+ * Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_classes_pSeries.h,v 1.24 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef __EHCA_CLASSES_PSERIES_H__
+#define __EHCA_CLASSES_PSERIES_H__
+
+#include "ehca_galpa.h"
+#include "ipz_pt_fn.h"
+
+
+struct ehca_pfmodule {
+};
+
+struct ehca_pfshca {
+};
+
+struct ehca_pfqp {
+ struct ipz_qpt sqpt;
+ struct ipz_qpt rqpt;
+ struct ehca_bridge_handle bridge;
+};
+
+struct ehca_pfcq {
+ struct ipz_qpt qpt;
+ struct ehca_bridge_handle bridge;
+ u32 cqnr;
+};
+
+struct ehca_pfeq {
+ struct ipz_qpt qpt;
+ struct ehca_bridge_handle bridge;
+ struct h_galpa galpa;
+ u32 eqnr;
+};
+
+struct ehca_pfpd {
+};
+
+struct ehca_pfmr {
+ struct ehca_bridge_handle bridge;
+};
+struct ehca_pfmw {
+};
+
+struct ipz_adapter_handle {
+ u64 handle;
+};
+
+struct ipz_cq_handle {
+ u64 handle;
+};
+
+struct ipz_eq_handle {
+ u64 handle;
+};
+
+struct ipz_qp_handle {
+ u64 handle;
+};
+struct ipz_mrmw_handle {
+ u64 handle;
+};
+
+struct ipz_pd {
+ u32 value;
+};
+
+struct hcp_modify_qp_control_block {
+ u32 qkey; /* 00 */
+ u32 rdd; /* reliable datagram domain */
+ u32 send_psn; /* 02 */
+ u32 receive_psn; /* 03 */
+ u32 prim_phys_port; /* 04 */
+ u32 alt_phys_port; /* 05 */
+ u32 prim_p_key_idx; /* 06 */
+ u32 alt_p_key_idx; /* 07 */
+ u32 rdma_atomic_ctrl; /* 08 */
+ u32 qp_state; /* 09 */
+ u32 reserved_10; /* 10 */
+ u32 rdma_nr_atomic_resp_res; /* 11 */
+ u32 path_migration_state; /* 12 */
+ u32 rdma_atomic_outst_dest_qp; /* 13 */
+ u32 dest_qp_nr; /* 14 */
+ u32 min_rnr_nak_timer_field; /* 15 */
+ u32 service_level; /* 16 */
+ u32 send_grh_flag; /* 17 */
+ u32 retry_count; /* 18 */
+ u32 timeout; /* 19 */
+ u32 path_mtu; /* 20 */
+ u32 max_static_rate; /* 21 */
+ u32 dlid; /* 22 */
+ u32 rnr_retry_count; /* 23 */
+ u32 source_path_bits; /* 24 */
+ u32 traffic_class; /* 25 */
+ u32 hop_limit; /* 26 */
+ u32 source_gid_idx; /* 27 */
+ u32 flow_label; /* 28 */
+ u32 reserved_29; /* 29 */
+ union { /* 30 */
+ u64 dw[2];
+ u8 byte[16];
+ } dest_gid;
+ u32 service_level_al; /* 34 */
+ u32 send_grh_flag_al; /* 35 */
+ u32 retry_count_al; /* 36 */
+ u32 timeout_al; /* 37 */
+ u32 max_static_rate_al; /* 38 */
+ u32 dlid_al; /* 39 */
+ u32 rnr_retry_count_al; /* 40 */
+ u32 source_path_bits_al; /* 41 */
+ u32 traffic_class_al; /* 42 */
+ u32 hop_limit_al; /* 43 */
+ u32 source_gid_idx_al; /* 44 */
+ u32 flow_label_al; /* 45 */
+ u32 reserved_46; /* 46 */
+ u32 reserved_47; /* 47 */
+ union { /* 48 */
+ u64 dw[2];
+ u8 byte[16];
+ } dest_gid_al;
+ u32 max_nr_outst_send_wr; /* 52 */
+ u32 max_nr_outst_recv_wr; /* 53 */
+ u32 disable_ete_credit_check; /* 54 */
+ u32 qp_number; /* 55 */
+ u64 send_queue_handle; /* 56 */
+ u64 recv_queue_handle; /* 58 */
+ u32 actual_nr_sges_in_sq_wqe; /* 60 */
+ u32 actual_nr_sges_in_rq_wqe; /* 61 */
+ u32 qp_enable; /* 62 */
+ u32 curr_srq_limit; /* 63 */
+ u64 qp_aff_asyn_ev_log_reg; /* 64 */
+ u64 shared_rq_hndl; /* 66 */
+ u64 trigg_doorbell_qp_hndl; /* 68 */
+ u32 reserved_70_127[58]; /* 70 */
+};
+
+#define MQPCB_MASK_QKEY EHCA_BMASK_IBM(0,0)
+#define MQPCB_MASK_SEND_PSN EHCA_BMASK_IBM(2,2)
+#define MQPCB_MASK_RECEIVE_PSN EHCA_BMASK_IBM(3,3)
+#define MQPCB_MASK_PRIM_PHYS_PORT EHCA_BMASK_IBM(4,4)
+#define MQPCB_PRIM_PHYS_PORT EHCA_BMASK_IBM(24,31)
+#define MQPCB_MASK_ALT_PHYS_PORT EHCA_BMASK_IBM(5,5)
+#define MQPCB_MASK_PRIM_P_KEY_IDX EHCA_BMASK_IBM(6,6)
+#define MQPCB_PRIM_P_KEY_IDX EHCA_BMASK_IBM(24,31)
+#define MQPCB_MASK_ALT_P_KEY_IDX EHCA_BMASK_IBM(7,7)
+#define MQPCB_MASK_RDMA_ATOMIC_CTRL EHCA_BMASK_IBM(8,8)
+#define MQPCB_MASK_QP_STATE EHCA_BMASK_IBM(9,9)
+#define MQPCB_QP_STATE EHCA_BMASK_IBM(24,31)
+#define MQPCB_MASK_RDMA_NR_ATOMIC_RESP_RES EHCA_BMASK_IBM(11,11)
+#define MQPCB_MASK_PATH_MIGRATION_STATE EHCA_BMASK_IBM(12,12)
+#define MQPCB_MASK_RDMA_ATOMIC_OUTST_DEST_QP EHCA_BMASK_IBM(13,13)
+#define MQPCB_MASK_DEST_QP_NR EHCA_BMASK_IBM(14,14)
+#define MQPCB_MASK_MIN_RNR_NAK_TIMER_FIELD EHCA_BMASK_IBM(15,15)
+#define MQPCB_MASK_SERVICE_LEVEL EHCA_BMASK_IBM(16,16)
+#define MQPCB_MASK_SEND_GRH_FLAG EHCA_BMASK_IBM(17,17)
+#define MQPCB_MASK_RETRY_COUNT EHCA_BMASK_IBM(18,18)
+#define MQPCB_MASK_TIMEOUT EHCA_BMASK_IBM(19,19)
+#define MQPCB_MASK_PATH_MTU EHCA_BMASK_IBM(20,20)
+#define MQPCB_PATH_MTU EHCA_BMASK_IBM(24,31)
+#define MQPCB_MASK_MAX_STATIC_RATE EHCA_BMASK_IBM(21,21)
+#define MQPCB_MAX_STATIC_RATE EHCA_BMASK_IBM(24,31)
+#define MQPCB_MASK_DLID EHCA_BMASK_IBM(22,22)
+#define MQPCB_DLID EHCA_BMASK_IBM(16,31)
+#define MQPCB_MASK_RNR_RETRY_COUNT EHCA_BMASK_IBM(23,23)
+#define MQPCB_RNR_RETRY_COUNT EHCA_BMASK_IBM(29,31)
+#define MQPCB_MASK_SOURCE_PATH_BITS EHCA_BMASK_IBM(24,24)
+#define MQPCB_SOURCE_PATH_BITS EHCA_BMASK_IBM(25,31)
+#define MQPCB_MASK_TRAFFIC_CLASS EHCA_BMASK_IBM(25,25)
+#define MQPCB_TRAFFIC_CLASS EHCA_BMASK_IBM(24,31)
+#define MQPCB_MASK_HOP_LIMIT EHCA_BMASK_IBM(26,26)
+#define MQPCB_HOP_LIMIT EHCA_BMASK_IBM(24,31)
+#define MQPCB_MASK_SOURCE_GID_IDX EHCA_BMASK_IBM(27,27)
+#define MQPCB_SOURCE_GID_IDX EHCA_BMASK_IBM(24,31)
+#define MQPCB_MASK_FLOW_LABEL EHCA_BMASK_IBM(28,28)
+#define MQPCB_FLOW_LABEL EHCA_BMASK_IBM(12,31)
+#define MQPCB_MASK_DEST_GID EHCA_BMASK_IBM(30,30)
+#define MQPCB_MASK_SERVICE_LEVEL_AL EHCA_BMASK_IBM(31,31)
+#define MQPCB_SERVICE_LEVEL_AL EHCA_BMASK_IBM(28,31)
+#define MQPCB_MASK_SEND_GRH_FLAG_AL EHCA_BMASK_IBM(32,32)
+#define MQPCB_SEND_GRH_FLAG_AL EHCA_BMASK_IBM(31,31)
+#define MQPCB_MASK_RETRY_COUNT_AL EHCA_BMASK_IBM(33,33)
+#define MQPCB_RETRY_COUNT_AL EHCA_BMASK_IBM(29,31)
+#define MQPCB_MASK_TIMEOUT_AL EHCA_BMASK_IBM(34,34)
+#define MQPCB_TIMEOUT_AL EHCA_BMASK_IBM(27,31)
+#define MQPCB_MASK_MAX_STATIC_RATE_AL EHCA_BMASK_IBM(35,35)
+#define MQPCB_MAX_STATIC_RATE_AL EHCA_BMASK_IBM(24,31)
+#define MQPCB_MASK_DLID_AL EHCA_BMASK_IBM(36,36)
+#define MQPCB_DLID_AL EHCA_BMASK_IBM(16,31)
+#define MQPCB_MASK_RNR_RETRY_COUNT_AL EHCA_BMASK_IBM(37,37)
+#define MQPCB_RNR_RETRY_COUNT_AL EHCA_BMASK_IBM(29,31)
+#define MQPCB_MASK_SOURCE_PATH_BITS_AL EHCA_BMASK_IBM(38,38)
+#define MQPCB_SOURCE_PATH_BITS_AL EHCA_BMASK_IBM(25,31)
+#define MQPCB_MASK_TRAFFIC_CLASS_AL EHCA_BMASK_IBM(39,39)
+#define MQPCB_TRAFFIC_CLASS_AL EHCA_BMASK_IBM(24,31)
+#define MQPCB_MASK_HOP_LIMIT_AL EHCA_BMASK_IBM(40,40)
+#define MQPCB_HOP_LIMIT_AL EHCA_BMASK_IBM(24,31)
+#define MQPCB_MASK_SOURCE_GID_IDX_AL EHCA_BMASK_IBM(41,41)
+#define MQPCB_SOURCE_GID_IDX_AL EHCA_BMASK_IBM(24,31)
+#define MQPCB_MASK_FLOW_LABEL_AL EHCA_BMASK_IBM(42,42)
+#define MQPCB_FLOW_LABEL_AL EHCA_BMASK_IBM(12,31)
+#define MQPCB_MASK_DEST_GID_AL EHCA_BMASK_IBM(44,44)
+#define MQPCB_MASK_MAX_NR_OUTST_SEND_WR EHCA_BMASK_IBM(45,45)
+#define MQPCB_MAX_NR_OUTST_SEND_WR EHCA_BMASK_IBM(16,31)
+#define MQPCB_MASK_MAX_NR_OUTST_RECV_WR EHCA_BMASK_IBM(46,46)
+#define MQPCB_MAX_NR_OUTST_RECV_WR EHCA_BMASK_IBM(16,31)
+#define MQPCB_MASK_DISABLE_ETE_CREDIT_CHECK EHCA_BMASK_IBM(47,47)
+#define MQPCB_DISABLE_ETE_CREDIT_CHECK EHCA_BMASK_IBM(31,31)
+#define MQPCB_QP_NUMBER EHCA_BMASK_IBM(8,31)
+#define MQPCB_MASK_QP_ENABLE EHCA_BMASK_IBM(48,48)
+#define MQPCB_QP_ENABLE EHCA_BMASK_IBM(31,31)
+#define MQPCB_MASK_CURR_SQR_LIMIT EHCA_BMASK_IBM(49,49)
+#define MQPCB_CURR_SQR_LIMIT EHCA_BMASK_IBM(15,31)
+#define MQPCB_MASK_QP_AFF_ASYN_EV_LOG_REG EHCA_BMASK_IBM(50,50)
+#define MQPCB_MASK_SHARED_RQ_HNDL EHCA_BMASK_IBM(51,51)
+
+#endif /* __EHCA_CLASSES_PSERIES_H__ */

2006-02-18 00:58:58

by Roland Dreier

Subject: [PATCH 11/22] ehca event queues

From: Roland Dreier <[email protected]>

in ehca_poll_eqs(), is there any reason not to use list_for_each_entry()?

Since ehca_poll_eqs() defers all the work to a workqueue, is
there any reason for it to run in a kernel thread? Why not just
make it a recurring timer?
---

drivers/infiniband/hw/ehca/ehca_eq.c | 242 ++++++++++++++++++++++++++++++++++
drivers/infiniband/hw/ehca/ehca_eq.h | 78 +++++++++++
2 files changed, 320 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_eq.c b/drivers/infiniband/hw/ehca/ehca_eq.c
new file mode 100644
index 0000000..e508edb
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_eq.c
@@ -0,0 +1,242 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * Event queue handling
+ *
+ * Authors: Waleri Fomin <[email protected]>
+ * Reinhard Ernst <[email protected]>
+ * Heiko J Schick <[email protected]>
+ * Hoang-Nam Nguyen <[email protected]>
+ *
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_eq.c,v 1.40 2006/02/06 16:20:38 schickhj Exp $
+ */
+
+#define DEB_PREFIX "e_eq"
+
+#include "ehca_eq.h"
+#include "ehca_kernel.h"
+#include "ehca_classes.h"
+#include "hcp_if.h"
+#include "ehca_iverbs.h"
+#include "ipz_pt_fn.h"
+#include "ehca_qes.h"
+#include "ehca_irq.h"
+
+/* TODO: should be defined in ehca_classes_pSeries.h */
+#define HIPZ_EQ_REGISTER_ORIG 0
+
+int ehca_create_eq(struct ehca_shca *shca,
+ struct ehca_eq *eq,
+ const enum ehca_eq_type type, const u32 length)
+{
+ extern struct workqueue_struct *ehca_wq;
+ u64 ret = H_Success;
+ u32 nr_pages = 0;
+ u32 i;
+ void *vpage = NULL;
+
+ EDEB_EN(7, "shca=%p eq=%p length=%x", shca, eq, length);
+ EHCA_CHECK_ADR(shca);
+ EHCA_CHECK_ADR(eq);
+
+ spin_lock_init(&eq->spinlock);
+ eq->is_initialized = 0;
+
+ if (type!=EHCA_EQ && type!=EHCA_NEQ) {
+ EDEB_ERR(4, "Invalid EQ type %x. eq=%p", type, eq);
+ return -EINVAL;
+ }
+ if (length==0) {
+ EDEB_ERR(4, "EQ length must not be zero. eq=%p", eq);
+ return -EINVAL;
+ }
+
+ ret = hipz_h_alloc_resource_eq(shca->ipz_hca_handle,
+ &eq->pf,
+ type,
+ length,
+ &eq->ipz_eq_handle,
+ &eq->length,
+ &nr_pages, &eq->irq_info.ist);
+
+ if (ret != H_Success) {
+ EDEB_ERR(4, "Can't allocate EQ / NEQ. eq=%p", eq);
+ return -EINVAL;
+ }
+
+ ret = ipz_queue_ctor(&eq->ipz_queue, nr_pages,
+ EHCA_PAGESIZE, sizeof(struct ehca_eqe), 0);
+ if (!ret) {
+ EDEB_ERR(4, "Can't allocate EQ pages. eq=%p", eq);
+ goto create_eq_exit1;
+ }
+
+ for (i = 0; i < nr_pages; i++) {
+ u64 rpage;
+
+ if (!(vpage = ipz_QPageit_get_inc(&eq->ipz_queue))) {
+ ret = H_Resource;
+ goto create_eq_exit2;
+ }
+
+ rpage = ehca_kv_to_g(vpage);
+ ret = hipz_h_register_rpage_eq(shca->ipz_hca_handle,
+ eq->ipz_eq_handle,
+ &eq->pf,
+ 0,
+ HIPZ_EQ_REGISTER_ORIG, rpage, 1);
+
+ if (i == (nr_pages - 1)) {
+ /* last page */
+ vpage = ipz_QPageit_get_inc(&eq->ipz_queue);
+ if ((ret != H_Success) || (vpage != 0)) {
+ goto create_eq_exit2;
+ }
+ } else {
+ if ((ret != H_PAGE_REGISTERED) || (vpage == 0)) {
+ goto create_eq_exit2;
+ }
+ }
+ }
+
+ ipz_QEit_reset(&eq->ipz_queue);
+
+#ifndef EHCA_USERDRIVER
+ {
+ pid_t pid = 0;
+ (eq->irq_info).pid = pid;
+ (eq->irq_info).eq = eq;
+ (eq->irq_info).wq = ehca_wq;
+ (eq->irq_info).work = &(eq->work);
+ }
+#endif
+
+ /* register interrupt handlers and initialize work queues */
+ if (type == EHCA_EQ) {
+ INIT_WORK(&(eq->work),
+ ehca_interrupt_eq, (void *)&(eq->irq_info));
+ eq->is_initialized = 1;
+ hipz_request_interrupt(&(eq->irq_info), ehca_interrupt);
+ } else if (type == EHCA_NEQ) {
+ INIT_WORK(&(eq->work),
+ ehca_interrupt_neq, (void *)&(eq->irq_info));
+ hipz_request_interrupt(&(eq->irq_info), ehca_interrupt);
+ }
+
+ EDEB_EX(7, "ret=%lx", ret);
+
+ return 0;
+
+ create_eq_exit2:
+ ipz_queue_dtor(&eq->ipz_queue);
+
+ create_eq_exit1:
+ hipz_h_destroy_eq(shca->ipz_hca_handle, eq);
+
+ EDEB_EX(7, "ret=%lx", ret);
+
+ return -EINVAL;
+}
+
+void *ehca_poll_eq(struct ehca_shca *shca, struct ehca_eq *eq)
+{
+ unsigned long flags = 0;
+ void *eqe = NULL;
+
+ EDEB_EN(7, "shca=%p eq=%p", shca, eq);
+ EHCA_CHECK_ADR_P(shca);
+ EHCA_CHECK_EQ_P(eq);
+
+ spin_lock_irqsave(&eq->spinlock, flags);
+ eqe = ipz_QEit_EQ_get_inc_valid(&eq->ipz_queue);
+ spin_unlock_irqrestore(&eq->spinlock, flags);
+
+ EDEB_EX(7, "eq=%p eqe=%p", eq, eqe);
+
+ return eqe;
+}
+
+int ehca_poll_eqs(void *data)
+{
+ extern struct workqueue_struct *ehca_wq;
+ struct ehca_shca *shca;
+ struct ehca_module* module = data;
+ struct list_head *entry;
+
+ do {
+ spin_lock(&module->shca_lock);
+ list_for_each(entry, &module->shca_list) {
+ shca = list_entry(entry, struct ehca_shca, shca_list);
+
+ if (shca->eq.is_initialized && !kthread_should_stop())
+ queue_work(ehca_wq, &shca->eq.work);
+ }
+ spin_unlock(&module->shca_lock);
+
+ msleep_interruptible(1000);
+ }
+ while(!kthread_should_stop());
+
+ return 0;
+}
+
+int ehca_destroy_eq(struct ehca_shca *shca, struct ehca_eq *eq)
+{
+ unsigned long flags = 0;
+ u64 retcode = H_Success;
+
+ EDEB_EN(7, "shca=%p eq=%p", shca, eq);
+ EHCA_CHECK_ADR(shca);
+ EHCA_CHECK_EQ(eq);
+
+ spin_lock_irqsave(&eq->spinlock, flags);
+ hipz_free_interrupt(&(eq->irq_info));
+
+ retcode = hipz_h_destroy_eq(shca->ipz_hca_handle, eq);
+
+ spin_unlock_irqrestore(&eq->spinlock, flags);
+
+ if (retcode != H_Success) {
+ EDEB_ERR(4, "Can't free EQ resources.");
+ return -EINVAL;
+ }
+ ipz_queue_dtor(&eq->ipz_queue);
+
+ EDEB_EX(7, "retcode=%lx", retcode);
+
+ return 0;
+}
+
diff --git a/drivers/infiniband/hw/ehca/ehca_eq.h b/drivers/infiniband/hw/ehca/ehca_eq.h
new file mode 100644
index 0000000..d09f21b
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_eq.h
@@ -0,0 +1,78 @@
+/*
+ * IBM eServer eHCA InfiniBand device driver for Linux on POWER
+ *
+ * Completion queue, event queue handling helper functions
+ *
+ * Authors: Waleri Fomin <[email protected]>
+ * Reinhard Ernst <[email protected]>
+ * Heiko J Schick <[email protected]>
+ * Hoang-Nam Nguyen <[email protected]>
+ *
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_eq.h,v 1.10 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef EHCA_EQ_H
+#define EHCA_EQ_H
+
+#include "ehca_classes.h"
+#include "ehca_common.h"
+
+enum ehca_eq_type {
+ EHCA_EQ = 0, /* event queue */
+ EHCA_NEQ /* notification event queue */
+};
+
+/** @brief hcad internal create EQ
+ */
+int ehca_create_eq(struct ehca_shca *shca,
+ struct ehca_eq *eq, /* struct contains eq to create */
+ enum ehca_eq_type type,
+ const u32 length);
+
+/** @brief destroy the eq
+ */
+int ehca_destroy_eq(struct ehca_shca *shca, struct ehca_eq *eq);
+
+/** @brief hcad internal poll EQ
+ * - check if a new EQE is available,
+ * - if yes, increment the EQE pointer
+ * - otherwise return NULL
+ * @returns pointer to the EQE if a new valid EQE is available,
+ * NULL otherwise
+ */
+void *ehca_poll_eq(struct ehca_shca *shca, struct ehca_eq *eq);
+
+#endif /* EHCA_EQ_H */
+
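The ehca_poll_eq() helper in this patch takes the EQ spinlock and asks ipz_QEit_EQ_get_inc_valid() for the next entry, returning NULL when nothing new has arrived. The underlying convention is a toggle ("valid") bit per queue entry that flips on every lap of the ring, so consumer and producer need not share an index. A minimal user-space sketch of that scheme, with illustrative names rather than the driver's real ipz types:

```c
#include <assert.h>
#include <string.h>

#define EQ_LEN 4

struct eqe {
	unsigned char valid;		/* toggle bit written by the producer */
	unsigned int data;
};

struct eq {
	struct eqe ring[EQ_LEN];
	unsigned int head;		/* next entry to consume */
	unsigned char toggle;		/* valid-bit value expected on this lap */
};

static void eq_init(struct eq *q)
{
	memset(q, 0, sizeof(*q));
	q->toggle = 1;			/* producer writes 1 on the first lap */
}

/* modeled loosely after ipz_QEit_EQ_get_inc_valid(): return the next
 * valid entry and advance past it, or NULL if nothing new is there */
static struct eqe *eq_poll(struct eq *q)
{
	struct eqe *e = &q->ring[q->head];

	if (e->valid != q->toggle)
		return NULL;
	if (++q->head == EQ_LEN) {	/* wrapped: expect the flipped bit */
		q->head = 0;
		q->toggle ^= 1;
	}
	return e;
}

/* producer side, standing in for the adapter writing an event */
static void eq_post(struct eq *q, unsigned int idx,
		    unsigned char toggle, unsigned int data)
{
	q->ring[idx].data = data;
	q->ring[idx].valid = toggle;
}
```

Because the toggle flips each lap, a stale entry from the previous lap never looks valid, which is why the consumer can poll lock-free against the hardware (the spinlock in ehca_poll_eq() only serializes multiple kernel consumers).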

2006-02-18 00:58:59

by Roland Dreier

[permalink] [raw]
Subject: [PATCH 15/22] ehca queue pair handling

From: Roland Dreier <[email protected]>


---

drivers/infiniband/hw/ehca/ehca_qp.c | 1528 ++++++++++++++++++++++++++++++++++
1 files changed, 1528 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_qp.c b/drivers/infiniband/hw/ehca/ehca_qp.c
new file mode 100644
index 0000000..e5b1b80
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_qp.c
@@ -0,0 +1,1528 @@
+/*
+ * IBM eServer eHCA InfiniBand device driver for Linux on POWER
+ *
+ * QP functions
+ *
+ * Authors: Waleri Fomin <[email protected]>
+ * Reinhard Ernst <[email protected]>
+ * Hoang-Nam Nguyen <[email protected]>
+ * Heiko J Schick <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_qp.c,v 1.159 2006/02/15 15:01:24 nguyen Exp $
+ */
+
+
+#define DEB_PREFIX "e_qp"
+
+#include "ehca_kernel.h"
+
+#include "ehca_classes.h"
+#include "ehca_tools.h"
+#include "hcp_if.h"
+#include "ehca_qes.h"
+
+#include "ehca_iverbs.h"
+#include <linux/module.h>
+#include <linux/err.h>
+
+#include <asm/io.h>
+#include <asm/uaccess.h>
+
+/** @brief attributes not supported by query qp
+ */
+#define QP_ATTR_QUERY_NOT_SUPPORTED (IB_QP_MAX_DEST_RD_ATOMIC | \
+ IB_QP_MAX_QP_RD_ATOMIC | \
+ IB_QP_ACCESS_FLAGS | \
+ IB_QP_EN_SQD_ASYNC_NOTIFY)
+
+/** @brief ehca (internal) qp state values
+ */
+enum ehca_qp_state {
+ EHCA_QPS_RESET = 1,
+ EHCA_QPS_INIT = 2,
+ EHCA_QPS_RTR = 3,
+ EHCA_QPS_RTS = 5,
+ EHCA_QPS_SQD = 6,
+ EHCA_QPS_SQE = 8,
+ EHCA_QPS_ERR = 128
+};
+
+/** @brief qp state transitions as defined by IB Arch Rel 1.1 page 431
+ */
+enum ib_qp_statetrans {
+ IB_QPST_ANY2RESET,
+ IB_QPST_ANY2ERR,
+ IB_QPST_RESET2INIT,
+ IB_QPST_INIT2RTR,
+ IB_QPST_INIT2INIT,
+ IB_QPST_RTR2RTS,
+ IB_QPST_RTS2SQD,
+ IB_QPST_RTS2RTS,
+ IB_QPST_SQD2RTS,
+ IB_QPST_SQE2RTS,
+ IB_QPST_SQD2SQD,
+ IB_QPST_MAX /* nr of transitions, this must be last!!! */
+};
+
+/** @brief returns ehca qp state corresponding to given ib qp state
+ */
+static inline enum ehca_qp_state ib2ehca_qp_state(enum ib_qp_state ib_qp_state)
+{
+ switch (ib_qp_state) {
+ case IB_QPS_RESET:
+ return EHCA_QPS_RESET;
+ case IB_QPS_INIT:
+ return EHCA_QPS_INIT;
+ case IB_QPS_RTR:
+ return EHCA_QPS_RTR;
+ case IB_QPS_RTS:
+ return EHCA_QPS_RTS;
+ case IB_QPS_SQD:
+ return EHCA_QPS_SQD;
+ case IB_QPS_SQE:
+ return EHCA_QPS_SQE;
+ case IB_QPS_ERR:
+ return EHCA_QPS_ERR;
+ default:
+ EDEB_ERR(4, "invalid ib_qp_state=%x", ib_qp_state);
+ return -EINVAL;
+ }
+}
+
+/** @brief returns ib qp state corresponding to given ehca qp state
+ */
+static inline enum ib_qp_state ehca2ib_qp_state(enum ehca_qp_state
+ ehca_qp_state)
+{
+ switch (ehca_qp_state) {
+ case EHCA_QPS_RESET:
+ return IB_QPS_RESET;
+ case EHCA_QPS_INIT:
+ return IB_QPS_INIT;
+ case EHCA_QPS_RTR:
+ return IB_QPS_RTR;
+ case EHCA_QPS_RTS:
+ return IB_QPS_RTS;
+ case EHCA_QPS_SQD:
+ return IB_QPS_SQD;
+ case EHCA_QPS_SQE:
+ return IB_QPS_SQE;
+ case EHCA_QPS_ERR:
+ return IB_QPS_ERR;
+ default:
+ EDEB_ERR(4,"invalid ehca_qp_state=%x",ehca_qp_state);
+ return -EINVAL;
+ }
+}
+
+/** @brief qp type
+ * used as index for req_attr and opt_attr of struct ehca_modqp_statetrans
+ */
+enum ehca_qp_type {
+ QPT_RC = 0,
+ QPT_UC = 1,
+ QPT_UD = 2,
+ QPT_SQP = 3,
+ QPT_MAX
+};
+
+/** @brief returns ehca qp type corresponding to ib qp type
+ */
+static inline enum ehca_qp_type ib2ehcaqptype(enum ib_qp_type ibqptype)
+{
+ switch (ibqptype) {
+ case IB_QPT_SMI:
+ case IB_QPT_GSI:
+ return QPT_SQP;
+ case IB_QPT_RC:
+ return QPT_RC;
+ case IB_QPT_UC:
+ return QPT_UC;
+ case IB_QPT_UD:
+ return QPT_UD;
+ default:
+ EDEB_ERR(4,"Invalid ibqptype=%x", ibqptype);
+ return -EINVAL;
+ }
+}
+
+static inline enum ib_qp_statetrans get_modqp_statetrans(int ib_fromstate,
+ int ib_tostate)
+{
+ int index = -EINVAL;
+ switch (ib_tostate) {
+ case IB_QPS_RESET:
+ index = IB_QPST_ANY2RESET;
+ break;
+ case IB_QPS_INIT:
+ if (ib_fromstate == IB_QPS_RESET) {
+ index = IB_QPST_RESET2INIT;
+ } else if (ib_fromstate == IB_QPS_INIT) {
+ index = IB_QPST_INIT2INIT;
+ }
+ break;
+ case IB_QPS_RTR:
+ if (ib_fromstate == IB_QPS_INIT) {
+ index = IB_QPST_INIT2RTR;
+ }
+ break;
+ case IB_QPS_RTS:
+ if (ib_fromstate == IB_QPS_RTR) {
+ index = IB_QPST_RTR2RTS;
+ } else if (ib_fromstate == IB_QPS_RTS) {
+ index = IB_QPST_RTS2RTS;
+ } else if (ib_fromstate == IB_QPS_SQD) {
+ index = IB_QPST_SQD2RTS;
+ } else if (ib_fromstate == IB_QPS_SQE) {
+ index = IB_QPST_SQE2RTS;
+ }
+ break;
+ case IB_QPS_SQD:
+ if (ib_fromstate == IB_QPS_RTS) {
+ index = IB_QPST_RTS2SQD;
+ }
+ break;
+ case IB_QPS_SQE:
+ /* not allowed via mod qp */
+ break;
+ case IB_QPS_ERR:
+ index = IB_QPST_ANY2ERR;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return index;
+}
+
+/** @brief ehca service types
+ */
+enum ehca_service_type {
+ ST_RC = 0,
+ ST_UC = 1,
+ ST_RD = 2,
+ ST_UD = 3
+};
+
+/** @brief returns hcp service type corresponding to given ib qp type
+ * used by create_qp()
+ */
+static inline int ibqptype2servicetype(enum ib_qp_type ibqptype)
+{
+ switch (ibqptype) {
+ case IB_QPT_SMI:
+ case IB_QPT_GSI:
+ return ST_UD;
+ case IB_QPT_RC:
+ return ST_RC;
+ case IB_QPT_UC:
+ return ST_UC;
+ case IB_QPT_UD:
+ return ST_UD;
+ case IB_QPT_RAW_IPV6:
+ return -EINVAL;
+ case IB_QPT_RAW_ETY:
+ return -EINVAL;
+ default:
+ EDEB_ERR(4, "Invalid ibqptype=%x", ibqptype);
+ return -EINVAL;
+ }
+}
+
+/* init_qp_queues - Initializes/constructs r/squeue and registers queue pages.
+ * returns 0 if successful,
+ * -EXXXX if not
+ */
+static inline int init_qp_queues(struct ipz_adapter_handle ipz_hca_handle,
+ struct ehca_qp *my_qp,
+ int nr_sq_pages,
+ int nr_rq_pages,
+ int swqe_size,
+ int rwqe_size,
+ int nr_send_sges, int nr_receive_sges)
+{
+ int ret = -EINVAL;
+ int cnt = 0;
+ void *vpage = NULL;
+ u64 rpage = 0;
+ int ipz_rc = -1;
+ u64 hipz_rc = H_Parameter;
+
+ ipz_rc = ipz_queue_ctor(&my_qp->ehca_qp_core.ipz_squeue,
+ nr_sq_pages,
+ EHCA_PAGESIZE, swqe_size, nr_send_sges);
+ if (!ipz_rc) {
+ EDEB_ERR(4, "Cannot allocate page for squeue. ipz_rc=%x",
+ ipz_rc);
+ ret = -EBUSY;
+ return ret;
+ }
+
+ ipz_rc = ipz_queue_ctor(&my_qp->ehca_qp_core.ipz_rqueue,
+ nr_rq_pages,
+ EHCA_PAGESIZE, rwqe_size, nr_receive_sges);
+ if (!ipz_rc) {
+ EDEB_ERR(4, "Cannot allocate page for rqueue. ipz_rc=%x",
+ ipz_rc);
+ ret = -EBUSY;
+ goto init_qp_queues0;
+ }
+ /* register SQ pages */
+ for (cnt = 0; cnt < nr_sq_pages; cnt++) {
+ vpage = ipz_QPageit_get_inc(&my_qp->ehca_qp_core.ipz_squeue);
+ if (!vpage) {
+ EDEB_ERR(4, "SQ ipz_QPageit_get_inc() "
+ "failed p_vpage= %p", vpage);
+ ret = -EINVAL;
+ goto init_qp_queues1;
+ }
+ rpage = ehca_kv_to_g(vpage);
+
+ hipz_rc = hipz_h_register_rpage_qp(ipz_hca_handle,
+ my_qp->ipz_qp_handle,
+ &my_qp->pf, 0, 0, /*TODO*/
+ rpage, 1,
+ my_qp->ehca_qp_core.galpas.kernel);
+ if (hipz_rc < H_Success) {
+			EDEB_ERR(4, "SQ hipz_qp_register_rpage() failed "
+				 "rc=%lx", hipz_rc);
+ ret = ehca2ib_return_code(hipz_rc);
+ goto init_qp_queues1;
+ }
+ /* for sq no need to check hipz_rc against
+ e.g. H_PAGE_REGISTERED */
+ }
+
+ ipz_QEit_reset(&my_qp->ehca_qp_core.ipz_squeue);
+
+ /* register RQ pages */
+ for (cnt = 0; cnt < nr_rq_pages; cnt++) {
+ vpage = ipz_QPageit_get_inc(&my_qp->ehca_qp_core.ipz_rqueue);
+ if (!vpage) {
+ EDEB_ERR(4,"RQ ipz_QPageit_get_inc() "
+ "failed p_vpage = %p", vpage);
+ hipz_rc = H_Resource;
+ ret = -EINVAL;
+ goto init_qp_queues1;
+ }
+
+ rpage = ehca_kv_to_g(vpage);
+
+ hipz_rc = hipz_h_register_rpage_qp(ipz_hca_handle,
+ my_qp->ipz_qp_handle,
+ &my_qp->pf, 0, 1, /*TODO*/
+ rpage, 1,
+ my_qp->ehca_qp_core.galpas.
+ kernel);
+ if (hipz_rc < H_Success) {
+ EDEB_ERR(4, "RQ hipz_qp_register_rpage() failed "
+ "rc=%lx", hipz_rc);
+ ret = ehca2ib_return_code(hipz_rc);
+ goto init_qp_queues1;
+ }
+ if (cnt == (nr_rq_pages - 1)) { /* last page! */
+ if (hipz_rc != H_Success) {
+ EDEB_ERR(4,"RQ hipz_qp_register_rpage() "
+ "hipz_rc= %lx ", hipz_rc);
+ ret = ehca2ib_return_code(hipz_rc);
+ goto init_qp_queues1;
+ }
+ vpage = ipz_QPageit_get_inc(&my_qp->ehca_qp_core.ipz_rqueue);
+ if (vpage != NULL) {
+ EDEB_ERR(4,"ipz_QPageit_get_inc() "
+ "should not succeed vpage=%p",
+ vpage);
+ ret = -EINVAL;
+ goto init_qp_queues1;
+ }
+ } else {
+ if (hipz_rc != H_PAGE_REGISTERED) {
+ EDEB_ERR(4,"RQ hipz_qp_register_rpage() "
+ "hipz_rc= %lx ", hipz_rc);
+ ret = ehca2ib_return_code(hipz_rc);
+ goto init_qp_queues1;
+ }
+ }
+ }
+
+ ipz_QEit_reset(&my_qp->ehca_qp_core.ipz_rqueue);
+
+ return 0;
+
+ init_qp_queues1:
+ ipz_queue_dtor(&my_qp->ehca_qp_core.ipz_rqueue);
+ init_qp_queues0:
+ ipz_queue_dtor(&my_qp->ehca_qp_core.ipz_squeue);
+ return ret;
+}
+
+
+struct ib_qp *ehca_create_qp(struct ib_pd *pd,
+ struct ib_qp_init_attr *init_attr,
+ struct ib_udata *udata)
+{
+ static int da_msg_size[]={ 128, 256, 512, 1024, 2048, 4096 };
+ int ret = -EINVAL;
+ int servicetype = 0;
+ int sigtype = 0;
+
+ struct ehca_qp *my_qp = NULL;
+ struct ehca_pd *my_pd = NULL;
+ struct ehca_shca *shca = NULL;
+ struct ehca_cq *recv_ehca_cq = NULL;
+ struct ehca_cq *send_ehca_cq = NULL;
+ struct ib_ucontext *context = NULL;
+ u64 hipz_rc = H_Parameter;
+ int max_send_sge;
+ int max_recv_sge;
+ /* h_call's out parameters */
+ u16 act_nr_send_wqes = 0, act_nr_receive_wqes = 0;
+ u8 act_nr_send_sges = 0, act_nr_receive_sges = 0;
+ u32 qp_nr = 0,
+ nr_sq_pages = 0, swqe_size = 0, rwqe_size = 0, nr_rq_pages = 0;
+ u8 daqp_completion;
+ u8 isdaqp;
+ EDEB_EN(7,"pd=%p init_attr=%p", pd, init_attr);
+
+ EHCA_CHECK_PD_P(pd);
+ EHCA_CHECK_ADR_P(init_attr);
+
+ if (init_attr->sq_sig_type != IB_SIGNAL_REQ_WR &&
+ init_attr->sq_sig_type != IB_SIGNAL_ALL_WR) {
+		EDEB_ERR(4, "init_attr->sq_sig_type=%x not allowed",
+ init_attr->sq_sig_type);
+ return ERR_PTR(-EINVAL);
+ }
+
+ /* save daqp completion bits */
+ daqp_completion = init_attr->qp_type & 0x60;
+ /* save daqp bit */
+ isdaqp = (init_attr->qp_type & 0x80) ? 1 : 0;
+ init_attr->qp_type = init_attr->qp_type & 0x1F;
+
+ if (init_attr->qp_type != IB_QPT_UD &&
+ init_attr->qp_type != IB_QPT_SMI &&
+ init_attr->qp_type != IB_QPT_GSI &&
+ init_attr->qp_type != IB_QPT_UC &&
+ init_attr->qp_type != IB_QPT_RC) {
+ EDEB_ERR(4,"wrong QP Type=%x",init_attr->qp_type);
+ return ERR_PTR(-EINVAL);
+ }
+ if (init_attr->qp_type != IB_QPT_RC && isdaqp != 0) {
+ EDEB_ERR(4,"unsupported LL QP Type=%x",init_attr->qp_type);
+ return ERR_PTR(-EINVAL);
+ }
+
+ if (pd->uobject && udata != NULL) {
+ context = pd->uobject->context;
+ }
+
+ my_qp = ehca_qp_new();
+ if (!my_qp) {
+ EDEB_ERR(4, "pd=%p not enough memory to alloc qp", pd);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ my_pd = container_of(pd, struct ehca_pd, ib_pd);
+
+ shca = container_of(pd->device, struct ehca_shca, ib_device);
+ recv_ehca_cq = container_of(init_attr->recv_cq, struct ehca_cq, ib_cq);
+ send_ehca_cq = container_of(init_attr->send_cq, struct ehca_cq, ib_cq);
+
+ my_qp->init_attr = *init_attr;
+
+ do {
+ if (!idr_pre_get(&ehca_qp_idr, GFP_KERNEL)) {
+ ret = -ENOMEM;
+ EDEB_ERR(4, "Can't reserve idr resources.");
+ goto create_qp_exit0;
+ }
+
+ down_write(&ehca_qp_idr_sem);
+ ret = idr_get_new(&ehca_qp_idr, my_qp, &my_qp->token);
+ up_write(&ehca_qp_idr_sem);
+
+ } while (ret == -EAGAIN);
+
+ if (ret) {
+ ret = -ENOMEM;
+ EDEB_ERR(4, "Can't allocate new idr entry.");
+ goto create_qp_exit0;
+ }
+
+ servicetype = ibqptype2servicetype(init_attr->qp_type);
+ if (servicetype < 0) {
+ ret = -EINVAL;
+ EDEB_ERR(4, "Invalid qp_type=%x", init_attr->qp_type);
+ goto create_qp_exit0;
+ }
+
+ if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR) {
+ sigtype = HCALL_SIGT_EVERY;
+ } else {
+ sigtype = HCALL_SIGT_BY_WQE;
+ }
+
+ /* UD_AV CIRCUMVENTION */
+ max_send_sge=init_attr->cap.max_send_sge;
+ max_recv_sge=init_attr->cap.max_recv_sge;
+ if (IB_QPT_UD == init_attr->qp_type ||
+ IB_QPT_GSI == init_attr->qp_type ||
+ IB_QPT_SMI == init_attr->qp_type) {
+ max_send_sge += 2;
+ max_recv_sge += 2;
+ }
+
+ EDEB(7, "isdaqp=%x daqp_completion=%x", isdaqp, daqp_completion);
+
+ hipz_rc = hipz_h_alloc_resource_qp(shca->ipz_hca_handle,
+ &my_qp->pf,
+ servicetype,
+ isdaqp | daqp_completion,
+ sigtype, 0, /* no ud ad lkey ctrl */
+ send_ehca_cq->ipz_cq_handle,
+ recv_ehca_cq->ipz_cq_handle,
+ shca->eq.ipz_eq_handle,
+ my_qp->token,
+ my_pd->fw_pd,
+ (u16) init_attr->cap.max_send_wr + 1, /* fixme(+1 ??) */
+ (u16) init_attr->cap.max_recv_wr + 1, /* fixme(+1 ??) */
+ (u8) max_send_sge,
+ (u8) max_recv_sge,
+ 0, /* ignored if ud ad lkey ctrl is 0 */
+ &my_qp->ipz_qp_handle,
+ &qp_nr,
+ &act_nr_send_wqes,
+ &act_nr_receive_wqes,
+ &act_nr_send_sges,
+ &act_nr_receive_sges,
+ &nr_sq_pages,
+ &nr_rq_pages,
+ &my_qp->ehca_qp_core.galpas);
+ if (hipz_rc != H_Success) {
+ EDEB_ERR(4, "h_alloc_resource_qp() failed rc=%lx", hipz_rc);
+ ret = ehca2ib_return_code(hipz_rc);
+ goto create_qp_exit1;
+ }
+
+ /* store real qp_num as we got from ehca */
+ my_qp->ehca_qp_core.real_qp_num = qp_nr;
+
+ switch (init_attr->qp_type) {
+ case IB_QPT_RC:
+ if (isdaqp == 0) {
+ swqe_size = offsetof(struct ehca_wqe,
+ u.nud.sg_list[(act_nr_send_sges)]);
+ rwqe_size = offsetof(struct ehca_wqe,
+ u.nud.sg_list[(act_nr_receive_sges)]);
+ } else { /* for daqp we need to use msg size, not wqe size */
+ swqe_size = da_msg_size[max_send_sge];
+ rwqe_size = da_msg_size[max_recv_sge];
+ act_nr_send_sges=1;
+ act_nr_receive_sges=1;
+ }
+ break;
+ case IB_QPT_UC:
+ swqe_size = offsetof(struct ehca_wqe,
+ u.nud.sg_list[(act_nr_send_sges)]);
+ rwqe_size = offsetof(struct ehca_wqe,
+ u.nud.sg_list[(act_nr_receive_sges)]);
+ break;
+
+ case IB_QPT_UD:
+ case IB_QPT_GSI:
+ case IB_QPT_SMI:
+ /* UD circumvention */
+ act_nr_receive_sges -= 2;
+ act_nr_send_sges -= 2;
+ swqe_size = offsetof(struct ehca_wqe,
+ u.ud_av.sg_list[(act_nr_send_sges)]);
+ rwqe_size = offsetof(struct ehca_wqe,
+ u.ud_av.sg_list[(act_nr_receive_sges)]);
+
+ if (IB_QPT_GSI == init_attr->qp_type ||
+ IB_QPT_SMI == init_attr->qp_type) {
+ act_nr_send_wqes = init_attr->cap.max_send_wr;
+ act_nr_receive_wqes = init_attr->cap.max_recv_wr;
+ act_nr_send_sges = init_attr->cap.max_send_sge;
+ act_nr_receive_sges = init_attr->cap.max_recv_sge;
+ qp_nr = (init_attr->qp_type == IB_QPT_SMI) ? 0 : 1;
+ }
+
+ break;
+
+ default:
+ break;
+ }
+
+ /* initializes r/squeue and registers queue pages */
+ ret = init_qp_queues(shca->ipz_hca_handle, my_qp,
+ nr_sq_pages, nr_rq_pages,
+ swqe_size, rwqe_size,
+ act_nr_send_sges, act_nr_receive_sges);
+ if (ret != 0) {
+ EDEB_ERR(4,"Couldn't initialize r/squeue and pages ret=%x",
+ ret);
+ goto create_qp_exit2;
+ }
+
+ my_qp->ib_qp.pd = &my_pd->ib_pd;
+ my_qp->ib_qp.device = my_pd->ib_pd.device;
+
+ my_qp->ib_qp.recv_cq = init_attr->recv_cq;
+ my_qp->ib_qp.send_cq = init_attr->send_cq;
+
+ my_qp->ib_qp.qp_num = qp_nr;
+ my_qp->ib_qp.qp_type = init_attr->qp_type;
+
+ my_qp->ehca_qp_core.qp_type = init_attr->qp_type;
+ my_qp->ib_qp.srq = init_attr->srq;
+
+ my_qp->ib_qp.qp_context = init_attr->qp_context;
+ my_qp->ib_qp.event_handler = init_attr->event_handler;
+
+ init_attr->cap.max_inline_data = 0; /* not supported? */
+ init_attr->cap.max_recv_sge = act_nr_receive_sges;
+ init_attr->cap.max_recv_wr = act_nr_receive_wqes;
+ init_attr->cap.max_send_sge = act_nr_send_sges;
+ init_attr->cap.max_send_wr = act_nr_send_wqes;
+
+ /* TODO : define_apq0() not supported yet */
+ if (init_attr->qp_type == IB_QPT_GSI) {
+ if ((hipz_rc = ehca_define_sqp(shca, my_qp, init_attr))) {
+ EDEB_ERR(4, "ehca_define_sqp() failed rc=%lx", hipz_rc);
+ ret = ehca2ib_return_code(hipz_rc);
+ goto create_qp_exit3;
+ }
+ }
+
+ if (init_attr->send_cq != NULL) {
+ struct ehca_cq *cq = container_of(init_attr->send_cq,
+ struct ehca_cq, ib_cq);
+ ret = ehca_cq_assign_qp(cq, my_qp);
+ if (ret != 0) {
+ EDEB_ERR(4, "Couldn't assign qp to send_cq ret=%x", ret);
+ goto create_qp_exit3;
+ }
+ my_qp->send_cq = cq;
+ }
+
+ /* copy queues, galpa data to user space */
+ if (context != NULL && udata != NULL) {
+ struct ehca_create_qp_resp resp;
+ struct vm_area_struct * vma;
+ resp.qp_num = qp_nr;
+ resp.token = my_qp->token;
+ resp.ehca_qp_core = my_qp->ehca_qp_core;
+
+ ehca_mmap_nopage(((u64) (my_qp->token) << 32) | 0x22000000,
+ my_qp->ehca_qp_core.ipz_rqueue.queue_length,
+ ((void**)&resp.ehca_qp_core.ipz_rqueue.queue),
+ &vma);
+ my_qp->uspace_rqueue = (u64)resp.ehca_qp_core.ipz_rqueue.queue;
+ ehca_mmap_nopage(((u64) (my_qp->token) << 32) | 0x23000000,
+ my_qp->ehca_qp_core.ipz_squeue.queue_length,
+ ((void**)&resp.ehca_qp_core.ipz_squeue.queue),
+ &vma);
+ my_qp->uspace_squeue = (u64)resp.ehca_qp_core.ipz_squeue.queue;
+ ehca_mmap_register(my_qp->ehca_qp_core.galpas.user.fw_handle,
+ ((void**)&resp.ehca_qp_core.galpas.kernel.fw_handle),
+ &vma);
+ my_qp->uspace_fwh = (u64)resp.ehca_qp_core.galpas.kernel.fw_handle;
+
+ if (ib_copy_to_udata(udata, &resp, sizeof resp)) {
+ EDEB_ERR(4, "Copy to udata failed");
+ ret = -EINVAL;
+ goto create_qp_exit3;
+ }
+ }
+
+ EDEB_EX(7, "ehca_qp=%p qp_num=%x, token=%x",
+ my_qp, qp_nr, my_qp->token);
+ return (&my_qp->ib_qp);
+
+ create_qp_exit3:
+ ipz_queue_dtor(&my_qp->ehca_qp_core.ipz_rqueue);
+ ipz_queue_dtor(&my_qp->ehca_qp_core.ipz_squeue);
+
+ create_qp_exit2:
+ hipz_h_destroy_qp(shca->ipz_hca_handle, my_qp);
+
+ create_qp_exit1:
+ down_write(&ehca_qp_idr_sem);
+ idr_remove(&ehca_qp_idr, my_qp->token);
+ up_write(&ehca_qp_idr_sem);
+
+ create_qp_exit0:
+ ehca_qp_delete(my_qp);
+ EDEB_EX(4, "failed ret=%x", ret);
+ return ERR_PTR(ret);
+
+}
+
+/** called by internal_modify_qp() at the sqe -> rts transition:
+ * set the purge bit of the bad wqe and all subsequent wqes to avoid
+ * re-entering sqe
+ * @return total number of bad wqes in bad_wqe_cnt
+ */
+static int prepare_sqe_rts(struct ehca_qp *my_qp, struct ehca_shca *shca,
+ int *bad_wqe_cnt)
+{
+ int ret = 0;
+ u64 hipz_rc = H_Success;
+ struct ipz_queue *squeue = NULL;
+ void *bad_send_wqe_p = NULL;
+ void *bad_send_wqe_v = NULL;
+ void *squeue_start_p = NULL;
+ void *squeue_end_p = NULL;
+ void *squeue_start_v = NULL;
+ void *squeue_end_v = NULL;
+ struct ehca_wqe *wqe = NULL;
+ int qp_num = my_qp->ib_qp.qp_num;
+
+ EDEB_EN(7, "ehca_qp=%p qp_num=%x ", my_qp, qp_num);
+
+ /* get send wqe pointer */
+ hipz_rc = hipz_h_disable_and_get_wqe(shca->ipz_hca_handle,
+ my_qp->ipz_qp_handle, &my_qp->pf,
+ &bad_send_wqe_p, NULL, 2);
+ if (hipz_rc != H_Success) {
+ EDEB_ERR(4, "hipz_h_disable_and_get_wqe() failed "
+ "ehca_qp=%p qp_num=%x hipz_rc=%lx",
+ my_qp, qp_num, hipz_rc);
+ ret = ehca2ib_return_code(hipz_rc);
+ goto prepare_sqe_rts_exit1;
+ }
+ bad_send_wqe_p = (void*)((u64)bad_send_wqe_p & (~(1L<<63)));
+ EDEB(7, "qp_num=%x bad_send_wqe_p=%p", qp_num, bad_send_wqe_p);
+ /* convert wqe pointer to vadr */
+ bad_send_wqe_v = abs_to_virt((u64)bad_send_wqe_p);
+ EDEB_DMP(6, bad_send_wqe_v, 32, "qp_num=%x bad_wqe", qp_num);
+
+ squeue = &my_qp->ehca_qp_core.ipz_squeue;
+ squeue_start_p = (void*)ehca_kv_to_g(squeue->queue);
+ squeue_end_p = squeue_start_p+squeue->queue_length;
+ squeue_start_v = abs_to_virt((u64)squeue_start_p);
+ squeue_end_v = abs_to_virt((u64)squeue_end_p);
+ EDEB(6, "qp_num=%x squeue_start_v=%p squeue_end_v=%p",
+ qp_num, squeue_start_v, squeue_end_v);
+
+ /* loop sets wqe's purge bit */
+ wqe = (struct ehca_wqe*)bad_send_wqe_v;
+ *bad_wqe_cnt = 0;
+ while (wqe->optype != 0xff && wqe->wqef != 0xff) {
+ EDEB_DMP(6, wqe, 32, "qp_num=%x wqe", qp_num);
+ wqe->nr_of_data_seg = 0; /* suppress data access */
+ wqe->wqef = WQEF_PURGE; /* WQE to be purged */
+ wqe = (struct ehca_wqe*)((u8*)wqe+squeue->qe_size);
+ *bad_wqe_cnt = (*bad_wqe_cnt)+1;
+ if ((void*)wqe >= squeue_end_v) {
+ wqe = squeue_start_v;
+ }
+ } /* eof while wqe */
+	/* the bad wqe will be reprocessed and ignored when poll_cq() is
+	   called, i.e. the nr of wqes with flush error status is one less */
+ EDEB(6, "qp_num=%x flusherr_wqe_cnt=%x", qp_num, (*bad_wqe_cnt)-1);
+ wqe->wqef = 0;
+
+ prepare_sqe_rts_exit1:
+
+ EDEB_EX(7, "ehca_qp=%p qp_num=%x ret=%x", my_qp, qp_num, ret);
+ return ret;
+}
+
+/** @brief internal modify qp with circumvention to handle aqp0 properly
+ * smi_reset2init indicates if this is an internal reset-to-init call for
+ * the SMI. This flag must always be zero if called from ehca_modify_qp()!
+ * This internal function was introduced to avoid recursion in
+ * ehca_modify_qp()!
+ */
+static int internal_modify_qp(struct ib_qp *ibqp,
+ struct ib_qp_attr *attr,
+ int attr_mask, int smi_reset2init)
+{
+ enum ib_qp_state qp_cur_state = 0, qp_new_state = 0;
+ int cnt = 0, qp_attr_idx = 0, retcode = 0;
+
+ enum ib_qp_statetrans statetrans;
+ struct hcp_modify_qp_control_block *mqpcb = NULL;
+ struct ehca_qp *my_qp = NULL;
+ struct ehca_shca *shca = NULL;
+ u64 update_mask = 0;
+ u64 hipz_rc = H_Success;
+ int bad_wqe_cnt = 0;
+ int squeue_locked = 0;
+ unsigned long spl_flags = 0;
+
+ my_qp = container_of(ibqp, struct ehca_qp, ib_qp);
+ shca = container_of(ibqp->pd->device, struct ehca_shca, ib_device);
+
+ EDEB_EN(7, "ehca_qp=%p qp_num=%x ibqp_type=%x "
+ "new qp_state=%x attribute_mask=%x",
+ my_qp, ibqp->qp_num, ibqp->qp_type,
+ attr->qp_state, attr_mask);
+
+ /* do query_qp to obtain current attr values */
+ mqpcb = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (mqpcb == NULL) {
+ retcode = -ENOMEM;
+ EDEB_ERR(4, "Could not get zeroed page for mqpcb "
+ "ehca_qp=%p qp_num=%x ", my_qp, ibqp->qp_num);
+ goto modify_qp_exit0;
+ }
+ memset(mqpcb, 0, PAGE_SIZE);
+
+ hipz_rc = hipz_h_query_qp(shca->ipz_hca_handle,
+ my_qp->ipz_qp_handle,
+ &my_qp->pf,
+ mqpcb, my_qp->ehca_qp_core.galpas.kernel);
+ if (hipz_rc != H_Success) {
+ EDEB_ERR(4, "hipz_h_query_qp() failed "
+ "ehca_qp=%p qp_num=%x hipz_rc=%lx",
+ my_qp, ibqp->qp_num, hipz_rc);
+ retcode = ehca2ib_return_code(hipz_rc);
+ goto modify_qp_exit1;
+ }
+ EDEB(7, "ehca_qp=%p qp_num=%x ehca_qp_state=%x",
+ my_qp, ibqp->qp_num, mqpcb->qp_state);
+
+ qp_cur_state = ehca2ib_qp_state(mqpcb->qp_state);
+
+ if (qp_cur_state == -EINVAL) { /* invalid qp state */
+ retcode = -EINVAL;
+ EDEB_ERR(4, "Invalid current ehca_qp_state=%x "
+ "ehca_qp=%p qp_num=%x",
+ mqpcb->qp_state, my_qp, ibqp->qp_num);
+ goto modify_qp_exit1;
+ }
+ /* circumvention to set aqp0 initial state to init
+ as expected by IB spec */
+ if (smi_reset2init == 0 &&
+ ibqp->qp_type == IB_QPT_SMI &&
+ qp_cur_state == IB_QPS_RESET &&
+ (attr_mask & IB_QP_STATE)
+ && attr->qp_state == IB_QPS_INIT) { /* RESET -> INIT */
+ struct ib_qp_attr smiqp_attr = {
+ .qp_state = IB_QPS_INIT,
+ .port_num = my_qp->init_attr.port_num,
+ .pkey_index = 0,
+ .qkey = 0
+ };
+ int smiqp_attr_mask = IB_QP_STATE | IB_QP_PORT |
+ IB_QP_PKEY_INDEX | IB_QP_QKEY;
+ int smirc = internal_modify_qp(
+ ibqp, &smiqp_attr, smiqp_attr_mask, 1);
+ if (smirc != 0) {
+ EDEB_ERR(4, "SMI RESET -> INIT failed. "
+ "ehca_modify_qp() rc=%x", smirc);
+ retcode = H_Parameter;
+ goto modify_qp_exit1;
+ }
+ qp_cur_state = IB_QPS_INIT;
+ EDEB(7, "SMI RESET -> INIT succeeded");
+ }
+	/* does the transmitted current state match the actual current state? */
+ if (attr_mask & IB_QP_CUR_STATE) {
+ if (qp_cur_state != attr->cur_qp_state) {
+ retcode = -EINVAL;
+ EDEB_ERR(4, "Invalid IB_QP_CUR_STATE "
+ "attr->curr_qp_state=%x <> "
+ "actual cur_qp_state=%x. "
+ "ehca_qp=%p qp_num=%x",
+ attr->cur_qp_state, qp_cur_state,
+ my_qp, ibqp->qp_num);
+ goto modify_qp_exit1;
+ }
+ }
+
+ EDEB(7, "ehca_qp=%p qp_num=%x current qp_state=%x "
+ "new qp_state=%x attribute_mask=%x",
+ my_qp, ibqp->qp_num, qp_cur_state, attr->qp_state, attr_mask);
+
+ qp_new_state = attr_mask & IB_QP_STATE ? attr->qp_state : qp_cur_state;
+ if (!smi_reset2init &&
+ !ib_modify_qp_is_ok(qp_cur_state, qp_new_state, ibqp->qp_type,
+ attr_mask)) {
+ retcode = -EINVAL;
+ EDEB_ERR(4, "Invalid qp transition new_state=%x cur_state=%x "
+ "ehca_qp=%p qp_num=%x attr_mask=%x",
+ qp_new_state, qp_cur_state, my_qp, ibqp->qp_num,
+ attr_mask);
+ goto modify_qp_exit1;
+ }
+
+	if (ib2ehca_qp_state(qp_new_state) != -EINVAL) {
+		mqpcb->qp_state = ib2ehca_qp_state(qp_new_state);
+		update_mask = EHCA_BMASK_SET(MQPCB_MASK_QP_STATE, 1);
+ } else {
+ retcode = -EINVAL;
+ EDEB_ERR(4, "Invalid new qp state=%x "
+ "ehca_qp=%p qp_num=%x",
+ qp_new_state, my_qp, ibqp->qp_num);
+ goto modify_qp_exit1;
+ }
+
+ /* retrieve state transition struct to get req and opt attrs */
+ statetrans = get_modqp_statetrans(qp_cur_state, qp_new_state);
+ if (statetrans < 0) {
+ retcode = -EINVAL;
+ EDEB_ERR(4, "<INVALID STATE CHANGE> qp_cur_state=%x "
+ "new_qp_state=%x State_xsition=%x "
+ "ehca_qp=%p qp_num=%x",
+ qp_cur_state, qp_new_state,
+ statetrans, my_qp, ibqp->qp_num);
+ goto modify_qp_exit1;
+ }
+
+ qp_attr_idx = ib2ehcaqptype(ibqp->qp_type);
+
+ if (qp_attr_idx < 0) {
+ retcode = qp_attr_idx;
+ EDEB_ERR(4, "Invalid QP type=%x ehca_qp=%p qp_num=%x",
+ ibqp->qp_type, my_qp, ibqp->qp_num);
+ goto modify_qp_exit1;
+ }
+
+ EDEB(7, "ehca_qp=%p qp_num=%x <VALID STATE CHANGE> qp_state_xsit=%x",
+ my_qp, ibqp->qp_num, statetrans);
+
+ /* sqe -> rts: set purge bit of bad wqe before actual trans */
+ if ((my_qp->ehca_qp_core.qp_type == IB_QPT_UD
+ || my_qp->ehca_qp_core.qp_type == IB_QPT_GSI
+ || my_qp->ehca_qp_core.qp_type == IB_QPT_SMI)
+ && statetrans == IB_QPST_SQE2RTS) {
+ /* mark next free wqe if kernel */
+ if (my_qp->uspace_squeue == 0) {
+ struct ehca_wqe *wqe = NULL;
+ /* lock send queue */
+ spin_lock_irqsave(&my_qp->spinlock_s, spl_flags);
+ squeue_locked = 1;
+ /* mark next free wqe */
+ wqe=(struct ehca_wqe*)
+ my_qp->ehca_qp_core.ipz_squeue.current_q_addr;
+ wqe->optype = wqe->wqef = 0xff;
+ EDEB(7, "qp_num=%x next_free_wqe=%p",
+ ibqp->qp_num, wqe);
+ }
+ retcode = prepare_sqe_rts(my_qp, shca, &bad_wqe_cnt);
+ if (retcode != 0) {
+ EDEB_ERR(4, "prepare_sqe_rts() failed "
+ "ehca_qp=%p qp_num=%x ret=%x",
+ my_qp, ibqp->qp_num, retcode);
+ goto modify_qp_exit2;
+ }
+ }
+
+	/* enable RDMA_Atomic_Control if reset->init and reliable connection;
+	   this is necessary since gen2 does not provide that flag,
+	   but pHyp requires it */
+ if (statetrans == IB_QPST_RESET2INIT &&
+ (ibqp->qp_type == IB_QPT_RC || ibqp->qp_type == IB_QPT_UC)) {
+ mqpcb->rdma_atomic_ctrl = 3;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_RDMA_ATOMIC_CTRL, 1);
+ }
+ /* circ. pHyp requires #RDMA/Atomic Responder Resources for UC INIT -> RTR */
+ if (statetrans == IB_QPST_INIT2RTR &&
+ (ibqp->qp_type == IB_QPT_UC) &&
+ !(attr_mask & IB_QP_MAX_DEST_RD_ATOMIC)) {
+ mqpcb->rdma_nr_atomic_resp_res = 1; /* default to 1 */
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_RDMA_NR_ATOMIC_RESP_RES, 1);
+ }
+
+ if (attr_mask & IB_QP_PKEY_INDEX) {
+ mqpcb->prim_p_key_idx = attr->pkey_index;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_PRIM_P_KEY_IDX, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x "
+ "IB_QP_PKEY_INDEX update_mask=%lx",
+ my_qp, ibqp->qp_num, update_mask);
+ }
+ if (attr_mask & IB_QP_PORT) {
+ if (attr->port_num < 1 || attr->port_num > shca->num_ports) {
+ retcode = -EINVAL;
+ EDEB_ERR(4, "Invalid port=%x. "
+ "ehca_qp=%p qp_num=%x num_ports=%x",
+ attr->port_num, my_qp, ibqp->qp_num,
+ shca->num_ports);
+ goto modify_qp_exit2;
+ }
+ mqpcb->prim_phys_port = attr->port_num;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_PRIM_PHYS_PORT, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x IB_QP_PORT update_mask=%lx",
+ my_qp, ibqp->qp_num, update_mask);
+ }
+ if (attr_mask & IB_QP_QKEY) {
+ mqpcb->qkey = attr->qkey;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_QKEY, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x IB_QP_QKEY update_mask=%lx",
+ my_qp, ibqp->qp_num, update_mask);
+ }
+ if (attr_mask & IB_QP_AV) {
+ mqpcb->dlid = attr->ah_attr.dlid;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_DLID, 1);
+ mqpcb->source_path_bits = attr->ah_attr.src_path_bits;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_SOURCE_PATH_BITS, 1);
+ mqpcb->service_level = attr->ah_attr.sl;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_SERVICE_LEVEL, 1);
+ mqpcb->max_static_rate = attr->ah_attr.static_rate;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_MAX_STATIC_RATE, 1);
+
+ /* only if GRH is set do we set SOURCE_GID_IDX and DEST_GID;
+ * otherwise pHyp will return H_ATTR_PARM!
+ */
+ if (attr->ah_attr.ah_flags == IB_AH_GRH) {
+ mqpcb->send_grh_flag = 1 << 31;
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_SEND_GRH_FLAG, 1);
+ mqpcb->source_gid_idx = attr->ah_attr.grh.sgid_index;
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_SOURCE_GID_IDX, 1);
+
+ for (cnt = 0; cnt < 16; cnt++) {
+ mqpcb->dest_gid.byte[cnt] =
+ attr->ah_attr.grh.dgid.raw[cnt];
+ }
+
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_DEST_GID, 1);
+ mqpcb->flow_label = attr->ah_attr.grh.flow_label;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_FLOW_LABEL, 1);
+ mqpcb->hop_limit = attr->ah_attr.grh.hop_limit;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_HOP_LIMIT, 1);
+ mqpcb->traffic_class = attr->ah_attr.grh.traffic_class;
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_TRAFFIC_CLASS, 1);
+ }
+
+ EDEB(7, "ehca_qp=%p qp_num=%x IB_QP_AV update_mask=%lx",
+ my_qp, ibqp->qp_num, update_mask);
+ }
+
+ if (attr_mask & IB_QP_PATH_MTU) {
+ mqpcb->path_mtu = attr->path_mtu;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_PATH_MTU, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x IB_QP_PATH_MTU update_mask=%lx",
+ my_qp, ibqp->qp_num, update_mask);
+ }
+ if (attr_mask & IB_QP_TIMEOUT) {
+ mqpcb->timeout = attr->timeout;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_TIMEOUT, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x IB_QP_TIMEOUT update_mask=%lx",
+ my_qp, ibqp->qp_num, update_mask);
+ }
+ if (attr_mask & IB_QP_RETRY_CNT) {
+ mqpcb->retry_count = attr->retry_cnt;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_RETRY_COUNT, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x IB_QP_RETRY_CNT update_mask=%lx",
+ my_qp, ibqp->qp_num, update_mask);
+ }
+ if (attr_mask & IB_QP_RNR_RETRY) {
+ mqpcb->rnr_retry_count = attr->rnr_retry;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_RNR_RETRY_COUNT, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x IB_QP_RNR_RETRY update_mask=%lx",
+ my_qp, ibqp->qp_num, update_mask);
+ }
+ if (attr_mask & IB_QP_RQ_PSN) {
+ mqpcb->receive_psn = attr->rq_psn;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_RECEIVE_PSN, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x IB_QP_RQ_PSN update_mask=%lx",
+ my_qp, ibqp->qp_num, update_mask);
+ }
+ if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC) {
+ /* @TODO CHECK THIS with our spec */
+ mqpcb->rdma_nr_atomic_resp_res = attr->max_dest_rd_atomic;
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_RDMA_NR_ATOMIC_RESP_RES, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x IB_QP_MAX_DEST_RD_ATOMIC "
+ "update_mask=%lx", my_qp, ibqp->qp_num, update_mask);
+ }
+ if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC) {
+ /* @TODO CHECK THIS with our spec */
+ mqpcb->rdma_atomic_outst_dest_qp = attr->max_rd_atomic;
+ update_mask |=
+ EHCA_BMASK_SET
+ (MQPCB_MASK_RDMA_ATOMIC_OUTST_DEST_QP, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x IB_QP_MAX_QP_RD_ATOMIC "
+ "update_mask=%lx", my_qp, ibqp->qp_num, update_mask);
+ }
+ if (attr_mask & IB_QP_ALT_PATH) {
+ mqpcb->dlid_al = attr->alt_ah_attr.dlid;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_DLID_AL, 1);
+ mqpcb->source_path_bits_al = attr->alt_ah_attr.src_path_bits;
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_SOURCE_PATH_BITS_AL, 1);
+ mqpcb->service_level_al = attr->alt_ah_attr.sl;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_SERVICE_LEVEL_AL, 1);
+ mqpcb->max_static_rate_al = attr->alt_ah_attr.static_rate;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_MAX_STATIC_RATE_AL, 1);
+
+ /* only if GRH is set do we set SOURCE_GID_IDX and DEST_GID;
+ * otherwise pHyp will return H_ATTR_PARM!
+ */
+ if (attr->alt_ah_attr.ah_flags == IB_AH_GRH) {
+ mqpcb->send_grh_flag_al = 1 << 31;
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_SEND_GRH_FLAG_AL, 1);
+ mqpcb->source_gid_idx_al =
+ attr->alt_ah_attr.grh.sgid_index;
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_SOURCE_GID_IDX_AL, 1);
+
+ for (cnt = 0; cnt < 16; cnt++) {
+ mqpcb->dest_gid_al.byte[cnt] =
+ attr->alt_ah_attr.grh.dgid.raw[cnt];
+ }
+
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_DEST_GID_AL, 1);
+ mqpcb->flow_label_al = attr->alt_ah_attr.grh.flow_label;
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_FLOW_LABEL_AL, 1);
+ mqpcb->hop_limit_al = attr->alt_ah_attr.grh.hop_limit;
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_HOP_LIMIT_AL, 1);
+ mqpcb->traffic_class_al =
+ attr->alt_ah_attr.grh.traffic_class;
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_TRAFFIC_CLASS_AL, 1);
+ }
+
+ EDEB(7, "ehca_qp=%p qp_num=%x "
+ "IB_QP_ALT_PATH update_mask=%lx",
+ my_qp, ibqp->qp_num, update_mask);
+ }
+
+ if (attr_mask & IB_QP_MIN_RNR_TIMER) {
+ mqpcb->min_rnr_nak_timer_field = attr->min_rnr_timer;
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_MIN_RNR_NAK_TIMER_FIELD, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x "
+ "IB_QP_MIN_RNR_TIMER update_mask=%lx",
+ my_qp, ibqp->qp_num, update_mask);
+ }
+
+ if (attr_mask & IB_QP_SQ_PSN) {
+ mqpcb->send_psn = attr->sq_psn;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_SEND_PSN, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x "
+ "IB_QP_SQ_PSN update_mask=%lx",
+ my_qp, ibqp->qp_num, update_mask);
+ }
+
+ if (attr_mask & IB_QP_DEST_QPN) {
+ mqpcb->dest_qp_nr = attr->dest_qp_num;
+ update_mask |= EHCA_BMASK_SET(MQPCB_MASK_DEST_QP_NR, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x "
+ "IB_QP_DEST_QPN update_mask=%lx",
+ my_qp, ibqp->qp_num, update_mask);
+ }
+
+ if (attr_mask & IB_QP_PATH_MIG_STATE) {
+ mqpcb->path_migration_state = attr->path_mig_state;
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_PATH_MIGRATION_STATE, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x "
+ "IB_QP_PATH_MIG_STATE update_mask=%lx", my_qp,
+ ibqp->qp_num, update_mask);
+ }
+
+ if (attr_mask & IB_QP_CAP) {
+ mqpcb->max_nr_outst_send_wr = attr->cap.max_send_wr+1;
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_MAX_NR_OUTST_SEND_WR, 1);
+ mqpcb->max_nr_outst_recv_wr = attr->cap.max_recv_wr+1;
+ update_mask |=
+ EHCA_BMASK_SET(MQPCB_MASK_MAX_NR_OUTST_RECV_WR, 1);
+ EDEB(7, "ehca_qp=%p qp_num=%x "
+ "IB_QP_CAP update_mask=%lx",
+ my_qp, ibqp->qp_num, update_mask);
+ /* TODO no support for max_send/recv_sge??? */
+ }
+
+ EDEB_DMP(7, mqpcb, 4*70, "ehca_qp=%p qp_num=%x", my_qp, ibqp->qp_num);
+
+ hipz_rc = hipz_h_modify_qp(shca->ipz_hca_handle,
+ my_qp->ipz_qp_handle,
+ &my_qp->pf,
+ update_mask,
+ mqpcb, my_qp->ehca_qp_core.galpas.kernel);
+
+ if (hipz_rc != H_Success) {
+ retcode = ehca2ib_return_code(hipz_rc);
+ EDEB_ERR(4, "hipz_h_modify_qp() failed rc=%lx "
+ "ehca_qp=%p qp_num=%x",
+ hipz_rc, my_qp, ibqp->qp_num);
+ goto modify_qp_exit2;
+ }
+
+ if ((my_qp->ehca_qp_core.qp_type == IB_QPT_UD
+ || my_qp->ehca_qp_core.qp_type == IB_QPT_GSI
+ || my_qp->ehca_qp_core.qp_type == IB_QPT_SMI)
+ && statetrans == IB_QPST_SQE2RTS) {
+ /* doorbell to reprocessing wqes */
+ iosync(); /* serialize GAL register access */
+ hipz_update_SQA(&my_qp->ehca_qp_core, bad_wqe_cnt-1);
+ EDEB(6, "doorbell for %x wqes", bad_wqe_cnt);
+ }
+
+ if (statetrans == IB_QPST_RESET2INIT ||
+ statetrans == IB_QPST_INIT2INIT) {
+ mqpcb->qp_enable = TRUE;
+ mqpcb->qp_state = EHCA_QPS_INIT;
+ update_mask = 0;
+ update_mask = EHCA_BMASK_SET(MQPCB_MASK_QP_ENABLE, 1);
+
+ EDEB(7, "ehca_qp=%p qp_num=%x "
+ "RESET_2_INIT needs an additional enable "
+ "-> update_mask=%lx", my_qp, ibqp->qp_num, update_mask);
+
+ hipz_rc = hipz_h_modify_qp(shca->ipz_hca_handle,
+ my_qp->ipz_qp_handle,
+ &my_qp->pf,
+ update_mask,
+ mqpcb,
+ my_qp->ehca_qp_core.galpas.kernel);
+
+ if (hipz_rc != H_Success) {
+ retcode = ehca2ib_return_code(hipz_rc);
+ EDEB_ERR(4, "ENABLE in context of "
+ "RESET_2_INIT failed! "
+ "Maybe you didn't get a LID "
+ "hipz_rc=%lx ehca_qp=%p qp_num=%x",
+ hipz_rc, my_qp, ibqp->qp_num);
+ goto modify_qp_exit2;
+ }
+ }
+
+ if (statetrans == IB_QPST_ANY2RESET) {
+ ipz_QEit_reset(&my_qp->ehca_qp_core.ipz_rqueue);
+ ipz_QEit_reset(&my_qp->ehca_qp_core.ipz_squeue);
+ }
+
+ if (attr_mask & IB_QP_QKEY) {
+ my_qp->ehca_qp_core.qkey = attr->qkey;
+ }
+
+ modify_qp_exit2:
+ if (squeue_locked) { /* this means: sqe -> rts */
+ spin_unlock_irqrestore(&my_qp->spinlock_s, spl_flags);
+ my_qp->sqerr_purgeflag = 1;
+ }
+
+ modify_qp_exit1:
+ kfree(mqpcb);
+
+ modify_qp_exit0:
+ EDEB_EX(7, "ehca_qp=%p qp_num=%x ibqp_type=%x retcode=%x",
+ my_qp, ibqp->qp_num, ibqp->qp_type, retcode);
+ return retcode;
+}
+
+int ehca_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask)
+{
+ int ret = 0;
+ struct ehca_qp *my_qp = NULL;
+
+ EHCA_CHECK_ADR(ibqp);
+ EHCA_CHECK_ADR(attr);
+ EHCA_CHECK_ADR(ibqp->device);
+
+ my_qp = container_of(ibqp, struct ehca_qp, ib_qp);
+
+ EDEB_EN(7, "ehca_qp=%p qp_num=%x ibqp_type=%x attr_mask=%x",
+ my_qp, ibqp->qp_num, ibqp->qp_type, attr_mask);
+
+ ret = internal_modify_qp(ibqp, attr, attr_mask, 0);
+
+ EDEB_EX(7, "ehca_qp=%p qp_num=%x ibqp_type=%x ret=%x",
+ my_qp, ibqp->qp_num, ibqp->qp_type, ret);
+ return ret;
+}
+
+int ehca_query_qp(struct ib_qp *qp,
+ struct ib_qp_attr *qp_attr,
+ int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr)
+{
+ struct ehca_qp *my_qp = NULL;
+ struct ehca_shca *shca = NULL;
+ struct hcp_modify_qp_control_block *qpcb = NULL;
+
+ struct ipz_adapter_handle adapter_handle;
+ int cnt = 0, retcode = 0;
+ u64 hipz_rc = H_Success;
+
+ EHCA_CHECK_ADR(qp);
+ EHCA_CHECK_ADR(qp_attr);
+ EHCA_CHECK_DEVICE(qp->device);
+
+ my_qp = container_of(qp, struct ehca_qp, ib_qp);
+
+ EDEB_EN(7, "ehca_qp=%p qp_num=%x "
+ "qp_attr=%p qp_attr_mask=%x qp_init_attr=%p",
+ my_qp, qp->qp_num, qp_attr, qp_attr_mask, qp_init_attr);
+
+ shca = container_of(qp->device, struct ehca_shca, ib_device);
+ adapter_handle = shca->ipz_hca_handle;
+
+ if (qp_attr_mask & QP_ATTR_QUERY_NOT_SUPPORTED) {
+ retcode = -EINVAL;
+ EDEB_ERR(4,"Invalid attribute mask "
+ "ehca_qp=%p qp_num=%x qp_attr_mask=%x ",
+ my_qp, qp->qp_num, qp_attr_mask);
+ goto query_qp_exit0;
+ }
+
+ qpcb = kmalloc(EHCA_PAGESIZE, GFP_KERNEL);
+
+ if (qpcb == NULL) {
+ retcode = -ENOMEM;
+ EDEB_ERR(4,"Out of memory for qpcb "
+ "ehca_qp=%p qp_num=%x", my_qp, qp->qp_num);
+ goto query_qp_exit0;
+ }
+ memset(qpcb, 0, sizeof(*qpcb));
+
+ hipz_rc = hipz_h_query_qp(adapter_handle,
+ my_qp->ipz_qp_handle,
+ &my_qp->pf,
+ qpcb, my_qp->ehca_qp_core.galpas.kernel);
+
+ if (hipz_rc != H_Success) {
+ retcode = ehca2ib_return_code(hipz_rc);
+ EDEB_ERR(4,"hipz_h_query_qp() failed "
+ "ehca_qp=%p qp_num=%x hipz_rc=%lx",
+ my_qp, qp->qp_num, hipz_rc);
+ goto query_qp_exit1;
+ }
+
+ qp_attr->cur_qp_state = ehca2ib_qp_state(qpcb->qp_state);
+ qp_attr->qp_state = qp_attr->cur_qp_state;
+ if (qp_attr->cur_qp_state == -EINVAL) {
+ retcode = -EINVAL;
+ EDEB_ERR(4,"Got invalid ehca_qp_state=%x "
+ "ehca_qp=%p qp_num=%x",
+ qpcb->qp_state, my_qp, qp->qp_num);
+ goto query_qp_exit1;
+ }
+
+ if (qp_attr->qp_state == IB_QPS_SQD) {
+ qp_attr->sq_draining = TRUE;
+ }
+
+ qp_attr->qkey = qpcb->qkey;
+ qp_attr->path_mtu = qpcb->path_mtu;
+ qp_attr->path_mig_state = qpcb->path_migration_state;
+ qp_attr->rq_psn = qpcb->receive_psn;
+ qp_attr->sq_psn = qpcb->send_psn;
+ qp_attr->min_rnr_timer = qpcb->min_rnr_nak_timer_field;
+ qp_attr->cap.max_send_wr = qpcb->max_nr_outst_send_wr-1;
+ qp_attr->cap.max_recv_wr = qpcb->max_nr_outst_recv_wr-1;
+ /* UD_AV CIRCUMVENTION */
+ if (my_qp->ehca_qp_core.qp_type == IB_QPT_UD) {
+ qp_attr->cap.max_send_sge =
+ qpcb->actual_nr_sges_in_sq_wqe - 2;
+ qp_attr->cap.max_recv_sge =
+ qpcb->actual_nr_sges_in_rq_wqe - 2;
+ } else {
+ qp_attr->cap.max_send_sge =
+ qpcb->actual_nr_sges_in_sq_wqe;
+ qp_attr->cap.max_recv_sge =
+ qpcb->actual_nr_sges_in_rq_wqe;
+ }
+
+ qp_attr->cap.max_inline_data = my_qp->sq_max_inline_data_size;
+ qp_attr->dest_qp_num = qpcb->dest_qp_nr;
+
+ qp_attr->pkey_index =
+ EHCA_BMASK_GET(MQPCB_PRIM_P_KEY_IDX, qpcb->prim_p_key_idx);
+
+ qp_attr->port_num =
+ EHCA_BMASK_GET(MQPCB_PRIM_PHYS_PORT, qpcb->prim_phys_port);
+
+ qp_attr->timeout = qpcb->timeout;
+ qp_attr->retry_cnt = qpcb->retry_count;
+ qp_attr->rnr_retry = qpcb->rnr_retry_count;
+
+ qp_attr->alt_pkey_index =
+ EHCA_BMASK_GET(MQPCB_PRIM_P_KEY_IDX, qpcb->alt_p_key_idx);
+
+ qp_attr->alt_port_num = qpcb->alt_phys_port;
+ qp_attr->alt_timeout = qpcb->timeout_al;
+
+ /* primary av */
+ qp_attr->ah_attr.sl = qpcb->service_level;
+
+ if (qpcb->send_grh_flag) {
+ qp_attr->ah_attr.ah_flags = IB_AH_GRH;
+ }
+
+ qp_attr->ah_attr.static_rate = qpcb->max_static_rate;
+ qp_attr->ah_attr.dlid = qpcb->dlid;
+ qp_attr->ah_attr.src_path_bits = qpcb->source_path_bits;
+ qp_attr->ah_attr.port_num = qp_attr->port_num;
+
+ /* primary GRH */
+ qp_attr->ah_attr.grh.traffic_class = qpcb->traffic_class;
+ qp_attr->ah_attr.grh.hop_limit = qpcb->hop_limit;
+ qp_attr->ah_attr.grh.sgid_index = qpcb->source_gid_idx;
+ qp_attr->ah_attr.grh.flow_label = qpcb->flow_label;
+
+ for (cnt = 0; cnt < 16; cnt++) {
+ qp_attr->ah_attr.grh.dgid.raw[cnt] =
+ qpcb->dest_gid.byte[cnt];
+ }
+
+ /* alternate AV */
+ qp_attr->alt_ah_attr.sl = qpcb->service_level_al;
+ if (qpcb->send_grh_flag_al) {
+ qp_attr->alt_ah_attr.ah_flags = IB_AH_GRH;
+ }
+
+ qp_attr->alt_ah_attr.static_rate = qpcb->max_static_rate_al;
+ qp_attr->alt_ah_attr.dlid = qpcb->dlid_al;
+ qp_attr->alt_ah_attr.src_path_bits = qpcb->source_path_bits_al;
+
+ /* alternate GRH */
+ qp_attr->alt_ah_attr.grh.traffic_class = qpcb->traffic_class_al;
+ qp_attr->alt_ah_attr.grh.hop_limit = qpcb->hop_limit_al;
+ qp_attr->alt_ah_attr.grh.sgid_index = qpcb->source_gid_idx_al;
+ qp_attr->alt_ah_attr.grh.flow_label = qpcb->flow_label_al;
+
+ for (cnt = 0; cnt < 16; cnt++) {
+ qp_attr->alt_ah_attr.grh.dgid.raw[cnt] =
+ qpcb->dest_gid_al.byte[cnt];
+ }
+
+ /* return init attributes given in ehca_create_qp */
+ if (qp_init_attr != NULL) {
+ *qp_init_attr = my_qp->init_attr;
+ }
+
+ EDEB(7, "ehca_qp=%p qp_number=%x dest_qp_number=%x "
+ "dlid=%x path_mtu=%x dest_gid=%lx_%lx "
+ "service_level=%x qp_state=%x",
+ my_qp, qpcb->qp_number, qpcb->dest_qp_nr,
+ qpcb->dlid, qpcb->path_mtu,
+ qpcb->dest_gid.dw[0], qpcb->dest_gid.dw[1],
+ qpcb->service_level, qpcb->qp_state);
+
+ EDEB_DMP(7, qpcb, 4*70, "ehca_qp=%p qp_num=%x", my_qp, qp->qp_num);
+
+ query_qp_exit1:
+ kfree(qpcb);
+
+ query_qp_exit0:
+ EDEB_EX(7, "ehca_qp=%p qp_num=%x retcode=%x",
+ my_qp, qp->qp_num, retcode);
+ return retcode;
+}
+
+int ehca_destroy_qp(struct ib_qp *ibqp)
+{
+ struct ehca_qp *my_qp = NULL;
+ struct ehca_shca *shca = NULL;
+ struct ehca_pfqp *qp_pf = NULL;
+ u32 qp_num = 0;
+ int retcode = 0;
+ u64 hipz_ret = H_Success;
+ u8 port_num = 0;
+ enum ib_qp_type qp_type;
+
+ EHCA_CHECK_ADR(ibqp);
+
+ my_qp = container_of(ibqp, struct ehca_qp, ib_qp);
+ qp_num = ibqp->qp_num;
+ qp_pf = &my_qp->pf;
+
+ shca = container_of(ibqp->device, struct ehca_shca, ib_device);
+
+ EDEB_EN(7, "ehca_qp=%p qp_num=%x", my_qp, ibqp->qp_num);
+
+ if (my_qp->send_cq != NULL) {
+ retcode = ehca_cq_unassign_qp(my_qp->send_cq,
+ my_qp->ehca_qp_core.real_qp_num);
+ if (retcode != 0) {
+ EDEB_ERR(4, "Couldn't unassign qp from send_cq "
+ "ret=%x qp_num=%x cq_num=%x",
+ retcode, my_qp->ib_qp.qp_num,
+ my_qp->send_cq->cq_number);
+ goto destroy_qp_exit0;
+ }
+ }
+
+ down_write(&ehca_qp_idr_sem);
+ idr_remove(&ehca_qp_idr, my_qp->token);
+ up_write(&ehca_qp_idr_sem);
+
+ /* un-mmap if vma alloc */
+ if (my_qp->uspace_rqueue != 0) {
+ struct ehca_qp_core *qp_core = &my_qp->ehca_qp_core;
+ retcode = ehca_munmap(my_qp->uspace_rqueue,
+ qp_core->ipz_rqueue.queue_length);
+ retcode = ehca_munmap(my_qp->uspace_squeue,
+ qp_core->ipz_squeue.queue_length);
+ retcode = ehca_munmap(my_qp->uspace_fwh, 4096);
+ }
+
+ hipz_ret = hipz_h_destroy_qp(shca->ipz_hca_handle, my_qp);
+ if (hipz_ret != H_Success) {
+ EDEB_ERR(4, "hipz_h_destroy_qp() failed "
+ "rc=%lx ehca_qp=%p qp_num=%x",
+ hipz_ret, qp_pf, qp_num);
+ goto destroy_qp_exit0;
+ }
+
+ port_num = my_qp->init_attr.port_num;
+ qp_type = my_qp->init_attr.qp_type;
+
+ /* TODO: later with IB_QPT_SMI */
+ if (qp_type == IB_QPT_GSI) {
+ struct ib_event event;
+
+ EDEB(4, "EHCA port %x is inactive.", port_num);
+ event.device = &shca->ib_device;
+ event.event = IB_EVENT_PORT_ERR;
+ event.element.port_num = port_num;
+ shca->sport[port_num - 1].port_state = IB_PORT_DOWN;
+ ib_dispatch_event(&event);
+ }
+
+ ipz_queue_dtor(&my_qp->ehca_qp_core.ipz_rqueue);
+ ipz_queue_dtor(&my_qp->ehca_qp_core.ipz_squeue);
+ ehca_qp_delete(my_qp);
+
+ destroy_qp_exit0:
+ retcode = ehca2ib_return_code(hipz_ret);
+ EDEB_EX(7,"ret=%x", retcode);
+ return retcode;
+}
+
+/* eof ehca_qp.c */
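[Editor's note: the long chain of `if (attr_mask & IB_QP_...)` branches in internal_modify_qp() above each OR one bit into update_mask before the single hipz_h_modify_qp() call. A minimal sketch of that accumulation pattern, with hypothetical helper names (not the driver's actual EHCA_BMASK macros or the real ib_qp_attr_mask flags):]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical firmware-side mask bits, modeled as single bits here;
 * the real EHCA_BMASK_SET packs values into named bit ranges. */
#define SKETCH_MASK_QKEY     (1ULL << 0)
#define SKETCH_MASK_PATH_MTU (1ULL << 1)

/* Hypothetical stand-ins for the ib_qp_attr_mask flags from the core. */
#define SKETCH_ATTR_QKEY     (1 << 0)
#define SKETCH_ATTR_PATH_MTU (1 << 1)

static uint64_t build_update_mask(int attr_mask)
{
	uint64_t update_mask = 0;

	/* Each recognized attribute contributes its firmware bit, just as
	 * each attr_mask branch in internal_modify_qp() ORs one
	 * EHCA_BMASK_SET(...) result into update_mask. */
	if (attr_mask & SKETCH_ATTR_QKEY)
		update_mask |= SKETCH_MASK_QKEY;
	if (attr_mask & SKETCH_ATTR_PATH_MTU)
		update_mask |= SKETCH_MASK_PATH_MTU;
	return update_mask;
}
```

The point of the pattern is that unrecognized attr_mask bits contribute nothing, so the firmware only ever sees fields the caller explicitly asked to modify.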

2006-02-18 01:00:32

by Roland Dreier

Subject: [PATCH 20/22] ehca userspace verbs

From: Roland Dreier <[email protected]>


---

drivers/infiniband/hw/ehca/ehca_uverbs.c | 376 ++++++++++++++++++++++++++++++
1 files changed, 376 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_uverbs.c b/drivers/infiniband/hw/ehca/ehca_uverbs.c
new file mode 100644
index 0000000..f813e9c
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_uverbs.c
@@ -0,0 +1,376 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * userspace support verbs
+ *
+ * Authors: Heiko J Schick <[email protected]>
+ * Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_uverbs.c,v 1.29 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#undef DEB_PREFIX
+#define DEB_PREFIX "uver"
+
+#include "ehca_kernel.h"
+#include "ehca_tools.h"
+#include "ehca_classes.h"
+#include "ehca_iverbs.h"
+#include "ehca_eq.h"
+#include "ehca_mrmw.h"
+
+#include "hcp_sense.h" /* TODO: later via hipz_* header file */
+#include "hcp_if.h" /* TODO: later via hipz_* header file */
+
+struct ib_ucontext *ehca_alloc_ucontext(struct ib_device *device,
+ struct ib_udata *udata)
+{
+ struct ehca_ucontext *my_context = NULL;
+ EHCA_CHECK_ADR_P(device);
+ EDEB_EN(7, "device=%p name=%s", device, device->name);
+ my_context = kmalloc(sizeof *my_context, GFP_KERNEL);
+ if (NULL == my_context) {
+ EDEB_ERR(4, "Out of memory device=%p", device);
+ return ERR_PTR(-ENOMEM);
+ }
+ memset(my_context, 0, sizeof(*my_context));
+ EDEB_EX(7, "device=%p ucontext=%p", device, my_context);
+ return &my_context->ib_ucontext;
+}
+
+int ehca_dealloc_ucontext(struct ib_ucontext *context)
+{
+ struct ehca_ucontext *my_context = NULL;
+ EHCA_CHECK_ADR(context);
+ EDEB_EN(7, "ucontext=%p", context);
+ my_context = container_of(context, struct ehca_ucontext, ib_ucontext);
+ kfree(my_context);
+ EDEB_EX(7, "ucontext=%p", context);
+ return 0;
+}
+
+struct page *ehca_nopage(struct vm_area_struct *vma,
+ unsigned long address, int *type)
+{
+ struct page *mypage = NULL;
+ u64 fileoffset = vma->vm_pgoff << PAGE_SHIFT;
+ u32 idr_handle = fileoffset >> 32;
+ u32 q_type = (fileoffset >> 28) & 0xF; /* CQ, QP,... */
+ u32 rsrc_type = (fileoffset >> 24) & 0xF; /* sq,rq,cmnd_window */
+
+ EDEB_EN(7,
+ "vm_start=%lx vm_end=%lx vm_page_prot=%lx vm_fileoff=%lx",
+ vma->vm_start, vma->vm_end, vma->vm_page_prot, fileoffset);
+
+ if (q_type == 1) { /* CQ */
+ struct ehca_cq *cq;
+
+ down_read(&ehca_cq_idr_sem);
+ cq = idr_find(&ehca_cq_idr, idr_handle);
+ up_read(&ehca_cq_idr_sem);
+
+ /* make sure this mmap really belongs to the authorized user */
+ if (cq == NULL) {
+ EDEB_ERR(4, "cq is NULL ret=NOPAGE_SIGBUS");
+ return NOPAGE_SIGBUS;
+ }
+ if (rsrc_type == 2) {
+ void *vaddr;
+ EDEB(6, "cq=%p cq queuearea", cq);
+ vaddr = address - vma->vm_start
+ + cq->ehca_cq_core.ipz_queue.queue;
+ EDEB(6, "queue=%p vaddr=%p",
+ cq->ehca_cq_core.ipz_queue.queue, vaddr);
+ mypage = vmalloc_to_page(vaddr);
+ }
+ } else if (q_type == 2) { /* QP */
+ struct ehca_qp *qp;
+
+ down_read(&ehca_qp_idr_sem);
+ qp = idr_find(&ehca_qp_idr, idr_handle);
+ up_read(&ehca_qp_idr_sem);
+
+ /* make sure this mmap really belongs to the authorized user */
+ if (qp == NULL) {
+ EDEB_ERR(4, "qp is NULL ret=NOPAGE_SIGBUS");
+ return NOPAGE_SIGBUS;
+ }
+ if (rsrc_type == 2) { /* rqueue */
+ void *vaddr;
+ EDEB(6, "qp=%p qp rqueuearea", qp);
+ vaddr = address - vma->vm_start
+ + qp->ehca_qp_core.ipz_rqueue.queue;
+ EDEB(6, "rqueue=%p vaddr=%p",
+ qp->ehca_qp_core.ipz_rqueue.queue, vaddr);
+ mypage = vmalloc_to_page(vaddr);
+ } else if (rsrc_type == 3) { /* squeue */
+ void *vaddr;
+ EDEB(6, "qp=%p qp squeuearea", qp);
+ vaddr = address - vma->vm_start
+ + qp->ehca_qp_core.ipz_squeue.queue;
+ EDEB(6, "squeue=%p vaddr=%p",
+ qp->ehca_qp_core.ipz_squeue.queue, vaddr);
+ mypage = vmalloc_to_page(vaddr);
+ }
+ }
+ if (mypage == NULL) {
+ EDEB_ERR(4, "Invalid page adr==NULL ret=NOPAGE_SIGBUS");
+ return NOPAGE_SIGBUS;
+ }
+ get_page(mypage);
+ EDEB_EX(7, "page adr=%p", mypage);
+ return mypage;
+}
+
+static struct vm_operations_struct ehcau_vm_ops = {
+ .nopage = ehca_nopage,
+};
+
+/* TODO: better error output messages!
+ * Do not return without setting an error code.
+ */
+int ehca_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
+{
+ u64 fileoffset = vma->vm_pgoff << PAGE_SHIFT;
+
+ u32 idr_handle = fileoffset >> 32;
+ u32 q_type = (fileoffset >> 28) & 0xF; /* CQ, QP,... */
+ u32 rsrc_type = (fileoffset >> 24) & 0xF; /* sq,rq,cmnd_window */
+ u32 ret = -EFAULT; /* assume the worst */
+ u64 vsize = 0; /* must be calculated/set below */
+ u64 physical = 0; /* must be calculated/set below */
+
+ EDEB_EN(7, "vm_start=%lx vm_end=%lx vm_page_prot=%lx vm_fileoff=%lx",
+ vma->vm_start, vma->vm_end, vma->vm_page_prot, fileoffset);
+
+ if (q_type == 1) { /* CQ */
+ struct ehca_cq *cq;
+
+ down_read(&ehca_cq_idr_sem);
+ cq = idr_find(&ehca_cq_idr, idr_handle);
+ up_read(&ehca_cq_idr_sem);
+
+ /* make sure this mmap really belongs to the authorized user */
+ if (cq == NULL)
+ return -EINVAL;
+ if (cq->ib_cq.uobject == NULL)
+ return -EINVAL;
+ if (cq->ib_cq.uobject->context != context)
+ return -EINVAL;
+ if (rsrc_type == 1) { /* galpa fw handle */
+ EDEB(6, "cq=%p cq triggerarea", cq);
+ vma->vm_flags |= VM_RESERVED;
+ vsize = vma->vm_end - vma->vm_start;
+ if (vsize != 4096) {
+ EDEB_ERR(4, "invalid vsize=%lx",
+ vma->vm_end - vma->vm_start);
+ ret = -EINVAL;
+ goto mmap_exit0;
+ }
+
+ physical = cq->ehca_cq_core.galpas.user.fw_handle;
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ vma->vm_flags |= VM_IO | VM_RESERVED;
+
+ EDEB(6, "vsize=%lx physical=%lx", vsize,
+ physical);
+ ret = remap_pfn_range(vma, vma->vm_start,
+ physical >> PAGE_SHIFT, vsize,
+ vma->vm_page_prot);
+ if (ret != 0) {
+ EDEB_ERR(4,
+ "Error: remap_pfn_range() returned %x!",
+ ret);
+ ret = -ENOMEM;
+ }
+ goto mmap_exit0;
+ } else if (rsrc_type == 2) { /* cq queue_addr */
+ EDEB(6, "cq=%p cq q_addr", cq);
+ /* vma->vm_page_prot =
+ * pgprot_noncached(vma->vm_page_prot); */
+ vma->vm_flags |= VM_RESERVED;
+ vma->vm_ops = &ehcau_vm_ops;
+ ret = 0;
+ goto mmap_exit0;
+ } else {
+ EDEB_ERR(6, "bad resource type %x", rsrc_type);
+ ret = -EINVAL;
+ goto mmap_exit0;
+ }
+ } else if (q_type == 2) { /* QP */
+ struct ehca_qp *qp;
+
+ down_read(&ehca_qp_idr_sem);
+ qp = idr_find(&ehca_qp_idr, idr_handle);
+ up_read(&ehca_qp_idr_sem);
+
+ /* make sure this mmap really belongs to the authorized user */
+ if (qp == NULL || qp->ib_qp.uobject == NULL ||
+ qp->ib_qp.uobject->context != context) {
+ /* do not dereference qp members here: qp may be NULL */
+ EDEB(6, "qp=%p", qp);
+ ret = -EINVAL;
+ goto mmap_exit0;
+ }
+ if (rsrc_type == 1) { /* galpa fw handle */
+ EDEB(6, "qp=%p qp triggerarea", qp);
+ vma->vm_flags |= VM_RESERVED;
+ vsize = vma->vm_end - vma->vm_start;
+ if (vsize != 4096) {
+ EDEB_ERR(4, "invalid vsize=%lx",
+ vma->vm_end - vma->vm_start);
+ ret = -EINVAL;
+ goto mmap_exit0;
+ }
+
+ physical = qp->ehca_qp_core.galpas.user.fw_handle;
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ vma->vm_flags |= VM_IO | VM_RESERVED;
+
+ EDEB(6, "vsize=%lx physical=%lx", vsize,
+ physical);
+ ret = remap_pfn_range(vma, vma->vm_start,
+ physical >> PAGE_SHIFT, vsize,
+ vma->vm_page_prot);
+ if (ret != 0) {
+ EDEB_ERR(4,
+ "Error: remap_pfn_range() returned %x!",
+ ret);
+ ret = -ENOMEM;
+ }
+ goto mmap_exit0;
+ } else if (rsrc_type == 2) { /* qp rqueue_addr */
+ EDEB(6, "qp=%p qp rqueue_addr", qp);
+ vma->vm_flags |= VM_RESERVED;
+ vma->vm_ops = &ehcau_vm_ops;
+ ret = 0;
+ goto mmap_exit0;
+ } else if (rsrc_type == 3) { /* qp squeue_addr */
+ EDEB(6, "qp=%p qp squeue_addr", qp);
+ vma->vm_flags |= VM_RESERVED;
+ vma->vm_ops = &ehcau_vm_ops;
+ ret = 0;
+ goto mmap_exit0;
+ } else {
+ EDEB_ERR(4, "bad resource type %x",
+ rsrc_type);
+ ret = -EINVAL;
+ goto mmap_exit0;
+ }
+ } else {
+ EDEB_ERR(4, "bad queue type %x", q_type);
+ ret = -EINVAL;
+ goto mmap_exit0;
+ }
+
+ mmap_exit0:
+ EDEB_EX(7, "ret=%x", ret);
+ return ret;
+}
+
+int ehca_mmap_nopage(u64 foffset, u64 length, void **mapped,
+ struct vm_area_struct **vma)
+{
+ down_write(&current->mm->mmap_sem);
+ *mapped = (void *)do_mmap(NULL, 0, length,
+ PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
+ foffset);
+ up_write(&current->mm->mmap_sem);
+ if (*mapped) {
+ *vma = find_vma(current->mm, (u64)*mapped);
+ if (*vma) {
+ (*vma)->vm_flags |= VM_RESERVED;
+ (*vma)->vm_ops = &ehcau_vm_ops;
+ } else {
+ EDEB_ERR(4, "couldn't find queue vma queue=%p",
+ *mapped);
+ }
+ } else {
+ EDEB_ERR(4, "couldn't create mmap length=%lx", length);
+ }
+ EDEB(7, "mapped=%p", *mapped);
+ return 0;
+}
+
+int ehca_mmap_register(u64 physical, void **mapped,
+ struct vm_area_struct **vma)
+{
+ int ret;
+ unsigned long vsize;
+ ehca_mmap_nopage(0,4096,mapped,vma);
+ (*vma)->vm_flags |= VM_RESERVED;
+ vsize = (*vma)->vm_end - (*vma)->vm_start;
+ if (vsize != 4096) {
+ EDEB_ERR(4, "invalid vsize=%lx",
+ (*vma)->vm_end - (*vma)->vm_start);
+ ret = -EINVAL;
+ return ret;
+ }
+
+ (*vma)->vm_page_prot = pgprot_noncached((*vma)->vm_page_prot);
+ (*vma)->vm_flags |= VM_IO | VM_RESERVED;
+
+ EDEB(6, "vsize=%lx physical=%lx", vsize,
+ physical);
+ ret = remap_pfn_range((*vma), (*vma)->vm_start,
+ physical >> PAGE_SHIFT, vsize,
+ (*vma)->vm_page_prot);
+ if (ret != 0) {
+ EDEB_ERR(4,
+ "Error: remap_pfn_range() returned %x!",
+ ret);
+ ret = -ENOMEM;
+ }
+ return ret;
+}
+
+int ehca_munmap(unsigned long addr, size_t len)
+{
+ int ret = 0;
+ struct mm_struct *mm = current->mm;
+
+ if (mm) {
+ down_write(&mm->mmap_sem);
+ ret = do_munmap(mm, addr, len);
+ up_write(&mm->mmap_sem);
+ }
+ return ret;
+}
+
+/* eof ehca_uverbs.c */
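[Editor's note: both ehca_nopage() and ehca_mmap() above decode the same 64-bit file offset: bits 63..32 carry the idr handle, bits 31..28 the queue type (1 = CQ, 2 = QP), bits 27..24 the resource type (1 = galpa trigger page, 2 = rqueue, 3 = squeue). A sketch of that encoding with hypothetical helper names — the driver open-codes the shifts:]

```c
#include <assert.h>
#include <stdint.h>

static uint32_t fileoffset_idr_handle(uint64_t fileoffset)
{
	return (uint32_t)(fileoffset >> 32);
}

static uint32_t fileoffset_q_type(uint64_t fileoffset)
{
	return (fileoffset >> 28) & 0xF;	/* CQ, QP, ... */
}

static uint32_t fileoffset_rsrc_type(uint64_t fileoffset)
{
	return (fileoffset >> 24) & 0xF;	/* galpa, rq, sq */
}

/* Inverse direction: what a userspace library would encode into the
 * mmap offset before calling mmap() on the device file. */
static uint64_t fileoffset_encode(uint32_t handle, uint32_t q_type,
				  uint32_t rsrc_type)
{
	return ((uint64_t)handle << 32) |
	       ((uint64_t)(q_type & 0xF) << 28) |
	       ((uint64_t)(rsrc_type & 0xF) << 24);
}
```

Packing the idr handle into the offset is what lets the fault handler find the right kernel object again, so the `idr_find()` lookups double as the authorization check noted in the comments.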

2006-02-18 01:00:05

by Roland Dreier

Subject: [PATCH 19/22] ehca memory regions

From: Roland Dreier <[email protected]>

Nearly all the inline functions in ehca_mrmw.h look too big to
be inlined. Why can't they just be static functions in ehca_mrmw.c?
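[Editor's note: the refactor being asked for is mechanical — a large `static inline` body in the header, re-emitted into every including .c file, becomes one out-of-line definition in a single .c file with only a prototype left in the header. A sketch with hypothetical names, not the actual ehca_mrmw API:]

```c
#include <assert.h>
#include <stdint.h>

/* --- ehca_mrmw.h, before: body forced into every translation unit --- */
static inline uint32_t ehca_sketch_pages_inline(uint64_t size,
						uint32_t pagesize)
{
	return (uint32_t)((size + pagesize - 1) / pagesize);
}

/* --- ehca_mrmw.h, after: only the declaration stays in the header --- */
uint32_t ehca_sketch_pages(uint64_t size, uint32_t pagesize);

/* --- ehca_mrmw.c, after: exactly one out-of-line definition --- */
uint32_t ehca_sketch_pages(uint64_t size, uint32_t pagesize)
{
	return (uint32_t)((size + pagesize - 1) / pagesize);
}
```

The behavior is identical; the win is smaller code size (one copy instead of one per includer) and the option to make the function `static` if only ehca_mrmw.c uses it.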
---

drivers/infiniband/hw/ehca/ehca_mrmw.c | 1711 ++++++++++++++++++++++++++++++++
drivers/infiniband/hw/ehca/ehca_mrmw.h | 739 ++++++++++++++
2 files changed, 2450 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_mrmw.c b/drivers/infiniband/hw/ehca/ehca_mrmw.c
new file mode 100644
index 0000000..d756082
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_mrmw.c
@@ -0,0 +1,1711 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * MR/MW functions
+ *
+ * Authors: Dietmar Decker <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_mrmw.c,v 1.86 2006/02/07 07:51:13 decker Exp $
+ */
+
+#undef DEB_PREFIX
+#define DEB_PREFIX "mrmw"
+
+#include "ehca_kernel.h"
+#include "ehca_iverbs.h"
+#include "hcp_if.h"
+#include "ehca_mrmw.h"
+
+extern int ehca_use_hp_mr;
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+struct ib_mr *ehca_get_dma_mr(struct ib_pd *pd, int mr_access_flags)
+{
+ struct ib_mr *ib_mr;
+ int retcode = 0;
+ struct ehca_mr *e_maxmr = 0;
+ struct ehca_pd *e_pd;
+ struct ehca_shca *shca;
+
+ EDEB_EN(7, "pd=%p mr_access_flags=%x", pd, mr_access_flags);
+
+ EHCA_CHECK_PD_P(pd);
+ e_pd = container_of(pd, struct ehca_pd, ib_pd);
+ shca = container_of(pd->device, struct ehca_shca, ib_device);
+
+ if (shca->maxmr) {
+ e_maxmr = ehca_mr_new();
+ if (!e_maxmr) {
+ EDEB_ERR(4, "out of memory");
+ ib_mr = ERR_PTR(-ENOMEM);
+ goto get_dma_mr_exit0;
+ }
+
+ retcode = ehca_reg_maxmr(shca, e_maxmr,
+ (u64 *)KERNELBASE,
+ mr_access_flags, e_pd,
+ &e_maxmr->ib.ib_mr.lkey,
+ &e_maxmr->ib.ib_mr.rkey);
+ if (retcode != 0) {
+ ib_mr = ERR_PTR(retcode);
+ goto get_dma_mr_exit0;
+ }
+ ib_mr = &e_maxmr->ib.ib_mr;
+ } else {
+ EDEB_ERR(4, "no internal max-MR exist!");
+ ib_mr = ERR_PTR(-EINVAL);
+ goto get_dma_mr_exit0;
+ }
+
+ get_dma_mr_exit0:
+ if (IS_ERR(ib_mr) == 0)
+ EDEB_EX(7, "ib_mr=%p lkey=%x rkey=%x",
+ ib_mr, ib_mr->lkey, ib_mr->rkey);
+ else
+ EDEB_EX(4, "rc=%lx pd=%p mr_access_flags=%x ",
+ PTR_ERR(ib_mr), pd, mr_access_flags);
+ return (ib_mr);
+} /* end ehca_get_dma_mr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+struct ib_mr *ehca_reg_phys_mr(struct ib_pd *pd,
+ struct ib_phys_buf *phys_buf_array,
+ int num_phys_buf,
+ int mr_access_flags,
+ u64 *iova_start)
+{
+ struct ib_mr *ib_mr = 0;
+ int retcode = 0;
+ struct ehca_mr *e_mr = 0;
+ struct ehca_shca *shca = 0;
+ struct ehca_pd *e_pd = 0;
+ u64 size = 0;
+ struct ehca_mr_pginfo pginfo={0,0,0,0,0,0,0,0,0,0,0,0};
+ u32 num_pages_mr = 0;
+
+ EDEB_EN(7, "pd=%p phys_buf_array=%p num_phys_buf=%x "
+ "mr_access_flags=%x iova_start=%p", pd, phys_buf_array,
+ num_phys_buf, mr_access_flags, iova_start);
+
+ EHCA_CHECK_PD_P(pd);
+ if ((num_phys_buf <= 0) || ehca_adr_bad(phys_buf_array)) {
+ EDEB_ERR(4, "bad input values: num_phys_buf=%x "
+ "phys_buf_array=%p", num_phys_buf, phys_buf_array);
+ ib_mr = ERR_PTR(-EINVAL);
+ goto reg_phys_mr_exit0;
+ }
+ if (((mr_access_flags & IB_ACCESS_REMOTE_WRITE) &&
+ !(mr_access_flags & IB_ACCESS_LOCAL_WRITE)) ||
+ ((mr_access_flags & IB_ACCESS_REMOTE_ATOMIC) &&
+ !(mr_access_flags & IB_ACCESS_LOCAL_WRITE))) {
+ /* Remote Write Access requires Local Write Access */
+ /* Remote Atomic Access requires Local Write Access */
+ EDEB_ERR(4, "bad input values: mr_access_flags=%x",
+ mr_access_flags);
+ ib_mr = ERR_PTR(-EINVAL);
+ goto reg_phys_mr_exit0;
+ }
+
+ /* check physical buffer list and calculate size */
+ retcode = ehca_mr_chk_buf_and_calc_size(phys_buf_array, num_phys_buf,
+ iova_start, &size);
+ if (retcode != 0) {
+ ib_mr = ERR_PTR(retcode);
+ goto reg_phys_mr_exit0;
+ }
+ if ((size == 0) ||
+ ((0xFFFFFFFFFFFFFFFF - size) < (u64)iova_start)) {
+ EDEB_ERR(4, "bad input values: size=%lx iova_start=%p",
+ size, iova_start);
+ ib_mr = ERR_PTR(-EINVAL);
+ goto reg_phys_mr_exit0;
+ }
+
+ e_pd = container_of(pd, struct ehca_pd, ib_pd);
+ shca = container_of(pd->device, struct ehca_shca, ib_device);
+
+ e_mr = ehca_mr_new();
+ if (!e_mr) {
+ EDEB_ERR(4, "out of memory");
+ ib_mr = ERR_PTR(-ENOMEM);
+ goto reg_phys_mr_exit0;
+ }
+
+ /* determine number of MR pages */
+ /* pagesize currently hardcoded to 4k ... TODO.. */
+ num_pages_mr =
+ ((((u64)iova_start % PAGE_SIZE) + size +
+ PAGE_SIZE - 1) / PAGE_SIZE);
+
+ /* register MR on HCA */
+ if (ehca_mr_is_maxmr(size, iova_start)) {
+ e_mr->flags |= EHCA_MR_FLAG_MAXMR;
+ retcode = ehca_reg_maxmr(shca, e_mr, iova_start,
+ mr_access_flags, e_pd,
+ &e_mr->ib.ib_mr.lkey,
+ &e_mr->ib.ib_mr.rkey);
+ if (retcode != 0) {
+ ib_mr = ERR_PTR(retcode);
+ goto reg_phys_mr_exit1;
+ }
+ } else {
+ pginfo.type = EHCA_MR_PGI_PHYS;
+ pginfo.num_pages = num_pages_mr;
+ pginfo.num_phys_buf = num_phys_buf;
+ pginfo.phys_buf_array = phys_buf_array;
+
+ retcode = ehca_reg_mr(shca, e_mr, iova_start, size,
+ mr_access_flags, e_pd, &pginfo,
+ &e_mr->ib.ib_mr.lkey,
+ &e_mr->ib.ib_mr.rkey);
+ if (retcode != 0) {
+ ib_mr = ERR_PTR(retcode);
+ goto reg_phys_mr_exit1;
+ }
+ }
+
+ /* successful registration of all pages */
+ ib_mr = &e_mr->ib.ib_mr;
+ goto reg_phys_mr_exit0;
+
+ reg_phys_mr_exit1:
+ ehca_mr_delete(e_mr);
+ reg_phys_mr_exit0:
+ if (IS_ERR(ib_mr) == 0)
+ EDEB_EX(7, "ib_mr=%p lkey=%x rkey=%x",
+ ib_mr, ib_mr->lkey, ib_mr->rkey);
+ else
+ EDEB_EX(4, "rc=%lx pd=%p phys_buf_array=%p "
+ "num_phys_buf=%x mr_access_flags=%x iova_start=%p",
+ PTR_ERR(ib_mr), pd, phys_buf_array,
+ num_phys_buf, mr_access_flags, iova_start);
+ return (ib_mr);
+} /* end ehca_reg_phys_mr() */
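The access-flag check repeated in the registration paths above enforces the InfiniBand rule that remote write and remote atomic access are only valid together with local write access. A standalone sketch of that predicate (the enum values are illustrative stand-ins for the `IB_ACCESS_*` flags in `<rdma/ib_verbs.h>`, not the driver's own code):

```c
#include <stdbool.h>

/* Illustrative flag values; the real ones come from <rdma/ib_verbs.h>. */
enum {
	ACC_LOCAL_WRITE   = 1 << 0,
	ACC_REMOTE_WRITE  = 1 << 1,
	ACC_REMOTE_ATOMIC = 1 << 2,
};

/* Remote write or remote atomic access requires local write access;
 * everything else is accepted. */
static bool mr_access_flags_valid(int flags)
{
	if ((flags & (ACC_REMOTE_WRITE | ACC_REMOTE_ATOMIC)) &&
	    !(flags & ACC_LOCAL_WRITE))
		return false;
	return true;
}
```

Factoring this into one helper would also remove the copy-pasted check from ehca_reg_phys_mr(), ehca_reg_user_mr(), ehca_rereg_phys_mr() and ehca_alloc_fmr().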
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+struct ib_mr *ehca_reg_user_mr(struct ib_pd *pd,
+ struct ib_umem *region,
+ int mr_access_flags,
+ struct ib_udata *udata)
+{
+ struct ib_mr *ib_mr = 0;
+ struct ehca_mr *e_mr = 0;
+ struct ehca_shca *shca = 0;
+ struct ehca_pd *e_pd = 0;
+ struct ehca_mr_pginfo pginfo={0,0,0,0,0,0,0,0,0,0,0,0};
+ int retcode = 0;
+ u32 num_pages_mr = 0;
+
+ EDEB_EN(7, "pd=%p region=%p mr_access_flags=%x udata=%p",
+ pd, region, mr_access_flags, udata);
+
+ EHCA_CHECK_PD_P(pd);
+ if (ehca_adr_bad(region)) {
+ EDEB_ERR(4, "bad input values: region=%p", region);
+ ib_mr = ERR_PTR(-EINVAL);
+ goto reg_user_mr_exit0;
+ }
+ if (((mr_access_flags & IB_ACCESS_REMOTE_WRITE) &&
+ !(mr_access_flags & IB_ACCESS_LOCAL_WRITE)) ||
+ ((mr_access_flags & IB_ACCESS_REMOTE_ATOMIC) &&
+ !(mr_access_flags & IB_ACCESS_LOCAL_WRITE))) {
+ /* Remote Write Access requires Local Write Access */
+ /* Remote Atomic Access requires Local Write Access */
+ EDEB_ERR(4, "bad input values: mr_access_flags=%x",
+ mr_access_flags);
+ ib_mr = ERR_PTR(-EINVAL);
+ goto reg_user_mr_exit0;
+ }
+ EDEB(7, "user_base=%lx virt_base=%lx length=%lx offset=%x page_size=%x "
+ "chunk_list.next=%p",
+ region->user_base, region->virt_base, region->length,
+ region->offset, region->page_size, region->chunk_list.next);
+ if (region->page_size != PAGE_SIZE) {
+ /* @TODO large page support */
+ EDEB_ERR(4, "large pages not supported, region->page_size=%x",
+ region->page_size);
+ ib_mr = ERR_PTR(-EINVAL);
+ goto reg_user_mr_exit0;
+ }
+
+ if ((region->length == 0) ||
+ ((0xFFFFFFFFFFFFFFFF - region->length) < region->virt_base)) {
+ EDEB_ERR(4, "bad input values: length=%lx virt_base=%lx",
+ region->length, region->virt_base);
+ ib_mr = ERR_PTR(-EINVAL);
+ goto reg_user_mr_exit0;
+ }
+
+ e_pd = container_of(pd, struct ehca_pd, ib_pd);
+ shca = container_of(pd->device, struct ehca_shca, ib_device);
+
+ e_mr = ehca_mr_new();
+ if (!e_mr) {
+ EDEB_ERR(4, "out of memory");
+ ib_mr = ERR_PTR(-ENOMEM);
+ goto reg_user_mr_exit0;
+ }
+
+ /* determine number of MR pages */
+ /* pagesize currently hardcoded to 4k ...TODO... */
+ num_pages_mr =
+ (((region->virt_base % PAGE_SIZE) + region->length +
+ PAGE_SIZE - 1) / PAGE_SIZE);
+
+ /* register MR on HCA */
+ pginfo.type = EHCA_MR_PGI_USER;
+ pginfo.num_pages = num_pages_mr;
+ pginfo.region = region;
+ pginfo.next_chunk = list_prepare_entry(pginfo.next_chunk,
+ (&region->chunk_list),
+ list);
+
+ retcode = ehca_reg_mr(shca, e_mr, (u64 *)region->virt_base,
+ region->length, mr_access_flags, e_pd, &pginfo,
+ &e_mr->ib.ib_mr.lkey, &e_mr->ib.ib_mr.rkey);
+ if (retcode != 0) {
+ ib_mr = ERR_PTR(retcode);
+ goto reg_user_mr_exit1;
+ }
+
+ /* successful registration of all pages */
+ ib_mr = &e_mr->ib.ib_mr;
+ goto reg_user_mr_exit0;
+
+ reg_user_mr_exit1:
+ ehca_mr_delete(e_mr);
+ reg_user_mr_exit0:
+ if (IS_ERR(ib_mr) == 0)
+ EDEB_EX(7, "ib_mr=%p lkey=%x rkey=%x",
+ ib_mr, ib_mr->lkey, ib_mr->rkey);
+ else
+ EDEB_EX(4, "rc=%lx pd=%p region=%p mr_access_flags=%x "
+ "udata=%p",
+ PTR_ERR(ib_mr), pd, region, mr_access_flags, udata);
+ return (ib_mr);
+} /* end ehca_reg_user_mr() */
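The num_pages_mr computation in both registration paths above rounds the region up to whole pages, accounting for the offset of the start address within its first page. A sketch of the same arithmetic, assuming the 4k page size the driver currently hardcodes:

```c
#include <stdint.h>

#define MR_PAGE_SIZE 4096ULL /* driver currently hardcodes 4k pages */

/* Number of pages covered by a region of `size` bytes starting at
 * virtual address `iova`: the offset into the first page plus the
 * length, rounded up to a whole page multiple. */
static uint64_t mr_num_pages(uint64_t iova, uint64_t size)
{
	return ((iova % MR_PAGE_SIZE) + size + MR_PAGE_SIZE - 1)
		/ MR_PAGE_SIZE;
}
```

For example, a one-byte region ending exactly on a page boundary (iova = 4095, size = 2) still needs two pages.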
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+int ehca_rereg_phys_mr(struct ib_mr *mr,
+ int mr_rereg_mask,
+ struct ib_pd *pd,
+ struct ib_phys_buf *phys_buf_array,
+ int num_phys_buf,
+ int mr_access_flags,
+ u64 *iova_start)
+{
+ int retcode = 0;
+ struct ehca_shca *shca = 0;
+ struct ehca_mr *e_mr = 0;
+ u64 new_size = 0;
+ u64 *new_start = 0;
+ u32 new_acl = 0;
+ struct ehca_pd *new_pd = 0;
+ u32 tmp_lkey = 0;
+ u32 tmp_rkey = 0;
+ unsigned long sl_flags;
+ u64 num_pages_mr = 0;
+ struct ehca_mr_pginfo pginfo={0,0,0,0,0,0,0,0,0,0,0,0};
+
+ EDEB_EN(7, "mr=%p mr_rereg_mask=%x pd=%p phys_buf_array=%p "
+ "num_phys_buf=%x mr_access_flags=%x iova_start=%p",
+ mr, mr_rereg_mask, pd, phys_buf_array, num_phys_buf,
+ mr_access_flags, iova_start);
+
+ if (!(mr_rereg_mask & IB_MR_REREG_TRANS)) {
+ /*@TODO not supported, because PHYP rereg hCall needs pages*/
+ /*@TODO: We will follow this with Tom ....*/
+ EDEB_ERR(4, "rereg without IB_MR_REREG_TRANS not supported yet,"
+ " mr_rereg_mask=%x", mr_rereg_mask);
+ retcode = -EINVAL;
+ goto rereg_phys_mr_exit0;
+ }
+
+ EHCA_CHECK_MR(mr);
+ e_mr = container_of(mr, struct ehca_mr, ib.ib_mr);
+ if (mr_rereg_mask & IB_MR_REREG_PD) {
+ EHCA_CHECK_PD(pd);
+ }
+
+ if ((mr_rereg_mask &
+ ~(IB_MR_REREG_TRANS | IB_MR_REREG_PD | IB_MR_REREG_ACCESS)) ||
+ (mr_rereg_mask == 0)) {
+ retcode = -EINVAL;
+ goto rereg_phys_mr_exit0;
+ }
+
+ shca = container_of(mr->device, struct ehca_shca, ib_device);
+
+ /* check other parameters */
+ if (e_mr == shca->maxmr) {
+ /* should be impossible, however reject to be sure */
+ EDEB_ERR(3, "rereg internal max-MR impossible, mr=%p "
+ "shca->maxmr=%p mr->lkey=%x",
+ mr, shca->maxmr, mr->lkey);
+ retcode = -EINVAL;
+ goto rereg_phys_mr_exit0;
+ }
+ if (mr_rereg_mask & IB_MR_REREG_TRANS) { /* transl., i.e. addr/size */
+ if (e_mr->flags & EHCA_MR_FLAG_FMR) {
+ EDEB_ERR(4, "not supported for FMR, mr=%p flags=%x",
+ mr, e_mr->flags);
+ retcode = -EINVAL;
+ goto rereg_phys_mr_exit0;
+ }
+ if (ehca_adr_bad(phys_buf_array) || num_phys_buf <= 0) {
+ EDEB_ERR(4, "bad input values: mr_rereg_mask=%x "
+ "phys_buf_array=%p num_phys_buf=%x",
+ mr_rereg_mask, phys_buf_array, num_phys_buf);
+ retcode = -EINVAL;
+ goto rereg_phys_mr_exit0;
+ }
+ }
+ if ((mr_rereg_mask & IB_MR_REREG_ACCESS) && /* change ACL */
+ (((mr_access_flags & IB_ACCESS_REMOTE_WRITE) &&
+ !(mr_access_flags & IB_ACCESS_LOCAL_WRITE)) ||
+ ((mr_access_flags & IB_ACCESS_REMOTE_ATOMIC) &&
+ !(mr_access_flags & IB_ACCESS_LOCAL_WRITE)))) {
+ /* Remote Write Access requires Local Write Access */
+ /* Remote Atomic Access requires Local Write Access */
+ EDEB_ERR(4, "bad input values: mr_rereg_mask=%x "
+ "mr_access_flags=%x", mr_rereg_mask, mr_access_flags);
+ retcode = -EINVAL;
+ goto rereg_phys_mr_exit0;
+ }
+
+ /* set requested values dependent on rereg request */
+ spin_lock_irqsave(&e_mr->mrlock, sl_flags); /* get lock @TODO for MR*/
+ new_start = e_mr->start; /* new == old address */
+ new_size = e_mr->size; /* new == old length */
+ new_acl = e_mr->acl; /* new == old access control */
+	new_pd = container_of(mr->pd, struct ehca_pd, ib_pd); /* new == old PD */
+
+ if (mr_rereg_mask & IB_MR_REREG_TRANS) {
+ new_start = iova_start; /* change address */
+ /* check physical buffer list and calculate size */
+ retcode = ehca_mr_chk_buf_and_calc_size(phys_buf_array,
+ num_phys_buf,
+ iova_start, &new_size);
+ if (retcode != 0)
+ goto rereg_phys_mr_exit1;
+ if ((new_size == 0) ||
+ ((0xFFFFFFFFFFFFFFFF - new_size) < (u64)iova_start)) {
+ EDEB_ERR(4, "bad input values: new_size=%lx "
+ "iova_start=%p", new_size, iova_start);
+ retcode = -EINVAL;
+ goto rereg_phys_mr_exit1;
+ }
+ num_pages_mr = ((((u64)new_start % PAGE_SIZE) +
+ new_size + PAGE_SIZE - 1) / PAGE_SIZE);
+ pginfo.type = EHCA_MR_PGI_PHYS;
+ pginfo.num_pages = num_pages_mr;
+ pginfo.num_phys_buf = num_phys_buf;
+ pginfo.phys_buf_array = phys_buf_array;
+ }
+ if (mr_rereg_mask & IB_MR_REREG_ACCESS)
+ new_acl = mr_access_flags;
+ if (mr_rereg_mask & IB_MR_REREG_PD)
+ new_pd = container_of(pd, struct ehca_pd, ib_pd);
+
+ EDEB(7, "mr=%p new_start=%p new_size=%lx new_acl=%x new_pd=%p "
+ "num_pages_mr=%lx",
+ e_mr, new_start, new_size, new_acl, new_pd, num_pages_mr);
+
+ retcode = ehca_rereg_mr(shca, e_mr, new_start, new_size, new_acl,
+ new_pd, &pginfo, &tmp_lkey, &tmp_rkey);
+ if (retcode != 0)
+ goto rereg_phys_mr_exit1;
+
+ /* successful reregistration */
+ if (mr_rereg_mask & IB_MR_REREG_PD)
+ mr->pd = pd;
+ mr->lkey = tmp_lkey;
+ mr->rkey = tmp_rkey;
+
+ rereg_phys_mr_exit1:
+ spin_unlock_irqrestore(&e_mr->mrlock, sl_flags); /* free spin lock */
+ rereg_phys_mr_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "mr=%p mr_rereg_mask=%x pd=%p phys_buf_array=%p "
+ "num_phys_buf=%x mr_access_flags=%x iova_start=%p",
+ mr, mr_rereg_mask, pd, phys_buf_array, num_phys_buf,
+ mr_access_flags, iova_start);
+ else
+ EDEB_EX(4, "retcode=%x mr=%p mr_rereg_mask=%x pd=%p "
+ "phys_buf_array=%p num_phys_buf=%x mr_access_flags=%x "
+ "iova_start=%p",
+ retcode, mr, mr_rereg_mask, pd, phys_buf_array,
+ num_phys_buf, mr_access_flags, iova_start);
+
+ return (retcode);
+} /* end ehca_rereg_phys_mr() */
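The size checks above (`(0xFFFFFFFFFFFFFFFF - size) < (u64)iova_start`) reject regions whose end address would wrap around the 64-bit address space. A minimal sketch of that test, with the constant spelled as `UINT64_MAX` (the zero-size case is checked separately in the driver):

```c
#include <stdbool.h>
#include <stdint.h>

/* True if iova + size would overflow a u64, i.e. the region wraps
 * around the top of the 64-bit address space. */
static bool mr_region_wraps(uint64_t iova, uint64_t size)
{
	return (UINT64_MAX - size) < iova;
}
```

Writing the constant as `~0UL`/`UINT64_MAX` instead of spelling out sixteen F digits would also make the three copies of this check harder to get wrong.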
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+int ehca_query_mr(struct ib_mr *mr, struct ib_mr_attr *mr_attr)
+{
+ int retcode = 0;
+ u64 rc = H_Success;
+ struct ehca_shca *shca = 0;
+ struct ehca_mr *e_mr = 0;
+ struct ipz_pd fwpd; /* Firmware PD */
+ u32 access_ctrl = 0;
+ u64 tmp_remote_size = 0;
+ u64 tmp_remote_len = 0;
+
+ unsigned long sl_flags;
+
+ EDEB_EN(7, "mr=%p mr_attr=%p", mr, mr_attr);
+
+ EHCA_CHECK_MR(mr);
+ e_mr = container_of(mr, struct ehca_mr, ib.ib_mr);
+ if (ehca_adr_bad(mr_attr)) {
+ EDEB_ERR(4, "bad input values: mr_attr=%p", mr_attr);
+ retcode = -EINVAL;
+ goto query_mr_exit0;
+ }
+ if ((e_mr->flags & EHCA_MR_FLAG_FMR)) {
+ EDEB_ERR(4, "not supported for FMR, mr=%p e_mr=%p "
+ "e_mr->flags=%x", mr, e_mr, e_mr->flags);
+ retcode = -EINVAL;
+ goto query_mr_exit0;
+ }
+
+ shca = container_of(mr->device, struct ehca_shca, ib_device);
+ memset(mr_attr, 0, sizeof(struct ib_mr_attr));
+ spin_lock_irqsave(&e_mr->mrlock, sl_flags); /* get spin lock @TODO?? */
+
+ rc = hipz_h_query_mr(shca->ipz_hca_handle, &e_mr->pf,
+ &e_mr->ipz_mr_handle, &mr_attr->size,
+ &mr_attr->device_virt_addr, &tmp_remote_size,
+ &tmp_remote_len, &access_ctrl, &fwpd,
+ &mr_attr->lkey, &mr_attr->rkey);
+ if (rc != H_Success) {
+ EDEB_ERR(4, "hipz_mr_query failed, rc=%lx mr=%p "
+ "hca_hndl=%lx mr_hndl=%lx lkey=%x",
+ rc, mr, shca->ipz_hca_handle.handle,
+ e_mr->ipz_mr_handle.handle, mr->lkey);
+ retcode = ehca_mrmw_map_rc_query_mr(rc);
+ goto query_mr_exit1;
+ }
+ ehca_mrmw_reverse_map_acl(&access_ctrl, &mr_attr->mr_access_flags);
+ mr_attr->pd = mr->pd;
+
+ query_mr_exit1:
+ spin_unlock_irqrestore(&e_mr->mrlock, sl_flags); /* free spin lock */
+ query_mr_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "pd=%p device_virt_addr=%lx size=%lx "
+ "mr_access_flags=%x lkey=%x rkey=%x",
+ mr_attr->pd, mr_attr->device_virt_addr,
+ mr_attr->size, mr_attr->mr_access_flags,
+ mr_attr->lkey, mr_attr->rkey);
+ else
+ EDEB_EX(4, "retcode=%x mr=%p mr_attr=%p", retcode, mr, mr_attr);
+ return (retcode);
+} /* end ehca_query_mr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+int ehca_dereg_mr(struct ib_mr *mr)
+{
+ int retcode = 0;
+ u64 rc = H_Success;
+ struct ehca_shca *shca = 0;
+ struct ehca_mr *e_mr = 0;
+
+ EDEB_EN(7, "mr=%p", mr);
+
+ EHCA_CHECK_MR(mr);
+ e_mr = container_of(mr, struct ehca_mr, ib.ib_mr);
+ shca = container_of(mr->device, struct ehca_shca, ib_device);
+
+ if ((e_mr->flags & EHCA_MR_FLAG_FMR)) {
+ EDEB_ERR(4, "not supported for FMR, mr=%p e_mr=%p "
+ "e_mr->flags=%x", mr, e_mr, e_mr->flags);
+ retcode = -EINVAL;
+ goto dereg_mr_exit0;
+ } else if (e_mr == shca->maxmr) {
+ /* should be impossible, however reject to be sure */
+ EDEB_ERR(3, "dereg internal max-MR impossible, mr=%p "
+ "shca->maxmr=%p mr->lkey=%x",
+ mr, shca->maxmr, mr->lkey);
+ retcode = -EINVAL;
+ goto dereg_mr_exit0;
+ }
+
+ /*@TODO: BUSY: MR still has bound window(s) */
+ rc = hipz_h_free_resource_mr(shca->ipz_hca_handle, &e_mr->pf,
+ &e_mr->ipz_mr_handle);
+ if (rc != H_Success) {
+ EDEB_ERR(4, "hipz_free_mr failed, rc=%lx shca=%p e_mr=%p"
+ " hca_hndl=%lx mr_hndl=%lx mr->lkey=%x",
+ rc, shca, e_mr, shca->ipz_hca_handle.handle,
+ e_mr->ipz_mr_handle.handle, mr->lkey);
+ retcode = ehca_mrmw_map_rc_free_mr(rc);
+ goto dereg_mr_exit0;
+ }
+
+ /* successful deregistration */
+ ehca_mr_delete(e_mr);
+
+ dereg_mr_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "");
+ else
+ EDEB_EX(4, "retcode=%x mr=%p", retcode, mr);
+ return (retcode);
+} /* end ehca_dereg_mr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+struct ib_mw *ehca_alloc_mw(struct ib_pd *pd)
+{
+ struct ib_mw *ib_mw = 0;
+ u64 rc = H_Success;
+ struct ehca_shca *shca = 0;
+ struct ehca_mw *e_mw = 0;
+ struct ehca_pd *e_pd = 0;
+
+ EDEB_EN(7, "pd=%p", pd);
+
+ EHCA_CHECK_PD_P(pd);
+ e_pd = container_of(pd, struct ehca_pd, ib_pd);
+ shca = container_of(pd->device, struct ehca_shca, ib_device);
+
+ e_mw = ehca_mw_new();
+ if (!e_mw) {
+ ib_mw = ERR_PTR(-ENOMEM);
+ goto alloc_mw_exit0;
+ }
+
+ rc = hipz_h_alloc_resource_mw(shca->ipz_hca_handle, &e_mw->pf,
+ &shca->pf, e_pd->fw_pd,
+ &e_mw->ipz_mw_handle, &e_mw->ib_mw.rkey);
+ if (rc != H_Success) {
+ EDEB_ERR(4, "hipz_mw_allocate failed, rc=%lx shca=%p "
+ "hca_hndl=%lx mw=%p", rc, shca,
+ shca->ipz_hca_handle.handle, e_mw);
+ ib_mw = ERR_PTR(ehca_mrmw_map_rc_alloc(rc));
+ goto alloc_mw_exit1;
+ }
+ /* save R_Key in local copy */
+ /*@TODO????? mw->rkey = *rkey_p; */
+
+ /* successful MW allocation */
+ ib_mw = &e_mw->ib_mw;
+ goto alloc_mw_exit0;
+
+ alloc_mw_exit1:
+ ehca_mw_delete(e_mw);
+ alloc_mw_exit0:
+ if (IS_ERR(ib_mw) == 0)
+ EDEB_EX(7, "ib_mw=%p rkey=%x", ib_mw, ib_mw->rkey);
+ else
+ EDEB_EX(4, "rc=%lx pd=%p", PTR_ERR(ib_mw), pd);
+ return (ib_mw);
+} /* end ehca_alloc_mw() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+int ehca_bind_mw(struct ib_qp *qp,
+ struct ib_mw *mw,
+ struct ib_mw_bind *mw_bind)
+{
+ int retcode = 0;
+
+ /*@TODO: not supported up to now */
+ EDEB_ERR(4, "bind MW currently not supported by HCAD");
+ retcode = -EPERM;
+ goto bind_mw_exit0;
+
+ bind_mw_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "qp=%p mw=%p mw_bind=%p", qp, mw, mw_bind);
+ else
+ EDEB_EX(4, "rc=%x qp=%p mw=%p mw_bind=%p",
+ retcode, qp, mw, mw_bind);
+ return (retcode);
+} /* end ehca_bind_mw() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+int ehca_dealloc_mw(struct ib_mw *mw)
+{
+ int retcode = 0;
+ u64 rc = H_Success;
+ struct ehca_shca *shca = 0;
+ struct ehca_mw *e_mw = 0;
+
+ EDEB_EN(7, "mw=%p", mw);
+
+ EHCA_CHECK_MW(mw);
+ e_mw = container_of(mw, struct ehca_mw, ib_mw);
+ shca = container_of(mw->device, struct ehca_shca, ib_device);
+
+ rc = hipz_h_free_resource_mw(shca->ipz_hca_handle, &e_mw->pf,
+ &e_mw->ipz_mw_handle);
+ if (rc != H_Success) {
+ EDEB_ERR(4, "hipz_free_mw failed, rc=%lx shca=%p mw=%p "
+ "rkey=%x hca_hndl=%lx mw_hndl=%lx",
+ rc, shca, mw, mw->rkey, shca->ipz_hca_handle.handle,
+ e_mw->ipz_mw_handle.handle);
+ retcode = ehca_mrmw_map_rc_free_mw(rc);
+ goto dealloc_mw_exit0;
+ }
+ /* successful deallocation */
+ ehca_mw_delete(e_mw);
+
+ dealloc_mw_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "");
+ else
+ EDEB_EX(4, "retcode=%x mw=%p", retcode, mw);
+ return (retcode);
+} /* end ehca_dealloc_mw() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+struct ib_fmr *ehca_alloc_fmr(struct ib_pd *pd,
+ int mr_access_flags,
+ struct ib_fmr_attr *fmr_attr)
+{
+ struct ib_fmr *ib_fmr = 0;
+ struct ehca_shca *shca = 0;
+ struct ehca_mr *e_fmr = 0;
+ int retcode = 0;
+ struct ehca_pd *e_pd = 0;
+ u32 tmp_lkey = 0;
+ u32 tmp_rkey = 0;
+ struct ehca_mr_pginfo pginfo={0,0,0,0,0,0,0,0,0,0,0,0};
+
+ EDEB_EN(7, "pd=%p mr_access_flags=%x fmr_attr=%p",
+ pd, mr_access_flags, fmr_attr);
+
+ EHCA_CHECK_PD_P(pd);
+ if (ehca_adr_bad(fmr_attr)) {
+ EDEB_ERR(4, "bad input values: fmr_attr=%p", fmr_attr);
+ ib_fmr = ERR_PTR(-EINVAL);
+ goto alloc_fmr_exit0;
+ }
+
+ EDEB(7, "max_pages=%x max_maps=%x page_shift=%x",
+ fmr_attr->max_pages, fmr_attr->max_maps, fmr_attr->page_shift);
+
+ /* check other parameters */
+ if (((mr_access_flags & IB_ACCESS_REMOTE_WRITE) &&
+ !(mr_access_flags & IB_ACCESS_LOCAL_WRITE)) ||
+ ((mr_access_flags & IB_ACCESS_REMOTE_ATOMIC) &&
+ !(mr_access_flags & IB_ACCESS_LOCAL_WRITE))) {
+ /* Remote Write Access requires Local Write Access */
+ /* Remote Atomic Access requires Local Write Access */
+ EDEB_ERR(4, "bad input values: mr_access_flags=%x",
+ mr_access_flags);
+ ib_fmr = ERR_PTR(-EINVAL);
+ goto alloc_fmr_exit0;
+ }
+ if (mr_access_flags & IB_ACCESS_MW_BIND) {
+ EDEB_ERR(4, "bad input values: mr_access_flags=%x",
+ mr_access_flags);
+ ib_fmr = ERR_PTR(-EINVAL);
+ goto alloc_fmr_exit0;
+ }
+ if ((fmr_attr->max_pages == 0) || (fmr_attr->max_maps == 0)) {
+ EDEB_ERR(4, "bad input values: fmr_attr->max_pages=%x "
+ "fmr_attr->max_maps=%x fmr_attr->page_shift=%x",
+ fmr_attr->max_pages, fmr_attr->max_maps,
+ fmr_attr->page_shift);
+ ib_fmr = ERR_PTR(-EINVAL);
+ goto alloc_fmr_exit0;
+ }
+ if ((1 << fmr_attr->page_shift) != PAGE_SIZE) {
+ /* pagesize currently hardcoded to 4k ... */
+ EDEB_ERR(4, "unsupported fmr_attr->page_shift=%x",
+ fmr_attr->page_shift);
+ ib_fmr = ERR_PTR(-EINVAL);
+ goto alloc_fmr_exit0;
+ }
+
+ e_pd = container_of(pd, struct ehca_pd, ib_pd);
+ shca = container_of(pd->device, struct ehca_shca, ib_device);
+
+ e_fmr = ehca_mr_new();
+ if (e_fmr == 0) {
+ ib_fmr = ERR_PTR(-ENOMEM);
+ goto alloc_fmr_exit0;
+ }
+ e_fmr->flags |= EHCA_MR_FLAG_FMR;
+
+ /* register MR on HCA */
+ retcode = ehca_reg_mr(shca, e_fmr, 0,
+ fmr_attr->max_pages * PAGE_SIZE,
+ mr_access_flags, e_pd, &pginfo,
+ &tmp_lkey, &tmp_rkey);
+ if (retcode != 0) {
+ ib_fmr = ERR_PTR(retcode);
+ goto alloc_fmr_exit1;
+ }
+
+ /* successful registration of all pages */
+ e_fmr->fmr_page_size = 1 << fmr_attr->page_shift;
+ e_fmr->fmr_max_pages = fmr_attr->max_pages; /* pagesize hardcoded 4k */
+ e_fmr->fmr_max_maps = fmr_attr->max_maps;
+ e_fmr->fmr_map_cnt = 0;
+ ib_fmr = &e_fmr->ib.ib_fmr;
+ goto alloc_fmr_exit0;
+
+ alloc_fmr_exit1:
+ ehca_mr_delete(e_fmr);
+ alloc_fmr_exit0:
+ if (IS_ERR(ib_fmr) == 0)
+ EDEB_EX(7, "ib_fmr=%p tmp_lkey=%x tmp_rkey=%x",
+ ib_fmr, tmp_lkey, tmp_rkey);
+ else
+ EDEB_EX(4, "rc=%lx pd=%p mr_access_flags=%x "
+ "fmr_attr=%p", PTR_ERR(ib_fmr), pd,
+ mr_access_flags, fmr_attr);
+ return (ib_fmr);
+} /* end ehca_alloc_fmr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+int ehca_map_phys_fmr(struct ib_fmr *fmr,
+ u64 *page_list,
+ int list_len,
+ u64 iova)
+{
+ int retcode = 0;
+ struct ehca_shca *shca = 0;
+ struct ehca_mr *e_fmr = 0;
+ struct ehca_pd *e_pd = 0;
+ struct ehca_mr_pginfo pginfo={0,0,0,0,0,0,0,0,0,0,0,0};
+ u32 tmp_lkey = 0;
+ u32 tmp_rkey = 0;
+ /*@TODO unsigned long sl_flags; */
+
+ EDEB_EN(7, "fmr=%p page_list=%p list_len=%x iova=%lx",
+ fmr, page_list, list_len, iova);
+
+ EHCA_CHECK_FMR(fmr);
+ e_fmr = container_of(fmr, struct ehca_mr, ib.ib_fmr);
+ shca = container_of(fmr->device, struct ehca_shca, ib_device);
+ e_pd = container_of(fmr->pd, struct ehca_pd, ib_pd);
+
+ if (!(e_fmr->flags & EHCA_MR_FLAG_FMR)) {
+ EDEB_ERR(4, "not a FMR, e_fmr=%p e_fmr->flags=%x",
+ e_fmr, e_fmr->flags);
+ retcode = -EINVAL;
+ goto map_phys_fmr_exit0;
+ }
+ retcode = ehca_fmr_check_page_list(e_fmr, page_list, list_len);
+ if (retcode != 0)
+ goto map_phys_fmr_exit0;
+ if (iova % PAGE_SIZE) {
+		/* iova must be page aligned */
+ EDEB_ERR(4, "bad iova, iova=%lx", iova);
+ retcode = -EINVAL;
+ goto map_phys_fmr_exit0;
+ }
+ if (e_fmr->fmr_map_cnt >= e_fmr->fmr_max_maps) {
+		/* HCAD does not limit the maps; trace this anyway */
+ EDEB(6, "map limit exceeded, fmr=%p e_fmr->fmr_map_cnt=%x "
+ "e_fmr->fmr_max_maps=%x",
+ fmr, e_fmr->fmr_map_cnt, e_fmr->fmr_max_maps);
+ }
+
+ pginfo.type = EHCA_MR_PGI_FMR;
+ pginfo.num_pages = list_len;
+ pginfo.page_list = page_list;
+
+ /* @TODO spin_lock_irqsave(&e_fmr->mrlock, sl_flags); */
+
+ retcode = ehca_rereg_mr(shca, e_fmr, (u64 *)iova,
+ list_len * PAGE_SIZE,
+ e_fmr->acl, e_pd, &pginfo,
+ &tmp_lkey, &tmp_rkey);
+ if (retcode != 0) {
+ /* @TODO spin_unlock_irqrestore(&fmr->mrlock, sl_flags); */
+ goto map_phys_fmr_exit0;
+ }
+ /* successful reregistration */
+ e_fmr->fmr_map_cnt++;
+ /* @TODO spin_unlock_irqrestore(&fmr->mrlock, sl_flags); */
+
+ e_fmr->ib.ib_fmr.lkey = tmp_lkey;
+ e_fmr->ib.ib_fmr.rkey = tmp_rkey;
+
+ map_phys_fmr_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "lkey=%x rkey=%x",
+ e_fmr->ib.ib_fmr.lkey, e_fmr->ib.ib_fmr.rkey);
+ else
+ EDEB_EX(4, "retcode=%x fmr=%p page_list=%p list_len=%x "
+ "iova=%lx",
+ retcode, fmr, page_list, list_len, iova);
+ return (retcode);
+} /* end ehca_map_phys_fmr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+int ehca_unmap_fmr(struct list_head *fmr_list)
+{
+ int retcode = 0;
+ struct ib_fmr *ib_fmr;
+ struct ehca_shca *shca = 0;
+ struct ehca_shca *prev_shca = 0;
+ struct ehca_mr *e_fmr = 0;
+ u32 num_fmr = 0;
+ u32 unmap_fmr_cnt = 0;
+ /* @TODO unsigned long sl_flags; */
+
+ EDEB_EN(7, "fmr_list=%p", fmr_list);
+
+	/* check that all FMRs belong to the same SHCA, and check internal flag */
+ list_for_each_entry(ib_fmr, fmr_list, list) {
+ prev_shca = shca;
+ shca = container_of(ib_fmr->device, struct ehca_shca,
+ ib_device);
+ EHCA_CHECK_FMR(ib_fmr);
+ e_fmr = container_of(ib_fmr, struct ehca_mr, ib.ib_fmr);
+ if ((shca != prev_shca) && (prev_shca != 0)) {
+ EDEB_ERR(4, "SHCA mismatch, shca=%p prev_shca=%p "
+ "e_fmr=%p", shca, prev_shca, e_fmr);
+ retcode = -EINVAL;
+ goto unmap_fmr_exit0;
+ }
+ if (!(e_fmr->flags & EHCA_MR_FLAG_FMR)) {
+ EDEB_ERR(4, "not a FMR, e_fmr=%p e_fmr->flags=%x",
+ e_fmr, e_fmr->flags);
+ retcode = -EINVAL;
+ goto unmap_fmr_exit0;
+ }
+ num_fmr++;
+ }
+
+ /* loop over all FMRs to unmap */
+ list_for_each_entry(ib_fmr, fmr_list, list) {
+ unmap_fmr_cnt++;
+ e_fmr = container_of(ib_fmr, struct ehca_mr, ib.ib_fmr);
+ shca = container_of(ib_fmr->device, struct ehca_shca,
+ ib_device);
+ /*@TODO??? spin_lock_irqsave(&fmr->mrlock, sl_flags); */
+ retcode = ehca_unmap_one_fmr(shca, e_fmr);
+ /*@TODO???? spin_unlock_irqrestore(&fmr->mrlock, sl_flags); */
+ if (retcode != 0) {
+ /* unmap failed, stop unmapping of rest of FMRs */
+ EDEB_ERR(4, "unmap of one FMR failed, stop rest, "
+ "e_fmr=%p num_fmr=%x unmap_fmr_cnt=%x lkey=%x",
+ e_fmr, num_fmr, unmap_fmr_cnt,
+ e_fmr->ib.ib_fmr.lkey);
+ goto unmap_fmr_exit0;
+ }
+ }
+
+ unmap_fmr_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "num_fmr=%x", num_fmr);
+ else
+ EDEB_EX(4, "retcode=%x fmr_list=%p num_fmr=%x unmap_fmr_cnt=%x",
+ retcode, fmr_list, num_fmr, unmap_fmr_cnt);
+ return (retcode);
+} /* end ehca_unmap_fmr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+int ehca_dealloc_fmr(struct ib_fmr *fmr)
+{
+ int retcode = 0;
+ u64 rc = H_Success;
+ struct ehca_shca *shca = 0;
+ struct ehca_mr *e_fmr = 0;
+
+ EDEB_EN(7, "fmr=%p", fmr);
+
+ EHCA_CHECK_FMR(fmr);
+ e_fmr = container_of(fmr, struct ehca_mr, ib.ib_fmr);
+ shca = container_of(fmr->device, struct ehca_shca, ib_device);
+
+ if (!(e_fmr->flags & EHCA_MR_FLAG_FMR)) {
+ EDEB_ERR(4, "not a FMR, e_fmr=%p e_fmr->flags=%x",
+ e_fmr, e_fmr->flags);
+ retcode = -EINVAL;
+ goto free_fmr_exit0;
+ }
+
+ rc = hipz_h_free_resource_mr(shca->ipz_hca_handle, &e_fmr->pf,
+ &e_fmr->ipz_mr_handle);
+ if (rc != H_Success) {
+ EDEB_ERR(4, "hipz_free_mr failed, rc=%lx e_fmr=%p "
+ "hca_hndl=%lx fmr_hndl=%lx fmr->lkey=%x",
+ rc, e_fmr, shca->ipz_hca_handle.handle,
+ e_fmr->ipz_mr_handle.handle, fmr->lkey);
+		retcode = ehca_mrmw_map_rc_free_mr(rc);
+ goto free_fmr_exit0;
+ }
+ /* successful deregistration */
+ ehca_mr_delete(e_fmr);
+
+ free_fmr_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "");
+ else
+ EDEB_EX(4, "retcode=%x fmr=%p", retcode, fmr);
+ return (retcode);
+} /* end ehca_dealloc_fmr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+int ehca_reg_mr(struct ehca_shca *shca,
+ struct ehca_mr *e_mr,
+ u64 *iova_start,
+ u64 size,
+ int acl,
+ struct ehca_pd *e_pd,
+ struct ehca_mr_pginfo *pginfo,
+ u32 *lkey,
+ u32 *rkey)
+{
+ int retcode = 0;
+ u64 rc = H_Success;
+ struct ehca_pfmr *pfmr = &e_mr->pf;
+ u32 hipz_acl = 0;
+
+ EDEB_EN(7, "shca=%p e_mr=%p iova_start=%p size=%lx acl=%x e_pd=%p "
+ "pginfo=%p num_pages=%lx", shca, e_mr, iova_start, size, acl,
+ e_pd, pginfo, pginfo->num_pages);
+
+ ehca_mrmw_map_acl(acl, &hipz_acl);
+ ehca_mrmw_set_pgsize_hipz_acl(&hipz_acl);
+ if (ehca_use_hp_mr == 1)
+ hipz_acl |= 0x00000001;
+
+ rc = hipz_h_alloc_resource_mr(shca->ipz_hca_handle, pfmr, &shca->pf,
+ (u64)iova_start, size, hipz_acl,
+ e_pd->fw_pd, &e_mr->ipz_mr_handle,
+ lkey, rkey);
+ if (rc != H_Success) {
+ EDEB_ERR(4, "hipz_alloc_mr failed, rc=%lx hca_hndl=%lx "
+ "mr_hndl=%lx", rc, shca->ipz_hca_handle.handle,
+ e_mr->ipz_mr_handle.handle);
+ retcode = ehca_mrmw_map_rc_alloc(rc);
+ goto ehca_reg_mr_exit0;
+ }
+
+ retcode = ehca_reg_mr_rpages(shca, e_mr, pginfo);
+ if (retcode != 0)
+ goto ehca_reg_mr_exit1;
+
+ /* successful registration */
+ e_mr->num_pages = pginfo->num_pages;
+ e_mr->start = iova_start;
+ e_mr->size = size;
+ e_mr->acl = acl;
+ goto ehca_reg_mr_exit0;
+
+ ehca_reg_mr_exit1:
+ rc = hipz_h_free_resource_mr(shca->ipz_hca_handle, pfmr,
+ &e_mr->ipz_mr_handle);
+ if (rc != H_Success) {
+ EDEB(1, "rc=%lx shca=%p e_mr=%p iova_start=%p "
+ "size=%lx acl=%x e_pd=%p lkey=%x pginfo=%p num_pages=%lx",
+ rc, shca, e_mr, iova_start, size, acl,
+ e_pd, *lkey, pginfo, pginfo->num_pages);
+ ehca_catastrophic("internal error in ehca_reg_mr, "
+ "not recoverable");
+ }
+ ehca_reg_mr_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "retcode=%x lkey=%x rkey=%x", retcode, *lkey, *rkey);
+ else
+ EDEB_EX(4, "retcode=%x shca=%p e_mr=%p iova_start=%p "
+ "size=%lx acl=%x e_pd=%p pginfo=%p num_pages=%lx",
+ retcode, shca, e_mr, iova_start,
+ size, acl, e_pd, pginfo, pginfo->num_pages);
+ return (retcode);
+} /* end ehca_reg_mr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+int ehca_reg_mr_rpages(struct ehca_shca *shca,
+ struct ehca_mr *e_mr,
+ struct ehca_mr_pginfo *pginfo)
+{
+ int retcode = 0;
+ u64 rc = H_Success;
+ struct ehca_pfmr *pfmr = &e_mr->pf;
+ u32 rnum = 0;
+ u64 rpage = 0;
+ u32 i;
+ u64 *kpage = 0;
+
+ EDEB_EN(7, "shca=%p e_mr=%p pginfo=%p num_pages=%lx",
+ shca, e_mr, pginfo, pginfo->num_pages);
+
+ kpage = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (kpage == 0) {
+ EDEB_ERR(4, "kpage alloc failed");
+ retcode = -ENOMEM;
+ goto ehca_reg_mr_rpages_exit0;
+ }
+ memset(kpage, 0, PAGE_SIZE);
+
+ /* max 512 pages per shot */
+ for (i = 0; i < ((pginfo->num_pages + 512 - 1) / 512); i++) {
+
+ if (i == ((pginfo->num_pages + 512 - 1) / 512) - 1) {
+ rnum = pginfo->num_pages % 512; /* last shot */
+ if (rnum == 0)
+ rnum = 512; /* last shot is full */
+ } else
+ rnum = 512;
+
+ if (rnum > 1) {
+ retcode = ehca_set_pagebuf(e_mr, pginfo, rnum, kpage);
+ if (retcode) {
+ EDEB_ERR(4, "ehca_set_pagebuf bad rc, "
+ "retcode=%x rnum=%x kpage=%p",
+ retcode, rnum, kpage);
+ retcode = -EFAULT;
+ goto ehca_reg_mr_rpages_exit1;
+ }
+ rpage = ehca_kv_to_g(kpage);
+ if (rpage == 0) {
+ EDEB_ERR(4, "kpage=%p i=%x", kpage, i);
+ retcode = -EFAULT;
+ goto ehca_reg_mr_rpages_exit1;
+ }
+ } else { /* rnum==1 */
+ retcode = ehca_set_pagebuf_1(e_mr, pginfo, &rpage);
+ if (retcode) {
+ EDEB_ERR(4, "ehca_set_pagebuf_1 bad rc, "
+ "retcode=%x i=%x", retcode, i);
+ retcode = -EFAULT;
+ goto ehca_reg_mr_rpages_exit1;
+ }
+ }
+
+ EDEB(9, "i=%x rnum=%x rpage=%lx", i, rnum, rpage);
+
+ rc = hipz_h_register_rpage_mr(shca->ipz_hca_handle,
+ &e_mr->ipz_mr_handle, pfmr,
+ &shca->pf,
+ 0, /* pagesize hardcoded to 4k */
+ 0, rpage, rnum);
+
+ if (i == ((pginfo->num_pages + 512 - 1) / 512) - 1) {
+ /* check for 'registration complete'==H_Success */
+ /* and for 'page registered'==H_PAGE_REGISTERED */
+ if (rc != H_Success) {
+ EDEB_ERR(4, "last hipz_reg_rpage_mr failed, "
+ "rc=%lx e_mr=%p i=%x hca_hndl=%lx "
+ "mr_hndl=%lx lkey=%x", rc, e_mr, i,
+ shca->ipz_hca_handle.handle,
+ e_mr->ipz_mr_handle.handle,
+ e_mr->ib.ib_mr.lkey);
+ retcode = ehca_mrmw_map_rc_rrpg_last(rc);
+ break;
+ } else
+ retcode = 0;
+ } else if (rc != H_PAGE_REGISTERED) {
+ EDEB_ERR(4, "hipz_reg_rpage_mr failed, rc=%lx e_mr=%p "
+ "i=%x lkey=%x hca_hndl=%lx mr_hndl=%lx",
+ rc, e_mr, i, e_mr->ib.ib_mr.lkey,
+ shca->ipz_hca_handle.handle,
+ e_mr->ipz_mr_handle.handle);
+ retcode = ehca_mrmw_map_rc_rrpg_notlast(rc);
+ break;
+ } else
+ retcode = 0;
+ } /* end for(i) */
+
+ ehca_reg_mr_rpages_exit1:
+ kfree(kpage);
+ ehca_reg_mr_rpages_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "retcode=%x", retcode);
+ else
+ EDEB_EX(4, "retcode=%x shca=%p e_mr=%p pginfo=%p "
+ "num_pages=%lx",
+ retcode, shca, e_mr, pginfo, pginfo->num_pages);
+ return (retcode);
+} /* end ehca_reg_mr_rpages() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
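The 512-page chunking in ehca_reg_mr_rpages() above is easy to get wrong at the boundaries; the same arithmetic can be checked in isolation. A standalone user-space sketch (hypothetical helper names, not driver code):

```c
/*
 * The firmware accepts at most 512 registration pages per hCall, so
 * ehca_reg_mr_rpages() issues ceil(num_pages / 512) "shots"; the last
 * shot carries the remainder, or a full 512 pages when num_pages
 * divides evenly. Hypothetical helper names, standalone sketch only.
 */
unsigned long rpage_shots(unsigned long num_pages)
{
	return (num_pages + 512 - 1) / 512;
}

unsigned int rpage_rnum(unsigned long num_pages, unsigned long shot)
{
	if (shot == rpage_shots(num_pages) - 1) {
		unsigned int rnum = num_pages % 512;

		return rnum ? rnum : 512;	/* last shot is full */
	}
	return 512;
}
```

The driver's loop additionally distinguishes the expected return codes: H_PAGE_REGISTERED for every shot but the last, and H_Success only for the final one.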
+
+inline int ehca_rereg_mr_rereg1(struct ehca_shca *shca,
+ struct ehca_mr *e_mr,
+ u64 *iova_start,
+ u64 size,
+ u32 acl,
+ struct ehca_pd *e_pd,
+ struct ehca_mr_pginfo *pginfo,
+ u32 *lkey,
+ u32 *rkey)
+{
+ int retcode = 0;
+ u64 rc = H_Success;
+ struct ehca_pfmr *pfmr = &e_mr->pf;
+ u64 iova_start_out = 0;
+ u32 hipz_acl = 0;
+	u64 *kpage = NULL;
+ u64 rpage = 0;
+ struct ehca_mr_pginfo pginfo_save;
+
+ EDEB_EN(7, "shca=%p e_mr=%p iova_start=%p size=%lx acl=%x "
+ "e_pd=%p pginfo=%p num_pages=%lx", shca, e_mr,
+ iova_start, size, acl, e_pd, pginfo, pginfo->num_pages);
+
+ ehca_mrmw_map_acl(acl, &hipz_acl);
+ ehca_mrmw_set_pgsize_hipz_acl(&hipz_acl);
+
+	kpage = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	if (!kpage) {
+		EDEB_ERR(4, "kpage alloc failed");
+		retcode = -ENOMEM;
+		goto ehca_rereg_mr_rereg1_exit0;
+	}
+
+ pginfo_save = *pginfo;
+ retcode = ehca_set_pagebuf(e_mr, pginfo, pginfo->num_pages, kpage);
+ if (retcode != 0) {
+ EDEB_ERR(4, "set pagebuf failed, e_mr=%p pginfo=%p type=%x "
+ "num_pages=%lx kpage=%p",
+ e_mr, pginfo, pginfo->type, pginfo->num_pages, kpage);
+ goto ehca_rereg_mr_rereg1_exit1;
+ }
+ rpage = ehca_kv_to_g(kpage);
+ if (rpage == 0) {
+ EDEB_ERR(4, "kpage=%p", kpage);
+ retcode = -EFAULT;
+ goto ehca_rereg_mr_rereg1_exit1;
+ }
+ rc = hipz_h_reregister_pmr(shca->ipz_hca_handle, pfmr, &shca->pf,
+ &e_mr->ipz_mr_handle, (u64)iova_start,
+ size, hipz_acl, e_pd->fw_pd, rpage,
+ &iova_start_out, lkey, rkey);
+ if (rc != H_Success) {
+		/*
+		 * reregistration unsuccessful; retry with the three-hCall
+		 * sequence, e.g. required in case of H_MR_CONDITION
+		 * (MW bound or MR is shared)
+		 */
+ EDEB(6, "hipz_h_reregister_pmr failed (Rereg1), rc=%lx "
+ "e_mr=%p", rc, e_mr);
+ *pginfo = pginfo_save;
+ retcode = -EAGAIN;
+ } else if ((u64 *)iova_start_out != iova_start) {
+ EDEB_ERR(4, "PHYP changed iova_start in rereg_pmr, "
+ "iova_start=%p iova_start_out=%lx e_mr=%p "
+ "mr_handle=%lx lkey=%x", iova_start, iova_start_out,
+ e_mr, e_mr->ipz_mr_handle.handle, e_mr->ib.ib_mr.lkey);
+ retcode = -EFAULT;
+ } else {
+ /* successful reregistration */
+ /* note: start and start_out are identical for eServer HCAs */
+ e_mr->num_pages = pginfo->num_pages;
+ e_mr->start = iova_start;
+ e_mr->size = size;
+ e_mr->acl = acl;
+ }
+
+ ehca_rereg_mr_rereg1_exit1:
+ kfree(kpage);
+ ehca_rereg_mr_rereg1_exit0:
+ if ((retcode == 0) || (retcode == -EAGAIN))
+ EDEB_EX(7, "retcode=%x rc=%lx lkey=%x rkey=%x pginfo=%p "
+ "num_pages=%lx",
+ retcode, rc, *lkey, *rkey, pginfo, pginfo->num_pages);
+ else
+ EDEB_EX(4, "retcode=%x rc=%lx lkey=%x rkey=%x pginfo=%p "
+ "num_pages=%lx",
+ retcode, rc, *lkey, *rkey, pginfo, pginfo->num_pages);
+ return (retcode);
+} /* end ehca_rereg_mr_rereg1() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+int ehca_rereg_mr(struct ehca_shca *shca,
+ struct ehca_mr *e_mr,
+ u64 *iova_start,
+ u64 size,
+ int acl,
+ struct ehca_pd *e_pd,
+ struct ehca_mr_pginfo *pginfo,
+ u32 *lkey,
+ u32 *rkey)
+{
+ int retcode = 0;
+ u64 rc = H_Success;
+ struct ehca_pfmr *pfmr = &e_mr->pf;
+ int Rereg1Hcall = TRUE; /* TRUE: use hipz_h_reregister_pmr directly */
+ int Rereg3Hcall = FALSE; /* TRUE: use 3 hipz calls for reregistration */
+ struct ehca_bridge_handle save_bridge;
+
+ EDEB_EN(7, "shca=%p e_mr=%p iova_start=%p size=%lx acl=%x "
+ "e_pd=%p pginfo=%p num_pages=%lx", shca, e_mr,
+ iova_start, size, acl, e_pd, pginfo, pginfo->num_pages);
+
+ /* first determine reregistration hCall(s) */
+ if ((pginfo->num_pages > 512) || (e_mr->num_pages > 512) ||
+ (pginfo->num_pages > e_mr->num_pages)) {
+ EDEB(7, "Rereg3 case, pginfo->num_pages=%lx "
+ "e_mr->num_pages=%x", pginfo->num_pages, e_mr->num_pages);
+ Rereg1Hcall = FALSE;
+ Rereg3Hcall = TRUE;
+ }
+
+ if (e_mr->flags & EHCA_MR_FLAG_MAXMR) { /* check for max-MR */
+ Rereg1Hcall = FALSE;
+ Rereg3Hcall = TRUE;
+ e_mr->flags &= ~EHCA_MR_FLAG_MAXMR;
+ EDEB(4, "Rereg MR for max-MR! e_mr=%p", e_mr);
+ }
+
+ if (Rereg1Hcall) {
+ retcode = ehca_rereg_mr_rereg1(shca, e_mr, iova_start, size,
+ acl, e_pd, pginfo, lkey, rkey);
+ if (retcode != 0) {
+ if (retcode == -EAGAIN)
+ Rereg3Hcall = TRUE;
+ else
+ goto ehca_rereg_mr_exit0;
+ }
+ }
+
+ if (Rereg3Hcall) {
+ struct ehca_mr save_mr;
+
+ /* first deregister old MR */
+ rc = hipz_h_free_resource_mr(shca->ipz_hca_handle, pfmr,
+ &e_mr->ipz_mr_handle);
+ if (rc != H_Success) {
+ EDEB_ERR(4, "hipz_free_mr failed, rc=%lx e_mr=%p "
+ "hca_hndl=%lx mr_hndl=%lx mr->lkey=%x",
+ rc, e_mr, shca->ipz_hca_handle.handle,
+ e_mr->ipz_mr_handle.handle,
+ e_mr->ib.ib_mr.lkey);
+ retcode = ehca_mrmw_map_rc_free_mr(rc);
+ goto ehca_rereg_mr_exit0;
+ }
+ /* clean ehca_mr_t, without changing struct ib_mr and lock */
+ save_bridge = pfmr->bridge;
+ save_mr = *e_mr;
+ ehca_mr_deletenew(e_mr);
+
+ /* set some MR values */
+ e_mr->flags = save_mr.flags;
+ pfmr->bridge = save_bridge;
+ e_mr->fmr_page_size = save_mr.fmr_page_size;
+ e_mr->fmr_max_pages = save_mr.fmr_max_pages;
+ e_mr->fmr_max_maps = save_mr.fmr_max_maps;
+ e_mr->fmr_map_cnt = save_mr.fmr_map_cnt;
+
+ retcode = ehca_reg_mr(shca, e_mr, iova_start, size, acl,
+ e_pd, pginfo, lkey, rkey);
+ if (retcode != 0) {
+			u32 offset = offsetof(struct ehca_mr, flags);
+ memcpy(&e_mr->flags, &(save_mr.flags),
+ sizeof(struct ehca_mr) - offset);
+ goto ehca_rereg_mr_exit0;
+ }
+ }
+
+ ehca_rereg_mr_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "retcode=%x shca=%p e_mr=%p iova_start=%p size=%lx "
+ "acl=%x e_pd=%p pginfo=%p num_pages=%lx lkey=%x "
+ "rkey=%x Rereg1Hcall=%x Rereg3Hcall=%x",
+ retcode, shca, e_mr, iova_start, size, acl, e_pd,
+ pginfo, pginfo->num_pages, *lkey, *rkey, Rereg1Hcall,
+ Rereg3Hcall);
+ else
+ EDEB_EX(4, "retcode=%x shca=%p e_mr=%p iova_start=%p size=%lx "
+ "acl=%x e_pd=%p pginfo=%p num_pages=%lx lkey=%x "
+ "rkey=%x Rereg1Hcall=%x Rereg3Hcall=%x",
+ retcode, shca, e_mr, iova_start, size, acl, e_pd,
+ pginfo, pginfo->num_pages, *lkey, *rkey, Rereg1Hcall,
+ Rereg3Hcall);
+
+ return (retcode);
+} /* end ehca_rereg_mr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
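The choice between the single reregister hCall and the three-hCall fallback in ehca_rereg_mr() above reduces to a pure predicate. A standalone sketch (hypothetical name, not driver code):

```c
/*
 * The single reregister hCall is usable only when both the old and the
 * new page lists fit into one 512-page shot and the region does not
 * grow; a max-MR always takes the three-hCall path. Hypothetical name,
 * standalone sketch only.
 */
int rereg_needs_3_hcalls(unsigned long new_pages, unsigned int old_pages,
			 int is_maxmr)
{
	return new_pages > 512 || old_pages > 512 ||
	       new_pages > old_pages || is_maxmr;
}
```

Note that even when this predicate selects the single hCall, an H_MR_CONDITION failure (-EAGAIN from ehca_rereg_mr_rereg1()) still falls back to the three-hCall sequence.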
+
+int ehca_unmap_one_fmr(struct ehca_shca *shca,
+ struct ehca_mr *e_fmr)
+{
+ int retcode = 0;
+ u64 rc = H_Success;
+ struct ehca_pfmr *pfmr = &e_fmr->pf;
+ int Rereg1Hcall = TRUE; /* TRUE: use hipz_mr_reregister directly */
+ int Rereg3Hcall = FALSE; /* TRUE: use 3 hipz calls for unmapping */
+ struct ehca_bridge_handle save_bridge;
+	struct ehca_pd *e_pd = NULL;
+ struct ehca_mr save_fmr;
+ u32 tmp_lkey = 0;
+ u32 tmp_rkey = 0;
+	struct ehca_mr_pginfo pginfo = {0};
+
+ EDEB_EN(7, "shca=%p e_fmr=%p", shca, e_fmr);
+
+ /* first check if reregistration hCall can be used for unmap */
+ if (e_fmr->fmr_max_pages > 512) {
+ Rereg1Hcall = FALSE;
+ Rereg3Hcall = TRUE;
+ }
+
+ e_pd = container_of(e_fmr->ib.ib_fmr.pd, struct ehca_pd, ib_pd);
+
+ if (Rereg1Hcall) {
+		/*
+		 * note: after a rereg hcall with len=0, the rereg hcall
+		 * must be used again to register pages
+		 */
+ u64 start_out = 0;
+ rc = hipz_h_reregister_pmr(shca->ipz_hca_handle, pfmr,
+ &shca->pf, &e_fmr->ipz_mr_handle, 0,
+ 0, 0, e_pd->fw_pd, 0, &start_out,
+ &tmp_lkey, &tmp_rkey);
+ if (rc != H_Success) {
+			/*
+			 * should not happen: the length was checked above,
+			 * FMRs are not shared, and no MW is bound to FMRs
+			 */
+ EDEB_ERR(4, "hipz_reregister_pmr failed (Rereg1), "
+ "rc=%lx e_fmr=%p hca_hndl=%lx mr_hndl=%lx "
+ "lkey=%x", rc, e_fmr,
+ shca->ipz_hca_handle.handle,
+ e_fmr->ipz_mr_handle.handle,
+ e_fmr->ib.ib_fmr.lkey);
+ Rereg3Hcall = TRUE;
+ } else {
+ /* successful reregistration */
+ e_fmr->start = 0;
+ e_fmr->size = 0;
+ }
+ }
+
+ if (Rereg3Hcall) {
+ struct ehca_mr save_mr;
+
+ /* first free old FMR */
+ rc = hipz_h_free_resource_mr(shca->ipz_hca_handle, pfmr,
+ &e_fmr->ipz_mr_handle);
+ if (rc != H_Success) {
+ EDEB_ERR(4, "hipz_free_mr failed, rc=%lx e_fmr=%p "
+ "hca_hndl=%lx mr_hndl=%lx lkey=%x", rc, e_fmr,
+ shca->ipz_hca_handle.handle,
+ e_fmr->ipz_mr_handle.handle,
+ e_fmr->ib.ib_fmr.lkey);
+ retcode = ehca_mrmw_map_rc_free_mr(rc);
+ goto ehca_unmap_one_fmr_exit0;
+ }
+ /* clean ehca_mr_t, without changing lock */
+ save_bridge = pfmr->bridge;
+ save_fmr = *e_fmr;
+ ehca_mr_deletenew(e_fmr);
+
+ /* set some MR values */
+ e_fmr->flags = save_fmr.flags;
+ pfmr->bridge = save_bridge;
+ e_fmr->fmr_page_size = save_fmr.fmr_page_size;
+ e_fmr->fmr_max_pages = save_fmr.fmr_max_pages;
+ e_fmr->fmr_max_maps = save_fmr.fmr_max_maps;
+ e_fmr->fmr_map_cnt = save_fmr.fmr_map_cnt;
+ e_fmr->acl = save_fmr.acl;
+
+ pginfo.type = EHCA_MR_PGI_FMR;
+ pginfo.num_pages = 0;
+ retcode = ehca_reg_mr(shca, e_fmr, 0,
+ (e_fmr->fmr_max_pages *
+ e_fmr->fmr_page_size),
+ e_fmr->acl, e_pd, &pginfo, &tmp_lkey,
+ &tmp_rkey);
+ if (retcode != 0) {
+			u32 offset = offsetof(struct ehca_mr, flags);
+			memcpy(&e_fmr->flags, &save_fmr.flags,
+			       sizeof(struct ehca_mr) - offset);
+ goto ehca_unmap_one_fmr_exit0;
+ }
+ }
+
+ ehca_unmap_one_fmr_exit0:
+ EDEB_EX(7, "retcode=%x tmp_lkey=%x tmp_rkey=%x fmr_max_pages=%x "
+ "Rereg1Hcall=%x Rereg3Hcall=%x", retcode, tmp_lkey, tmp_rkey,
+ e_fmr->fmr_max_pages, Rereg1Hcall, Rereg3Hcall);
+ return (retcode);
+} /* end ehca_unmap_one_fmr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+int ehca_reg_smr(struct ehca_shca *shca,
+ struct ehca_mr *e_origmr,
+ struct ehca_mr *e_newmr,
+ u64 *iova_start,
+ int acl,
+ struct ehca_pd *e_pd,
+ u32 *lkey,
+ u32 *rkey)
+{
+ int retcode = 0;
+ u64 rc = H_Success;
+ struct ehca_pfmr *pfmr = &e_newmr->pf;
+ u32 hipz_acl = 0;
+
+ EDEB_EN(7,"shca=%p e_origmr=%p e_newmr=%p iova_start=%p acl=%x e_pd=%p",
+ shca, e_origmr, e_newmr, iova_start, acl, e_pd);
+
+ ehca_mrmw_map_acl(acl, &hipz_acl);
+ ehca_mrmw_set_pgsize_hipz_acl(&hipz_acl);
+
+ rc = hipz_h_register_smr(shca->ipz_hca_handle, pfmr, &e_origmr->pf,
+ &shca->pf, &e_origmr->ipz_mr_handle,
+ (u64)iova_start, hipz_acl, e_pd->fw_pd,
+ &e_newmr->ipz_mr_handle, lkey, rkey);
+ if (rc != H_Success) {
+ EDEB_ERR(4, "hipz_reg_smr failed, rc=%lx shca=%p e_origmr=%p "
+ "e_newmr=%p iova_start=%p acl=%x e_pd=%p hca_hndl=%lx "
+ "mr_hndl=%lx lkey=%x", rc, shca, e_origmr, e_newmr,
+ iova_start, acl, e_pd, shca->ipz_hca_handle.handle,
+ e_origmr->ipz_mr_handle.handle,
+ e_origmr->ib.ib_mr.lkey);
+ retcode = ehca_mrmw_map_rc_reg_smr(rc);
+ goto ehca_reg_smr_exit0;
+ }
+ /* successful registration */
+ e_newmr->num_pages = e_origmr->num_pages;
+ e_newmr->start = iova_start;
+ e_newmr->size = e_origmr->size;
+ e_newmr->acl = acl;
+
+ ehca_reg_smr_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "retcode=%x lkey=%x rkey=%x",
+ retcode, *lkey, *rkey);
+ else
+ EDEB_EX(4, "retcode=%x shca=%p e_origmr=%p e_newmr=%p "
+ "iova_start=%p acl=%x e_pd=%p", retcode,
+ shca, e_origmr, e_newmr, iova_start, acl, e_pd);
+ return (retcode);
+} /* end ehca_reg_smr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+int ehca_reg_internal_maxmr(
+ struct ehca_shca *shca,
+ struct ehca_pd *e_pd,
+ struct ehca_mr **e_maxmr)
+{
+ int retcode = 0;
+	struct ehca_mr *e_mr = NULL;
+	u64 *iova_start = NULL;
+ u64 size_maxmr = 0;
+	struct ehca_mr_pginfo pginfo = {0};
+ struct ib_phys_buf ib_pbuf;
+ u32 num_pages_mr = 0;
+
+ EDEB_EN(7, "shca=%p e_pd=%p e_maxmr=%p", shca, e_pd, e_maxmr);
+
+ if (ehca_adr_bad(shca) || ehca_adr_bad(e_pd) || ehca_adr_bad(e_maxmr)) {
+ EDEB_ERR(4, "bad input values: shca=%p e_pd=%p e_maxmr=%p",
+ shca, e_pd, e_maxmr);
+ retcode = -EINVAL;
+ goto ehca_reg_internal_maxmr_exit0;
+ }
+
+ e_mr = ehca_mr_new();
+ if (!e_mr) {
+ EDEB_ERR(4, "out of memory");
+ retcode = -ENOMEM;
+ goto ehca_reg_internal_maxmr_exit0;
+ }
+ e_mr->flags |= EHCA_MR_FLAG_MAXMR;
+
+ /* register internal max-MR on HCA */
+ size_maxmr = (u64)high_memory - PAGE_OFFSET;
+ EDEB(9, "high_memory=%p PAGE_OFFSET=%lx", high_memory, PAGE_OFFSET);
+ iova_start = (u64 *)KERNELBASE;
+ ib_pbuf.addr = 0;
+ ib_pbuf.size = size_maxmr;
+ num_pages_mr =
+ ((((u64)iova_start % PAGE_SIZE) + size_maxmr +
+ PAGE_SIZE - 1) / PAGE_SIZE);
+
+ pginfo.type = EHCA_MR_PGI_PHYS;
+ pginfo.num_pages = num_pages_mr;
+ pginfo.num_phys_buf = 1;
+ pginfo.phys_buf_array = &ib_pbuf;
+
+ retcode = ehca_reg_mr(shca, e_mr, iova_start, size_maxmr, 0, e_pd,
+ &pginfo, &e_mr->ib.ib_mr.lkey,
+ &e_mr->ib.ib_mr.rkey);
+ if (retcode != 0) {
+ EDEB_ERR(4, "reg of internal max MR failed, e_mr=%p "
+ "iova_start=%p size_maxmr=%lx num_pages_mr=%x",
+ e_mr, iova_start, size_maxmr, num_pages_mr);
+ goto ehca_reg_internal_maxmr_exit1;
+ }
+
+ /* successful registration of all pages */
+ e_mr->ib.ib_mr.device = e_pd->ib_pd.device;
+ e_mr->ib.ib_mr.pd = &e_pd->ib_pd;
+ e_mr->ib.ib_mr.uobject = NULL;
+ atomic_inc(&(e_pd->ib_pd.usecnt));
+ atomic_set(&(e_mr->ib.ib_mr.usecnt), 0);
+ *e_maxmr = e_mr;
+ goto ehca_reg_internal_maxmr_exit0;
+
+ ehca_reg_internal_maxmr_exit1:
+ ehca_mr_delete(e_mr);
+ ehca_reg_internal_maxmr_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "*e_maxmr=%p lkey=%x rkey=%x",
+ *e_maxmr, (*e_maxmr)->ib.ib_mr.lkey,
+ (*e_maxmr)->ib.ib_mr.rkey);
+ else
+ EDEB_EX(4, "retcode=%x shca=%p e_pd=%p e_maxmr=%p",
+ retcode, shca, e_pd, e_maxmr);
+ return (retcode);
+} /* end ehca_reg_internal_maxmr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
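The num_pages_mr computation in ehca_reg_internal_maxmr() above folds the in-page offset of iova_start into the round-up. Isolated as a standalone sketch (hypothetical name, hardcoded 4k page size, not driver code):

```c
/*
 * Pages needed to cover [iova_start, iova_start + size): the offset of
 * iova_start within its first page is added before rounding up, so a
 * region straddling a page boundary counts both pages. Hypothetical
 * name and a hardcoded 4k page size, standalone sketch only.
 */
unsigned long mr_num_pages(unsigned long iova_start, unsigned long size)
{
	const unsigned long pgsz = 4096;

	return ((iova_start % pgsz) + size + pgsz - 1) / pgsz;
}
```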
+
+int ehca_reg_maxmr(struct ehca_shca *shca,
+ struct ehca_mr *e_newmr,
+ u64 *iova_start,
+ int acl,
+ struct ehca_pd *e_pd,
+ u32 *lkey,
+ u32 *rkey)
+{
+ int retcode = 0;
+ u64 rc = H_Success;
+ struct ehca_pfmr *pfmr = &e_newmr->pf;
+ struct ehca_mr *e_origmr = shca->maxmr;
+ u32 hipz_acl = 0;
+
+ EDEB_EN(7,"shca=%p e_origmr=%p e_newmr=%p iova_start=%p acl=%x e_pd=%p",
+ shca, e_origmr, e_newmr, iova_start, acl, e_pd);
+
+ ehca_mrmw_map_acl(acl, &hipz_acl);
+ ehca_mrmw_set_pgsize_hipz_acl(&hipz_acl);
+
+ rc = hipz_h_register_smr(shca->ipz_hca_handle, pfmr, &e_origmr->pf,
+ &shca->pf, &e_origmr->ipz_mr_handle,
+ (u64)iova_start, hipz_acl, e_pd->fw_pd,
+ &e_newmr->ipz_mr_handle, lkey, rkey);
+ if (rc != H_Success) {
+ EDEB_ERR(4, "hipz_reg_smr failed, rc=%lx e_origmr=%p "
+ "hca_hndl=%lx mr_hndl=%lx lkey=%x",
+ rc, e_origmr, shca->ipz_hca_handle.handle,
+ e_origmr->ipz_mr_handle.handle,
+ e_origmr->ib.ib_mr.lkey);
+ retcode = ehca_mrmw_map_rc_reg_smr(rc);
+ goto ehca_reg_maxmr_exit0;
+ }
+ /* successful registration */
+ e_newmr->num_pages = e_origmr->num_pages;
+ e_newmr->start = iova_start;
+ e_newmr->size = e_origmr->size;
+ e_newmr->acl = acl;
+
+ ehca_reg_maxmr_exit0:
+ EDEB_EX(7, "retcode=%x lkey=%x rkey=%x", retcode, *lkey, *rkey);
+ return (retcode);
+} /* end ehca_reg_maxmr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+int ehca_dereg_internal_maxmr(struct ehca_shca *shca)
+{
+ int retcode = 0;
+	struct ehca_mr *e_maxmr = NULL;
+	struct ib_pd *ib_pd = NULL;
+
+ EDEB_EN(7, "shca=%p shca->maxmr=%p", shca, shca->maxmr);
+
+ if (shca->maxmr == 0) {
+ EDEB_ERR(4, "bad call, shca=%p", shca);
+ retcode = -EINVAL;
+ goto ehca_dereg_internal_maxmr_exit0;
+ }
+
+ e_maxmr = shca->maxmr;
+ ib_pd = e_maxmr->ib.ib_mr.pd;
+ shca->maxmr = 0; /* remove internal max-MR indication from SHCA */
+
+ retcode = ehca_dereg_mr(&e_maxmr->ib.ib_mr);
+ if (retcode != 0) {
+ EDEB_ERR(3, "dereg internal max-MR failed, "
+ "retcode=%x e_maxmr=%p shca=%p lkey=%x",
+ retcode, e_maxmr, shca, e_maxmr->ib.ib_mr.lkey);
+ shca->maxmr = e_maxmr;
+ goto ehca_dereg_internal_maxmr_exit0;
+ }
+
+ atomic_dec(&ib_pd->usecnt);
+
+ ehca_dereg_internal_maxmr_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "");
+ else
+ EDEB_EX(4, "retcode=%x shca=%p shca->maxmr=%p",
+ retcode, shca, shca->maxmr);
+ return (retcode);
+} /* end ehca_dereg_internal_maxmr() */
diff --git a/drivers/infiniband/hw/ehca/ehca_mrmw.h b/drivers/infiniband/hw/ehca/ehca_mrmw.h
new file mode 100644
index 0000000..4df4b5b
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_mrmw.h
@@ -0,0 +1,739 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * MR/MW declarations and inline functions
+ *
+ * Authors: Dietmar Decker <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_mrmw.h,v 1.59 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef _EHCA_MRMW_H_
+#define _EHCA_MRMW_H_
+
+#undef DEB_PREFIX
+#define DEB_PREFIX "mrmw"
+
+#include "hipz_structs.h"
+
+
+int ehca_reg_mr(struct ehca_shca *shca,
+ struct ehca_mr *e_mr,
+ u64 *iova_start,
+ u64 size,
+ int acl,
+ struct ehca_pd *e_pd,
+ struct ehca_mr_pginfo *pginfo,
+		u32 *lkey, /* out */
+		u32 *rkey); /* out */
+
+int ehca_reg_mr_rpages(struct ehca_shca *shca,
+ struct ehca_mr *e_mr,
+ struct ehca_mr_pginfo *pginfo);
+
+int ehca_rereg_mr(struct ehca_shca *shca,
+ struct ehca_mr *e_mr,
+ u64 *iova_start,
+ u64 size,
+ int mr_access_flags,
+ struct ehca_pd *e_pd,
+ struct ehca_mr_pginfo *pginfo,
+		  u32 *lkey, /* out */
+		  u32 *rkey); /* out */
+
+int ehca_unmap_one_fmr(struct ehca_shca *shca,
+ struct ehca_mr *e_fmr);
+
+int ehca_reg_smr(struct ehca_shca *shca,
+ struct ehca_mr *e_origmr,
+ struct ehca_mr *e_newmr,
+ u64 *iova_start,
+ int acl,
+ struct ehca_pd *e_pd,
+		 u32 *lkey, /* out */
+		 u32 *rkey); /* out */
+
+/**
+ * ehca_reg_internal_maxmr() - register internal max-MR with the SHCA
+ * @shca:  internal HCA
+ * @e_pd:  protection domain the MR is registered with
+ * @maxmr: out: the new internal max-MR
+ */
+int ehca_reg_internal_maxmr(struct ehca_shca *shca,
+			    struct ehca_pd *e_pd,
+			    struct ehca_mr **maxmr);
+
+int ehca_reg_maxmr(struct ehca_shca *shca,
+ struct ehca_mr *e_newmr,
+ u64 *iova_start,
+ int acl,
+ struct ehca_pd *e_pd,
+ u32 *lkey,
+ u32 *rkey);
+
+int ehca_dereg_internal_maxmr(struct ehca_shca *shca);
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+/**
+ * ehca_mr_chk_buf_and_calc_size() - check the physical buffer array of the
+ * MR verbs for validity and calculate the total MR size
+ * @phys_buf_array: physical buffer array to check
+ * @num_phys_buf:   number of entries in @phys_buf_array
+ * @iova_start:     requested I/O virtual start address
+ * @size:           out: total size of all buffers
+ */
+static inline int ehca_mr_chk_buf_and_calc_size(
+	struct ib_phys_buf *phys_buf_array,
+	int num_phys_buf,
+	u64 *iova_start,
+	u64 *size)
+{
+ struct ib_phys_buf *pbuf = phys_buf_array;
+ u64 size_count = 0;
+ u32 i;
+
+ if (num_phys_buf == 0) {
+ EDEB_ERR(4, "bad phys buf array len, num_phys_buf=0");
+ return (-EINVAL);
+ }
+ /* check first buffer */
+ if (((u64)iova_start & ~PAGE_MASK) != (pbuf->addr & ~PAGE_MASK)) {
+ EDEB_ERR(4, "iova_start/addr mismatch, iova_start=%p "
+ "pbuf->addr=%lx pbuf->size=%lx",
+ iova_start, pbuf->addr, pbuf->size);
+ return (-EINVAL);
+ }
+ if (((pbuf->addr + pbuf->size) % PAGE_SIZE) &&
+ (num_phys_buf > 1)) {
+ EDEB_ERR(4, "addr/size mismatch in 1st buf, pbuf->addr=%lx "
+ "pbuf->size=%lx", pbuf->addr, pbuf->size);
+ return (-EINVAL);
+ }
+
+ for (i = 0; i < num_phys_buf; i++) {
+ if ((i > 0) && (pbuf->addr % PAGE_SIZE)) {
+ EDEB_ERR(4, "bad address, i=%x pbuf->addr=%lx "
+ "pbuf->size=%lx", i, pbuf->addr, pbuf->size);
+ return (-EINVAL);
+ }
+ if (((i > 0) && /* not 1st */
+ (i < (num_phys_buf - 1)) && /* not last */
+ (pbuf->size % PAGE_SIZE)) || (pbuf->size == 0)) {
+ EDEB_ERR(4, "bad size, i=%x pbuf->size=%lx",
+ i, pbuf->size);
+ return (-EINVAL);
+ }
+ size_count += pbuf->size;
+ pbuf++;
+ }
+
+ *size = size_count;
+ return (0);
+} /* end ehca_mr_chk_buf_and_calc_size() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
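The buffer rules enforced by ehca_mr_chk_buf_and_calc_size() above can be restated over plain (addr, size) arrays. A standalone sketch (hypothetical names, fixed 4k pages, not driver code):

```c
/*
 * Rules checked above: the first buffer must share its in-page offset
 * with iova_start; with more than one buffer, the first must end
 * page-aligned; every later buffer must start page-aligned; all but
 * the first and last must be whole pages; no buffer may be empty.
 * On success, *total receives the summed size. Hypothetical names,
 * standalone sketch only.
 */
#define SKETCH_PGSZ 4096UL

int check_phys_bufs(const unsigned long *addr, const unsigned long *size,
		    int n, unsigned long iova_start, unsigned long *total)
{
	unsigned long sum = 0;
	int i;

	if (n == 0)
		return -1;
	if (iova_start % SKETCH_PGSZ != addr[0] % SKETCH_PGSZ)
		return -1;
	if (n > 1 && (addr[0] + size[0]) % SKETCH_PGSZ)
		return -1;
	for (i = 0; i < n; i++) {
		if (i > 0 && addr[i] % SKETCH_PGSZ)
			return -1;
		if ((i > 0 && i < n - 1 && size[i] % SKETCH_PGSZ) ||
		    size[i] == 0)
			return -1;
		sum += size[i];
	}
	*total = sum;
	return 0;
}
```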
+
+/**
+ * ehca_fmr_check_page_list() - check the page list of the map FMR verb
+ * for validity
+ * @e_fmr:     FMR the page list belongs to
+ * @page_list: page list to check
+ * @list_len:  number of entries in @page_list
+ */
+static inline int ehca_fmr_check_page_list(
+	struct ehca_mr *e_fmr,
+	u64 *page_list,
+	int list_len)
+{
+ u32 i;
+	u64 *page = NULL;
+
+ if (ehca_adr_bad(page_list)) {
+ EDEB_ERR(4, "bad page_list, page_list=%p fmr=%p",
+ page_list, e_fmr);
+ return (-EINVAL);
+ }
+
+ if ((list_len == 0) || (list_len > e_fmr->fmr_max_pages)) {
+ EDEB_ERR(4, "bad list_len, list_len=%x e_fmr->fmr_max_pages=%x "
+ "fmr=%p", list_len, e_fmr->fmr_max_pages, e_fmr);
+ return (-EINVAL);
+ }
+
+ /* each page must be aligned */
+ page = page_list;
+ for (i = 0; i < list_len; i++) {
+ if (*page % PAGE_SIZE) {
+ EDEB_ERR(4, "bad page, i=%x *page=%lx page=%p "
+ "fmr=%p", i, *page, page, e_fmr);
+ return (-EINVAL);
+ }
+ page++;
+ }
+
+ return (0);
+} /* end ehca_fmr_check_page_list() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+/**
+ * ehca_set_pagebuf() - set up the page buffer from the page info
+ * @e_mr:   MR the pages belong to
+ * @pginfo: page info with the current iteration state
+ * @number: number of pages to set up
+ * @kpage:  out: page buffer to fill
+ */
+static inline int ehca_set_pagebuf(struct ehca_mr *e_mr,
+				   struct ehca_mr_pginfo *pginfo,
+				   u32 number,
+				   u64 *kpage)
+{
+ int retcode = 0;
+ struct ib_umem_chunk *prev_chunk = NULL;
+ struct ib_umem_chunk *chunk = NULL;
+ struct ib_phys_buf *pbuf = NULL;
+ u64 *fmrlist = NULL;
+ u64 numpg = 0;
+ u64 pgaddr = 0;
+ u32 i = 0;
+ u32 j = 0;
+
+
+ EDEB_EN(7, "pginfo=%p type=%x num_pages=%lx next_buf=%lx next_page=%lx "
+ "number=%x kpage=%p page_count=%lx next_listelem=%lx "
+ "region=%p next_chunk=%p next_nmap=%lx",
+ pginfo, pginfo->type, pginfo->num_pages, pginfo->next_buf,
+ pginfo->next_page, number, kpage, pginfo->page_count,
+ pginfo->next_listelem, pginfo->region, pginfo->next_chunk,
+ pginfo->next_nmap);
+
+ if (pginfo->type == EHCA_MR_PGI_PHYS) {
+ /* loop over desired phys_buf_array entries */
+ while (i < number) {
+ pbuf = pginfo->phys_buf_array + pginfo->next_buf;
+ numpg = ((pbuf->size + PAGE_SIZE - 1) / PAGE_SIZE);
+ while (pginfo->next_page < numpg) {
+ /* sanity check */
+ if (pginfo->page_count >= pginfo->num_pages) {
+ EDEB_ERR(4, "page_count >= num_pages, "
+ "page_count=%lx num_pages=%lx "
+ "i=%x", pginfo->page_count,
+ pginfo->num_pages, i);
+ retcode = -EFAULT;
+ goto ehca_set_pagebuf_exit0;
+ }
+ *kpage = phys_to_abs((pbuf->addr & PAGE_MASK)
+ + (pginfo->next_page *
+ PAGE_SIZE));
+ if ((*kpage == 0) && (pbuf->addr != 0)) {
+ EDEB_ERR(4, "pbuf->addr=%lx"
+ " pbuf->size=%lx"
+ " next_page=%lx",
+ pbuf->addr, pbuf->size,
+ pginfo->next_page);
+ retcode = -EFAULT;
+ goto ehca_set_pagebuf_exit0;
+ }
+ (pginfo->next_page)++;
+ (pginfo->page_count)++;
+ kpage++;
+ i++;
+ if (i >= number) break;
+ }
+ if (pginfo->next_page >= numpg) {
+ (pginfo->next_buf)++;
+ pginfo->next_page = 0;
+ }
+ }
+ } else if (pginfo->type == EHCA_MR_PGI_USER) {
+ /* loop over desired chunk entries */
+ /* (@TODO: add support for large pages) */
+ chunk = pginfo->next_chunk;
+ prev_chunk = pginfo->next_chunk;
+ list_for_each_entry_continue(chunk,
+ (&(pginfo->region->chunk_list)),
+ list) {
+ EDEB(9, "chunk->page_list[0]=%lx",
+ (u64)sg_dma_address(&chunk->page_list[0]));
+ for (i = pginfo->next_nmap; i < chunk->nmap; i++) {
+ pgaddr = ( page_to_pfn(chunk->page_list[i].page)
+ << PAGE_SHIFT );
+ *kpage = phys_to_abs(pgaddr);
+ EDEB(9,"pgaddr=%lx *kpage=%lx", pgaddr, *kpage);
+ if (*kpage == 0) {
+ EDEB_ERR(4, "chunk->page_list[i]=%lx"
+ " i=%x mr=%p",
+ (u64)sg_dma_address(
+ &chunk->page_list[i]),
+ i, e_mr);
+ retcode = -EFAULT;
+ goto ehca_set_pagebuf_exit0;
+ }
+ (pginfo->page_count)++;
+ (pginfo->next_nmap)++;
+ kpage++;
+ j++;
+ if (j >= number) break;
+ }
+ if ( (pginfo->next_nmap >= chunk->nmap) &&
+ (j >= number) ) {
+ pginfo->next_nmap = 0;
+ prev_chunk = chunk;
+ break;
+ } else if (pginfo->next_nmap >= chunk->nmap) {
+ pginfo->next_nmap = 0;
+ prev_chunk = chunk;
+ } else if (j >= number)
+ break;
+ else
+ prev_chunk = chunk;
+ }
+ pginfo->next_chunk =
+ list_prepare_entry(prev_chunk,
+ (&(pginfo->region->chunk_list)),
+ list);
+ } else if (pginfo->type == EHCA_MR_PGI_FMR) {
+ /* loop over desired page_list entries */
+ fmrlist = pginfo->page_list + pginfo->next_listelem;
+ for (i = 0; i < number; i++) {
+ *kpage = phys_to_abs(*fmrlist);
+ if (*kpage == 0) {
+ EDEB_ERR(4, "*fmrlist=%lx fmrlist=%p"
+ " next_listelem=%lx", *fmrlist,
+ fmrlist, pginfo->next_listelem);
+ retcode = -EFAULT;
+ goto ehca_set_pagebuf_exit0;
+ }
+ (pginfo->next_listelem)++;
+ (pginfo->page_count)++;
+ fmrlist++;
+ kpage++;
+ }
+ } else {
+ EDEB_ERR(4, "bad pginfo->type=%x", pginfo->type);
+ retcode = -EFAULT;
+ goto ehca_set_pagebuf_exit0;
+ }
+
+ ehca_set_pagebuf_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "retcode=%x e_mr=%p pginfo=%p type=%x num_pages=%lx "
+ "next_buf=%lx next_page=%lx number=%x kpage=%p "
+ "page_count=%lx i=%x next_listelem=%lx region=%p "
+ "next_chunk=%p next_nmap=%lx",
+ retcode, e_mr, pginfo, pginfo->type, pginfo->num_pages,
+ pginfo->next_buf, pginfo->next_page, number, kpage,
+ pginfo->page_count, i, pginfo->next_listelem,
+ pginfo->region, pginfo->next_chunk, pginfo->next_nmap);
+ else
+ EDEB_EX(4, "retcode=%x e_mr=%p pginfo=%p type=%x num_pages=%lx "
+ "next_buf=%lx next_page=%lx number=%x kpage=%p "
+ "page_count=%lx i=%x next_listelem=%lx region=%p "
+ "next_chunk=%p next_nmap=%lx",
+ retcode, e_mr, pginfo, pginfo->type, pginfo->num_pages,
+ pginfo->next_buf, pginfo->next_page, number, kpage,
+ pginfo->page_count, i, pginfo->next_listelem,
+ pginfo->region, pginfo->next_chunk, pginfo->next_nmap);
+ return (retcode);
+} /* end ehca_set_pagebuf() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+/**
+ * ehca_set_pagebuf_1() - set up one page from the page info
+ * @e_mr:   MR the page belongs to
+ * @pginfo: page info with the current iteration state
+ * @rpage:  out: the resulting page address
+ */
+static inline int ehca_set_pagebuf_1(struct ehca_mr *e_mr,
+				     struct ehca_mr_pginfo *pginfo,
+				     u64 *rpage)
+{
+ int retcode = 0;
+	struct ib_phys_buf *tmp_pbuf = NULL;
+	u64 *tmp_fmrlist = NULL;
+	struct ib_umem_chunk *chunk = NULL;
+	struct ib_umem_chunk *prev_chunk = NULL;
+ u64 pgaddr = 0;
+
+ EDEB_EN(7, "pginfo=%p type=%x num_pages=%lx next_buf=%lx next_page=%lx "
+ "rpage=%p page_count=%lx next_listelem=%lx region=%p "
+ "next_chunk=%p next_nmap=%lx",
+ pginfo, pginfo->type, pginfo->num_pages, pginfo->next_buf,
+ pginfo->next_page, rpage, pginfo->page_count,
+ pginfo->next_listelem, pginfo->region, pginfo->next_chunk,
+ pginfo->next_nmap);
+
+ if (pginfo->type == EHCA_MR_PGI_PHYS) {
+ /* sanity check */
+ if (pginfo->page_count >= pginfo->num_pages) {
+ EDEB_ERR(4, "page_count >= num_pages, "
+ "page_count=%lx num_pages=%lx",
+ pginfo->page_count, pginfo->num_pages);
+ retcode = -EFAULT;
+ goto ehca_set_pagebuf_1_exit0;
+ }
+ tmp_pbuf = pginfo->phys_buf_array + pginfo->next_buf;
+ *rpage = phys_to_abs(((tmp_pbuf->addr & PAGE_MASK) +
+ (pginfo->next_page * PAGE_SIZE)));
+ if ((*rpage == 0) && (tmp_pbuf->addr != 0)) {
+ EDEB_ERR(4, "tmp_pbuf->addr=%lx"
+ " tmp_pbuf->size=%lx next_page=%lx",
+ tmp_pbuf->addr, tmp_pbuf->size,
+ pginfo->next_page);
+ retcode = -EFAULT;
+ goto ehca_set_pagebuf_1_exit0;
+ }
+ (pginfo->next_page)++;
+ (pginfo->page_count)++;
+ if (pginfo->next_page >= tmp_pbuf->size / PAGE_SIZE) {
+ (pginfo->next_buf)++;
+ pginfo->next_page = 0;
+ }
+ } else if (pginfo->type == EHCA_MR_PGI_USER) {
+ chunk = pginfo->next_chunk;
+ prev_chunk = pginfo->next_chunk;
+ list_for_each_entry_continue(chunk,
+ (&(pginfo->region->chunk_list)),
+ list) {
+ pgaddr = ( page_to_pfn(chunk->page_list[
+ pginfo->next_nmap].page)
+ << PAGE_SHIFT );
+ *rpage = phys_to_abs(pgaddr);
+ EDEB(9,"pgaddr=%lx *rpage=%lx", pgaddr, *rpage);
+ if (*rpage == 0) {
+ EDEB_ERR(4, "chunk->page_list[]=%lx next_nmap=%lx "
+ "mr=%p", (u64)sg_dma_address(
+ &chunk->page_list[
+ pginfo->next_nmap]),
+ pginfo->next_nmap, e_mr);
+ retcode = -EFAULT;
+ goto ehca_set_pagebuf_1_exit0;
+ }
+ (pginfo->page_count)++;
+ (pginfo->next_nmap)++;
+ if (pginfo->next_nmap >= chunk->nmap) {
+ pginfo->next_nmap = 0;
+ prev_chunk = chunk;
+ }
+ break;
+ }
+ pginfo->next_chunk =
+ list_prepare_entry(prev_chunk,
+ (&(pginfo->region->chunk_list)),
+ list);
+ } else if (pginfo->type == EHCA_MR_PGI_FMR) {
+ tmp_fmrlist = pginfo->page_list + pginfo->next_listelem;
+ *rpage = phys_to_abs(*tmp_fmrlist);
+ if (*rpage == 0) {
+ EDEB_ERR(4, "*tmp_fmrlist=%lx tmp_fmrlist=%p"
+ " next_listelem=%lx", *tmp_fmrlist,
+ tmp_fmrlist, pginfo->next_listelem);
+ retcode = -EFAULT;
+ goto ehca_set_pagebuf_1_exit0;
+ }
+ (pginfo->next_listelem)++;
+ (pginfo->page_count)++;
+ } else {
+ EDEB_ERR(4, "bad pginfo->type=%x", pginfo->type);
+ retcode = -EFAULT;
+ goto ehca_set_pagebuf_1_exit0;
+ }
+
+ ehca_set_pagebuf_1_exit0:
+ if (retcode == 0)
+ EDEB_EX(7, "retcode=%x e_mr=%p pginfo=%p type=%x num_pages=%lx "
+ "next_buf=%lx next_page=%lx rpage=%p page_count=%lx "
+ "next_listelem=%lx region=%p next_chunk=%p "
+ "next_nmap=%lx",
+ retcode, e_mr, pginfo, pginfo->type, pginfo->num_pages,
+ pginfo->next_buf, pginfo->next_page, rpage,
+ pginfo->page_count, pginfo->next_listelem,
+ pginfo->region, pginfo->next_chunk, pginfo->next_nmap);
+ else
+ EDEB_EX(4, "retcode=%x e_mr=%p pginfo=%p type=%x num_pages=%lx "
+ "next_buf=%lx next_page=%lx rpage=%p page_count=%lx "
+ "next_listelem=%lx region=%p next_chunk=%p "
+ "next_nmap=%lx",
+ retcode, e_mr, pginfo, pginfo->type, pginfo->num_pages,
+ pginfo->next_buf, pginfo->next_page, rpage,
+ pginfo->page_count, pginfo->next_listelem,
+ pginfo->region, pginfo->next_chunk, pginfo->next_nmap);
+ return (retcode);
+} /* end ehca_set_pagebuf_1() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+/** @brief check MR if it is a max-MR, i.e. uses whole memory
+ in case it's a max-MR TRUE is returned, else FALSE
+*/
+static inline int ehca_mr_is_maxmr(u64 size,
+ u64 *iova_start)
+{
+	/* an MR is treated as a max-MR only if it meets the following: */
+ if ((size == ((u64)high_memory - PAGE_OFFSET)) &&
+ (iova_start == (void*)KERNELBASE)) {
+ EDEB(6, "this is a max-MR");
+ return (TRUE);
+ } else
+ return (FALSE);
+} /* end ehca_mr_is_maxmr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+/** @brief map access control for MR/MW.
+ This routine is used for MR and MW.
+*/
+static inline void ehca_mrmw_map_acl(int ib_acl, /**<IN*/
+ u32 *hipz_acl) /**<OUT*/
+{
+ *hipz_acl = 0;
+ if (ib_acl & IB_ACCESS_REMOTE_READ)
+ *hipz_acl |= HIPZ_ACCESSCTRL_R_READ;
+ if (ib_acl & IB_ACCESS_REMOTE_WRITE)
+ *hipz_acl |= HIPZ_ACCESSCTRL_R_WRITE;
+ if (ib_acl & IB_ACCESS_REMOTE_ATOMIC)
+ *hipz_acl |= HIPZ_ACCESSCTRL_R_ATOMIC;
+ if (ib_acl & IB_ACCESS_LOCAL_WRITE)
+ *hipz_acl |= HIPZ_ACCESSCTRL_L_WRITE;
+ if (ib_acl & IB_ACCESS_MW_BIND)
+ *hipz_acl |= HIPZ_ACCESSCTRL_MW_BIND;
+} /* end ehca_mrmw_map_acl() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+/** @brief sets page size in hipz access control for MR/MW.
+ */
+static inline void ehca_mrmw_set_pgsize_hipz_acl(
+ u32 *hipz_acl) /**<INOUT HIPZ access control */
+{
+ /* @TODO page size of 4k currently hardcoded ... */
+ return;
+} /* end ehca_mrmw_set_pgsize_hipz_acl() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+/** @brief reverse map access control for MR/MW.
+ This routine is used for MR and MW.
+*/
+static inline void ehca_mrmw_reverse_map_acl(
+ const u32 *hipz_acl, /**<IN*/
+ int *ib_acl) /**<OUT*/
+{
+ *ib_acl = 0;
+ if (*hipz_acl & HIPZ_ACCESSCTRL_R_READ)
+ *ib_acl |= IB_ACCESS_REMOTE_READ;
+ if (*hipz_acl & HIPZ_ACCESSCTRL_R_WRITE)
+ *ib_acl |= IB_ACCESS_REMOTE_WRITE;
+ if (*hipz_acl & HIPZ_ACCESSCTRL_R_ATOMIC)
+ *ib_acl |= IB_ACCESS_REMOTE_ATOMIC;
+ if (*hipz_acl & HIPZ_ACCESSCTRL_L_WRITE)
+ *ib_acl |= IB_ACCESS_LOCAL_WRITE;
+ if (*hipz_acl & HIPZ_ACCESSCTRL_MW_BIND)
+ *ib_acl |= IB_ACCESS_MW_BIND;
+} /* end ehca_mrmw_reverse_map_acl() */
+
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+/** @brief map HIPZ rc to IB retcodes for MR/MW allocations
+ Used for hipz_mr_reg_alloc and hipz_mw_alloc.
+*/
+static inline int ehca_mrmw_map_rc_alloc(const u64 rc)
+{
+ switch (rc) {
+ case H_Success: /* successful completion */
+ return (0);
+ case H_ADAPTER_PARM: /* invalid adapter handle */
+ case H_RT_PARM: /* invalid resource type */
+ case H_NOT_ENOUGH_RESOURCES: /* insufficient resources */
+ case H_MLENGTH_PARM: /* invalid memory length */
+ case H_MEM_ACCESS_PARM: /* invalid access controls */
+ case H_Constrained: /* resource constraint */
+ return (-EINVAL);
+ case H_Busy: /* long busy */
+ return (-EBUSY);
+ default:
+ return (-EINVAL);
+ }
+} /* end ehca_mrmw_map_rc_alloc() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+/** @brief map HIPZ rc to IB retcodes for MR register rpage
+ Used for hipz_h_register_rpage_mr at registering last page
+*/
+static inline int ehca_mrmw_map_rc_rrpg_last(const u64 rc)
+{
+ switch (rc) {
+ case H_Success: /* registration complete */
+ return (0);
+ case H_PAGE_REGISTERED: /* page registered */
+ case H_ADAPTER_PARM: /* invalid adapter handle */
+ case H_RH_PARM: /* invalid resource handle */
+/* case H_QT_PARM: invalid queue type */
+ case H_Parameter: /* invalid logical address, */
+				/* or count zero or greater than 512 */
+ case H_TABLE_FULL: /* page table full */
+ case H_Hardware: /* HCA not operational */
+ return (-EINVAL);
+ case H_Busy: /* long busy */
+ return (-EBUSY);
+ default:
+ return (-EINVAL);
+ }
+} /* end ehca_mrmw_map_rc_rrpg_last() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+/** @brief map HIPZ rc to IB retcodes for MR register rpage
+ Used for hipz_h_register_rpage_mr at registering one page, but not last page
+*/
+static inline int ehca_mrmw_map_rc_rrpg_notlast(const u64 rc)
+{
+ switch (rc) {
+ case H_PAGE_REGISTERED: /* page registered */
+ return (0);
+ case H_Success: /* registration complete */
+ case H_ADAPTER_PARM: /* invalid adapter handle */
+ case H_RH_PARM: /* invalid resource handle */
+/* case H_QT_PARM: invalid queue type */
+ case H_Parameter: /* invalid logical address, */
+				/* or count zero or greater than 512 */
+ case H_TABLE_FULL: /* page table full */
+ case H_Hardware: /* HCA not operational */
+ return (-EINVAL);
+ case H_Busy: /* long busy */
+ return (-EBUSY);
+ default:
+ return (-EINVAL);
+ }
+} /* end ehca_mrmw_map_rc_rrpg_notlast() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+/** @brief map HIPZ rc to IB retcodes for MR query
+ Used for hipz_mr_query.
+*/
+static inline int ehca_mrmw_map_rc_query_mr(const u64 rc)
+{
+ switch (rc) {
+ case H_Success: /* successful completion */
+ return (0);
+ case H_ADAPTER_PARM: /* invalid adapter handle */
+ case H_RH_PARM: /* invalid resource handle */
+ return (-EINVAL);
+ case H_Busy: /* long busy */
+ return (-EBUSY);
+ default:
+ return (-EINVAL);
+ }
+} /* end ehca_mrmw_map_rc_query_mr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+/** @brief map HIPZ rc to IB retcodes for freeing MR resource
+ Used for hipz_h_free_resource_mr
+*/
+static inline int ehca_mrmw_map_rc_free_mr(const u64 rc)
+{
+ switch (rc) {
+ case H_Success: /* resource freed */
+ return (0);
+ case H_ADAPTER_PARM: /* invalid adapter handle */
+ case H_RH_PARM: /* invalid resource handle */
+ case H_R_STATE: /* invalid resource state */
+ case H_Hardware: /* HCA not operational */
+ return (-EINVAL);
+ case H_Resource: /* Resource in use */
+ case H_Busy: /* long busy */
+ return (-EBUSY);
+ default:
+ return (-EINVAL);
+ }
+} /* end ehca_mrmw_map_rc_free_mr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+/** @brief map HIPZ rc to IB retcodes for freeing MW resource
+ Used for hipz_h_free_resource_mw
+*/
+static inline int ehca_mrmw_map_rc_free_mw(const u64 rc)
+{
+ switch (rc) {
+ case H_Success: /* resource freed */
+ return (0);
+ case H_ADAPTER_PARM: /* invalid adapter handle */
+ case H_RH_PARM: /* invalid resource handle */
+ case H_R_STATE: /* invalid resource state */
+ case H_Hardware: /* HCA not operational */
+ return (-EINVAL);
+ case H_Resource: /* Resource in use */
+ case H_Busy: /* long busy */
+ return (-EBUSY);
+ default:
+ return (-EINVAL);
+ }
+} /* end ehca_mrmw_map_rc_free_mw() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+/** @brief map HIPZ rc to IB retcodes for SMR registrations
+ Used for hipz_h_register_smr.
+*/
+static inline int ehca_mrmw_map_rc_reg_smr(const u64 rc)
+{
+ switch (rc) {
+ case H_Success: /* successful completion */
+ return (0);
+ case H_ADAPTER_PARM: /* invalid adapter handle */
+ case H_RH_PARM: /* invalid resource handle */
+ case H_MEM_PARM: /* invalid MR virtual address */
+ case H_MEM_ACCESS_PARM: /* invalid access controls */
+ case H_NOT_ENOUGH_RESOURCES: /* insufficient resources */
+ return (-EINVAL);
+ case H_Busy: /* long busy */
+ return (-EBUSY);
+ default:
+ return (-EINVAL);
+ }
+} /* end ehca_mrmw_map_rc_reg_smr() */
+
+/*----------------------------------------------------------------------*/
+/*----------------------------------------------------------------------*/
+
+/** @brief MR destructor and constructor
+ used in the Reregister MR verb; memsets struct ehca_mr to 0,
+ except struct ib_mr and the spinlock
+ */
+static inline void ehca_mr_deletenew(struct ehca_mr *mr)
+{
+ u32 offset = (u64)(&mr->flags) - (u64)mr;
+ memset(&mr->flags, 0, sizeof(*mr) - offset);
+} /* end ehca_mr_deletenew() */
+
+#endif /*_EHCA_MRMW_H_*/

2006-02-18 01:01:21

by Roland Dreier

Subject: [PATCH 16/22] ehca post send/receive and poll CQ

From: Roland Dreier <[email protected]>

There are an awful lot of magic numbers scattered around. Probably
they should become enums somewhere.

The compatibility defines for using the kernel file in userspace
shouldn't go into the kernel.
---

drivers/infiniband/hw/ehca/ehca_reqs.c | 401 ++++++++++++++++++++++++++
drivers/infiniband/hw/ehca/ehca_reqs_core.c | 420 +++++++++++++++++++++++++++
2 files changed, 821 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_reqs.c b/drivers/infiniband/hw/ehca/ehca_reqs.c
new file mode 100644
index 0000000..659e6ba
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_reqs.c
@@ -0,0 +1,401 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * post_send/recv, poll_cq, req_notify
+ *
+ * Authors: Waleri Fomin <[email protected]>
+ * Reinhard Ernst <[email protected]>
+ * Hoang-Nam Nguyen <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_reqs.c,v 1.41 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+
+#define DEB_PREFIX "reqs"
+
+#include "ehca_kernel.h"
+#include "ehca_classes.h"
+#include "ehca_tools.h"
+#include "hcp_if.h"
+#include "ehca_qes.h"
+#include "ehca_iverbs.h"
+
+/* include some inline service routines */
+#include "ehca_asm.h"
+#include "ehca_reqs_core.c"
+
+int ehca_post_send(struct ib_qp *qp,
+ struct ib_send_wr *send_wr,
+ struct ib_send_wr **bad_send_wr)
+{
+ struct ehca_qp *my_qp = NULL;
+ struct ib_send_wr *cur_send_wr = NULL;
+ struct ehca_wqe *wqe_p = NULL;
+ int wqe_cnt = 0;
+ int retcode = 0;
+ unsigned long spl_flags = 0;
+
+ EHCA_CHECK_ADR(qp);
+ my_qp = container_of(qp, struct ehca_qp, ib_qp);
+ EHCA_CHECK_QP(my_qp);
+ EHCA_CHECK_ADR(send_wr);
+ EDEB_EN(7, "ehca_qp=%p qp_num=%x send_wr=%p bad_send_wr=%p",
+ my_qp, qp->qp_num, send_wr, bad_send_wr);
+
+ /* LOCK the QUEUE */
+ spin_lock_irqsave(&my_qp->spinlock_s, spl_flags);
+
+ /* loop processes list of send reqs */
+ for (cur_send_wr = send_wr; cur_send_wr != NULL;
+ cur_send_wr = cur_send_wr->next) {
+ void *start_addr =
+ &my_qp->ehca_qp_core.ipz_squeue.current_q_addr;
+		/* get pointer to next free WQE */
+ wqe_p = ipz_QEit_get_inc(&my_qp->ehca_qp_core.ipz_squeue);
+ if (unlikely(wqe_p == NULL)) {
+ /* too many posted work requests: queue overflow */
+ if (bad_send_wr != NULL) {
+ *bad_send_wr = cur_send_wr;
+ }
+ if (wqe_cnt==0) {
+ retcode = -ENOMEM;
+ EDEB_ERR(4, "Too many posted WQEs qp_num=%x",
+ qp->qp_num);
+ }
+ goto post_send_exit0;
+ }
+ /* write a SEND WQE into the QUEUE */
+ retcode = ehca_write_swqe(&my_qp->ehca_qp_core,
+ wqe_p, cur_send_wr);
+ /* if something failed,
+ reset the free entry pointer to the start value
+ */
+ if (unlikely(retcode != 0)) {
+ my_qp->ehca_qp_core.ipz_squeue.current_q_addr =
+ start_addr;
+ *bad_send_wr = cur_send_wr;
+ if (wqe_cnt==0) {
+ retcode = -EINVAL;
+ EDEB_ERR(4, "Could not write WQE qp_num=%x",
+ qp->qp_num);
+ }
+ goto post_send_exit0;
+ }
+ wqe_cnt++;
+ EDEB(7, "ehca_qp=%p qp_num=%x wqe_cnt=%d",
+ my_qp, qp->qp_num, wqe_cnt);
+ } /* eof for cur_send_wr */
+
+ post_send_exit0:
+ /* UNLOCK the QUEUE */
+ spin_unlock_irqrestore(&my_qp->spinlock_s, spl_flags);
+ iosync(); /* serialize GAL register access */
+ hipz_update_SQA(&my_qp->ehca_qp_core, wqe_cnt);
+ EDEB_EX(7, "ehca_qp=%p qp_num=%x ret=%x wqe_cnt=%d",
+ my_qp, qp->qp_num, retcode, wqe_cnt);
+ return retcode;
+}
+
+int ehca_post_recv(struct ib_qp *qp,
+ struct ib_recv_wr *recv_wr,
+ struct ib_recv_wr **bad_recv_wr)
+{
+ struct ehca_qp *my_qp = NULL;
+ struct ib_recv_wr *cur_recv_wr = NULL;
+ struct ehca_wqe *wqe_p = NULL;
+ int wqe_cnt = 0;
+ int retcode = 0;
+ unsigned long spl_flags = 0;
+
+ EHCA_CHECK_ADR(qp);
+ my_qp = container_of(qp, struct ehca_qp, ib_qp);
+ EHCA_CHECK_QP(my_qp);
+ EHCA_CHECK_ADR(recv_wr);
+ EDEB_EN(7, "ehca_qp=%p qp_num=%x recv_wr=%p bad_recv_wr=%p",
+ my_qp, qp->qp_num, recv_wr, bad_recv_wr);
+
+ /* LOCK the QUEUE */
+ spin_lock_irqsave(&my_qp->spinlock_r, spl_flags);
+
+	/* loop processes list of recv reqs */
+ for (cur_recv_wr = recv_wr; cur_recv_wr != NULL;
+ cur_recv_wr = cur_recv_wr->next) {
+ void *start_addr =
+ &my_qp->ehca_qp_core.ipz_rqueue.current_q_addr;
+		/* get pointer to next free WQE */
+ wqe_p = ipz_QEit_get_inc(&my_qp->ehca_qp_core.ipz_rqueue);
+ if (unlikely(wqe_p == NULL)) {
+ /* too many posted work requests: queue overflow */
+ if (bad_recv_wr != NULL) {
+ *bad_recv_wr = cur_recv_wr;
+ }
+ if (wqe_cnt==0) {
+ retcode = -ENOMEM;
+ EDEB_ERR(4, "Too many posted WQEs qp_num=%x",
+ qp->qp_num);
+ }
+ goto post_recv_exit0;
+ }
+ /* write a RECV WQE into the QUEUE */
+ retcode =
+ ehca_write_rwqe(&my_qp->ehca_qp_core, wqe_p, cur_recv_wr);
+ /* if something failed,
+ reset the free entry pointer to the start value
+ */
+ if (unlikely(retcode != 0)) {
+ my_qp->ehca_qp_core.ipz_rqueue.current_q_addr =
+ start_addr;
+ *bad_recv_wr = cur_recv_wr;
+ if (wqe_cnt==0) {
+ retcode = -EINVAL;
+ EDEB_ERR(4, "Could not write WQE qp_num=%x",
+ qp->qp_num);
+ }
+ goto post_recv_exit0;
+ }
+ wqe_cnt++;
+ EDEB(7, "ehca_qp=%p qp_num=%x wqe_cnt=%d",
+ my_qp, qp->qp_num, wqe_cnt);
+ } /* eof for cur_recv_wr */
+
+ post_recv_exit0:
+ spin_unlock_irqrestore(&my_qp->spinlock_r, spl_flags);
+ iosync(); /* serialize GAL register access */
+ hipz_update_RQA(&my_qp->ehca_qp_core, wqe_cnt);
+ EDEB_EX(7, "ehca_qp=%p qp_num=%x ret=%x wqe_cnt=%d",
+ my_qp, qp->qp_num, retcode, wqe_cnt);
+ return retcode;
+}
+
+/**
+ * Table converting ehca wc opcodes to ib wc opcodes.
+ * Since zero indicates an invalid opcode, each entry stores the actual
+ * ib opcode plus one, so it must be decremented on lookup.
+ */
+static const u8 ib_wc_opcode[255] = {
+ [0x01] = IB_WC_RECV+1,
+ [0x02] = IB_WC_RECV_RDMA_WITH_IMM+1,
+ [0x04] = IB_WC_BIND_MW+1,
+ [0x08] = IB_WC_FETCH_ADD+1,
+ [0x10] = IB_WC_COMP_SWAP+1,
+ [0x20] = IB_WC_RDMA_WRITE+1,
+ [0x40] = IB_WC_RDMA_READ+1,
+ [0x80] = IB_WC_SEND+1
+};
+
+/** @brief internal function to poll one entry of cq
+ */
+static inline int ehca_poll_cq_one(struct ib_cq *cq, struct ib_wc *wc)
+{
+ int retcode = 0;
+ struct ehca_cq *my_cq = container_of(cq, struct ehca_cq, ib_cq);
+ struct ehca_cqe *cqe = NULL;
+ int cqe_count = 0;
+
+ EDEB_EN(7, "ehca_cq=%p cq_num=%x wc=%p", my_cq, my_cq->cq_number, wc);
+
+ poll_cq_one_read_cqe:
+ cqe = (struct ehca_cqe *)
+ ipz_QEit_get_inc_valid(&my_cq->ehca_cq_core.ipz_queue);
+ if (cqe == NULL) {
+ retcode = -EAGAIN;
+ EDEB(7, "Completion queue is empty ehca_cq=%p cq_num=%x "
+ "retcode=%x", my_cq, my_cq->cq_number, retcode);
+ goto poll_cq_one_exit0;
+ }
+ cqe_count++;
+ if (unlikely(cqe->status & 0x10)) { /* purge bit set */
+ struct ehca_qp *qp=ehca_cq_get_qp(my_cq, cqe->local_qp_number);
+ int purgeflag = 0;
+ unsigned long spl_flags = 0;
+ if (qp==NULL) { /* should not happen */
+ EDEB_ERR(4, "cq_num=%x qp_num=%x "
+ "could not find qp -> ignore cqe",
+ my_cq->cq_number, cqe->local_qp_number);
+ EDEB_DMP(4, cqe, 64, "cq_num=%x qp_num=%x",
+ my_cq->cq_number, cqe->local_qp_number);
+ /* ignore this purged cqe */
+ goto poll_cq_one_read_cqe;
+ }
+ spin_lock_irqsave(&qp->spinlock_s, spl_flags);
+ purgeflag = qp->sqerr_purgeflag;
+ spin_unlock_irqrestore(&qp->spinlock_s, spl_flags);
+ if (purgeflag!=0) {
+ EDEB(6, "Got CQE with purged bit qp_num=%x src_qp=%x",
+ cqe->local_qp_number, cqe->remote_qp_number);
+ EDEB_DMP(6, cqe, 64, "qp_num=%x src_qp=%x",
+ cqe->local_qp_number, cqe->remote_qp_number);
+			/* ignore this CQE to avoid duplicate CQEs from the
+			   bad WQE that caused the SQ error, and turn off
+			   the purge flag */
+ qp->sqerr_purgeflag = 0;
+ goto poll_cq_one_read_cqe;
+ }
+ }
+
+ /* tracing cqe */
+ if (IS_EDEB_ON(7)) {
+ EDEB(7, "Received COMPLETION ehca_cq=%p cq_num=%x -----",
+ my_cq, my_cq->cq_number);
+ EDEB_DMP(7, cqe, 64, "ehca_cq=%p cq_num=%x",
+ my_cq, my_cq->cq_number);
+ EDEB(7, "ehca_cq=%p cq_num=%x -------------------------",
+ my_cq, my_cq->cq_number);
+ }
+
+ /* we got a completion! */
+ wc->wr_id = cqe->work_request_id;
+
+ /* eval ib_wc_opcode */
+ wc->opcode = ib_wc_opcode[cqe->optype]-1;
+ if (unlikely(wc->opcode == -1)) {
+ EDEB_ERR(4, "Invalid cqe->OPType=%x cqe->status=%x "
+ "ehca_cq=%p cq_num=%x",
+ cqe->optype, cqe->status, my_cq, my_cq->cq_number);
+ /* dump cqe for other infos */
+ EDEB_DMP(4, cqe, 64, "ehca_cq=%p cq_num=%x", my_cq, my_cq->cq_number);
+		/* also update the queue pointer to throw away this entry */
+ goto poll_cq_one_exit0;
+ }
+ /* eval ib_wc_status */
+ if (unlikely(cqe->status & 0x80000000)) { /* complete with errors */
+ map_ib_wc_status(cqe->status, &wc->status);
+ wc->vendor_err = wc->status;
+ } else {
+ wc->status = IB_WC_SUCCESS;
+ }
+
+ wc->qp_num = cqe->local_qp_number;
+ wc->byte_len = ntohl(cqe->nr_bytes_transferred);
+ wc->pkey_index = cqe->pkey_index;
+ wc->slid = cqe->rlid;
+ wc->dlid_path_bits = cqe->dlid;
+ wc->src_qp = cqe->remote_qp_number;
+ wc->wc_flags = cqe->w_completion_flags;
+ wc->imm_data = cqe->immediate_data;
+ wc->sl = cqe->service_level;
+
+ if (wc->status != IB_WC_SUCCESS) {
+ EDEB(6, "ehca_cq=%p cq_num=%x WARNING unsuccessful cqe "
+ "OPType=%x status=%x qp_num=%x src_qp=%x wr_id=%lx cqe=%p",
+ my_cq, my_cq->cq_number, cqe->optype, cqe->status,
+ cqe->local_qp_number, cqe->remote_qp_number,
+ cqe->work_request_id, cqe);
+ }
+
+ poll_cq_one_exit0:
+ if (cqe_count>0) {
+ hipz_update_FECA(&my_cq->ehca_cq_core, cqe_count);
+ }
+
+ EDEB_EX(7, "retcode=%x ehca_cq=%p cq_number=%x wc=%p "
+ "status=%x opcode=%x qp_num=%x byte_len=%x",
+ retcode, my_cq, my_cq->cq_number, wc, wc->status,
+ wc->opcode, wc->qp_num, wc->byte_len);
+ return (retcode);
+}
+
+int ehca_poll_cq(struct ib_cq *cq, int num_entries, struct ib_wc *wc)
+{
+ struct ehca_cq *my_cq = NULL;
+ int nr = 0;
+ struct ib_wc *current_wc = NULL;
+ int retcode = 0;
+ unsigned long spl_flags = 0;
+
+ EHCA_CHECK_CQ(cq);
+ EHCA_CHECK_ADR(wc);
+
+ my_cq = container_of(cq, struct ehca_cq, ib_cq);
+ EHCA_CHECK_CQ(my_cq);
+
+ EDEB_EN(7, "ehca_cq=%p cq_num=%x num_entries=%d wc=%p",
+ my_cq, my_cq->cq_number, num_entries, wc);
+
+ if (num_entries < 1) {
+ EDEB_ERR(4, "Invalid num_entries=%d ehca_cq=%p cq_num=%x",
+ num_entries, my_cq, my_cq->cq_number);
+ retcode = -EINVAL;
+ goto poll_cq_exit0;
+ }
+
+ current_wc = wc;
+ spin_lock_irqsave(&my_cq->spinlock, spl_flags);
+ for (nr = 0; nr < num_entries; nr++) {
+ retcode = ehca_poll_cq_one(cq, current_wc);
+ if (0 != retcode) {
+ break;
+ }
+ current_wc++;
+ } /* eof for nr */
+ spin_unlock_irqrestore(&my_cq->spinlock, spl_flags);
+ if (-EAGAIN == retcode || 0 == retcode) {
+ retcode = nr;
+ }
+
+ poll_cq_exit0:
+ EDEB_EX(7, "ehca_cq=%p cq_num=%x retcode=%x wc=%p nr_entries=%d",
+ my_cq, my_cq->cq_number, retcode, wc, nr);
+ return (retcode);
+}
+
+int ehca_req_notify_cq(struct ib_cq *cq, enum ib_cq_notify cq_notify)
+{
+ struct ehca_cq *my_cq = NULL;
+ int retcode = 0;
+
+ EHCA_CHECK_CQ(cq);
+ my_cq = container_of(cq, struct ehca_cq, ib_cq);
+ EHCA_CHECK_CQ(my_cq);
+ EDEB_EN(7, "ehca_cq=%p cq_num=%x cq_notif=%x",
+ my_cq, my_cq->cq_number, cq_notify);
+
+ switch (cq_notify) {
+ case IB_CQ_SOLICITED:
+ hipz_set_CQx_N0(&my_cq->ehca_cq_core, 1);
+ break;
+ case IB_CQ_NEXT_COMP:
+ hipz_set_CQx_N1(&my_cq->ehca_cq_core, 1);
+ break;
+ default:
+ retcode = -EINVAL;
+ }
+
+ EDEB_EX(7, "ehca_cq=%p cq_num=%x retcode=%x",
+ my_cq, my_cq->cq_number, retcode);
+
+ return (retcode);
+}
+
+/* eof ehca_reqs.c */
diff --git a/drivers/infiniband/hw/ehca/ehca_reqs_core.c b/drivers/infiniband/hw/ehca/ehca_reqs_core.c
new file mode 100644
index 0000000..c0b7281
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_reqs_core.c
@@ -0,0 +1,420 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * post_send/recv, poll_cq, req_notify
+ * Common code to be included statically in respective user/kernel
+ * modules, i.e. ehca_ureqs.c/ehca_reqs.c
+ * This module contains C code only. Including modules must include
+ * all required header files.
+ *
+ * Authors: Waleri Fomin <[email protected]>
+ * Reinhard Ernst <[email protected]>
+ * Hoang-Nam Nguyen <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_reqs_core.c,v 1.40 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+/** The following block of defines
+ * maps the kernel-space ib types to the corresponding user-space ones,
+ * so that the inline functions below can be compiled and work in both
+ * user and kernel space.
+ * However, this ASSUMES that there are no functional differences between
+ * the kernel ib types, e.g. ib_send_wr, and the user-space ones,
+ * e.g. ibv_send_wr.
+ */
+
+#ifndef __KERNEL__
+#define ib_recv_wr ibv_recv_wr
+#define ib_send_wr ibv_send_wr
+#define ehca_av ehcau_av
+/* ib_wr_opcode */
+#define IB_WR_SEND IBV_WR_SEND
+#define IB_WR_SEND_WITH_IMM IBV_WR_SEND_WITH_IMM
+#define IB_WR_RDMA_WRITE IBV_WR_RDMA_WRITE
+#define IB_WR_RDMA_WRITE_WITH_IMM IBV_WR_RDMA_WRITE_WITH_IMM
+#define IB_WR_RDMA_READ IBV_WR_RDMA_READ
+/* ib_qp_type */
+#define IB_QPT_RC IBV_QPT_RC
+#define IB_QPT_UC IBV_QPT_UC
+#define IB_QPT_UD IBV_QPT_UD
+/* ib_wc_opcode */
+#define ib_wc_opcode ibv_wc_opcode
+#define IB_WC_SEND IBV_WC_SEND
+#define IB_WC_RDMA_WRITE IBV_WC_RDMA_WRITE
+#define IB_WC_RDMA_READ IBV_WC_RDMA_READ
+#define IB_WC_COMP_SWAP IBV_WC_COMP_SWAP
+#define IB_WC_FETCH_ADD IBV_WC_FETCH_ADD
+#define IB_WC_BIND_MW IBV_WC_BIND_MW
+#define IB_WC_RECV IBV_WC_RECV
+#define IB_WC_RECV_RDMA_WITH_IMM IBV_WC_RECV_RDMA_WITH_IMM
+/* ib_wc_status */
+#define ib_wc_status ibv_wc_status
+#define IB_WC_LOC_LEN_ERR IBV_WC_LOC_LEN_ERR
+#define IB_WC_LOC_QP_OP_ERR IBV_WC_LOC_QP_OP_ERR
+#define IB_WC_LOC_EEC_OP_ERR IBV_WC_LOC_EEC_OP_ERR
+#define IB_WC_LOC_PROT_ERR IBV_WC_LOC_PROT_ERR
+#define IB_WC_WR_FLUSH_ERR IBV_WC_WR_FLUSH_ERR
+#define IB_WC_MW_BIND_ERR IBV_WC_MW_BIND_ERR
+#define IB_WC_GENERAL_ERR IBV_WC_GENERAL_ERR
+#define IB_WC_REM_INV_REQ_ERR IBV_WC_REM_INV_REQ_ERR
+#define IB_WC_REM_ACCESS_ERR IBV_WC_REM_ACCESS_ERR
+#define IB_WC_REM_OP_ERR IBV_WC_REM_OP_ERR
+#define IB_WC_REM_INV_RD_REQ_ERR IBV_WC_REM_INV_RD_REQ_ERR
+#define IB_WC_RETRY_EXC_ERR IBV_WC_RETRY_EXC_ERR
+#define IB_WC_RNR_RETRY_EXC_ERR IBV_WC_RNR_RETRY_EXC_ERR
+#define IB_WC_REM_ABORT_ERR IBV_WC_REM_ABORT_ERR
+#define IB_WC_INV_EECN_ERR IBV_WC_INV_EECN_ERR
+#define IB_WC_INV_EEC_STATE_ERR IBV_WC_INV_EEC_STATE_ERR
+#define IB_WC_BAD_RESP_ERR IBV_WC_BAD_RESP_ERR
+#define IB_WC_FATAL_ERR IBV_WC_FATAL_ERR
+#define IB_WC_SUCCESS IBV_WC_SUCCESS
+/* ib_send_flags */
+#define IB_SEND_FENCE IBV_SEND_FENCE
+#define IB_SEND_SIGNALED IBV_SEND_SIGNALED
+#define IB_SEND_SOLICITED IBV_SEND_SOLICITED
+#define IB_SEND_INLINE IBV_SEND_INLINE
+#endif
+
+static inline int ehca_write_rwqe(struct ehca_qp_core *qp_core,
+ struct ehca_wqe *wqe_p,
+ struct ib_recv_wr *recv_wr)
+{
+ u8 cnt_ds;
+ if (unlikely((recv_wr->num_sge < 0) ||
+ (recv_wr->num_sge > qp_core->ipz_rqueue.act_nr_of_sg))) {
+ EDEB_ERR(4, "Invalid number of WQE SGE. "
+ "num_sqe=%x max_nr_of_sg=%x",
+ recv_wr->num_sge, qp_core->ipz_rqueue.act_nr_of_sg);
+ return (-EINVAL); /* invalid SG list length */
+ }
+
+ clear_cacheline(wqe_p);
+ clear_cacheline((u8 *) wqe_p + 32);
+ clear_cacheline((u8 *) wqe_p + 64);
+
+ wqe_p->work_request_id = be64_to_cpu(recv_wr->wr_id);
+ wqe_p->nr_of_data_seg = recv_wr->num_sge;
+
+ for (cnt_ds = 0; cnt_ds < recv_wr->num_sge; cnt_ds++) {
+ wqe_p->u.all_rcv.sg_list[cnt_ds].vaddr =
+ be64_to_cpu(recv_wr->sg_list[cnt_ds].addr);
+ wqe_p->u.all_rcv.sg_list[cnt_ds].lkey =
+ ntohl(recv_wr->sg_list[cnt_ds].lkey);
+ wqe_p->u.all_rcv.sg_list[cnt_ds].length =
+ ntohl(recv_wr->sg_list[cnt_ds].length);
+ }
+
+ if (IS_EDEB_ON(7)) {
+ EDEB(7, "RECEIVE WQE written into queue qp_core=%p", qp_core);
+ EDEB_DMP(7, wqe_p, 16*(6 + wqe_p->nr_of_data_seg),
+ "qp_core=%p", qp_core);
+ }
+
+ return (0);
+}
+
+/* internal use only;
+   uncomment the following line to enable trace output of GSI send WRs */
+/* #define DEBUG_GSI_SEND_WR 1 */
+#if defined(__KERNEL__) && defined(DEBUG_GSI_SEND_WR)
+
+/* need ib_mad struct */
+#include <rdma/ib_mad.h>
+
+static void trace_send_wr_ud(const struct ib_send_wr *send_wr)
+{
+ int idx = 0;
+ int j = 0;
+ while (send_wr != NULL) {
+ struct ib_mad_hdr *mad_hdr = send_wr->wr.ud.mad_hdr;
+ struct ib_sge *sge = send_wr->sg_list;
+ EDEB(4, "send_wr#%x wr_id=%lx num_sge=%x "
+ "send_flags=%x opcode=%x",idx, send_wr->wr_id,
+ send_wr->num_sge, send_wr->send_flags, send_wr->opcode);
+ if (mad_hdr != NULL) {
+ EDEB(4, "send_wr#%x mad_hdr base_version=%x "
+ "mgmt_class=%x class_version=%x method=%x "
+ "status=%x class_specific=%x tid=%lx attr_id=%x "
+ "resv=%x attr_mod=%x",
+ idx, mad_hdr->base_version, mad_hdr->mgmt_class,
+ mad_hdr->class_version, mad_hdr->method,
+ mad_hdr->status, mad_hdr->class_specific,
+ mad_hdr->tid, mad_hdr->attr_id, mad_hdr->resv,
+ mad_hdr->attr_mod);
+ }
+ for (j = 0; j < send_wr->num_sge; j++) {
+#ifdef EHCA_USERDRIVER
+ u8 *data = (u8 *) sge->addr;
+#else
+ u8 *data = (u8 *) abs_to_virt(sge->addr);
+#endif
+ EDEB(4, "send_wr#%x sge#%x addr=%p length=%x lkey=%x",
+ idx, j, data, sge->length, sge->lkey);
+ /* assume length is n*16 */
+ EDEB_DMP(4, data, sge->length, "send_wr#%x sge#%x", idx, j);
+ sge++;
+ } /* eof for j */
+ idx++;
+ send_wr = send_wr->next;
+ } /* eof while send_wr */
+}
+
+#endif /* __KERNEL__ && DEBUG_GSI_SEND_WR */
+
+static inline int ehca_write_swqe(struct ehca_qp_core *qp_core,
+ struct ehca_wqe *wqe_p,
+ const struct ib_send_wr *send_wr)
+{
+ u32 idx;
+ u64 dma_length;
+ struct ehca_av *my_av;
+ u32 remote_qkey = send_wr->wr.ud.remote_qkey;
+
+ clear_cacheline(wqe_p);
+ clear_cacheline((u8 *) wqe_p + 32);
+
+ if (unlikely((send_wr->num_sge < 0) ||
+ (send_wr->num_sge > qp_core->ipz_squeue.act_nr_of_sg))) {
+ EDEB_ERR(4, "Invalid number of WQE SGE. "
+ "num_sqe=%x max_nr_of_sg=%x",
+ send_wr->num_sge, qp_core->ipz_rqueue.act_nr_of_sg);
+ return (-EINVAL); /* invalid SG list length */
+ }
+
+ wqe_p->work_request_id = be64_to_cpu(send_wr->wr_id);
+
+ switch (send_wr->opcode) {
+ case IB_WR_SEND:
+ case IB_WR_SEND_WITH_IMM:
+ wqe_p->optype = WQE_OPTYPE_SEND;
+ break;
+ case IB_WR_RDMA_WRITE:
+ case IB_WR_RDMA_WRITE_WITH_IMM:
+ wqe_p->optype = WQE_OPTYPE_RDMAWRITE;
+ break;
+ case IB_WR_RDMA_READ:
+ wqe_p->optype = WQE_OPTYPE_RDMAREAD;
+ break;
+ default:
+ EDEB_ERR(4, "Invalid opcode=%x", send_wr->opcode);
+ return (-EINVAL); /* invalid opcode */
+ }
+
+ wqe_p->wqef = (send_wr->opcode) & 0xF0;
+
+ wqe_p->wr_flag = 0;
+ if (send_wr->send_flags & IB_SEND_SIGNALED) {
+ wqe_p->wr_flag |= WQE_WRFLAG_REQ_SIGNAL_COM;
+ }
+
+ if (send_wr->opcode == IB_WR_SEND_WITH_IMM ||
+ send_wr->opcode == IB_WR_RDMA_WRITE_WITH_IMM) {
+ /* this might not work as long as HW does not support it */
+ wqe_p->immediate_data = send_wr->imm_data;
+ wqe_p->wr_flag |= WQE_WRFLAG_IMM_DATA_PRESENT;
+ }
+
+ wqe_p->nr_of_data_seg = send_wr->num_sge;
+
+ switch (qp_core->qp_type) {
+#ifdef __KERNEL__
+ case IB_QPT_SMI:
+ case IB_QPT_GSI:
+#endif /* __KERNEL__ */
+		/* no break is intentional here */
+ case IB_QPT_UD:
+ /* IB 1.2 spec C10-15 compliance */
+ if (send_wr->wr.ud.remote_qkey & 0x80000000) {
+ remote_qkey = qp_core->qkey;
+ }
+ wqe_p->destination_qp_number =
+ ntohl(send_wr->wr.ud.remote_qpn << 8);
+ wqe_p->local_ee_context_qkey = ntohl(remote_qkey);
+ if (send_wr->wr.ud.ah==NULL) {
+ EDEB_ERR(4, "wr.ud.ah is NULL. qp_core=%p", qp_core);
+ return (-EINVAL);
+ }
+ my_av = container_of(send_wr->wr.ud.ah, struct ehca_av, ib_ah);
+ wqe_p->u.ud_av.ud_av = my_av->av;
+
+ /* omitted check of IB_SEND_INLINE
+ since HW does not support it */
+ for (idx = 0; idx < send_wr->num_sge; idx++) {
+ wqe_p->u.ud_av.sg_list[idx].vaddr =
+ be64_to_cpu(send_wr->sg_list[idx].addr);
+ wqe_p->u.ud_av.sg_list[idx].lkey =
+ ntohl(send_wr->sg_list[idx].lkey);
+ wqe_p->u.ud_av.sg_list[idx].length =
+ ntohl(send_wr->sg_list[idx].length);
+ } /* eof for idx */
+#ifdef __KERNEL__
+ if (qp_core->qp_type == IB_QPT_SMI ||
+ qp_core->qp_type == IB_QPT_GSI) {
+ wqe_p->u.ud_av.ud_av.pmtu = 1;
+ }
+ if (qp_core->qp_type == IB_QPT_GSI) {
+ wqe_p->pkeyi =
+ ntohs(send_wr->wr.ud.pkey_index);
+#ifdef DEBUG_GSI_SEND_WR
+ trace_send_wr_ud(send_wr);
+#endif /* DEBUG_GSI_SEND_WR */
+ }
+#endif /* __KERNEL__ */
+ break;
+
+ case IB_QPT_UC:
+ if (send_wr->send_flags & IB_SEND_FENCE) {
+ wqe_p->wr_flag |= WQE_WRFLAG_FENCE;
+ }
+		/* no break is intentional here */
+ case IB_QPT_RC:
+ /*@@TODO atomic???*/
+ wqe_p->u.nud.remote_virtual_adress =
+ be64_to_cpu(send_wr->wr.rdma.remote_addr);
+ wqe_p->u.nud.rkey = ntohl(send_wr->wr.rdma.rkey);
+
+ /* omitted checking of IB_SEND_INLINE
+ since HW does not support it */
+ dma_length = 0;
+ for (idx = 0; idx < send_wr->num_sge; idx++) {
+ wqe_p->u.nud.sg_list[idx].vaddr =
+ be64_to_cpu(send_wr->sg_list[idx].addr);
+ wqe_p->u.nud.sg_list[idx].lkey =
+ ntohl(send_wr->sg_list[idx].lkey);
+ wqe_p->u.nud.sg_list[idx].length =
+ ntohl(send_wr->sg_list[idx].length);
+ dma_length += send_wr->sg_list[idx].length;
+ } /* eof idx */
+ wqe_p->u.nud.atomic_1st_op_dma_len = be64_to_cpu(dma_length);
+
+ break;
+
+ default:
+ EDEB_ERR(4, "Invalid qptype=%x", qp_core->qp_type);
+ return (-EINVAL);
+ }
+
+ if (IS_EDEB_ON(7)) {
+ EDEB(7, "SEND WQE written into queue qp_core=%p ", qp_core);
+ EDEB_DMP(7, wqe_p, 16*(6 + wqe_p->nr_of_data_seg),
+ "qp_core=%p", qp_core);
+ }
+ return (0);
+}
+
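The CQE status decoding that follows can be exercised in isolation. The sketch below is a rough standalone illustration of its decision structure, with hypothetical stand-in enum values (the real `enum ib_wc_status` lives in `<rdma/ib_verbs.h>`): bit 31 flags an error completion, the low 6 bits select the error class, and class 0x07 redirects to a remote-error subcode in bits 11..15.

```c
#include <assert.h>

/* Illustrative stand-ins only -- not the real ib_wc_status values. */
enum wc_status { WC_SUCCESS, WC_LOC_LEN_ERR, WC_REM_ACCESS_ERR, WC_FATAL_ERR };

static enum wc_status map_status(unsigned int cqe_status)
{
	/* bit 31 clear: successful completion */
	if (!(cqe_status & 0x80000000u))
		return WC_SUCCESS;

	/* low 6 bits select the error class */
	switch (cqe_status & 0x0000003Fu) {
	case 0x01:
	case 0x21:
		return WC_LOC_LEN_ERR;
	case 0x07:
		/* remote error: subcode in bits 11..15 */
		if (((cqe_status & 0x0000F800u) >> 11) == 0x2)
			return WC_REM_ACCESS_ERR;
		return WC_FATAL_ERR;
	default:
		return WC_FATAL_ERR;
	}
}
```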
+/**
+ * map_ib_wc_status() - convert cqe_status to ib_wc_status
+ */
+static inline void map_ib_wc_status(u32 cqe_status,
+ enum ib_wc_status *wc_status)
+{
+ if (unlikely(cqe_status & 0x80000000)) { /* complete with errors */
+ switch (cqe_status & 0x0000003F) {
+ case 0x01:
+ case 0x21:
+ *wc_status = IB_WC_LOC_LEN_ERR;
+ break;
+ case 0x02:
+ case 0x22:
+ *wc_status = IB_WC_LOC_QP_OP_ERR;
+ break;
+ case 0x03:
+ case 0x23:
+ *wc_status = IB_WC_LOC_EEC_OP_ERR;
+ break;
+ case 0x04:
+ case 0x24:
+ *wc_status = IB_WC_LOC_PROT_ERR;
+ break;
+ case 0x05:
+ case 0x25:
+ *wc_status = IB_WC_WR_FLUSH_ERR;
+ break;
+ case 0x06:
+ *wc_status = IB_WC_MW_BIND_ERR;
+ break;
+ case 0x07: /* remote error - look into bits 20:24 */
+ switch ((cqe_status & 0x0000F800) >> 11) {
+ case 0x0:
+ /* PSN Sequence Error!
+ couldn't find a matching VAPI status! */
+ *wc_status = IB_WC_GENERAL_ERR;
+ break;
+ case 0x1:
+ *wc_status = IB_WC_REM_INV_REQ_ERR;
+ break;
+ case 0x2:
+ *wc_status = IB_WC_REM_ACCESS_ERR;
+ break;
+ case 0x3:
+ *wc_status = IB_WC_REM_OP_ERR;
+ break;
+ case 0x4:
+ *wc_status = IB_WC_REM_INV_RD_REQ_ERR;
+ break;
+ }
+ break;
+ case 0x08:
+ *wc_status = IB_WC_RETRY_EXC_ERR;
+ break;
+ case 0x09:
+ *wc_status = IB_WC_RNR_RETRY_EXC_ERR;
+ break;
+ case 0x0A:
+ case 0x2D:
+ *wc_status = IB_WC_REM_ABORT_ERR;
+ break;
+ case 0x0B:
+ case 0x2E:
+ *wc_status = IB_WC_INV_EECN_ERR;
+ break;
+ case 0x0C:
+ case 0x2F:
+ *wc_status = IB_WC_INV_EEC_STATE_ERR;
+ break;
+ case 0x0D:
+ *wc_status = IB_WC_BAD_RESP_ERR;
+ break;
+ case 0x10:
+ /* WQE purged */
+ *wc_status = IB_WC_WR_FLUSH_ERR;
+ break;
+ default:
+ *wc_status = IB_WC_FATAL_ERR;
+
+ }
+ } else {
+ *wc_status = IB_WC_SUCCESS;
+ }
+}
+
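The WQE-build code earlier in this file converts SGE addresses and keys with `ntohl()`/`be64_to_cpu()` before handing them to the adapter. On big-endian ppc64 these are identity operations, so the code works, but the intent is "host to big-endian", which `cpu_to_be32()`/`cpu_to_be64()` would express directly. A minimal userspace sketch of the round trip, using the portable `htonl`/`ntohl` pair as stand-ins for the kernel helpers:

```c
#include <assert.h>
#include <arpa/inet.h>
#include <stdint.h>

/* host -> big-endian, as the HW expects the WQE fields */
static uint32_t to_wire(uint32_t host)
{
	return htonl(host);
}

/* big-endian -> host, for reading fields back */
static uint32_t from_wire(uint32_t wire)
{
	return ntohl(wire);
}
```

On a big-endian host both functions compile to no-ops; on a little-endian host they byte-swap. Either way the round trip returns the input, which is why the misuse of the network-order helpers goes unnoticed on ppc64.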

2006-02-18 01:01:20

by Roland Dreier

[permalink] [raw]
Subject: [PATCH 18/22] ehca address vectors, multicast groups, protection domains

From: Roland Dreier <[email protected]>


---

drivers/infiniband/hw/ehca/ehca_av.c | 258 +++++++++++++++++++++++++++++++
drivers/infiniband/hw/ehca/ehca_mcast.c | 194 +++++++++++++++++++++++
drivers/infiniband/hw/ehca/ehca_pd.c | 100 ++++++++++++
3 files changed, 552 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_av.c b/drivers/infiniband/hw/ehca/ehca_av.c
new file mode 100644
index 0000000..f5382c2
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_av.c
@@ -0,0 +1,258 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * address vector functions
+ *
+ * Authors: Reinhard Ernst <[email protected]>
+ * Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_av.c,v 1.28 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+
+#define DEB_PREFIX "ehav"
+
+#include "ehca_kernel.h"
+#include "ehca_tools.h"
+#include "ehca_iverbs.h"
+#include "hcp_if.h"
+
+struct ib_ah *ehca_create_ah(struct ib_pd *pd, struct ib_ah_attr *ah_attr)
+{
+ extern int ehca_static_rate;
+ int retcode = 0;
+ struct ehca_av *av = NULL;
+
+ EHCA_CHECK_PD_P(pd);
+ EHCA_CHECK_ADR_P(ah_attr);
+
+ EDEB_EN(7,"pd=%p ah_attr=%p", pd, ah_attr);
+
+ av = ehca_av_new();
+ if (!av) {
+ EDEB_ERR(4,"Out of memory pd=%p ah_attr=%p", pd, ah_attr);
+ retcode = -ENOMEM;
+ goto create_ah_exit0;
+ }
+
+ av->av.sl = ah_attr->sl;
+ av->av.dlid = ntohs(ah_attr->dlid);
+ av->av.slid_path_bits = ah_attr->src_path_bits;
+
+ if (ehca_static_rate < 0) {
+ av->av.ipd = ah_attr->static_rate;
+ } else {
+ av->av.ipd = ehca_static_rate;
+ }
+
+ av->av.lnh = ah_attr->ah_flags;
+ av->av.grh.word_0 |= EHCA_BMASK_SET(GRH_IPVERSION_MASK, 6);
+ av->av.grh.word_0 |= EHCA_BMASK_SET(GRH_TCLASS_MASK,
+ ah_attr->grh.traffic_class);
+ av->av.grh.word_0 |= EHCA_BMASK_SET(GRH_FLOWLABEL_MASK,
+ ah_attr->grh.flow_label);
+ av->av.grh.word_0 |= EHCA_BMASK_SET(GRH_HOPLIMIT_MASK,
+ ah_attr->grh.hop_limit);
+ av->av.grh.word_0 |= EHCA_BMASK_SET(GRH_NEXTHEADER_MASK, 0x1B);
+ /* IB transport */
+ av->av.grh.word_0 = be64_to_cpu(av->av.grh.word_0);
+ /* set sgid in grh.word_1 */
+ if (ah_attr->ah_flags & IB_AH_GRH) {
+ int rc = 0;
+ struct ib_port_attr port_attr;
+ union ib_gid gid;
+ memset(&port_attr, 0, sizeof(port_attr));
+ rc = ehca_query_port(pd->device, ah_attr->port_num,
+ &port_attr);
+ if (rc != 0) { /* invalid port number */
+ retcode = -EINVAL;
+ EDEB_ERR(4, "Invalid port number "
+ "ehca_query_port() returned %x "
+ "pd=%p ah_attr=%p", rc, pd, ah_attr);
+ goto create_ah_exit1;
+ }
+ memset(&gid, 0, sizeof(gid));
+ rc = ehca_query_gid(pd->device,
+ ah_attr->port_num,
+ ah_attr->grh.sgid_index, &gid);
+ if (rc != 0) {
+ retcode = -EINVAL;
+ EDEB_ERR(4, "Failed to retrieve sgid "
+ "ehca_query_gid() returned %x "
+ "pd=%p ah_attr=%p", rc, pd, ah_attr);
+ goto create_ah_exit1;
+ }
+ memcpy(&av->av.grh.word_1, &gid, sizeof(gid));
+ }
+	/* for the time being we use a hard-coded PMTU of 2048 bytes */
+ av->av.pmtu = 4; /* TODO */
+
+ /* dgid comes in grh.word_3 */
+ memcpy(&av->av.grh.word_3, &ah_attr->grh.dgid,
+ sizeof(ah_attr->grh.dgid));
+
+ EHCA_REGISTER_AV(device, pd);
+
+ EDEB_EX(7,"pd=%p ah_attr=%p av=%p", pd, ah_attr, av);
+ return (&av->ib_ah);
+
+ create_ah_exit1:
+ ehca_av_delete(av);
+
+ create_ah_exit0:
+ EDEB_EX(7,"retcode=%x pd=%p ah_attr=%p", retcode, pd, ah_attr);
+ return ERR_PTR(retcode);
+}
+
+int ehca_modify_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr)
+{
+ struct ehca_av *av = NULL;
+ struct ehca_ud_av new_ehca_av;
+ int ret = 0;
+
+ EHCA_CHECK_AV(ah);
+ EHCA_CHECK_ADR(ah_attr);
+
+ EDEB_EN(7,"ah=%p ah_attr=%p", ah, ah_attr);
+
+ memset(&new_ehca_av, 0, sizeof(new_ehca_av));
+ new_ehca_av.sl = ah_attr->sl;
+ new_ehca_av.dlid = ntohs(ah_attr->dlid);
+ new_ehca_av.slid_path_bits = ah_attr->src_path_bits;
+ new_ehca_av.ipd = ah_attr->static_rate;
+ new_ehca_av.lnh = EHCA_BMASK_SET(GRH_FLAG_MASK,
+ ((ah_attr->ah_flags & IB_AH_GRH) > 0));
+ new_ehca_av.grh.word_0 = EHCA_BMASK_SET(GRH_TCLASS_MASK,
+ ah_attr->grh.traffic_class);
+ new_ehca_av.grh.word_0 |= EHCA_BMASK_SET(GRH_FLOWLABEL_MASK,
+ ah_attr->grh.flow_label);
+ new_ehca_av.grh.word_0 |= EHCA_BMASK_SET(GRH_HOPLIMIT_MASK,
+ ah_attr->grh.hop_limit);
+ new_ehca_av.grh.word_0 |= EHCA_BMASK_SET(GRH_NEXTHEADER_MASK, 0x1b);
+ new_ehca_av.grh.word_0 = be64_to_cpu(new_ehca_av.grh.word_0);
+
+ /* set sgid in grh.word_1 */
+ if (ah_attr->ah_flags & IB_AH_GRH) {
+ int rc = 0;
+ struct ib_port_attr port_attr;
+ union ib_gid gid;
+ memset(&port_attr, 0, sizeof(port_attr));
+ rc = ehca_query_port(ah->device, ah_attr->port_num,
+ &port_attr);
+ if (rc != 0) { /* invalid port number */
+ ret = -EINVAL;
+ EDEB_ERR(4, "Invalid port number "
+ "ehca_query_port() returned %x "
+ "ah=%p ah_attr=%p port_num=%x",
+ rc, ah, ah_attr, ah_attr->port_num);
+ goto modify_ah_exit1;
+ }
+ memset(&gid, 0, sizeof(gid));
+ rc = ehca_query_gid(ah->device,
+ ah_attr->port_num,
+ ah_attr->grh.sgid_index, &gid);
+ if (rc != 0) {
+ ret = -EINVAL;
+ EDEB_ERR(4,
+ "Failed to retrieve sgid "
+ "ehca_query_gid() returned %x "
+ "ah=%p ah_attr=%p port_num=%x "
+ "sgid_index=%x",
+ rc, ah, ah_attr, ah_attr->port_num,
+ ah_attr->grh.sgid_index);
+ goto modify_ah_exit1;
+ }
+ memcpy(&new_ehca_av.grh.word_1, &gid, sizeof(gid));
+ }
+
+ new_ehca_av.pmtu = 4; /* TODO: see comment in create_ah() */
+
+ memcpy(&new_ehca_av.grh.word_3, &ah_attr->grh.dgid,
+ sizeof(ah_attr->grh.dgid));
+
+ av = container_of(ah, struct ehca_av, ib_ah);
+ av->av = new_ehca_av;
+
+ modify_ah_exit1:
+ EDEB_EX(7,"ret=%x ah=%p ah_attr=%p", ret, ah, ah_attr);
+
+ return ret;
+}
+
+int ehca_query_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr)
+{
+ int ret = 0;
+ struct ehca_av *av = NULL;
+
+ EHCA_CHECK_AV(ah);
+ EHCA_CHECK_ADR(ah_attr);
+
+ EDEB_EN(7,"ah=%p ah_attr=%p", ah, ah_attr);
+
+ av = container_of(ah, struct ehca_av, ib_ah);
+ memcpy(&ah_attr->grh.dgid, &av->av.grh.word_3,
+ sizeof(ah_attr->grh.dgid));
+ ah_attr->sl = av->av.sl;
+
+ ah_attr->dlid = av->av.dlid;
+
+ ah_attr->src_path_bits = av->av.slid_path_bits;
+ ah_attr->static_rate = av->av.ipd;
+ ah_attr->ah_flags = EHCA_BMASK_GET(GRH_FLAG_MASK, av->av.lnh);
+ ah_attr->grh.traffic_class = EHCA_BMASK_GET(GRH_TCLASS_MASK,
+ av->av.grh.word_0);
+ ah_attr->grh.hop_limit = EHCA_BMASK_GET(GRH_HOPLIMIT_MASK,
+ av->av.grh.word_0);
+ ah_attr->grh.flow_label = EHCA_BMASK_GET(GRH_FLOWLABEL_MASK,
+ av->av.grh.word_0);
+
+ EDEB_EX(7,"ah=%p ah_attr=%p ret=%x", ah, ah_attr, ret);
+ return ret;
+}
+
+int ehca_destroy_ah(struct ib_ah *ah)
+{
+ int ret = 0;
+
+	EHCA_CHECK_AV(ah);
+
+	EDEB_EN(7, "ah=%p", ah);
+
+	EHCA_DEREGISTER_AV(ah);
+
+	ehca_av_delete(container_of(ah, struct ehca_av, ib_ah));
+
+ EDEB_EX(7,"ret=%x ah=%p", ret, ah);
+ return ret;
+}
diff --git a/drivers/infiniband/hw/ehca/ehca_mcast.c b/drivers/infiniband/hw/ehca/ehca_mcast.c
new file mode 100644
index 0000000..b49bcf6
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_mcast.c
@@ -0,0 +1,194 @@
+
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * mcast functions
+ *
+ * Authors: Waleri Fomin <[email protected]>
+ * Reinhard Ernst <[email protected]>
+ * Hoang-Nam Nguyen <[email protected]>
+ * Heiko J Schick <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_mcast.c,v 1.20 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#define DEB_PREFIX "mcas"
+
+#include "ehca_kernel.h"
+#include "ehca_classes.h"
+#include "ehca_tools.h"
+#include "hcp_if.h"
+#include "ehca_qes.h"
+#include <linux/module.h>
+#include <linux/err.h>
+#include "ehca_iverbs.h"
+
+#define MAX_MC_LID 0xFFFE
+#define MIN_MC_LID 0xC000 /* Multicast limits */
+#define EHCA_VALID_MULTICAST_GID(gid) ((gid)[0] == 0xFF)
+#define EHCA_VALID_MULTICAST_LID(lid) (((lid) >= MIN_MC_LID) && ((lid) <= MAX_MC_LID))
+
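These range checks can be exercised standalone; the sketch below restates them as functions. Note that the lid check needs MAX_MC_LID as its upper bound (0xC000..0xFFFE is the IBA multicast LID range), and that IBA multicast GIDs start with the byte 0xFF.

```c
#include <assert.h>

#define MAX_MC_LID 0xFFFE
#define MIN_MC_LID 0xC000

/* IBA multicast GIDs have 0xFF in the first raw byte */
static int valid_mcast_gid(const unsigned char *gid)
{
	return gid[0] == 0xFF;
}

/* multicast LIDs occupy the closed range [MIN_MC_LID, MAX_MC_LID] */
static int valid_mcast_lid(unsigned short lid)
{
	return lid >= MIN_MC_LID && lid <= MAX_MC_LID;
}
```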
+int ehca_attach_mcast(struct ib_qp *ibqp, union ib_gid *gid, u16 lid)
+{
+ struct ehca_qp *my_qp = NULL;
+ struct ehca_shca *shca = NULL;
+ union ib_gid my_gid;
+ u64 hipz_rc = H_Success;
+ int retcode = 0;
+
+ EHCA_CHECK_ADR(ibqp);
+ EHCA_CHECK_ADR(gid);
+
+ my_qp = container_of(ibqp, struct ehca_qp, ib_qp);
+
+ EHCA_CHECK_QP(my_qp);
+ if (ibqp->qp_type != IB_QPT_UD) {
+ EDEB_ERR(4, "invalid qp_type %x gid, retcode=%x",
+ ibqp->qp_type, EINVAL);
+ return (-EINVAL);
+ }
+
+ shca = container_of(ibqp->pd->device, struct ehca_shca, ib_device);
+ EHCA_CHECK_ADR(shca);
+
+ if (!(EHCA_VALID_MULTICAST_GID(gid->raw))) {
+		EDEB_ERR(4, "gid is not valid multicast gid retcode=%x",
+ EINVAL);
+ return (-EINVAL);
+ } else if ((lid < MIN_MC_LID) || (lid > MAX_MC_LID)) {
+		EDEB_ERR(4, "lid=%x is not valid multicast lid retcode=%x",
+ lid, EINVAL);
+ return (-EINVAL);
+ }
+
+ memcpy(&my_gid.raw, gid->raw, sizeof(union ib_gid));
+
+ hipz_rc = hipz_h_attach_mcqp(shca->ipz_hca_handle,
+ my_qp->ipz_qp_handle,
+ my_qp->ehca_qp_core.galpas.kernel,
+ lid, my_gid);
+ if (H_Success != hipz_rc) {
+ EDEB_ERR(4,
+ "ehca_qp=%p qp_num=%x hipz_h_attach_mcqp() failed "
+ "hipz_rc=%lx", my_qp, ibqp->qp_num, hipz_rc);
+ }
+ retcode = ehca2ib_return_code(hipz_rc);
+
+ EDEB_EX(7, "mcast attach retcode=%x\n"
+ "ehca_qp=%p qp_num=%x lid=%x\n"
+ "my_gid= %x %x %x %x\n"
+ " %x %x %x %x\n"
+ " %x %x %x %x\n"
+ " %x %x %x %x\n",
+ retcode, my_qp, ibqp->qp_num, lid,
+ my_gid.raw[0], my_gid.raw[1],
+ my_gid.raw[2], my_gid.raw[3],
+ my_gid.raw[4], my_gid.raw[5],
+ my_gid.raw[6], my_gid.raw[7],
+ my_gid.raw[8], my_gid.raw[9],
+ my_gid.raw[10], my_gid.raw[11],
+ my_gid.raw[12], my_gid.raw[13],
+ my_gid.raw[14], my_gid.raw[15]);
+
+ return retcode;
+}
+
+int ehca_detach_mcast(struct ib_qp *ibqp, union ib_gid *gid, u16 lid)
+{
+ struct ehca_qp *my_qp = NULL;
+ struct ehca_shca *shca = NULL;
+ union ib_gid my_gid;
+ u64 hipz_rc = H_Success;
+ int retcode = 0;
+
+ EHCA_CHECK_ADR(ibqp);
+ EHCA_CHECK_ADR(gid);
+
+ my_qp = container_of(ibqp, struct ehca_qp, ib_qp);
+
+ EHCA_CHECK_QP(my_qp);
+ if (ibqp->qp_type != IB_QPT_UD) {
+ EDEB_ERR(4, "invalid qp_type %x gid, retcode=%x",
+ ibqp->qp_type, EINVAL);
+ return (-EINVAL);
+ }
+
+ shca = container_of(ibqp->pd->device, struct ehca_shca, ib_device);
+ EHCA_CHECK_ADR(shca);
+
+ if (!(EHCA_VALID_MULTICAST_GID(gid->raw))) {
+		EDEB_ERR(4, "gid is not valid multicast gid retcode=%x",
+ EINVAL);
+ return (-EINVAL);
+ } else if ((lid < MIN_MC_LID) || (lid > MAX_MC_LID)) {
+		EDEB_ERR(4, "lid=%x is not valid multicast lid retcode=%x",
+ lid, EINVAL);
+ return (-EINVAL);
+ }
+
+	EDEB_EN(7, "dgid=%p qp_num=%x lid=%x",
+ gid, ibqp->qp_num, lid);
+
+ memcpy(&my_gid.raw, gid->raw, sizeof(union ib_gid));
+
+ hipz_rc = hipz_h_detach_mcqp(shca->ipz_hca_handle,
+ my_qp->ipz_qp_handle,
+ my_qp->ehca_qp_core.galpas.kernel,
+ lid, my_gid);
+ if (H_Success != hipz_rc) {
+ EDEB_ERR(4,
+ "ehca_qp=%p qp_num=%x hipz_h_detach_mcqp() failed "
+ "hipz_rc=%lx", my_qp, ibqp->qp_num, hipz_rc);
+ }
+ retcode = ehca2ib_return_code(hipz_rc);
+
+ EDEB_EX(7, "mcast detach retcode=%x\n"
+ "ehca_qp=%p qp_num=%x lid=%x\n"
+ "my_gid= %x %x %x %x\n"
+ " %x %x %x %x\n"
+ " %x %x %x %x\n"
+ " %x %x %x %x\n",
+ retcode, my_qp, ibqp->qp_num, lid,
+ my_gid.raw[0], my_gid.raw[1],
+ my_gid.raw[2], my_gid.raw[3],
+ my_gid.raw[4], my_gid.raw[5],
+ my_gid.raw[6], my_gid.raw[7],
+ my_gid.raw[8], my_gid.raw[9],
+ my_gid.raw[10], my_gid.raw[11],
+ my_gid.raw[12], my_gid.raw[13],
+ my_gid.raw[14], my_gid.raw[15]);
+
+ return retcode;
+}
diff --git a/drivers/infiniband/hw/ehca/ehca_pd.c b/drivers/infiniband/hw/ehca/ehca_pd.c
new file mode 100644
index 0000000..e110320
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_pd.c
@@ -0,0 +1,100 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * PD functions
+ *
+ * Authors: Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_pd.c,v 1.25 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+
+#define DEB_PREFIX "vpd "
+
+#include "ehca_kernel.h"
+#include "ehca_tools.h"
+#include "ehca_iverbs.h"
+
+struct ib_pd *ehca_alloc_pd(struct ib_device *device,
+ struct ib_ucontext *context, struct ib_udata *udata)
+{
+ struct ib_pd *mypd = NULL;
+ struct ehca_pd *pd = NULL;
+
+ EDEB_EN(7, "device=%p context=%p udata=%p", device, context, udata);
+
+ EHCA_CHECK_DEVICE_P(device);
+
+ pd = ehca_pd_new();
+ if (!pd) {
+ EDEB_ERR(4, "ERROR device=%p context=%p pd=%p "
+ "out of memory", device, context, mypd);
+ return ERR_PTR(-ENOMEM);
+ }
+
+	/* a kernel PD is requested with context == NULL;
+	 * any non-NULL context means a user PD */
+ if (context == NULL) {
+ /* kernel pds after init reuses always
+ * the one created in ehca_shca_reopen()
+ */
+ struct ehca_shca *shca = container_of(device, struct ehca_shca,
+ ib_device);
+ pd->fw_pd.value = shca->pd->fw_pd.value;
+ } else {
+ pd->fw_pd.value = (u64)pd;
+ }
+
+ mypd = &pd->ib_pd;
+
+ EHCA_REGISTER_PD(device, pd);
+
+ EDEB_EX(7, "device=%p context=%p pd=%p", device, context, mypd);
+
+ return (mypd);
+}
+
+int ehca_dealloc_pd(struct ib_pd *pd)
+{
+ int ret = 0;
+ EDEB_EN(7, "pd=%p", pd);
+
+ EHCA_CHECK_PD(pd);
+ EHCA_DEREGISTER_PD(pd);
+ ehca_pd_delete(container_of(pd, struct ehca_pd, ib_pd));
+
+ EDEB_EX(7, "pd=%p", pd);
+ return ret;
+}

2006-02-18 00:59:39

by Roland Dreier

[permalink] [raw]
Subject: [PATCH 13/22] HCA query functions

From: Roland Dreier <[email protected]>


---

drivers/infiniband/hw/ehca/ehca_hca.c | 321 +++++++++++++++++++++++++++++++++
1 files changed, 321 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_hca.c b/drivers/infiniband/hw/ehca/ehca_hca.c
new file mode 100644
index 0000000..af05a5c
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_hca.c
@@ -0,0 +1,321 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * HCA query functions
+ *
+ * Authors: Heiko J Schick <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_hca.c,v 1.46 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#undef DEB_PREFIX
+#define DEB_PREFIX "shca"
+
+#include "ehca_kernel.h"
+#include "ehca_tools.h"
+
+#include "hcp_if.h" /* TODO: later via hipz_* header file */
+
+#define TO_MAX_INT(dest, src)		\
+	do {				\
+		if ((src) >= INT_MAX)	\
+			(dest) = INT_MAX; \
+		else			\
+			(dest) = (src);	\
+	} while (0)
+
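The clamping behavior of TO_MAX_INT can be sketched standalone. The version below wraps the body in `do { } while (0)` so the macro remains a single statement even inside an unbraced `if`/`else` (the `clamp()` helper name is illustrative, not part of the driver):

```c
#include <assert.h>
#include <limits.h>

/* clamp a wide value into an int, saturating at INT_MAX */
#define CLAMP_TO_INT(dest, src)			\
	do {					\
		if ((src) >= INT_MAX)		\
			(dest) = INT_MAX;	\
		else				\
			(dest) = (src);		\
	} while (0)

static int clamp(long long src)
{
	int dest;

	CLAMP_TO_INT(dest, src);
	return dest;
}
```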
+int ehca_query_device(struct ib_device *ibdev, struct ib_device_attr *props)
+{
+ int ret = 0;
+ struct ehca_shca *shca;
+ struct query_hca_rblock *rblock;
+
+ EDEB_EN(7, "");
+ EHCA_CHECK_DEVICE(ibdev);
+
+ memset(props, 0, sizeof(struct ib_device_attr));
+ shca = container_of(ibdev, struct ehca_shca, ib_device);
+
+ rblock = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (rblock == NULL) {
+ EDEB_ERR(4, "Can't allocate rblock memory.");
+ ret = -ENOMEM;
+ goto query_device0;
+ }
+
+ memset(rblock, 0, PAGE_SIZE);
+
+ if (hipz_h_query_hca(shca->ipz_hca_handle, rblock) != H_Success) {
+ EDEB_ERR(4, "Can't query device properties");
+ ret = -EINVAL;
+ goto query_device1;
+ }
+ props->fw_ver = rblock->hw_ver;
+ /* TODO: memcpy(&props->sys_image_guid, ...); */
+ props->max_mr_size = rblock->max_mr_size;
+ /* TODO: props->page_size_cap */
+ props->vendor_id = rblock->vendor_id >> 8;
+ props->vendor_part_id = rblock->vendor_part_id >> 16;
+ props->hw_ver = rblock->hw_ver;
+ TO_MAX_INT(props->max_qp, (rblock->max_qp - rblock->cur_qp));
+ /* TODO: props->max_qp_wr = */
+ /* TODO: props->device_cap_flags */
+ props->max_sge = rblock->max_sge;
+ props->max_sge_rd = rblock->max_sge_rd;
+	TO_MAX_INT(props->max_cq, (rblock->max_cq - rblock->cur_cq));
+ props->max_cqe = rblock->max_cqe;
+	TO_MAX_INT(props->max_mr, (rblock->max_mr - rblock->cur_mr));
+ TO_MAX_INT(props->max_pd, rblock->max_pd);
+ /* TODO: props->max_qp_rd_atom */
+ /* TODO: props->max_qp_init_rd_atom */
+ /* TODO: props->atomic_cap */
+ /* TODO: props->max_ee */
+ /* TODO: props->max_rdd */
+ props->max_mw = rblock->max_mw;
+	TO_MAX_INT(props->max_mw, (rblock->max_mw - rblock->cur_mw));
+ props->max_raw_ipv6_qp = rblock->max_raw_ipv6_qp;
+ props->max_raw_ethy_qp = rblock->max_raw_ethy_qp;
+ props->max_mcast_grp = rblock->max_mcast_grp;
+ props->max_mcast_qp_attach = rblock->max_qps_attached_mcast_grp;
+ props->max_total_mcast_qp_attach = rblock->max_qps_attached_all_mcast_grp;
+
+ TO_MAX_INT(props->max_ah, rblock->max_ah);
+
+ props->max_fmr = rblock->max_mr;
+ /* TODO: props->max_map_per_fmr */
+
+ /* TODO: props->max_srq */
+ /* TODO: props->max_srq_wr */
+ /* TODO: props->max_srq_sge */
+ props->max_srq = 0;
+ props->max_srq_wr = 0;
+ props->max_srq_sge = 0;
+
+ /* TODO: props->max_pkeys */
+ props->max_pkeys = 16;
+
+ props->local_ca_ack_delay = rblock->local_ca_ack_delay;
+
+ query_device1:
+ kfree(rblock);
+
+ query_device0:
+ EDEB_EX(7, "ret=%x", ret);
+
+ return ret;
+}
+
+int ehca_query_port(struct ib_device *ibdev,
+ u8 port, struct ib_port_attr *props)
+{
+ int ret = 0;
+ struct ehca_shca *shca;
+ struct query_port_rblock *rblock;
+
+ EDEB_EN(7, "port=%x", port);
+ EHCA_CHECK_DEVICE(ibdev);
+
+ memset(props, 0, sizeof(struct ib_port_attr));
+ shca = container_of(ibdev, struct ehca_shca, ib_device);
+
+ rblock = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (rblock == NULL) {
+ EDEB_ERR(4, "Can't allocate rblock memory.");
+ ret = -ENOMEM;
+ goto query_port0;
+ }
+
+ memset(rblock, 0, PAGE_SIZE);
+
+ if (hipz_h_query_port(shca->ipz_hca_handle, port, rblock) != H_Success) {
+ EDEB_ERR(4, "Can't query port properties");
+ ret = -EINVAL;
+ goto query_port1;
+ }
+
+ props->state = rblock->state;
+
+ switch (rblock->max_mtu) {
+ case 0x1:
+ props->active_mtu = props->max_mtu = IB_MTU_256;
+ break;
+ case 0x2:
+ props->active_mtu = props->max_mtu = IB_MTU_512;
+ break;
+ case 0x3:
+ props->active_mtu = props->max_mtu = IB_MTU_1024;
+ break;
+ case 0x4:
+ props->active_mtu = props->max_mtu = IB_MTU_2048;
+ break;
+ case 0x5:
+ props->active_mtu = props->max_mtu = IB_MTU_4096;
+ break;
+ default:
+ EDEB_ERR(4, "Unknown MTU size: %x.", rblock->max_mtu);
+ }
+
+ props->gid_tbl_len = rblock->gid_tbl_len;
+ /* TODO: props->port_cap_flags */
+ props->max_msg_sz = rblock->max_msg_sz;
+ props->bad_pkey_cntr = rblock->bad_pkey_cntr;
+ props->qkey_viol_cntr = rblock->qkey_viol_cntr;
+ props->pkey_tbl_len = rblock->pkey_tbl_len;
+ props->lid = rblock->lid;
+ props->sm_lid = rblock->sm_lid;
+ props->lmc = rblock->lmc;
+ /* TODO: max_vl_num */
+ props->sm_sl = rblock->sm_sl;
+ props->subnet_timeout = rblock->subnet_timeout;
+ props->init_type_reply = rblock->init_type_reply;
+
+ /* TODO: props->active_width */
+ props->active_width = IB_WIDTH_12X;
+ /* TODO: props->active_speed */
+
+ /* TODO: props->phys_state */
+
+ query_port1:
+ kfree(rblock);
+
+ query_port0:
+ EDEB_EX(7, "ret=%x", ret);
+
+ return ret;
+}
+
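The firmware MTU codes decoded in the switch above follow a simple power-of-two pattern: codes 1..5 map to 256, 512, 1024, 2048, and 4096 bytes. A hedged standalone sketch of that mapping (the helper name is illustrative, not part of the driver):

```c
#include <assert.h>

/* codes 1..5 map to 256 << (code - 1) bytes; anything else is unknown,
 * mirroring the default: branch in the switch above */
static int mtu_code_to_bytes(int mtu_code)
{
	if (mtu_code < 1 || mtu_code > 5)
		return -1;
	return 256 << (mtu_code - 1);
}
```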
+int ehca_query_pkey(struct ib_device *ibdev, u8 port, u16 index, u16 *pkey)
+{
+ int ret = 0;
+ struct ehca_shca *shca;
+ struct query_port_rblock *rblock;
+
+ EDEB_EN(7, "port=%x index=%x", port, index);
+ EHCA_CHECK_DEVICE(ibdev);
+
+	if (index > 15) {
+ EDEB_ERR(4, "Invalid index: %x.", index);
+ ret = -EINVAL;
+ goto query_pkey0;
+ }
+
+ shca = container_of(ibdev, struct ehca_shca, ib_device);
+
+ rblock = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (rblock == NULL) {
+ EDEB_ERR(4, "Can't allocate rblock memory.");
+ ret = -ENOMEM;
+ goto query_pkey0;
+ }
+
+ memset(rblock, 0, PAGE_SIZE);
+
+ if (hipz_h_query_port(shca->ipz_hca_handle, port, rblock) != H_Success) {
+ EDEB_ERR(4, "Can't query port properties");
+ ret = -EINVAL;
+ goto query_pkey1;
+ }
+
+ memcpy(pkey, &rblock->pkey_entries + index, sizeof(u16));
+
+ query_pkey1:
+ kfree(rblock);
+
+ query_pkey0:
+ EDEB_EX(7, "ret=%x", ret);
+
+ return ret;
+}
+
+int ehca_query_gid(struct ib_device *ibdev, u8 port,
+ int index, union ib_gid *gid)
+{
+ int ret = 0;
+ struct ehca_shca *shca;
+ struct query_port_rblock *rblock;
+
+ EDEB_EN(7, "port=%x index=%x", port, index);
+ EHCA_CHECK_DEVICE(ibdev);
+
+ if (index > 255) {
+ EDEB_ERR(4, "Invalid index: %x.", index);
+ ret = -EINVAL;
+ goto query_gid0;
+ }
+
+ shca = container_of(ibdev, struct ehca_shca, ib_device);
+
+ rblock = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (rblock == NULL) {
+ EDEB_ERR(4, "Can't allocate rblock memory.");
+ ret = -ENOMEM;
+ goto query_gid0;
+ }
+
+ memset(rblock, 0, PAGE_SIZE);
+
+ if (hipz_h_query_port(shca->ipz_hca_handle, port, rblock) != H_Success) {
+ EDEB_ERR(4, "Can't query port properties");
+ ret = -EINVAL;
+ goto query_gid1;
+ }
+
+ memcpy(&gid->raw[0], &rblock->gid_prefix, sizeof(u64));
+ memcpy(&gid->raw[8], &rblock->guid_entries[index], sizeof(u64));
+
+ query_gid1:
+ kfree(rblock);
+
+ query_gid0:
+ EDEB_EX(7, "ret=%x GID=%lx%lx", ret,
+		*(u64 *)&gid->raw[0],
+		*(u64 *)&gid->raw[8]);
+
+ return ret;
+}
+
+int ehca_modify_port(struct ib_device *ibdev,
+ u8 port, int port_modify_mask,
+ struct ib_port_modify *props)
+{
+ int ret = 0;
+
+ EDEB_EN(7, "port=%x", port);
+ EHCA_CHECK_DEVICE(ibdev);
+
+ /* TODO: implementation */
+
+ EDEB_EX(7, "ret=%x", ret);
+
+ return ret;
+}

2006-02-18 01:01:20

by Roland Dreier

[permalink] [raw]
Subject: [PATCH 22/22] ehca Makefile/Kconfig changes

From: Roland Dreier <[email protected]>


---

drivers/infiniband/Kconfig | 2 ++
drivers/infiniband/Makefile | 1 +
drivers/infiniband/hw/ehca/Kbuild | 8 ++++++++
drivers/infiniband/hw/ehca/Kconfig | 6 ++++++
4 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig
index bdf0891..2b3ad03 100644
--- a/drivers/infiniband/Kconfig
+++ b/drivers/infiniband/Kconfig
@@ -31,6 +31,8 @@ config INFINIBAND_USER_ACCESS

source "drivers/infiniband/hw/mthca/Kconfig"

+source "drivers/infiniband/hw/ehca/Kconfig"
+
source "drivers/infiniband/ulp/ipoib/Kconfig"

source "drivers/infiniband/ulp/srp/Kconfig"
diff --git a/drivers/infiniband/Makefile b/drivers/infiniband/Makefile
index a43fb34..eb7788f 100644
--- a/drivers/infiniband/Makefile
+++ b/drivers/infiniband/Makefile
@@ -1,4 +1,5 @@
obj-$(CONFIG_INFINIBAND) += core/
obj-$(CONFIG_INFINIBAND_MTHCA) += hw/mthca/
+obj-$(CONFIG_INFINIBAND_EHCA) += hw/ehca/
obj-$(CONFIG_INFINIBAND_IPOIB) += ulp/ipoib/
obj-$(CONFIG_INFINIBAND_SRP) += ulp/srp/
diff --git a/drivers/infiniband/hw/ehca/Kbuild b/drivers/infiniband/hw/ehca/Kbuild
new file mode 100644
index 0000000..7b610b1
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/Kbuild
@@ -0,0 +1,8 @@
+obj-$(CONFIG_INFINIBAND_EHCA) += hcad_mod.o
+
+hcad_mod-objs = ehca_main.o ehca_hca.o ipz_pt_fn.o ehca_classes.o ehca_av.o \
+ ehca_pd.o ehca_mrmw.o ehca_cq.o ehca_sqp.o ehca_qp.o hcp_sense.o \
+ ehca_eq.o ehca_irq.o hcp_phyp.o ehca_mcast.o ehca_reqs.o \
+ ehca_uverbs.o
+
+CFLAGS +=-DP_SERIES -DEHCA_USE_HCALL -DEHCA_USE_HCALL_KERNEL
diff --git a/drivers/infiniband/hw/ehca/Kconfig b/drivers/infiniband/hw/ehca/Kconfig
new file mode 100644
index 0000000..b875649
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/Kconfig
@@ -0,0 +1,6 @@
+config INFINIBAND_EHCA
+ tristate "eHCA support"
+ depends on IBMEBUS && INFINIBAND
+ ---help---
+	  This is a low-level device driver for the IBM
+	  GX-based host channel adapters (HCAs).
\ No newline at end of file

2006-02-18 00:59:39

by Roland Dreier

[permalink] [raw]
Subject: [PATCH 17/22] Special QP functions

From: Roland Dreier <[email protected]>

The wait for the port to become active when creating QP 1 seems
bizarre. Why can't we just create QP 1 before the port is active?

What is the issue with creating QP 0? Without QP 0, it's impossible
to run a subnet manager on top of ehca.
---

drivers/infiniband/hw/ehca/ehca_sqp.c | 135 +++++++++++++++++++++++++++++++++
1 files changed, 135 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_sqp.c b/drivers/infiniband/hw/ehca/ehca_sqp.c
new file mode 100644
index 0000000..bbad4cb
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_sqp.c
@@ -0,0 +1,135 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * SQP functions
+ *
+ * Authors: Khadija Souissi <[email protected]>
+ * Heiko J Schick <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_sqp.c,v 1.35 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+
+#define DEB_PREFIX "e_qp"
+
+#include "ehca_kernel.h"
+#include "ehca_classes.h"
+#include "ehca_tools.h"
+#include "hcp_if.h"
+#include "ehca_qes.h"
+#include "ehca_iverbs.h"
+
+#include <linux/module.h>
+#include <linux/err.h>
+
+extern int ehca_create_aqp1(struct ehca_shca *shca, struct ehca_sport *sport);
+extern int ehca_destroy_aqp1(struct ehca_sport *sport);
+
+extern int ehca_port_act_time;
+
+/**
+ * ehca_define_sqp - register a special QP (SMI/GSI) with the firmware
+ *
+ * @shca: HCA context (adapter handle)
+ * @ehca_qp: the QP to define (ipz_qp_handle, galpas.kernel)
+ * @qp_init_attr: QP init attributes (QP type and port number)
+ */
+u64 ehca_define_sqp(struct ehca_shca *shca,
+ struct ehca_qp *ehca_qp,
+ struct ib_qp_init_attr *qp_init_attr)
+{
+
+ u32 pma_qp_nr = 0;
+ u32 bma_qp_nr = 0;
+ u64 ret = H_Success;
+ u8 port = qp_init_attr->port_num;
+ int counter = 0;
+
+ EDEB_EN(7, "port=%x qp_type=%x",
+ port, qp_init_attr->qp_type);
+
+ shca->sport[port - 1].port_state = IB_PORT_DOWN;
+
+ switch (qp_init_attr->qp_type) {
+ case IB_QPT_SMI:
+ /* TODO: function not supported yet */
+ /*
+ ret = hipz_h_define_aqp0(shca->ipz_hca_handle,
+ ehca_qp->ipz_qp_handle,
+ ehca_qp->galpas.kernel,
+ (u32)qp_init_attr->port_num);
+ */
+ break;
+ case IB_QPT_GSI:
+ ret = hipz_h_define_aqp1(shca->ipz_hca_handle,
+ ehca_qp->ipz_qp_handle,
+ ehca_qp->ehca_qp_core.galpas.kernel,
+ (u32) qp_init_attr->port_num,
+ &pma_qp_nr, &bma_qp_nr);
+
+ if (ret != H_Success) {
+ EDEB_ERR(4, "Can't define AQP1 for port %x. rc=%lx",
+ port, ret);
+ goto ehca_define_aqp1;
+ }
+ break;
+ default:
+ ret = H_Parameter;
+ goto ehca_define_aqp1;
+ }
+
+#ifndef EHCA_USERDRIVER
+ while ((shca->sport[port - 1].port_state != IB_PORT_ACTIVE) &&
+ (counter < ehca_port_act_time)) {
+ EDEB(6, "... wait until port %x is active",
+ port);
+ msleep_interruptible(1000);
+ counter++;
+ }
+
+ if (counter == ehca_port_act_time) {
+ EDEB_ERR(4, "Port %x is not active.", port);
+ ret = H_Hardware;
+ }
+#else
+ if (shca->sport[port - 1].port_state != IB_PORT_ACTIVE) {
+ sleep(20);
+ }
+#endif
+
+ ehca_define_aqp1:
+ EDEB_EX(7, "ret=%lx", ret);
+
+ return ret;
+}

2006-02-18 01:03:10

by Roland Dreier

Subject: [PATCH 21/22] ehca main file

From: Roland Dreier <[email protected]>

What is ehca_show_flightrecorder() trying to do that snprintf() is
not fast enough? If you need to pass a binary structure back to
userspace (with a kernel address in it??) then sysfs is not the right
place to put it. Look at debugfs; or relayfs might make the most
sense for your flightrecorder stuff.
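
For reference, something along these lines (an untested sketch against the
debugfs API -- names like ehca_debug_dir are made up here, they are not in
the patch) would export the raw buffer without pushing a kernel pointer
through sysfs:

```c
#include <linux/debugfs.h>

/* Untested sketch: expose the flight recorder as a binary blob via
 * debugfs instead of returning its kernel address from a sysfs
 * attribute. */
static struct dentry *ehca_debug_dir;
static struct debugfs_blob_wrapper ehca_fr_blob;

static int ehca_init_debugfs(void)
{
	ehca_debug_dir = debugfs_create_dir("ehca", NULL);
	if (!ehca_debug_dir)
		return -ENOMEM;

	ehca_fr_blob.data = ehca_flightrecorder;
	ehca_fr_blob.size = sizeof(ehca_flightrecorder);
	debugfs_create_blob("flightrecorder", S_IRUSR,
			    ehca_debug_dir, &ehca_fr_blob);
	return 0;
}
```

Userspace can then just read the blob out of debugfs instead of chasing a
raw kernel address.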
---

drivers/infiniband/hw/ehca/ehca_main.c | 1032 ++++++++++++++++++++++++++++++++
1 files changed, 1032 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_main.c b/drivers/infiniband/hw/ehca/ehca_main.c
new file mode 100644
index 0000000..2e2be06
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_main.c
@@ -0,0 +1,1032 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * module start stop, hca detection
+ *
+ * Authors: Heiko J Schick <[email protected]>
+ * Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_main.c,v 1.137 2006/02/06 16:20:38 schickhj Exp $
+ */
+
+#define DEB_PREFIX "shca"
+
+#include "ehca_kernel.h"
+#include "ehca_tools.h"
+#include "ehca_classes.h"
+#include "ehca_iverbs.h"
+#include "ehca_eq.h"
+#include "ehca_mrmw.h"
+
+#include "hcp_sense.h" /* TODO: later via hipz_* header file */
+#include "hcp_if.h" /* TODO: later via hipz_* header file */
+
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_AUTHOR("Christoph Raisch <[email protected]>");
+MODULE_DESCRIPTION("IBM eServer HCA Driver");
+MODULE_VERSION("EHCA2_0047");
+
+#ifdef EHCA_USERDRIVER
+int ehca_open_aqp1 = 1;
+#else
+int ehca_open_aqp1 = 0;
+#endif
+int ehca_tracelevel = -1;
+int ehca_hw_level = 0;
+int ehca_nr_ports = 2;
+int ehca_use_hp_mr = 0;
+int ehca_port_act_time = 30;
+int ehca_poll_all_eqs = 1;
+int ehca_static_rate = -1;
+
+module_param_named(open_aqp1, ehca_open_aqp1, int, 0);
+module_param_named(tracelevel, ehca_tracelevel, int, 0);
+module_param_named(hw_level, ehca_hw_level, int, 0);
+module_param_named(nr_ports, ehca_nr_ports, int, 0);
+module_param_named(use_hp_mr, ehca_use_hp_mr, int, 0);
+module_param_named(port_act_time, ehca_port_act_time, int, 0);
+module_param_named(poll_all_eqs, ehca_poll_all_eqs, int, 0);
+module_param_named(static_rate, ehca_static_rate, int, 0);
+
+MODULE_PARM_DESC(open_aqp1, "0: do not define AQP1 on startup (default), "
+		 "1: define AQP1 on startup");
+MODULE_PARM_DESC(tracelevel, "0: maximum performance (no messages), "
+		 "9: maximum messages (no performance)");
+MODULE_PARM_DESC(hw_level, "0: autosensing, "
+		 "1: v. 0.20, "
+		 "2: v. 0.21");
+MODULE_PARM_DESC(nr_ports, "number of connected ports (default: 2)");
+MODULE_PARM_DESC(use_hp_mr, "use high performance MRs, "
+		 "0: no (default), "
+		 "1: yes");
+MODULE_PARM_DESC(port_act_time, "time to wait for port activation "
+		 "(default: 30 sec.)");
+MODULE_PARM_DESC(poll_all_eqs, "poll all event queues periodically, "
+		 "0: no, "
+		 "1: yes (default)");
+MODULE_PARM_DESC(static_rate, "set permanent static rate (default: disabled)");
+
+/* This external trace mask controls what ends up in the
+ * kernel ring buffer. A value of 6 means that all levels
+ * from 0 to 5 will be stored.
+ */
+u8 ehca_edeb_mask[EHCA_EDEB_TRACE_MASK_SIZE]={6,6,6,6,
+ 6,6,6,6,
+ 6,6,6,6,
+ 6,6,6,6,
+ 6,6,6,6,
+ 6,6,6,6,
+ 6,6,6,6,
+ 6,6,1,0};
+ /* offset 0x1e is flightrecorder */
+EXPORT_SYMBOL(ehca_edeb_mask);
+
+atomic_t ehca_flightrecorder_index = ATOMIC_INIT(1);
+unsigned long ehca_flightrecorder[EHCA_FLIGHTRECORDER_SIZE];
+EXPORT_SYMBOL(ehca_flightrecorder_index);
+EXPORT_SYMBOL(ehca_flightrecorder);
+
+DECLARE_RWSEM(ehca_qp_idr_sem);
+DECLARE_RWSEM(ehca_cq_idr_sem);
+DEFINE_IDR(ehca_qp_idr);
+DEFINE_IDR(ehca_cq_idr);
+
+struct ehca_module ehca_module;
+struct workqueue_struct *ehca_wq;
+struct task_struct *ehca_kthread_eq;
+
+/**
+ * ehca_init_trace - apply the tracelevel module parameter to the trace mask
+ */
+void ehca_init_trace(void)
+{
+ EDEB_EN(7, "");
+
+ if (ehca_tracelevel != -1) {
+ int i;
+ for (i = 0; i < EHCA_EDEB_TRACE_MASK_SIZE; i++)
+ ehca_edeb_mask[i] = ehca_tracelevel;
+ }
+
+ EDEB_EX(7, "");
+}
+
+/**
+ * ehca_init_flight - initialize the flight recorder buffer
+ */
+void ehca_init_flight(void)
+{
+ EDEB_EN(7, "");
+
+ memset(ehca_flightrecorder, 0xFA,
+ sizeof(unsigned long) * EHCA_FLIGHTRECORDER_SIZE);
+ atomic_set(&ehca_flightrecorder_index, 0);
+ ehca_flightrecorder[0] = 0x12345678abcdef0;
+
+ EDEB_EX(7, "");
+}
+
+/**
+ * ehca_flight_to_printk - dump the flight recorder backlog to the kernel log
+ */
+void ehca_flight_to_printk(void)
+{
+ int cur_offset = atomic_read(&ehca_flightrecorder_index);
+ int new_offset = cur_offset - (EHCA_FLIGHTRECORDER_BACKLOG * 4);
+ u32 flight_offset;
+ int i;
+
+ if (new_offset < 0)
+ new_offset = EHCA_FLIGHTRECORDER_SIZE + new_offset - 4;
+
+ printk(KERN_ERR
+ "EHCA ----- flight recorder begin "
+ "-------------------------------------------\n");
+
+ for (i = 0; i < EHCA_FLIGHTRECORDER_BACKLOG; i++) {
+ new_offset += 4;
+ flight_offset = (u32) new_offset % EHCA_FLIGHTRECORDER_SIZE;
+
+ printk(KERN_ERR "EHCA %02d: %.16lX %.16lX %.16lX %.16lX\n",
+ i + 1,
+ ehca_flightrecorder[flight_offset],
+ ehca_flightrecorder[flight_offset + 1],
+ ehca_flightrecorder[flight_offset + 2],
+ ehca_flightrecorder[flight_offset + 3]);
+ }
+
+ printk(KERN_ERR
+ "EHCA ----- flight recorder end "
+ "---------------------------------------------\n");
+}
+
+#define EHCA_CACHE_CREATE(name) \
+ ehca_module->cache_##name = \
+ kmem_cache_create("ehca_cache_"#name, \
+ sizeof(struct ehca_##name), \
+ 0, SLAB_HWCACHE_ALIGN, \
+ NULL, NULL); \
+ if (ehca_module->cache_##name == NULL) { \
+ EDEB_ERR(4, "Cannot create "#name" SLAB cache."); \
+ return -ENOMEM; \
+ } \
+
+/**
+ * ehca_caches_create - create the SLAB caches for all eHCA resource types
+ * @ehca_module: module context holding the cache pointers
+ */
+int ehca_caches_create(struct ehca_module *ehca_module)
+{
+ EDEB_EN(7, "");
+
+ EHCA_CACHE_CREATE(pd);
+ EHCA_CACHE_CREATE(cq);
+ EHCA_CACHE_CREATE(qp);
+ EHCA_CACHE_CREATE(av);
+ EHCA_CACHE_CREATE(mw);
+ EHCA_CACHE_CREATE(mr);
+
+ EDEB_EX(7, "");
+
+ return 0;
+}
+
+#define EHCA_CACHE_DESTROY(name) \
+ ret = kmem_cache_destroy(ehca_module->cache_##name); \
+ if (ret != 0) { \
+ EDEB_ERR(4, "Cannot destroy "#name" SLAB cache. ret=%x", ret); \
+ return ret; \
+ } \
+
+/**
+ * ehca_caches_destroy - destroy the SLAB caches created by ehca_caches_create
+ * @ehca_module: module context holding the cache pointers
+ */
+int ehca_caches_destroy(struct ehca_module *ehca_module)
+{
+ int ret;
+
+ EDEB_EN(7, "");
+
+ EHCA_CACHE_DESTROY(pd);
+ EHCA_CACHE_DESTROY(cq);
+ EHCA_CACHE_DESTROY(qp);
+ EHCA_CACHE_DESTROY(av);
+ EHCA_CACHE_DESTROY(mw);
+ EHCA_CACHE_DESTROY(mr);
+
+ EDEB_EX(7, "");
+
+ return 0;
+}
+
+#define EHCA_HCAAVER EHCA_BMASK_IBM(32,39)
+#define EHCA_REVID EHCA_BMASK_IBM(40,63)
+
+/**
+ * ehca_sense_attributes - query port count and hardware level of the HCA
+ * @shca: HCA context to fill in
+ */
+int ehca_sense_attributes(struct ehca_shca *shca)
+{
+ int ret = -EINVAL;
+ u64 rc = H_Success;
+ struct query_hca_rblock *rblock;
+
+ EDEB_EN(7, "shca=%p", shca);
+
+ rblock = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (rblock == NULL) {
+ EDEB_ERR(4, "Cannot allocate rblock memory.");
+ ret = -ENOMEM;
+ goto num_ports0;
+ }
+
+ memset(rblock, 0, PAGE_SIZE);
+
+ rc = hipz_h_query_hca(shca->ipz_hca_handle, rblock);
+ if (rc != H_Success) {
+ EDEB_ERR(4, "Cannot query device properties.rc=%lx", rc);
+ ret = -EPERM;
+ goto num_ports1;
+ }
+
+ if (ehca_nr_ports == 1)
+ shca->num_ports = 1;
+ else
+ shca->num_ports = (u8) rblock->num_ports;
+
+ EDEB(6, " ... found %x ports", rblock->num_ports);
+
+ if (ehca_hw_level == 0) {
+ u32 hcaaver;
+ u32 revid;
+
+ hcaaver = EHCA_BMASK_GET(EHCA_HCAAVER, rblock->hw_ver);
+ revid = EHCA_BMASK_GET(EHCA_REVID, rblock->hw_ver);
+
+ EDEB(6, " ... hardware version=%x:%x",
+ hcaaver, revid);
+
+ if ((hcaaver == 1) && (revid == 0))
+ shca->hw_level = 0;
+ else if ((hcaaver == 1) && (revid == 1))
+ shca->hw_level = 1;
+ else if ((hcaaver == 1) && (revid == 2))
+ shca->hw_level = 2;
+ }
+ EDEB(6, " ... hardware level=%x", shca->hw_level);
+
+ ret = 0;
+
+ num_ports1:
+ kfree(rblock);
+
+ num_ports0:
+ EDEB_EX(7, "ret=%x", ret);
+
+ return ret;
+}
+
+static int init_node_guid(struct ehca_shca* shca)
+{
+ int ret = 0;
+ struct query_hca_rblock *rblock;
+
+ EDEB_EN(7, "");
+
+ rblock = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (rblock == NULL) {
+ EDEB_ERR(4, "Can't allocate rblock memory.");
+ ret = -ENOMEM;
+ goto init_node_guid0;
+ }
+
+ memset(rblock, 0, PAGE_SIZE);
+
+ if (hipz_h_query_hca(shca->ipz_hca_handle, rblock) != H_Success) {
+ EDEB_ERR(4, "Can't query device properties");
+ ret = -EINVAL;
+ goto init_node_guid1;
+ }
+
+ memcpy(&shca->ib_device.node_guid, &rblock->node_guid, (sizeof(u64)));
+
+ init_node_guid1:
+ kfree(rblock);
+
+ init_node_guid0:
+ EDEB_EX(7, "node_guid=%lx ret=%x", shca->ib_device.node_guid, ret);
+
+ return ret;
+}
+
+int ehca_register_device(struct ehca_shca *shca)
+{
+ int ret = 0;
+
+ EDEB_EN(7, "shca=%p", shca);
+
+ ret = init_node_guid(shca);
+ if (ret != 0)
+ return ret;
+
+ strlcpy(shca->ib_device.name, "ehca%d", IB_DEVICE_NAME_MAX);
+ shca->ib_device.owner = THIS_MODULE;
+
+ /* TODO: ABI ver later with define */
+ shca->ib_device.uverbs_abi_ver = 1;
+ shca->ib_device.uverbs_cmd_mask =
+ (1ull << IB_USER_VERBS_CMD_GET_CONTEXT) |
+ (1ull << IB_USER_VERBS_CMD_QUERY_DEVICE) |
+ (1ull << IB_USER_VERBS_CMD_QUERY_PORT) |
+ (1ull << IB_USER_VERBS_CMD_ALLOC_PD) |
+ (1ull << IB_USER_VERBS_CMD_DEALLOC_PD) |
+ (1ull << IB_USER_VERBS_CMD_REG_MR) |
+ (1ull << IB_USER_VERBS_CMD_DEREG_MR) |
+ (1ull << IB_USER_VERBS_CMD_CREATE_COMP_CHANNEL) |
+ (1ull << IB_USER_VERBS_CMD_CREATE_CQ) |
+ (1ull << IB_USER_VERBS_CMD_DESTROY_CQ) |
+ (1ull << IB_USER_VERBS_CMD_CREATE_QP) |
+ (1ull << IB_USER_VERBS_CMD_MODIFY_QP) |
+ (1ull << IB_USER_VERBS_CMD_DESTROY_QP) |
+ (1ull << IB_USER_VERBS_CMD_ATTACH_MCAST) |
+ (1ull << IB_USER_VERBS_CMD_DETACH_MCAST);
+
+ shca->ib_device.node_type = RDMA_NODE_IB_CA;
+ shca->ib_device.phys_port_cnt = shca->num_ports;
+ shca->ib_device.dma_device = &shca->ibmebus_dev->ofdev.dev;
+ shca->ib_device.query_device = ehca_query_device;
+ shca->ib_device.query_port = ehca_query_port;
+ shca->ib_device.query_gid = ehca_query_gid;
+ shca->ib_device.query_pkey = ehca_query_pkey;
+	/* shca->ib_device.modify_device = ehca_modify_device; */
+ shca->ib_device.modify_port = ehca_modify_port;
+ shca->ib_device.alloc_ucontext = ehca_alloc_ucontext;
+ shca->ib_device.dealloc_ucontext = ehca_dealloc_ucontext;
+ shca->ib_device.alloc_pd = ehca_alloc_pd;
+ shca->ib_device.dealloc_pd = ehca_dealloc_pd;
+ shca->ib_device.create_ah = ehca_create_ah;
+ /* shca->ib_device.modify_ah = ehca_modify_ah; */
+ shca->ib_device.query_ah = ehca_query_ah;
+ shca->ib_device.destroy_ah = ehca_destroy_ah;
+ shca->ib_device.create_qp = ehca_create_qp;
+ shca->ib_device.modify_qp = ehca_modify_qp;
+ shca->ib_device.query_qp = ehca_query_qp;
+ shca->ib_device.destroy_qp = ehca_destroy_qp;
+ shca->ib_device.post_send = ehca_post_send;
+ shca->ib_device.post_recv = ehca_post_recv;
+ shca->ib_device.create_cq = ehca_create_cq;
+ shca->ib_device.destroy_cq = ehca_destroy_cq;
+
+ /* TODO: disabled due to func signature conflict */
+ /* shca->ib_device.resize_cq = ehca_resize_cq; */
+
+ shca->ib_device.poll_cq = ehca_poll_cq;
+ /* shca->ib_device.peek_cq = ehca_peek_cq; */
+ shca->ib_device.req_notify_cq = ehca_req_notify_cq;
+ /* shca->ib_device.req_ncomp_notif = ehca_req_ncomp_notif; */
+ shca->ib_device.get_dma_mr = ehca_get_dma_mr;
+ shca->ib_device.reg_phys_mr = ehca_reg_phys_mr;
+ shca->ib_device.reg_user_mr = ehca_reg_user_mr;
+ shca->ib_device.query_mr = ehca_query_mr;
+ shca->ib_device.dereg_mr = ehca_dereg_mr;
+ shca->ib_device.rereg_phys_mr = ehca_rereg_phys_mr;
+ shca->ib_device.alloc_mw = ehca_alloc_mw;
+ shca->ib_device.bind_mw = ehca_bind_mw;
+ shca->ib_device.dealloc_mw = ehca_dealloc_mw;
+ shca->ib_device.alloc_fmr = ehca_alloc_fmr;
+ shca->ib_device.map_phys_fmr = ehca_map_phys_fmr;
+ shca->ib_device.unmap_fmr = ehca_unmap_fmr;
+ shca->ib_device.dealloc_fmr = ehca_dealloc_fmr;
+ shca->ib_device.attach_mcast = ehca_attach_mcast;
+ shca->ib_device.detach_mcast = ehca_detach_mcast;
+ /* shca->ib_device.process_mad = ehca_process_mad; */
+ shca->ib_device.mmap = ehca_mmap;
+
+ ret = ib_register_device(&shca->ib_device);
+
+ EDEB_EX(7, "ret=%x", ret);
+
+ return ret;
+}
+
+/**
+ * ehca_create_aqp1 - create the CQ and QP1 (GSI QP) for the given port
+ * @shca: HCA context
+ * @port: port number (1-based)
+ */
+static int ehca_create_aqp1(struct ehca_shca *shca, u32 port)
+{
+ struct ehca_sport *sport;
+ struct ib_cq *ibcq;
+ struct ib_qp *ibqp;
+ struct ib_qp_init_attr qp_init_attr;
+ int ret = 0;
+
+ EDEB_EN(7, "shca=%p port=%x", shca, port);
+
+ sport = &shca->sport[port - 1];
+
+ if (sport->ibcq_aqp1 != NULL) {
+ EDEB_ERR(4, "AQP1 CQ is already created.");
+ return -EPERM;
+ }
+
+ ibcq = ib_create_cq(&shca->ib_device, NULL, NULL, (void*)(-1), 10);
+ if (IS_ERR(ibcq)) {
+ EDEB_ERR(4, "Cannot create AQP1 CQ.");
+ return PTR_ERR(ibcq);
+ }
+ sport->ibcq_aqp1 = ibcq;
+
+ if (sport->ibqp_aqp1 != NULL) {
+ EDEB_ERR(4, "AQP1 QP is already created.");
+ ret = -EPERM;
+ goto create_aqp1;
+ }
+
+ memset(&qp_init_attr, 0, sizeof(struct ib_qp_init_attr));
+ qp_init_attr.send_cq = ibcq;
+ qp_init_attr.recv_cq = ibcq;
+ qp_init_attr.sq_sig_type = IB_SIGNAL_ALL_WR;
+ qp_init_attr.cap.max_send_wr = 100;
+ qp_init_attr.cap.max_recv_wr = 100;
+ qp_init_attr.cap.max_send_sge = 2;
+ qp_init_attr.cap.max_recv_sge = 1;
+ qp_init_attr.qp_type = IB_QPT_GSI;
+ qp_init_attr.port_num = port;
+ qp_init_attr.qp_context = NULL;
+ qp_init_attr.event_handler = NULL;
+ qp_init_attr.srq = NULL;
+
+ ibqp = ib_create_qp(&shca->pd->ib_pd, &qp_init_attr);
+ if (IS_ERR(ibqp)) {
+ EDEB_ERR(4, "Cannot create AQP1 QP.");
+ ret = PTR_ERR(ibqp);
+ goto create_aqp1;
+ }
+ sport->ibqp_aqp1 = ibqp;
+
+ EDEB_EX(7, "ret=%x", ret);
+
+ return ret;
+
+ create_aqp1:
+ ib_destroy_cq(sport->ibcq_aqp1);
+
+ EDEB_EX(7, "ret=%x", ret);
+
+ return ret;
+}
+
+/**
+ * ehca_destroy_aqp1 - destroy the QP1 and its CQ for the given port
+ * @sport: port context holding ibqp_aqp1 and ibcq_aqp1
+ */
+static int ehca_destroy_aqp1(struct ehca_sport *sport)
+{
+ int ret = 0;
+
+ EDEB_EN(7, "sport=%p", sport);
+
+ ret = ib_destroy_qp(sport->ibqp_aqp1);
+ if (ret != 0) {
+ EDEB_ERR(4, "Cannot destroy AQP1 QP. ret=%x", ret);
+ goto destroy_aqp1;
+ }
+
+ ret = ib_destroy_cq(sport->ibcq_aqp1);
+ if (ret != 0)
+ EDEB_ERR(4, "Cannot destroy AQP1 CQ. ret=%x", ret);
+
+ destroy_aqp1:
+ EDEB_EX(7, "ret=%x", ret);
+
+ return ret;
+}
+
+static ssize_t ehca_show_debug_level(struct device_driver *ddp, char *buf)
+{
+ int f;
+ int total = 0;
+ total += snprintf(buf + total, PAGE_SIZE - total, "%d",
+ ehca_edeb_mask[0]);
+ for (f = 1; f < EHCA_EDEB_TRACE_MASK_SIZE; f++) {
+ total += snprintf(buf + total, PAGE_SIZE - total, ",%d",
+ ehca_edeb_mask[f]);
+ }
+
+ total += snprintf(buf + total, PAGE_SIZE - total, "\n");
+
+ return total;
+}
+
+static ssize_t ehca_store_debug_level(struct device_driver *ddp,
+ const char *buf, size_t count)
+{
+ int f;
+ for (f = 0; f < EHCA_EDEB_TRACE_MASK_SIZE; f++) {
+		int value = buf[f * 2] - '0';
+		if ((f * 2 < count) && (value >= 0) && (value <= 9)) {
+			ehca_edeb_mask[f] = value;
+		}
+ }
+ return count;
+}
+DRIVER_ATTR(debug_level, S_IRUSR | S_IWUSR,
+ ehca_show_debug_level, ehca_store_debug_level);
+
+static ssize_t ehca_show_flightrecorder(struct device_driver *ddp,
+ char *buf)
+{
+ /* this is not style compliant, but snprintf is not fast enough */
+ u64 *lbuf = (u64 *) buf;
+	lbuf[0] = (u64)&ehca_flightrecorder;
+ lbuf[1] = EHCA_FLIGHTRECORDER_SIZE;
+ lbuf[2] = atomic_read(&ehca_flightrecorder_index);
+ return sizeof(u64) * 3;
+}
+DRIVER_ATTR(flightrecorder, S_IRUSR, ehca_show_flightrecorder, 0);
+
+void ehca_create_driver_sysfs(struct ibmebus_driver *drv)
+{
+ driver_create_file(&drv->driver, &driver_attr_debug_level);
+ driver_create_file(&drv->driver, &driver_attr_flightrecorder);
+}
+
+void ehca_remove_driver_sysfs(struct ibmebus_driver *drv)
+{
+ driver_remove_file(&drv->driver, &driver_attr_debug_level);
+ driver_remove_file(&drv->driver, &driver_attr_flightrecorder);
+}
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,12)
+#define EHCA_RESOURCE_ATTR_H(name) \
+static ssize_t ehca_show_##name(struct device *dev, \
+ struct device_attribute *attr, \
+ char *buf)
+#else
+#define EHCA_RESOURCE_ATTR_H(name) \
+static ssize_t ehca_show_##name(struct device *dev, \
+ char *buf)
+#endif
+
+#define EHCA_RESOURCE_ATTR(name) \
+EHCA_RESOURCE_ATTR_H(name) \
+{ \
+ struct ehca_shca *shca; \
+ struct query_hca_rblock *rblock; \
+ int len; \
+ \
+ shca = dev->driver_data; \
+ \
+ rblock = kmalloc(PAGE_SIZE, GFP_KERNEL); \
+ if (rblock == NULL) { \
+ EDEB_ERR(4, "Can't allocate rblock memory."); \
+ return 0; \
+ } \
+ \
+ memset(rblock, 0, PAGE_SIZE); \
+ \
+ if (hipz_h_query_hca(shca->ipz_hca_handle, rblock) != H_Success) { \
+ EDEB_ERR(4, "Can't query device properties"); \
+ kfree(rblock); \
+ return 0; \
+ } \
+ \
+ if ((strcmp(#name, "num_ports") == 0) && (ehca_nr_ports == 1)) \
+ len = snprintf(buf, 256, "1"); \
+ else \
+ len = snprintf(buf, 256, "%d", rblock->name); \
+ \
+ if (len < 0) \
+ return 0; \
+ buf[len] = '\n'; \
+ buf[len+1] = 0; \
+ \
+ kfree(rblock); \
+ \
+ return len+1; \
+} \
+static DEVICE_ATTR(name, S_IRUGO, ehca_show_##name, NULL);
+
+EHCA_RESOURCE_ATTR(num_ports);
+EHCA_RESOURCE_ATTR(hw_ver);
+EHCA_RESOURCE_ATTR(max_eq);
+EHCA_RESOURCE_ATTR(cur_eq);
+EHCA_RESOURCE_ATTR(max_cq);
+EHCA_RESOURCE_ATTR(cur_cq);
+EHCA_RESOURCE_ATTR(max_qp);
+EHCA_RESOURCE_ATTR(cur_qp);
+EHCA_RESOURCE_ATTR(max_mr);
+EHCA_RESOURCE_ATTR(cur_mr);
+EHCA_RESOURCE_ATTR(max_mw);
+EHCA_RESOURCE_ATTR(cur_mw);
+EHCA_RESOURCE_ATTR(max_pd);
+EHCA_RESOURCE_ATTR(max_ah);
+
+static ssize_t ehca_show_adapter_handle(struct device *dev,
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,12)
+ struct device_attribute *attr,
+#endif
+ char *buf)
+{
+ struct ehca_shca *shca = dev->driver_data;
+
+ return sprintf(buf, "%lx\n", shca->ipz_hca_handle.handle);
+
+}
+static DEVICE_ATTR(adapter_handle, S_IRUGO, ehca_show_adapter_handle, NULL);
+
+
+
+void ehca_create_device_sysfs(struct ibmebus_dev *dev)
+{
+ device_create_file(&dev->ofdev.dev, &dev_attr_adapter_handle);
+ device_create_file(&dev->ofdev.dev, &dev_attr_num_ports);
+ device_create_file(&dev->ofdev.dev, &dev_attr_hw_ver);
+ device_create_file(&dev->ofdev.dev, &dev_attr_max_eq);
+ device_create_file(&dev->ofdev.dev, &dev_attr_cur_eq);
+ device_create_file(&dev->ofdev.dev, &dev_attr_max_cq);
+ device_create_file(&dev->ofdev.dev, &dev_attr_cur_cq);
+ device_create_file(&dev->ofdev.dev, &dev_attr_max_qp);
+ device_create_file(&dev->ofdev.dev, &dev_attr_cur_qp);
+ device_create_file(&dev->ofdev.dev, &dev_attr_max_mr);
+ device_create_file(&dev->ofdev.dev, &dev_attr_cur_mr);
+ device_create_file(&dev->ofdev.dev, &dev_attr_max_mw);
+ device_create_file(&dev->ofdev.dev, &dev_attr_cur_mw);
+ device_create_file(&dev->ofdev.dev, &dev_attr_max_pd);
+ device_create_file(&dev->ofdev.dev, &dev_attr_max_ah);
+}
+
+void ehca_remove_device_sysfs(struct ibmebus_dev *dev)
+{
+ device_remove_file(&dev->ofdev.dev, &dev_attr_adapter_handle);
+ device_remove_file(&dev->ofdev.dev, &dev_attr_num_ports);
+ device_remove_file(&dev->ofdev.dev, &dev_attr_hw_ver);
+ device_remove_file(&dev->ofdev.dev, &dev_attr_max_eq);
+ device_remove_file(&dev->ofdev.dev, &dev_attr_cur_eq);
+ device_remove_file(&dev->ofdev.dev, &dev_attr_max_cq);
+ device_remove_file(&dev->ofdev.dev, &dev_attr_cur_cq);
+ device_remove_file(&dev->ofdev.dev, &dev_attr_max_qp);
+ device_remove_file(&dev->ofdev.dev, &dev_attr_cur_qp);
+ device_remove_file(&dev->ofdev.dev, &dev_attr_max_mr);
+ device_remove_file(&dev->ofdev.dev, &dev_attr_cur_mr);
+ device_remove_file(&dev->ofdev.dev, &dev_attr_max_mw);
+ device_remove_file(&dev->ofdev.dev, &dev_attr_cur_mw);
+ device_remove_file(&dev->ofdev.dev, &dev_attr_max_pd);
+ device_remove_file(&dev->ofdev.dev, &dev_attr_max_ah);
+}
+
+/**
+ * ehca_probe - initialize an eHCA adapter found on the ibmebus
+ * @dev: the ibmebus device
+ * @id: matched entry of the device table
+ */
+static int __devinit ehca_probe(struct ibmebus_dev *dev,
+ const struct of_device_id *id)
+{
+ struct ehca_shca *shca;
+ u64 *handle;
+ struct ib_pd *ibpd;
+ int ret = 0;
+
+ EDEB_EN(7, "name=%s", dev->name);
+
+ handle = (u64 *)get_property(dev->ofdev.node, "ibm,hca-handle", NULL);
+ if (!handle) {
+ EDEB_ERR(4, "Cannot get eHCA handle for adapter: %s.",
+ dev->ofdev.node->full_name);
+ return -ENODEV;
+ }
+
+ if (!(*handle)) {
+ EDEB_ERR(4, "Wrong eHCA handle for adapter: %s.",
+ dev->ofdev.node->full_name);
+ return -ENODEV;
+ }
+
+ shca = (struct ehca_shca *)ib_alloc_device(sizeof(*shca));
+ if (shca == NULL) {
+ EDEB_ERR(4, "Cannot allocate shca memory.");
+ return -ENOMEM;
+ }
+
+ shca->ibmebus_dev = dev;
+ shca->ipz_hca_handle.handle = *handle;
+ dev->ofdev.dev.driver_data = shca;
+
+ ret = ehca_sense_attributes(shca);
+ if (ret < 0) {
+ EDEB_ERR(4, "Cannot sense eHCA attributes.");
+ goto probe1;
+ }
+
+ /* create event queues */
+ ret = ehca_create_eq(shca, &shca->eq, EHCA_EQ, 2048);
+ if (ret != 0) {
+ EDEB_ERR(4, "Cannot create EQ.");
+ goto probe1;
+ }
+
+ ret = ehca_create_eq(shca, &shca->neq, EHCA_NEQ, 513);
+ if (ret != 0) {
+ EDEB_ERR(4, "Cannot create NEQ.");
+ goto probe2;
+ }
+
+ /* create internal protection domain */
+ ibpd = ehca_alloc_pd(&shca->ib_device, (void*)(-1), 0);
+ if (IS_ERR(ibpd)) {
+ EDEB_ERR(4, "Cannot create internal PD.");
+ ret = PTR_ERR(ibpd);
+ goto probe3;
+ }
+
+ shca->pd = container_of(ibpd, struct ehca_pd, ib_pd);
+ shca->pd->ib_pd.device = &shca->ib_device;
+
+ /* create internal max MR */
+ if (shca->maxmr == 0) {
+		struct ehca_mr *e_maxmr = NULL;
+ ret = ehca_reg_internal_maxmr(shca, shca->pd, &e_maxmr);
+ if (ret != 0) {
+ EDEB_ERR(4, "Cannot create internal MR. ret=%x", ret);
+ goto probe4;
+ }
+ shca->maxmr = e_maxmr;
+ }
+
+ ret = ehca_register_device(shca);
+ if (ret != 0) {
+ EDEB_ERR(4, "Cannot register Infiniband device.");
+ goto probe5;
+ }
+
+ /* create AQP1 for port 1 */
+ if (ehca_open_aqp1 == 1) {
+ shca->sport[0].port_state = IB_PORT_DOWN;
+ ret = ehca_create_aqp1(shca, 1);
+ if (ret != 0) {
+ EDEB_ERR(4, "Cannot create AQP1 for port 1.");
+ goto probe6;
+ }
+ }
+
+ /* create AQP1 for port 2 */
+ if ((ehca_open_aqp1 == 1) && (shca->num_ports == 2)) {
+ shca->sport[1].port_state = IB_PORT_DOWN;
+ ret = ehca_create_aqp1(shca, 2);
+ if (ret != 0) {
+ EDEB_ERR(4, "Cannot create AQP1 for port 2.");
+ goto probe7;
+ }
+ }
+
+ ehca_create_device_sysfs(dev);
+
+ spin_lock(&ehca_module.shca_lock);
+ list_add(&shca->shca_list, &ehca_module.shca_list);
+ spin_unlock(&ehca_module.shca_lock);
+
+ EDEB_EX(7, "ret=%x", ret);
+
+ return 0;
+
+ probe7:
+ ret = ehca_destroy_aqp1(&shca->sport[0]);
+ if (ret != 0)
+ EDEB_ERR(4, "Cannot destroy AQP1 for port 1. ret=%x", ret);
+
+ probe6:
+ ib_unregister_device(&shca->ib_device);
+
+ probe5:
+ ret = ehca_dereg_internal_maxmr(shca);
+ if (ret != 0)
+ EDEB_ERR(4, "Cannot destroy internal MR. ret=%x", ret);
+
+ probe4:
+ ret = ehca_dealloc_pd(&shca->pd->ib_pd);
+ if (ret != 0)
+ EDEB_ERR(4, "Cannot destroy internal PD. ret=%x", ret);
+
+ probe3:
+ ret = ehca_destroy_eq(shca, &shca->neq);
+ if (ret != 0)
+ EDEB_ERR(4, "Cannot destroy NEQ. ret=%x", ret);
+
+ probe2:
+ ret = ehca_destroy_eq(shca, &shca->eq);
+ if (ret != 0)
+ EDEB_ERR(4, "Cannot destroy EQ. ret=%x", ret);
+
+ probe1:
+ ib_dealloc_device(&shca->ib_device);
+
+ EDEB_EX(4, "ret=%x", ret);
+
+ return -EINVAL;
+}
+
+static int __devexit ehca_remove(struct ibmebus_dev *dev)
+{
+ struct ehca_shca *shca = dev->ofdev.dev.driver_data;
+ int ret;
+
+ EDEB_EN(7, "shca=%p", shca);
+
+ ehca_remove_device_sysfs(dev);
+
+ if (ehca_open_aqp1 == 1) {
+ int i;
+
+ for (i = 0; i < shca->num_ports; i++) {
+ ret = ehca_destroy_aqp1(&shca->sport[i]);
+ if (ret != 0)
+				EDEB_ERR(4, "Cannot destroy AQP1 for port %x."
+					 " ret=%x", i + 1, ret);
+ }
+ }
+
+ ib_unregister_device(&shca->ib_device);
+
+ ret = ehca_dereg_internal_maxmr(shca);
+ if (ret != 0)
+ EDEB_ERR(4, "Cannot destroy internal MR. ret=%x", ret);
+
+ ret = ehca_dealloc_pd(&shca->pd->ib_pd);
+ if (ret != 0)
+ EDEB_ERR(4, "Cannot destroy internal PD. ret=%x", ret);
+
+ ret = ehca_destroy_eq(shca, &shca->eq);
+ if (ret != 0)
+ EDEB_ERR(4, "Cannot destroy EQ. ret=%x", ret);
+
+ ret = ehca_destroy_eq(shca, &shca->neq);
+ if (ret != 0)
+		EDEB_ERR(4, "Cannot destroy NEQ. ret=%x", ret);
+
+ ib_dealloc_device(&shca->ib_device);
+
+ spin_lock(&ehca_module.shca_lock);
+ list_del(&shca->shca_list);
+ spin_unlock(&ehca_module.shca_lock);
+
+ EDEB_EX(7, "ret=%x", ret);
+
+ return ret;
+}
+
+static struct of_device_id ehca_device_table[] =
+{
+ {
+ .name = "lhca",
+ .compatible = "IBM,lhca",
+ },
+ {},
+};
+
+static struct ibmebus_driver ehca_driver = {
+ .name = "ehca",
+ .id_table = ehca_device_table,
+ .probe = ehca_probe,
+ .remove = ehca_remove,
+};
+
+/**
+ * ehca_module_init - eHCA initialization routine.
+ */
+int __init ehca_module_init(void)
+{
+ int ret = 0;
+
+ printk(KERN_INFO "eHCA Infiniband Device Driver "
+ "(Rel.: EHCA2_0047)\n");
+ EDEB_EN(7, "");
+
+ idr_init(&ehca_qp_idr);
+ idr_init(&ehca_cq_idr);
+
+ INIT_LIST_HEAD(&ehca_module.shca_list);
+ spin_lock_init(&ehca_module.shca_lock);
+
+ ehca_init_trace();
+ ehca_init_flight();
+
+ ehca_wq = create_workqueue("ehca");
+ if (ehca_wq == NULL) {
+ EDEB_ERR(4, "Cannot create workqueue.");
+ ret = -ENOMEM;
+ goto module_init0;
+ }
+
+ if ((ret = ehca_caches_create(&ehca_module)) != 0) {
+ ehca_catastrophic("Cannot create SLAB caches");
+ ret = -ENOMEM;
+ goto module_init1;
+ }
+
+ if ((ret = ibmebus_register_driver(&ehca_driver)) != 0) {
+ ehca_catastrophic("Cannot register eHCA device driver");
+ ret = -EINVAL;
+ goto module_init2;
+ }
+
+ ehca_create_driver_sysfs(&ehca_driver);
+
+ if (ehca_poll_all_eqs != 1) {
+ EDEB_ERR(4, "WARNING!!!");
+ EDEB_ERR(4, "It is possible to lose interrupts.");
+
+ return 0;
+ }
+
+ ehca_kthread_eq = kthread_create(ehca_poll_eqs, &ehca_module,
+ "ehca_poll_eqs");
+ if (IS_ERR(ehca_kthread_eq)) {
+ EDEB_ERR(4, "Cannot create kthread_eq");
+ ret = PTR_ERR(ehca_kthread_eq);
+ goto module_init3;
+ }
+
+ wake_up_process(ehca_kthread_eq);
+
+ EDEB_EX(7, "ret=%x", ret);
+
+ return 0;
+
+ module_init3:
+ ehca_remove_driver_sysfs(&ehca_driver);
+ ibmebus_unregister_driver(&ehca_driver);
+
+ module_init2:
+ ehca_caches_destroy(&ehca_module);
+
+ module_init1:
+ destroy_workqueue(ehca_wq);
+
+ module_init0:
+ EDEB_EX(7, "ret=%x", ret);
+
+ return ret;
+}
+
+/**
+ * ehca_module_exit - eHCA exit routine.
+ */
+void __exit ehca_module_exit(void)
+{
+ EDEB_EN(7, "");
+
+ if (ehca_poll_all_eqs == 1)
+ kthread_stop(ehca_kthread_eq);
+
+ ehca_remove_driver_sysfs(&ehca_driver);
+ ibmebus_unregister_driver(&ehca_driver);
+
+ if (ehca_caches_destroy(&ehca_module) != 0)
+ ehca_catastrophic("Cannot destroy SLAB caches");
+
+ destroy_workqueue(ehca_wq);
+
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)
+ idr_destroy_ext(&ehca_cq_idr);
+ idr_destroy_ext(&ehca_qp_idr);
+#else
+ idr_destroy(&ehca_cq_idr);
+ idr_destroy(&ehca_qp_idr);
+#endif
+
+ EDEB_EX(7, "");
+}
+
+module_init(ehca_module_init);
+module_exit(ehca_module_exit);

2006-02-18 01:04:53

by Roland Dreier

Subject: [PATCH 07/22] Hypercall definitions

From: Roland Dreier <[email protected]>

Do these defines belong in the ehca driver, or should they be put
somewhere in generic hypercall support?
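For what it's worth, ehca_cq.c later translates these firmware codes
into errno values via ehca2ib_return_code(). A standalone sketch of
that kind of mapping (the H_* values are copied from ehca_common.h;
the driver's actual mapping may well differ, so treat this purely as
an illustration):

```c
#include <errno.h>

/* Hypothetical sketch of a firmware-to-errno translation along the
 * lines of the driver's ehca2ib_return_code().  The H_* values mirror
 * the defines in ehca_common.h below. */
#define H_Success                0
#define H_Busy                   1
#define H_ADAPTER_PARM         -17
#define H_NOT_ENOUGH_RESOURCES -44

static int ehca2ib_return_code(long rc)
{
        switch (rc) {
        case H_Success:
                return 0;
        case H_Busy:
                return -EBUSY;   /* transient: caller may retry */
        case H_NOT_ENOUGH_RESOURCES:
                return -ENOMEM;  /* firmware out of resources */
        default:
                return -EINVAL;  /* parameter errors and the rest */
        }
}
```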
---

drivers/infiniband/hw/ehca/ehca_common.h | 115 ++++++++++++++++++++++++++++++
1 files changed, 115 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_common.h b/drivers/infiniband/hw/ehca/ehca_common.h
new file mode 100644
index 0000000..922f010
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_common.h
@@ -0,0 +1,115 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * hcad local defines
+ *
+ * Authors: Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_common.h,v 1.15 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef __EHCA_COMMON_H__
+#define __EHCA_COMMON_H__
+
+#ifdef CONFIG_PPC64
+#include <asm/hvcall.h>
+
+#define H_PARTIAL_STORE 16
+#define H_PAGE_REGISTERED 15
+#define H_IN_PROGRESS 14
+#define H_PARTIAL 5
+#define H_NOT_AVAILABLE 3
+#define H_Closed 2
+#define H_ADAPTER_PARM -17
+#define H_RH_PARM -18
+#define H_RCQ_PARM -19
+#define H_SCQ_PARM -20
+#define H_EQ_PARM -21
+#define H_RT_PARM -22
+#define H_ST_PARM -23
+#define H_SIGT_PARM -24
+#define H_TOKEN_PARM -25
+#define H_MLENGTH_PARM -27
+#define H_MEM_PARM -28
+#define H_MEM_ACCESS_PARM -29
+#define H_ATTR_PARM -30
+#define H_PORT_PARM -31
+#define H_MCG_PARM -32
+#define H_VL_PARM -33
+#define H_TSIZE_PARM -34
+#define H_TRACE_PARM -35
+
+#define H_MASK_PARM -37
+#define H_MCG_FULL -38
+#define H_ALIAS_EXIST -39
+#define H_P_COUNTER -40
+#define H_TABLE_FULL -41
+#define H_ALT_TABLE -42
+#define H_MR_CONDITION -43
+#define H_NOT_ENOUGH_RESOURCES -44
+#define H_R_STATE -45
+#define H_RESCINDEND -46
+
+/* H call defines to be moved to kernel */
+#define H_RESET_EVENTS 0x15C
+#define H_ALLOC_RESOURCE 0x160
+#define H_FREE_RESOURCE 0x164
+#define H_MODIFY_QP 0x168
+#define H_QUERY_QP 0x16C
+#define H_REREGISTER_PMR 0x170
+#define H_REGISTER_SMR 0x174
+#define H_QUERY_MR 0x178
+#define H_QUERY_MW 0x17C
+#define H_QUERY_HCA 0x180
+#define H_QUERY_PORT 0x184
+#define H_MODIFY_PORT 0x188
+#define H_DEFINE_AQP1 0x18C
+#define H_GET_TRACE_BUFFER 0x190
+#define H_DEFINE_AQP0 0x194
+#define H_RESIZE_MR 0x198
+#define H_ATTACH_MCQP 0x19C
+#define H_DETACH_MCQP 0x1A0
+#define H_CREATE_RPT 0x1A4
+#define H_REMOVE_RPT 0x1A8
+#define H_REGISTER_RPAGES 0x1AC
+#define H_DISABLE_AND_GETC 0x1B0
+#define H_ERROR_DATA 0x1B4
+#define H_GET_HCA_INFO 0x1B8
+#define H_GET_PERF_COUNT 0x1BC
+#define H_MANAGE_TRACE 0x1C0
+#define H_QUERY_INT_STATE 0x1E4
+#endif
+
+#endif /* __EHCA_COMMON_H__ */

2006-02-18 00:58:13

by Roland Dreier

Subject: [PATCH 03/22] pHype specific stuff

From: Roland Dreier <[email protected]>

It's not clear what the connection between hcp_phyp.c and hcp_phyp.h
really is -- they don't seem to be very closely related.

Again, hcp_phyp.h has some rather large functions that belong in
a .c file and maybe shouldn't be inlined (although maybe the
generated assembly ends up being small because it's just
fiddling registers around).
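One of those helpers, getLongBusyTimeSecs(), maps the firmware's
"long busy" return codes to a retry delay -- and note that despite the
name, callers feed the result straight into msleep_interruptible(), so
the unit is really milliseconds. A standalone sketch of that mapping
(the numeric H_LongBusyOrder* values are an assumption here; on
pSeries they come from asm/hvcall.h):

```c
/* Sketch of hcp_phyp.h's getLongBusyTimeSecs(), renamed to reflect
 * that the return value is used as a millisecond sleep interval.
 * The H_LongBusyOrder* numeric values below are assumed, standing in
 * for the asm/hvcall.h definitions. */
#define H_LongBusyOrder1msec   9900
#define H_LongBusyOrder10msec  9901
#define H_LongBusyOrder100msec 9902
#define H_LongBusyOrder1sec    9903
#define H_LongBusyOrder10sec   9904
#define H_LongBusyOrder100sec  9905

static unsigned int get_longbusy_msecs(int rc)
{
        switch (rc) {
        case H_LongBusyOrder1msec:   return 1;
        case H_LongBusyOrder10msec:  return 10;
        case H_LongBusyOrder100msec: return 100;
        case H_LongBusyOrder1sec:    return 1000;
        case H_LongBusyOrder10sec:   return 10000;
        case H_LongBusyOrder100sec:  return 100000;
        default:                     return 1;  /* unknown: retry soon */
        }
}
```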

For a change, hipz_galpa_load() and hipz_galpa_store() actually
look simple enough that they could probably become inline functions
in a header (and just kill hcp_phyp.c). This would also make the
comments about them being inline in ehca_galpa.h true.
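A minimal sketch of what those inline versions might look like, with a
plain in-memory buffer standing in for the ioremap()ed register page
(purely illustrative -- real MMIO access needs the proper accessors
and barriers):

```c
#include <stdint.h>

typedef uint64_t u64;
typedef uint32_t u32;

struct h_galpa {
        u64 fw_handle;  /* kernel-virtual base of the mapped page */
};

/* Inline versions of hipz_galpa_load()/hipz_galpa_store(): in the
 * driver fw_handle points at an ioremap()ed eHCA register page; here
 * any 8-byte-aligned buffer can stand in for it. */
static inline u64 hipz_galpa_load(struct h_galpa galpa, u32 offset)
{
        return *(volatile u64 *)(uintptr_t)(galpa.fw_handle + offset);
}

static inline void hipz_galpa_store(struct h_galpa galpa, u32 offset,
                                    u64 value)
{
        *(volatile u64 *)(uintptr_t)(galpa.fw_handle + offset) = value;
}
```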

Is ehca_galpa.h needed at all, or can it be folded into another
file? Why is its abstraction needed?
---

drivers/infiniband/hw/ehca/ehca_galpa.h | 74 +++++++
drivers/infiniband/hw/ehca/hcp_phyp.c | 81 +++++++
drivers/infiniband/hw/ehca/hcp_phyp.h | 338 +++++++++++++++++++++++++++++++
3 files changed, 493 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_galpa.h b/drivers/infiniband/hw/ehca/ehca_galpa.h
new file mode 100644
index 0000000..d64115c
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_galpa.h
@@ -0,0 +1,74 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * pSeries interface definitions
+ *
+ * Authors: Waleri Fomin <[email protected]>
+ * Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_galpa.h,v 1.6 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef __EHCA_GALPA_H__
+#define __EHCA_GALPA_H__
+
+/* eHCA page (mapped into p-memory)
+ resource to access eHCA register pages in CPU address space
+*/
+struct h_galpa {
+ u64 fw_handle;
+ /* for pSeries this is a 64bit memory address where
+ I/O memory is mapped into CPU address space (kv) */
+};
+
+/**
+ resource to access eHCA address space registers, all types
+*/
+struct h_galpas {
+ u32 pid; /*PID of userspace galpa checking */
+ struct h_galpa user; /* user space accessible resource,
+ set to 0 if unused */
+ struct h_galpa kernel; /* kernel space accessible resource,
+ set to 0 if unused */
+};
+/** @brief store value at offset into galpa, will be inline function
+ */
+void hipz_galpa_store(struct h_galpa galpa, u32 offset, u64 value);
+
+/** @brief return value from offset in galpa, will be inline function
+ */
+u64 hipz_galpa_load(struct h_galpa galpa, u32 offset);
+
+#endif /* __EHCA_GALPA_H__ */
diff --git a/drivers/infiniband/hw/ehca/hcp_phyp.c b/drivers/infiniband/hw/ehca/hcp_phyp.c
new file mode 100644
index 0000000..129e61b
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/hcp_phyp.c
@@ -0,0 +1,81 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * load store abstraction for ehca register access
+ *
+ * Authors: Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: hcp_phyp.c,v 1.10 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+
+#define DEB_PREFIX "PHYP"
+
+#ifdef __KERNEL__
+#include "ehca_kernel.h"
+#include "hipz_hw.h"
+/* #include "hipz_structs.h" */
+/* TODO: still necessary */
+#include "ehca_classes.h"
+#else /* !__KERNEL__ */
+#include "ehca_utools.h"
+#include "ehca_galpa.h"
+#endif
+
+#ifndef EHCA_USERDRIVER /* TODO: is this correct */
+
+u64 hipz_galpa_load(struct h_galpa galpa, u32 offset)
+{
+ u64 addr = galpa.fw_handle + offset;
+ u64 out;
+ EDEB_EN(7, "addr=%lx offset=%x ", addr, offset);
+ out = *(u64 *) addr;
+ EDEB_EX(7, "addr=%lx value=%lx", addr, out);
+ return out;
+}
+
+void hipz_galpa_store(struct h_galpa galpa, u32 offset, u64 value)
+{
+ u64 addr = galpa.fw_handle + offset;
+ EDEB(7, "addr=%lx offset=%x value=%lx", addr,
+ offset, value);
+ *(u64 *) addr = value;
+#ifdef EHCA_USE_HCALL
+ /* hipz_galpa_load(galpa, offset); */
+ /* synchronize explicitly */
+#endif
+}
+
+#endif /* EHCA_USERDRIVER */
diff --git a/drivers/infiniband/hw/ehca/hcp_phyp.h b/drivers/infiniband/hw/ehca/hcp_phyp.h
new file mode 100644
index 0000000..c82fb4b
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/hcp_phyp.h
@@ -0,0 +1,338 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * Firmware calls
+ *
+ * Authors: Christoph Raisch <[email protected]>
+ * Waleri Fomin <[email protected]>
+ * Gerd Bayer <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: hcp_phyp.h,v 1.16 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef __HCP_PHYP_H__
+#define __HCP_PHYP_H__
+
+#ifndef EHCA_USERDRIVER
+inline static int hcall_map_page(u64 physaddr, u64 * mapaddr)
+{
+ *mapaddr = (u64)(ioremap(physaddr, 4096));
+
+ EDEB(7, "ioremap physaddr=%lx mapaddr=%lx", physaddr, *mapaddr);
+ return 0;
+}
+
+inline static int hcall_unmap_page(u64 mapaddr)
+{
+ EDEB(7, "mapaddr=%lx", mapaddr);
+ iounmap((void *)(mapaddr));
+ return 0;
+}
+#else
+int hcall_map_page(u64 physaddr, u64 * mapaddr);
+int hcall_unmap_page(u64 mapaddr);
+#endif
+
+struct hcall {
+ u64 regs[11];
+};
+
+/**
+ * @brief returns time to wait in secs for the given long busy error code
+ */
+inline static u32 getLongBusyTimeSecs(int longBusyRetCode)
+{
+ switch (longBusyRetCode) {
+ case H_LongBusyOrder1msec:
+ return 1;
+ case H_LongBusyOrder10msec:
+ return 10;
+ case H_LongBusyOrder100msec:
+ return 100;
+ case H_LongBusyOrder1sec:
+ return 1000;
+ case H_LongBusyOrder10sec:
+ return 10000;
+ case H_LongBusyOrder100sec:
+ return 100000;
+ default:
+ return 1;
+ } /* eof switch */
+}
+
+inline static long plpar_hcall_7arg_7ret(unsigned long opcode,
+ unsigned long arg1, /* <R4 */
+ unsigned long arg2, /* <R5 */
+ unsigned long arg3, /* <R6 */
+ unsigned long arg4, /* <R7 */
+ unsigned long arg5, /* <R8 */
+ unsigned long arg6, /* <R9 */
+ unsigned long arg7, /* <R10 */
+ unsigned long *out1, /* <R4 */
+ unsigned long *out2, /* <R5 */
+ unsigned long *out3, /* <R6 */
+ unsigned long *out4, /* <R7 */
+ unsigned long *out5, /* <R8 */
+ unsigned long *out6, /* <R9 */
+ unsigned long *out7 /* <R10 */
+ )
+{
+ struct hcall hcall_in = {
+ .regs[0] = opcode,
+ .regs[1] = arg1,
+ .regs[2] = arg2,
+ .regs[3] = arg3,
+ .regs[4] = arg4,
+ .regs[5] = arg5,
+ .regs[6] = arg6,
+ .regs[7] = arg7 /*,
+ .regs[8]=arg8 */
+ };
+ struct hcall hcall = hcall_in;
+ int i;
+ long ret;
+ int sleep_msecs;
+ EDEB(7, "HCALL77_IN r3=%lx r4=%lx r5=%lx r6=%lx r7=%lx r8=%lx"
+ " r9=%lx r10=%lx r11=%lx", hcall.regs[0], hcall.regs[1],
+ hcall.regs[2], hcall.regs[3], hcall.regs[4], hcall.regs[5],
+ hcall.regs[6], hcall.regs[7], hcall.regs[8]);
+
+ /* if phype returns LongBusyXXX,
+ * we retry several times, but not forever */
+ for (i = 0; i < 5; i++) {
+ __asm__ __volatile__("mr 3,%10\n"
+ "mr 4,%11\n"
+ "mr 5,%12\n"
+ "mr 6,%13\n"
+ "mr 7,%14\n"
+ "mr 8,%15\n"
+ "mr 9,%16\n"
+ "mr 10,%17\n"
+ "mr 11,%18\n"
+ "mr 12,%19\n"
+ ".long 0x44000022\n"
+ "mr %0,3\n"
+ "mr %1,4\n"
+ "mr %2,5\n"
+ "mr %3,6\n"
+ "mr %4,7\n"
+ "mr %5,8\n"
+ "mr %6,9\n"
+ "mr %7,10\n"
+ "mr %8,11\n"
+ "mr %9,12\n":"=r"(hcall.regs[0]),
+ "=r"(hcall.regs[1]), "=r"(hcall.regs[2]),
+ "=r"(hcall.regs[3]), "=r"(hcall.regs[4]),
+ "=r"(hcall.regs[5]), "=r"(hcall.regs[6]),
+ "=r"(hcall.regs[7]), "=r"(hcall.regs[8]),
+ "=r"(hcall.regs[9])
+ :"r"(hcall.regs[0]), "r"(hcall.regs[1]),
+ "r"(hcall.regs[2]), "r"(hcall.regs[3]),
+ "r"(hcall.regs[4]), "r"(hcall.regs[5]),
+ "r"(hcall.regs[6]), "r"(hcall.regs[7]),
+ "r"(hcall.regs[8]), "r"(hcall.regs[9])
+ :"r0", "r2", "r3", "r4", "r5", "r6", "r7",
+ "r8", "r9", "r10", "r11", "r12", "cc",
+ "xer", "ctr", "lr", "cr0", "cr1", "cr5",
+ "cr6", "cr7");
+
+ EDEB(7, "HCALL77_OUT r3=%lx r4=%lx r5=%lx r6=%lx r7=%lx r8=%lx"
+ "r9=%lx r10=%lx r11=%lx", hcall.regs[0], hcall.regs[1],
+ hcall.regs[2], hcall.regs[3], hcall.regs[4], hcall.regs[5],
+ hcall.regs[6], hcall.regs[7], hcall.regs[8]);
+ ret = hcall.regs[0];
+ *out1 = hcall.regs[1];
+ *out2 = hcall.regs[2];
+ *out3 = hcall.regs[3];
+ *out4 = hcall.regs[4];
+ *out5 = hcall.regs[5];
+ *out6 = hcall.regs[6];
+ *out7 = hcall.regs[7];
+
+ if (!H_isLongBusy(ret)) {
+ if (ret<0) {
+ EDEB_ERR(4, "HCALL77_IN r3=%lx r4=%lx r5=%lx r6=%lx "
+ "r7=%lx r8=%lx r9=%lx r10=%lx",
+ opcode, arg1, arg2, arg3,
+ arg4, arg5, arg6, arg7);
+ EDEB_ERR(4,
+ "HCALL77_OUT r3=%lx r4=%lx r5=%lx "
+ "r6=%lx r7=%lx r8=%lx r9=%lx r10=%lx ",
+ hcall.regs[0], hcall.regs[1],
+ hcall.regs[2], hcall.regs[3],
+ hcall.regs[4], hcall.regs[5],
+ hcall.regs[6], hcall.regs[7]);
+ }
+ return ret;
+ }
+
+ sleep_msecs = getLongBusyTimeSecs(ret);
+ EDEB(7, "Got LongBusy return code from phype. "
+ "Sleep %dmsecs and retry...", sleep_msecs);
+ msleep_interruptible(sleep_msecs);
+ hcall = hcall_in;
+ } /* eof for */
+ EDEB_ERR(4, "HCALL77_OUT ret=H_Busy");
+ return H_Busy;
+}
+
+inline static long plpar_hcall_9arg_9ret(unsigned long opcode,
+ unsigned long arg1, /* <R4 */
+ unsigned long arg2, /* <R5 */
+ unsigned long arg3, /* <R6 */
+ unsigned long arg4, /* <R7 */
+ unsigned long arg5, /* <R8 */
+ unsigned long arg6, /* <R9 */
+ unsigned long arg7, /* <R10 */
+ unsigned long arg8, /* <R11 */
+ unsigned long arg9, /* <R12 */
+ unsigned long *out1, /* <R4 */
+ unsigned long *out2, /* <R5 */
+ unsigned long *out3, /* <R6 */
+ unsigned long *out4, /* <R7 */
+ unsigned long *out5, /* <R8 */
+ unsigned long *out6, /* <R9 */
+ unsigned long *out7, /* <R10 */
+ unsigned long *out8, /* <R11 */
+ unsigned long *out9 /* <R12 */
+ )
+{
+ struct hcall hcall_in = {
+ .regs[0] = opcode,
+ .regs[1] = arg1,
+ .regs[2] = arg2,
+ .regs[3] = arg3,
+ .regs[4] = arg4,
+ .regs[5] = arg5,
+ .regs[6] = arg6,
+ .regs[7] = arg7,
+ .regs[8] = arg8,
+ .regs[9] = arg9,
+ };
+ struct hcall hcall = hcall_in;
+ int i;
+ long ret;
+ int sleep_msecs;
+ EDEB(7,"HCALL99_IN r3=%lx r4=%lx r5=%lx r6=%lx r7=%lx r8=%lx r9=%lx"
+ " r10=%lx r11=%lx r12=%lx",
+ hcall.regs[0], hcall.regs[1], hcall.regs[2], hcall.regs[3],
+ hcall.regs[4], hcall.regs[5], hcall.regs[6], hcall.regs[7],
+ hcall.regs[8], hcall.regs[9]);
+
+ /* if phype returns LongBusyXXX, we retry several times, but not forever */
+ for (i = 0; i < 5; i++) {
+ __asm__ __volatile__("mr 3,%10\n"
+ "mr 4,%11\n"
+ "mr 5,%12\n"
+ "mr 6,%13\n"
+ "mr 7,%14\n"
+ "mr 8,%15\n"
+ "mr 9,%16\n"
+ "mr 10,%17\n"
+ "mr 11,%18\n"
+ "mr 12,%19\n"
+ ".long 0x44000022\n"
+ "mr %0,3\n"
+ "mr %1,4\n"
+ "mr %2,5\n"
+ "mr %3,6\n"
+ "mr %4,7\n"
+ "mr %5,8\n"
+ "mr %6,9\n"
+ "mr %7,10\n"
+ "mr %8,11\n"
+ "mr %9,12\n":"=r"(hcall.regs[0]),
+ "=r"(hcall.regs[1]), "=r"(hcall.regs[2]),
+ "=r"(hcall.regs[3]), "=r"(hcall.regs[4]),
+ "=r"(hcall.regs[5]), "=r"(hcall.regs[6]),
+ "=r"(hcall.regs[7]), "=r"(hcall.regs[8]),
+ "=r"(hcall.regs[9])
+ :"r"(hcall.regs[0]), "r"(hcall.regs[1]),
+ "r"(hcall.regs[2]), "r"(hcall.regs[3]),
+ "r"(hcall.regs[4]), "r"(hcall.regs[5]),
+ "r"(hcall.regs[6]), "r"(hcall.regs[7]),
+ "r"(hcall.regs[8]), "r"(hcall.regs[9])
+ :"r0", "r2", "r3", "r4", "r5", "r6", "r7",
+ "r8", "r9", "r10", "r11", "r12", "cc",
+ "xer", "ctr", "lr", "cr0", "cr1", "cr5",
+ "cr6", "cr7");
+
+ EDEB(7,"HCALL99_OUT r3=%lx r4=%lx r5=%lx r6=%lx r7=%lx r8=%lx "
+ "r9=%lx r10=%lx r11=%lx r12=%lx", hcall.regs[0],
+ hcall.regs[1], hcall.regs[2], hcall.regs[3], hcall.regs[4],
+ hcall.regs[5], hcall.regs[6], hcall.regs[7], hcall.regs[8],
+ hcall.regs[9]);
+ ret = hcall.regs[0];
+ *out1 = hcall.regs[1];
+ *out2 = hcall.regs[2];
+ *out3 = hcall.regs[3];
+ *out4 = hcall.regs[4];
+ *out5 = hcall.regs[5];
+ *out6 = hcall.regs[6];
+ *out7 = hcall.regs[7];
+ *out8 = hcall.regs[8];
+ *out9 = hcall.regs[9];
+
+ if (!H_isLongBusy(ret)) {
+ if (ret<0) {
+ EDEB_ERR(4, "HCALL99_IN r3=%lx r4=%lx r5=%lx r6=%lx "
+ "r7=%lx r8=%lx r9=%lx r10=%lx "
+ "r11=%lx r12=%lx",
+ opcode, arg1, arg2, arg3,
+ arg4, arg5, arg6, arg7,
+ arg8, arg9);
+ EDEB_ERR(4,
+ "HCALL99_OUT r3=%lx r4=%lx r5=%lx "
+ "r6=%lx r7=%lx r8=%lx r9=%lx r10=%lx "
+ "r11=%lx r12=%lx",
+ hcall.regs[0], hcall.regs[1],
+ hcall.regs[2], hcall.regs[3],
+ hcall.regs[4], hcall.regs[5],
+ hcall.regs[6], hcall.regs[7],
+ hcall.regs[8], hcall.regs[9]);
+ }
+ return ret;
+ }
+ sleep_msecs = getLongBusyTimeSecs(ret);
+ EDEB(7, "Got LongBusy return code from phype. "
+ "Sleep %dmsecs and retry...", sleep_msecs);
+ msleep_interruptible(sleep_msecs);
+ hcall = hcall_in;
+ } /* eof for */
+ EDEB_ERR(4, "HCALL99_OUT ret=H_Busy");
+ return H_Busy;
+}
+
+#endif

2006-02-18 01:04:19

by Roland Dreier

Subject: [PATCH 14/22] ehca completion queue handling

From: Roland Dreier <[email protected]>


---

drivers/infiniband/hw/ehca/ehca_cq.c | 416 ++++++++++++++++++++++++++++++++++
1 files changed, 416 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_cq.c b/drivers/infiniband/hw/ehca/ehca_cq.c
new file mode 100644
index 0000000..ebee9c3
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_cq.c
@@ -0,0 +1,416 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * Completion queue handling
+ *
+ * Authors: Waleri Fomin <[email protected]>
+ * Reinhard Ernst <[email protected]>
+ * Heiko J Schick <[email protected]>
+ * Hoang-Nam Nguyen <[email protected]>
+ *
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_cq.c,v 1.61 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#define DEB_PREFIX "e_cq"
+
+#include "ehca_kernel.h"
+#include "ehca_common.h"
+#include "ehca_iverbs.h"
+#include "ehca_classes.h"
+#include "ehca_irq.h"
+#include "hcp_if.h"
+#include <linux/err.h>
+#include <asm/uaccess.h>
+
+#define HIPZ_CQ_REGISTER_ORIG 0
+
+int ehca_cq_assign_qp(struct ehca_cq *cq, struct ehca_qp *qp)
+{
+ unsigned int qp_num = qp->ehca_qp_core.real_qp_num;
+ unsigned int key = qp_num%QP_HASHTAB_LEN;
+ unsigned long spl_flags = 0;
+ spin_lock_irqsave(&cq->spinlock, spl_flags);
+ list_add(&qp->list_entries, &cq->qp_hashtab[key]);
+ spin_unlock_irqrestore(&cq->spinlock, spl_flags);
+ EDEB(7, "cq_num=%x real_qp_num=%x", cq->cq_number, qp_num);
+ return 0;
+}
+
+int ehca_cq_unassign_qp(struct ehca_cq *cq, unsigned int real_qp_num)
+{
+ int ret = -EINVAL;
+ unsigned int key = real_qp_num%QP_HASHTAB_LEN;
+ struct list_head *iter = NULL;
+ struct ehca_qp *qp = NULL;
+ unsigned long spl_flags = 0;
+ spin_lock_irqsave(&cq->spinlock, spl_flags);
+ list_for_each(iter, &cq->qp_hashtab[key]) {
+ qp = list_entry(iter, struct ehca_qp, list_entries);
+ if (qp->ehca_qp_core.real_qp_num == real_qp_num) {
+ list_del(iter);
+ EDEB(7, "removed qp from cq .cq_num=%x real_qp_num=%x",
+ cq->cq_number, real_qp_num);
+ ret = 0;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&cq->spinlock, spl_flags);
+ if (ret!=0) {
+ EDEB_ERR(4, "qp not found cq_num=%x real_qp_num=%x",
+ cq->cq_number, real_qp_num);
+ }
+ return ret;
+}
+
+struct ehca_qp* ehca_cq_get_qp(struct ehca_cq *cq, int real_qp_num)
+{
+ struct ehca_qp *ret = NULL;
+ unsigned int key = real_qp_num%QP_HASHTAB_LEN;
+ struct list_head *iter = NULL;
+ struct ehca_qp *qp = NULL;
+ list_for_each(iter, &cq->qp_hashtab[key]) {
+ qp = list_entry(iter, struct ehca_qp, list_entries);
+ if (qp->ehca_qp_core.real_qp_num == real_qp_num) {
+ ret = qp;
+ break;
+ }
+ }
+ return ret;
+}
+
+struct ib_cq *ehca_create_cq(struct ib_device *device, int cqe,
+ struct ib_ucontext *context,
+ struct ib_udata *udata)
+{
+ struct ib_cq *cq = NULL;
+ struct ehca_cq *my_cq = NULL;
+ u32 number_of_entries = cqe;
+ struct ehca_shca *shca = NULL;
+ struct ipz_adapter_handle adapter_handle;
+ struct ipz_eq_handle eq_handle;
+ struct ipz_cq_handle *cq_handle_ref = NULL;
+ u32 act_nr_of_entries = 0;
+ u32 act_pages = 0;
+ u32 counter = 0;
+ void *vpage = NULL;
+ u64 rpage = 0;
+ struct h_galpa gal;
+ u64 CQx_FEC = 0;
+ u64 hipz_rc = H_Success;
+ int ipz_rc = 0;
+ int ret = 0;
+ const u32 additional_cqe=20;
+ int i= 0;
+
+ EHCA_CHECK_DEVICE_P(device);
+ EDEB_EN(7, "device=%p cqe=%x context=%p",
+ device, cqe, context);
+ /* cq's maximum depth is 4GB-64
+ * but we need an additional 20 as a buffer for receiving error CQEs
+ */
+ if (cqe>=0xFFFFFFFF-64-additional_cqe) {
+ return ERR_PTR(-EINVAL);
+ }
+ number_of_entries += additional_cqe;
+
+ my_cq = ehca_cq_new();
+ if (my_cq == NULL) {
+ cq = ERR_PTR(-ENOMEM);
+ EDEB_ERR(4,
+ "Out of memory for ehca_cq struct "
+ "device=%p", device);
+ goto create_cq_exit0;
+ }
+ cq = &my_cq->ib_cq;
+
+ shca = container_of(device, struct ehca_shca, ib_device);
+ adapter_handle = shca->ipz_hca_handle;
+ eq_handle = shca->eq.ipz_eq_handle;
+ cq_handle_ref = &my_cq->ipz_cq_handle;
+
+ do {
+ if (!idr_pre_get(&ehca_cq_idr, GFP_KERNEL)) {
+ cq = ERR_PTR(-ENOMEM);
+ EDEB_ERR(4,
+ "Can't reserve idr resources. "
+ "device=%p", device);
+ goto create_cq_exit1;
+ }
+
+ down_write(&ehca_cq_idr_sem);
+ ret = idr_get_new(&ehca_cq_idr, my_cq, &my_cq->token);
+ up_write(&ehca_cq_idr_sem);
+
+ } while (ret == -EAGAIN);
+
+ if (ret) {
+ cq = ERR_PTR(-ENOMEM);
+ EDEB_ERR(4,
+ "Can't allocate new idr entry. "
+ "device=%p", device);
+ goto create_cq_exit1;
+ }
+
+ hipz_rc = hipz_h_alloc_resource_cq(adapter_handle,
+ &my_cq->pf,
+ eq_handle,
+ my_cq->token,
+ number_of_entries,
+ cq_handle_ref,
+ &act_nr_of_entries,
+ &act_pages,
+ &my_cq->ehca_cq_core.galpas);
+ if (hipz_rc != H_Success) {
+ EDEB_ERR(4,
+ "hipz_h_alloc_resource_cq() failed "
+ "hipz_rc=%lx device=%p", hipz_rc, device);
+ cq = ERR_PTR(ehca2ib_return_code(hipz_rc));
+ goto create_cq_exit2;
+ }
+
+ ipz_rc =
+ ipz_queue_ctor(&my_cq->ehca_cq_core.ipz_queue, act_pages,
+ EHCA_PAGESIZE, sizeof(struct ehca_cqe), 0);
+ if (!ipz_rc) {
+ EDEB_ERR(4,
+ "ipz_queue_ctor() failed "
+ "ipz_rc=%x device=%p", ipz_rc, device);
+ cq = ERR_PTR(-EINVAL);
+ goto create_cq_exit3;
+ }
+
+ for (counter = 0; counter < act_pages; counter++) {
+ vpage = ipz_QPageit_get_inc(&my_cq->ehca_cq_core.ipz_queue);
+ if (!vpage) {
+ EDEB_ERR(4, "ipz_QPageit_get_inc() "
+ "returns NULL device=%p", device);
+ cq = ERR_PTR(-EAGAIN);
+ goto create_cq_exit4;
+ }
+ rpage = ehca_kv_to_g(vpage);
+
+ hipz_rc = hipz_h_register_rpage_cq(adapter_handle,
+ my_cq->ipz_cq_handle,
+ &my_cq->pf,
+ 0,
+ HIPZ_CQ_REGISTER_ORIG,
+ rpage,
+ 1,
+ my_cq->ehca_cq_core.galpas.
+ kernel);
+
+ if (hipz_rc < H_Success) {
+ EDEB_ERR(4, "hipz_h_register_rpage_cq() failed "
+ "ehca_cq=%p cq_num=%x hipz_rc=%lx "
+ "counter=%i act_pages=%i",
+ my_cq, my_cq->cq_number,
+ hipz_rc, counter, act_pages);
+ cq = ERR_PTR(-EINVAL);
+ goto create_cq_exit4;
+ }
+
+ if (counter == (act_pages - 1)) {
+ vpage = ipz_QPageit_get_inc(
+ &my_cq->ehca_cq_core.ipz_queue);
+ if ((hipz_rc != H_Success) || (vpage != 0)) {
+ EDEB_ERR(4, "Registration of pages not "
+ "complete ehca_cq=%p cq_num=%x "
+ "hipz_rc=%lx",
+ my_cq, my_cq->cq_number, hipz_rc);
+ cq = ERR_PTR(-EAGAIN);
+ goto create_cq_exit4;
+ }
+ } else {
+ if (hipz_rc != H_PAGE_REGISTERED) {
+ EDEB_ERR(4, "Registration of page failed "
+ "ehca_cq=%p cq_num=%x hipz_rc=%lx"
+ "counter=%i act_pages=%i",
+ my_cq, my_cq->cq_number,
+ hipz_rc, counter, act_pages);
+ cq = ERR_PTR(-ENOMEM);
+ goto create_cq_exit4;
+ }
+ }
+ }
+
+ ipz_QEit_reset(&my_cq->ehca_cq_core.ipz_queue);
+
+ gal = my_cq->ehca_cq_core.galpas.kernel;
+ CQx_FEC = hipz_galpa_load(gal, CQTEMM_OFFSET(CQx_FEC));
+ EDEB(8, "ehca_cq=%p cq_num=%x CQx_FEC=%lx",
+ my_cq, my_cq->cq_number, CQx_FEC);
+
+ my_cq->ib_cq.cqe = my_cq->nr_of_entries =
+ act_nr_of_entries-additional_cqe;
+ my_cq->cq_number = (my_cq->ipz_cq_handle.handle) & 0xffff;
+
+ for (i=0; i<QP_HASHTAB_LEN; i++) {
+ INIT_LIST_HEAD(&my_cq->qp_hashtab[i]);
+ }
+
+ if (context) {
+ struct ehca_create_cq_resp resp;
+ struct vm_area_struct * vma;
+ resp.cq_number = my_cq->cq_number;
+ resp.token = my_cq->token;
+ resp.ehca_cq_core = my_cq->ehca_cq_core;
+
+ ehca_mmap_nopage(((u64) (my_cq->token) << 32) | 0x12000000,
+ my_cq->ehca_cq_core.ipz_queue.queue_length,
+ ((void**)&resp.ehca_cq_core.ipz_queue.queue),
+ &vma);
+ my_cq->uspace_queue = (u64)resp.ehca_cq_core.ipz_queue.queue;
+ ehca_mmap_register(my_cq->ehca_cq_core.galpas.user.fw_handle,
+ ((void**)&resp.ehca_cq_core.galpas.kernel.fw_handle),
+ &vma);
+ my_cq->uspace_fwh = (u64)resp.ehca_cq_core.galpas.kernel.fw_handle;
+ if (ib_copy_to_udata(udata, &resp, sizeof(resp))) {
+ EDEB_ERR(4, "Copy to udata failed.");
+ goto create_cq_exit4;
+ }
+ }
+
+ EDEB_EX(7,"retcode=%p ehca_cq=%p cq_num=%x cq_size=%x",
+ cq, my_cq, my_cq->cq_number, act_nr_of_entries);
+ return cq;
+
+ create_cq_exit4:
+ ipz_queue_dtor(&my_cq->ehca_cq_core.ipz_queue);
+
+ create_cq_exit3:
+ hipz_rc = hipz_h_destroy_cq(adapter_handle, my_cq, 1);
+ EDEB(3, "hipz_h_destroy_cq() failed ehca_cq=%p cq_num=%x hipz_rc=%lx",
+ my_cq, my_cq->cq_number, hipz_rc);
+
+ create_cq_exit2:
+ /* dereg idr */
+ down_write(&ehca_cq_idr_sem);
+ idr_remove(&ehca_cq_idr, my_cq->token);
+ up_write(&ehca_cq_idr_sem);
+
+ create_cq_exit1:
+ /* free cq struct */
+ ehca_cq_delete(my_cq);
+
+ create_cq_exit0:
+ EDEB_EX(7, "An error has occurred retcode=%p ", cq);
+ return cq;
+}
+
+int ehca_destroy_cq(struct ib_cq *cq)
+{
+ u64 hipz_rc = H_Success;
+ int retcode = 0;
+ struct ehca_cq *my_cq = NULL;
+ int cq_num = 0;
+ struct ib_device *device = NULL;
+ struct ehca_shca *shca = NULL;
+ struct ipz_adapter_handle adapter_handle;
+
+ EHCA_CHECK_CQ(cq);
+ my_cq = container_of(cq, struct ehca_cq, ib_cq);
+ cq_num = my_cq->cq_number;
+ device = cq->device;
+ EHCA_CHECK_DEVICE(device);
+ shca = container_of(device, struct ehca_shca, ib_device);
+ adapter_handle = shca->ipz_hca_handle;
+ EDEB_EN(7, "ehca_cq=%p cq_num=%x",
+ my_cq, my_cq->cq_number);
+
+ down_write(&ehca_cq_idr_sem);
+ idr_remove(&ehca_cq_idr, my_cq->token);
+ up_write(&ehca_cq_idr_sem);
+
+ /* un-mmap if vma alloc */
+ if (my_cq->uspace_queue!=0) {
+ struct ehca_cq_core *cq_core = &my_cq->ehca_cq_core;
+ retcode = ehca_munmap(my_cq->uspace_queue,
+ cq_core->ipz_queue.queue_length);
+ retcode = ehca_munmap(my_cq->uspace_fwh, 4096);
+ }
+
+ hipz_rc = hipz_h_destroy_cq(adapter_handle, my_cq, 0);
+ if (hipz_rc == H_R_STATE) {
+ /* cq in err: read err data and destroy it forcibly */
+ EDEB(4, "ehca_cq=%p cq_num=%x ressource=%lx in err state. "
+ "Try to delete it forcibly.",
+ my_cq, my_cq->cq_number, my_cq->ipz_cq_handle.handle);
+ ehca_error_data(shca, my_cq->ipz_cq_handle.handle);
+ hipz_rc = hipz_h_destroy_cq(adapter_handle, my_cq, 1);
+ if (hipz_rc == H_Success) {
+ EDEB(4, "ehca_cq=%p cq_num=%x deleted successfully.",
+ my_cq, my_cq->cq_number);
+ }
+ }
+ if (hipz_rc != H_Success) {
+ EDEB_ERR(4,"hipz_h_destroy_cq() failed "
+ "hipz_rc=%lx ehca_cq=%p cq_num=%x",
+ hipz_rc, my_cq, my_cq->cq_number);
+ retcode = ehca2ib_return_code(hipz_rc);
+ goto destroy_cq_exit0; /* TODO */
+ }
+ ipz_queue_dtor(&my_cq->ehca_cq_core.ipz_queue);
+ ehca_cq_delete(my_cq);
+
+ destroy_cq_exit0:
+ EDEB_EX(7, "ehca_cq=%p cq_num=%x retcode=%x ",
+ my_cq, cq_num, retcode);
+ return retcode;
+}
+
+int ehca_resize_cq(struct ib_cq *cq, int cqe)
+{
+ int retcode = 0;
+ struct ehca_cq *my_cq = NULL;
+
+ if (unlikely(NULL == cq)) {
+ EDEB_ERR(4, "cq is NULL");
+ return -EFAULT;
+ }
+
+ my_cq = container_of(cq, struct ehca_cq, ib_cq);
+ EDEB_EN(7, "ehca_cq=%p cq_num=%x",
+ my_cq, my_cq->cq_number);
+ /* TODO: proper resize still needs to be done */
+ if (cqe > cq->cqe)
+ retcode = -EINVAL;
+ EDEB_EX(7, "ehca_cq=%p cq_num=%x",
+ my_cq, my_cq->cq_number);
+ return retcode;
+}
+
+/* eof ehca_cq.c */
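
For reference, the destroy path above uses a try-then-force pattern: a normal hipz_h_destroy_cq() first, and only when the CQ reports H_R_STATE a forced retry. A self-contained userspace model of that control flow (all names, types and return values here are stand-ins, not the real firmware interface):

```c
#include <assert.h>

/* Return codes modelled on H_Success / H_R_STATE; the real values come
 * from the hypervisor headers and are assumptions here. */
enum { H_SUCCESS = 0, H_R_STATE = 1 };

struct fake_cq {
	int in_error;	/* CQ stuck in the error state */
	int destroyed;
};

/* Stand-in for hipz_h_destroy_cq(): a CQ in the error state can only
 * be destroyed when the force flag is set. */
static int fake_destroy_cq(struct fake_cq *cq, int force)
{
	if (cq->in_error && !force)
		return H_R_STATE;
	cq->destroyed = 1;
	return H_SUCCESS;
}

/* The shape of ehca_destroy_cq(): try a normal destroy first, and only
 * on H_R_STATE retry with the force flag. */
static int destroy_with_retry(struct fake_cq *cq)
{
	int rc = fake_destroy_cq(cq, 0);

	if (rc == H_R_STATE)
		rc = fake_destroy_cq(cq, 1);	/* forced teardown */
	return rc;
}
```

Both the healthy and the errored CQ end up destroyed; only the errored one takes the forced second call.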

2006-02-18 01:06:06

by Roland Dreier

Subject: [PATCH 12/22] ehca low-level verbs

From: Roland Dreier <[email protected]>

What is ehca_init_module()? It is declared but never defined.
---

drivers/infiniband/hw/ehca/ehca_iverbs.h | 163 ++++++++++++++++++
drivers/infiniband/hw/ehca/ehca_qes.h | 274 ++++++++++++++++++++++++++++++
2 files changed, 437 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_iverbs.h b/drivers/infiniband/hw/ehca/ehca_iverbs.h
new file mode 100644
index 0000000..b1319a9
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_iverbs.h
@@ -0,0 +1,163 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * Function definitions for internal functions
+ *
+ * Authors: Heiko J Schick <[email protected]>
+ * Khadija Souissi <[email protected]>
+ * Christoph Raisch <[email protected]>
+ * Hoang-Nam Nguyen <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_iverbs.h,v 1.32 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef __EHCA_IVERBS_H__
+#define __EHCA_IVERBS_H__
+
+#include "ehca_classes.h"
+/* ehca internal verb for test use */
+void ehca_init_module(void);
+
+int ehca_query_device(struct ib_device *ibdev, struct ib_device_attr *props);
+int ehca_query_port(struct ib_device *ibdev,
+ u8 port, struct ib_port_attr *props);
+int ehca_query_pkey(struct ib_device *ibdev, u8 port, u16 index, u16 *pkey);
+int ehca_query_gid(struct ib_device *ibdev, u8 port,
+ int index, union ib_gid *gid);
+int ehca_modify_port(struct ib_device *ibdev,
+ u8 port, int port_modify_mask,
+ struct ib_port_modify *props);
+
+struct ib_pd *ehca_alloc_pd(struct ib_device *device,
+ struct ib_ucontext *context,
+ struct ib_udata *udata);
+
+int ehca_dealloc_pd(struct ib_pd *pd);
+
+struct ib_ah *ehca_create_ah(struct ib_pd *pd, struct ib_ah_attr *ah_attr);
+int ehca_modify_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr);
+int ehca_query_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr);
+int ehca_destroy_ah(struct ib_ah *ah);
+
+struct ib_cq *ehca_create_cq(struct ib_device *device, int cqe,
+ struct ib_ucontext *context,
+ struct ib_udata *udata);
+int ehca_resize_cq(struct ib_cq *cq, int cqe);
+
+int ehca_destroy_cq(struct ib_cq *cq);
+
+int ehca_poll_cq(struct ib_cq *cq, int num_entries, struct ib_wc *wc);
+
+int ehca_peek_cq(struct ib_cq *cq, int wc_cnt);
+
+int ehca_req_notify_cq(struct ib_cq *cq, enum ib_cq_notify cq_notify);
+
+struct ib_qp *ehca_create_qp(struct ib_pd *pd,
+ struct ib_qp_init_attr *init_attr,
+ struct ib_udata *udata);
+
+u64 ehca_define_sqp(struct ehca_shca *shca, struct ehca_qp *ibqp,
+ struct ib_qp_init_attr *qp_init_attr);
+
+int ehca_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask);
+
+int ehca_query_qp(struct ib_qp *qp, struct ib_qp_attr *qp_attr,
+ int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr);
+
+int ehca_destroy_qp(struct ib_qp *qp);
+
+int ehca_post_send(struct ib_qp *qp,
+ struct ib_send_wr *send_wr, struct ib_send_wr **bad_send_wr);
+
+int ehca_post_recv(struct ib_qp *qp,
+ struct ib_recv_wr *recv_wr, struct ib_recv_wr **bad_recv_wr);
+
+struct ib_mr *ehca_get_dma_mr(struct ib_pd *pd, int mr_access_flags);
+
+struct ib_mr *ehca_reg_phys_mr(struct ib_pd *pd,
+ struct ib_phys_buf *phys_buf_array,
+ int num_phys_buf,
+ int mr_access_flags, u64 *iova_start);
+
+struct ib_mr *ehca_reg_user_mr(struct ib_pd *pd,
+ struct ib_umem *region,
+ int mr_access_flags, struct ib_udata *udata);
+
+int ehca_rereg_phys_mr(struct ib_mr *mr,
+ int mr_rereg_mask,
+ struct ib_pd *pd,
+ struct ib_phys_buf *phys_buf_array,
+ int num_phys_buf, int mr_access_flags, u64 *iova_start);
+
+int ehca_query_mr(struct ib_mr *mr, struct ib_mr_attr *mr_attr);
+
+int ehca_dereg_mr(struct ib_mr *mr);
+
+struct ib_mw *ehca_alloc_mw(struct ib_pd *pd);
+
+int ehca_bind_mw(struct ib_qp *qp,
+ struct ib_mw *mw, struct ib_mw_bind *mw_bind);
+
+int ehca_dealloc_mw(struct ib_mw *mw);
+
+struct ib_fmr *ehca_alloc_fmr(struct ib_pd *pd,
+ int mr_access_flags,
+ struct ib_fmr_attr *fmr_attr);
+
+int ehca_map_phys_fmr(struct ib_fmr *fmr,
+ u64 *page_list, int list_len, u64 iova);
+
+int ehca_unmap_fmr(struct list_head *fmr_list);
+
+int ehca_dealloc_fmr(struct ib_fmr *fmr);
+
+int ehca_attach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid);
+
+int ehca_detach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid);
+
+struct ib_ucontext *ehca_alloc_ucontext(struct ib_device *device,
+ struct ib_udata *udata);
+int ehca_dealloc_ucontext(struct ib_ucontext *context);
+
+int ehca_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
+
+int ehca_poll_eqs(void *data);
+
+int ehca_mmap_nopage(u64 foffset, u64 length, void **mapped,
+ struct vm_area_struct **vma);
+int ehca_mmap_register(u64 physical, void **mapped,
+ struct vm_area_struct **vma);
+int ehca_munmap(unsigned long addr, size_t len);
+
+#endif
diff --git a/drivers/infiniband/hw/ehca/ehca_qes.h b/drivers/infiniband/hw/ehca/ehca_qes.h
new file mode 100644
index 0000000..e9420e3
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/ehca_qes.h
@@ -0,0 +1,274 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * Hardware request structures
+ *
+ * Authors: Waleri Fomin <[email protected]>
+ * Reinhard Ernst <[email protected]>
+ * Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: ehca_qes.h,v 1.9 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+
+#ifndef _EHCA_QES_H_
+#define _EHCA_QES_H_
+
+/*
+ * Don't include any kernel-related files here!
+ * This file is shared between user and kernel space.
+ */
+
+/**
+ * virtual scatter gather entry to specify remote addresses with length
+ */
+struct ehca_vsgentry {
+ u64 vaddr;
+ u32 lkey;
+ u32 length;
+};
+
+#define GRH_FLAG_MASK EHCA_BMASK_IBM(7,7)
+#define GRH_IPVERSION_MASK EHCA_BMASK_IBM(0,3)
+#define GRH_TCLASS_MASK EHCA_BMASK_IBM(4,12)
+#define GRH_FLOWLABEL_MASK EHCA_BMASK_IBM(13,31)
+#define GRH_PAYLEN_MASK EHCA_BMASK_IBM(32,47)
+#define GRH_NEXTHEADER_MASK EHCA_BMASK_IBM(48,55)
+#define GRH_HOPLIMIT_MASK EHCA_BMASK_IBM(56,63)
+
+/**
+ * Unreliable Datagram Address Vector Format
+ * see IBTA Vol1 chapter 8.3 Global Routing Header
+ */
+struct ehca_ud_av {
+ u8 sl;
+ u8 lnh;
+ u16 dlid;
+ u8 reserved1;
+ u8 reserved2;
+ u8 reserved3;
+ u8 slid_path_bits;
+ u8 reserved4;
+ u8 ipd;
+ u8 reserved5;
+ u8 pmtu;
+ u32 reserved6;
+ u64 reserved7;
+ union {
+ struct {
+ u64 word_0; /* always set to 6 */
+ /* should be 0x1B for IB transport */
+ u64 word_1;
+ u64 word_2;
+ u64 word_3;
+ u64 word_4;
+ } grh;
+ struct {
+ u32 wd_0;
+ u32 wd_1;
+ /* DWord_1 --> SGID */
+
+ u32 sgid_wd3;
+ /* bits 127 - 96 */
+
+ u32 sgid_wd2;
+ /* bits 95 - 64 */
+ /* DWord_2 */
+
+ u32 sgid_wd1;
+ /* bits 63 - 32 */
+
+ u32 sgid_wd0;
+ /* bits 31 - 0 */
+ /* DWord_3 --> DGID */
+
+ u32 dgid_wd3;
+ /* bits 127 - 96 */
+ u32 dgid_wd2;
+ /* bits 95 - 64 */
+ /* DWord_4 */
+ u32 dgid_wd1;
+ /* bits 63 - 32 */
+
+ u32 dgid_wd0;
+ /* bits 31 - 0 */
+ } grh_l;
+ };
+};
+
+/* maximum number of sg entries allowed in a WQE */
+#define MAX_WQE_SG_ENTRIES 252
+
+#define WQE_OPTYPE_SEND 0x80
+#define WQE_OPTYPE_RDMAREAD 0x40
+#define WQE_OPTYPE_RDMAWRITE 0x20
+#define WQE_OPTYPE_CMPSWAP 0x10
+#define WQE_OPTYPE_FETCHADD 0x08
+#define WQE_OPTYPE_BIND 0x04
+
+#define WQE_WRFLAG_REQ_SIGNAL_COM 0x80
+#define WQE_WRFLAG_FENCE 0x40
+#define WQE_WRFLAG_IMM_DATA_PRESENT 0x20
+#define WQE_WRFLAG_SOLIC_EVENT 0x10
+
+#define WQEF_CACHE_HINT 0x80
+#define WQEF_CACHE_HINT_RD_WR 0x40
+#define WQEF_TIMED_WQE 0x20
+#define WQEF_PURGE 0x08
+
+#define MW_BIND_ACCESSCTRL_R_WRITE 0x40
+#define MW_BIND_ACCESSCTRL_R_READ 0x20
+#define MW_BIND_ACCESSCTRL_R_ATOMIC 0x10
+
+struct ehca_wqe {
+ u64 work_request_id;
+ u8 optype;
+ u8 wr_flag;
+ u16 pkeyi;
+ u8 wqef;
+ u8 nr_of_data_seg;
+ u16 wqe_provided_slid;
+ u32 destination_qp_number;
+ u32 resync_psn_sqp;
+ u32 local_ee_context_qkey;
+ u32 immediate_data;
+ union {
+ struct {
+ u64 remote_virtual_adress;
+ u32 rkey;
+ u32 reserved;
+ u64 atomic_1st_op_dma_len;
+ u64 atomic_2nd_op;
+ struct ehca_vsgentry sg_list[MAX_WQE_SG_ENTRIES];
+
+ } nud;
+ struct {
+ u64 ehca_ud_av_ptr;
+ u64 reserved1;
+ u64 reserved2;
+ u64 reserved3;
+ struct ehca_vsgentry sg_list[MAX_WQE_SG_ENTRIES];
+ } ud_avp;
+ struct {
+ struct ehca_ud_av ud_av;
+ struct ehca_vsgentry sg_list[MAX_WQE_SG_ENTRIES - 2];
+ } ud_av;
+ struct {
+ u64 reserved0;
+ u64 reserved1;
+ u64 reserved2;
+ u64 reserved3;
+ struct ehca_vsgentry sg_list[MAX_WQE_SG_ENTRIES];
+ } all_rcv;
+
+ struct {
+ u64 reserved;
+ u32 rkey;
+ u32 old_rkey;
+ u64 reserved1;
+ u64 reserved2;
+ u64 virtual_address;
+ u32 reserved3;
+ u32 length;
+ u32 reserved4;
+ u16 reserved5;
+ u8 reserved6;
+ u8 lr_ctl;
+ u32 lkey;
+ u32 reserved7;
+ u64 reserved8;
+ u64 reserved9;
+ u64 reserved10;
+ u64 reserved11;
+ } bind;
+ struct {
+ u64 reserved12;
+ u64 reserved13;
+ u32 size;
+ u32 start;
+ } inline_data;
+ } u;
+
+};
+
+#define WC_SEND_RECEIVE EHCA_BMASK_IBM(0,0)
+#define WC_IMM_DATA EHCA_BMASK_IBM(1,1)
+#define WC_GRH_PRESENT EHCA_BMASK_IBM(2,2)
+#define WC_SE_BIT EHCA_BMASK_IBM(3,3)
+
+struct ehca_cqe {
+ u64 work_request_id;
+ u8 optype;
+ u8 w_completion_flags;
+ u16 reserved1;
+ u32 nr_bytes_transferred;
+ u32 immediate_data;
+ u32 local_qp_number;
+ u8 freed_resource_count;
+ u8 service_level;
+ u16 wqe_count;
+ u32 qp_token;
+ u32 qkey_ee_token;
+ u32 remote_qp_number;
+ u16 dlid;
+ u16 rlid;
+ u16 reserved2;
+ u16 pkey_index;
+ u32 cqe_timestamp;
+ u32 wqe_timestamp;
+ u8 wqe_timestamp_valid;
+ u8 reserved3;
+ u8 reserved4;
+ u8 cqe_flags;
+ u32 status;
+};
+
+struct ehca_eqe {
+ u64 entry;
+};
+
+struct ehca_mrte {
+ u64 starting_va;
+ u64 length; /* length of memory region in bytes*/
+ u32 pd;
+ u8 key_instance;
+ u8 pagesize;
+ u8 mr_control;
+ u8 local_remote_access_ctrl;
+ u8 reserved[0x20 - 0x18];
+ u64 at_pointer[4];
+};
+#endif /*_EHCA_QES_H_*/
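
The optype, wr_flag and wqef fields of struct ehca_wqe above are single-byte flag sets. As a quick illustration of how a sender would compose wr_flag from the constants defined in this header (the helper itself is hypothetical, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Flag values copied from the WQE_WRFLAG_* #defines above. */
#define WQE_WRFLAG_REQ_SIGNAL_COM	0x80
#define WQE_WRFLAG_IMM_DATA_PRESENT	0x20

/* Compose the wr_flag byte of a struct ehca_wqe for a send that may be
 * signaled and may carry immediate data. */
static uint8_t make_wr_flag(int signaled, int has_imm)
{
	uint8_t flag = 0;

	if (signaled)
		flag |= WQE_WRFLAG_REQ_SIGNAL_COM;
	if (has_imm)
		flag |= WQE_WRFLAG_IMM_DATA_PRESENT;
	return flag;
}
```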

2006-02-18 00:58:15

by Roland Dreier

Subject: [PATCH 04/22] OF adapter probing

From: Roland Dreier <[email protected]>

hipz_probe_adapters() looks a little funny -- it seems to bail out
of all the remaining adapters if one of them isn't quite right.
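
To make the point concrete, here is a self-contained userspace model of the two loop shapes -- the patch's bail-on-first-error walk versus a continue-based walk that keeps the remaining adapters (the node names are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model of the lhca walk: each entry stands for one device
 * tree node's ibm,loc-code property, NULL meaning "property missing".
 * The loc-code strings are invented. */
static const char *nodes[] = { "U0.1-P1-I1", NULL, "U0.1-P1-I3" };
#define NUM_NODES 3

/* Current shape: error out on the first bad node, losing the good
 * adapters that would have followed it. */
static int probe_bail(const char **out)
{
	int num = 0, i;

	for (i = 0; i < NUM_NODES; i++) {
		if (nodes[i] == NULL)
			return -1;	/* third adapter is never probed */
		out[num++] = nodes[i];
	}
	return num;
}

/* Alternative: skip the bad node and keep probing the rest. */
static int probe_skip(const char **out)
{
	int num = 0, i;

	for (i = 0; i < NUM_NODES; i++) {
		if (nodes[i] == NULL)
			continue;
		out[num++] = nodes[i];
	}
	return num;
}
```

With one bad node in the middle, the first loop reports total failure while the second still finds two usable adapters.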
---

drivers/infiniband/hw/ehca/hcp_sense.c | 144 ++++++++++++++++++++++++++++++++
drivers/infiniband/hw/ehca/hcp_sense.h | 136 ++++++++++++++++++++++++++++++
2 files changed, 280 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/hcp_sense.c b/drivers/infiniband/hw/ehca/hcp_sense.c
new file mode 100644
index 0000000..83fa4a3
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/hcp_sense.c
@@ -0,0 +1,144 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * ehca detection and query code for POWER
+ *
+ * Authors: Heiko J Schick <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: hcp_sense.c,v 1.10 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#define DEB_PREFIX "snse"
+
+#include "ehca_kernel.h"
+#include "ehca_tools.h"
+
+int hipz_count_adapters(void)
+{
+ int num = 0;
+ struct device_node *dn = NULL;
+
+ EDEB_EN(7, "");
+
+ while ((dn = of_find_node_by_name(dn, "lhca"))) {
+ num++;
+ }
+
+ of_node_put(dn);
+
+ if (num == 0) {
+ EDEB_ERR(4, "No lhca node name was found in the"
+ " Open Firmware device tree.");
+ return -ENODEV;
+ }
+
+ EDEB(6, " ... found %x adapter(s)", num);
+
+ EDEB_EX(7, "num=%x", num);
+
+ return num;
+}
+
+int hipz_probe_adapters(char **adapter_list)
+{
+ int ret = 0;
+ int num = 0;
+ struct device_node *dn = NULL;
+ char *loc;
+
+ EDEB_EN(7, "adapter_list=%p", adapter_list);
+
+ while ((dn = of_find_node_by_name(dn, "lhca"))) {
+ loc = get_property(dn, "ibm,loc-code", NULL);
+ if (loc == NULL) {
+ EDEB_ERR(4, "No ibm,loc-code property for"
+ " lhca Open Firmware device tree node.");
+ ret = -ENODEV;
+ goto probe_adapters0;
+ }
+
+ adapter_list[num] = loc;
+ EDEB(6, " ... found adapter[%x] with loc-code: %s", num, loc);
+ num++;
+ }
+
+ probe_adapters0:
+ of_node_put(dn);
+
+ EDEB_EX(7, "ret=%x", ret);
+
+ return ret;
+}
+
+u64 hipz_get_adapter_handle(char *adapter)
+{
+ struct device_node *dn = NULL;
+ char *loc;
+ u64 *u64data = NULL;
+ u64 ret = 0;
+
+ EDEB_EN(7, "adapter=%p", adapter);
+
+ while ((dn = of_find_node_by_name(dn, "lhca"))) {
+ loc = get_property(dn, "ibm,loc-code", NULL);
+ if (loc == NULL) {
+ EDEB_ERR(4, "No ibm,loc-code property for"
+ " lhca Open Firmware device tree node.");
+ goto get_adapter_handle0;
+ }
+
+ if (strcmp(loc, adapter) == 0) {
+ u64data =
+ (u64 *) get_property(dn, "ibm,hca-handle", NULL);
+ break;
+ }
+ }
+
+ if (u64data == NULL) {
+ EDEB_ERR(4, "No ibm,hca-handle property for"
+ " lhca Open Firmware device tree node with"
+ " ibm,loc-code: %s.", adapter);
+ goto get_adapter_handle0;
+ }
+
+ ret = *u64data;
+
+ get_adapter_handle0:
+ of_node_put(dn);
+
+ EDEB_EX(7, "ret=%lx",ret);
+
+ return ret;
+}
diff --git a/drivers/infiniband/hw/ehca/hcp_sense.h b/drivers/infiniband/hw/ehca/hcp_sense.h
new file mode 100644
index 0000000..a49040b
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/hcp_sense.h
@@ -0,0 +1,136 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * ehca detection and query code for POWER
+ *
+ * Authors: Heiko J Schick <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: hcp_sense.h,v 1.11 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef HCP_SENSE_H
+#define HCP_SENSE_H
+
+int hipz_count_adapters(void);
+int hipz_probe_adapters(char **adapter_list);
+u64 hipz_get_adapter_handle(char *adapter);
+
+/* query hca response block */
+struct query_hca_rblock {
+ u32 cur_reliable_dg;
+ u32 cur_qp;
+ u32 cur_cq;
+ u32 cur_eq;
+ u32 cur_mr;
+ u32 cur_mw;
+ u32 cur_ee_context;
+ u32 cur_mcast_grp;
+ u32 cur_qp_attached_mcast_grp;
+ u32 reserved1;
+ u32 cur_ipv6_qp;
+ u32 cur_eth_qp;
+ u32 cur_hp_mr;
+ u32 reserved2[3];
+ u32 max_rd_domain;
+ u32 max_qp;
+ u32 max_cq;
+ u32 max_eq;
+ u32 max_mr;
+ u32 max_hp_mr;
+ u32 max_mw;
+ u32 max_mrwpte;
+ u32 max_special_mrwpte;
+ u32 max_rd_ee_context;
+ u32 max_mcast_grp;
+ u32 max_qps_attached_all_mcast_grp;
+ u32 max_qps_attached_mcast_grp;
+ u32 max_raw_ipv6_qp;
+ u32 max_raw_ethy_qp;
+ u32 internal_clock_frequency;
+ u32 max_pd;
+ u32 max_ah;
+ u32 max_cqe;
+ u32 max_wqes_wq;
+ u32 max_partitions;
+ u32 max_rr_ee_context;
+ u32 max_rr_qp;
+ u32 max_rr_hca;
+ u32 max_act_wqs_ee_context;
+ u32 max_act_wqs_qp;
+ u32 max_sge;
+ u32 max_sge_rd;
+ u32 memory_page_size_supported;
+ u64 max_mr_size;
+ u32 local_ca_ack_delay;
+ u32 num_ports;
+ u32 vendor_id;
+ u32 vendor_part_id;
+ u32 hw_ver;
+ u64 node_guid;
+ u64 hca_cap_indicators;
+ u32 data_counter_register_size;
+ u32 max_shared_rq;
+ u32 max_isns_eq;
+ u32 max_neq;
+} __attribute__ ((packed));
+
+/* query port response block */
+struct query_port_rblock {
+ u32 state;
+ u32 bad_pkey_cntr;
+ u32 lmc;
+ u32 lid;
+ u32 subnet_timeout;
+ u32 qkey_viol_cntr;
+ u32 sm_sl;
+ u32 sm_lid;
+ u32 capability_mask;
+ u32 init_type_reply;
+ u32 pkey_tbl_len;
+ u32 gid_tbl_len;
+ u64 gid_prefix;
+ u32 port_nr;
+ u16 pkey_entries[16];
+ u8 reserved1[32];
+ u32 trent_size;
+ u32 trbuf_size;
+ u64 max_msg_sz;
+ u32 max_mtu;
+ u32 vl_cap;
+ u8 reserved2[1900];
+ u64 guid_entries[255];
+} __attribute__ ((packed));
+
+#endif
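
Both response blocks above are declared __attribute__ ((packed)) so their in-memory layout matches the firmware's byte-exact response format. A tiny userspace sketch of what packing changes (field names made up):

```c
#include <assert.h>
#include <stdint.h>

/* Without packing, most ABIs insert 4 pad bytes so that `b` is
 * 8-byte aligned, making the struct larger than its fields. */
struct natural_rblock {
	uint32_t a;
	uint64_t b;
};

/* With __attribute__ ((packed)) the fields are laid out back to back,
 * exactly 12 bytes, matching a byte-exact firmware response layout. */
struct packed_rblock {
	uint32_t a;
	uint64_t b;
} __attribute__ ((packed));

static int packed_size(void)
{
	return (int)sizeof(struct packed_rblock);
}
```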

2006-02-18 01:05:46

by Roland Dreier

Subject: [PATCH 05/22] HW register abstractions

From: Roland Dreier <[email protected]>

Does hipz_structs.h really need a whole file to hold 5 #defines?
---

drivers/infiniband/hw/ehca/hipz_fns.h | 83 ++++++
drivers/infiniband/hw/ehca/hipz_fns_core.h | 123 +++++++++
drivers/infiniband/hw/ehca/hipz_hw.h | 382 ++++++++++++++++++++++++++++
drivers/infiniband/hw/ehca/hipz_structs.h | 54 ++++
4 files changed, 642 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/hipz_fns.h b/drivers/infiniband/hw/ehca/hipz_fns.h
new file mode 100644
index 0000000..4231b65
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/hipz_fns.h
@@ -0,0 +1,83 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * HW abstraction register functions
+ *
+ * Authors: Christoph Raisch <[email protected]>
+ * Reinhard Ernst <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: hipz_fns.h,v 1.15 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef __HIPZ_FNS_H__
+#define __HIPZ_FNS_H__
+
+#include "hipz_structs.h"
+#include "ehca_classes.h"
+#include "hipz_hw.h"
+#ifndef EHCA_USE_HCALL
+#include "sim_gal.h"
+#endif
+
+#include "hipz_fns_core.h"
+
+#define hipz_galpa_store_eq(gal,offset,value)\
+ hipz_galpa_store(gal,EQTEMM_OFFSET(offset),value)
+#define hipz_galpa_load_eq(gal,offset)\
+ hipz_galpa_load(gal,EQTEMM_OFFSET(offset))
+
+#define hipz_galpa_store_qped(gal,offset,value)\
+ hipz_galpa_store(gal,QPEDMM_OFFSET(offset),value)
+#define hipz_galpa_load_qped(gal,offset)\
+ hipz_galpa_load(gal,QPEDMM_OFFSET(offset))
+
+#define hipz_galpa_store_mrmw(gal,offset,value)\
+ hipz_galpa_store(gal,MRMWMM_OFFSET(offset),value)
+#define hipz_galpa_load_mrmw(gal,offset)\
+ hipz_galpa_load(gal,MRMWMM_OFFSET(offset))
+
+static inline void hipz_load_FEC(struct ehca_cq_core *cq_core, u32 *count)
+{
+ uint64_t reg = 0;
+ EDEB_EN(7, "cq_core=%p", cq_core);
+ {
+ struct h_galpa gal = cq_core->galpas.kernel;
+ reg = hipz_galpa_load_cq(gal, CQx_FEC);
+ *count = EHCA_BMASK_GET(CQx_FEC_CQE_cnt, reg);
+ }
+ EDEB_EX(7, "cq_core=%p CQx_FEC=%lx", cq_core, reg);
+}
+
+#endif /* __HIPZ_FNS_H__ */
diff --git a/drivers/infiniband/hw/ehca/hipz_fns_core.h b/drivers/infiniband/hw/ehca/hipz_fns_core.h
new file mode 100644
index 0000000..a60b808
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/hipz_fns_core.h
@@ -0,0 +1,123 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * HW abstraction register functions
+ *
+ * Authors: Christoph Raisch <[email protected]>
+ * Reinhard Ernst <[email protected]>
+ * Hoang-Nam Nguyen <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: hipz_fns_core.h,v 1.10 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef __HIPZ_FNS_CORE_H__
+#define __HIPZ_FNS_CORE_H__
+
+#include "ehca_galpa.h"
+#include "hipz_hw.h"
+
+#define hipz_galpa_store_cq(gal,offset,value)\
+ hipz_galpa_store(gal,CQTEMM_OFFSET(offset),value)
+#define hipz_galpa_load_cq(gal,offset)\
+ hipz_galpa_load(gal,CQTEMM_OFFSET(offset))
+
+#define hipz_galpa_store_qp(gal,offset,value)\
+ hipz_galpa_store(gal,QPTEMM_OFFSET(offset),value)
+#define hipz_galpa_load_qp(gal,offset)\
+ hipz_galpa_load(gal,QPTEMM_OFFSET(offset))
+
+static inline void hipz_update_SQA(struct ehca_qp_core *qp_core, u16 nr_wqes)
+{
+ struct h_galpa gal;
+
+ EDEB_EN(7, "qp_core=%p", qp_core);
+ gal = qp_core->galpas.kernel;
+ /* ringing doorbell :-) */
+ hipz_galpa_store_qp(gal, QPx_SQA, EHCA_BMASK_SET(QPx_SQAdder, nr_wqes));
+ EDEB_EX(7, "qp_core=%p QPx_SQA = %i", qp_core, nr_wqes);
+}
+
+static inline void hipz_update_RQA(struct ehca_qp_core *qp_core, u16 nr_wqes)
+{
+ struct h_galpa gal;
+
+ EDEB_EN(7, "qp_core=%p", qp_core);
+ gal = qp_core->galpas.kernel;
+ /* ringing doorbell :-) */
+ hipz_galpa_store_qp(gal, QPx_RQA, EHCA_BMASK_SET(QPx_RQAdder, nr_wqes));
+ EDEB_EX(7, "qp_core=%p QPx_RQA = %i", qp_core, nr_wqes);
+}
+
+static inline void hipz_update_FECA(struct ehca_cq_core *cq_core, u32 nr_cqes)
+{
+ struct h_galpa gal;
+
+ EDEB_EN(7, "cq_core=%p", cq_core);
+ gal = cq_core->galpas.kernel;
+ hipz_galpa_store_cq(gal, CQx_FECA,
+ EHCA_BMASK_SET(CQx_FECAdder, nr_cqes));
+ EDEB_EX(7, "cq_core=%p CQx_FECA = %i", cq_core, nr_cqes);
+}
+
+static inline void hipz_set_CQx_N0(struct ehca_cq_core *cq_core, u32 value)
+{
+ struct h_galpa gal;
+ u64 CQx_N0_reg = 0;
+
+ EDEB_EN(7, "cq_core=%p event on solicited completion -- write CQx_N0",
+ cq_core);
+ gal = cq_core->galpas.kernel;
+ hipz_galpa_store_cq(gal, CQx_N0,
+ EHCA_BMASK_SET(CQx_N0_generate_solicited_comp_event,
+ value));
+ CQx_N0_reg = hipz_galpa_load_cq(gal, CQx_N0);
+ EDEB_EX(7, "cq_core=%p loaded CQx_N0=%lx", cq_core,
+ (unsigned long)CQx_N0_reg);
+}
+
+static inline void hipz_set_CQx_N1(struct ehca_cq_core *cq_core, u32 value)
+{
+ struct h_galpa gal;
+ u64 CQx_N1_reg = 0;
+
+ EDEB_EN(7, "cq_core=%p event on completion -- write CQx_N1",
+ cq_core);
+ gal = cq_core->galpas.kernel;
+ hipz_galpa_store_cq(gal, CQx_N1,
+ EHCA_BMASK_SET(CQx_N1_generate_comp_event, value));
+ CQx_N1_reg = hipz_galpa_load_cq(gal, CQx_N1);
+ EDEB_EX(7, "cq_core=%p loaded CQx_N1=%lx", cq_core,
+ (unsigned long)CQx_N1_reg);
+}
+
+#endif /* __HIPZ_FNS_CORE_H__ */
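
The EHCA_BMASK_* helpers used throughout these files follow IBM bit numbering, where bit 0 is the MOST significant bit of the 64-bit register, so e.g. EHCA_BMASK_IBM(48,63) names the low 16 bits. The driver's real macros live in ehca_tools.h (not shown here); the sketch below re-implements plausible equivalents purely to show the convention, so treat the exact definitions as assumptions:

```c
#include <assert.h>
#include <stdint.h>

/*
 * IBM bit numbering: bit 0 = MSB of the 64-bit word. A field running
 * from IBM bit `from` to IBM bit `to` (inclusive) therefore occupies
 * ordinary bit positions (63 - to) .. (63 - from).
 */
static uint64_t mask_ibm(int from, int to)
{
	uint64_t width = (uint64_t)(to - from + 1);
	uint64_t field = (width == 64) ? ~0ULL : ((1ULL << width) - 1);

	return field << (63 - to);	/* shift field up to IBM bit `to` */
}

/* Place `value` into the field running from IBM bit `from` to `to`. */
static uint64_t bmask_set(int from, int to, uint64_t value)
{
	return (value << (63 - to)) & mask_ibm(from, to);
}
```

Under this convention the doorbell adder QPx_SQAdder = EHCA_BMASK_IBM(48,63) really is the low 16 bits of the register, which is why hipz_update_SQA() can store the WQE count directly.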
diff --git a/drivers/infiniband/hw/ehca/hipz_hw.h b/drivers/infiniband/hw/ehca/hipz_hw.h
new file mode 100644
index 0000000..6fa005b
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/hipz_hw.h
@@ -0,0 +1,382 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * eHCA register definitions
+ *
+ * Authors: Christoph Raisch <[email protected]>
+ * Reinhard Ernst <[email protected]>
+ * Waleri Fomin <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: hipz_hw.h,v 1.7 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef __HIPZ_HW_H__
+#define __HIPZ_HW_H__
+
+#ifdef __KERNEL__
+#include "ehca_tools.h"
+#include "ehca_kernel.h"
+#else /* !__KERNEL__ */
+#include "ehca_utools.h"
+#endif
+
+/** @brief Queue Pair Table Memory
+ */
+struct hipz_QPTEMM {
+ u64 QPx_HCR;
+#define QPx_HCR_PKEY_Mode EHCA_BMASK_IBM(1,2)
+#define QPx_HCR_Special_QP_Mode EHCA_BMASK_IBM(6,7)
+ u64 QPx_C;
+#define QPx_C_Enabled EHCA_BMASK_IBM(0,0)
+#define QPx_C_Disabled EHCA_BMASK_IBM(1,1)
+#define QPx_C_Req_State EHCA_BMASK_IBM(16,23)
+#define QPx_C_Res_State EHCA_BMASK_IBM(25,31)
+#define QPx_C_disable_ETE_check EHCA_BMASK_IBM(7,7)
+ u64 QPx_HERR;
+ u64 QPx_AER;
+/* 0x20*/
+ u64 QPx_SQA;
+#define QPx_SQAdder EHCA_BMASK_IBM(48,63)
+ u64 QPx_SQC;
+ u64 QPx_RQA;
+#define QPx_RQAdder EHCA_BMASK_IBM(48,63)
+ u64 QPx_RQC;
+/* 0x40*/
+ u64 QPx_ST;
+ u64 QPx_PMSTATE;
+#define QPx_PMSTATE_BITS EHCA_BMASK_IBM(30,31)
+ u64 QPx_PMFA;
+ u64 QPx_PKEY;
+#define QPx_PKEY_value EHCA_BMASK_IBM(48,63)
+/* 0x60*/
+ u64 QPx_PKEYA;
+#define QPx_PKEYA_index0 EHCA_BMASK_IBM(0,15)
+#define QPx_PKEYA_index1 EHCA_BMASK_IBM(16,31)
+#define QPx_PKEYA_index2 EHCA_BMASK_IBM(32,47)
+#define QPx_PKEYA_index3 EHCA_BMASK_IBM(48,63)
+ u64 QPx_PKEYB;
+#define QPx_PKEYB_index4 EHCA_BMASK_IBM(0,15)
+#define QPx_PKEYB_index5 EHCA_BMASK_IBM(16,31)
+#define QPx_PKEYB_index6 EHCA_BMASK_IBM(32,47)
+#define QPx_PKEYB_index7 EHCA_BMASK_IBM(48,63)
+ u64 QPx_PKEYC;
+#define QPx_PKEYC_index8 EHCA_BMASK_IBM(0,15)
+#define QPx_PKEYC_index9 EHCA_BMASK_IBM(16,31)
+#define QPx_PKEYC_index10 EHCA_BMASK_IBM(32,47)
+#define QPx_PKEYC_index11 EHCA_BMASK_IBM(48,63)
+ u64 QPx_PKEYD;
+#define QPx_PKEYD_index12 EHCA_BMASK_IBM(0,15)
+#define QPx_PKEYD_index13 EHCA_BMASK_IBM(16,31)
+#define QPx_PKEYD_index14 EHCA_BMASK_IBM(32,47)
+#define QPx_PKEYD_index15 EHCA_BMASK_IBM(48,63)
+/* 0x80*/
+ u64 QPx_QKEY;
+#define QPx_QKEY_value EHCA_BMASK_IBM(32,63)
+ u64 QPx_DQP;
+#define QPx_DQP_number EHCA_BMASK_IBM(40,63)
+ u64 QPx_DLIDP;
+#define QPx_DLID_PRIMARY EHCA_BMASK_IBM(48,63)
+#define QPx_DLIDP_GRH EHCA_BMASK_IBM(31,31)
+ u64 QPx_PORTP;
+#define QPx_PORT_Primary EHCA_BMASK_IBM(57,63)
+/* 0xa0*/
+ u64 QPx_SLIDP;
+#define QPx_SLIDP_p_path EHCA_BMASK_IBM(48,63)
+#define QPx_SLIDP_lmc EHCA_BMASK_IBM(37,39)
+ u64 QPx_SLIDPP;
+#define QPx_SLID_PRIM_PATH EHCA_BMASK_IBM(57,63)
+ u64 QPx_DLIDA;
+#define QPx_DLIDA_GRH EHCA_BMASK_IBM(31,31)
+ u64 QPx_PORTA;
+#define QPx_PORT_Alternate EHCA_BMASK_IBM(57,63)
+/* 0xc0*/
+ u64 QPx_SLIDA;
+ u64 QPx_SLIDPA;
+ u64 QPx_SLVL;
+#define QPx_SLVL_BITS EHCA_BMASK_IBM(56,59)
+#define QPx_SLVL_VL EHCA_BMASK_IBM(60,63)
+ u64 QPx_IPD;
+#define QPx_IPD_max_static_rate EHCA_BMASK_IBM(56,63)
+/* 0xe0*/
+ u64 QPx_MTU;
+#define QPx_MTU_size EHCA_BMASK_IBM(56,63)
+ u64 QPx_LATO;
+#define QPx_LATO_BITS EHCA_BMASK_IBM(59,63)
+ u64 QPx_RLIMIT;
+#define QPx_RETRY_COUNT EHCA_BMASK_IBM(61,63)
+ u64 QPx_RNRLIMIT;
+#define QPx_RNR_RETRY_COUNT EHCA_BMASK_IBM(61,63)
+/* 0x100*/
+ u64 QPx_T;
+ u64 QPx_SQHP;
+ u64 QPx_SQPTP;
+ u64 QPx_NSPSN;
+#define QPx_NSPSN_value EHCA_BMASK_IBM(40,63)
+/* 0x120*/
+ u64 QPx_NSPSNHWM;
+#define QPx_NSPSNHWM_value EHCA_BMASK_IBM(40,63)
+ u64 reserved1;
+ u64 QPx_SDSI;
+ u64 QPx_SDSBC;
+/* 0x140*/
+ u64 QPx_SQWSIZE;
+#define QPx_SQWSIZE_value EHCA_BMASK_IBM(61,63)
+ u64 QPx_SQWTS;
+ u64 QPx_LSN;
+ u64 QPx_NSSN;
+/* 0x160 */
+ u64 QPx_MOR;
+#define QPx_MOR_value EHCA_BMASK_IBM(48,63)
+ u64 QPx_COR;
+ u64 QPx_SQSIZE;
+#define QPx_SQSIZE_value EHCA_BMASK_IBM(60,63)
+ u64 QPx_ERC;
+/* 0x180*/
+ u64 QPx_RNRRC;
+#define QPx_RNRRESP_value EHCA_BMASK_IBM(59,63)
+ u64 QPx_ERNRWT;
+ u64 QPx_RNRRESP;
+#define QPx_RNRRESP_WTR EHCA_BMASK_IBM(59,63)
+ u64 QPx_LMSNA;
+/* 0x1a0 */
+ u64 QPx_SQHPC;
+ u64 QPx_SQCPTP;
+ u64 QPx_SIGT;
+ u64 QPx_WQECNT;
+/* 0x1c0*/
+
+ u64 QPx_RQHP;
+ u64 QPx_RQPTP;
+ u64 QPx_RQSIZE;
+#define QPx_RQSIZE_value EHCA_BMASK_IBM(60,63)
+ u64 QPx_NRR;
+#define QPx_NRR_value EHCA_BMASK_IBM(61,63)
+/* 0x1e0*/
+ u64 QPx_RDMAC;
+#define QPx_RDMAC_value EHCA_BMASK_IBM(61,63)
+ u64 QPx_NRPSN;
+#define QPx_NRPSN_value EHCA_BMASK_IBM(40,63)
+ u64 QPx_LAPSN;
+#define QPx_LAPSN_value EHCA_BMASK_IBM(40,63)
+ u64 QPx_LCR;
+/* 0x200*/
+ u64 QPx_RWC;
+ u64 QPx_RWVA;
+ u64 QPx_RDSI;
+ u64 QPx_RDSBC;
+/* 0x220*/
+ u64 QPx_RQWSIZE;
+#define QPx_RQWSIZE_value EHCA_BMASK_IBM(61,63)
+ u64 QPx_CRMSN;
+ u64 QPx_RDD;
+#define QPx_RDD_VALUE EHCA_BMASK_IBM(32,63)
+ u64 QPx_LARPSN;
+#define QPx_LARPSN_value EHCA_BMASK_IBM(40,63)
+/* 0x240*/
+ u64 QPx_PD;
+ u64 QPx_SCQN;
+ u64 QPx_RCQN;
+ u64 QPx_AEQN;
+/* 0x260*/
+ u64 QPx_AAELOG;
+ u64 QPx_RAM;
+ u64 QPx_RDMAQE0;
+ u64 QPx_RDMAQE1;
+/* 0x280*/
+ u64 QPx_RDMAQE2;
+ u64 QPx_RDMAQE3;
+ u64 QPx_NRPSNHWM;
+#define QPx_NRPSNHWM_value EHCA_BMASK_IBM(40,63)
+/* 0x298*/
+ u64 reserved[(0x400 - 0x298) / 8];
+/* 0x400 extended data */
+ u64 reserved_ext[(0x500 - 0x400) / 8];
+/* 0x500 */
+ u64 reserved2[(0x1000 - 0x500) / 8];
+/* 0x1000 */
+};
+
+#define QPTEMM_OFFSET(x) offsetof(struct hipz_QPTEMM,x)
+
+/** @brief MRMWPT Entry Memory Map
+ */
+struct hipz_MRMWMM {
+ /* 0x00 */
+ u64 MRx_HCR;
+#define MRx_HCR_LPARID_VALID EHCA_BMASK_IBM(0,0)
+
+ u64 MRx_C;
+ u64 MRx_HERR;
+ u64 MRx_AER;
+ /* 0x20 */
+ u64 MRx_PP;
+ u64 reserved1;
+ u64 reserved2;
+ u64 reserved3;
+ /* 0x40 */
+ u64 reserved4[(0x200 - 0x40) / 8];
+ /* 0x200 */
+ u64 MRx_CTL[64];
+
+};
+
+#define MRMWMM_OFFSET(x) offsetof(struct hipz_MRMWMM,x)
+
+/** @brief QPEDMM
+ */
+struct hipz_QPEDMM {
+ /* 0x00 */
+ u64 reserved0[(0x400) / 8];
+ /* 0x400 */
+ u64 QPEDx_PHH;
+#define QPEDx_PHH_TClass EHCA_BMASK_IBM(4,11)
+#define QPEDx_PHH_HopLimit EHCA_BMASK_IBM(56,63)
+#define QPEDx_PHH_FlowLevel EHCA_BMASK_IBM(12,31)
+ u64 QPEDx_PPSGP;
+#define QPEDx_PPSGP_PPPidx EHCA_BMASK_IBM(0,63)
+ /* 0x410 */
+ u64 QPEDx_PPSGU;
+#define QPEDx_PPSGU_PPPSGID EHCA_BMASK_IBM(0,63)
+ u64 QPEDx_PPDGP;
+ /* 0x420 */
+ u64 QPEDx_PPDGU;
+ u64 QPEDx_APH;
+ /* 0x430 */
+ u64 QPEDx_APSGP;
+ u64 QPEDx_APSGU;
+ /* 0x440 */
+ u64 QPEDx_APDGP;
+ u64 QPEDx_APDGU;
+ /* 0x450 */
+ u64 QPEDx_APAV;
+ u64 QPEDx_APSAV;
+ /* 0x460 */
+ u64 QPEDx_HCR;
+ u64 reserved1[4];
+ /* 0x488 */
+ u64 QPEDx_RRL0;
+ /* 0x490 */
+ u64 QPEDx_RRRKEY0;
+ u64 QPEDx_RRVA0;
+ /* 0x4A0 */
+ u64 reserved2;
+ u64 QPEDx_RRL1;
+ /* 0x4B0 */
+ u64 QPEDx_RRRKEY1;
+ u64 QPEDx_RRVA1;
+ /* 0x4C0 */
+ u64 reserved3;
+ u64 QPEDx_RRL2;
+ /* 0x4D0 */
+ u64 QPEDx_RRRKEY2;
+ u64 QPEDx_RRVA2;
+ /* 0x4E0 */
+ u64 reserved4;
+ u64 QPEDx_RRL3;
+ /* 0x4F0 */
+ u64 QPEDx_RRRKEY3;
+ u64 QPEDx_RRVA3;
+};
+
+#define QPEDMM_OFFSET(x) offsetof(struct hipz_QPEDMM,x)
+
+/** @brief CQ Table Entry Memory Map
+ */
+struct hipz_CQTEMM {
+ u64 CQx_HCR;
+#define CQx_HCR_LPARID_valid EHCA_BMASK_IBM(0,0)
+ u64 CQx_C;
+#define CQx_C_Enable EHCA_BMASK_IBM(0,0)
+#define CQx_C_Disable_Complete EHCA_BMASK_IBM(1,1)
+#define CQx_C_Error_Reset EHCA_BMASK_IBM(23,23)
+ u64 CQx_HERR;
+ u64 CQx_AER;
+/* 0x20 */
+ u64 CQx_PTP;
+ u64 CQx_TP;
+#define CQx_FEC_CQE_cnt EHCA_BMASK_IBM(32,63)
+ u64 CQx_FEC;
+ u64 CQx_FECA;
+#define CQx_FECAdder EHCA_BMASK_IBM(32,63)
+/* 0x40 */
+ u64 CQx_EP;
+#define CQx_EP_Event_Pending EHCA_BMASK_IBM(0,0)
+#define CQx_EQ_number EHCA_BMASK_IBM(0,15)
+#define CQx_EQ_CQtoken EHCA_BMASK_IBM(32,63)
+ u64 CQx_EQ;
+/* 0x50 */
+ u64 reserved1;
+ u64 CQx_N0;
+#define CQx_N0_generate_solicited_comp_event EHCA_BMASK_IBM(0,0)
+/* 0x60 */
+ u64 CQx_N1;
+#define CQx_N1_generate_comp_event EHCA_BMASK_IBM(0,0)
+ u64 reserved2[(0x1000 - 0x60) / 8];
+/* 0x1000 */
+};
+
+#define CQTEMM_OFFSET(x) offsetof(struct hipz_CQTEMM,x)
+
+/** @brief EQ Table Entry Memory Map
+ */
+struct hipz_EQTEMM {
+ u64 EQx_HCR;
+#define EQx_HCR_LPARID_valid EHCA_BMASK_IBM(0,0)
+#define EQx_HCR_ENABLE_PSB EHCA_BMASK_IBM(8,8)
+ u64 EQx_C;
+#define EQx_C_Enable EHCA_BMASK_IBM(0,0)
+#define EQx_C_Error_Reset EHCA_BMASK_IBM(23,23)
+#define EQx_C_Comp_Event EHCA_BMASK_IBM(17,17)
+
+ u64 EQx_HERR;
+ u64 EQx_AER;
+/* 0x20 */
+ u64 EQx_PTP;
+ u64 EQx_TP;
+ u64 EQx_SSBA;
+ u64 EQx_PSBA;
+
+/* 0x40 */
+ u64 EQx_CEC;
+ u64 EQx_MEQL;
+ u64 EQx_XISBI;
+ u64 EQx_XISC;
+/* 0x60 */
+ u64 EQx_IT;
+
+};
+#define EQTEMM_OFFSET(x) offsetof(struct hipz_EQTEMM,x)
+
+#endif
diff --git a/drivers/infiniband/hw/ehca/hipz_structs.h b/drivers/infiniband/hw/ehca/hipz_structs.h
new file mode 100644
index 0000000..bd2dcad
--- /dev/null
+++ b/drivers/infiniband/hw/ehca/hipz_structs.h
@@ -0,0 +1,54 @@
+/*
+ * IBM eServer eHCA Infiniband device driver for Linux on POWER
+ *
+ * Infiniband Firmware structure definition
+ *
+ * Authors: Waleri Fomin <[email protected]>
+ * Christoph Raisch <[email protected]>
+ *
+ * Copyright (c) 2005 IBM Corporation
+ *
+ * All rights reserved.
+ *
+ * This source code is distributed under a dual license of GPL v2.0 and OpenIB
+ * BSD.
+ *
+ * OpenIB BSD License
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
+ * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $Id: hipz_structs.h,v 1.8 2006/02/06 10:17:34 schickhj Exp $
+ */
+
+#ifndef __HIPZ_STRUCTS_H__
+#define __HIPZ_STRUCTS_H__
+
+/* access control defines for MR/MW */
+#define HIPZ_ACCESSCTRL_L_WRITE 0x00800000
+#define HIPZ_ACCESSCTRL_R_WRITE 0x00400000
+#define HIPZ_ACCESSCTRL_R_READ 0x00200000
+#define HIPZ_ACCESSCTRL_R_ATOMIC 0x00100000
+#define HIPZ_ACCESSCTRL_MW_BIND 0x00080000
+
+#endif /* __IPZ_IF_H__ */

2006-02-18 01:54:29

by Greg KH

[permalink] [raw]
Subject: Re: [PATCH 04/22] OF adapter probing

On Fri, Feb 17, 2006 at 04:57:14PM -0800, Roland Dreier wrote:
> +int hipz_count_adapters(void)
> +{
> + int num = 0;
> + struct device_node *dn = NULL;
> +
> + EDEB_EN(7, "");
> +
> + while ((dn = of_find_node_by_name(dn, "lhca"))) {
> + num++;
> + }

The { } are not needed here.

> +
> + of_node_put(dn);
> +
> + if (num == 0) {
> + EDEB_ERR(4, "No lhca node name was found in the"
> + " Open Firmware device tree.");
> + return -ENODEV;
> + }
> +
> + EDEB(6, " ... found %x adapter(s)", num);
> +
> + EDEB_EX(7, "num=%x", num);
> +
> + return num;
> +}
> +
> +int hipz_probe_adapters(char **adapter_list)
> +{
> + int ret = 0;
> + int num = 0;
> + struct device_node *dn = NULL;
> + char *loc;
> +
> + EDEB_EN(7, "adapter_list=%p", adapter_list);
> +
> + while ((dn = of_find_node_by_name(dn, "lhca"))) {
> + loc = get_property(dn, "ibm,loc-code", NULL);
> + if (loc == NULL) {
> + EDEB_ERR(4, "No ibm,loc-code property for"
> + " lhca Open Firmware device tree node.");
> + ret = -ENODEV;
> + goto probe_adapters0;
> + }
> +
> + adapter_list[num] = loc;
> + EDEB(6, " ... found adapter[%x] with loc-code: %s", num, loc);
> + num++;
> + }
> +
> + probe_adapters0:
> + of_node_put(dn);

Please use tabs everywhere.

Hm, wait, that's a label. Put it where it belongs, over on the left
please.

thanks,

greg k-h

2006-02-18 01:58:17

by Greg KH

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

On Fri, Feb 17, 2006 at 04:57:07PM -0800, Roland Dreier wrote:
> From: Roland Dreier <[email protected]>
>
> This is a very large file with way too much code for a .h file.
> The functions look too big to be inlined also. Is there any way
> for this code to move to a .c file?

Roland, your comments are fine, but what about the original author's
descriptions of what each patch is?

Come on, IBM allows developers to post code to lkml, just look at the
archives for proof. For them to use a proxy like this is very strange,
and also, there is no Signed-off-by: record from the original authors,
which is not ok.

And why aren't you using the standard firmware interface in the kernel?

> +#ifndef CONFIG_PPC64
> +#ifndef Z_SERIES
> +#warning "included with wrong target, this is a p file"
> +#endif
> +#endif

It's a "p" file? What's that?

Is this even needed?

thanks,

greg k-h

2006-02-18 02:05:32

by Roland Dreier

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

Greg> Roland, your comments are fine, but what about the original
Greg> author's descriptions of what each patch is?

This is actually me breaking up a giant driver into pieces small
enough to post to lkml without hitting the 100 KB limit.

This is just an RFC -- I assume the driver is going to get merged in
the end as one big git changeset with a changelog like "add driver for
IBM eHCA InfiniBand adapters".

Greg> Come on, IBM allows developers to post code to lkml, just
Greg> look at the archives for proof. For them to use a proxy
Greg> like this is very strange, and also, there is no
Greg> Signed-off-by: record from the original authors, which is
Greg> not ok.

Well, the eHCA guys tell me that they can't post patches to lkml.

You're right that the final merge will have to have an IBM
Signed-off-by: line but as I said this is just an RFC. There are many
reasons beyond patch format issues that make this stuff unmergeable as-is.

Greg> And why aren't you using the standard firmware interface in
Greg> the kernel?

This is actually stuff to talk to the firmware that sits below the
kernel on IBM ppc64 machines, not an interface to load device firmware
from userspace.

- R.

2006-02-18 10:59:50

by Heiko Carstens

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

> Come on, IBM allows developers to post code to lkml, just look at the
> archives for proof. For them to use a proxy like this is very strange,

Things aren't always that easy at IBM. You should know best :)

Heiko

2006-02-18 12:17:57

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 01/22] Add powerpc-specific clear_cacheline(), which just compiles to "dcbz".

On Fri, Feb 17, 2006 at 04:57:04PM -0800, Roland Dreier wrote:
> From: Roland Dreier <[email protected]>
>
> This is horribly non-portable.

Yes. If this is needed it should go to an asm/ header, not in a driver.

2006-02-18 12:19:14

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

On Fri, Feb 17, 2006 at 04:57:07PM -0800, Roland Dreier wrote:
> From: Roland Dreier <[email protected]>
>
> This is a very large file with way too much code for a .h file.
> The functions look too big to be inlined also. Is there any way
> for this code to move to a .c file?
> ---
>
> drivers/infiniband/hw/ehca/hcp_if.h | 2022 +++++++++++++++++++++++++++++++++++

> +#include "ehca_tools.h"
> +#include "hipz_structs.h"
> +#include "ehca_classes.h"
> +
> +#ifndef EHCA_USE_HCALL
> +#include "hcz_queue.h"
> +#include "hcz_mrmw.h"
> +#include "hcz_emmio.h"
> +#include "sim_prom.h"
> +#endif
> +#include "hipz_fns.h"
> +#include "hcp_sense.h"
> +#include "ehca_irq.h"
> +
> +#ifndef CONFIG_PPC64
> +#ifndef Z_SERIES
> +#warning "included with wrong target, this is a p file"
> +#endif
> +#endif
> +
> +#ifdef EHCA_USE_HCALL
> +
> +#ifndef EHCA_USERDRIVER
> +#include "hcp_phyp.h"
> +#else
> +#include "testbench/hcallbridge.h"
> +#endif
> +#endif

the ifdefs should all go away and the build system should make sure it's
only built for the right platforms.

2006-02-18 12:20:16

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

On Fri, Feb 17, 2006 at 06:04:56PM -0800, Roland Dreier wrote:
> Greg> Roland, your comments are fine, but what about the original
> Greg> author's descriptions of what each patch is?
>
> This is actually me breaking up a giant driver into pieces small
> enough to post to lkml without hitting the 100 KB limit.
>
> This is just an RFC -- I assume the driver is going to get merged in
> the end as one big git changeset with a changelog like "add driver for
> IBM eHCA InfiniBand adapters".
>
> Greg> Come on, IBM allows developers to post code to lkml, just
> Greg> look at the archives for proof. For them to use a proxy
> Greg> like this is very strange, and also, there is no
> Greg> Signed-off-by: record from the original authors, which is
> Greg> not ok.
>
> Well, the eHCA guys tell me that they can't post patches to lkml.

Then they lie. And not posting to lkml is a good reason not to merge
an otherwise perfect driver. (which this one is far from)

2006-02-18 12:23:18

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 03/22] pHype specific stuff

> +u64 hipz_galpa_load(struct h_galpa galpa, u32 offset)
> +{
> + u64 addr = galpa.fw_handle + offset;
> + u64 out;
> + EDEB_EN(7, "addr=%lx offset=%x ", addr, offset);
> + out = *(u64 *) addr;

why does this cast an u64 to a pointer?

> +#ifndef EHCA_USERDRIVER
> +inline static int hcall_map_page(u64 physaddr, u64 * mapaddr)
> +{
> + *mapaddr = (u64)(ioremap(physaddr, 4096));
> +
> + EDEB(7, "ioremap physaddr=%lx mapaddr=%lx", physaddr, *mapaddr);
> + return 0;

ioremap returns void __iomem * and casting that to any integer type is
wrong.

> +inline static int hcall_unmap_page(u64 mapaddr)
> +{
> + EDEB(7, "mapaddr=%lx", mapaddr);
> + iounmap((void *)(mapaddr));
> + return 0;

ditto for iounmap and casting back.

guys, please run this driver through sparse, thanks.

> + /* if phype returns LongBusyXXX,
> + * we retry several times, but not forever */
> + for (i = 0; i < 5; i++) {
> + __asm__ __volatile__("mr 3,%10\n"
> + "mr 4,%11\n"
> + "mr 5,%12\n"

assembly code under drivers/ is not acceptable. please create
an <asm/ehca.h> for it or something similar.

2006-02-18 12:26:41

by Muli Ben-Yehuda

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

On Sat, Feb 18, 2006 at 12:20:11PM +0000, Christoph Hellwig wrote:

> > Well, the eHCA guys tell me that they can't post patches to lkml.
>
> Then they lie. And not posting to lkml is a good reason not to merge
> an otherwise perfect driver. (which this one is far from)

I don't speak for IBM or the authors, but there are perfectly
reasonable reasons to ask someone else to post a patch on your behalf
- including but not limited to to only being able to use Lotus Notes
with one's IBM email. I'm sure you've all seen the travesties that
Notes inflicts on inline patches.

Cheers,
Muli
--
Muli Ben-Yehuda
http://www.mulix.org | http://mulix.livejournal.com/

2006-02-18 12:29:14

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

On Sat, Feb 18, 2006 at 02:26:31PM +0200, Muli Ben-Yehuda wrote:
> I don't speak for IBM or the authors, but there are perfectly
> reasonable reasons to ask someone else to post a patch on your behalf
> - including but not limited to only being able to use Lotus Notes
> with one's IBM email. I'm sure you've all seen the travesties that
> Notes inflicts on inline patches.

sure. and there's free webmail accounts that take about 10 minutes to
setup as well as various people offering shell access to linux machines
if you ask nicely. so this really is not an issue. I think this is more
about ibm politics (especially in boeblingen) sometimes making it pretty
hard to post things. But that doesn't mean it's impossible, it just means
they didn't try hard enough.

2006-02-18 12:32:41

by Arjan van de Ven

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

On Sat, 2006-02-18 at 14:26 +0200, Muli Ben-Yehuda wrote:
> On Sat, Feb 18, 2006 at 12:20:11PM +0000, Christoph Hellwig wrote:
>
> > > Well, the eHCA guys tell me that they can't post patches to lkml.
> >
> > Then they lie. And not posting to lkml is a good reason not to merge
> > an otherwise perfect driver. (which this one is far from)
>
> I don't speak for IBM or the authors, but there are perfectly
> reasonable reasons to ask someone else to post a patch on your behalf
> > - including but not limited to only being able to use Lotus Notes
> with one's IBM email. I'm sure you've all seen the travesties that
> Notes inflicts on inline patches.

there are ways around that with webmail etc.

The bigger issue is: if people can't be bothered to do those steps, why
would they be bothered to do this for maintenance and bugfixes etc etc?
Basically it's now already a de-facto unmaintained driver....


2006-02-18 12:46:31

by Heiko J Schick

[permalink] [raw]
Subject: Re: [openib-general] [PATCH 04/22] OF adapter probing

Hello Roland,

sorry, this file is not used anymore. The functions

int hipz_count_adapters(void);
int hipz_probe_adapters(char **adapter_list);
u64 hipz_get_adapter_handle(char *adapter);

are nowadays handled by the IBMEBUS [1] bus device driver.

[1]: http://www.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=d7a301033f1990188f65abf4fe8e5b90ef0e3888

Regards,
Heiko

On Feb 18, 2006, at 1:57 AM, Roland Dreier wrote:

> From: Roland Dreier <[email protected]>
>
> hipz_probe_adapters() looks a little funny -- it seems to bail out
> of all the remaining adapters if one of them isn't quite right.
> ---
>
> drivers/infiniband/hw/ehca/hcp_sense.c | 144 ++++++++++++++++++++++++++++++++
> drivers/infiniband/hw/ehca/hcp_sense.h | 136 ++++++++++++++++++++++++++++++
> 2 files changed, 280 insertions(+), 0 deletions(-)
>
> diff --git a/drivers/infiniband/hw/ehca/hcp_sense.c b/drivers/infiniband/hw/ehca/hcp_sense.c
> new file mode 100644
> index 0000000..83fa4a3
> --- /dev/null
> +++ b/drivers/infiniband/hw/ehca/hcp_sense.c
> @@ -0,0 +1,144 @@
> +/*
> + * IBM eServer eHCA Infiniband device driver for Linux on POWER
> + *
> + * ehca detection and query code for POWER
> + *
> + * Authors: Heiko J Schick <[email protected]>
> + *
> + * Copyright (c) 2005 IBM Corporation
> + *
> + * All rights reserved.
> + *
> + * This source code is distributed under a dual license of GPL v2.0 and OpenIB
> + * BSD.
> + *
> + * OpenIB BSD License
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions are met:
> + *
> + * Redistributions of source code must retain the above copyright notice, this
> + * list of conditions and the following disclaimer.
> + *
> + * Redistributions in binary form must reproduce the above copyright notice,
> + * this list of conditions and the following disclaimer in the documentation
> + * and/or other materials
> + * provided with the distribution.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
> + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
> + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
> + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
> + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
> + * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
> + * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
> + * POSSIBILITY OF SUCH DAMAGE.
> + *
> + * $Id: hcp_sense.c,v 1.10 2006/02/06 10:17:34 schickhj Exp $
> + */
> +
> +#define DEB_PREFIX "snse"
> +
> +#include "ehca_kernel.h"
> +#include "ehca_tools.h"
> +
> +int hipz_count_adapters(void)
> +{
> + int num = 0;
> + struct device_node *dn = NULL;
> +
> + EDEB_EN(7, "");
> +
> + while ((dn = of_find_node_by_name(dn, "lhca"))) {
> + num++;
> + }
> +
> + of_node_put(dn);
> +
> + if (num == 0) {
> + EDEB_ERR(4, "No lhca node name was found in the"
> + " Open Firmware device tree.");
> + return -ENODEV;
> + }
> +
> + EDEB(6, " ... found %x adapter(s)", num);
> +
> + EDEB_EX(7, "num=%x", num);
> +
> + return num;
> +}
> +
> +int hipz_probe_adapters(char **adapter_list)
> +{
> + int ret = 0;
> + int num = 0;
> + struct device_node *dn = NULL;
> + char *loc;
> +
> + EDEB_EN(7, "adapter_list=%p", adapter_list);
> +
> + while ((dn = of_find_node_by_name(dn, "lhca"))) {
> + loc = get_property(dn, "ibm,loc-code", NULL);
> + if (loc == NULL) {
> + EDEB_ERR(4, "No ibm,loc-code property for"
> + " lhca Open Firmware device tree node.");
> + ret = -ENODEV;
> + goto probe_adapters0;
> + }
> +
> + adapter_list[num] = loc;
> + EDEB(6, " ... found adapter[%x] with loc-code: %s", num, loc);
> + num++;
> + }
> +
> + probe_adapters0:
> + of_node_put(dn);
> +
> + EDEB_EX(7, "ret=%x", ret);
> +
> + return ret;
> +}
> +
> +u64 hipz_get_adapter_handle(char *adapter)
> +{
> + struct device_node *dn = NULL;
> + char *loc;
> + u64 *u64data = NULL;
> + u64 ret = 0;
> +
> + EDEB_EN(7, "adapter=%p", adapter);
> +
> + while ((dn = of_find_node_by_name(dn, "lhca"))) {
> + loc = get_property(dn, "ibm,loc-code", NULL);
> + if (loc == NULL) {
> + EDEB_ERR(4, "No ibm,loc-code property for"
> + " lhca Open Firmware device tree node.");
> + goto get_adapter_handle0;
> + }
> +
> + if (strcmp(loc, adapter) == 0) {
> + u64data =
> + (u64 *) get_property(dn, "ibm,hca-handle", NULL);
> + break;
> + }
> + }
> +
> + if (u64data == NULL) {
> + EDEB_ERR(4, "No ibm,hca-handle property for"
> + " lhca Open Firmware device tree node with"
> + " ibm,loc-code: %s.", adapter);
> + goto get_adapter_handle0;
> + }
> +
> + ret = *u64data;
> +
> + get_adapter_handle0:
> + of_node_put(dn);
> +
> + EDEB_EX(7, "ret=%lx",ret);
> +
> + return ret;
> +}
> diff --git a/drivers/infiniband/hw/ehca/hcp_sense.h b/drivers/infiniband/hw/ehca/hcp_sense.h
> new file mode 100644
> index 0000000..a49040b
> --- /dev/null
> +++ b/drivers/infiniband/hw/ehca/hcp_sense.h
> @@ -0,0 +1,136 @@
> +/*
> + * IBM eServer eHCA Infiniband device driver for Linux on POWER
> + *
> + * ehca detection and query code for POWER
> + *
> + * Authors: Heiko J Schick <[email protected]>
> + *
> + * Copyright (c) 2005 IBM Corporation
> + *
> + * All rights reserved.
> + *
> + * This source code is distributed under a dual license of GPL v2.0 and OpenIB
> + * BSD.
> + *
> + * OpenIB BSD License
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions are met:
> + *
> + * Redistributions of source code must retain the above copyright notice, this
> + * list of conditions and the following disclaimer.
> + *
> + * Redistributions in binary form must reproduce the above copyright notice,
> + * this list of conditions and the following disclaimer in the documentation
> + * and/or other materials
> + * provided with the distribution.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
> + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
> + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
> + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
> + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
> + * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
> + * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
> + * POSSIBILITY OF SUCH DAMAGE.
> + *
> + * $Id: hcp_sense.h,v 1.11 2006/02/06 10:17:34 schickhj Exp $
> + */
> +
> +#ifndef HCP_SENSE_H
> +#define HCP_SENSE_H
> +
> +int hipz_count_adapters(void);
> +int hipz_probe_adapters(char **adapter_list);
> +u64 hipz_get_adapter_handle(char *adapter);
> +
> +/* query hca response block */
> +struct query_hca_rblock {
> + u32 cur_reliable_dg;
> + u32 cur_qp;
> + u32 cur_cq;
> + u32 cur_eq;
> + u32 cur_mr;
> + u32 cur_mw;
> + u32 cur_ee_context;
> + u32 cur_mcast_grp;
> + u32 cur_qp_attached_mcast_grp;
> + u32 reserved1;
> + u32 cur_ipv6_qp;
> + u32 cur_eth_qp;
> + u32 cur_hp_mr;
> + u32 reserved2[3];
> + u32 max_rd_domain;
> + u32 max_qp;
> + u32 max_cq;
> + u32 max_eq;
> + u32 max_mr;
> + u32 max_hp_mr;
> + u32 max_mw;
> + u32 max_mrwpte;
> + u32 max_special_mrwpte;
> + u32 max_rd_ee_context;
> + u32 max_mcast_grp;
> + u32 max_qps_attached_all_mcast_grp;
> + u32 max_qps_attached_mcast_grp;
> + u32 max_raw_ipv6_qp;
> + u32 max_raw_ethy_qp;
> + u32 internal_clock_frequency;
> + u32 max_pd;
> + u32 max_ah;
> + u32 max_cqe;
> + u32 max_wqes_wq;
> + u32 max_partitions;
> + u32 max_rr_ee_context;
> + u32 max_rr_qp;
> + u32 max_rr_hca;
> + u32 max_act_wqs_ee_context;
> + u32 max_act_wqs_qp;
> + u32 max_sge;
> + u32 max_sge_rd;
> + u32 memory_page_size_supported;
> + u64 max_mr_size;
> + u32 local_ca_ack_delay;
> + u32 num_ports;
> + u32 vendor_id;
> + u32 vendor_part_id;
> + u32 hw_ver;
> + u64 node_guid;
> + u64 hca_cap_indicators;
> + u32 data_counter_register_size;
> + u32 max_shared_rq;
> + u32 max_isns_eq;
> + u32 max_neq;
> +} __attribute__ ((packed));
> +
> +/* query port response block */
> +struct query_port_rblock {
> + u32 state;
> + u32 bad_pkey_cntr;
> + u32 lmc;
> + u32 lid;
> + u32 subnet_timeout;
> + u32 qkey_viol_cntr;
> + u32 sm_sl;
> + u32 sm_lid;
> + u32 capability_mask;
> + u32 init_type_reply;
> + u32 pkey_tbl_len;
> + u32 gid_tbl_len;
> + u64 gid_prefix;
> + u32 port_nr;
> + u16 pkey_entries[16];
> + u8 reserved1[32];
> + u32 trent_size;
> + u32 trbuf_size;
> + u64 max_msg_sz;
> + u32 max_mtu;
> + u32 vl_cap;
> + u8 reserved2[1900];
> + u64 guid_entries[255];
> +} __attribute__ ((packed));
> +
> +#endif
> _______________________________________________
> openib-general mailing list
> [email protected]
> http://openib.org/mailman/listinfo/openib-general
>
> To unsubscribe, please visit http://openib.org/mailman/listinfo/openib-general
>

2006-02-18 15:09:37

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [openib-general] [PATCH 08/22] Generic ehca headers

On Fri, Feb 17, 2006 at 04:57:23PM -0800, Roland Dreier wrote:
> From: Roland Dreier <[email protected]>
>
> The defines of TRUE and FALSE look rather useless. Why are they needed?
>
> What is struct ehca_cache for? It doesn't seem to be used anywhere.
>
> ehca_kv_to_g() looks completely horrible. The whole idea of using
> vmalloc()ed kernel memory to do DMA seems unacceptable to me.

When you want to do scatter-gather dma on kernel-virtual contingous
areas allocate the pages individually and map them into kva using
vmap(). Then dma can be performed using dma_map_page, or in case
you have lots of pages dma_map_sg after creating an S/G list.

2006-02-18 16:02:50

by Roland Dreier

[permalink] [raw]
Subject: Re: [openib-general] [PATCH 04/22] OF adapter probing

Heiko> Hello Roland, sorry, this file is not used anymore. The
Heiko> functions

OK, please delete it from the svn tree.

Thanks,
Roland

2006-02-18 16:32:36

by Roland Dreier

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

Arjan> The bigger issue is: if people can't be bothered to do
Arjan> those steps, why would they be bothered to do this for
Arjan> maintenance and bugfixes etc etc? Basically it's now
Arjan> already a de-facto unmaintained driver....

I don't think that's really a fair statement. The IBM people have
been active and responsive in maintaining their driving in the
openib.org svn tree. However, they asked me to post their driver for
review because it would be difficult for them to do it.

IBM people: can you clarify the restrictions you have? Why do you
feel you can't post your own driver for review? Will you be able to
post smaller patches to lkml in the future if the driver is merged?

Thanks,
Roland

2006-02-18 17:05:59

by Arjan van de Ven

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

On Sat, 2006-02-18 at 08:32 -0800, Roland Dreier wrote:
> Arjan> The bigger issue is: if people can't be bothered to do
> Arjan> those steps, why would they be bothered to do this for
> Arjan> maintenance and bugfixes etc etc? Basically it's now
> Arjan> already a de-facto unmaintained driver....
>
> I don't think that's really a fair statement.

It's a concern at least; if they're just having trouble posting really
big files that's one thing.. if they're not allowed to post at all
that's another.

> IBM people: can you clarify the restrictions you have? Why do you
> feel you can't post your own driver for review? Will you be able to
> post smaller patches to lkml in the future if the driver is merged?

And can you respond to questions and user questions on lkml?


2006-02-18 18:15:32

by Greg KH

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

On Sat, Feb 18, 2006 at 08:32:28AM -0800, Roland Dreier wrote:
> Arjan> The bigger issue is: if people can't be bothered to do
> Arjan> those steps, why would they be bothered to do this for
> Arjan> maintenance and bugfixes etc etc? Basically it's now
> Arjan> already a de-facto unmaintained driver....
>
> I don't think that's really a fair statement. The IBM people have
> been active and responsive in maintaining their driver in the
> openib.org svn tree. However, they asked me to post their driver for
> review because it would be difficult for them to do it.

Checking stuff into a private svn tree is vastly different from posting
to lkml in public. In fact, it looks like the svn tree is so far ahead
of the in-kernel stuff, that most people are just using it instead of
the in-kernel code.

I know at least one company has asked a distro to just accept the svn
snapshot over the in-kernel IB code, which makes me wonder if the
in-kernel stuff is even useful to people? Why have it, if companies
insist on using the out-of-tree stuff instead?

thanks,

greg k-h

2006-02-18 18:19:36

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

On Sat, Feb 18, 2006 at 10:15:09AM -0800, Greg KH wrote:
> On Sat, Feb 18, 2006 at 08:32:28AM -0800, Roland Dreier wrote:
> > Arjan> The bigger issue is: if people can't be bothered to do
> > Arjan> those steps, why would they be bothered to do this for
> > Arjan> maintenance and bugfixes etc etc? Basically it's now
> > Arjan> already a de-facto unmaintained driver....
> >
> > I don't think that's really a fair statement. The IBM people have
> > been active and responsive in maintaining their driver in the
> > openib.org svn tree. However, they asked me to post their driver for
> > review because it would be difficult for them to do it.
>
> Checking stuff into a private svn tree is vastly different from posting
> to lkml in public. In fact, it looks like the svn tree is so far ahead
> of the in-kernel stuff, that most people are just using it instead of
> the in-kernel code.
>
> I know at least one company has asked a distro to just accept the svn
> snapshot over the in-kernel IB code, which makes me wonder if the
> in-kernel stuff is even useful to people? Why have it, if companies
> insist on using the out-of-tree stuff instead?

The openib tree isn't private. It's mostly just a staging area for
development. Any company that wants it included into a distro release
is completely clueless.

2006-02-18 18:53:05

by Roland Dreier

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

Greg> Checking stuff into a private svn tree is vastly different
Greg> from posting to lkml in public. In fact, it looks like the
Greg> svn tree is so far ahead of the in-kernel stuff, that most
Greg> people are just using it instead of the in-kernel code.

It's not a private svn tree -- the IBM ehca development is available
to anyone via svn at https://openib.org/svn/gen2/trunk/src/linux-kernel/infiniband/hw/ehca

Greg> I know at least one company has asked a distro to just
Greg> accept the svn snapshot over the in-kernel IB code, which
Greg> makes me wonder if the in-kernel stuff is even useful to
Greg> people? Why have it, if companies insist on using the
Greg> out-of-tree stuff instead?

The IB driver stack is still in its early stages, so although I'm
pushing for things to be merged as fast as possible, the unfortunate
fact is that lots of things that people want to use (including the IBM
ehca driver) are not upstream and are not ready to go upstream yet.
But that doesn't mean we should give up on merging them.

Distro politics are just distro politics -- and there will always be
pressure on distros to ship stuff that's not upstream yet.

- R.

2006-02-18 19:53:42

by Greg KH

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

On Sat, Feb 18, 2006 at 10:52:58AM -0800, Roland Dreier wrote:
> Greg> Checking stuff into a private svn tree is vastly different
> Greg> from posting to lkml in public. In fact, it looks like the
> Greg> svn tree is so far ahead of the in-kernel stuff, that most
> Greg> people are just using it instead of the in-kernel code.
>
> It's not a private svn tree -- the IBM ehca development is available
> to anyone via svn at https://openib.org/svn/gen2/trunk/src/linux-kernel/infiniband/hw/ehca

Sorry, I didn't mean to say "private", but rather, "separate".
Doing kernel development in a separate development tree from the
mainline kernel is very problematic, as has been documented many times
in the past.

> Distro politics are just distro politics -- and there will always be
> pressure on distros to ship stuff that's not upstream yet.

Luckily the distros know better than to accept this anymore, as they
have been burned too many times in the past...

thanks,

greg k-h

2006-02-18 21:32:00

by Roland Dreier

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

Greg> Sorry, I didn't mean to say "private", but rather,
Greg> "separate". Doing kernel development in a separate
Greg> development tree from the mainline kernel is very
Greg> problematic, as has been documented many times in the past.

As a general rule I agree with that. However, the openib svn tree
we're talking about is not some project that is off in space never
merging with the kernel; as Christoph said, it's really just a staging
area for stuff that isn't ready for upstream yet.

Perhaps it would be more politically correct to use git to develop
kernel code, but in the end that's really just a technical difference
that shouldn't matter.

Roland> Distro politics are just distro politics -- and there will
Roland> always be pressure on distros to ship stuff that's not
Roland> upstream yet.

Greg> Luckily the distros know better than to accept this anymore,
Greg> as they have been burned too many times in the past...

OK, that's great. But now I don't understand your original point.
You say there are people putting pressure on distros to ship what's in
openib svn rather than the upstream kernel, but if the distros are
going to ignore them, what does it matter?

And this thread started with me trying to help the IBM people make
progress towards merging a big chunk of that svn tree upstream. That
should make you happy, right?

- R.

2006-02-18 23:30:04

by Greg KH

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

On Sat, Feb 18, 2006 at 01:31:52PM -0800, Roland Dreier wrote:
> Greg> Sorry, I didn't mean to say "private", but rather,
> Greg> "separate". Doing kernel development in a separate
> Greg> development tree from the mainline kernel is very
> Greg> problematic, as has been documented many times in the past.
>
> As a general rule I agree with that. However, the openib svn tree
> we're talking about is not some project that is off in space never
> merging with the kernel; as Christoph said, it's really just a staging
> area for stuff that isn't ready for upstream yet.
>
> Perhaps it would be more politically correct to use git to develop
> kernel code, but in the end that's really just a technical difference
> that shouldn't matter.

Yes, that doesn't matter. But it seems that the svn tree is vastly
different from the in-kernel code. So much so that some companies feel
that the in-kernel stuff just isn't worth running at all.

> Roland> Distro politics are just distro politics -- and there will
> Roland> always be pressure on distros to ship stuff that's not
> Roland> upstream yet.
>
> Greg> Luckily the distros know better than to accept this anymore,
> Greg> as they have been burned too many times in the past...
>
> OK, that's great. But now I don't understand your original point.
> You say there are people putting pressure on distros to ship what's in
> openib svn rather than the upstream kernel, but if the distros are
> going to ignore them, what does it matter?

It takes a _lot_ of effort to ignore them, as it's very difficult to do
so. Especially when companies try to play the different distros off of
each other, but that's not an issue that the mainline kernel developers
need to worry about :)

> And this thread started with me trying to help the IBM people make
> progress towards merging a big chunk of that svn tree upstream. That
> should make you happy, right?

Yes, that does make me happy. But it doesn't make me happy to see IBM
not being able to participate in kernel development by posting and
defending their own code to lkml. I thought IBM knew better than
that...

thanks,

greg k-h

2006-02-19 00:09:38

by Roland Dreier

[permalink] [raw]
Subject: Re: [PATCH 02/22] Firmware interface code for IB device.

Greg> Yes, that doesn't matter. But it seems that the svn tree is
Greg> vastly different from the in-kernel code. So much so that
Greg> some companies feel that the in-kernel stuff just isn't
Greg> worth running at all.

I don't want to belabor this issue... but the svn tree is not vastly
different than what's in the kernel. It has some things that aren't
upstream yet, and which are important to some people. For example,
the IBM ehca driver we're talking about, as well as the PathScale
driver, SDP (sockets direct protocol), etc. It just takes time for
this new code to get to the point where both the developers of the new
stuff feel it's ready to be merged, and the kernel community agrees
that it should be merged.

Greg> Yes, that does make me happy. But it doesn't make me happy
Greg> to see IBM not being able to participate in kernel
Greg> development by posting and defending their own code to lkml.
Greg> I thought IBM knew better than that...

Agreed. But let's not get sidetracked on that internal IBM issue.
The ehca developers have assured me that they can and will participate
in the thread reviewing their driver. It seems like it's better for
me to help them work around their internal problems by acting as a
proxy, than for me to delay merging their driver just because someone
in IBM management is clueless.

- R.

2006-02-20 15:03:32

by Anton Blanchard

[permalink] [raw]
Subject: Re: [PATCH 01/22] Add powerpc-specific clear_cacheline(), which just compiles to "dcbz".


Hi,

> This is horribly non-portable. How much of a performance difference
> does it make? How does it do on ppc64 systems where the cacheline
> size is not 32?

Yes, if anything we should catch cacheline-aligned, multiple-cacheline-
sized zeroing in memset().

Anton

2006-02-20 15:06:07

by Christoph Raisch

[permalink] [raw]
Subject: Re: [PATCH 00/22] [RFC] IBM eHCA InfiniBand adapter driver


Roland,
as you already stated, we really have a problem in that we're not able to send
"large" pieces of code to the kernel mailing list.
It's perfectly OK for us to send patches to the openib.org mailing list and
svn.
This is something we are still trying to resolve with legal.
So thank you, Roland, for acting as a proxy here...
We have the OK to contribute to any ehca-related discussion on the kernel
mailing list and the ppc64 mailing list, and are absolutely willing to do so!

Adding a new driver for complex new hardware isn't the regular Linux
development case, especially if there's no base code in the Linux kernel to
patch against...
In our case this patch resulted in 22 postings.
Some people already noticed that there's still quite some road ahead of
us... but we're absolutely willing to work through that, and we had to start
at some place.
Some comments will result in modifications to all files.
I guess posting 22 new patch files (diff against NIL) each week is sort of
a DoS attack on the mailing list and we'll end up in people's spam folders
pretty quickly...
So what's the recommended way to proceed here?


Gruss / Regards . . . Christoph Raisch

christoph raisch, HCAD teamlead

Roland Dreier wrote on 18.02.2006 01:55:32:

> Here's a series of patches that add an InfiniBand adapter driver
> for IBM eHCA hardware. Please look it over with an eye towards issues
> that need to be addressed before merging this upstream.
>

2006-02-20 15:13:38

by Anton Blanchard

[permalink] [raw]
Subject: Re: [PATCH 07/22] Hypercall definitions


Hi,

> Do these defines belong in the ehca driver, or should they be put
> somewhere in generic hypercall support?

Agreed, I think they should go into include/asm-powerpc/hvcall.h

Anton

2006-02-20 15:13:37

by Anton Blanchard

[permalink] [raw]
Subject: Re: [PATCH 03/22] pHype specific stuff


Hi,

> +inline static u32 getLongBusyTimeSecs(int longBusyRetCode)
> +{
> + switch (longBusyRetCode) {
> + case H_LongBusyOrder1msec:
> + return 1;
> + case H_LongBusyOrder10msec:
> + return 10;
> + case H_LongBusyOrder100msec:
> + return 100;
> + case H_LongBusyOrder1sec:
> + return 1000;
> + case H_LongBusyOrder10sec:
> + return 10000;
> + case H_LongBusyOrder100sec:
> + return 100000;
> + default:
> + return 1;
> + } /* eof switch */
> +}

Since this actually returns milliseconds it might be worth making it
obvious in the function name. Also no need to use studly caps for the
function name and variable. We will fix the studly caps H_LongBusy*
stuff another day :)
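[Editor's note: for illustration, the renaming suggestion above could look like the sketch below. The H_LONG_BUSY_* spellings and numeric values here are stand-ins, since the real return codes live in the pSeries hypervisor call headers.]

```c
/* Stand-in values for the hypervisor "long busy" return codes; the
 * real definitions belong in the pSeries hvcall headers. */
enum {
	H_LONG_BUSY_ORDER_1_MSEC   = 9900,
	H_LONG_BUSY_ORDER_10_MSEC  = 9901,
	H_LONG_BUSY_ORDER_100_MSEC = 9902,
	H_LONG_BUSY_ORDER_1_SEC    = 9903,
	H_LONG_BUSY_ORDER_10_SEC   = 9904,
	H_LONG_BUSY_ORDER_100_SEC  = 9905,
};

/* Same mapping as the quoted helper, but the name now states the unit
 * (milliseconds) and drops the studly caps. */
static inline unsigned int get_longbusy_msecs(int longbusy_rc)
{
	switch (longbusy_rc) {
	case H_LONG_BUSY_ORDER_1_MSEC:   return 1;
	case H_LONG_BUSY_ORDER_10_MSEC:  return 10;
	case H_LONG_BUSY_ORDER_100_MSEC: return 100;
	case H_LONG_BUSY_ORDER_1_SEC:    return 1000;
	case H_LONG_BUSY_ORDER_10_SEC:   return 10000;
	case H_LONG_BUSY_ORDER_100_SEC:  return 100000;
	default:                         return 1;
	}
}
```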

> +inline static long plpar_hcall_7arg_7ret(unsigned long opcode,
> +inline static long plpar_hcall_9arg_9ret(unsigned long opcode,

These belong in arch/powerpc/platforms/pseries/hvCall.S

Anton

2006-02-20 15:23:43

by Anton Blanchard

[permalink] [raw]
Subject: Re: [PATCH 21/22] ehca main file


Hi,

> What is ehca_show_flightrecorder() trying to do that snprintf() is
> not fast enough? If you need to pass a binary structure back to
> userspace (with a kernel address in it??) then sysfs is not the right
> place to put it. Look at debugfs; or relayfs might make the most
> sense for your flightrecorder stuff.

I agree debugfs or relayfs would be better suited. Of course as the
driver matures this form of debug is probably not required at all.
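[Editor's note: a debugfs-based flight recorder could be as small as the following sketch. All names, buffer sizes, and the init/exit wiring here are hypothetical; debugfs_create_blob() simply exposes a raw buffer under /sys/kernel/debug without sysfs's one-value-per-file convention.]

```c
#include <linux/debugfs.h>
#include <linux/module.h>

/* Hypothetical flight-recorder ring buffer owned by the driver. */
static u64 ehca_flightrec[1024];

static struct debugfs_blob_wrapper ehca_fr_blob = {
	.data = ehca_flightrec,
	.size = sizeof(ehca_flightrec),
};

static struct dentry *ehca_dbg_dir;

static int __init ehca_debugfs_init(void)
{
	/* Appears as /sys/kernel/debug/ehca/flightrecorder */
	ehca_dbg_dir = debugfs_create_dir("ehca", NULL);
	debugfs_create_blob("flightrecorder", 0400, ehca_dbg_dir,
			    &ehca_fr_blob);
	return 0;
}

static void __exit ehca_debugfs_exit(void)
{
	debugfs_remove(ehca_dbg_dir);
}
```

Since debugfs carries no stable-ABI expectation, the format can change freely as the driver matures, or the file can be dropped altogether.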

> +#include "hcp_sense.h" /* TODO: later via hipz_* header file */
> +#include "hcp_if.h" /* TODO: later via hipz_* header file */

I count 88 TODOs in the driver, it would be nice to get rid of some of
them like the two above, so we can concentrate on the important TODOs :)

> +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,12)
> +#define EHCA_RESOURCE_ATTR_H(name) \
> +static ssize_t ehca_show_##name(struct device *dev, \
> + struct device_attribute *attr, \
> + char *buf)
> +#else
> +#define EHCA_RESOURCE_ATTR_H(name) \
> +static ssize_t ehca_show_##name(struct device *dev, \
> + char *buf)
> +#endif

No need for kernel version ifdefs.

Anton

2006-02-20 16:24:23

by Heiko J Schick

[permalink] [raw]
Subject: Re: [openib-general] Re: [PATCH 21/22] ehca main file

Hello Anton,

thanks for your help!

>>+#include "hcp_sense.h" /* TODO: later via hipz_* header file */
>>+#include "hcp_if.h" /* TODO: later via hipz_* header file */
>
>
> I count 88 TODOs in the driver, it would be nice to get rid of some of
> them like the two above, so we can concentrate on the important TODOs :)

We will remove the TODOs as soon as possible.

>>+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,12)
>>+#define EHCA_RESOURCE_ATTR_H(name) \
>>+static ssize_t ehca_show_##name(struct device *dev, \
>>+ struct device_attribute *attr, \
>>+ char *buf)
>>+#else
>>+#define EHCA_RESOURCE_ATTR_H(name) \
>>+static ssize_t ehca_show_##name(struct device *dev, \
>>+ char *buf)
>>+#endif
>
>
> No need for kernel version ifdefs.

The point is that our module has to run on Linux 2.6.5-7.244 (SuSE SLES 9 SP3), too.
This was the reason why we've included the ifdefs. We can change the ifdefs to
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,5) to mark that this code is used for
Linux 2.6.5 compatibility.

Regards,
Heiko

2006-02-20 16:53:04

by Roland Dreier

[permalink] [raw]
Subject: Re: [PATCH 21/22] ehca main file

Anton> No need for kernel version ifdefs.

Sorry, I tried to strip these out before posting the patch, but I
missed one.

Anyway, totally agree on the ifdefs and I will be double-extra-sure
that the final version doesn't include them.

- R.

2006-02-20 16:55:34

by Roland Dreier

[permalink] [raw]
Subject: Re: [PATCH 00/22] [RFC] IBM eHCA InfiniBand adapter driver

Christoph> I guess posting 22 new patch files (diff against NIL)
Christoph> each week is sort of a DoS attack on the mailing list
Christoph> and we'll end up in peoples spam folders pretty
Christoph> quickly... So what's the recomended way to proceed
Christoph> here?

I don't think there's any other way to proceed. For each version, you
should carefully note down the feedback that you received and how you
are responding to each suggestion, and include that with the patch
file. But it's too much to expect for people to keep context for a
patch under review, so even though it generates a lot of email, I
think that including the whole series is the only way to go.

Perhaps the list admins disagree with me though ;)

- R.

2006-02-20 17:44:07

by Stephen Poole

[permalink] [raw]
Subject: [openib-general] Re: [PATCH 00/22] [RFC] IBM eHCA InfiniBand adapter driver

If every open source company was being sued for $3B I think many
companies would be a bit timid. :-) IBM has been working this issue
at all levels. It will happen when IBM Legal has figured out all of
the necessary paths in order to cover any potential law suits.
Unfortunately, the open source path has been muddied by some folks.

Steve...

At 4:06 PM +0100 2/20/06, Christoph Raisch wrote:
>Roland,
>as you already stated we really have a problem that we're not able to send
>"large" pieces of code to the kernel mailing list.
>It's perfectly ok for us to send patches to the openib.org mailing list and
>svn.
>This is something we still try to resolve with legal.
>So thank you Roland for acting as a proxy here...
>We have the ok to contribute to any ehca related discussion on kernel
>mailing-list and ppc64-mailing list, and are absolutely willing to do so!
>
>Adding a new driver for complex new hardware isn't the regular Linux
>development case, especially if there's no base code in the Linux kernel to
>patch against...
>In our case this patch resulted in 22 postings.
>Some people already noticed that there's still quite some road ahead of
>us... but we're absolutely willing to work through that, and we had to start at
>some place.
>Some comments will result in modifications to all files.
>I guess posting 22 new patch files (diff against NIL) each week is sort of
>a DoS attack on the mailing list and we'll end up in people's spam folders
>pretty quickly...
>So what's the recommended way to proceed here?
>
>
>Gruss / Regards . . . Christoph Raisch
>
>christoph raisch, HCAD teamlead
>
>Roland Dreier wrote on 18.02.2006 01:55:32:
>
>> Here's a series of patches that add an InfiniBand adapter driver
>> for IBM eHCA hardware. Please look it over with an eye towards issues
>> that need to be addressed before merging this upstream.
>>


--
Steve Poole ([email protected])
Office: 505.665.9662 / Cell: 505.699.3807 / Fax: 505.665.7793
Los Alamos National Laboratory
CCN - Special Projects / Advanced Development
P.O. Box 1663, MS B255
Los Alamos, NM 87545
03149801S

2006-02-20 18:33:01

by Arnd Bergmann

[permalink] [raw]
Subject: Re: [openib-general] Re: [PATCH 21/22] ehca main file

On Tuesday 21 February 2006 03:09, Heiko J Schick wrote:
> >>+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,12)
> >>+#define EHCA_RESOURCE_ATTR_H(name) \
> >>+static ssize_t ehca_show_##name(struct device *dev, \
> >>+                                struct device_attribute *attr, \
> >>+                                char *buf)
> >>+#else
> >>+#define EHCA_RESOURCE_ATTR_H(name) \
> >>+static ssize_t ehca_show_##name(struct device *dev, \
> >>+                                char *buf)
> >>+#endif
> >
> >
> > No need for kernel version ifdefs.
>
> The point is that our module has to run on Linux 2.6.5-7.244 (SuSE SLES 9 SP3), too.
> This was the reason why we've included the ifdefs. We can change the ifdefs to
> #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,5) to mark that this code is used for
> Linux 2.6.5 compatibility.

That only makes sense as long as you have a common source base for both
that is also under your control. As soon as the driver enters the mainline
kernel, it is no longer helpful to have these checks in it, because other
people will start making changes to the driver that you don't want to
have in the 2.6.5 version.

You cannot avoid forking the code in the long term, but fortunately the
need to backport fixes to the old version should also decrease over time.

Arnd <><