2010-12-06 07:16:27

by Jeff Ohlstein

Subject: [PATCH 0/5] SMP support for msm

This series adds support for SMP on msm targets, and specifically adds
support for SMP on the msm8x60.

Jeff Ohlstein (3):
msm: timer: SMP timer support for msm
msm: hotplug: support cpu hotplug on msm
msm: add SMP support for msm

Stepan Moskovchenko (1):
msm: scm-boot: Support for setting cold/warm boot addresses

Stephen Boyd (1):
msm: Secure Channel Manager (SCM) support

arch/arm/mach-msm/Kconfig | 5 +
arch/arm/mach-msm/Makefile | 4 +
arch/arm/mach-msm/headsmp.S | 43 ++++
arch/arm/mach-msm/hotplug.c | 101 ++++++++
arch/arm/mach-msm/include/mach/msm_iomap-8x60.h | 6 +-
arch/arm/mach-msm/include/mach/smp.h | 2 +
arch/arm/mach-msm/io.c | 1 +
arch/arm/mach-msm/platsmp.c | 147 ++++++++++++
arch/arm/mach-msm/scm-boot.c | 40 ++++
arch/arm/mach-msm/scm-boot.h | 38 +++
arch/arm/mach-msm/scm.c | 280 +++++++++++++++++++++++
arch/arm/mach-msm/scm.h | 41 ++++
arch/arm/mach-msm/timer.c | 139 +++++++++++-
13 files changed, 837 insertions(+), 10 deletions(-)
create mode 100644 arch/arm/mach-msm/headsmp.S
create mode 100644 arch/arm/mach-msm/hotplug.c
create mode 100644 arch/arm/mach-msm/platsmp.c
create mode 100644 arch/arm/mach-msm/scm-boot.c
create mode 100644 arch/arm/mach-msm/scm-boot.h
create mode 100644 arch/arm/mach-msm/scm.c
create mode 100644 arch/arm/mach-msm/scm.h

--
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.


2010-12-06 07:16:33

by Jeff Ohlstein

Subject: [PATCH 1/5] msm: Secure Channel Manager (SCM) support

From: Stephen Boyd <[email protected]>

SCM is the protocol used to communicate between the secure and
non-secure code executing on the applications processor. The
non-secure side uses a physically contiguous buffer to pass
information to the secure side, where the buffer conforms to a
format that is agreed upon by both sides. The use of a buffer
allows multiple pending requests to be in flight on the secure
side. It also benefits use cases where the command or response
buffer contains large chunks of data.

Signed-off-by: Stephen Boyd <[email protected]>
---
arch/arm/mach-msm/Kconfig | 4 +
arch/arm/mach-msm/Makefile | 1 +
arch/arm/mach-msm/scm.c | 280 ++++++++++++++++++++++++++++++++++++++++++++
arch/arm/mach-msm/scm.h | 41 +++++++
4 files changed, 326 insertions(+), 0 deletions(-)
create mode 100644 arch/arm/mach-msm/scm.c
create mode 100644 arch/arm/mach-msm/scm.h

diff --git a/arch/arm/mach-msm/Kconfig b/arch/arm/mach-msm/Kconfig
index 31e5fd6..ab5338f 100644
--- a/arch/arm/mach-msm/Kconfig
+++ b/arch/arm/mach-msm/Kconfig
@@ -44,6 +44,7 @@ config ARCH_MSM8X60
select CPU_V7
select MSM_V2_TLMM
select MSM_GPIOMUX
+ select MSM_SCM if SMP

endchoice

@@ -164,4 +165,7 @@ config MSM_GPIOMUX

config MSM_V2_TLMM
bool
+
+config MSM_SCM
+ bool
endif
diff --git a/arch/arm/mach-msm/Makefile b/arch/arm/mach-msm/Makefile
index b5a7b07..eed503b 100644
--- a/arch/arm/mach-msm/Makefile
+++ b/arch/arm/mach-msm/Makefile
@@ -18,6 +18,7 @@ obj-$(CONFIG_MSM_PROC_COMM) += clock.o
obj-$(CONFIG_ARCH_QSD8X50) += sirc.o
obj-$(CONFIG_MSM_SMD) += smd.o smd_debug.o
obj-$(CONFIG_MSM_SMD) += last_radio_log.o
+obj-$(CONFIG_MSM_SCM) += scm.o

obj-$(CONFIG_MACH_TROUT) += board-trout.o board-trout-gpio.o board-trout-mmc.o devices-msm7x00.o
obj-$(CONFIG_MACH_HALIBUT) += board-halibut.o devices-msm7x00.o
diff --git a/arch/arm/mach-msm/scm.c b/arch/arm/mach-msm/scm.c
new file mode 100644
index 0000000..e09bc94
--- /dev/null
+++ b/arch/arm/mach-msm/scm.c
@@ -0,0 +1,280 @@
+/* Copyright (c) 2010, Code Aurora Forum. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+ * 02110-1301, USA.
+ */
+
+#include <linux/slab.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/cacheflush.h>
+
+#include "scm.h"
+
+#define SCM_ENOMEM -5
+#define SCM_EOPNOTSUPP -4
+#define SCM_EINVAL_ADDR -3
+#define SCM_EINVAL_ARG -2
+#define SCM_ERROR -1
+#define SCM_INTERRUPTED 1
+
+static DEFINE_MUTEX(scm_lock);
+
+/**
+ * struct scm_command - one SCM command buffer
+ * @len: total available memory for command and response
+ * @buf_offset: start of command buffer
+ * @resp_hdr_offset: start of response buffer
+ * @id: command to be executed
+ *
+ * An SCM command is laid out in memory as follows:
+ *
+ * ------------------- <--- struct scm_command
+ * | command header |
+ * ------------------- <--- scm_get_command_buffer()
+ * | command buffer |
+ * ------------------- <--- struct scm_response and
+ * | response header | scm_command_to_response()
+ * ------------------- <--- scm_get_response_buffer()
+ * | response buffer |
+ * -------------------
+ *
+ * There can be arbitrary padding between the headers and buffers so
+ * you should always use the appropriate scm_get_*_buffer() routines
+ * to access the buffers in a safe manner.
+ */
+struct scm_command {
+ u32 len;
+ u32 buf_offset;
+ u32 resp_hdr_offset;
+ u32 id;
+};
+
+/**
+ * struct scm_response - one SCM response buffer
+ * @len: total available memory for response
+ * @buf_offset: start of response data relative to start of scm_response
+ * @is_complete: indicates if the command has finished processing
+ */
+struct scm_response {
+ u32 len;
+ u32 buf_offset;
+ u32 is_complete;
+};
+
+/**
+ * alloc_scm_command() - Allocate an SCM command
+ * @cmd_size - size of the command buffer
+ * @resp_size - size of the response buffer
+ * @handle - dma handle
+ *
+ * Allocate an SCM command, including enough room for the command
+ * and response headers as well as the command and response buffers.
+ *
+ * Returns a valid &scm_command on success or %NULL if the allocation fails.
+ */
+static struct scm_command *alloc_scm_command(size_t cmd_size, size_t resp_size,
+ dma_addr_t *handle)
+{
+ struct scm_command *cmd;
+ size_t len = sizeof(*cmd) + sizeof(struct scm_response) + cmd_size +
+ resp_size;
+
+ cmd = dma_alloc_coherent(NULL, len, handle, GFP_KERNEL);
+ if (cmd) {
+ cmd->len = len;
+ cmd->buf_offset = sizeof(*cmd);
+ cmd->resp_hdr_offset = cmd->buf_offset + cmd_size;
+ }
+ return cmd;
+}
+
+/**
+ * free_scm_command() - Free an SCM command
+ * @cmd - command to free
+ * @handle - dma handle
+ *
+ * Free an SCM command.
+ */
+static inline void free_scm_command(struct scm_command *cmd, dma_addr_t handle)
+{
+ dma_free_coherent(NULL, cmd->len, cmd, handle);
+}
+
+/**
+ * scm_command_to_response() - Get a pointer to a scm_response
+ * @cmd - command
+ *
+ * Returns a pointer to a response for a command.
+ */
+static inline struct scm_response *scm_command_to_response(
+ const struct scm_command *cmd)
+{
+ return (void *)cmd + cmd->resp_hdr_offset;
+}
+
+/**
+ * scm_get_command_buffer() - Get a pointer to a command buffer
+ * @cmd - command
+ *
+ * Returns a pointer to the command buffer of a command.
+ */
+static inline void *scm_get_command_buffer(const struct scm_command *cmd)
+{
+ return (void *)cmd + cmd->buf_offset;
+}
+
+/**
+ * scm_get_response_buffer() - Get a pointer to a response buffer
+ * @rsp - response
+ *
+ * Returns a pointer to a response buffer of a response.
+ */
+static inline void *scm_get_response_buffer(const struct scm_response *rsp)
+{
+ return (void *)rsp + rsp->buf_offset;
+}
+
+static int scm_remap_error(int err)
+{
+ switch (err) {
+ case SCM_ERROR:
+ return -EIO;
+ case SCM_EINVAL_ADDR:
+ case SCM_EINVAL_ARG:
+ return -EINVAL;
+ case SCM_EOPNOTSUPP:
+ return -EOPNOTSUPP;
+ case SCM_ENOMEM:
+ return -ENOMEM;
+ }
+ return -EINVAL;
+}
+
+static u32 smc(dma_addr_t cmd_addr)
+{
+ int context_id;
+ register u32 r0 asm("r0") = 1;
+ register u32 r1 asm("r1") = (u32)&context_id;
+ register u32 r2 asm("r2") = (u32)cmd_addr;
+ asm(
+ __asmeq("%0", "r0")
+ __asmeq("%1", "r0")
+ __asmeq("%2", "r1")
+ __asmeq("%3", "r2")
+ "smc #0 @ switch to secure world\n"
+ : "=r" (r0)
+ : "r" (r0), "r" (r1), "r" (r2)
+ : "r3");
+ return r0;
+}
+
+static int __scm_call(const dma_addr_t cmd)
+{
+ int ret;
+
+ /*
+ * Flush the entire cache here so callers don't have to remember
+ * to flush the cache when passing physical addresses to the secure
+ * side in the buffer.
+ */
+ flush_cache_all();
+ do {
+ ret = smc(cmd);
+ if (ret < 0) {
+ ret = scm_remap_error(ret);
+ break;
+ }
+ } while (ret == SCM_INTERRUPTED);
+
+ return ret;
+}
+
+/**
+ * scm_call() - Send an SCM command
+ * @svc_id - service identifier
+ * @cmd_id - command identifier
+ * @cmd_buf - command buffer
+ * @cmd_len - length of the command buffer
+ * @resp_buf - response buffer
+ * @resp_len - length of the response buffer
+ *
+ * Sends a command to the SCM and waits for the command to finish processing.
+ */
+int scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf, size_t cmd_len,
+ void *resp_buf, size_t resp_len)
+{
+ int ret;
+ struct scm_command *cmd;
+ struct scm_response *rsp;
+ dma_addr_t handle;
+
+ cmd = alloc_scm_command(cmd_len, resp_len, &handle);
+ if (!cmd)
+ return -ENOMEM;
+
+ cmd->id = (svc_id << 10) | cmd_id;
+ if (cmd_buf)
+ memcpy(scm_get_command_buffer(cmd), cmd_buf, cmd_len);
+ wmb();
+
+ mutex_lock(&scm_lock);
+ ret = __scm_call(handle);
+ mutex_unlock(&scm_lock);
+ if (ret)
+ goto out;
+
+ rsp = scm_command_to_response(cmd);
+ do {
+ rmb();
+ } while (!rsp->is_complete);
+
+ if (resp_buf)
+ memcpy(resp_buf, scm_get_response_buffer(rsp), resp_len);
+out:
+ free_scm_command(cmd, handle);
+ return ret;
+}
+EXPORT_SYMBOL(scm_call);
+
+u32 scm_get_version(void)
+{
+ int context_id;
+ static u32 version = -1;
+ register u32 r0 asm("r0") = 0x1 << 8;
+ register u32 r1 asm("r1") = (u32)&context_id;
+
+ if (version != -1)
+ return version;
+
+ mutex_lock(&scm_lock);
+ asm(
+ __asmeq("%0", "r1")
+ __asmeq("%1", "r0")
+ __asmeq("%2", "r1")
+ "smc #0 @ switch to secure world\n"
+ : "=r" (r1)
+ : "r" (r0), "r" (r1)
+ : "r2", "r3");
+ version = r1;
+ mutex_unlock(&scm_lock);
+
+ return version;
+}
+EXPORT_SYMBOL(scm_get_version);
diff --git a/arch/arm/mach-msm/scm.h b/arch/arm/mach-msm/scm.h
new file mode 100644
index 0000000..261786b
--- /dev/null
+++ b/arch/arm/mach-msm/scm.h
@@ -0,0 +1,41 @@
+/* Copyright (c) 2010, Code Aurora Forum. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials provided
+ * with the distribution.
+ * * Neither the name of Code Aurora Forum, Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __MACH_SCM_H
+#define __MACH_SCM_H
+
+#define SCM_SVC_BOOT 0x1
+#define SCM_SVC_PIL 0x2
+
+extern int scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf, size_t cmd_len,
+ void *resp_buf, size_t resp_len);
+
+#define SCM_VERSION(major, minor) (((major) << 16) | ((minor) & 0xFF))
+
+extern u32 scm_get_version(void);
+
+#endif

2010-12-06 07:16:41

by Jeff Ohlstein

Subject: [PATCH 4/5] msm: hotplug: support cpu hotplug on msm

Signed-off-by: Jeff Ohlstein <[email protected]>
---
arch/arm/mach-msm/Makefile | 2 +
arch/arm/mach-msm/hotplug.c | 101 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 103 insertions(+), 0 deletions(-)
create mode 100644 arch/arm/mach-msm/hotplug.c

diff --git a/arch/arm/mach-msm/Makefile b/arch/arm/mach-msm/Makefile
index 34cd0a8..7a11b4a 100644
--- a/arch/arm/mach-msm/Makefile
+++ b/arch/arm/mach-msm/Makefile
@@ -20,6 +20,8 @@ obj-$(CONFIG_MSM_SMD) += smd.o smd_debug.o
obj-$(CONFIG_MSM_SMD) += last_radio_log.o
obj-$(CONFIG_MSM_SCM) += scm.o scm-boot.o

+obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o
+
obj-$(CONFIG_MACH_TROUT) += board-trout.o board-trout-gpio.o board-trout-mmc.o devices-msm7x00.o
obj-$(CONFIG_MACH_HALIBUT) += board-halibut.o devices-msm7x00.o
obj-$(CONFIG_ARCH_MSM7X30) += board-msm7x30.o devices-msm7x30.o
diff --git a/arch/arm/mach-msm/hotplug.c b/arch/arm/mach-msm/hotplug.c
new file mode 100644
index 0000000..4891165
--- /dev/null
+++ b/arch/arm/mach-msm/hotplug.c
@@ -0,0 +1,101 @@
+/*
+ * Copyright (C) 2002 ARM Ltd.
+ * All Rights Reserved
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/smp.h>
+#include <linux/completion.h>
+#include <linux/irq.h>
+
+#include <asm/cacheflush.h>
+
+#include <mach/smp.h>
+
+static DECLARE_COMPLETION(cpu_killed);
+
+static inline void cpu_enter_lowpower(void)
+{
+ flush_cache_all();
+}
+
+static inline void cpu_leave_lowpower(void)
+{
+ pen_release = -1;
+}
+
+static inline void platform_do_lowpower(unsigned int cpu)
+{
+ /* Just enter wfe for now. */
+ for (;;) {
+ asm("wfe");
+ if (pen_release == cpu) {
+ /*
+ * OK, proper wakeup, we're done
+ */
+ break;
+ }
+
+ /*
+ * getting here means that we have come out of WFE without
+ * having been woken up - this shouldn't happen
+ *
+ * The trouble is, letting people know about this is not really
+ * possible, since we are currently running incoherently, and
+ * therefore cannot safely call printk() or anything else
+ */
+ pr_debug("CPU%u: spurious wakeup call\n", cpu);
+ }
+}
+
+int platform_cpu_kill(unsigned int cpu)
+{
+ return wait_for_completion_timeout(&cpu_killed, 50000);
+}
+
+/*
+ * platform-specific code to shutdown a CPU
+ *
+ * Called with IRQs disabled
+ */
+void platform_cpu_die(unsigned int cpu)
+{
+#ifdef DEBUG
+ unsigned int this_cpu = hard_smp_processor_id();
+
+ if (cpu != this_cpu) {
+ printk(KERN_CRIT "Eek! platform_cpu_die running on %u, should be %u\n",
+ this_cpu, cpu);
+ BUG();
+ }
+#endif
+
+ printk(KERN_NOTICE "CPU%u: shutdown\n", cpu);
+ complete(&cpu_killed);
+
+ /*
+ * we're ready for shutdown now, so do it
+ */
+ cpu_enter_lowpower();
+ platform_do_lowpower(cpu);
+
+ /*
+ * bring this CPU back into the world of cache
+ * coherency, and then restore interrupts
+ */
+ cpu_leave_lowpower();
+}
+
+int platform_cpu_disable(unsigned int cpu)
+{
+ /*
+ * we don't allow CPU 0 to be shutdown (it is still too special
+ * e.g. clock tick interrupts)
+ */
+ return cpu == 0 ? -EPERM : 0;
+}

2010-12-06 07:16:42

by Jeff Ohlstein

Subject: [PATCH 2/5] msm: scm-boot: Support for setting cold/warm boot addresses

From: Stepan Moskovchenko <[email protected]>

Add support for setting the cold boot address of core 1 and
the warm boot addresses of cores 0 and 1 using a secure
domain call.

Signed-off-by: Stepan Moskovchenko <[email protected]>
---
arch/arm/mach-msm/Makefile | 2 +-
arch/arm/mach-msm/scm-boot.c | 40 ++++++++++++++++++++++++++++++++++++++++
arch/arm/mach-msm/scm-boot.h | 38 ++++++++++++++++++++++++++++++++++++++
3 files changed, 79 insertions(+), 1 deletions(-)
create mode 100644 arch/arm/mach-msm/scm-boot.c
create mode 100644 arch/arm/mach-msm/scm-boot.h

diff --git a/arch/arm/mach-msm/Makefile b/arch/arm/mach-msm/Makefile
index eed503b..34cd0a8 100644
--- a/arch/arm/mach-msm/Makefile
+++ b/arch/arm/mach-msm/Makefile
@@ -18,7 +18,7 @@ obj-$(CONFIG_MSM_PROC_COMM) += clock.o
obj-$(CONFIG_ARCH_QSD8X50) += sirc.o
obj-$(CONFIG_MSM_SMD) += smd.o smd_debug.o
obj-$(CONFIG_MSM_SMD) += last_radio_log.o
-obj-$(CONFIG_MSM_SCM) += scm.o
+obj-$(CONFIG_MSM_SCM) += scm.o scm-boot.o

obj-$(CONFIG_MACH_TROUT) += board-trout.o board-trout-gpio.o board-trout-mmc.o devices-msm7x00.o
obj-$(CONFIG_MACH_HALIBUT) += board-halibut.o devices-msm7x00.o
diff --git a/arch/arm/mach-msm/scm-boot.c b/arch/arm/mach-msm/scm-boot.c
new file mode 100644
index 0000000..debacf5
--- /dev/null
+++ b/arch/arm/mach-msm/scm-boot.c
@@ -0,0 +1,40 @@
+/* Copyright (c) 2010, Code Aurora Forum. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+ * 02110-1301, USA.
+ */
+
+#include <linux/module.h>
+#include <linux/slab.h>
+
+#include "scm.h"
+#include "scm-boot.h"
+
+/*
+ * Set the cold/warm boot address for one of the CPU cores.
+ */
+int scm_set_boot_addr(void *addr, int flags)
+{
+ struct {
+ unsigned int flags;
+ void *addr;
+ } cmd;
+
+ cmd.addr = addr;
+ cmd.flags = flags;
+ return scm_call(SCM_SVC_BOOT, SCM_BOOT_ADDR,
+ &cmd, sizeof(cmd), NULL, 0);
+}
+EXPORT_SYMBOL(scm_set_boot_addr);
+
diff --git a/arch/arm/mach-msm/scm-boot.h b/arch/arm/mach-msm/scm-boot.h
new file mode 100644
index 0000000..fdc1374
--- /dev/null
+++ b/arch/arm/mach-msm/scm-boot.h
@@ -0,0 +1,38 @@
+/* Copyright (c) 2010, Code Aurora Forum. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials provided
+ * with the distribution.
+ * * Neither the name of Code Aurora Forum, Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __MACH_SCM_BOOT_H
+#define __MACH_SCM_BOOT_H
+
+#define SCM_BOOT_ADDR 0x1
+#define SCM_FLAG_COLDBOOT_CPU1 0x1
+#define SCM_FLAG_WARMBOOT_CPU1 0x2
+#define SCM_FLAG_WARMBOOT_CPU0 0x4
+
+int scm_set_boot_addr(void *addr, int flags);
+
+#endif

2010-12-06 07:16:44

by Jeff Ohlstein

Subject: [PATCH 5/5] msm: add SMP support for msm

Signed-off-by: Jeff Ohlstein <[email protected]>
---
arch/arm/mach-msm/Kconfig | 1 +
arch/arm/mach-msm/Makefile | 1 +
arch/arm/mach-msm/headsmp.S | 43 ++++++++++
arch/arm/mach-msm/include/mach/smp.h | 2 +
arch/arm/mach-msm/platsmp.c | 147 ++++++++++++++++++++++++++++++++++
5 files changed, 194 insertions(+), 0 deletions(-)
create mode 100644 arch/arm/mach-msm/headsmp.S
create mode 100644 arch/arm/mach-msm/platsmp.c

diff --git a/arch/arm/mach-msm/Kconfig b/arch/arm/mach-msm/Kconfig
index ab5338f..8c57425 100644
--- a/arch/arm/mach-msm/Kconfig
+++ b/arch/arm/mach-msm/Kconfig
@@ -40,6 +40,7 @@ config ARCH_MSM8X60
bool "MSM8X60"
select MACH_MSM8X60_SURF if (!MACH_MSM8X60_RUMI3 && !MACH_MSM8X60_SIM \
&& !MACH_MSM8X60_FFA)
+ select ARCH_MSM_SCORPIONMP
select ARM_GIC
select CPU_V7
select MSM_V2_TLMM
diff --git a/arch/arm/mach-msm/Makefile b/arch/arm/mach-msm/Makefile
index 7a11b4a..1945f9c 100644
--- a/arch/arm/mach-msm/Makefile
+++ b/arch/arm/mach-msm/Makefile
@@ -21,6 +21,7 @@ obj-$(CONFIG_MSM_SMD) += last_radio_log.o
obj-$(CONFIG_MSM_SCM) += scm.o scm-boot.o

obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o
+obj-$(CONFIG_SMP) += headsmp.o platsmp.o

obj-$(CONFIG_MACH_TROUT) += board-trout.o board-trout-gpio.o board-trout-mmc.o devices-msm7x00.o
obj-$(CONFIG_MACH_HALIBUT) += board-halibut.o devices-msm7x00.o
diff --git a/arch/arm/mach-msm/headsmp.S b/arch/arm/mach-msm/headsmp.S
new file mode 100644
index 0000000..438cfeb
--- /dev/null
+++ b/arch/arm/mach-msm/headsmp.S
@@ -0,0 +1,43 @@
+/*
+ * Copyright (c) 2003 ARM Limited
+ * All Rights Reserved
+ * Copyright (c) 2010, Code Aurora Forum. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/linkage.h>
+#include <linux/init.h>
+
+/*
+ * MSM specific entry point for secondary CPUs. This provides
+ * a "holding pen" into which all secondary cores are held until we're
+ * ready for them to initialise.
+ *
+ * This is executing in physical space with caches off.
+ */
+ENTRY(msm_secondary_startup)
+ mrc p15, 0, r0, c0, c0, 5 @ MPIDR
+ and r0, r0, #15 @ What CPU am I
+ adr r4, 1f @ address of
+ ldmia r4, {r5, r6} @ load curr addr and pen_rel addr
+ sub r4, r4, r5 @ determine virtual/phys offsets
+ add r6, r6, r4 @ apply
+pen:
+ wfe
+ dsb @ ensure subsequent access is
+ @ after event
+
+ ldr r7, [r6] @ pen_rel has cpu to remove from reset
+ cmp r7, r0 @ are we lucky?
+ bne pen
+
+ /*
+ * we've been released from the holding pen: secondary_stack
+ * should now contain the SVC stack for this core
+ */
+ b secondary_startup
+
+1: .long .
+ .long pen_release
diff --git a/arch/arm/mach-msm/include/mach/smp.h b/arch/arm/mach-msm/include/mach/smp.h
index 3ff7bf5..79f94b0 100644
--- a/arch/arm/mach-msm/include/mach/smp.h
+++ b/arch/arm/mach-msm/include/mach/smp.h
@@ -36,4 +36,6 @@ static inline void smp_cross_call(const struct cpumask *mask)
gic_raise_softirq(mask, 1);
}

+extern int pen_release;
+extern void msm_secondary_startup(void);
#endif
diff --git a/arch/arm/mach-msm/platsmp.c b/arch/arm/mach-msm/platsmp.c
new file mode 100644
index 0000000..fbce6de
--- /dev/null
+++ b/arch/arm/mach-msm/platsmp.c
@@ -0,0 +1,147 @@
+/*
+ * Copyright (C) 2002 ARM Ltd.
+ * All Rights Reserved
+ * Copyright (c) 2010, Code Aurora Forum. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/init.h>
+#include <linux/cpumask.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+
+#include <asm/hardware/gic.h>
+#include <asm/cacheflush.h>
+#include <asm/mach-types.h>
+
+#include <mach/smp.h>
+#include <mach/hardware.h>
+#include <mach/msm_iomap.h>
+
+#include "scm-boot.h"
+
+#define SECONDARY_CPU_WAIT_MS 10
+
+#define VDD_SC1_ARRAY_CLAMP_GFS_CTL 0x15A0
+#define SCSS_CPU1CORE_RESET 0xD80
+#define SCSS_DBG_STATUS_CORE_PWRDUP 0xE64
+
+int pen_release = -1;
+
+int get_core_count(void)
+{
+#ifdef CONFIG_NR_CPUS
+ return CONFIG_NR_CPUS;
+#else
+ return 1;
+#endif
+}
+
+/* Initialize the present map (set_cpu_present(i, true)). */
+void smp_prepare_cpus(unsigned int max_cpus)
+{
+ int i;
+ unsigned int cpu = smp_processor_id();
+
+ smp_store_cpu_info(cpu);
+
+ for (i = 0; i < max_cpus; i++)
+ set_cpu_present(i, true);
+}
+
+void smp_init_cpus(void)
+{
+ unsigned int i, ncores = get_core_count();
+
+ for (i = 0; i < ncores; i++)
+ set_cpu_possible(i, true);
+}
+
+static void prepare_cold_cpu(unsigned int cpu)
+{
+ int ret;
+ ret = scm_set_boot_addr((void *)
+ virt_to_phys(msm_secondary_startup),
+ SCM_FLAG_COLDBOOT_CPU1);
+ if (ret == 0) {
+ void *sc1_base_ptr;
+ sc1_base_ptr = ioremap_nocache(0x00902000, SZ_4K*2);
+ if (sc1_base_ptr) {
+ writel(0, sc1_base_ptr + VDD_SC1_ARRAY_CLAMP_GFS_CTL);
+ writel(0, sc1_base_ptr + SCSS_CPU1CORE_RESET);
+ writel(3, sc1_base_ptr + SCSS_DBG_STATUS_CORE_PWRDUP);
+ iounmap(sc1_base_ptr);
+ }
+ } else
+ printk(KERN_DEBUG "Failed to set secondary core boot "
+ "address\n");
+}
+
+/*
+ * Executed by the primary CPU; brings other CPUs out of reset. Called at
+ * boot as well as when a CPU is coming out of a shutdown induced by
+ * echo 0 > /sys/devices/.../cpuX.
+ */
+int boot_secondary(unsigned int cpu, struct task_struct *idle)
+{
+ static int cold_boot_done;
+ int cnt = 0;
+ printk(KERN_DEBUG "Starting secondary CPU %d\n", cpu);
+
+ if (!cold_boot_done) {
+ prepare_cold_cpu(cpu);
+ cold_boot_done = true;
+ }
+
+ pen_release = cpu;
+ dmac_flush_range((void *)&pen_release,
+ (void *)(&pen_release + 1));
+ __asm__("sev");
+ dsb();
+
+ /* Use smp_cross_call() to send a soft interrupt to wake up
+ * the other core.
+ */
+ smp_cross_call(cpumask_of(cpu));
+
+ while (pen_release != 0xFFFFFFFF) {
+ smp_rmb();
+ msleep_interruptible(1);
+ if (cnt++ >= SECONDARY_CPU_WAIT_MS)
+ break;
+ }
+
+ return 0;
+}
+
+/* Mask for edge trigger PPIs except AVS_SVICINT and AVS_SVICINTSWDONE */
+#define GIC_PPI_EDGE_MASK 0xFFFFD7FF
+
+/*
+ * Initialization routine for secondary CPUs after they are brought out
+ * of reset.
+ */
+void platform_secondary_init(unsigned int cpu)
+{
+ printk(KERN_DEBUG "%s: cpu:%d\n", __func__, cpu);
+
+ trace_hardirqs_off();
+
+ writel(GIC_PPI_EDGE_MASK, MSM_QGIC_DIST_BASE + GIC_DIST_CONFIG + 4);
+
+ /* RUMI does not adhere to the GIC spec: it enables STIs by default.
+ * Enable/clear is supposed to be RO for STIs, but is RW on RUMI.
+ */
+ if (!machine_is_msm8x60_sim())
+ writel(0x0000FFFF, MSM_QGIC_DIST_BASE + GIC_DIST_ENABLE_SET);
+
+ /*
+ * Set up the GIC: the first argument is the GIC number (NOT the CPU
+ * number), the second is the base address of the GIC CPU interface.
+ */
+ gic_cpu_init(0, MSM_QGIC_CPU_BASE);
+ pen_release = -1;
+ smp_wmb();
+}

2010-12-06 07:16:43

by Jeff Ohlstein

Subject: [PATCH 3/5] msm: timer: SMP timer support for msm

Signed-off-by: Jeff Ohlstein <[email protected]>
---
arch/arm/mach-msm/include/mach/msm_iomap-8x60.h | 6 +-
arch/arm/mach-msm/io.c | 1 +
arch/arm/mach-msm/timer.c | 139 +++++++++++++++++++++--
3 files changed, 136 insertions(+), 10 deletions(-)

diff --git a/arch/arm/mach-msm/include/mach/msm_iomap-8x60.h b/arch/arm/mach-msm/include/mach/msm_iomap-8x60.h
index 45bab50..873e0b7 100644
--- a/arch/arm/mach-msm/include/mach/msm_iomap-8x60.h
+++ b/arch/arm/mach-msm/include/mach/msm_iomap-8x60.h
@@ -60,7 +60,11 @@

#define MSM_TMR_BASE IOMEM(0xF0200000)
#define MSM_TMR_PHYS 0x02000000
-#define MSM_TMR_SIZE (SZ_1M)
+#define MSM_TMR_SIZE SZ_4K
+
+#define MSM_TMR0_BASE IOMEM(0xF0201000)
+#define MSM_TMR0_PHYS 0x02040000
+#define MSM_TMR0_SIZE SZ_4K

#define MSM_GPT_BASE (MSM_TMR_BASE + 0x4)
#define MSM_DGT_BASE (MSM_TMR_BASE + 0x24)
diff --git a/arch/arm/mach-msm/io.c b/arch/arm/mach-msm/io.c
index d36b610..b826b6b 100644
--- a/arch/arm/mach-msm/io.c
+++ b/arch/arm/mach-msm/io.c
@@ -105,6 +105,7 @@ static struct map_desc msm8x60_io_desc[] __initdata = {
MSM_DEVICE(QGIC_DIST),
MSM_DEVICE(QGIC_CPU),
MSM_DEVICE(TMR),
+ MSM_DEVICE(TMR0),
MSM_DEVICE(ACC),
MSM_DEVICE(GCC),
};
diff --git a/arch/arm/mach-msm/timer.c b/arch/arm/mach-msm/timer.c
index 950100f..40f6d17 100644
--- a/arch/arm/mach-msm/timer.c
+++ b/arch/arm/mach-msm/timer.c
@@ -47,6 +47,19 @@ enum {

#define GPT_HZ 32768

+enum timer_location {
+ LOCAL_TIMER = 0,
+ GLOBAL_TIMER = 1,
+};
+
+#ifdef MSM_TMR0_BASE
+#define MSM_TMR_GLOBAL (MSM_TMR0_BASE - MSM_TMR_BASE)
+#else
+#define MSM_TMR_GLOBAL 0
+#endif
+
+#define MSM_GLOBAL_TIMER MSM_CLOCK_DGT
+
#if defined(CONFIG_ARCH_QSD8X50)
#define DGT_HZ (19200000 / 4) /* 19.2 MHz / 4 by default */
#define MSM_DGT_SHIFT (0)
@@ -67,38 +80,89 @@ struct msm_clock {
uint32_t shift;
};

+enum {
+ MSM_CLOCK_GPT,
+ MSM_CLOCK_DGT,
+ NR_TIMERS,
+};
+
+
+static struct msm_clock msm_clocks[];
+static struct clock_event_device *local_clock_event;
+
static irqreturn_t msm_timer_interrupt(int irq, void *dev_id)
{
struct clock_event_device *evt = dev_id;
+ if (smp_processor_id() != 0)
+ evt = local_clock_event;
+ if (evt->event_handler == NULL)
+ return IRQ_HANDLED;
evt->event_handler(evt);
return IRQ_HANDLED;
}

+static uint32_t msm_read_timer_count(struct msm_clock *clock,
+ enum timer_location global)
+{
+ uint32_t t1;
+
+ if (global)
+ t1 = readl(clock->regbase + TIMER_COUNT_VAL + MSM_TMR_GLOBAL);
+ else
+ t1 = readl(clock->regbase + TIMER_COUNT_VAL);
+
+ return t1;
+}
+
static cycle_t msm_gpt_read(struct clocksource *cs)
{
- return readl(MSM_GPT_BASE + TIMER_COUNT_VAL);
+ struct msm_clock *clock = &msm_clocks[MSM_CLOCK_GPT];
+
+ return msm_read_timer_count(clock, GLOBAL_TIMER);
}

static cycle_t msm_dgt_read(struct clocksource *cs)
{
- return readl(MSM_DGT_BASE + TIMER_COUNT_VAL) >> MSM_DGT_SHIFT;
+ struct msm_clock *clock = &msm_clocks[MSM_CLOCK_DGT];
+
+ return msm_read_timer_count(clock, GLOBAL_TIMER) >> MSM_DGT_SHIFT;
+}
+
+static struct msm_clock *clockevent_to_clock(struct clock_event_device *evt)
+{
+#ifdef CONFIG_SMP
+ int i;
+ for (i = 0; i < NR_TIMERS; i++)
+ if (evt == &(msm_clocks[i].clockevent))
+ return &msm_clocks[i];
+ return &msm_clocks[MSM_GLOBAL_TIMER];
+#else
+ return container_of(evt, struct msm_clock, clockevent);
+#endif
}

static int msm_timer_set_next_event(unsigned long cycles,
struct clock_event_device *evt)
{
- struct msm_clock *clock = container_of(evt, struct msm_clock, clockevent);
- uint32_t now = readl(clock->regbase + TIMER_COUNT_VAL);
+ struct msm_clock *clock = clockevent_to_clock(evt);
+ uint32_t now = msm_read_timer_count(clock, LOCAL_TIMER);
uint32_t alarm = now + (cycles << clock->shift);
int late;

writel(alarm, clock->regbase + TIMER_MATCH_VAL);
- now = readl(clock->regbase + TIMER_COUNT_VAL);
+
+ now = msm_read_timer_count(clock, LOCAL_TIMER);
late = now - alarm;
if (late >= (-2 << clock->shift) && late < DGT_HZ*5) {
- printk(KERN_NOTICE "msm_timer_set_next_event(%lu) clock %s, "
- "alarm already expired, now %x, alarm %x, late %d\n",
- cycles, clock->clockevent.name, now, alarm, late);
+ static int print_limit = 10;
+ if (print_limit > 0) {
+ print_limit--;
+ printk(KERN_NOTICE "msm_timer_set_next_event(%lu) "
+ "clock %s, alarm already expired, now %x, "
+ "alarm %x, late %d%s\n",
+ cycles, clock->clockevent.name, now, alarm, late,
+ print_limit ? "" : " stop printing");
+ }
return -ETIME;
}
return 0;
@@ -107,7 +171,11 @@ static int msm_timer_set_next_event(unsigned long cycles,
static void msm_timer_set_mode(enum clock_event_mode mode,
struct clock_event_device *evt)
{
- struct msm_clock *clock = container_of(evt, struct msm_clock, clockevent);
+ struct msm_clock *clock = clockevent_to_clock(evt);
+ unsigned long irq_flags;
+
+ local_irq_save(irq_flags);
+
switch (mode) {
case CLOCK_EVT_MODE_RESUME:
case CLOCK_EVT_MODE_PERIODIC:
@@ -120,6 +188,7 @@ static void msm_timer_set_mode(enum clock_event_mode mode,
writel(0, clock->regbase + TIMER_ENABLE);
break;
}
+ local_irq_restore(irq_flags);
}

static struct msm_clock msm_clocks[] = {
@@ -220,6 +289,58 @@ static void __init msm_timer_init(void)
}
}

+#ifdef CONFIG_SMP
+void local_timer_setup(struct clock_event_device *evt)
+{
+ unsigned long flags;
+ struct msm_clock *clock = &msm_clocks[MSM_GLOBAL_TIMER];
+
+#ifdef CONFIG_ARCH_MSM8X60
+ writel(DGT_CLK_CTL_DIV_4, MSM_TMR_BASE + DGT_CLK_CTL);
+#endif
+
+ if (!local_clock_event) {
+ writel(0, clock->regbase + TIMER_ENABLE);
+ writel(0, clock->regbase + TIMER_CLEAR);
+ writel(~0, clock->regbase + TIMER_MATCH_VAL);
+ }
+ evt->irq = clock->irq.irq;
+ evt->name = "local_timer";
+ evt->features = CLOCK_EVT_FEAT_ONESHOT;
+ evt->rating = clock->clockevent.rating;
+ evt->set_mode = msm_timer_set_mode;
+ evt->set_next_event = msm_timer_set_next_event;
+ evt->shift = clock->clockevent.shift;
+ evt->mult = div_sc(clock->freq, NSEC_PER_SEC, evt->shift);
+ evt->max_delta_ns =
+ clockevent_delta2ns(0xf0000000 >> clock->shift, evt);
+ evt->min_delta_ns = clockevent_delta2ns(4, evt);
+ evt->cpumask = cpumask_of(smp_processor_id());
+
+ local_clock_event = evt;
+
+ local_irq_save(flags);
+ get_irq_chip(clock->irq.irq)->unmask(clock->irq.irq);
+ local_irq_restore(flags);
+
+ clockevents_register_device(evt);
+}
+
+int local_timer_ack(void)
+{
+ return 1;
+}
+
+#ifdef CONFIG_HOTPLUG_CPU
+void __cpuexit local_timer_stop(void)
+{
+ local_clock_event->set_mode(CLOCK_EVT_MODE_SHUTDOWN, local_clock_event);
+ get_irq_chip(local_clock_event->irq)->mask(local_clock_event->irq);
+ local_clock_event = NULL;
+}
+#endif
+#endif
+
struct sys_timer msm_timer = {
.init = msm_timer_init
};
--
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.

2010-12-06 09:56:47

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [PATCH 3/5] msm: timer: SMP timer support for msm

On Sun, 5 Dec 2010, Jeff Ohlstein wrote:
> +
> +static struct msm_clock msm_clocks[];
> +static struct clock_event_device *local_clock_event;
> +
> static irqreturn_t msm_timer_interrupt(int irq, void *dev_id)
> {
> struct clock_event_device *evt = dev_id;
> + if (smp_processor_id() != 0)
> + evt = local_clock_event;

Why is dev_id not pointing to the correct device in the first place?

> + if (evt->event_handler == NULL)
> + return IRQ_HANDLED;
> evt->event_handler(evt);
> return IRQ_HANDLED;
> }
>
> +static uint32_t msm_read_timer_count(struct msm_clock *clock,
> + enum timer_location global)
> +{
> + uint32_t t1;
> +
> + if (global)
> + t1 = readl(clock->regbase + TIMER_COUNT_VAL + MSM_TMR_GLOBAL);
> + else
> + t1 = readl(clock->regbase + TIMER_COUNT_VAL);
> +
> + return t1;

Adding such conditionals into a fast path is brilliant. Granted, gcc
might optimize it out, but I'm not sure given the number of call sites.

What's the point of this exercise ?

> +}
> +
> static cycle_t msm_gpt_read(struct clocksource *cs)
> {
> - return readl(MSM_GPT_BASE + TIMER_COUNT_VAL);
> + struct msm_clock *clock = &msm_clocks[MSM_CLOCK_GPT];
> + return msm_read_timer_count(clock, GLOBAL_TIMER);

Why don't you store the address of the read register including the
shift value into the msm_clock structure ?

clock->counter_reg
clock->counter_shift

And then embedd the clocksource struct there as well, so you can
dereference msm_clock with container_of() and reduce the 3 read
functions to a single one.

> }
>
> static cycle_t msm_dgt_read(struct clocksource *cs)
> {
> - return readl(MSM_DGT_BASE + TIMER_COUNT_VAL) >> MSM_DGT_SHIFT;
> + struct msm_clock *clock = &msm_clocks[MSM_CLOCK_DGT];
> +
> + return msm_read_timer_count(clock, GLOBAL_TIMER) >> MSM_DGT_SHIFT;
> +}
> +
> +static struct msm_clock *clockevent_to_clock(struct clock_event_device *evt)
> +{
> +#ifdef CONFIG_SMP
> + int i;
> + for (i = 0; i < NR_TIMERS; i++)
> + if (evt == &(msm_clocks[i].clockevent))
> + return &msm_clocks[i];

Why don't you use container_of here as well ?

> + return &msm_clocks[MSM_GLOBAL_TIMER];
> +#else
> + return container_of(evt, struct msm_clock, clockevent);
> +#endif
> }
>
> static int msm_timer_set_next_event(unsigned long cycles,
> struct clock_event_device *evt)
> {
> - struct msm_clock *clock = container_of(evt, struct msm_clock, clockevent);
> - uint32_t now = readl(clock->regbase + TIMER_COUNT_VAL);
> + struct msm_clock *clock = clockevent_to_clock(evt);
> + uint32_t now = msm_read_timer_count(clock, LOCAL_TIMER);
> uint32_t alarm = now + (cycles << clock->shift);
> int late;
>
> writel(alarm, clock->regbase + TIMER_MATCH_VAL);
> - now = readl(clock->regbase + TIMER_COUNT_VAL);
> +
> + now = msm_read_timer_count(clock, LOCAL_TIMER);
> late = now - alarm;
> if (late >= (-2 << clock->shift) && late < DGT_HZ*5) {
> - printk(KERN_NOTICE "msm_timer_set_next_event(%lu) clock %s, "
> - "alarm already expired, now %x, alarm %x, late %d\n",
> - cycles, clock->clockevent.name, now, alarm, late);
> + static int print_limit = 10;
> + if (print_limit > 0) {
> + print_limit--;
> + printk(KERN_NOTICE "msm_timer_set_next_event(%lu) "
> + "clock %s, alarm already expired, now %x, "
> + "alarm %x, late %d%s\n",
> + cycles, clock->clockevent.name, now, alarm, late,
> + print_limit ? "" : " stop printing");
> + }

The generic clockevents layer has already a check for this. No need
for extra printk spam.

> return -ETIME;
> }
> return 0;
> @@ -107,7 +171,11 @@ static int msm_timer_set_next_event(unsigned long cycles,
> static void msm_timer_set_mode(enum clock_event_mode mode,
> struct clock_event_device *evt)
> {
> - struct msm_clock *clock = container_of(evt, struct msm_clock, clockevent);
> + struct msm_clock *clock = clockevent_to_clock(evt);
> + unsigned long irq_flags;
> +
> + local_irq_save(irq_flags);

Always called with interrupts disabled.

> +
> switch (mode) {
> case CLOCK_EVT_MODE_RESUME:
> case CLOCK_EVT_MODE_PERIODIC:
> @@ -120,6 +188,7 @@ static void msm_timer_set_mode(enum clock_event_mode mode,
> writel(0, clock->regbase + TIMER_ENABLE);
> break;
> }
> + local_irq_restore(irq_flags);
> }
>
> static struct msm_clock msm_clocks[] = {
> @@ -220,6 +289,58 @@ static void __init msm_timer_init(void)
> }
> }
>
> +#ifdef CONFIG_SMP
> +void local_timer_setup(struct clock_event_device *evt)
> +{
> + unsigned long flags;
> + struct msm_clock *clock = &msm_clocks[MSM_GLOBAL_TIMER];
> +
> +#ifdef CONFIG_ARCH_MSM8X60
> + writel(DGT_CLK_CTL_DIV_4, MSM_TMR_BASE + DGT_CLK_CTL);
> +#endif
> +
> + if (!local_clock_event) {
> + writel(0, clock->regbase + TIMER_ENABLE);
> + writel(0, clock->regbase + TIMER_CLEAR);
> + writel(~0, clock->regbase + TIMER_MATCH_VAL);
> + }
> + evt->irq = clock->irq.irq;
> + evt->name = "local_timer";
> + evt->features = CLOCK_EVT_FEAT_ONESHOT;
> + evt->rating = clock->clockevent.rating;
> + evt->set_mode = msm_timer_set_mode;
> + evt->set_next_event = msm_timer_set_next_event;
> + evt->shift = clock->clockevent.shift;
> + evt->mult = div_sc(clock->freq, NSEC_PER_SEC, evt->shift);
> + evt->max_delta_ns =
> + clockevent_delta2ns(0xf0000000 >> clock->shift, evt);
> + evt->min_delta_ns = clockevent_delta2ns(4, evt);
> + evt->cpumask = cpumask_of(smp_processor_id());
> +
> + local_clock_event = evt;
> +
> + local_irq_save(flags);
> + get_irq_chip(clock->irq.irq)->unmask(clock->irq.irq);

Why are you fiddling with the irqchip functions directly ? Please use
disable_irq/enable_irq if at all.

> + local_irq_restore(flags);
> +
> + clockevents_register_device(evt);
> +}
> +
> +int local_timer_ack(void)
> +{
> + return 1;

Shouldn't that be an inline ? Why call code which the compiler
could optimize out?

> +}
> +
> +#ifdef CONFIG_HOTPLUG_CPU
> +void __cpuexit local_timer_stop(void)
> +{
> + local_clock_event->set_mode(CLOCK_EVT_MODE_SHUTDOWN, local_clock_event);

Aarg. No. The generic code already handles cpu hotplug.

Thanks,

tglx

2010-12-06 10:21:17

by Russell King - ARM Linux

[permalink] [raw]
Subject: Re: [PATCH 3/5] msm: timer: SMP timer support for msm

On Mon, Dec 06, 2010 at 10:56:14AM +0100, Thomas Gleixner wrote:
> > + local_irq_save(flags);
> > + get_irq_chip(clock->irq.irq)->unmask(clock->irq.irq);
>
> Why are you fiddling with the irqchip functions directly ? Please use
> disable_irq/enable_irq if at all.

PPI. The interrupt has to be enabled by the very same CPU that wants
to receive the interrupt. Other CPUs on the system do not have access
to the interrupt enable bits for PPIs.

That's something which genirq can't handle because it doesn't _actually_
support real per-CPU interrupts - iow, ones which are truly private to
CPU N.

Eg, if IRQ 29 is the local timer interrupt, then CPU0 has its own IRQ29
which is distinctly different - and has separate enable registers and
ultimately different timer hardware - from CPU1's IRQ29.

On the other SMP platforms, these interrupts aren't handled by genirq,
but we do control them via code like the above (which I'm about to kill
off and move that detail into gic.c.)

2010-12-06 11:12:17

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [PATCH 3/5] msm: timer: SMP timer support for msm

On Mon, 6 Dec 2010, Russell King - ARM Linux wrote:

> On Mon, Dec 06, 2010 at 10:56:14AM +0100, Thomas Gleixner wrote:
> > > + local_irq_save(flags);
> > > + get_irq_chip(clock->irq.irq)->unmask(clock->irq.irq);
> >
> > Why are you fiddling with the irqchip functions directly ? Please use
> > disable_irq/enable_irq if at all.
>
> PPI. The interrupt has to be enabled by the very same CPU that wants
> to receive the interrupt. Other CPUs on the system do not have access
> to the interrupt enable bits for PPIs.

That's fine, but the code is called on that very cpu anyway, so
enable_irq() ends up calling the very same chip->unmask().

> That's something which genirq can't handle because it doesn't _actually_
> support real per-CPU interrupts - iow, ones which are truly private to
> CPU N.

So what you want to avoid are the enable/disable_irq() side effects
(setting/clearing the IRQ_DISABLED flags etc.) as they would apply to
the other cpus as well - which would be bogus of course.

We could actually solve that in the genirq code in a halfways simple
way. Now that we get the references to the irq descriptors via
irq_to_desc() we could add a function which marks an irq as percpu and
use the main irq descriptor as a place holder which allocates percpu
memory for the real descriptors. irq_to_desc() would lookup the main
descriptor and hand back the one for the current cpu if the percpu
pointer is set. That would allow you to use the generic functions at
least with some care (functions need to be called from migration
disabled code).

Thanks,

tglx

2010-12-06 20:02:12

by Valdis Klētnieks

[permalink] [raw]
Subject: Re: [PATCH 1/5] msm: Secure Channel Manager (SCM) support

On Sun, 05 Dec 2010 23:16:14 PST, Jeff Ohlstein said:

> SCM is the protocol used to communicate between the secure and
> non-secure code executing on the applications processor.

Does the presence of SCM hardware imply SMP? From reading this, it's unclear
why the dependence on SMP - it looks like scm.c is something that hardware
with one ARM processor and a 'secure processor' would still want. Or is the
'secure processor' just another (arbitrarily labeled) ARM CPU participating
as a full SMP processor?



2010-12-06 20:52:50

by Russell King - ARM Linux

[permalink] [raw]
Subject: Re: [PATCH 1/5] msm: Secure Channel Manager (SCM) support

On Mon, Dec 06, 2010 at 03:00:48PM -0500, [email protected] wrote:
> On Sun, 05 Dec 2010 23:16:14 PST, Jeff Ohlstein said:
>
> > SCM is the protocol used to communicate between the secure and
> > non-secure code executing on the applications processor.
>
> Does the presence of SCM hardware imply SMP? From reading this, it's unclear
> why the dependence on SMP - it looks like scm.c is something that hardware
> with one ARM processor and a 'secure processor' would still want. Or is the
> 'secure processor' just another (arbitrarily labeled) ARM CPU participating
> as a full SMP processor?

It's referring to the secure mode, which is a separate address space and
system running on the same CPU.

2010-12-07 04:49:13

by Jeff Ohlstein

[permalink] [raw]
Subject: Re: [PATCH 3/5] msm: timer: SMP timer support for msm

Thomas Gleixner wrote:
> On Sun, 5 Dec 2010, Jeff Ohlstein wrote:
>> static irqreturn_t msm_timer_interrupt(int irq, void *dev_id)
>> {
>> struct clock_event_device *evt = dev_id;
>> + if (smp_processor_id() != 0)
>> + evt = local_clock_event;
>
> Why is dev_id not pointing to the correct device in the first place?
dev_id is specified per struct irqaction, which is registered once per
irq number. However, each core has a separate clock_event_device. Since
the timer irq has the same irq number on both cores, we need to know
what core we are on to know which clock_event_device is the correct one.
>>
>> +static uint32_t msm_read_timer_count(struct msm_clock *clock,
>> + enum timer_location global)
>> +{
>> + uint32_t t1;
>> +
>> + if (global)
>> + t1 = readl(clock->regbase + TIMER_COUNT_VAL + MSM_TMR_GLOBAL);
>> + else
>> + t1 = readl(clock->regbase + TIMER_COUNT_VAL);
>> +
>> + return t1;
>
> Adding such conditionals into a fast path is brilliant. Granted, gcc
> might optimize it out, but I'm not sure given the number of call sites.
>
> What's the point of this exercise ?
The reason is that I wanted a common function for reading a timer count:
sometimes we want to read the cpu-local timer, as in set_next_event, and
sometimes the global timer, which is at a different address. However, you
are right that it is silly to put a conditional there, especially when
the branch I want is known statically at the callsite.
>> +}
>> +
>> static cycle_t msm_gpt_read(struct clocksource *cs)
>> {
>> - return readl(MSM_GPT_BASE + TIMER_COUNT_VAL);
>> + struct msm_clock *clock = &msm_clocks[MSM_CLOCK_GPT];
>> + return msm_read_timer_count(clock, GLOBAL_TIMER);
>
> Why don't you store the address of the read register including the
> shift value into the msm_clock structure ?
>
> clock->counter_reg
> clock->counter_shift
>
> And then embedd the clocksource struct there as well, so you can
> dereference msm_clock with container_of() and reduce the 3 read
> functions to a single one.
Done.
>> +static struct msm_clock *clockevent_to_clock(struct clock_event_device *evt)
>> +{
>> +#ifdef CONFIG_SMP
>> + int i;
>> + for (i = 0; i < NR_TIMERS; i++)
>> + if (evt == &(msm_clocks[i].clockevent))
>> + return &msm_clocks[i];
>
> Why don't you use container_of here as well ?
The clockevent could be the local_clock_event, which is not embedded
into a struct msm_clock. However, its parameters will be the same as one
of the existing msm_clock entries, so use that.
>> if (late >= (-2 << clock->shift) && late < DGT_HZ*5) {
>> - printk(KERN_NOTICE "msm_timer_set_next_event(%lu) clock %s, "
>> - "alarm already expired, now %x, alarm %x, late %d\n",
>> - cycles, clock->clockevent.name, now, alarm, late);
>> + static int print_limit = 10;
>> + if (print_limit > 0) {
>> + print_limit--;
>> + printk(KERN_NOTICE "msm_timer_set_next_event(%lu) "
>> + "clock %s, alarm already expired, now %x, "
>> + "alarm %x, late %d%s\n",
>> + cycles, clock->clockevent.name, now, alarm, late,
>> + print_limit ? "" : " stop printing");
>> + }
>
> The generic clockevents layer has already a check for this. No need
> for extra printk spam.
Done.
>> @@ -107,7 +171,11 @@ static int msm_timer_set_next_event(unsigned long cycles,
>> static void msm_timer_set_mode(enum clock_event_mode mode,
>> struct clock_event_device *evt)
>> {
>> - struct msm_clock *clock = container_of(evt, struct msm_clock, clockevent);
>> + struct msm_clock *clock = clockevent_to_clock(evt);
>> + unsigned long irq_flags;
>> +
>> + local_irq_save(irq_flags);
>
> Always called with interrupts disabled.
Done.
>> + local_clock_event = evt;
>> +
>> + local_irq_save(flags);
>> + get_irq_chip(clock->irq.irq)->unmask(clock->irq.irq);
> Why are you fiddling with the irqchip functions directly ? Please use
> disable_irq/enable_irq if at all.
As stated by Russell, this is due to the fact that the timer interrupts
are private to each core, and share the same irq number on each core.
>> +int local_timer_ack(void)
>> +{
>> + return 1;
>
> Shouldn't that be an inline ? Why call code which the compiler
> could optimize out?
Done.
>> +#ifdef CONFIG_HOTPLUG_CPU
>> +void __cpuexit local_timer_stop(void)
>> +{
>> + local_clock_event->set_mode(CLOCK_EVT_MODE_SHUTDOWN, local_clock_event);
>
> Aarg. No. The generic code already handles cpu hotplug.
So what needs to be done in local_timer_stop? Just stopping the timer
from ticking? Aren't I going to want to do all the same things my
set_mode function does in the shutdown case? I understand not calling
into the clockevents functions; would you be opposed to me directly
calling my set_mode function?

-Jeff

--
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.

2010-12-07 08:17:34

by Russell King - ARM Linux

[permalink] [raw]
Subject: Re: [PATCH 3/5] msm: timer: SMP timer support for msm

On Mon, Dec 06, 2010 at 10:56:14AM +0100, Thomas Gleixner wrote:
> On Sun, 5 Dec 2010, Jeff Ohlstein wrote:
> > +#ifdef CONFIG_HOTPLUG_CPU
> > +void __cpuexit local_timer_stop(void)
> > +{
> > + local_clock_event->set_mode(CLOCK_EVT_MODE_SHUTDOWN, local_clock_event);
>
> Aarg. No. The generic code already handles cpu hotplug.

Can you show where this is handled by the generic code? I can't find
where this is handled.

2010-12-15 07:48:30

by Pavel Machek

[permalink] [raw]
Subject: Re: [PATCH 1/5] msm: Secure Channel Manager (SCM) support

Hi!

> +/**
> + * struct scm_response - one SCM response buffer
> + * @len: total available memory for response
> + * @buf_offset: start of response data relative to start of scm_response
> + * @is_complete: indicates if the command has finished processing
> + */
> +struct scm_response {
> + u32 len;
> + u32 buf_offset;
> + u32 is_complete;
> +};

Alignment is off here.

> +static u32 smc(dma_addr_t cmd_addr)
> +{
> + int context_id;
> + register u32 r0 asm("r0") = 1;
> + register u32 r1 asm("r1") = (u32)&context_id;
> + register u32 r2 asm("r2") = (u32)cmd_addr;

Are these necessary?

> + asm(
> + __asmeq("%0", "r0")
> + __asmeq("%1", "r0")
> + __asmeq("%2", "r1")
> + __asmeq("%3", "r2")
> + "smc #0 @ switch to secure world\n"
> + : "=r" (r0)
> + : "r" (r0), "r" (r1), "r" (r2)
> + : "r3");
> + return r0;
> +}

> +u32 scm_get_version(void)
> +{
> + int context_id;
> + static u32 version = -1;
> + register u32 r0 asm("r0") = 0x1 << 8;
> + register u32 r1 asm("r1") = (u32)&context_id;

And does this even work?

> + if (version != -1)
> + return version;
> +
> + mutex_lock(&scm_lock);
> + asm(
> + __asmeq("%0", "r1")
> + __asmeq("%1", "r0")
> + __asmeq("%2", "r1")
> + "smc #0 @ switch to secure world\n"
> + : "=r" (r1)
> + : "r" (r0), "r" (r1)
> + : "r2", "r3");
> + version = r1;
> + mutex_unlock(&scm_lock);
> +
> + return version;
> +}
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2010-12-15 14:06:01

by David Brown

[permalink] [raw]
Subject: Re: [PATCH 1/5] msm: Secure Channel Manager (SCM) support

On Wed, Dec 15, 2010 at 08:48:11AM +0100, Pavel Machek wrote:

> > +static u32 smc(dma_addr_t cmd_addr)
> > +{
> > + int context_id;
> > + register u32 r0 asm("r0") = 1;
> > + register u32 r1 asm("r1") = (u32)&context_id;
> > + register u32 r2 asm("r2") = (u32)cmd_addr;
>
> Are these necessary?

The values have to be in specific registers. Without them it doesn't
generate the right code.

> > + asm(
> > + __asmeq("%0", "r0")
> > + __asmeq("%1", "r0")
> > + __asmeq("%2", "r1")
> > + __asmeq("%3", "r2")
> > + "smc #0 @ switch to secure world\n"
> > + : "=r" (r0)
> > + : "r" (r0), "r" (r1), "r" (r2)
> > + : "r3");
> > + return r0;
> > +}
>
> > +u32 scm_get_version(void)
> > +{
> > + int context_id;
> > + static u32 version = -1;
> > + register u32 r0 asm("r0") = 0x1 << 8;
> > + register u32 r1 asm("r1") = (u32)&context_id;
>
> And does this even work?

In what sense? It generates the desired code.

> > + if (version != -1)
> > + return version;
> > +
> > + mutex_lock(&scm_lock);
> > + asm(
> > + __asmeq("%0", "r1")
> > + __asmeq("%1", "r0")
> > + __asmeq("%2", "r1")
> > + "smc #0 @ switch to secure world\n"
> > + : "=r" (r1)
> > + : "r" (r0), "r" (r1)
> > + : "r2", "r3");
> > + version = r1;
> > + mutex_unlock(&scm_lock);
> > +
> > + return version;
> > +}

--
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.

2010-12-15 16:07:31

by Russell King - ARM Linux

[permalink] [raw]
Subject: Re: [PATCH 1/5] msm: Secure Channel Manager (SCM) support

On Wed, Dec 15, 2010 at 06:05:58AM -0800, David Brown wrote:
> On Wed, Dec 15, 2010 at 08:48:11AM +0100, Pavel Machek wrote:
>
> > > +static u32 smc(dma_addr_t cmd_addr)
> > > +{
> > > + int context_id;
> > > + register u32 r0 asm("r0") = 1;
> > > + register u32 r1 asm("r1") = (u32)&context_id;
> > > + register u32 r2 asm("r2") = (u32)cmd_addr;
> >
> > Are these necessary?
>
> The values have to be in specific registers. Without them it doesn't
> generate the right code.
...
> > > + int context_id;
> > > + static u32 version = -1;
> > > + register u32 r0 asm("r0") = 0x1 << 8;
> > > + register u32 r1 asm("r1") = (u32)&context_id;
> >
> > And does this even work?
>
> In what sense? It generates the desired code.

What you're doing is entirely valid and supported by gcc. There's no
problem with it.

> > > + mutex_lock(&scm_lock);
> > > + asm(
> > > + __asmeq("%0", "r1")
> > > + __asmeq("%1", "r0")
> > > + __asmeq("%2", "r1")

And even if you use an older gcc version where this trips over bugs,
these asmeq() will save you from having silently generated wrong code.

So don't worry about this, you're not doing anything wrong.

2010-12-20 12:22:06

by Russell King - ARM Linux

[permalink] [raw]
Subject: Re: [PATCH 3/5] msm: timer: SMP timer support for msm

On Tue, Dec 07, 2010 at 08:17:01AM +0000, Russell King - ARM Linux wrote:
> On Mon, Dec 06, 2010 at 10:56:14AM +0100, Thomas Gleixner wrote:
> > On Sun, 5 Dec 2010, Jeff Ohlstein wrote:
> > > +#ifdef CONFIG_HOTPLUG_CPU
> > > +void __cpuexit local_timer_stop(void)
> > > +{
> > > + local_clock_event->set_mode(CLOCK_EVT_MODE_SHUTDOWN, local_clock_event);
> >
> > Aarg. No. The generic code already handles cpu hotplug.
>
> Can you show where this is handled by the generic code? I can't find
> where this is handled.

Having checked this on Versatile Express with a printk in the ->set_mode
callback, the generic code does not handle shutting down clock event
devices on CPU hot-unplug.

I've analyzed why this is:

The hrtimer() code hooks into the hotplug notifier list, and when it
receives a CPU_DEAD or CPU_DEAD_FROZEN message, it calls
clockevents_notify(CLOCK_EVT_NOTIFY_CPU_DEAD).

The clockevents code calls its own notifier list, which ultimately calls
tick_notify(). This eventually calls tick_shutdown().

tick_shutdown() looks up the per-cpu tick device, uncouples the clock
event device from the tick device, and sets the clock event device mode
to CLOCK_EVT_MODE_UNUSED. Note this comment:

/*
* Prevent that the clock events layer tries to call
* the set mode function!
*/
dev->mode = CLOCK_EVT_MODE_UNUSED;

translating that into English:

"Prevent the clock events layer calling the set_mode function"

That might be it - to confirm:

It then calls clockevents_exchange_device(dev, NULL).

clockevents_exchange_device(old,new) calls
clockevents_set_mode(old, CLOCK_EVT_MODE_UNUSED), which then does:

if (dev->mode != mode) {

However, as dev->mode was set to CLOCK_EVT_MODE_UNUSED, and mode is
CLOCK_EVT_MODE_UNUSED, we don't call into the code which supplies the
clock event device at all, leaving it with no way to fully shutdown
its hardware. It seems this is done on purpose, though it's not clear
why.

This is why arch code, such as ARM, has to implement its own solution
to shutdown clock event devices external to the generic clockevents code.
Until we have a generic solution, I think I'm going to take the above
code extract into the ARM generic local timer code and kill off the
local_timer_stop() function.