2024-05-27 15:51:47

by Sasha Levin

Subject: [PATCH AUTOSEL 6.9 01/23] drm/amd/display: Exit idle optimizations before HDCP execution

From: Nicholas Kazlauskas <[email protected]>

[ Upstream commit f30a3bea92bdab398531129d187629fb1d28f598 ]

[WHY]
PSP can access DCN registers during command submission, and we need
to ensure that DCN is not in PG before doing so.

[HOW]
Add a callback to DM to lock and notify DC for idle optimization exit.
This can't be done in DC directly because of a potential race condition
between the link protection thread and the rest of the DM operation.

Cc: Mario Limonciello <[email protected]>
Cc: Alex Deucher <[email protected]>
Reviewed-by: Charlene Liu <[email protected]>
Acked-by: Alex Hung <[email protected]>
Signed-off-by: Nicholas Kazlauskas <[email protected]>
Tested-by: Daniel Wheeler <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/gpu/drm/amd/display/modules/hdcp/hdcp.c | 10 ++++++++++
drivers/gpu/drm/amd/display/modules/inc/mod_hdcp.h | 8 ++++++++
2 files changed, 18 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp.c
index 5e01c6e24cbc8..9a5a1726acaf8 100644
--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp.c
+++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp.c
@@ -88,6 +88,14 @@ static uint8_t is_cp_desired_hdcp2(struct mod_hdcp *hdcp)
!hdcp->connection.is_hdcp2_revoked;
}

+static void exit_idle_optimizations(struct mod_hdcp *hdcp)
+{
+ struct mod_hdcp_dm *dm = &hdcp->config.dm;
+
+ if (dm->funcs.exit_idle_optimizations)
+ dm->funcs.exit_idle_optimizations(dm->handle);
+}
+
static enum mod_hdcp_status execution(struct mod_hdcp *hdcp,
struct mod_hdcp_event_context *event_ctx,
union mod_hdcp_transition_input *input)
@@ -543,6 +551,8 @@ enum mod_hdcp_status mod_hdcp_process_event(struct mod_hdcp *hdcp,
memset(&event_ctx, 0, sizeof(struct mod_hdcp_event_context));
event_ctx.event = event;

+ exit_idle_optimizations(hdcp);
+
/* execute and transition */
exec_status = execution(hdcp, &event_ctx, &hdcp->auth.trans_input);
trans_status = transition(
diff --git a/drivers/gpu/drm/amd/display/modules/inc/mod_hdcp.h b/drivers/gpu/drm/amd/display/modules/inc/mod_hdcp.h
index a4d344a4db9e1..cdb17b093f2b8 100644
--- a/drivers/gpu/drm/amd/display/modules/inc/mod_hdcp.h
+++ b/drivers/gpu/drm/amd/display/modules/inc/mod_hdcp.h
@@ -156,6 +156,13 @@ struct mod_hdcp_ddc {
} funcs;
};

+struct mod_hdcp_dm {
+ void *handle;
+ struct {
+ void (*exit_idle_optimizations)(void *handle);
+ } funcs;
+};
+
struct mod_hdcp_psp {
void *handle;
void *funcs;
@@ -272,6 +279,7 @@ struct mod_hdcp_display_query {
struct mod_hdcp_config {
struct mod_hdcp_psp psp;
struct mod_hdcp_ddc ddc;
+ struct mod_hdcp_dm dm;
uint8_t index;
};

--
2.43.0
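
For reference, a minimal userspace sketch of the optional-callback pattern this patch adds (the struct and function names below are hypothetical stand-ins, not the real DC/DM interfaces):

#include <stdio.h>

struct my_dm {
    void *handle;
    struct {
        void (*exit_idle_optimizations)(void *handle);
    } funcs;
};

struct my_hdcp {
    struct my_dm dm;
};

/* Only call the hook if the integrator registered one. */
static void exit_idle_optimizations(struct my_hdcp *hdcp)
{
    struct my_dm *dm = &hdcp->dm;

    if (dm->funcs.exit_idle_optimizations)
        dm->funcs.exit_idle_optimizations(dm->handle);
}

static void dm_exit_idle(void *handle)
{
    printf("%s: idle optimizations exited\n", (const char *)handle);
}

static void run_psp_command(struct my_hdcp *hdcp)
{
    /* Mirrors the shape of mod_hdcp_process_event(): leave idle first. */
    exit_idle_optimizations(hdcp);
    printf("submitting PSP command\n");
}

int main(void)
{
    struct my_hdcp hdcp = {
        .dm = {
            .handle = (void *)"dm0",
            .funcs.exit_idle_optimizations = dm_exit_idle,
        },
    };

    run_psp_command(&hdcp);
    return 0;
}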



2024-05-27 15:52:11

by Sasha Levin

Subject: [PATCH AUTOSEL 6.9 03/23] ASoC: Intel: sof_cs42l42: rename BT offload quirk

From: Brent Lu <[email protected]>

[ Upstream commit 109896246a5311aa05692ecf38c0d71e1837fe23 ]

Rename the quirk in preparation for future changes: common quirks will
be defined and handled in the board helper module.

Reviewed-by: Bard Liao <[email protected]>
Signed-off-by: Brent Lu <[email protected]>
Signed-off-by: Pierre-Louis Bossart <[email protected]>
Link: https://msgid.link/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
sound/soc/intel/boards/sof_cs42l42.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/sound/soc/intel/boards/sof_cs42l42.c b/sound/soc/intel/boards/sof_cs42l42.c
index 323b86c42ef95..330d596b2eb6d 100644
--- a/sound/soc/intel/boards/sof_cs42l42.c
+++ b/sound/soc/intel/boards/sof_cs42l42.c
@@ -34,7 +34,7 @@
#define SOF_CS42L42_NUM_HDMIDEV_MASK (GENMASK(9, 7))
#define SOF_CS42L42_NUM_HDMIDEV(quirk) \
(((quirk) << SOF_CS42L42_NUM_HDMIDEV_SHIFT) & SOF_CS42L42_NUM_HDMIDEV_MASK)
-#define SOF_BT_OFFLOAD_PRESENT BIT(25)
+#define SOF_CS42L42_BT_OFFLOAD_PRESENT BIT(25)
#define SOF_CS42L42_SSP_BT_SHIFT 26
#define SOF_CS42L42_SSP_BT_MASK (GENMASK(28, 26))
#define SOF_CS42L42_SSP_BT(quirk) \
@@ -268,7 +268,7 @@ static int sof_audio_probe(struct platform_device *pdev)

ctx->ssp_codec = sof_cs42l42_quirk & SOF_CS42L42_SSP_CODEC_MASK;

- if (sof_cs42l42_quirk & SOF_BT_OFFLOAD_PRESENT)
+ if (sof_cs42l42_quirk & SOF_CS42L42_BT_OFFLOAD_PRESENT)
ctx->bt_offload_present = true;

/* update dai_link */
@@ -306,7 +306,7 @@ static const struct platform_device_id board_ids[] = {
.driver_data = (kernel_ulong_t)(SOF_CS42L42_SSP_CODEC(0) |
SOF_CS42L42_SSP_AMP(1) |
SOF_CS42L42_NUM_HDMIDEV(4) |
- SOF_BT_OFFLOAD_PRESENT |
+ SOF_CS42L42_BT_OFFLOAD_PRESENT |
SOF_CS42L42_SSP_BT(2)),
},
{ }
--
2.43.0


2024-05-27 15:53:12

by Sasha Levin

Subject: [PATCH AUTOSEL 6.9 05/23] platform/x86: toshiba_acpi: Add quirk for buttons on Z830

From: Arvid Norlander <[email protected]>

[ Upstream commit 23f1d8b47d125dcd8c1ec62a91164e6bc5d691d0 ]

The Z830 has some buttons that will only work properly as "quickstart"
buttons. To enable them in that mode, a value between 1 and 7 must be
used for HCI_HOTKEY_EVENT. Windows uses 0x5 on this laptop, so use that for
maximum predictability and compatibility.

As there is not yet a known way of auto-detection, this patch uses a DMI
quirk table. A module parameter is exposed to allow setting this on other
models for testing.

Signed-off-by: Arvid Norlander <[email protected]>
Tested-by: Hans de Goede <[email protected]>
Reviewed-by: Hans de Goede <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Hans de Goede <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/platform/x86/toshiba_acpi.c | 36 ++++++++++++++++++++++++++---
1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/drivers/platform/x86/toshiba_acpi.c b/drivers/platform/x86/toshiba_acpi.c
index 77244c9aa60d2..16e941449b144 100644
--- a/drivers/platform/x86/toshiba_acpi.c
+++ b/drivers/platform/x86/toshiba_acpi.c
@@ -57,6 +57,11 @@ module_param(turn_on_panel_on_resume, int, 0644);
MODULE_PARM_DESC(turn_on_panel_on_resume,
"Call HCI_PANEL_POWER_ON on resume (-1 = auto, 0 = no, 1 = yes");

+static int hci_hotkey_quickstart = -1;
+module_param(hci_hotkey_quickstart, int, 0644);
+MODULE_PARM_DESC(hci_hotkey_quickstart,
+ "Call HCI_HOTKEY_EVENT with value 0x5 for quickstart button support (-1 = auto, 0 = no, 1 = yes");
+
#define TOSHIBA_WMI_EVENT_GUID "59142400-C6A3-40FA-BADB-8A2652834100"

/* Scan code for Fn key on TOS1900 models */
@@ -136,6 +141,7 @@ MODULE_PARM_DESC(turn_on_panel_on_resume,
#define HCI_ACCEL_MASK 0x7fff
#define HCI_ACCEL_DIRECTION_MASK 0x8000
#define HCI_HOTKEY_DISABLE 0x0b
+#define HCI_HOTKEY_ENABLE_QUICKSTART 0x05
#define HCI_HOTKEY_ENABLE 0x09
#define HCI_HOTKEY_SPECIAL_FUNCTIONS 0x10
#define HCI_LCD_BRIGHTNESS_BITS 3
@@ -2731,10 +2737,15 @@ static int toshiba_acpi_enable_hotkeys(struct toshiba_acpi_dev *dev)
return -ENODEV;

/*
+ * Enable quickstart buttons if supported.
+ *
* Enable the "Special Functions" mode only if they are
* supported and if they are activated.
*/
- if (dev->kbd_function_keys_supported && dev->special_functions)
+ if (hci_hotkey_quickstart)
+ result = hci_write(dev, HCI_HOTKEY_EVENT,
+ HCI_HOTKEY_ENABLE_QUICKSTART);
+ else if (dev->kbd_function_keys_supported && dev->special_functions)
result = hci_write(dev, HCI_HOTKEY_EVENT,
HCI_HOTKEY_SPECIAL_FUNCTIONS);
else
@@ -3258,7 +3269,14 @@ static const char *find_hci_method(acpi_handle handle)
* works. toshiba_acpi_resume() uses HCI_PANEL_POWER_ON to avoid changing
* the configured brightness level.
*/
-static const struct dmi_system_id turn_on_panel_on_resume_dmi_ids[] = {
+#define QUIRK_TURN_ON_PANEL_ON_RESUME BIT(0)
+/*
+ * Some Toshibas use "quickstart" keys. On these, HCI_HOTKEY_EVENT must use
+ * the value HCI_HOTKEY_ENABLE_QUICKSTART.
+ */
+#define QUIRK_HCI_HOTKEY_QUICKSTART BIT(1)
+
+static const struct dmi_system_id toshiba_dmi_quirks[] = {
{
/* Toshiba Portégé R700 */
/* https://bugzilla.kernel.org/show_bug.cgi?id=21012 */
@@ -3266,6 +3284,7 @@ static const struct dmi_system_id turn_on_panel_on_resume_dmi_ids[] = {
DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
DMI_MATCH(DMI_PRODUCT_NAME, "PORTEGE R700"),
},
+ .driver_data = (void *)QUIRK_TURN_ON_PANEL_ON_RESUME,
},
{
/* Toshiba Satellite/Portégé R830 */
@@ -3275,6 +3294,7 @@ static const struct dmi_system_id turn_on_panel_on_resume_dmi_ids[] = {
DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
DMI_MATCH(DMI_PRODUCT_NAME, "R830"),
},
+ .driver_data = (void *)QUIRK_TURN_ON_PANEL_ON_RESUME,
},
{
/* Toshiba Satellite/Portégé Z830 */
@@ -3282,6 +3302,7 @@ static const struct dmi_system_id turn_on_panel_on_resume_dmi_ids[] = {
DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
DMI_MATCH(DMI_PRODUCT_NAME, "Z830"),
},
+ .driver_data = (void *)(QUIRK_TURN_ON_PANEL_ON_RESUME | QUIRK_HCI_HOTKEY_QUICKSTART),
},
};

@@ -3290,6 +3311,8 @@ static int toshiba_acpi_add(struct acpi_device *acpi_dev)
struct toshiba_acpi_dev *dev;
const char *hci_method;
u32 dummy;
+ const struct dmi_system_id *dmi_id;
+ long quirks = 0;
int ret = 0;

if (toshiba_acpi)
@@ -3442,8 +3465,15 @@ static int toshiba_acpi_add(struct acpi_device *acpi_dev)
}
#endif

+ dmi_id = dmi_first_match(toshiba_dmi_quirks);
+ if (dmi_id)
+ quirks = (long)dmi_id->driver_data;
+
if (turn_on_panel_on_resume == -1)
- turn_on_panel_on_resume = dmi_check_system(turn_on_panel_on_resume_dmi_ids);
+ turn_on_panel_on_resume = !!(quirks & QUIRK_TURN_ON_PANEL_ON_RESUME);
+
+ if (hci_hotkey_quickstart == -1)
+ hci_hotkey_quickstart = !!(quirks & QUIRK_HCI_HOTKEY_QUICKSTART);

toshiba_wwan_available(dev);
if (dev->wwan_supported)
--
2.43.0
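
A standalone sketch of the quirk-table-plus-module-parameter pattern used above, assuming hypothetical table entries and a simplified product-name match in place of the real DMI machinery:

#include <stdio.h>
#include <string.h>

#define QUIRK_TURN_ON_PANEL_ON_RESUME (1UL << 0)
#define QUIRK_HCI_HOTKEY_QUICKSTART   (1UL << 1)

struct quirk_entry {
    const char *product;   /* stands in for the DMI product-name match */
    unsigned long quirks;  /* stands in for dmi_system_id.driver_data */
};

static const struct quirk_entry quirk_table[] = {
    { "PORTEGE R700", QUIRK_TURN_ON_PANEL_ON_RESUME },
    { "Z830", QUIRK_TURN_ON_PANEL_ON_RESUME | QUIRK_HCI_HOTKEY_QUICKSTART },
};

/* -1 = auto (take the quirk table's answer), 0 = force off, 1 = force on,
 * mirroring how the hci_hotkey_quickstart module parameter is resolved. */
static int resolve(int param, const char *product, unsigned long bit)
{
    unsigned long quirks = 0;
    size_t i;

    if (param != -1)
        return param;

    for (i = 0; i < sizeof(quirk_table) / sizeof(quirk_table[0]); i++) {
        if (!strcmp(quirk_table[i].product, product)) {
            quirks = quirk_table[i].quirks;
            break;
        }
    }
    return !!(quirks & bit);
}

int main(void)
{
    printf("Z830 quickstart: %d\n",
           resolve(-1, "Z830", QUIRK_HCI_HOTKEY_QUICKSTART));
    printf("R700 quickstart: %d\n",
           resolve(-1, "PORTEGE R700", QUIRK_HCI_HOTKEY_QUICKSTART));
    printf("forced on:       %d\n",
           resolve(1, "PORTEGE R700", QUIRK_HCI_HOTKEY_QUICKSTART));
    return 0;
}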


2024-05-27 15:53:31

by Sasha Levin

Subject: [PATCH AUTOSEL 6.9 06/23] cgroup/cpuset: Make cpuset hotplug processing synchronous

From: Waiman Long <[email protected]>

[ Upstream commit 2125c0034c5dfd61171b494bd309bb7637bff6eb ]

Since commit 3a5a6d0c2b03 ("cpuset: don't nest cgroup_mutex inside
get_online_cpus()"), cpuset hotplug was done asynchronously via a work
function. This is to avoid recursive locking of cgroup_mutex.

Since then, the cgroup locking scheme has changed quite a bit. A
cpuset_mutex was introduced to protect cpuset-specific operations.
The cpuset_mutex was then replaced by a cpuset_rwsem. With commit
d74b27d63a8b ("cgroup/cpuset: Change cpuset_rwsem and hotplug lock
order"), cpu_hotplug_lock is acquired before cpuset_rwsem. Later on,
cpuset_rwsem was reverted back to cpuset_mutex. All these locking changes
allow the hotplug code to call into the cpuset core directly.

The following commits were also merged due to the asynchronous nature
of cpuset hotplug processing.

- commit b22afcdf04c9 ("cpu/hotplug: Cure the cpusets trainwreck")
- commit 50e76632339d ("sched/cpuset/pm: Fix cpuset vs. suspend-resume
bugs")
- commit 28b89b9e6f7b ("cpuset: handle race between CPU hotplug and
cpuset_hotplug_work")

Clean up all these bandages by making cpuset hotplug
processing synchronous again with the exception that the call to
cgroup_transfer_tasks() to transfer tasks out of an empty cgroup v1
cpuset, if necessary, will still be done via a work function due to the
existing cgroup_mutex -> cpu_hotplug_lock dependency. It is possible
to reverse that dependency, but that will require updating a number of
different cgroup controllers. This special hotplug code path should be
rarely taken anyway.

As all the cpuset states will be updated by the end of the hotplug
operation, we can revert most of the above commits except commit
50e76632339d ("sched/cpuset/pm: Fix cpuset vs. suspend-resume bugs"),
which is only partially reverted. Also remove some cpus_read_trylock()
attempts in the cpuset partition code, as they are no longer necessary
since the cpu_hotplug_lock is now held for the whole duration of the
cpuset hotplug code path.

Signed-off-by: Waiman Long <[email protected]>
Tested-by: Valentin Schneider <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
include/linux/cpuset.h | 3 -
kernel/cgroup/cpuset.c | 141 ++++++++++++++++-------------------------
kernel/cpu.c | 48 --------------
kernel/power/process.c | 2 -
4 files changed, 56 insertions(+), 138 deletions(-)

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 0ce6ff0d9c9aa..de4cf0ee96f79 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -70,7 +70,6 @@ extern int cpuset_init(void);
extern void cpuset_init_smp(void);
extern void cpuset_force_rebuild(void);
extern void cpuset_update_active_cpus(void);
-extern void cpuset_wait_for_hotplug(void);
extern void inc_dl_tasks_cs(struct task_struct *task);
extern void dec_dl_tasks_cs(struct task_struct *task);
extern void cpuset_lock(void);
@@ -185,8 +184,6 @@ static inline void cpuset_update_active_cpus(void)
partition_sched_domains(1, NULL, NULL);
}

-static inline void cpuset_wait_for_hotplug(void) { }
-
static inline void inc_dl_tasks_cs(struct task_struct *task) { }
static inline void dec_dl_tasks_cs(struct task_struct *task) { }
static inline void cpuset_lock(void) { }
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 4237c8748715d..d8d3439eda4e4 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -201,6 +201,14 @@ struct cpuset {
struct list_head remote_sibling;
};

+/*
+ * Legacy hierarchy call to cgroup_transfer_tasks() is handled asynchrously
+ */
+struct cpuset_remove_tasks_struct {
+ struct work_struct work;
+ struct cpuset *cs;
+};
+
/*
* Exclusive CPUs distributed out to sub-partitions of top_cpuset
*/
@@ -449,12 +457,6 @@ static DEFINE_SPINLOCK(callback_lock);

static struct workqueue_struct *cpuset_migrate_mm_wq;

-/*
- * CPU / memory hotplug is handled asynchronously.
- */
-static void cpuset_hotplug_workfn(struct work_struct *work);
-static DECLARE_WORK(cpuset_hotplug_work, cpuset_hotplug_workfn);
-
static DECLARE_WAIT_QUEUE_HEAD(cpuset_attach_wq);

static inline void check_insane_mems_config(nodemask_t *nodes)
@@ -540,22 +542,10 @@ static void guarantee_online_cpus(struct task_struct *tsk,
rcu_read_lock();
cs = task_cs(tsk);

- while (!cpumask_intersects(cs->effective_cpus, pmask)) {
+ while (!cpumask_intersects(cs->effective_cpus, pmask))
cs = parent_cs(cs);
- if (unlikely(!cs)) {
- /*
- * The top cpuset doesn't have any online cpu as a
- * consequence of a race between cpuset_hotplug_work
- * and cpu hotplug notifier. But we know the top
- * cpuset's effective_cpus is on its way to be
- * identical to cpu_online_mask.
- */
- goto out_unlock;
- }
- }
- cpumask_and(pmask, pmask, cs->effective_cpus);

-out_unlock:
+ cpumask_and(pmask, pmask, cs->effective_cpus);
rcu_read_unlock();
}

@@ -1217,7 +1207,7 @@ static void rebuild_sched_domains_locked(void)
/*
* If we have raced with CPU hotplug, return early to avoid
* passing doms with offlined cpu to partition_sched_domains().
- * Anyways, cpuset_hotplug_workfn() will rebuild sched domains.
+ * Anyways, cpuset_handle_hotplug() will rebuild sched domains.
*
* With no CPUs in any subpartitions, top_cpuset's effective CPUs
* should be the same as the active CPUs, so checking only top_cpuset
@@ -1260,12 +1250,17 @@ static void rebuild_sched_domains_locked(void)
}
#endif /* CONFIG_SMP */

-void rebuild_sched_domains(void)
+static void rebuild_sched_domains_cpuslocked(void)
{
- cpus_read_lock();
mutex_lock(&cpuset_mutex);
rebuild_sched_domains_locked();
mutex_unlock(&cpuset_mutex);
+}
+
+void rebuild_sched_domains(void)
+{
+ cpus_read_lock();
+ rebuild_sched_domains_cpuslocked();
cpus_read_unlock();
}

@@ -2079,14 +2074,11 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,

/*
* For partcmd_update without newmask, it is being called from
- * cpuset_hotplug_workfn() where cpus_read_lock() wasn't taken.
- * Update the load balance flag and scheduling domain if
- * cpus_read_trylock() is successful.
+ * cpuset_handle_hotplug(). Update the load balance flag and
+ * scheduling domain accordingly.
*/
- if ((cmd == partcmd_update) && !newmask && cpus_read_trylock()) {
+ if ((cmd == partcmd_update) && !newmask)
update_partition_sd_lb(cs, old_prs);
- cpus_read_unlock();
- }

notify_partition_change(cs, old_prs);
return 0;
@@ -3599,8 +3591,8 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
* proceeding, so that we don't end up keep removing tasks added
* after execution capability is restored.
*
- * cpuset_hotplug_work calls back into cgroup core via
- * cgroup_transfer_tasks() and waiting for it from a cgroupfs
+ * cpuset_handle_hotplug may call back into cgroup core asynchronously
+ * via cgroup_transfer_tasks() and waiting for it from a cgroupfs
* operation like this one can lead to a deadlock through kernfs
* active_ref protection. Let's break the protection. Losing the
* protection is okay as we check whether @cs is online after
@@ -3609,7 +3601,6 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
*/
css_get(&cs->css);
kernfs_break_active_protection(of->kn);
- flush_work(&cpuset_hotplug_work);

cpus_read_lock();
mutex_lock(&cpuset_mutex);
@@ -4354,6 +4345,16 @@ static void remove_tasks_in_empty_cpuset(struct cpuset *cs)
}
}

+static void cpuset_migrate_tasks_workfn(struct work_struct *work)
+{
+ struct cpuset_remove_tasks_struct *s;
+
+ s = container_of(work, struct cpuset_remove_tasks_struct, work);
+ remove_tasks_in_empty_cpuset(s->cs);
+ css_put(&s->cs->css);
+ kfree(s);
+}
+
static void
hotplug_update_tasks_legacy(struct cpuset *cs,
struct cpumask *new_cpus, nodemask_t *new_mems,
@@ -4383,12 +4384,21 @@ hotplug_update_tasks_legacy(struct cpuset *cs,
/*
* Move tasks to the nearest ancestor with execution resources,
* This is full cgroup operation which will also call back into
- * cpuset. Should be done outside any lock.
+ * cpuset. Execute it asynchronously using workqueue.
*/
- if (is_empty) {
- mutex_unlock(&cpuset_mutex);
- remove_tasks_in_empty_cpuset(cs);
- mutex_lock(&cpuset_mutex);
+ if (is_empty && cs->css.cgroup->nr_populated_csets &&
+ css_tryget_online(&cs->css)) {
+ struct cpuset_remove_tasks_struct *s;
+
+ s = kzalloc(sizeof(*s), GFP_KERNEL);
+ if (WARN_ON_ONCE(!s)) {
+ css_put(&cs->css);
+ return;
+ }
+
+ s->cs = cs;
+ INIT_WORK(&s->work, cpuset_migrate_tasks_workfn);
+ schedule_work(&s->work);
}
}

@@ -4421,30 +4431,6 @@ void cpuset_force_rebuild(void)
force_rebuild = true;
}

-/*
- * Attempt to acquire a cpus_read_lock while a hotplug operation may be in
- * progress.
- * Return: true if successful, false otherwise
- *
- * To avoid circular lock dependency between cpuset_mutex and cpus_read_lock,
- * cpus_read_trylock() is used here to acquire the lock.
- */
-static bool cpuset_hotplug_cpus_read_trylock(void)
-{
- int retries = 0;
-
- while (!cpus_read_trylock()) {
- /*
- * CPU hotplug still in progress. Retry 5 times
- * with a 10ms wait before bailing out.
- */
- if (++retries > 5)
- return false;
- msleep(10);
- }
- return true;
-}
-
/**
* cpuset_hotplug_update_tasks - update tasks in a cpuset for hotunplug
* @cs: cpuset in interest
@@ -4493,13 +4479,11 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
compute_partition_effective_cpumask(cs, &new_cpus);

if (remote && cpumask_empty(&new_cpus) &&
- partition_is_populated(cs, NULL) &&
- cpuset_hotplug_cpus_read_trylock()) {
+ partition_is_populated(cs, NULL)) {
remote_partition_disable(cs, tmp);
compute_effective_cpumask(&new_cpus, cs, parent);
remote = false;
cpuset_force_rebuild();
- cpus_read_unlock();
}

/*
@@ -4519,18 +4503,8 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
else if (is_partition_valid(parent) && is_partition_invalid(cs))
partcmd = partcmd_update;

- /*
- * cpus_read_lock needs to be held before calling
- * update_parent_effective_cpumask(). To avoid circular lock
- * dependency between cpuset_mutex and cpus_read_lock,
- * cpus_read_trylock() is used here to acquire the lock.
- */
if (partcmd >= 0) {
- if (!cpuset_hotplug_cpus_read_trylock())
- goto update_tasks;
-
update_parent_effective_cpumask(cs, partcmd, NULL, tmp);
- cpus_read_unlock();
if ((partcmd == partcmd_invalidate) || is_partition_valid(cs)) {
compute_partition_effective_cpumask(cs, &new_cpus);
cpuset_force_rebuild();
@@ -4558,8 +4532,7 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
}

/**
- * cpuset_hotplug_workfn - handle CPU/memory hotunplug for a cpuset
- * @work: unused
+ * cpuset_handle_hotplug - handle CPU/memory hot{,un}plug for a cpuset
*
* This function is called after either CPU or memory configuration has
* changed and updates cpuset accordingly. The top_cpuset is always
@@ -4573,8 +4546,10 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
*
* Note that CPU offlining during suspend is ignored. We don't modify
* cpusets across suspend/resume cycles at all.
+ *
+ * CPU / memory hotplug is handled synchronously.
*/
-static void cpuset_hotplug_workfn(struct work_struct *work)
+static void cpuset_handle_hotplug(void)
{
static cpumask_t new_cpus;
static nodemask_t new_mems;
@@ -4585,6 +4560,7 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
if (on_dfl && !alloc_cpumasks(NULL, &tmp))
ptmp = &tmp;

+ lockdep_assert_cpus_held();
mutex_lock(&cpuset_mutex);

/* fetch the available cpus/mems and find out which changed how */
@@ -4666,7 +4642,7 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
/* rebuild sched domains if cpus_allowed has changed */
if (cpus_updated || force_rebuild) {
force_rebuild = false;
- rebuild_sched_domains();
+ rebuild_sched_domains_cpuslocked();
}

free_cpumasks(NULL, ptmp);
@@ -4679,12 +4655,7 @@ void cpuset_update_active_cpus(void)
* inside cgroup synchronization. Bounce actual hotplug processing
* to a work item to avoid reverse locking order.
*/
- schedule_work(&cpuset_hotplug_work);
-}
-
-void cpuset_wait_for_hotplug(void)
-{
- flush_work(&cpuset_hotplug_work);
+ cpuset_handle_hotplug();
}

/*
@@ -4695,7 +4666,7 @@ void cpuset_wait_for_hotplug(void)
static int cpuset_track_online_nodes(struct notifier_block *self,
unsigned long action, void *arg)
{
- schedule_work(&cpuset_hotplug_work);
+ cpuset_handle_hotplug();
return NOTIFY_OK;
}

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 63447eb85dab6..563877d6c28b6 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1208,52 +1208,6 @@ void __init cpuhp_threads_init(void)
kthread_unpark(this_cpu_read(cpuhp_state.thread));
}

-/*
- *
- * Serialize hotplug trainwrecks outside of the cpu_hotplug_lock
- * protected region.
- *
- * The operation is still serialized against concurrent CPU hotplug via
- * cpu_add_remove_lock, i.e. CPU map protection. But it is _not_
- * serialized against other hotplug related activity like adding or
- * removing of state callbacks and state instances, which invoke either the
- * startup or the teardown callback of the affected state.
- *
- * This is required for subsystems which are unfixable vs. CPU hotplug and
- * evade lock inversion problems by scheduling work which has to be
- * completed _before_ cpu_up()/_cpu_down() returns.
- *
- * Don't even think about adding anything to this for any new code or even
- * drivers. It's only purpose is to keep existing lock order trainwrecks
- * working.
- *
- * For cpu_down() there might be valid reasons to finish cleanups which are
- * not required to be done under cpu_hotplug_lock, but that's a different
- * story and would be not invoked via this.
- */
-static void cpu_up_down_serialize_trainwrecks(bool tasks_frozen)
-{
- /*
- * cpusets delegate hotplug operations to a worker to "solve" the
- * lock order problems. Wait for the worker, but only if tasks are
- * _not_ frozen (suspend, hibernate) as that would wait forever.
- *
- * The wait is required because otherwise the hotplug operation
- * returns with inconsistent state, which could even be observed in
- * user space when a new CPU is brought up. The CPU plug uevent
- * would be delivered and user space reacting on it would fail to
- * move tasks to the newly plugged CPU up to the point where the
- * work has finished because up to that point the newly plugged CPU
- * is not assignable in cpusets/cgroups. On unplug that's not
- * necessarily a visible issue, but it is still inconsistent state,
- * which is the real problem which needs to be "fixed". This can't
- * prevent the transient state between scheduling the work and
- * returning from waiting for it.
- */
- if (!tasks_frozen)
- cpuset_wait_for_hotplug();
-}
-
#ifdef CONFIG_HOTPLUG_CPU
#ifndef arch_clear_mm_cpumask_cpu
#define arch_clear_mm_cpumask_cpu(cpu, mm) cpumask_clear_cpu(cpu, mm_cpumask(mm))
@@ -1494,7 +1448,6 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
*/
lockup_detector_cleanup();
arch_smt_update();
- cpu_up_down_serialize_trainwrecks(tasks_frozen);
return ret;
}

@@ -1728,7 +1681,6 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen, enum cpuhp_state target)
out:
cpus_write_unlock();
arch_smt_update();
- cpu_up_down_serialize_trainwrecks(tasks_frozen);
return ret;
}

diff --git a/kernel/power/process.c b/kernel/power/process.c
index cae81a87cc91e..66ac067d9ae64 100644
--- a/kernel/power/process.c
+++ b/kernel/power/process.c
@@ -194,8 +194,6 @@ void thaw_processes(void)
__usermodehelper_set_disable_depth(UMH_FREEZING);
thaw_workqueues();

- cpuset_wait_for_hotplug();
-
read_lock(&tasklist_lock);
for_each_process_thread(g, p) {
/* No other threads should have PF_SUSPEND_TASK set */
--
2.43.0
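
A small userspace sketch of the one piece kept asynchronous, the allocate/pin/defer pattern used for the cgroup v1 cgroup_transfer_tasks() path; the linked list below stands in for the kernel workqueue and all names are hypothetical:

#include <stdio.h>
#include <stdlib.h>

struct cpuset_stub {
    const char *name;
    int refcount;
};

struct remove_tasks_work {
    struct cpuset_stub *cs;
    struct remove_tasks_work *next;
};

static struct remove_tasks_work *pending;   /* stands in for the workqueue */

static void remove_tasks_in_empty_cpuset(struct cpuset_stub *cs)
{
    printf("moving tasks out of empty cpuset %s\n", cs->name);
}

/* Instead of calling the heavyweight operation inline (where lock ordering
 * forbids it), allocate a carrier, pin the target, and queue it for later. */
static int schedule_remove_tasks(struct cpuset_stub *cs)
{
    struct remove_tasks_work *w = calloc(1, sizeof(*w));

    if (!w)
        return -1;
    cs->refcount++;         /* analog of css_tryget_online() */
    w->cs = cs;
    w->next = pending;
    pending = w;
    return 0;
}

static void run_pending_work(void)  /* analog of the workqueue draining */
{
    while (pending) {
        struct remove_tasks_work *w = pending;

        pending = w->next;
        remove_tasks_in_empty_cpuset(w->cs);
        w->cs->refcount--;  /* analog of css_put() */
        free(w);
    }
}

int main(void)
{
    struct cpuset_stub cs = { .name = "cs1", .refcount = 1 };

    schedule_remove_tasks(&cs); /* hotplug path: defer, don't block */
    run_pending_work();         /* later, outside the hotplug locks */
    printf("refcount back to %d\n", cs.refcount);
    return 0;
}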


2024-05-27 15:53:59

by Sasha Levin

Subject: [PATCH AUTOSEL 6.9 09/23] ASoC: Intel: sof_sdw: add quirk for Dell SKU 0C0F

From: Pierre-Louis Bossart <[email protected]>

[ Upstream commit b10cb955c6c0b8dbd9a768166d71cc12680b7fdf ]

The JD1 jack detection doesn't seem to work, so use JD2.
Also use the 4-speaker configuration.

Link: https://github.com/thesofproject/linux/issues/4900
Signed-off-by: Pierre-Louis Bossart <[email protected]>
Reviewed-by: Bard Liao <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
sound/soc/intel/boards/sof_sdw.c | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
index f4f08e9031506..76ed72716b3ca 100644
--- a/sound/soc/intel/boards/sof_sdw.c
+++ b/sound/soc/intel/boards/sof_sdw.c
@@ -429,6 +429,16 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
RT711_JD2 |
SOF_SDW_FOUR_SPK),
},
+ {
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
+ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0C0F")
+ },
+ .driver_data = (void *)(SOF_SDW_TGL_HDMI |
+ RT711_JD2 |
+ SOF_SDW_FOUR_SPK),
+ },
{
.callback = sof_sdw_quirk_cb,
.matches = {
--
2.43.0


2024-05-27 15:54:15

by Sasha Levin

Subject: [PATCH AUTOSEL 6.9 11/23] drm/lima: include pp bcast irq in timeout handler check

From: Erico Nunes <[email protected]>

[ Upstream commit d8100caf40a35904d27ce446fb2088b54277997a ]

In commit 53cb55b20208 ("drm/lima: handle spurious timeouts due to high
irq latency"), a check was added to detect an unexpectedly high interrupt
latency timeout.
With further investigation it was noted that on Mali-450 the pp bcast
irq may also trigger race conditions against the timeout handler, so add
it to this check too.

Signed-off-by: Erico Nunes <[email protected]>
Signed-off-by: Qiang Yu <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/gpu/drm/lima/lima_sched.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index 00b19adfc8881..66841503a6183 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -422,6 +422,8 @@ static enum drm_gpu_sched_stat lima_sched_timedout_job(struct drm_sched_job *job
*/
for (i = 0; i < pipe->num_processor; i++)
synchronize_irq(pipe->processor[i]->irq);
+ if (pipe->bcast_processor)
+ synchronize_irq(pipe->bcast_processor->irq);

if (dma_fence_is_signaled(task->fence)) {
DRM_WARN("%s unexpectedly high interrupt latency\n", lima_ip_name(ip));
--
2.43.0
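
An illustrative sketch of the shape of the timeout-handler check (not the real lima code): quiesce every per-processor interrupt source plus the optional broadcast source before deciding whether the timeout was spurious:

#include <stdio.h>
#include <stdbool.h>

struct irq_source { const char *name; bool pending; };

struct pipe_stub {
    struct irq_source procs[3];
    int num_procs;
    struct irq_source *bcast;   /* may be NULL on GPUs without bcast */
    bool fence_signaled;
};

/* Analog of synchronize_irq(): make sure any in-flight handler for this
 * source has finished (here: just "deliver" whatever is pending). */
static void quiesce(struct pipe_stub *p, struct irq_source *s)
{
    if (s->pending) {
        s->pending = false;
        p->fence_signaled = true;
    }
}

static void timeout_handler(struct pipe_stub *p)
{
    int i;

    for (i = 0; i < p->num_procs; i++)
        quiesce(p, &p->procs[i]);
    if (p->bcast)               /* the part the patch adds */
        quiesce(p, p->bcast);

    if (p->fence_signaled)
        printf("spurious timeout: work actually finished\n");
    else
        printf("real hang: reset the GPU\n");
}

int main(void)
{
    struct irq_source bcast = { "pp-bcast", true };
    struct pipe_stub p = {
        .procs = { { "pp0", false }, { "pp1", false }, { "pp2", false } },
        .num_procs = 3,
        .bcast = &bcast,
    };

    timeout_handler(&p);
    return 0;
}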


2024-05-27 15:54:28

by Sasha Levin

Subject: [PATCH AUTOSEL 6.9 13/23] platform/x86: x86-android-tablets: Unregister devices in reverse order

From: Hans de Goede <[email protected]>

[ Upstream commit 3de0f2627ef849735f155c1818247f58404dddfe ]

Not all subsystems support a device getting removed while there are
still consumers of the device with a reference to the device.

One example of this is the regulator subsystem. If a regulator gets
unregistered while there are still drivers holding a reference,
a WARN() at drivers/regulator/core.c:5829 triggers, e.g.:

WARNING: CPU: 1 PID: 1587 at drivers/regulator/core.c:5829 regulator_unregister
Hardware name: Intel Corp. VALLEYVIEW C0 PLATFORM/BYT-T FFD8, BIOS BLADE_21.X64.0005.R00.1504101516 FFD8_X64_R_2015_04_10_1516 04/10/2015
RIP: 0010:regulator_unregister
Call Trace:
<TASK>
regulator_unregister
devres_release_group
i2c_device_remove
device_release_driver_internal
bus_remove_device
device_del
device_unregister
x86_android_tablet_remove

On the Lenovo Yoga Tablet 2 series, the bq24190 charger chip also provides
a 5V boost converter output for powering USB devices connected to the micro-USB
port; the bq24190-charger driver exports this as a Vbus regulator.

On the 830 (8") and 1050 ("10") models this regulator is controlled by
a platform_device and x86_android_tablet_remove() removes platform_device-s
before i2c_clients so the consumer gets removed first.

But on the 1380 (13") model there is an lc824206xa micro-USB switch
connected over I2C, and the extcon driver for that controls the regulator.
The bq24190 i2c-client *must* be registered first, because that creates
the regulator with the lc824206xa listed as its consumer. If the regulator
has not been registered yet, the lc824206xa driver will end up getting
a dummy regulator.

Since in this case both the regulator provider and consumer are I2C
devices, the only way to ensure that the consumer is unregistered first
is to unregister the I2C devices in the reverse order in which they were
created.

For consistency and to avoid similar problems in the future change
x86_android_tablet_remove() to unregister all device types in reverse
order.

Signed-off-by: Hans de Goede <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/platform/x86/x86-android-tablets/core.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/platform/x86/x86-android-tablets/core.c b/drivers/platform/x86/x86-android-tablets/core.c
index a3415f1c0b5f8..6559bb4ea7305 100644
--- a/drivers/platform/x86/x86-android-tablets/core.c
+++ b/drivers/platform/x86/x86-android-tablets/core.c
@@ -278,25 +278,25 @@ static void x86_android_tablet_remove(struct platform_device *pdev)
{
int i;

- for (i = 0; i < serdev_count; i++) {
+ for (i = serdev_count - 1; i >= 0; i--) {
if (serdevs[i])
serdev_device_remove(serdevs[i]);
}

kfree(serdevs);

- for (i = 0; i < pdev_count; i++)
+ for (i = pdev_count - 1; i >= 0; i--)
platform_device_unregister(pdevs[i]);

kfree(pdevs);
kfree(buttons);

- for (i = 0; i < spi_dev_count; i++)
+ for (i = spi_dev_count - 1; i >= 0; i--)
spi_unregister_device(spi_devs[i]);

kfree(spi_devs);

- for (i = 0; i < i2c_client_count; i++)
+ for (i = i2c_client_count - 1; i >= 0; i--)
i2c_unregister_device(i2c_clients[i]);

kfree(i2c_clients);
--
2.43.0
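
A tiny sketch of the general rule being applied: register in creation order and unregister in reverse, so consumers registered after their providers go away first. Device names are hypothetical:

#include <stdio.h>

static const char *devices[] = {
    "bq24190 charger (regulator provider)",
    "lc824206xa extcon (regulator consumer)",
};

int main(void)
{
    int n = sizeof(devices) / sizeof(devices[0]);
    int i;

    for (i = 0; i < n; i++)
        printf("register   %s\n", devices[i]);

    /* i2c_unregister_device() analog, reverse order: consumer goes first */
    for (i = n - 1; i >= 0; i--)
        printf("unregister %s\n", devices[i]);

    return 0;
}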


2024-05-27 15:55:11

by Sasha Levin

Subject: [PATCH AUTOSEL 6.9 16/23] ALSA: hda/realtek: Add quirks for Lenovo 13X

From: Stefan Binding <[email protected]>

[ Upstream commit 25f46354dca912c84f1f79468fd636a94b8d287a ]

Add a laptop using CS35L41 HDA.
This laptop does not have _DSD, so it requires entries in the property
configuration table for the cs35l41_hda driver.

Signed-off-by: Stefan Binding <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
sound/pci/hda/patch_realtek.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index cee387aa458f4..0e56ddc365765 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -10499,6 +10499,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x17aa, 0x3852, "Lenovo Yoga 7 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
SND_PCI_QUIRK(0x17aa, 0x3853, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
SND_PCI_QUIRK(0x17aa, 0x3855, "Legion 7 16ITHG6", ALC287_FIXUP_LEGION_16ITHG6),
+ SND_PCI_QUIRK(0x17aa, 0x3865, "Lenovo 13X", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x3866, "Lenovo 13X", ALC287_FIXUP_CS35L41_I2C_2),
SND_PCI_QUIRK(0x17aa, 0x3869, "Lenovo Yoga7 14IAL7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN),
SND_PCI_QUIRK(0x17aa, 0x386f, "Legion Pro 7/7i", ALC287_FIXUP_LENOVO_LEGION_7),
SND_PCI_QUIRK(0x17aa, 0x3870, "Lenovo Yoga 7 14ARB7", ALC287_FIXUP_YOGA7_14ARB7_I2C),
--
2.43.0


2024-05-27 15:55:24

by Sasha Levin

Subject: [PATCH AUTOSEL 6.9 17/23] media: lgdt3306a: Add a check against null-pointer-deref

From: Zheyu Ma <[email protected]>

[ Upstream commit c1115ddbda9c930fba0fdd062e7a8873ebaf898d ]

The driver should check whether the client provides the platform_data.

The following log reveals it:

[ 29.610324] BUG: KASAN: null-ptr-deref in kmemdup+0x30/0x40
[ 29.610730] Read of size 40 at addr 0000000000000000 by task bash/414
[ 29.612820] Call Trace:
[ 29.613030] <TASK>
[ 29.613201] dump_stack_lvl+0x56/0x6f
[ 29.613496] ? kmemdup+0x30/0x40
[ 29.613754] print_report.cold+0x494/0x6b7
[ 29.614082] ? kmemdup+0x30/0x40
[ 29.614340] kasan_report+0x8a/0x190
[ 29.614628] ? kmemdup+0x30/0x40
[ 29.614888] kasan_check_range+0x14d/0x1d0
[ 29.615213] memcpy+0x20/0x60
[ 29.615454] kmemdup+0x30/0x40
[ 29.615700] lgdt3306a_probe+0x52/0x310
[ 29.616339] i2c_device_probe+0x951/0xa90

Link: https://lore.kernel.org/linux-media/[email protected]
Signed-off-by: Zheyu Ma <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/media/dvb-frontends/lgdt3306a.c | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/drivers/media/dvb-frontends/lgdt3306a.c b/drivers/media/dvb-frontends/lgdt3306a.c
index 2638875924153..231b45632ad5a 100644
--- a/drivers/media/dvb-frontends/lgdt3306a.c
+++ b/drivers/media/dvb-frontends/lgdt3306a.c
@@ -2176,6 +2176,11 @@ static int lgdt3306a_probe(struct i2c_client *client)
struct dvb_frontend *fe;
int ret;

+ if (!client->dev.platform_data) {
+ dev_err(&client->dev, "platform data is mandatory\n");
+ return -EINVAL;
+ }
+
config = kmemdup(client->dev.platform_data,
sizeof(struct lgdt3306a_config), GFP_KERNEL);
if (config == NULL) {
--
2.43.0
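
A userspace sketch of the fix's shape: reject missing platform data before duplicating it, instead of letting the copy dereference NULL. The config struct and probe function are stand-ins, not the real driver:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct demo_config { int i2c_addr; };

static struct demo_config *probe(const struct demo_config *platform_data)
{
    struct demo_config *config;

    if (!platform_data) {
        fprintf(stderr, "platform data is mandatory\n");
        return NULL;                    /* -EINVAL in the driver */
    }

    config = malloc(sizeof(*config));   /* kmemdup() analog */
    if (!config)
        return NULL;                    /* -ENOMEM in the driver */
    memcpy(config, platform_data, sizeof(*config));
    return config;
}

int main(void)
{
    struct demo_config pdata = { .i2c_addr = 0x59 };
    struct demo_config *cfg;

    cfg = probe(NULL);                  /* now fails cleanly */
    printf("probe(NULL) -> %p\n", (void *)cfg);

    cfg = probe(&pdata);
    if (cfg) {
        printf("probe(&pdata) -> addr 0x%x\n", cfg->i2c_addr);
        free(cfg);
    }
    return 0;
}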


2024-05-27 15:56:05

by Sasha Levin

Subject: [PATCH AUTOSEL 6.9 20/23] media: intel/ipu6: Fix build with !ACPI

From: Ricardo Ribalda <[email protected]>

[ Upstream commit 8810e055b57543f3465cf3c15ba4980f9f14a84e ]

Modify the code so it can be compile-tested in configurations that do
not have ACPI enabled.

It fixes the following errors:
drivers/media/pci/intel/ipu-bridge.c:103:30: error: implicit declaration of function ‘acpi_device_handle’; did you mean ‘acpi_fwnode_handle’? [-Werror=implicit-function-declaration]
drivers/media/pci/intel/ipu-bridge.c:103:30: warning: initialization of ‘acpi_handle’ {aka ‘void *’} from ‘int’ makes pointer from integer without a cast [-Wint-conversion]
drivers/media/pci/intel/ipu-bridge.c:110:17: error: implicit declaration of function ‘for_each_acpi_dev_match’ [-Werror=implicit-function-declaration]
drivers/media/pci/intel/ipu-bridge.c:110:74: error: expected ‘;’ before ‘for_each_acpi_consumer_dev’
drivers/media/pci/intel/ipu-bridge.c:104:29: warning: unused variable ‘consumer’ [-Wunused-variable]
drivers/media/pci/intel/ipu-bridge.c:103:21: warning: unused variable ‘handle’ [-Wunused-variable]
drivers/media/pci/intel/ipu-bridge.c:166:38: error: invalid use of undefined type ‘struct acpi_device’
drivers/media/pci/intel/ipu-bridge.c:185:43: error: invalid use of undefined type ‘struct acpi_device’
drivers/media/pci/intel/ipu-bridge.c:191:30: error: invalid use of undefined type ‘struct acpi_device’
drivers/media/pci/intel/ipu-bridge.c:196:30: error: invalid use of undefined type ‘struct acpi_device’
drivers/media/pci/intel/ipu-bridge.c:202:30: error: invalid use of undefined type ‘struct acpi_device’
drivers/media/pci/intel/ipu-bridge.c:223:31: error: invalid use of undefined type ‘struct acpi_device’
drivers/media/pci/intel/ipu-bridge.c:236:18: error: implicit declaration of function ‘acpi_get_physical_device_location’ [-Werror=implicit-function-declaration]
drivers/media/pci/intel/ipu-bridge.c:236:56: error: invalid use of undefined type ‘struct acpi_device’
drivers/media/pci/intel/ipu-bridge.c:238:31: error: invalid use of undefined type ‘struct acpi_device’
drivers/media/pci/intel/ipu-bridge.c:256:31: error: invalid use of undefined type ‘struct acpi_device’
drivers/media/pci/intel/ipu-bridge.c:275:31: error: invalid use of undefined type ‘struct acpi_device’
drivers/media/pci/intel/ipu-bridge.c:280:30: error: invalid use of undefined type ‘struct acpi_device’
drivers/media/pci/intel/ipu-bridge.c:469:26: error: implicit declaration of function ‘acpi_device_hid’; did you mean ‘dmi_device_id’? [-Werror=implicit-function-declaration]
drivers/media/pci/intel/ipu-bridge.c:468:74: warning: format ‘%s’ expects argument of type ‘char *’, but argument 4 has type ‘int’ [-Wformat=]
drivers/media/pci/intel/ipu-bridge.c:637:58: error: expected ‘;’ before ‘{’ token
drivers/media/pci/intel/ipu-bridge.c:696:1: warning: label ‘err_put_adev’ defined but not used [-Wunused-label]
drivers/media/pci/intel/ipu-bridge.c:693:1: warning: label ‘err_put_ivsc’ defined but not used [-Wunused-label]
drivers/media/pci/intel/ipu-bridge.c:691:1: warning: label ‘err_free_swnodes’ defined but not used [-Wunused-label]
drivers/media/pci/intel/ipu-bridge.c:632:40: warning: unused variable ‘primary’ [-Wunused-variable]
drivers/media/pci/intel/ipu-bridge.c:632:31: warning: unused variable ‘fwnode’ [-Wunused-variable]
drivers/media/pci/intel/ipu-bridge.c:733:73: error: expected ‘;’ before ‘{’ token
drivers/media/pci/intel/ipu-bridge.c:725:24: warning: unused variable ‘csi_dev’ [-Wunused-variable]
drivers/media/pci/intel/ipu-bridge.c:724:43: warning: unused variable ‘adev’ [-Wunused-variable]
drivers/media/pci/intel/ipu-bridge.c:599:12: warning: ‘ipu_bridge_instantiate_ivsc’ defined but not used [-Wunused-function]
drivers/media/pci/intel/ipu-bridge.c:444:13: warning: ‘ipu_bridge_create_connection_swnodes’ defined but not used [-Wunused-function]
drivers/media/pci/intel/ipu-bridge.c:297:13: warning: ‘ipu_bridge_create_fwnode_properties’ defined but not used [-Wunused-function]
drivers/media/pci/intel/ipu-bridge.c:155:12: warning: ‘ipu_bridge_check_ivsc_dev’ defined but not used [-Wunused-function]

Signed-off-by: Ricardo Ribalda <[email protected]>
Signed-off-by: Sakari Ailus <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/media/pci/intel/ipu-bridge.c | 66 ++++++++++++++++++++--------
1 file changed, 47 insertions(+), 19 deletions(-)

diff --git a/drivers/media/pci/intel/ipu-bridge.c b/drivers/media/pci/intel/ipu-bridge.c
index e994db4f4d914..61750cc98d705 100644
--- a/drivers/media/pci/intel/ipu-bridge.c
+++ b/drivers/media/pci/intel/ipu-bridge.c
@@ -15,6 +15,8 @@
#include <media/ipu-bridge.h>
#include <media/v4l2-fwnode.h>

+#define ADEV_DEV(adev) ACPI_PTR(&((adev)->dev))
+
/*
* 92335fcf-3203-4472-af93-7b4453ac29da
*
@@ -87,6 +89,7 @@ static const char * const ipu_vcm_types[] = {
"lc898212axb",
};

+#if IS_ENABLED(CONFIG_ACPI)
/*
* Used to figure out IVSC acpi device by ipu_bridge_get_ivsc_acpi_dev()
* instead of device and driver match to probe IVSC device.
@@ -100,13 +103,13 @@ static const struct acpi_device_id ivsc_acpi_ids[] = {

static struct acpi_device *ipu_bridge_get_ivsc_acpi_dev(struct acpi_device *adev)
{
- acpi_handle handle = acpi_device_handle(adev);
- struct acpi_device *consumer, *ivsc_adev;
unsigned int i;

for (i = 0; i < ARRAY_SIZE(ivsc_acpi_ids); i++) {
const struct acpi_device_id *acpi_id = &ivsc_acpi_ids[i];
+ struct acpi_device *consumer, *ivsc_adev;

+ acpi_handle handle = acpi_device_handle(adev);
for_each_acpi_dev_match(ivsc_adev, acpi_id->id, NULL, -1)
/* camera sensor depends on IVSC in DSDT if exist */
for_each_acpi_consumer_dev(ivsc_adev, consumer)
@@ -118,6 +121,12 @@ static struct acpi_device *ipu_bridge_get_ivsc_acpi_dev(struct acpi_device *adev

return NULL;
}
+#else
+static struct acpi_device *ipu_bridge_get_ivsc_acpi_dev(struct acpi_device *adev)
+{
+ return NULL;
+}
+#endif

static int ipu_bridge_match_ivsc_dev(struct device *dev, const void *adev)
{
@@ -163,7 +172,7 @@ static int ipu_bridge_check_ivsc_dev(struct ipu_sensor *sensor,
csi_dev = ipu_bridge_get_ivsc_csi_dev(adev);
if (!csi_dev) {
acpi_dev_put(adev);
- dev_err(&adev->dev, "Failed to find MEI CSI dev\n");
+ dev_err(ADEV_DEV(adev), "Failed to find MEI CSI dev\n");
return -ENODEV;
}

@@ -182,24 +191,25 @@ static int ipu_bridge_read_acpi_buffer(struct acpi_device *adev, char *id,
acpi_status status;
int ret = 0;

- status = acpi_evaluate_object(adev->handle, id, NULL, &buffer);
+ status = acpi_evaluate_object(ACPI_PTR(adev->handle),
+ id, NULL, &buffer);
if (ACPI_FAILURE(status))
return -ENODEV;

obj = buffer.pointer;
if (!obj) {
- dev_err(&adev->dev, "Couldn't locate ACPI buffer\n");
+ dev_err(ADEV_DEV(adev), "Couldn't locate ACPI buffer\n");
return -ENODEV;
}

if (obj->type != ACPI_TYPE_BUFFER) {
- dev_err(&adev->dev, "Not an ACPI buffer\n");
+ dev_err(ADEV_DEV(adev), "Not an ACPI buffer\n");
ret = -ENODEV;
goto out_free_buff;
}

if (obj->buffer.length > size) {
- dev_err(&adev->dev, "Given buffer is too small\n");
+ dev_err(ADEV_DEV(adev), "Given buffer is too small\n");
ret = -EINVAL;
goto out_free_buff;
}
@@ -220,7 +230,7 @@ static u32 ipu_bridge_parse_rotation(struct acpi_device *adev,
case IPU_SENSOR_ROTATION_INVERTED:
return 180;
default:
- dev_warn(&adev->dev,
+ dev_warn(ADEV_DEV(adev),
"Unknown rotation %d. Assume 0 degree rotation\n",
ssdb->degree);
return 0;
@@ -230,12 +240,14 @@ static u32 ipu_bridge_parse_rotation(struct acpi_device *adev,
static enum v4l2_fwnode_orientation ipu_bridge_parse_orientation(struct acpi_device *adev)
{
enum v4l2_fwnode_orientation orientation;
- struct acpi_pld_info *pld;
- acpi_status status;
+ struct acpi_pld_info *pld = NULL;
+ acpi_status status = AE_ERROR;

+#if IS_ENABLED(CONFIG_ACPI)
status = acpi_get_physical_device_location(adev->handle, &pld);
+#endif
if (ACPI_FAILURE(status)) {
- dev_warn(&adev->dev, "_PLD call failed, using default orientation\n");
+ dev_warn(ADEV_DEV(adev), "_PLD call failed, using default orientation\n");
return V4L2_FWNODE_ORIENTATION_EXTERNAL;
}

@@ -253,7 +265,8 @@ static enum v4l2_fwnode_orientation ipu_bridge_parse_orientation(struct acpi_dev
orientation = V4L2_FWNODE_ORIENTATION_EXTERNAL;
break;
default:
- dev_warn(&adev->dev, "Unknown _PLD panel val %d\n", pld->panel);
+ dev_warn(ADEV_DEV(adev), "Unknown _PLD panel val %d\n",
+ pld->panel);
orientation = V4L2_FWNODE_ORIENTATION_EXTERNAL;
break;
}
@@ -272,12 +285,12 @@ int ipu_bridge_parse_ssdb(struct acpi_device *adev, struct ipu_sensor *sensor)
return ret;

if (ssdb.vcmtype > ARRAY_SIZE(ipu_vcm_types)) {
- dev_warn(&adev->dev, "Unknown VCM type %d\n", ssdb.vcmtype);
+ dev_warn(ADEV_DEV(adev), "Unknown VCM type %d\n", ssdb.vcmtype);
ssdb.vcmtype = 0;
}

if (ssdb.lanes > IPU_MAX_LANES) {
- dev_err(&adev->dev, "Number of lanes in SSDB is invalid\n");
+ dev_err(ADEV_DEV(adev), "Number of lanes in SSDB is invalid\n");
return -EINVAL;
}

@@ -465,8 +478,14 @@ static void ipu_bridge_create_connection_swnodes(struct ipu_bridge *bridge,
sensor->ipu_properties);

if (sensor->csi_dev) {
+ const char *device_hid = "";
+
+#if IS_ENABLED(CONFIG_ACPI)
+ device_hid = acpi_device_hid(sensor->ivsc_adev);
+#endif
+
snprintf(sensor->ivsc_name, sizeof(sensor->ivsc_name), "%s-%u",
- acpi_device_hid(sensor->ivsc_adev), sensor->link);
+ device_hid, sensor->link);

nodes[SWNODE_IVSC_HID] = NODE_SENSOR(sensor->ivsc_name,
sensor->ivsc_properties);
@@ -631,11 +650,15 @@ static int ipu_bridge_connect_sensor(const struct ipu_sensor_config *cfg,
{
struct fwnode_handle *fwnode, *primary;
struct ipu_sensor *sensor;
- struct acpi_device *adev;
+ struct acpi_device *adev = NULL;
int ret;

+#if IS_ENABLED(CONFIG_ACPI)
for_each_acpi_dev_match(adev, cfg->hid, NULL, -1) {
- if (!adev->status.enabled)
+#else
+ while (true) {
+#endif
+ if (!ACPI_PTR(adev->status.enabled))
continue;

if (bridge->n_sensors >= IPU_MAX_PORTS) {
@@ -671,7 +694,7 @@ static int ipu_bridge_connect_sensor(const struct ipu_sensor_config *cfg,
goto err_free_swnodes;
}

- sensor->adev = acpi_dev_get(adev);
+ sensor->adev = ACPI_PTR(acpi_dev_get(adev));

primary = acpi_fwnode_handle(adev);
primary->secondary = fwnode;
@@ -727,11 +750,16 @@ static int ipu_bridge_ivsc_is_ready(void)
unsigned int i;

for (i = 0; i < ARRAY_SIZE(ipu_supported_sensors); i++) {
+#if IS_ENABLED(CONFIG_ACPI)
const struct ipu_sensor_config *cfg =
&ipu_supported_sensors[i];

for_each_acpi_dev_match(sensor_adev, cfg->hid, NULL, -1) {
- if (!sensor_adev->status.enabled)
+#else
+ while (true) {
+ sensor_adev = NULL;
+#endif
+ if (!ACPI_PTR(sensor_adev->status.enabled))
continue;

adev = ipu_bridge_get_ivsc_acpi_dev(sensor_adev);
--
2.43.0
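
A standalone sketch of the compile-out pattern relied on here: when a subsystem is disabled, a trivial stub with the same signature keeps callers building unchanged. CONFIG_DEMO_ACPI and the helper names are stand-ins, not the real CONFIG_ACPI plumbing:

#include <stdio.h>

/* #define CONFIG_DEMO_ACPI 1 */    /* toggle to compare both builds */

struct demo_adev { const char *hid; };

#ifdef CONFIG_DEMO_ACPI
static struct demo_adev *get_ivsc_dev(struct demo_adev *sensor)
{
    static struct demo_adev ivsc = { "INTC1059" };

    printf("ACPI build: matched IVSC for %s\n", sensor->hid);
    return &ivsc;
}
#else
static struct demo_adev *get_ivsc_dev(struct demo_adev *sensor)
{
    (void)sensor;                   /* no ACPI: nothing to look up */
    return NULL;
}
#endif

int main(void)
{
    struct demo_adev sensor = { "OVTI2740" };
    struct demo_adev *ivsc = get_ivsc_dev(&sensor);

    printf("ivsc %s\n", ivsc ? ivsc->hid : "(none)");
    return 0;
}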


2024-05-27 15:56:20

by Sasha Levin

Subject: [PATCH AUTOSEL 6.9 21/23] media: mtk-vcodec: potential null pointer dereference in SCP

From: Fullway Wang <[email protected]>

[ Upstream commit 53dbe08504442dc7ba4865c09b3bbf5fe849681b ]

The return value of devm_kzalloc() needs to be checked to avoid a
NULL pointer dereference. This is similar to CVE-2022-3113.

Link: https://lore.kernel.org/linux-media/PH7PR20MB5925094DAE3FD750C7E39E01BF712@PH7PR20MB5925.namprd20.prod.outlook.com
Signed-off-by: Fullway Wang <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
.../media/platform/mediatek/vcodec/common/mtk_vcodec_fw_scp.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_scp.c b/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_scp.c
index 6bbe55de6ce9a..ff23b225db705 100644
--- a/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_scp.c
+++ b/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_scp.c
@@ -79,6 +79,8 @@ struct mtk_vcodec_fw *mtk_vcodec_fw_scp_init(void *priv, enum mtk_vcodec_fw_use
}

fw = devm_kzalloc(&plat_dev->dev, sizeof(*fw), GFP_KERNEL);
+ if (!fw)
+ return ERR_PTR(-ENOMEM);
fw->type = SCP;
fw->ops = &mtk_vcodec_rproc_msg;
fw->scp = scp;
--
2.43.0


2024-05-27 15:56:32

by Sasha Levin

Subject: [PATCH AUTOSEL 6.9 22/23] powerpc/io: Avoid clang null pointer arithmetic warnings

From: Michael Ellerman <[email protected]>

[ Upstream commit 03c0f2c2b2220fc9cf8785cd7b61d3e71e24a366 ]

With -Wextra clang warns about pointer arithmetic using a null pointer.
When building with CONFIG_PCI=n, that triggers a warning in the IO
accessors, eg:

In file included from linux/arch/powerpc/include/asm/io.h:672:
linux/arch/powerpc/include/asm/io-defs.h:23:1: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
23 | DEF_PCI_AC_RET(inb, u8, (unsigned long port), (port), pio, port)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...
linux/arch/powerpc/include/asm/io.h:591:53: note: expanded from macro '__do_inb'
591 | #define __do_inb(port) readb((PCI_IO_ADDR)_IO_BASE + port);
| ~~~~~~~~~~~~~~~~~~~~~ ^

That is because when CONFIG_PCI=n, _IO_BASE is defined as 0.

Although _IO_BASE is defined as plain 0, the cast (PCI_IO_ADDR) converts
it to void * before the addition with port happens.

Instead, the addition can be done first and the cast applied afterwards.
The resulting value will be the same, but this avoids the warning, and also
avoids void pointer arithmetic, which is apparently non-standard.

Reported-by: Naresh Kamboju <[email protected]>
Closes: https://lore.kernel.org/all/CA+G9fYtEh8zmq8k8wE-8RZwW-Qr927RLTn+KqGnq1F=ptaaNsA@mail.gmail.com
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://msgid.link/[email protected]
Signed-off-by: Sasha Levin <[email protected]>
---
arch/powerpc/include/asm/io.h | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h
index 08c550ed49be6..ba2e13bb879dc 100644
--- a/arch/powerpc/include/asm/io.h
+++ b/arch/powerpc/include/asm/io.h
@@ -585,12 +585,12 @@ __do_out_asm(_rec_outl, "stwbrx")
#define __do_inw(port) _rec_inw(port)
#define __do_inl(port) _rec_inl(port)
#else /* CONFIG_PPC32 */
-#define __do_outb(val, port) writeb(val,(PCI_IO_ADDR)_IO_BASE+port);
-#define __do_outw(val, port) writew(val,(PCI_IO_ADDR)_IO_BASE+port);
-#define __do_outl(val, port) writel(val,(PCI_IO_ADDR)_IO_BASE+port);
-#define __do_inb(port) readb((PCI_IO_ADDR)_IO_BASE + port);
-#define __do_inw(port) readw((PCI_IO_ADDR)_IO_BASE + port);
-#define __do_inl(port) readl((PCI_IO_ADDR)_IO_BASE + port);
+#define __do_outb(val, port) writeb(val,(PCI_IO_ADDR)(_IO_BASE+port));
+#define __do_outw(val, port) writew(val,(PCI_IO_ADDR)(_IO_BASE+port));
+#define __do_outl(val, port) writel(val,(PCI_IO_ADDR)(_IO_BASE+port));
+#define __do_inb(port) readb((PCI_IO_ADDR)(_IO_BASE + port));
+#define __do_inw(port) readw((PCI_IO_ADDR)(_IO_BASE + port));
+#define __do_inl(port) readl((PCI_IO_ADDR)(_IO_BASE + port));
#endif /* !CONFIG_PPC32 */

#ifdef CONFIG_EEH
@@ -606,12 +606,12 @@ __do_out_asm(_rec_outl, "stwbrx")
#define __do_writesw(a, b, n) _outsw(PCI_FIX_ADDR(a),(b),(n))
#define __do_writesl(a, b, n) _outsl(PCI_FIX_ADDR(a),(b),(n))

-#define __do_insb(p, b, n) readsb((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
-#define __do_insw(p, b, n) readsw((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
-#define __do_insl(p, b, n) readsl((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
-#define __do_outsb(p, b, n) writesb((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
-#define __do_outsw(p, b, n) writesw((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
-#define __do_outsl(p, b, n) writesl((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
+#define __do_insb(p, b, n) readsb((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n))
+#define __do_insw(p, b, n) readsw((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n))
+#define __do_insl(p, b, n) readsl((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n))
+#define __do_outsb(p, b, n) writesb((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n))
+#define __do_outsw(p, b, n) writesw((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n))
+#define __do_outsl(p, b, n) writesl((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n))

#define __do_memset_io(addr, c, n) \
_memset_io(PCI_FIX_ADDR(addr), c, n)
--
2.43.0
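
A small userspace illustration of the cast reordering, using stand-in definitions for _IO_BASE and PCI_IO_ADDR; both forms compute the same address, but only the old one performs arithmetic on the casted (null) pointer, which is the construct clang's -Wnull-pointer-arithmetic complains about:

#include <stdio.h>
#include <stdint.h>

#define IO_BASE 0UL                      /* stands in for _IO_BASE when PCI=n */
typedef volatile uint8_t *pio_addr_t;    /* stands in for PCI_IO_ADDR */

int main(void)
{
    unsigned long port = 0x3f8;

    /* old form: (PCI_IO_ADDR)_IO_BASE + port -- cast, then add */
    pio_addr_t old_style = (pio_addr_t)IO_BASE + port;

    /* new form: (PCI_IO_ADDR)(_IO_BASE + port) -- add, then cast */
    pio_addr_t new_style = (pio_addr_t)(IO_BASE + port);

    printf("old %p new %p same=%d\n", (void *)old_style, (void *)new_style,
           old_style == new_style);
    return 0;
}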


2024-05-27 15:56:41

by Sasha Levin

Subject: [PATCH AUTOSEL 6.9 23/23] platform/x86: p2sb: Don't init until unassigned resources have been assigned

From: Ben Fradella <[email protected]>

[ Upstream commit 2c6370e6607663fc5fa0fd9ed58e2e01014898c7 ]

The P2SB could get an invalid BAR from the BIOS, and that won't be fixed
up until pcibios_assign_resources(), which is an fs_initcall().

- Move p2sb_fs_init() to an fs_initcall_sync(). This is still early
enough to avoid a race with any dependent drivers.

- Add a check for IORESOURCE_UNSET in p2sb_valid_resource() to catch
unset BARs going forward.

- Return error values from p2sb_fs_init() so that the 'initcall_debug'
cmdline arg provides useful data.

Signed-off-by: Ben Fradella <[email protected]>
Acked-by: Andy Shevchenko <[email protected]>
Tested-by: Klara Modin <[email protected]>
Reviewed-by: Shin'ichiro Kawasaki <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Hans de Goede <[email protected]>
Signed-off-by: Hans de Goede <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/platform/x86/p2sb.c | 29 +++++++++++++++--------------
1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/drivers/platform/x86/p2sb.c b/drivers/platform/x86/p2sb.c
index 3d66e1d4eb1f5..1ac30034f3e59 100644
--- a/drivers/platform/x86/p2sb.c
+++ b/drivers/platform/x86/p2sb.c
@@ -56,12 +56,9 @@ static int p2sb_get_devfn(unsigned int *devfn)
return 0;
}

-static bool p2sb_valid_resource(struct resource *res)
+static bool p2sb_valid_resource(const struct resource *res)
{
- if (res->flags)
- return true;
-
- return false;
+ return res->flags & ~IORESOURCE_UNSET;
}

/* Copy resource from the first BAR of the device in question */
@@ -220,16 +217,20 @@ EXPORT_SYMBOL_GPL(p2sb_bar);

static int __init p2sb_fs_init(void)
{
- p2sb_cache_resources();
- return 0;
+ return p2sb_cache_resources();
}

/*
- * pci_rescan_remove_lock to avoid access to unhidden P2SB devices can
- * not be locked in sysfs pci bus rescan path because of deadlock. To
- * avoid the deadlock, access to P2SB devices with the lock at an early
- * step in kernel initialization and cache required resources. This
- * should happen after subsys_initcall which initializes PCI subsystem
- * and before device_initcall which requires P2SB resources.
+ * pci_rescan_remove_lock() can not be locked in sysfs PCI bus rescan path
+ * because of deadlock. To avoid the deadlock, access P2SB devices with the lock
+ * at an early step in kernel initialization and cache required resources.
+ *
+ * We want to run as early as possible. If the P2SB was assigned a bad BAR,
+ * we'll need to wait on pcibios_assign_resources() to fix it. So, our list of
+ * initcall dependencies looks something like this:
+ *
+ * ...
+ * subsys_initcall (pci_subsys_init)
+ * fs_initcall (pcibios_assign_resources)
*/
-fs_initcall(p2sb_fs_init);
+fs_initcall_sync(p2sb_fs_init);
--
2.43.0
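
A sketch that mirrors the new validity expression literally: a resource counts as valid only if it has at least one flag bit other than the "unset" marker. Flag values here are illustrative, not the real IORESOURCE_* constants:

#include <stdio.h>
#include <stdbool.h>

#define RES_MEM   0x0001
#define RES_UNSET 0x2000

struct res_stub { unsigned long flags; };

static bool valid_resource(const struct res_stub *res)
{
    /* same shape as: return res->flags & ~IORESOURCE_UNSET; */
    return res->flags & ~RES_UNSET;
}

int main(void)
{
    struct res_stub assigned = { RES_MEM };
    struct res_stub unset    = { RES_UNSET };
    struct res_stub empty    = { 0 };

    printf("assigned: %d\n", valid_resource(&assigned));  /* 1 */
    printf("unset:    %d\n", valid_resource(&unset));     /* 0 */
    printf("empty:    %d\n", valid_resource(&empty));     /* 0 */
    return 0;
}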


2024-05-27 15:59:07

by Sasha Levin

Subject: [PATCH AUTOSEL 6.9 02/23] drm/amd/display: Workaround register access in idle race with cursor

From: Nicholas Kazlauskas <[email protected]>

[ Upstream commit b5b6d6251579a29dafdad25f4bc7f3ff7bfd2c86 ]

[Why]
Cursor update can be pre-empted by a request for setting target flip
submission.

This causes an issue where we're in the middle of the exit sequence
trying to log to DM, but the pre-emption starts another DMCUB
command submission that requires being out of idle.

The DC lock acquisition can fail, and depending on the DM/OS interface
it's possible that the function inserted into this thread must not fail.

This means that lock acquisition must be skipped and exit *must* occur.

[How]
Modify when we consider idle as active. Consider it exited only once
the exit has fully finished.

Consider it as entered prior to actual notification.

Since we're on the same core/thread, the cached values are coherent
and we'll see that we still need to exit. Once the cursor update resumes,
it'll continue doing the double exit, but this won't cause a functional
issue, just a (potentially) redundant operation.

Reviewed-by: Duncan Ma <[email protected]>
Acked-by: Wayne Lin <[email protected]>
Signed-off-by: Nicholas Kazlauskas <[email protected]>
Tested-by: Daniel Wheeler <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c | 23 +++++++++++++++-----
1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
index 6083b1dcf050a..a72e849eced3f 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
@@ -1340,16 +1340,27 @@ void dc_dmub_srv_apply_idle_power_optimizations(const struct dc *dc, bool allow_
* Powering up the hardware requires notifying PMFW and DMCUB.
* Clearing the driver idle allow requires a DMCUB command.
* DMCUB commands requires the DMCUB to be powered up and restored.
- *
- * Exit out early to prevent an infinite loop of DMCUB commands
- * triggering exit low power - use software state to track this.
*/
- dc_dmub_srv->idle_allowed = allow_idle;

- if (!allow_idle)
+ if (!allow_idle) {
dc_dmub_srv_exit_low_power_state(dc);
- else
+ /*
+ * Idle is considered fully exited only after the sequence above
+ * fully completes. If we have a race of two threads exiting
+ * at the same time then it's safe to perform the sequence
+ * twice as long as we're not re-entering.
+ *
+ * Infinite command submission is avoided by using the
+ * dm_execute_dmub_cmd submission instead of the "wake" helpers.
+ */
+ dc_dmub_srv->idle_allowed = false;
+ } else {
+ /* Consider idle as notified prior to the actual submission to
+ * prevent multiple entries. */
+ dc_dmub_srv->idle_allowed = true;
+
dc_dmub_srv_notify_idle(dc, allow_idle);
+ }
}

bool dc_wake_and_execute_dmub_cmd(const struct dc_context *ctx, union dmub_rb_cmd *cmd,
--
2.43.0


2024-05-27 15:59:36

by Sasha Levin

[permalink] [raw]
Subject: [PATCH AUTOSEL 6.9 04/23] ima: Fix use-after-free on a dentry's dname.name

From: Stefan Berger <[email protected]>

[ Upstream commit be84f32bb2c981ca670922e047cdde1488b233de ]

->d_name.name can change on rename and the earlier value can be freed;
there are conditions sufficient to stabilize it (->d_lock on dentry,
->d_lock on its parent, ->i_rwsem exclusive on the parent's inode,
rename_lock), but none of those are met at any of the sites. Take a stable
snapshot of the name instead.
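
A minimal, self-contained sketch of the snapshot pattern the fix applies
(illustration only; log_stable_name() is a made-up helper, and the real
call sites are in the diff below):

  #include <linux/dcache.h>   /* take/release_dentry_name_snapshot() */
  #include <linux/printk.h>

  static void log_stable_name(struct dentry *dentry)
  {
          struct name_snapshot snap;

          /* Pin a stable copy of ->d_name, safe against a concurrent rename. */
          take_dentry_name_snapshot(&snap, dentry);
          pr_info("dentry name: %s\n", snap.name.name);
          release_dentry_name_snapshot(&snap);
  }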

Link: https://lore.kernel.org/all/20240202182732.GE2087318@ZenIV/
Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Stefan Berger <[email protected]>
Signed-off-by: Mimi Zohar <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
security/integrity/ima/ima_api.c | 16 ++++++++++++----
security/integrity/ima/ima_template_lib.c | 17 ++++++++++++++---
2 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/security/integrity/ima/ima_api.c b/security/integrity/ima/ima_api.c
index b37d043d5748c..1856981e33df3 100644
--- a/security/integrity/ima/ima_api.c
+++ b/security/integrity/ima/ima_api.c
@@ -245,8 +245,8 @@ int ima_collect_measurement(struct ima_iint_cache *iint, struct file *file,
const char *audit_cause = "failed";
struct inode *inode = file_inode(file);
struct inode *real_inode = d_real_inode(file_dentry(file));
- const char *filename = file->f_path.dentry->d_name.name;
struct ima_max_digest_data hash;
+ struct name_snapshot filename;
struct kstat stat;
int result = 0;
int length;
@@ -317,9 +317,13 @@ int ima_collect_measurement(struct ima_iint_cache *iint, struct file *file,
if (file->f_flags & O_DIRECT)
audit_cause = "failed(directio)";

+ take_dentry_name_snapshot(&filename, file->f_path.dentry);
+
integrity_audit_msg(AUDIT_INTEGRITY_DATA, inode,
- filename, "collect_data", audit_cause,
- result, 0);
+ filename.name.name, "collect_data",
+ audit_cause, result, 0);
+
+ release_dentry_name_snapshot(&filename);
}
return result;
}
@@ -432,6 +436,7 @@ void ima_audit_measurement(struct ima_iint_cache *iint,
*/
const char *ima_d_path(const struct path *path, char **pathbuf, char *namebuf)
{
+ struct name_snapshot filename;
char *pathname = NULL;

*pathbuf = __getname();
@@ -445,7 +450,10 @@ const char *ima_d_path(const struct path *path, char **pathbuf, char *namebuf)
}

if (!pathname) {
- strscpy(namebuf, path->dentry->d_name.name, NAME_MAX);
+ take_dentry_name_snapshot(&filename, path->dentry);
+ strscpy(namebuf, filename.name.name, NAME_MAX);
+ release_dentry_name_snapshot(&filename);
+
pathname = namebuf;
}

diff --git a/security/integrity/ima/ima_template_lib.c b/security/integrity/ima/ima_template_lib.c
index 6cd0add524cdc..3b2cb8f1002e6 100644
--- a/security/integrity/ima/ima_template_lib.c
+++ b/security/integrity/ima/ima_template_lib.c
@@ -483,7 +483,10 @@ static int ima_eventname_init_common(struct ima_event_data *event_data,
bool size_limit)
{
const char *cur_filename = NULL;
+ struct name_snapshot filename;
u32 cur_filename_len = 0;
+ bool snapshot = false;
+ int ret;

BUG_ON(event_data->filename == NULL && event_data->file == NULL);

@@ -496,7 +499,10 @@ static int ima_eventname_init_common(struct ima_event_data *event_data,
}

if (event_data->file) {
- cur_filename = event_data->file->f_path.dentry->d_name.name;
+ take_dentry_name_snapshot(&filename,
+ event_data->file->f_path.dentry);
+ snapshot = true;
+ cur_filename = filename.name.name;
cur_filename_len = strlen(cur_filename);
} else
/*
@@ -505,8 +511,13 @@ static int ima_eventname_init_common(struct ima_event_data *event_data,
*/
cur_filename_len = IMA_EVENT_NAME_LEN_MAX;
out:
- return ima_write_template_field_data(cur_filename, cur_filename_len,
- DATA_FMT_STRING, field_data);
+ ret = ima_write_template_field_data(cur_filename, cur_filename_len,
+ DATA_FMT_STRING, field_data);
+
+ if (snapshot)
+ release_dentry_name_snapshot(&filename);
+
+ return ret;
}

/*
--
2.43.0


2024-05-27 16:00:26

by Sasha Levin

[permalink] [raw]
Subject: [PATCH AUTOSEL 6.9 07/23] drm/amd/display: add root clock control function pointer to fix display corruption

From: "Xi (Alex) Liu" <[email protected]>

[ Upstream commit de2d1105a3757742b45b0d8270b3c8734cd6b6f8 ]

[Why and how]

External display has corruption because there is no root clock control function. Add the function pointer to fix the issue.

Reviewed-by: Daniel Miess <[email protected]>
Reviewed-by: Nicholas Kazlauskas <[email protected]>
Acked-by: Roman Li <[email protected]>
Signed-off-by: Xi (Alex) Liu <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
index 670255c9bc822..4dca5c5a8318f 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
@@ -147,6 +147,7 @@ static const struct hwseq_private_funcs dcn351_private_funcs = {
//.hubp_pg_control = dcn35_hubp_pg_control,
.enable_power_gating_plane = dcn35_enable_power_gating_plane,
.dpp_root_clock_control = dcn35_dpp_root_clock_control,
+ .dpstream_root_clock_control = dcn35_dpstream_root_clock_control,
.program_all_writeback_pipes_in_tree = dcn30_program_all_writeback_pipes_in_tree,
.update_odm = dcn35_update_odm,
.set_hdr_multiplier = dcn10_set_hdr_multiplier,
--
2.43.0


2024-05-27 16:00:41

by Sasha Levin

[permalink] [raw]
Subject: [PATCH AUTOSEL 6.9 08/23] ASoC: Intel: sof_sdw: add JD2 quirk for HP Omen 14

From: Pierre-Louis Bossart <[email protected]>

[ Upstream commit 4fee07fbf47d2a5f1065d985459e5ce7bf7969f0 ]

The default JD1 does not seem to work, use JD2 instead.

Signed-off-by: Pierre-Louis Bossart <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
sound/soc/intel/boards/sof_sdw.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
index 08f330ed5c2ea..f4f08e9031506 100644
--- a/sound/soc/intel/boards/sof_sdw.c
+++ b/sound/soc/intel/boards/sof_sdw.c
@@ -495,6 +495,15 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
SOF_BT_OFFLOAD_SSP(1) |
SOF_SSP_BT_OFFLOAD_PRESENT),
},
+ {
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "OMEN Transcend Gaming Laptop"),
+ },
+ .driver_data = (void *)(RT711_JD2),
+ },
+
/* LunarLake devices */
{
.callback = sof_sdw_quirk_cb,
--
2.43.0


2024-05-27 16:01:05

by Sasha Levin

[permalink] [raw]
Subject: [PATCH AUTOSEL 6.9 10/23] drm/lima: add mask irq callback to gp and pp

From: Erico Nunes <[email protected]>

[ Upstream commit 49c13b4d2dd4a831225746e758893673f6ae961c ]

This is needed because we want to reset those devices in device-agnostic
code such as lima_sched.
In particular, masking irqs will be useful before a hard reset to
prevent race conditions.

Signed-off-by: Erico Nunes <[email protected]>
Signed-off-by: Qiang Yu <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/gpu/drm/lima/lima_bcast.c | 12 ++++++++++++
drivers/gpu/drm/lima/lima_bcast.h | 3 +++
drivers/gpu/drm/lima/lima_gp.c | 8 ++++++++
drivers/gpu/drm/lima/lima_pp.c | 18 ++++++++++++++++++
drivers/gpu/drm/lima/lima_sched.h | 1 +
5 files changed, 42 insertions(+)

diff --git a/drivers/gpu/drm/lima/lima_bcast.c b/drivers/gpu/drm/lima/lima_bcast.c
index fbc43f243c54d..6d000504e1a4e 100644
--- a/drivers/gpu/drm/lima/lima_bcast.c
+++ b/drivers/gpu/drm/lima/lima_bcast.c
@@ -43,6 +43,18 @@ void lima_bcast_suspend(struct lima_ip *ip)

}

+int lima_bcast_mask_irq(struct lima_ip *ip)
+{
+ bcast_write(LIMA_BCAST_BROADCAST_MASK, 0);
+ bcast_write(LIMA_BCAST_INTERRUPT_MASK, 0);
+ return 0;
+}
+
+int lima_bcast_reset(struct lima_ip *ip)
+{
+ return lima_bcast_hw_init(ip);
+}
+
int lima_bcast_init(struct lima_ip *ip)
{
int i;
diff --git a/drivers/gpu/drm/lima/lima_bcast.h b/drivers/gpu/drm/lima/lima_bcast.h
index 465ee587bceb2..cd08841e47879 100644
--- a/drivers/gpu/drm/lima/lima_bcast.h
+++ b/drivers/gpu/drm/lima/lima_bcast.h
@@ -13,4 +13,7 @@ void lima_bcast_fini(struct lima_ip *ip);

void lima_bcast_enable(struct lima_device *dev, int num_pp);

+int lima_bcast_mask_irq(struct lima_ip *ip);
+int lima_bcast_reset(struct lima_ip *ip);
+
#endif
diff --git a/drivers/gpu/drm/lima/lima_gp.c b/drivers/gpu/drm/lima/lima_gp.c
index 6b354e2fb61d4..e15295071533b 100644
--- a/drivers/gpu/drm/lima/lima_gp.c
+++ b/drivers/gpu/drm/lima/lima_gp.c
@@ -233,6 +233,13 @@ static void lima_gp_task_mmu_error(struct lima_sched_pipe *pipe)
lima_sched_pipe_task_done(pipe);
}

+static void lima_gp_task_mask_irq(struct lima_sched_pipe *pipe)
+{
+ struct lima_ip *ip = pipe->processor[0];
+
+ gp_write(LIMA_GP_INT_MASK, 0);
+}
+
static int lima_gp_task_recover(struct lima_sched_pipe *pipe)
{
struct lima_ip *ip = pipe->processor[0];
@@ -365,6 +372,7 @@ int lima_gp_pipe_init(struct lima_device *dev)
pipe->task_error = lima_gp_task_error;
pipe->task_mmu_error = lima_gp_task_mmu_error;
pipe->task_recover = lima_gp_task_recover;
+ pipe->task_mask_irq = lima_gp_task_mask_irq;

return 0;
}
diff --git a/drivers/gpu/drm/lima/lima_pp.c b/drivers/gpu/drm/lima/lima_pp.c
index d0d2db0ef1ce8..a4a2ffe6527c2 100644
--- a/drivers/gpu/drm/lima/lima_pp.c
+++ b/drivers/gpu/drm/lima/lima_pp.c
@@ -429,6 +429,9 @@ static void lima_pp_task_error(struct lima_sched_pipe *pipe)

lima_pp_hard_reset(ip);
}
+
+ if (pipe->bcast_processor)
+ lima_bcast_reset(pipe->bcast_processor);
}

static void lima_pp_task_mmu_error(struct lima_sched_pipe *pipe)
@@ -437,6 +440,20 @@ static void lima_pp_task_mmu_error(struct lima_sched_pipe *pipe)
lima_sched_pipe_task_done(pipe);
}

+static void lima_pp_task_mask_irq(struct lima_sched_pipe *pipe)
+{
+ int i;
+
+ for (i = 0; i < pipe->num_processor; i++) {
+ struct lima_ip *ip = pipe->processor[i];
+
+ pp_write(LIMA_PP_INT_MASK, 0);
+ }
+
+ if (pipe->bcast_processor)
+ lima_bcast_mask_irq(pipe->bcast_processor);
+}
+
static struct kmem_cache *lima_pp_task_slab;
static int lima_pp_task_slab_refcnt;

@@ -468,6 +485,7 @@ int lima_pp_pipe_init(struct lima_device *dev)
pipe->task_fini = lima_pp_task_fini;
pipe->task_error = lima_pp_task_error;
pipe->task_mmu_error = lima_pp_task_mmu_error;
+ pipe->task_mask_irq = lima_pp_task_mask_irq;

return 0;
}
diff --git a/drivers/gpu/drm/lima/lima_sched.h b/drivers/gpu/drm/lima/lima_sched.h
index 6bd4f3b701091..85b23ba901d53 100644
--- a/drivers/gpu/drm/lima/lima_sched.h
+++ b/drivers/gpu/drm/lima/lima_sched.h
@@ -80,6 +80,7 @@ struct lima_sched_pipe {
void (*task_error)(struct lima_sched_pipe *pipe);
void (*task_mmu_error)(struct lima_sched_pipe *pipe);
int (*task_recover)(struct lima_sched_pipe *pipe);
+ void (*task_mask_irq)(struct lima_sched_pipe *pipe);

struct work_struct recover_work;
};
--
2.43.0


2024-05-27 16:01:30

by Sasha Levin

[permalink] [raw]
Subject: [PATCH AUTOSEL 6.9 12/23] drm/lima: mask irqs in timeout path before hard reset

From: Erico Nunes <[email protected]>

[ Upstream commit a421cc7a6a001b70415aa4f66024fa6178885a14 ]

There is a race condition in which a rendering job might take just long
enough to trigger the drm sched job timeout handler but also still
complete before the hard reset is done by the timeout handler.
This runs into race conditions not expected by the timeout handler.
In some very specific cases it currently may result in a refcount
imbalance on lima_pm_idle, with a stack dump such as:

[10136.669170] WARNING: CPU: 0 PID: 0 at drivers/gpu/drm/lima/lima_devfreq.c:205 lima_devfreq_record_idle+0xa0/0xb0
..
[10136.669459] pc : lima_devfreq_record_idle+0xa0/0xb0
..
[10136.669628] Call trace:
[10136.669634] lima_devfreq_record_idle+0xa0/0xb0
[10136.669646] lima_sched_pipe_task_done+0x5c/0xb0
[10136.669656] lima_gp_irq_handler+0xa8/0x120
[10136.669666] __handle_irq_event_percpu+0x48/0x160
[10136.669679] handle_irq_event+0x4c/0xc0

We can prevent that race condition entirely by masking the irqs at the
beginning of the timeout handler, at which point we give up on waiting
for that job entirely.
The irqs will be enabled again at the next hard reset which is already
done as a recovery by the timeout handler.

Signed-off-by: Erico Nunes <[email protected]>
Reviewed-by: Qiang Yu <[email protected]>
Signed-off-by: Qiang Yu <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/gpu/drm/lima/lima_sched.c | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index 66841503a6183..bbf3f8feab944 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -430,6 +430,13 @@ static enum drm_gpu_sched_stat lima_sched_timedout_job(struct drm_sched_job *job
return DRM_GPU_SCHED_STAT_NOMINAL;
}

+ /*
+ * The task might still finish while this timeout handler runs.
+ * To prevent a race condition on its completion, mask all irqs
+ * on the running core until the next hard reset completes.
+ */
+ pipe->task_mask_irq(pipe);
+
if (!pipe->error)
DRM_ERROR("%s job timeout\n", lima_ip_name(ip));

--
2.43.0


2024-05-27 16:01:54

by Sasha Levin

[permalink] [raw]
Subject: [PATCH AUTOSEL 6.9 14/23] platform/x86: x86-android-tablets: Add Lenovo Yoga Tablet 2 Pro 1380F/L data

From: Hans de Goede <[email protected]>

[ Upstream commit 3eee73ad42c3899d97e073bf2c41e7670a3c575c ]

The Lenovo Yoga Tablet 2 Pro 1380F/L is an x86 ACPI tablet which ships with
Android x86 as its factory OS. Its DSDT contains a bunch of I2C devices which
are not actually there, causing various resource conflicts. Enumeration of
these is skipped through acpi_quirk_skip_i2c_client_enumeration().

Add support for manually instantiating the I2C + other devices which are
actually present on this tablet by adding the necessary device info to
the x86-android-tablets module.

Signed-off-by: Hans de Goede <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Sasha Levin <[email protected]>
---
.../platform/x86/x86-android-tablets/dmi.c | 18 ++
.../platform/x86/x86-android-tablets/lenovo.c | 216 ++++++++++++++++++
.../x86-android-tablets/x86-android-tablets.h | 1 +
3 files changed, 235 insertions(+)

diff --git a/drivers/platform/x86/x86-android-tablets/dmi.c b/drivers/platform/x86/x86-android-tablets/dmi.c
index 5d6c12494f082..141a2d25e83be 100644
--- a/drivers/platform/x86/x86-android-tablets/dmi.c
+++ b/drivers/platform/x86/x86-android-tablets/dmi.c
@@ -104,6 +104,24 @@ const struct dmi_system_id x86_android_tablet_ids[] __initconst = {
},
.driver_data = (void *)&lenovo_yogabook_x91_info,
},
+ {
+ /*
+ * Lenovo Yoga Tablet 2 Pro 1380F/L (13") This has more or less
+ * the same BIOS as the 830F/L or 1050F/L (8" and 10") below,
+ * but unlike the 8" / 10" models which share the same mainboard
+ * this model has a different mainboard.
+ * This match for the 13" model MUST come before the 8" + 10"
+ * match since that one will also match the 13" model!
+ */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Intel Corp."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "VALLEYVIEW C0 PLATFORM"),
+ DMI_MATCH(DMI_BOARD_NAME, "BYT-T FFD8"),
+ /* Full match so as to NOT match the 830/1050 BIOS */
+ DMI_MATCH(DMI_BIOS_VERSION, "BLADE_21.X64.0005.R00.1504101516"),
+ },
+ .driver_data = (void *)&lenovo_yoga_tab2_1380_info,
+ },
{
/*
* Lenovo Yoga Tablet 2 830F/L or 1050F/L (The 8" and 10"
diff --git a/drivers/platform/x86/x86-android-tablets/lenovo.c b/drivers/platform/x86/x86-android-tablets/lenovo.c
index c297391955adb..16fa04d604a09 100644
--- a/drivers/platform/x86/x86-android-tablets/lenovo.c
+++ b/drivers/platform/x86/x86-android-tablets/lenovo.c
@@ -19,6 +19,7 @@
#include <linux/pinctrl/machine.h>
#include <linux/platform_data/lp855x.h>
#include <linux/platform_device.h>
+#include <linux/power/bq24190_charger.h>
#include <linux/reboot.h>
#include <linux/rmi.h>
#include <linux/spi/spi.h>
@@ -565,6 +566,221 @@ static void lenovo_yoga_tab2_830_1050_exit(void)
}
}

+/*
+ * Lenovo Yoga Tablet 2 Pro 1380F/L
+ *
+ * The Lenovo Yoga Tablet 2 Pro 1380F/L mostly has the same design as the 830F/L
+ * and the 1050F/L so this re-uses some of the handling for that from above.
+ */
+static const char * const lc824206xa_chg_det_psy[] = { "lc824206xa-charger-detect" };
+
+static const struct property_entry lenovo_yoga_tab2_1380_bq24190_props[] = {
+ PROPERTY_ENTRY_STRING_ARRAY("supplied-from", lc824206xa_chg_det_psy),
+ PROPERTY_ENTRY_REF("monitored-battery", &generic_lipo_hv_4v35_battery_node),
+ PROPERTY_ENTRY_BOOL("omit-battery-class"),
+ PROPERTY_ENTRY_BOOL("disable-reset"),
+ { }
+};
+
+static const struct software_node lenovo_yoga_tab2_1380_bq24190_node = {
+ .properties = lenovo_yoga_tab2_1380_bq24190_props,
+};
+
+/* For enabling the bq24190 5V boost based on id-pin */
+static struct regulator_consumer_supply lc824206xa_consumer = {
+ .supply = "vbus",
+ .dev_name = "i2c-lc824206xa",
+};
+
+static const struct regulator_init_data lenovo_yoga_tab2_1380_bq24190_vbus_init_data = {
+ .constraints = {
+ .name = "bq24190_vbus",
+ .valid_ops_mask = REGULATOR_CHANGE_STATUS,
+ },
+ .consumer_supplies = &lc824206xa_consumer,
+ .num_consumer_supplies = 1,
+};
+
+struct bq24190_platform_data lenovo_yoga_tab2_1380_bq24190_pdata = {
+ .regulator_init_data = &lenovo_yoga_tab2_1380_bq24190_vbus_init_data,
+};
+
+static const struct property_entry lenovo_yoga_tab2_1380_lc824206xa_props[] = {
+ PROPERTY_ENTRY_BOOL("onnn,enable-miclr-for-dcp"),
+ { }
+};
+
+static const struct software_node lenovo_yoga_tab2_1380_lc824206xa_node = {
+ .properties = lenovo_yoga_tab2_1380_lc824206xa_props,
+};
+
+static const char * const lenovo_yoga_tab2_1380_lms303d_mount_matrix[] = {
+ "0", "-1", "0",
+ "-1", "0", "0",
+ "0", "0", "1"
+};
+
+static const struct property_entry lenovo_yoga_tab2_1380_lms303d_props[] = {
+ PROPERTY_ENTRY_STRING_ARRAY("mount-matrix", lenovo_yoga_tab2_1380_lms303d_mount_matrix),
+ { }
+};
+
+static const struct software_node lenovo_yoga_tab2_1380_lms303d_node = {
+ .properties = lenovo_yoga_tab2_1380_lms303d_props,
+};
+
+static const struct x86_i2c_client_info lenovo_yoga_tab2_1380_i2c_clients[] __initconst = {
+ {
+ /* BQ27541 fuel-gauge */
+ .board_info = {
+ .type = "bq27541",
+ .addr = 0x55,
+ .dev_name = "bq27541",
+ .swnode = &fg_bq24190_supply_node,
+ },
+ .adapter_path = "\\_SB_.I2C1",
+ }, {
+ /* bq24292i battery charger */
+ .board_info = {
+ .type = "bq24190",
+ .addr = 0x6b,
+ .dev_name = "bq24292i",
+ .swnode = &lenovo_yoga_tab2_1380_bq24190_node,
+ .platform_data = &lenovo_yoga_tab2_1380_bq24190_pdata,
+ },
+ .adapter_path = "\\_SB_.I2C1",
+ .irq_data = {
+ .type = X86_ACPI_IRQ_TYPE_GPIOINT,
+ .chip = "INT33FC:02",
+ .index = 2,
+ .trigger = ACPI_EDGE_SENSITIVE,
+ .polarity = ACPI_ACTIVE_HIGH,
+ .con_id = "bq24292i_irq",
+ },
+ }, {
+ /* LP8557 Backlight controller */
+ .board_info = {
+ .type = "lp8557",
+ .addr = 0x2c,
+ .dev_name = "lp8557",
+ .platform_data = &lenovo_lp8557_pwm_and_reg_pdata,
+ },
+ .adapter_path = "\\_SB_.I2C3",
+ }, {
+ /* LC824206XA Micro USB Switch */
+ .board_info = {
+ .type = "lc824206xa",
+ .addr = 0x48,
+ .dev_name = "lc824206xa",
+ .swnode = &lenovo_yoga_tab2_1380_lc824206xa_node,
+ },
+ .adapter_path = "\\_SB_.I2C3",
+ .irq_data = {
+ .type = X86_ACPI_IRQ_TYPE_GPIOINT,
+ .chip = "INT33FC:02",
+ .index = 1,
+ .trigger = ACPI_LEVEL_SENSITIVE,
+ .polarity = ACPI_ACTIVE_LOW,
+ .con_id = "lc824206xa_irq",
+ },
+ }, {
+ /* AL3320A ambient light sensor */
+ .board_info = {
+ .type = "al3320a",
+ .addr = 0x1c,
+ .dev_name = "al3320a",
+ },
+ .adapter_path = "\\_SB_.I2C5",
+ }, {
+ /* LSM303DA accelerometer + magnetometer */
+ .board_info = {
+ .type = "lsm303d",
+ .addr = 0x1d,
+ .dev_name = "lsm303d",
+ .swnode = &lenovo_yoga_tab2_1380_lms303d_node,
+ },
+ .adapter_path = "\\_SB_.I2C5",
+ }, {
+ /* Synaptics RMI touchscreen */
+ .board_info = {
+ .type = "rmi4_i2c",
+ .addr = 0x38,
+ .dev_name = "rmi4_i2c",
+ .platform_data = &lenovo_yoga_tab2_830_1050_rmi_pdata,
+ },
+ .adapter_path = "\\_SB_.I2C6",
+ .irq_data = {
+ .type = X86_ACPI_IRQ_TYPE_APIC,
+ .index = 0x45,
+ .trigger = ACPI_EDGE_SENSITIVE,
+ .polarity = ACPI_ACTIVE_HIGH,
+ },
+ }
+};
+
+static const struct platform_device_info lenovo_yoga_tab2_1380_pdevs[] __initconst = {
+ {
+ /* For the Tablet 2 Pro 1380's custom fast charging driver */
+ .name = "lenovo-yoga-tab2-pro-1380-fastcharger",
+ .id = PLATFORM_DEVID_NONE,
+ },
+};
+
+const char * const lenovo_yoga_tab2_1380_modules[] __initconst = {
+ "bq24190_charger", /* For the Vbus regulator for lc824206xa */
+ NULL
+};
+
+static int __init lenovo_yoga_tab2_1380_init(void)
+{
+ int ret;
+
+ /* To verify that the DMI matching works vs the 830 / 1050 models */
+ pr_info("detected Lenovo Yoga Tablet 2 Pro 1380F/L\n");
+
+ ret = lenovo_yoga_tab2_830_1050_init_codec();
+ if (ret)
+ return ret;
+
+ /* SYS_OFF_PRIO_FIRMWARE + 1 so that it runs before acpi_power_off */
+ lenovo_yoga_tab2_830_1050_sys_off_handler =
+ register_sys_off_handler(SYS_OFF_MODE_POWER_OFF, SYS_OFF_PRIO_FIRMWARE + 1,
+ lenovo_yoga_tab2_830_1050_power_off, NULL);
+ if (IS_ERR(lenovo_yoga_tab2_830_1050_sys_off_handler))
+ return PTR_ERR(lenovo_yoga_tab2_830_1050_sys_off_handler);
+
+ return 0;
+}
+
+static struct gpiod_lookup_table lenovo_yoga_tab2_1380_fc_gpios = {
+ .dev_id = "serial0-0",
+ .table = {
+ GPIO_LOOKUP("INT33FC:00", 57, "uart3_txd", GPIO_ACTIVE_HIGH),
+ GPIO_LOOKUP("INT33FC:00", 61, "uart3_rxd", GPIO_ACTIVE_HIGH),
+ { }
+ },
+};
+
+static struct gpiod_lookup_table * const lenovo_yoga_tab2_1380_gpios[] = {
+ &lenovo_yoga_tab2_830_1050_codec_gpios,
+ &lenovo_yoga_tab2_1380_fc_gpios,
+ NULL
+};
+
+const struct x86_dev_info lenovo_yoga_tab2_1380_info __initconst = {
+ .i2c_client_info = lenovo_yoga_tab2_1380_i2c_clients,
+ .i2c_client_count = ARRAY_SIZE(lenovo_yoga_tab2_1380_i2c_clients),
+ .pdev_info = lenovo_yoga_tab2_1380_pdevs,
+ .pdev_count = ARRAY_SIZE(lenovo_yoga_tab2_1380_pdevs),
+ .gpio_button = &lenovo_yoga_tab2_830_1050_lid,
+ .gpio_button_count = 1,
+ .gpiod_lookup_tables = lenovo_yoga_tab2_1380_gpios,
+ .bat_swnode = &generic_lipo_hv_4v35_battery_node,
+ .modules = lenovo_yoga_tab2_1380_modules,
+ .init = lenovo_yoga_tab2_1380_init,
+ .exit = lenovo_yoga_tab2_830_1050_exit,
+};
+
/* Lenovo Yoga Tab 3 Pro YT3-X90F */

/*
diff --git a/drivers/platform/x86/x86-android-tablets/x86-android-tablets.h b/drivers/platform/x86/x86-android-tablets/x86-android-tablets.h
index 468993edfeee2..821dc094b0254 100644
--- a/drivers/platform/x86/x86-android-tablets/x86-android-tablets.h
+++ b/drivers/platform/x86/x86-android-tablets/x86-android-tablets.h
@@ -112,6 +112,7 @@ extern const struct x86_dev_info czc_p10t;
extern const struct x86_dev_info lenovo_yogabook_x90_info;
extern const struct x86_dev_info lenovo_yogabook_x91_info;
extern const struct x86_dev_info lenovo_yoga_tab2_830_1050_info;
+extern const struct x86_dev_info lenovo_yoga_tab2_1380_info;
extern const struct x86_dev_info lenovo_yt3_info;
extern const struct x86_dev_info medion_lifetab_s10346_info;
extern const struct x86_dev_info nextbook_ares8_info;
--
2.43.0


2024-05-27 16:02:11

by Sasha Levin

[permalink] [raw]
Subject: [PATCH AUTOSEL 6.9 15/23] ALSA: hda/realtek: Add quirks for HP Omen models using CS35L41

From: Stefan Binding <[email protected]>

[ Upstream commit 875e0cd59758a3d636ce94936287787514305095 ]

Add 4 laptops using CS35L41 HDA.
None of these laptops have _DSD, so they require entries in the property
configuration table for the cs35l41_hda driver.

Signed-off-by: Stefan Binding <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
sound/pci/hda/patch_realtek.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index b29739bd330b1..cee387aa458f4 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -10148,6 +10148,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x103c, 0x8b92, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x8b96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
SND_PCI_QUIRK(0x103c, 0x8b97, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ SND_PCI_QUIRK(0x103c, 0x8bb3, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x103c, 0x8bb4, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2),
SND_PCI_QUIRK(0x103c, 0x8bdd, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
SND_PCI_QUIRK(0x103c, 0x8bde, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
SND_PCI_QUIRK(0x103c, 0x8bdf, "HP Envy 15", ALC287_FIXUP_CS35L41_I2C_2),
@@ -10168,6 +10170,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x103c, 0x8c47, "HP EliteBook 840 G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x8c48, "HP EliteBook 860 G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x8c49, "HP Elite x360 830 2-in-1 G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8c4d, "HP Omen", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x103c, 0x8c4e, "HP Omen", ALC287_FIXUP_CS35L41_I2C_2),
SND_PCI_QUIRK(0x103c, 0x8c4f, "HP Envy 15", ALC287_FIXUP_CS35L41_I2C_2),
SND_PCI_QUIRK(0x103c, 0x8c50, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
SND_PCI_QUIRK(0x103c, 0x8c51, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
--
2.43.0


2024-05-27 16:02:54

by Sasha Levin

[permalink] [raw]
Subject: [PATCH AUTOSEL 6.9 18/23] powerpc: make fadump resilient with memory add/remove events

From: Sourabh Jain <[email protected]>

[ Upstream commit c6c5b14dac0d1bd0da8b4d1d3b77f18eb9085fcb ]

Due to changes in memory resources caused by either memory hotplug or
online/offline events, the elfcorehdr, which describes the CPUs and
memory of the crashed kernel to the kernel that collects the dump (known
as second/fadump kernel), becomes outdated. Consequently, attempting
dump collection with an outdated elfcorehdr can lead to failed or
inaccurate dump collection.

Memory hotplug or online/offline events are referred to as memory add/remove
events in the rest of the commit message.

The current solution to address the aforementioned issue is as follows:
Monitor memory add/remove events in userspace using udev rules, and
re-register fadump whenever there are changes in memory resources. This
leads to the creation of a new elfcorehdr with updated system memory
information.

There are several notable issues associated with re-registering fadump
for every memory add/remove events.

1. Bulk memory add/remove events with udev-based fadump re-registration
can lead to race conditions and, more importantly, it creates a wide
window during which fadump is inactive until all memory add/remove
events are settled.
2. Re-registering fadump for every memory add/remove event is
inefficient.
3. The memory for elfcorehdr is allocated based on the memblock regions
available during early boot and remains fixed thereafter. However, if
elfcorehdr is later recreated with additional memblock regions, its
size will increase, potentially leading to memory corruption.

Address the aforementioned challenges by shifting the creation of
elfcorehdr from the first kernel (also referred to as the crashed kernel),
where it was created and frequently recreated for every memory
add/remove event, to the fadump kernel. As a result, the elfcorehdr only
needs to be created once, thus eliminating the necessity to re-register
fadump during memory add/remove events.

At present, the first kernel prepares the fadump header and stores it in the
fadump reserved area. The fadump header includes the start address of
the elfcorehdr, crashing CPU details, and other relevant information. In
the event of a crash in the first kernel, the second/fadump kernel boots and
accesses the fadump header prepared by the first kernel. It then
performs the following steps in a platform-specific function
[rtas|opal]_fadump_process:

1. Sanity check for fadump header
2. Update CPU notes in elfcorehdr

Along with the above, update the setup_fadump()/fadump.c to create
elfcorehdr and set its address to the global variable elfcorehdr_addr
for the vmcore module to process it in the second/fadump kernel.

The section below outlines the information required to create the elfcorehdr
and the changes made to make it available to the fadump kernel if it isn't
already available there.

To create elfcorehdr, the following crashed kernel information is
required: CPU notes, vmcoreinfo, and memory ranges.

At present, the CPU notes are already prepared in the fadump kernel, so
no changes are needed in that regard. The fadump kernel has access to
all crashed kernel memory regions, including boot memory regions that
are relocated by firmware to fadump reserved areas, so no changes for
that either. However, it is necessary to add new members to the fadump
header, i.e., the 'fadump_crash_info_header' structure, in order to pass
the crashed kernel's vmcoreinfo address and its size to fadump kernel.

In addition to the vmcoreinfo address and size, there are a few other
attributes also added to the fadump_crash_info_header structure.

1. version:
It stores the fadump header version, which is currently set to 1.
This provides flexibility to update the fadump crash info header in
the future without changing the magic number. For each change in the
fadump header, the version will be increased. This will help the
updated kernel determine how to handle kernel dumps from older
kernels (see the hypothetical sketch after the note below). The magic
number remains relevant for checking fadump header corruption.

2. pt_regs_sz/cpu_mask_sz:
Store size of pt_regs and cpu_mask structure of first kernel. These
attributes are used to prevent dump processing if the sizes of
pt_regs or cpu_mask structure differ between the first and fadump
kernels.

Note: if either the first/crashed kernel or the second/fadump kernel does
not have the changes introduced here, the kernel fails to collect the dump
and prints a relevant error message on the console.
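
A purely hypothetical sketch of the version handling mentioned in point 1
above (process_v2_fields() and any version-2 fields do not exist; this only
illustrates how a future fadump kernel could branch on the header version):

  struct fadump_crash_info_header *fdh = __va(fw_dump.fadumphdr_addr);

  if (fdh->version >= 2)
          process_v2_fields(fdh);         /* hypothetical future helper */
  else
          pr_debug("fadump header v%u, no newer fields to process\n",
                   fdh->version);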

Signed-off-by: Sourabh Jain <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://msgid.link/[email protected]
Signed-off-by: Sasha Levin <[email protected]>
---
arch/powerpc/include/asm/fadump-internal.h | 31 +-
arch/powerpc/kernel/fadump.c | 361 +++++++++++--------
arch/powerpc/platforms/powernv/opal-fadump.c | 22 +-
arch/powerpc/platforms/pseries/rtas-fadump.c | 34 +-
4 files changed, 242 insertions(+), 206 deletions(-)

diff --git a/arch/powerpc/include/asm/fadump-internal.h b/arch/powerpc/include/asm/fadump-internal.h
index 27f9e11eda28c..5d706a7acc8a4 100644
--- a/arch/powerpc/include/asm/fadump-internal.h
+++ b/arch/powerpc/include/asm/fadump-internal.h
@@ -42,13 +42,38 @@ static inline u64 fadump_str_to_u64(const char *str)

#define FADUMP_CPU_UNKNOWN (~((u32)0))

-#define FADUMP_CRASH_INFO_MAGIC fadump_str_to_u64("FADMPINF")
+/*
+ * The introduction of new fields in the fadump crash info header has
+ * led to a change in the magic key from `FADMPINF` to `FADMPSIG` for
+ * identifying a kernel crash from an old kernel.
+ *
+ * To prevent the need for further changes to the magic number in the
+ * event of future modifications to the fadump crash info header, a
+ * version field has been introduced to track the fadump crash info
+ * header version.
+ *
+ * Consider a few points before adding new members to the fadump crash info
+ * header structure:
+ *
+ * - Append new members; avoid adding them in between.
+ * - Non-primitive members should have a size member as well.
+ * - For every change in the fadump header, increment the
+ * fadump header version. This helps the updated kernel decide how to
+ * handle kernel dumps from older kernels.
+ */
+#define FADUMP_CRASH_INFO_MAGIC_OLD fadump_str_to_u64("FADMPINF")
+#define FADUMP_CRASH_INFO_MAGIC fadump_str_to_u64("FADMPSIG")
+#define FADUMP_HEADER_VERSION 1

/* fadump crash info structure */
struct fadump_crash_info_header {
u64 magic_number;
- u64 elfcorehdr_addr;
+ u32 version;
u32 crashing_cpu;
+ u64 vmcoreinfo_raddr;
+ u64 vmcoreinfo_size;
+ u32 pt_regs_sz;
+ u32 cpu_mask_sz;
struct pt_regs regs;
struct cpumask cpu_mask;
};
@@ -94,6 +119,8 @@ struct fw_dump {
u64 boot_mem_regs_cnt;

unsigned long fadumphdr_addr;
+ u64 elfcorehdr_addr;
+ u64 elfcorehdr_size;
unsigned long cpu_notes_buf_vaddr;
unsigned long cpu_notes_buf_size;

diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index d14eda1e85896..35254fc1516b2 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -53,8 +53,6 @@ static struct kobject *fadump_kobj;
static atomic_t cpus_in_fadump;
static DEFINE_MUTEX(fadump_mutex);

-static struct fadump_mrange_info crash_mrange_info = { "crash", NULL, 0, 0, 0, false };
-
#define RESERVED_RNGS_SZ 16384 /* 16K - 128 entries */
#define RESERVED_RNGS_CNT (RESERVED_RNGS_SZ / \
sizeof(struct fadump_memory_range))
@@ -373,12 +371,6 @@ static unsigned long __init get_fadump_area_size(void)
size = PAGE_ALIGN(size);
size += fw_dump.boot_memory_size;
size += sizeof(struct fadump_crash_info_header);
- size += sizeof(struct elfhdr); /* ELF core header.*/
- size += sizeof(struct elf_phdr); /* place holder for cpu notes */
- /* Program headers for crash memory regions. */
- size += sizeof(struct elf_phdr) * (memblock_num_regions(memory) + 2);
-
- size = PAGE_ALIGN(size);

/* This is to hold kernel metadata on platforms that support it */
size += (fw_dump.ops->fadump_get_metadata_size ?
@@ -931,36 +923,6 @@ static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info,
return 0;
}

-static int fadump_exclude_reserved_area(u64 start, u64 end)
-{
- u64 ra_start, ra_end;
- int ret = 0;
-
- ra_start = fw_dump.reserve_dump_area_start;
- ra_end = ra_start + fw_dump.reserve_dump_area_size;
-
- if ((ra_start < end) && (ra_end > start)) {
- if ((start < ra_start) && (end > ra_end)) {
- ret = fadump_add_mem_range(&crash_mrange_info,
- start, ra_start);
- if (ret)
- return ret;
-
- ret = fadump_add_mem_range(&crash_mrange_info,
- ra_end, end);
- } else if (start < ra_start) {
- ret = fadump_add_mem_range(&crash_mrange_info,
- start, ra_start);
- } else if (ra_end < end) {
- ret = fadump_add_mem_range(&crash_mrange_info,
- ra_end, end);
- }
- } else
- ret = fadump_add_mem_range(&crash_mrange_info, start, end);
-
- return ret;
-}
-
static int fadump_init_elfcore_header(char *bufp)
{
struct elfhdr *elf;
@@ -997,52 +959,6 @@ static int fadump_init_elfcore_header(char *bufp)
return 0;
}

-/*
- * Traverse through memblock structure and setup crash memory ranges. These
- * ranges will be used create PT_LOAD program headers in elfcore header.
- */
-static int fadump_setup_crash_memory_ranges(void)
-{
- u64 i, start, end;
- int ret;
-
- pr_debug("Setup crash memory ranges.\n");
- crash_mrange_info.mem_range_cnt = 0;
-
- /*
- * Boot memory region(s) registered with firmware are moved to
- * different location at the time of crash. Create separate program
- * header(s) for this memory chunk(s) with the correct offset.
- */
- for (i = 0; i < fw_dump.boot_mem_regs_cnt; i++) {
- start = fw_dump.boot_mem_addr[i];
- end = start + fw_dump.boot_mem_sz[i];
- ret = fadump_add_mem_range(&crash_mrange_info, start, end);
- if (ret)
- return ret;
- }
-
- for_each_mem_range(i, &start, &end) {
- /*
- * skip the memory chunk that is already added
- * (0 through boot_memory_top).
- */
- if (start < fw_dump.boot_mem_top) {
- if (end > fw_dump.boot_mem_top)
- start = fw_dump.boot_mem_top;
- else
- continue;
- }
-
- /* add this range excluding the reserved dump area. */
- ret = fadump_exclude_reserved_area(start, end);
- if (ret)
- return ret;
- }
-
- return 0;
-}
-
/*
* If the given physical address falls within the boot memory region then
* return the relocated address that points to the dump region reserved
@@ -1073,36 +989,50 @@ static inline unsigned long fadump_relocate(unsigned long paddr)
return raddr;
}

-static int fadump_create_elfcore_headers(char *bufp)
+static void __init populate_elf_pt_load(struct elf_phdr *phdr, u64 start,
+ u64 size, unsigned long long offset)
{
- unsigned long long raddr, offset;
- struct elf_phdr *phdr;
+ phdr->p_align = 0;
+ phdr->p_memsz = size;
+ phdr->p_filesz = size;
+ phdr->p_paddr = start;
+ phdr->p_offset = offset;
+ phdr->p_type = PT_LOAD;
+ phdr->p_flags = PF_R|PF_W|PF_X;
+ phdr->p_vaddr = (unsigned long)__va(start);
+}
+
+static void __init fadump_populate_elfcorehdr(struct fadump_crash_info_header *fdh)
+{
+ char *bufp;
struct elfhdr *elf;
- int i, j;
+ struct elf_phdr *phdr;
+ u64 boot_mem_dest_offset;
+ unsigned long long i, ra_start, ra_end, ra_size, mstart, mend;

+ bufp = (char *) fw_dump.elfcorehdr_addr;
fadump_init_elfcore_header(bufp);
elf = (struct elfhdr *)bufp;
bufp += sizeof(struct elfhdr);

/*
- * setup ELF PT_NOTE, place holder for cpu notes info. The notes info
- * will be populated during second kernel boot after crash. Hence
- * this PT_NOTE will always be the first elf note.
+ * Set up ELF PT_NOTE, a placeholder for CPU notes information.
+ * The notes info will be populated later by platform-specific code.
+ * Hence, this PT_NOTE will always be the first ELF note.
*
* NOTE: Any new ELF note addition should be placed after this note.
*/
phdr = (struct elf_phdr *)bufp;
bufp += sizeof(struct elf_phdr);
phdr->p_type = PT_NOTE;
- phdr->p_flags = 0;
- phdr->p_vaddr = 0;
- phdr->p_align = 0;
-
- phdr->p_offset = 0;
- phdr->p_paddr = 0;
- phdr->p_filesz = 0;
- phdr->p_memsz = 0;
-
+ phdr->p_flags = 0;
+ phdr->p_vaddr = 0;
+ phdr->p_align = 0;
+ phdr->p_offset = 0;
+ phdr->p_paddr = 0;
+ phdr->p_filesz = 0;
+ phdr->p_memsz = 0;
+ /* Increment number of program headers. */
(elf->e_phnum)++;

/* setup ELF PT_NOTE for vmcoreinfo */
@@ -1112,55 +1042,66 @@ static int fadump_create_elfcore_headers(char *bufp)
phdr->p_flags = 0;
phdr->p_vaddr = 0;
phdr->p_align = 0;
-
- phdr->p_paddr = fadump_relocate(paddr_vmcoreinfo_note());
- phdr->p_offset = phdr->p_paddr;
- phdr->p_memsz = phdr->p_filesz = VMCOREINFO_NOTE_SIZE;
-
+ phdr->p_paddr = phdr->p_offset = fdh->vmcoreinfo_raddr;
+ phdr->p_memsz = phdr->p_filesz = fdh->vmcoreinfo_size;
/* Increment number of program headers. */
(elf->e_phnum)++;

- /* setup PT_LOAD sections. */
- j = 0;
- offset = 0;
- raddr = fw_dump.boot_mem_addr[0];
- for (i = 0; i < crash_mrange_info.mem_range_cnt; i++) {
- u64 mbase, msize;
-
- mbase = crash_mrange_info.mem_ranges[i].base;
- msize = crash_mrange_info.mem_ranges[i].size;
- if (!msize)
- continue;
-
+ /*
+ * Setup PT_LOAD sections. first include boot memory regions
+ * and then add rest of the memory regions.
+ */
+ boot_mem_dest_offset = fw_dump.boot_mem_dest_addr;
+ for (i = 0; i < fw_dump.boot_mem_regs_cnt; i++) {
phdr = (struct elf_phdr *)bufp;
bufp += sizeof(struct elf_phdr);
- phdr->p_type = PT_LOAD;
- phdr->p_flags = PF_R|PF_W|PF_X;
- phdr->p_offset = mbase;
-
- if (mbase == raddr) {
- /*
- * The entire real memory region will be moved by
- * firmware to the specified destination_address.
- * Hence set the correct offset.
- */
- phdr->p_offset = fw_dump.boot_mem_dest_addr + offset;
- if (j < (fw_dump.boot_mem_regs_cnt - 1)) {
- offset += fw_dump.boot_mem_sz[j];
- raddr = fw_dump.boot_mem_addr[++j];
- }
+ populate_elf_pt_load(phdr, fw_dump.boot_mem_addr[i],
+ fw_dump.boot_mem_sz[i],
+ boot_mem_dest_offset);
+ /* Increment number of program headers. */
+ (elf->e_phnum)++;
+ boot_mem_dest_offset += fw_dump.boot_mem_sz[i];
+ }
+
+ /* Memory reserved for fadump in first kernel */
+ ra_start = fw_dump.reserve_dump_area_start;
+ ra_size = get_fadump_area_size();
+ ra_end = ra_start + ra_size;
+
+ phdr = (struct elf_phdr *)bufp;
+ for_each_mem_range(i, &mstart, &mend) {
+ /* Boot memory regions already added, skip them now */
+ if (mstart < fw_dump.boot_mem_top) {
+ if (mend > fw_dump.boot_mem_top)
+ mstart = fw_dump.boot_mem_top;
+ else
+ continue;
}

- phdr->p_paddr = mbase;
- phdr->p_vaddr = (unsigned long)__va(mbase);
- phdr->p_filesz = msize;
- phdr->p_memsz = msize;
- phdr->p_align = 0;
+ /* Handle memblock regions overlaps with fadump reserved area */
+ if ((ra_start < mend) && (ra_end > mstart)) {
+ if ((mstart < ra_start) && (mend > ra_end)) {
+ populate_elf_pt_load(phdr, mstart, ra_start - mstart, mstart);
+ /* Increment number of program headers. */
+ (elf->e_phnum)++;
+ bufp += sizeof(struct elf_phdr);
+ phdr = (struct elf_phdr *)bufp;
+ populate_elf_pt_load(phdr, ra_end, mend - ra_end, ra_end);
+ } else if (mstart < ra_start) {
+ populate_elf_pt_load(phdr, mstart, ra_start - mstart, mstart);
+ } else if (ra_end < mend) {
+ populate_elf_pt_load(phdr, ra_end, mend - ra_end, ra_end);
+ }
+ } else {
+ /* No overlap with fadump reserved memory region */
+ populate_elf_pt_load(phdr, mstart, mend - mstart, mstart);
+ }

/* Increment number of program headers. */
(elf->e_phnum)++;
+ bufp += sizeof(struct elf_phdr);
+ phdr = (struct elf_phdr *) bufp;
}
- return 0;
}

static unsigned long init_fadump_header(unsigned long addr)
@@ -1175,14 +1116,25 @@ static unsigned long init_fadump_header(unsigned long addr)

memset(fdh, 0, sizeof(struct fadump_crash_info_header));
fdh->magic_number = FADUMP_CRASH_INFO_MAGIC;
- fdh->elfcorehdr_addr = addr;
+ fdh->version = FADUMP_HEADER_VERSION;
/* We will set the crashing cpu id in crash_fadump() during crash. */
fdh->crashing_cpu = FADUMP_CPU_UNKNOWN;
+
+ /*
+ * The physical address and size of vmcoreinfo are required in the
+ * second kernel to prepare elfcorehdr.
+ */
+ fdh->vmcoreinfo_raddr = fadump_relocate(paddr_vmcoreinfo_note());
+ fdh->vmcoreinfo_size = VMCOREINFO_NOTE_SIZE;
+
+
+ fdh->pt_regs_sz = sizeof(struct pt_regs);
/*
* When LPAR is terminated by PYHP, ensure all possible CPUs'
* register data is processed while exporting the vmcore.
*/
fdh->cpu_mask = *cpu_possible_mask;
+ fdh->cpu_mask_sz = sizeof(struct cpumask);

return addr;
}
@@ -1190,8 +1142,6 @@ static unsigned long init_fadump_header(unsigned long addr)
static int register_fadump(void)
{
unsigned long addr;
- void *vaddr;
- int ret;

/*
* If no memory is reserved then we can not register for firmware-
@@ -1200,18 +1150,10 @@ static int register_fadump(void)
if (!fw_dump.reserve_dump_area_size)
return -ENODEV;

- ret = fadump_setup_crash_memory_ranges();
- if (ret)
- return ret;
-
addr = fw_dump.fadumphdr_addr;

/* Initialize fadump crash info header. */
addr = init_fadump_header(addr);
- vaddr = __va(addr);
-
- pr_debug("Creating ELF core headers at %#016lx\n", addr);
- fadump_create_elfcore_headers(vaddr);

/* register the future kernel dump with firmware. */
pr_debug("Registering for firmware-assisted kernel dump...\n");
@@ -1230,7 +1172,6 @@ void fadump_cleanup(void)
} else if (fw_dump.dump_registered) {
/* Un-register Firmware-assisted dump if it was registered. */
fw_dump.ops->fadump_unregister(&fw_dump);
- fadump_free_mem_ranges(&crash_mrange_info);
}

if (fw_dump.ops->fadump_cleanup)
@@ -1416,6 +1357,22 @@ static void fadump_release_memory(u64 begin, u64 end)
fadump_release_reserved_area(tstart, end);
}

+static void fadump_free_elfcorehdr_buf(void)
+{
+ if (fw_dump.elfcorehdr_addr == 0 || fw_dump.elfcorehdr_size == 0)
+ return;
+
+ /*
+ * Before freeing the memory of `elfcorehdr`, reset the global
+ * `elfcorehdr_addr` to prevent modules like `vmcore` from accessing
+ * invalid memory.
+ */
+ elfcorehdr_addr = ELFCORE_ADDR_ERR;
+ fadump_free_buffer(fw_dump.elfcorehdr_addr, fw_dump.elfcorehdr_size);
+ fw_dump.elfcorehdr_addr = 0;
+ fw_dump.elfcorehdr_size = 0;
+}
+
static void fadump_invalidate_release_mem(void)
{
mutex_lock(&fadump_mutex);
@@ -1427,6 +1384,7 @@ static void fadump_invalidate_release_mem(void)
fadump_cleanup();
mutex_unlock(&fadump_mutex);

+ fadump_free_elfcorehdr_buf();
fadump_release_memory(fw_dump.boot_mem_top, memblock_end_of_DRAM());
fadump_free_cpu_notes_buf();

@@ -1632,6 +1590,102 @@ static void __init fadump_init_files(void)
return;
}

+static int __init fadump_setup_elfcorehdr_buf(void)
+{
+ int elf_phdr_cnt;
+ unsigned long elfcorehdr_size;
+
+ /*
+ * Program header for CPU notes comes first, followed by one for
+ * vmcoreinfo, and the remaining program headers correspond to
+ * memory regions.
+ */
+ elf_phdr_cnt = 2 + fw_dump.boot_mem_regs_cnt + memblock_num_regions(memory);
+ elfcorehdr_size = sizeof(struct elfhdr) + (elf_phdr_cnt * sizeof(struct elf_phdr));
+ elfcorehdr_size = PAGE_ALIGN(elfcorehdr_size);
+
+ fw_dump.elfcorehdr_addr = (u64)fadump_alloc_buffer(elfcorehdr_size);
+ if (!fw_dump.elfcorehdr_addr) {
+ pr_err("Failed to allocate %lu bytes for elfcorehdr\n",
+ elfcorehdr_size);
+ return -ENOMEM;
+ }
+ fw_dump.elfcorehdr_size = elfcorehdr_size;
+ return 0;
+}
+
+/*
+ * Check if the fadump header of crashed kernel is compatible with fadump kernel.
+ *
+ * It checks the magic number, endianness, and size of non-primitive type
+ * members of fadump header to ensure safe dump collection.
+ */
+static bool __init is_fadump_header_compatible(struct fadump_crash_info_header *fdh)
+{
+ if (fdh->magic_number == FADUMP_CRASH_INFO_MAGIC_OLD) {
+ pr_err("Old magic number, can't process the dump.\n");
+ return false;
+ }
+
+ if (fdh->magic_number != FADUMP_CRASH_INFO_MAGIC) {
+ if (fdh->magic_number == swab64(FADUMP_CRASH_INFO_MAGIC))
+ pr_err("Endianness mismatch between the crashed and fadump kernels.\n");
+ else
+ pr_err("Fadump header is corrupted.\n");
+
+ return false;
+ }
+
+ /*
+ * Dump collection is not safe if the size of non-primitive type members
+ * of the fadump header do not match between crashed and fadump kernel.
+ */
+ if (fdh->pt_regs_sz != sizeof(struct pt_regs) ||
+ fdh->cpu_mask_sz != sizeof(struct cpumask)) {
+ pr_err("Fadump header size mismatch.\n");
+ return false;
+ }
+
+ return true;
+}
+
+static void __init fadump_process(void)
+{
+ struct fadump_crash_info_header *fdh;
+
+ fdh = (struct fadump_crash_info_header *) __va(fw_dump.fadumphdr_addr);
+ if (!fdh) {
+ pr_err("Crash info header is empty.\n");
+ goto err_out;
+ }
+
+ /* Avoid processing the dump if fadump header isn't compatible */
+ if (!is_fadump_header_compatible(fdh))
+ goto err_out;
+
+ /* Allocate buffer for elfcorehdr */
+ if (fadump_setup_elfcorehdr_buf())
+ goto err_out;
+
+ fadump_populate_elfcorehdr(fdh);
+
+ /* Let platform update the CPU notes in elfcorehdr */
+ if (fw_dump.ops->fadump_process(&fw_dump) < 0)
+ goto err_out;
+
+ /*
+ * elfcorehdr is now ready to be exported.
+ *
+ * set elfcorehdr_addr so that vmcore module will export the
+ * elfcorehdr through '/proc/vmcore'.
+ */
+ elfcorehdr_addr = virt_to_phys((void *)fw_dump.elfcorehdr_addr);
+ return;
+
+err_out:
+ fadump_invalidate_release_mem();
+}
+
/*
* Prepare for firmware-assisted dump.
*/
@@ -1651,12 +1705,7 @@ int __init setup_fadump(void)
* saving it to the disk.
*/
if (fw_dump.dump_active) {
- /*
- * if dump process fails then invalidate the registration
- * and release memory before proceeding for re-registration.
- */
- if (fw_dump.ops->fadump_process(&fw_dump) < 0)
- fadump_invalidate_release_mem();
+ fadump_process();
}
/* Initialize the kernel dump memory structure and register with f/w */
else if (fw_dump.reserve_dump_area_size) {
diff --git a/arch/powerpc/platforms/powernv/opal-fadump.c b/arch/powerpc/platforms/powernv/opal-fadump.c
index 964f464b1b0e3..767a6b19e42a8 100644
--- a/arch/powerpc/platforms/powernv/opal-fadump.c
+++ b/arch/powerpc/platforms/powernv/opal-fadump.c
@@ -513,8 +513,8 @@ opal_fadump_build_cpu_notes(struct fw_dump *fadump_conf,
final_note(note_buf);

pr_debug("Updating elfcore header (%llx) with cpu notes\n",
- fdh->elfcorehdr_addr);
- fadump_update_elfcore_header(__va(fdh->elfcorehdr_addr));
+ fadump_conf->elfcorehdr_addr);
+ fadump_update_elfcore_header((char *)fadump_conf->elfcorehdr_addr);
return 0;
}

@@ -526,12 +526,7 @@ static int __init opal_fadump_process(struct fw_dump *fadump_conf)
if (!opal_fdm_active || !fadump_conf->fadumphdr_addr)
return rc;

- /* Validate the fadump crash info header */
fdh = __va(fadump_conf->fadumphdr_addr);
- if (fdh->magic_number != FADUMP_CRASH_INFO_MAGIC) {
- pr_err("Crash info header is not valid.\n");
- return rc;
- }

#ifdef CONFIG_OPAL_CORE
/*
@@ -545,18 +540,7 @@ static int __init opal_fadump_process(struct fw_dump *fadump_conf)
kernel_initiated = true;
#endif

- rc = opal_fadump_build_cpu_notes(fadump_conf, fdh);
- if (rc)
- return rc;
-
- /*
- * We are done validating dump info and elfcore header is now ready
- * to be exported. set elfcorehdr_addr so that vmcore module will
- * export the elfcore header through '/proc/vmcore'.
- */
- elfcorehdr_addr = fdh->elfcorehdr_addr;
-
- return rc;
+ return opal_fadump_build_cpu_notes(fadump_conf, fdh);
}

static void opal_fadump_region_show(struct fw_dump *fadump_conf,
diff --git a/arch/powerpc/platforms/pseries/rtas-fadump.c b/arch/powerpc/platforms/pseries/rtas-fadump.c
index b5853e9fcc3c1..214f37788b2dc 100644
--- a/arch/powerpc/platforms/pseries/rtas-fadump.c
+++ b/arch/powerpc/platforms/pseries/rtas-fadump.c
@@ -375,11 +375,8 @@ static int __init rtas_fadump_build_cpu_notes(struct fw_dump *fadump_conf)
}
final_note(note_buf);

- if (fdh) {
- pr_debug("Updating elfcore header (%llx) with cpu notes\n",
- fdh->elfcorehdr_addr);
- fadump_update_elfcore_header(__va(fdh->elfcorehdr_addr));
- }
+ pr_debug("Updating elfcore header (%llx) with cpu notes\n", fadump_conf->elfcorehdr_addr);
+ fadump_update_elfcore_header((char *)fadump_conf->elfcorehdr_addr);
return 0;

error_out:
@@ -389,14 +386,11 @@ static int __init rtas_fadump_build_cpu_notes(struct fw_dump *fadump_conf)
}

/*
- * Validate and process the dump data stored by firmware before exporting
- * it through '/proc/vmcore'.
+ * Validate and process the dump data stored by the firmware, and update
+ * the CPU notes of elfcorehdr.
*/
static int __init rtas_fadump_process(struct fw_dump *fadump_conf)
{
- struct fadump_crash_info_header *fdh;
- int rc = 0;
-
if (!fdm_active || !fadump_conf->fadumphdr_addr)
return -EINVAL;

@@ -415,25 +409,7 @@ static int __init rtas_fadump_process(struct fw_dump *fadump_conf)
return -EINVAL;
}

- /* Validate the fadump crash info header */
- fdh = __va(fadump_conf->fadumphdr_addr);
- if (fdh->magic_number != FADUMP_CRASH_INFO_MAGIC) {
- pr_err("Crash info header is not valid.\n");
- return -EINVAL;
- }
-
- rc = rtas_fadump_build_cpu_notes(fadump_conf);
- if (rc)
- return rc;
-
- /*
- * We are done validating dump info and elfcore header is now ready
- * to be exported. set elfcorehdr_addr so that vmcore module will
- * export the elfcore header through '/proc/vmcore'.
- */
- elfcorehdr_addr = fdh->elfcorehdr_addr;
-
- return 0;
+ return rtas_fadump_build_cpu_notes(fadump_conf);
}

static void rtas_fadump_region_show(struct fw_dump *fadump_conf,
--
2.43.0


2024-05-27 16:03:03

by Sasha Levin

[permalink] [raw]
Subject: [PATCH AUTOSEL 6.9 19/23] powerpc/pseries: Enforce hcall result buffer validity and size

From: Nathan Lynch <[email protected]>

[ Upstream commit ff2e185cf73df480ec69675936c4ee75a445c3e4 ]

plpar_hcall(), plpar_hcall9(), and related functions expect callers to
provide valid result buffers of certain minimum size. Currently this
is communicated only through comments in the code and the compiler has
no idea.

For example, if I write a bug like this:

long retbuf[PLPAR_HCALL_BUFSIZE]; // should be PLPAR_HCALL9_BUFSIZE
plpar_hcall9(H_ALLOCATE_VAS_WINDOW, retbuf, ...);

This compiles with no diagnostics emitted, but likely results in stack
corruption at runtime when plpar_hcall9() stores results past the end
of the array. (To be clear this is a contrived example and I have not
found a real instance yet.)

To make this class of error less likely, we can use explicitly-sized
array parameters instead of pointers in the declarations for the hcall
APIs. When compiled with -Warray-bounds[1], the code above now
provokes a diagnostic like this:

error: array argument is too small;
is of size 32, callee requires at least 72 [-Werror,-Warray-bounds]
60 | plpar_hcall9(H_ALLOCATE_VAS_WINDOW, retbuf,
| ^ ~~~~~~

[1] Enabled for LLVM builds but not GCC for now. See commit
0da6e5fd6c37 ("gcc: disable '-Warray-bounds' for gcc-13 too") and
related changes.
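
For readers unfamiliar with the C99 feature being used here, a minimal
standalone sketch (the names below are made up and are not kernel APIs):

  #define RESULT_SLOTS 9

  /* The callee declares that it needs at least RESULT_SLOTS elements. */
  long fill_results(unsigned long retbuf[static RESULT_SLOTS]);

  void caller(void)
  {
          unsigned long small[4];

          /*
           * With clang's -Warray-bounds this call is diagnosed: on a
           * 64-bit target the argument is 32 bytes but the callee
           * requires at least 72.
           */
          fill_results(small);
  }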

Signed-off-by: Nathan Lynch <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://msgid.link/[email protected]
Signed-off-by: Sasha Levin <[email protected]>
---
arch/powerpc/include/asm/hvcall.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index a41e542ba94dd..39cd1ca4ccb9c 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -524,7 +524,7 @@ long plpar_hcall_norets_notrace(unsigned long opcode, ...);
* Used for all but the craziest of phyp interfaces (see plpar_hcall9)
*/
#define PLPAR_HCALL_BUFSIZE 4
-long plpar_hcall(unsigned long opcode, unsigned long *retbuf, ...);
+long plpar_hcall(unsigned long opcode, unsigned long retbuf[static PLPAR_HCALL_BUFSIZE], ...);

/**
* plpar_hcall_raw: - Make a hypervisor call without calculating hcall stats
@@ -538,7 +538,7 @@ long plpar_hcall(unsigned long opcode, unsigned long *retbuf, ...);
* plpar_hcall, but plpar_hcall_raw works in real mode and does not
* calculate hypervisor call statistics.
*/
-long plpar_hcall_raw(unsigned long opcode, unsigned long *retbuf, ...);
+long plpar_hcall_raw(unsigned long opcode, unsigned long retbuf[static PLPAR_HCALL_BUFSIZE], ...);

/**
* plpar_hcall9: - Make a pseries hypervisor call with up to 9 return arguments
@@ -549,8 +549,8 @@ long plpar_hcall_raw(unsigned long opcode, unsigned long *retbuf, ...);
* PLPAR_HCALL9_BUFSIZE to size the return argument buffer.
*/
#define PLPAR_HCALL9_BUFSIZE 9
-long plpar_hcall9(unsigned long opcode, unsigned long *retbuf, ...);
-long plpar_hcall9_raw(unsigned long opcode, unsigned long *retbuf, ...);
+long plpar_hcall9(unsigned long opcode, unsigned long retbuf[static PLPAR_HCALL9_BUFSIZE], ...);
+long plpar_hcall9_raw(unsigned long opcode, unsigned long retbuf[static PLPAR_HCALL9_BUFSIZE], ...);

/* pseries hcall tracing */
extern struct static_key hcall_tracepoint_key;
--
2.43.0


2024-05-30 11:53:09

by Sourabh Jain

[permalink] [raw]
Subject: Re: [PATCH AUTOSEL 6.9 18/23] powerpc: make fadump resilient with memory add/remove events

Hello Sasha,

Thank you for considering this patch for the stable trees 6.9, 6.8, 6.6,
and 6.1.

This patch does two things:
1. Fixes a potential memory corruption issue mentioned as the third
point in the commit message
2. Enables the kernel to avoid unnecessary fadump re-registration on
memory add/remove events

To make the second functionality available to users, I request you also
consider the upstream patch
mentioned below along with this patch. Both patches were part of the
same patch series.

https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?id=bc446c5acabadeb38b61b565535401c5dfdd1214

Now, there was also a third patch in the same patch series, but that is
about a documentation update.

Link to patch series:
https://lore.kernel.org/all/[email protected]/

Thanks,
Sourabh Jain


On 27/05/24 21:20, Sasha Levin wrote:
> From: Sourabh Jain <[email protected]>
>
> [ Upstream commit c6c5b14dac0d1bd0da8b4d1d3b77f18eb9085fcb ]
>
> Due to changes in memory resources caused by either memory hotplug or
> online/offline events, the elfcorehdr, which describes the CPUs and
> memory of the crashed kernel to the kernel that collects the dump (known
> as second/fadump kernel), becomes outdated. Consequently, attempting
> dump collection with an outdated elfcorehdr can lead to failed or
> inaccurate dump collection.
>
> Memory hotplug or online/offline events are referred to as memory add/remove
> events in the rest of the commit message.
>
> The current solution to address the aforementioned issue is as follows:
> Monitor memory add/remove events in userspace using udev rules, and
> re-register fadump whenever there are changes in memory resources. This
> leads to the creation of a new elfcorehdr with updated system memory
> information.
>
> There are several notable issues associated with re-registering fadump
> for every memory add/remove event.
>
> 1. Bulk memory add/remove events with udev-based fadump re-registration
> can lead to race conditions and, more importantly, create a wide
> window during which fadump is inactive until all memory add/remove
> events are settled.
> 2. Re-registering fadump for every memory add/remove event is
> inefficient.
> 3. The memory for elfcorehdr is allocated based on the memblock regions
> available during early boot and remains fixed thereafter. However, if
> elfcorehdr is later recreated with additional memblock regions, its
> size will increase, potentially leading to memory corruption.
>
> Address the aforementioned challenges by shifting the creation of
> elfcorehdr from the first kernel (also referred to as the crashed kernel),
> where it was created and frequently recreated for every memory
> add/remove event, to the fadump kernel. As a result, the elfcorehdr only
> needs to be created once, thus eliminating the necessity to re-register
> fadump during memory add/remove events.
>
> At present, the first kernel prepares fadump header and stores it in the
> fadump reserved area. The fadump header includes the start address of
> the elfcorehdr, crashing CPU details, and other relevant information. In
> the event of a crash in the first kernel, the second/fadump kernel boots and
> accesses the fadump header prepared by the first kernel. It then
> performs the following steps in a platform-specific function
> [rtas|opal]_fadump_process:
>
> 1. Sanity check for fadump header
> 2. Update CPU notes in elfcorehdr
>
> Along with the above, update the setup_fadump()/fadump.c to create
> elfcorehdr and set its address to the global variable elfcorehdr_addr
> for the vmcore module to process it in the second/fadump kernel.
>
> The section below outlines the information required to create the elfcorehdr
> and the changes made to make it available to the fadump kernel if it's
> not already.
>
> To create elfcorehdr, the following crashed kernel information is
> required: CPU notes, vmcoreinfo, and memory ranges.
>
> At present, the CPU notes are already prepared in the fadump kernel, so
> no changes are needed in that regard. The fadump kernel has access to
> all crashed kernel memory regions, including boot memory regions that
> are relocated by firmware to fadump reserved areas, so no changes for
> that either. However, it is necessary to add new members to the fadump
> header, i.e., the 'fadump_crash_info_header' structure, in order to pass
> the crashed kernel's vmcoreinfo address and its size to the fadump kernel.
>
> In addition to the vmcoreinfo address and size, there are a few other
> attributes also added to the fadump_crash_info_header structure.
>
> 1. version:
> It stores the fadump header version, which is currently set to 1.
> This provides flexibility to update the fadump crash info header in
> the future without changing the magic number. For each change in the
> fadump header, the version will be increased. This will help the
> updated kernel determine how to handle kernel dumps from older
> kernels. The magic number remains relevant for checking fadump header
> corruption.
>
> 2. pt_regs_sz/cpu_mask_sz:
> Store the sizes of the first kernel's pt_regs and cpu_mask structures.
> These attributes are used to prevent dump processing if the sizes of
> pt_regs or cpu_mask differ between the first and fadump kernels.
>
> Note: if either the first/crashed kernel or the second/fadump kernel does
> not have the changes introduced here, the kernel fails to collect the dump
> and prints a relevant error message on the console.
>
> Signed-off-by: Sourabh Jain <[email protected]>
> Signed-off-by: Michael Ellerman <[email protected]>
> Link: https://msgid.link/[email protected]
> Signed-off-by: Sasha Levin <[email protected]>
> ---
> arch/powerpc/include/asm/fadump-internal.h | 31 +-
> arch/powerpc/kernel/fadump.c | 361 +++++++++++--------
> arch/powerpc/platforms/powernv/opal-fadump.c | 22 +-
> arch/powerpc/platforms/pseries/rtas-fadump.c | 34 +-
> 4 files changed, 242 insertions(+), 206 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/fadump-internal.h b/arch/powerpc/include/asm/fadump-internal.h
> index 27f9e11eda28c..5d706a7acc8a4 100644
> --- a/arch/powerpc/include/asm/fadump-internal.h
> +++ b/arch/powerpc/include/asm/fadump-internal.h
> @@ -42,13 +42,38 @@ static inline u64 fadump_str_to_u64(const char *str)
>
> #define FADUMP_CPU_UNKNOWN (~((u32)0))
>
> -#define FADUMP_CRASH_INFO_MAGIC fadump_str_to_u64("FADMPINF")
> +/*
> + * The introduction of new fields in the fadump crash info header has
> + * led to a change in the magic key from `FADMPINF` to `FADMPSIG` for
> + * identifying a kernel crash from an old kernel.
> + *
> + * To prevent the need for further changes to the magic number in the
> + * event of future modifications to the fadump crash info header, a
> + * version field has been introduced to track the fadump crash info
> + * header version.
> + *
> + * Consider a few points before adding new members to the fadump crash info
> + * header structure:
> + *
> + * - Append new members; avoid adding them in between.
> + * - Non-primitive members should have a size member as well.
> + * - For every change in the fadump header, increment the
> + * fadump header version. This helps the updated kernel decide how to
> + * handle kernel dumps from older kernels.
> + */
> +#define FADUMP_CRASH_INFO_MAGIC_OLD fadump_str_to_u64("FADMPINF")
> +#define FADUMP_CRASH_INFO_MAGIC fadump_str_to_u64("FADMPSIG")
> +#define FADUMP_HEADER_VERSION 1
>
> /* fadump crash info structure */
> struct fadump_crash_info_header {
> u64 magic_number;
> - u64 elfcorehdr_addr;
> + u32 version;
> u32 crashing_cpu;
> + u64 vmcoreinfo_raddr;
> + u64 vmcoreinfo_size;
> + u32 pt_regs_sz;
> + u32 cpu_mask_sz;
> struct pt_regs regs;
> struct cpumask cpu_mask;
> };
> @@ -94,6 +119,8 @@ struct fw_dump {
> u64 boot_mem_regs_cnt;
>
> unsigned long fadumphdr_addr;
> + u64 elfcorehdr_addr;
> + u64 elfcorehdr_size;
> unsigned long cpu_notes_buf_vaddr;
> unsigned long cpu_notes_buf_size;
>
> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
> index d14eda1e85896..35254fc1516b2 100644
> --- a/arch/powerpc/kernel/fadump.c
> +++ b/arch/powerpc/kernel/fadump.c
> @@ -53,8 +53,6 @@ static struct kobject *fadump_kobj;
> static atomic_t cpus_in_fadump;
> static DEFINE_MUTEX(fadump_mutex);
>
> -static struct fadump_mrange_info crash_mrange_info = { "crash", NULL, 0, 0, 0, false };
> -
> #define RESERVED_RNGS_SZ 16384 /* 16K - 128 entries */
> #define RESERVED_RNGS_CNT (RESERVED_RNGS_SZ / \
> sizeof(struct fadump_memory_range))
> @@ -373,12 +371,6 @@ static unsigned long __init get_fadump_area_size(void)
> size = PAGE_ALIGN(size);
> size += fw_dump.boot_memory_size;
> size += sizeof(struct fadump_crash_info_header);
> - size += sizeof(struct elfhdr); /* ELF core header.*/
> - size += sizeof(struct elf_phdr); /* place holder for cpu notes */
> - /* Program headers for crash memory regions. */
> - size += sizeof(struct elf_phdr) * (memblock_num_regions(memory) + 2);
> -
> - size = PAGE_ALIGN(size);
>
> /* This is to hold kernel metadata on platforms that support it */
> size += (fw_dump.ops->fadump_get_metadata_size ?
> @@ -931,36 +923,6 @@ static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info,
> return 0;
> }
>
> -static int fadump_exclude_reserved_area(u64 start, u64 end)
> -{
> - u64 ra_start, ra_end;
> - int ret = 0;
> -
> - ra_start = fw_dump.reserve_dump_area_start;
> - ra_end = ra_start + fw_dump.reserve_dump_area_size;
> -
> - if ((ra_start < end) && (ra_end > start)) {
> - if ((start < ra_start) && (end > ra_end)) {
> - ret = fadump_add_mem_range(&crash_mrange_info,
> - start, ra_start);
> - if (ret)
> - return ret;
> -
> - ret = fadump_add_mem_range(&crash_mrange_info,
> - ra_end, end);
> - } else if (start < ra_start) {
> - ret = fadump_add_mem_range(&crash_mrange_info,
> - start, ra_start);
> - } else if (ra_end < end) {
> - ret = fadump_add_mem_range(&crash_mrange_info,
> - ra_end, end);
> - }
> - } else
> - ret = fadump_add_mem_range(&crash_mrange_info, start, end);
> -
> - return ret;
> -}
> -
> static int fadump_init_elfcore_header(char *bufp)
> {
> struct elfhdr *elf;
> @@ -997,52 +959,6 @@ static int fadump_init_elfcore_header(char *bufp)
> return 0;
> }
>
> -/*
> - * Traverse through memblock structure and setup crash memory ranges. These
> - * ranges will be used create PT_LOAD program headers in elfcore header.
> - */
> -static int fadump_setup_crash_memory_ranges(void)
> -{
> - u64 i, start, end;
> - int ret;
> -
> - pr_debug("Setup crash memory ranges.\n");
> - crash_mrange_info.mem_range_cnt = 0;
> -
> - /*
> - * Boot memory region(s) registered with firmware are moved to
> - * different location at the time of crash. Create separate program
> - * header(s) for this memory chunk(s) with the correct offset.
> - */
> - for (i = 0; i < fw_dump.boot_mem_regs_cnt; i++) {
> - start = fw_dump.boot_mem_addr[i];
> - end = start + fw_dump.boot_mem_sz[i];
> - ret = fadump_add_mem_range(&crash_mrange_info, start, end);
> - if (ret)
> - return ret;
> - }
> -
> - for_each_mem_range(i, &start, &end) {
> - /*
> - * skip the memory chunk that is already added
> - * (0 through boot_memory_top).
> - */
> - if (start < fw_dump.boot_mem_top) {
> - if (end > fw_dump.boot_mem_top)
> - start = fw_dump.boot_mem_top;
> - else
> - continue;
> - }
> -
> - /* add this range excluding the reserved dump area. */
> - ret = fadump_exclude_reserved_area(start, end);
> - if (ret)
> - return ret;
> - }
> -
> - return 0;
> -}
> -
> /*
> * If the given physical address falls within the boot memory region then
> * return the relocated address that points to the dump region reserved
> @@ -1073,36 +989,50 @@ static inline unsigned long fadump_relocate(unsigned long paddr)
> return raddr;
> }
>
> -static int fadump_create_elfcore_headers(char *bufp)
> +static void __init populate_elf_pt_load(struct elf_phdr *phdr, u64 start,
> + u64 size, unsigned long long offset)
> {
> - unsigned long long raddr, offset;
> - struct elf_phdr *phdr;
> + phdr->p_align = 0;
> + phdr->p_memsz = size;
> + phdr->p_filesz = size;
> + phdr->p_paddr = start;
> + phdr->p_offset = offset;
> + phdr->p_type = PT_LOAD;
> + phdr->p_flags = PF_R|PF_W|PF_X;
> + phdr->p_vaddr = (unsigned long)__va(start);
> +}
> +
> +static void __init fadump_populate_elfcorehdr(struct fadump_crash_info_header *fdh)
> +{
> + char *bufp;
> struct elfhdr *elf;
> - int i, j;
> + struct elf_phdr *phdr;
> + u64 boot_mem_dest_offset;
> + unsigned long long i, ra_start, ra_end, ra_size, mstart, mend;
>
> + bufp = (char *) fw_dump.elfcorehdr_addr;
> fadump_init_elfcore_header(bufp);
> elf = (struct elfhdr *)bufp;
> bufp += sizeof(struct elfhdr);
>
> /*
> - * setup ELF PT_NOTE, place holder for cpu notes info. The notes info
> - * will be populated during second kernel boot after crash. Hence
> - * this PT_NOTE will always be the first elf note.
> + * Set up ELF PT_NOTE, a placeholder for CPU notes information.
> + * The notes info will be populated later by platform-specific code.
> + * Hence, this PT_NOTE will always be the first ELF note.
> *
> * NOTE: Any new ELF note addition should be placed after this note.
> */
> phdr = (struct elf_phdr *)bufp;
> bufp += sizeof(struct elf_phdr);
> phdr->p_type = PT_NOTE;
> - phdr->p_flags = 0;
> - phdr->p_vaddr = 0;
> - phdr->p_align = 0;
> -
> - phdr->p_offset = 0;
> - phdr->p_paddr = 0;
> - phdr->p_filesz = 0;
> - phdr->p_memsz = 0;
> -
> + phdr->p_flags = 0;
> + phdr->p_vaddr = 0;
> + phdr->p_align = 0;
> + phdr->p_offset = 0;
> + phdr->p_paddr = 0;
> + phdr->p_filesz = 0;
> + phdr->p_memsz = 0;
> + /* Increment number of program headers. */
> (elf->e_phnum)++;
>
> /* setup ELF PT_NOTE for vmcoreinfo */
> @@ -1112,55 +1042,66 @@ static int fadump_create_elfcore_headers(char *bufp)
> phdr->p_flags = 0;
> phdr->p_vaddr = 0;
> phdr->p_align = 0;
> -
> - phdr->p_paddr = fadump_relocate(paddr_vmcoreinfo_note());
> - phdr->p_offset = phdr->p_paddr;
> - phdr->p_memsz = phdr->p_filesz = VMCOREINFO_NOTE_SIZE;
> -
> + phdr->p_paddr = phdr->p_offset = fdh->vmcoreinfo_raddr;
> + phdr->p_memsz = phdr->p_filesz = fdh->vmcoreinfo_size;
> /* Increment number of program headers. */
> (elf->e_phnum)++;
>
> - /* setup PT_LOAD sections. */
> - j = 0;
> - offset = 0;
> - raddr = fw_dump.boot_mem_addr[0];
> - for (i = 0; i < crash_mrange_info.mem_range_cnt; i++) {
> - u64 mbase, msize;
> -
> - mbase = crash_mrange_info.mem_ranges[i].base;
> - msize = crash_mrange_info.mem_ranges[i].size;
> - if (!msize)
> - continue;
> -
> + /*
> + * Setup PT_LOAD sections. first include boot memory regions
> + * and then add rest of the memory regions.
> + */
> + boot_mem_dest_offset = fw_dump.boot_mem_dest_addr;
> + for (i = 0; i < fw_dump.boot_mem_regs_cnt; i++) {
> phdr = (struct elf_phdr *)bufp;
> bufp += sizeof(struct elf_phdr);
> - phdr->p_type = PT_LOAD;
> - phdr->p_flags = PF_R|PF_W|PF_X;
> - phdr->p_offset = mbase;
> -
> - if (mbase == raddr) {
> - /*
> - * The entire real memory region will be moved by
> - * firmware to the specified destination_address.
> - * Hence set the correct offset.
> - */
> - phdr->p_offset = fw_dump.boot_mem_dest_addr + offset;
> - if (j < (fw_dump.boot_mem_regs_cnt - 1)) {
> - offset += fw_dump.boot_mem_sz[j];
> - raddr = fw_dump.boot_mem_addr[++j];
> - }
> + populate_elf_pt_load(phdr, fw_dump.boot_mem_addr[i],
> + fw_dump.boot_mem_sz[i],
> + boot_mem_dest_offset);
> + /* Increment number of program headers. */
> + (elf->e_phnum)++;
> + boot_mem_dest_offset += fw_dump.boot_mem_sz[i];
> + }
> +
> + /* Memory reserved for fadump in first kernel */
> + ra_start = fw_dump.reserve_dump_area_start;
> + ra_size = get_fadump_area_size();
> + ra_end = ra_start + ra_size;
> +
> + phdr = (struct elf_phdr *)bufp;
> + for_each_mem_range(i, &mstart, &mend) {
> + /* Boot memory regions already added, skip them now */
> + if (mstart < fw_dump.boot_mem_top) {
> + if (mend > fw_dump.boot_mem_top)
> + mstart = fw_dump.boot_mem_top;
> + else
> + continue;
> }
>
> - phdr->p_paddr = mbase;
> - phdr->p_vaddr = (unsigned long)__va(mbase);
> - phdr->p_filesz = msize;
> - phdr->p_memsz = msize;
> - phdr->p_align = 0;
> + /* Handle memblock regions overlaps with fadump reserved area */
> + if ((ra_start < mend) && (ra_end > mstart)) {
> + if ((mstart < ra_start) && (mend > ra_end)) {
> + populate_elf_pt_load(phdr, mstart, ra_start - mstart, mstart);
> + /* Increment number of program headers. */
> + (elf->e_phnum)++;
> + bufp += sizeof(struct elf_phdr);
> + phdr = (struct elf_phdr *)bufp;
> + populate_elf_pt_load(phdr, ra_end, mend - ra_end, ra_end);
> + } else if (mstart < ra_start) {
> + populate_elf_pt_load(phdr, mstart, ra_start - mstart, mstart);
> + } else if (ra_end < mend) {
> + populate_elf_pt_load(phdr, ra_end, mend - ra_end, ra_end);
> + }
> + } else {
> + /* No overlap with fadump reserved memory region */
> + populate_elf_pt_load(phdr, mstart, mend - mstart, mstart);
> + }
>
> /* Increment number of program headers. */
> (elf->e_phnum)++;
> + bufp += sizeof(struct elf_phdr);
> + phdr = (struct elf_phdr *) bufp;
> }
> - return 0;
> }
>
> static unsigned long init_fadump_header(unsigned long addr)
> @@ -1175,14 +1116,25 @@ static unsigned long init_fadump_header(unsigned long addr)
>
> memset(fdh, 0, sizeof(struct fadump_crash_info_header));
> fdh->magic_number = FADUMP_CRASH_INFO_MAGIC;
> - fdh->elfcorehdr_addr = addr;
> + fdh->version = FADUMP_HEADER_VERSION;
> /* We will set the crashing cpu id in crash_fadump() during crash. */
> fdh->crashing_cpu = FADUMP_CPU_UNKNOWN;
> +
> + /*
> + * The physical address and size of vmcoreinfo are required in the
> + * second kernel to prepare elfcorehdr.
> + */
> + fdh->vmcoreinfo_raddr = fadump_relocate(paddr_vmcoreinfo_note());
> + fdh->vmcoreinfo_size = VMCOREINFO_NOTE_SIZE;
> +
> +
> + fdh->pt_regs_sz = sizeof(struct pt_regs);
> /*
> * When LPAR is terminated by PYHP, ensure all possible CPUs'
> * register data is processed while exporting the vmcore.
> */
> fdh->cpu_mask = *cpu_possible_mask;
> + fdh->cpu_mask_sz = sizeof(struct cpumask);
>
> return addr;
> }
> @@ -1190,8 +1142,6 @@ static unsigned long init_fadump_header(unsigned long addr)
> static int register_fadump(void)
> {
> unsigned long addr;
> - void *vaddr;
> - int ret;
>
> /*
> * If no memory is reserved then we can not register for firmware-
> @@ -1200,18 +1150,10 @@ static int register_fadump(void)
> if (!fw_dump.reserve_dump_area_size)
> return -ENODEV;
>
> - ret = fadump_setup_crash_memory_ranges();
> - if (ret)
> - return ret;
> -
> addr = fw_dump.fadumphdr_addr;
>
> /* Initialize fadump crash info header. */
> addr = init_fadump_header(addr);
> - vaddr = __va(addr);
> -
> - pr_debug("Creating ELF core headers at %#016lx\n", addr);
> - fadump_create_elfcore_headers(vaddr);
>
> /* register the future kernel dump with firmware. */
> pr_debug("Registering for firmware-assisted kernel dump...\n");
> @@ -1230,7 +1172,6 @@ void fadump_cleanup(void)
> } else if (fw_dump.dump_registered) {
> /* Un-register Firmware-assisted dump if it was registered. */
> fw_dump.ops->fadump_unregister(&fw_dump);
> - fadump_free_mem_ranges(&crash_mrange_info);
> }
>
> if (fw_dump.ops->fadump_cleanup)
> @@ -1416,6 +1357,22 @@ static void fadump_release_memory(u64 begin, u64 end)
> fadump_release_reserved_area(tstart, end);
> }
>
> +static void fadump_free_elfcorehdr_buf(void)
> +{
> + if (fw_dump.elfcorehdr_addr == 0 || fw_dump.elfcorehdr_size == 0)
> + return;
> +
> + /*
> + * Before freeing the memory of `elfcorehdr`, reset the global
> + * `elfcorehdr_addr` to prevent modules like `vmcore` from accessing
> + * invalid memory.
> + */
> + elfcorehdr_addr = ELFCORE_ADDR_ERR;
> + fadump_free_buffer(fw_dump.elfcorehdr_addr, fw_dump.elfcorehdr_size);
> + fw_dump.elfcorehdr_addr = 0;
> + fw_dump.elfcorehdr_size = 0;
> +}
> +
> static void fadump_invalidate_release_mem(void)
> {
> mutex_lock(&fadump_mutex);
> @@ -1427,6 +1384,7 @@ static void fadump_invalidate_release_mem(void)
> fadump_cleanup();
> mutex_unlock(&fadump_mutex);
>
> + fadump_free_elfcorehdr_buf();
> fadump_release_memory(fw_dump.boot_mem_top, memblock_end_of_DRAM());
> fadump_free_cpu_notes_buf();
>
> @@ -1632,6 +1590,102 @@ static void __init fadump_init_files(void)
> return;
> }
>
> +static int __init fadump_setup_elfcorehdr_buf(void)
> +{
> + int elf_phdr_cnt;
> + unsigned long elfcorehdr_size;
> +
> + /*
> + * Program header for CPU notes comes first, followed by one for
> + * vmcoreinfo, and the remaining program headers correspond to
> + * memory regions.
> + */
> + elf_phdr_cnt = 2 + fw_dump.boot_mem_regs_cnt + memblock_num_regions(memory);
> + elfcorehdr_size = sizeof(struct elfhdr) + (elf_phdr_cnt * sizeof(struct elf_phdr));
> + elfcorehdr_size = PAGE_ALIGN(elfcorehdr_size);
> +
> + fw_dump.elfcorehdr_addr = (u64)fadump_alloc_buffer(elfcorehdr_size);
> + if (!fw_dump.elfcorehdr_addr) {
> + pr_err("Failed to allocate %lu bytes for elfcorehdr\n",
> + elfcorehdr_size);
> + return -ENOMEM;
> + }
> + fw_dump.elfcorehdr_size = elfcorehdr_size;
> + return 0;
> +}
> +
> +/*
> + * Check if the fadump header of crashed kernel is compatible with fadump kernel.
> + *
> + * It checks the magic number, endianness, and size of non-primitive type
> + * members of fadump header to ensure safe dump collection.
> + */
> +static bool __init is_fadump_header_compatible(struct fadump_crash_info_header *fdh)
> +{
> + if (fdh->magic_number == FADUMP_CRASH_INFO_MAGIC_OLD) {
> + pr_err("Old magic number, can't process the dump.\n");
> + return false;
> + }
> +
> + if (fdh->magic_number != FADUMP_CRASH_INFO_MAGIC) {
> + if (fdh->magic_number == swab64(FADUMP_CRASH_INFO_MAGIC))
> + pr_err("Endianness mismatch between the crashed and fadump kernels.\n");
> + else
> + pr_err("Fadump header is corrupted.\n");
> +
> + return false;
> + }
> +
> + /*
> + * Dump collection is not safe if the size of non-primitive type members
> + * of the fadump header do not match between crashed and fadump kernel.
> + */
> + if (fdh->pt_regs_sz != sizeof(struct pt_regs) ||
> + fdh->cpu_mask_sz != sizeof(struct cpumask)) {
> + pr_err("Fadump header size mismatch.\n");
> + return false;
> + }
> +
> + return true;
> +}
> +
> +static void __init fadump_process(void)
> +{
> + struct fadump_crash_info_header *fdh;
> +
> + fdh = (struct fadump_crash_info_header *) __va(fw_dump.fadumphdr_addr);
> + if (!fdh) {
> + pr_err("Crash info header is empty.\n");
> + goto err_out;
> + }
> +
> + /* Avoid processing the dump if fadump header isn't compatible */
> + if (!is_fadump_header_compatible(fdh))
> + goto err_out;
> +
> + /* Allocate buffer for elfcorehdr */
> + if (fadump_setup_elfcorehdr_buf())
> + goto err_out;
> +
> + fadump_populate_elfcorehdr(fdh);
> +
> + /* Let platform update the CPU notes in elfcorehdr */
> + if (fw_dump.ops->fadump_process(&fw_dump) < 0)
> + goto err_out;
> +
> + /*
> + * elfcorehdr is now ready to be exported.
> + *
> + * set elfcorehdr_addr so that vmcore module will export the
> + * elfcorehdr through '/proc/vmcore'.
> + */
> + elfcorehdr_addr = virt_to_phys((void *)fw_dump.elfcorehdr_addr);
> + return;
> +
> +err_out:
> + fadump_invalidate_release_mem();
> +}
> +
> /*
> * Prepare for firmware-assisted dump.
> */
> @@ -1651,12 +1705,7 @@ int __init setup_fadump(void)
> * saving it to the disk.
> */
> if (fw_dump.dump_active) {
> - /*
> - * if dump process fails then invalidate the registration
> - * and release memory before proceeding for re-registration.
> - */
> - if (fw_dump.ops->fadump_process(&fw_dump) < 0)
> - fadump_invalidate_release_mem();
> + fadump_process();
> }
> /* Initialize the kernel dump memory structure and register with f/w */
> else if (fw_dump.reserve_dump_area_size) {
> diff --git a/arch/powerpc/platforms/powernv/opal-fadump.c b/arch/powerpc/platforms/powernv/opal-fadump.c
> index 964f464b1b0e3..767a6b19e42a8 100644
> --- a/arch/powerpc/platforms/powernv/opal-fadump.c
> +++ b/arch/powerpc/platforms/powernv/opal-fadump.c
> @@ -513,8 +513,8 @@ opal_fadump_build_cpu_notes(struct fw_dump *fadump_conf,
> final_note(note_buf);
>
> pr_debug("Updating elfcore header (%llx) with cpu notes\n",
> - fdh->elfcorehdr_addr);
> - fadump_update_elfcore_header(__va(fdh->elfcorehdr_addr));
> + fadump_conf->elfcorehdr_addr);
> + fadump_update_elfcore_header((char *)fadump_conf->elfcorehdr_addr);
> return 0;
> }
>
> @@ -526,12 +526,7 @@ static int __init opal_fadump_process(struct fw_dump *fadump_conf)
> if (!opal_fdm_active || !fadump_conf->fadumphdr_addr)
> return rc;
>
> - /* Validate the fadump crash info header */
> fdh = __va(fadump_conf->fadumphdr_addr);
> - if (fdh->magic_number != FADUMP_CRASH_INFO_MAGIC) {
> - pr_err("Crash info header is not valid.\n");
> - return rc;
> - }
>
> #ifdef CONFIG_OPAL_CORE
> /*
> @@ -545,18 +540,7 @@ static int __init opal_fadump_process(struct fw_dump *fadump_conf)
> kernel_initiated = true;
> #endif
>
> - rc = opal_fadump_build_cpu_notes(fadump_conf, fdh);
> - if (rc)
> - return rc;
> -
> - /*
> - * We are done validating dump info and elfcore header is now ready
> - * to be exported. set elfcorehdr_addr so that vmcore module will
> - * export the elfcore header through '/proc/vmcore'.
> - */
> - elfcorehdr_addr = fdh->elfcorehdr_addr;
> -
> - return rc;
> + return opal_fadump_build_cpu_notes(fadump_conf, fdh);
> }
>
> static void opal_fadump_region_show(struct fw_dump *fadump_conf,
> diff --git a/arch/powerpc/platforms/pseries/rtas-fadump.c b/arch/powerpc/platforms/pseries/rtas-fadump.c
> index b5853e9fcc3c1..214f37788b2dc 100644
> --- a/arch/powerpc/platforms/pseries/rtas-fadump.c
> +++ b/arch/powerpc/platforms/pseries/rtas-fadump.c
> @@ -375,11 +375,8 @@ static int __init rtas_fadump_build_cpu_notes(struct fw_dump *fadump_conf)
> }
> final_note(note_buf);
>
> - if (fdh) {
> - pr_debug("Updating elfcore header (%llx) with cpu notes\n",
> - fdh->elfcorehdr_addr);
> - fadump_update_elfcore_header(__va(fdh->elfcorehdr_addr));
> - }
> + pr_debug("Updating elfcore header (%llx) with cpu notes\n", fadump_conf->elfcorehdr_addr);
> + fadump_update_elfcore_header((char *)fadump_conf->elfcorehdr_addr);
> return 0;
>
> error_out:
> @@ -389,14 +386,11 @@ static int __init rtas_fadump_build_cpu_notes(struct fw_dump *fadump_conf)
> }
>
> /*
> - * Validate and process the dump data stored by firmware before exporting
> - * it through '/proc/vmcore'.
> + * Validate and process the dump data stored by the firmware, and update
> + * the CPU notes of elfcorehdr.
> */
> static int __init rtas_fadump_process(struct fw_dump *fadump_conf)
> {
> - struct fadump_crash_info_header *fdh;
> - int rc = 0;
> -
> if (!fdm_active || !fadump_conf->fadumphdr_addr)
> return -EINVAL;
>
> @@ -415,25 +409,7 @@ static int __init rtas_fadump_process(struct fw_dump *fadump_conf)
> return -EINVAL;
> }
>
> - /* Validate the fadump crash info header */
> - fdh = __va(fadump_conf->fadumphdr_addr);
> - if (fdh->magic_number != FADUMP_CRASH_INFO_MAGIC) {
> - pr_err("Crash info header is not valid.\n");
> - return -EINVAL;
> - }
> -
> - rc = rtas_fadump_build_cpu_notes(fadump_conf);
> - if (rc)
> - return rc;
> -
> - /*
> - * We are done validating dump info and elfcore header is now ready
> - * to be exported. set elfcorehdr_addr so that vmcore module will
> - * export the elfcore header through '/proc/vmcore'.
> - */
> - elfcorehdr_addr = fdh->elfcorehdr_addr;
> -
> - return 0;
> + return rtas_fadump_build_cpu_notes(fadump_conf);
> }
>
> static void rtas_fadump_region_show(struct fw_dump *fadump_conf,