2023-05-29 14:09:14

by Peter De Schrijver

Subject: [PATCH v5 0/5] firmware: tegra: Add MRQ support for Tegra264.

Changes in v2:

- Added signoff messages
- Updated bindings to support DRAM MRQ GSCs
- Split out memory-region property for tegra186-bpmp into its own patch
- Addressed sparse errors in bpmp-tegra186.c

Changes in v3:

- Added #address-cells = <2> and #size-cells = <2> to
nvidia,tegra264-bpmp-shmem binding example.

Changes in v4:

- Added lost Acked-by tags to patches 1 and 2.
- Updated topic for patch 3 to 'soc/tegra: fuse:'.
- Updated topic for patch 4 to 'dt-bindings: Add support for DRAM MRQ GSCs'.
- Updated topic for patch 5 to 'dt-bindings: Add support for tegra186-bpmp DRAM MRQ GSCs'.
- Removed unneeded include statements in patch 6.
- Renamed some functions in patch 6 for more consistent naming.
- Improved handling of void * vs void __iomem * in patch 6.

Changes in v5:

- Small formatting fixes in patches 2 and 3.
- Added a comment to patch 6 to clarify the purpose of the enum.

Peter De Schrijver (4):
dt-bindings: mailbox: tegra: Document Tegra264 HSP
dt-bindings: reserved-memory: Add support for DRAM MRQ GSCs
dt-bindings: firmware: Add support for tegra186-bpmp DRAM MRQ GSCs
firmware: tegra: bpmp: Add support for DRAM MRQ GSCs

Stefan Kristiansson (1):
mailbox: tegra: add support for Tegra264

 .../firmware/nvidia,tegra186-bpmp.yaml        |  39 ++-
 .../bindings/mailbox/nvidia,tegra186-hsp.yaml |   1 +
 .../nvidia,tegra264-bpmp-shmem.yaml           |  47 ++++
 drivers/firmware/tegra/bpmp-tegra186.c        | 232 +++++++++++++-----
 drivers/firmware/tegra/bpmp.c                 |   4 +-
 drivers/mailbox/tegra-hsp.c                   |  16 +-
 6 files changed, 264 insertions(+), 75 deletions(-)
create mode 100644 Documentation/devicetree/bindings/reserved-memory/nvidia,tegra264-bpmp-shmem.yaml

--
2.34.1



2023-05-29 14:09:22

by Peter De Schrijver

Subject: [PATCH v5 1/5] dt-bindings: mailbox: tegra: Document Tegra264 HSP

Add the compatible string for the HSP block found on the Tegra264 SoC.
The HSP block in Tegra264 is not register-compatible with the one in
Tegra194 or Tegra234, hence there is no fallback compatible string.
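
For illustration, a Tegra264 HSP instance would then be described as in
the sketch below. This snippet is not taken from this series: the unit
address, reg size and interrupt number are placeholders, and the
GIC_SPI/IRQ_TYPE_LEVEL_HIGH macros come from the usual dt-bindings
interrupt headers.

	hsp_top0: hsp@3c00000 {
		compatible = "nvidia,tegra264-hsp";
		reg = <0x03c00000 0xa0000>;
		interrupts = <GIC_SPI 176 IRQ_TYPE_LEVEL_HIGH>;
		interrupt-names = "doorbell";
		#mbox-cells = <2>;
	};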

Acked-by: Krzysztof Kozlowski <[email protected]>
Acked-by: Thierry Reding <[email protected]>
Signed-off-by: Peter De Schrijver <[email protected]>
---
.../devicetree/bindings/mailbox/nvidia,tegra186-hsp.yaml | 1 +
1 file changed, 1 insertion(+)

diff --git a/Documentation/devicetree/bindings/mailbox/nvidia,tegra186-hsp.yaml b/Documentation/devicetree/bindings/mailbox/nvidia,tegra186-hsp.yaml
index a3e87516d637..2d14fc948999 100644
--- a/Documentation/devicetree/bindings/mailbox/nvidia,tegra186-hsp.yaml
+++ b/Documentation/devicetree/bindings/mailbox/nvidia,tegra186-hsp.yaml
@@ -66,6 +66,7 @@ properties:
     oneOf:
       - const: nvidia,tegra186-hsp
       - const: nvidia,tegra194-hsp
+      - const: nvidia,tegra264-hsp
       - items:
           - const: nvidia,tegra234-hsp
           - const: nvidia,tegra194-hsp
--
2.34.1


2023-05-29 14:10:34

by Peter De Schrijver

Subject: [PATCH v5 5/5] firmware: tegra: bpmp: Add support for DRAM MRQ GSCs

Implement support for DRAM MRQ GSCs. When the device tree provides a
memory-region, the IPC channels are mapped from that reserved DRAM
region using memremap() rather than allocated from the SRAM "shmem"
pool.
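
For context (not part of this change): the MRQ call path seen by
consumers is unchanged. Whether the channels live in SRAM or DRAM,
requests still go through tegra_bpmp_transfer(). The ping sketch below
is illustrative only, with error handling trimmed:

	#include <soc/tegra/bpmp.h>
	#include <soc/tegra/bpmp-abi.h>

	static int bpmp_ping_example(struct device *dev)
	{
		struct mrq_ping_request request = { .challenge = 1 };
		struct mrq_ping_response response = {};
		struct tegra_bpmp_message msg = {
			.mrq = MRQ_PING,
			.tx = { .data = &request, .size = sizeof(request) },
			.rx = { .data = &response, .size = sizeof(response) },
		};
		struct tegra_bpmp *bpmp = tegra_bpmp_get(dev);
		int err;

		if (IS_ERR(bpmp))
			return PTR_ERR(bpmp);

		/* Blocking call; the firmware's MRQ status is returned in msg.rx.ret. */
		err = tegra_bpmp_transfer(bpmp, &msg);

		tegra_bpmp_put(bpmp);

		return err;
	}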

Signed-off-by: Peter De Schrijver <[email protected]>
---
drivers/firmware/tegra/bpmp-tegra186.c | 232 ++++++++++++++++++-------
drivers/firmware/tegra/bpmp.c | 4 +-
2 files changed, 168 insertions(+), 68 deletions(-)

diff --git a/drivers/firmware/tegra/bpmp-tegra186.c b/drivers/firmware/tegra/bpmp-tegra186.c
index 2e26199041cd..d1c1af793b6f 100644
--- a/drivers/firmware/tegra/bpmp-tegra186.c
+++ b/drivers/firmware/tegra/bpmp-tegra186.c
@@ -4,7 +4,9 @@
  */
 
 #include <linux/genalloc.h>
+#include <linux/io.h>
 #include <linux/mailbox_client.h>
+#include <linux/of_address.h>
 #include <linux/platform_device.h>
 
 #include <soc/tegra/bpmp.h>
@@ -13,12 +15,22 @@
 
 #include "bpmp-private.h"
 
+/* Discriminating enum for the union below */
+enum tegra_bpmp_mem_type { TEGRA_INVALID, TEGRA_SRAM, TEGRA_DRAM };
 struct tegra186_bpmp {
 	struct tegra_bpmp *parent;
 
+	enum tegra_bpmp_mem_type type;
 	struct {
-		struct gen_pool *pool;
-		void __iomem *virt;
+		union {
+			struct {
+				void __iomem *virt;
+				struct gen_pool *pool;
+			} sram;
+			struct {
+				void *virt;
+			} dram;
+		};
 		dma_addr_t phys;
 	} tx, rx;
 
@@ -26,6 +38,7 @@ struct tegra186_bpmp {
 		struct mbox_client client;
 		struct mbox_chan *channel;
 	} mbox;
+
 };
 
 static inline struct tegra_bpmp *
@@ -118,8 +131,17 @@ static int tegra186_bpmp_channel_init(struct tegra_bpmp_channel *channel,
 	queue_size = tegra_ivc_total_queue_size(message_size);
 	offset = queue_size * index;
 
-	iosys_map_set_vaddr_iomem(&rx, priv->rx.virt + offset);
-	iosys_map_set_vaddr_iomem(&tx, priv->tx.virt + offset);
+	if (priv->type == TEGRA_SRAM) {
+		iosys_map_set_vaddr_iomem(&rx, priv->rx.sram.virt + offset);
+		iosys_map_set_vaddr_iomem(&tx, priv->tx.sram.virt + offset);
+	} else if (priv->type == TEGRA_DRAM) {
+		iosys_map_set_vaddr(&rx, priv->rx.dram.virt + offset);
+		iosys_map_set_vaddr(&tx, priv->tx.dram.virt + offset);
+	} else {
+		dev_err(bpmp->dev, "Inconsistent state %d of priv->type detected in %s\n",
+			priv->type, __func__);
+		return -EINVAL;
+	}
 
 	err = tegra_ivc_init(channel->ivc, NULL, &rx, priv->rx.phys + offset, &tx,
 			     priv->tx.phys + offset, 1, message_size, tegra186_bpmp_ivc_notify,
@@ -158,54 +180,135 @@ static void mbox_handle_rx(struct mbox_client *client, void *data)
 	tegra_bpmp_handle_rx(bpmp);
 }
 
-static int tegra186_bpmp_init(struct tegra_bpmp *bpmp)
+static void tegra186_bpmp_teardown_channels(struct tegra_bpmp *bpmp)
 {
-	struct tegra186_bpmp *priv;
-	unsigned int i;
-	int err;
+	size_t i;
+	struct tegra186_bpmp *priv = bpmp->priv;
 
-	priv = devm_kzalloc(bpmp->dev, sizeof(*priv), GFP_KERNEL);
-	if (!priv)
-		return -ENOMEM;
+	for (i = 0; i < bpmp->threaded.count; i++) {
+		if (!bpmp->threaded_channels[i].bpmp)
+			continue;
 
-	bpmp->priv = priv;
-	priv->parent = bpmp;
+		tegra186_bpmp_channel_cleanup(&bpmp->threaded_channels[i]);
+	}
 
-	priv->tx.pool = of_gen_pool_get(bpmp->dev->of_node, "shmem", 0);
-	if (!priv->tx.pool) {
+	tegra186_bpmp_channel_cleanup(bpmp->rx_channel);
+	tegra186_bpmp_channel_cleanup(bpmp->tx_channel);
+
+	if (priv->type == TEGRA_SRAM) {
+		gen_pool_free(priv->tx.sram.pool, (unsigned long)priv->tx.sram.virt, 4096);
+		gen_pool_free(priv->rx.sram.pool, (unsigned long)priv->rx.sram.virt, 4096);
+	} else if (priv->type == TEGRA_DRAM) {
+		memunmap(priv->tx.dram.virt);
+	}
+}
+
+static int tegra186_bpmp_sram_init(struct tegra_bpmp *bpmp)
+{
+	int err;
+	struct tegra186_bpmp *priv = bpmp->priv;
+
+	priv->tx.sram.pool = of_gen_pool_get(bpmp->dev->of_node, "shmem", 0);
+	if (!priv->tx.sram.pool) {
 		dev_err(bpmp->dev, "TX shmem pool not found\n");
 		return -EPROBE_DEFER;
 	}
 
-	priv->tx.virt = (void __iomem *)gen_pool_dma_alloc(priv->tx.pool, 4096, &priv->tx.phys);
-	if (!priv->tx.virt) {
+	priv->tx.sram.virt = (void __iomem *)gen_pool_dma_alloc(priv->tx.sram.pool, 4096,
+								&priv->tx.phys);
+	if (!priv->tx.sram.virt) {
 		dev_err(bpmp->dev, "failed to allocate from TX pool\n");
 		return -ENOMEM;
 	}
 
-	priv->rx.pool = of_gen_pool_get(bpmp->dev->of_node, "shmem", 1);
-	if (!priv->rx.pool) {
+	priv->rx.sram.pool = of_gen_pool_get(bpmp->dev->of_node, "shmem", 1);
+	if (!priv->rx.sram.pool) {
 		dev_err(bpmp->dev, "RX shmem pool not found\n");
 		err = -EPROBE_DEFER;
 		goto free_tx;
 	}
 
-	priv->rx.virt = (void __iomem *)gen_pool_dma_alloc(priv->rx.pool, 4096, &priv->rx.phys);
-	if (!priv->rx.virt) {
+	priv->rx.sram.virt = (void __iomem *)gen_pool_dma_alloc(priv->rx.sram.pool, 4096,
+								&priv->rx.phys);
+	if (!priv->rx.sram.virt) {
 		dev_err(bpmp->dev, "failed to allocate from RX pool\n");
 		err = -ENOMEM;
 		goto free_tx;
 	}
 
+	priv->type = TEGRA_SRAM;
+
+	return 0;
+
+free_tx:
+	gen_pool_free(priv->tx.sram.pool, (unsigned long)priv->tx.sram.virt, 4096);
+
+	return err;
+}
+
+static int tegra186_bpmp_dram_init(struct tegra_bpmp *bpmp)
+{
+	int err;
+	resource_size_t size;
+	struct resource res;
+	struct device_node *np;
+	struct tegra186_bpmp *priv = bpmp->priv;
+
+	np = of_parse_phandle(bpmp->dev->of_node, "memory-region", 0);
+	if (!np)
+		return -ENOENT;
+
+	err = of_address_to_resource(np, 0, &res);
+	if (err) {
+		dev_warn(bpmp->dev, "Parsing memory region returned: %d\n", err);
+		return -EINVAL;
+	}
+
+	size = resource_size(&res);
+	if (size < SZ_8K) {
+		dev_warn(bpmp->dev, "DRAM region must be larger than 8 KiB\n");
+		return -EINVAL;
+	}
+
+	priv->tx.phys = res.start;
+	priv->rx.phys = res.start + SZ_4K;
+
+	priv->tx.dram.virt = memremap(priv->tx.phys, size, MEMREMAP_WC);
+	if (priv->tx.dram.virt == NULL) {
+		dev_warn(bpmp->dev, "DRAM region mapping failed\n");
+		return -EINVAL;
+	}
+	priv->rx.dram.virt = priv->tx.dram.virt + SZ_4K;
+	priv->type = TEGRA_DRAM;
+
+	return 0;
+}
+
+static int tegra186_bpmp_setup_channels(struct tegra_bpmp *bpmp)
+{
+	int err;
+	size_t i;
+	struct tegra186_bpmp *priv = bpmp->priv;
+
+	priv->type = TEGRA_INVALID;
+
+	err = tegra186_bpmp_dram_init(bpmp);
+	if (err == -ENOENT)
+		err = tegra186_bpmp_sram_init(bpmp);
+	if (err < 0)
+		return err;
+
 	err = tegra186_bpmp_channel_init(bpmp->tx_channel, bpmp,
 					 bpmp->soc->channels.cpu_tx.offset);
 	if (err < 0)
-		goto free_rx;
+		return err;
 
 	err = tegra186_bpmp_channel_init(bpmp->rx_channel, bpmp,
 					 bpmp->soc->channels.cpu_rx.offset);
-	if (err < 0)
-		goto cleanup_tx_channel;
+	if (err < 0) {
+		tegra186_bpmp_channel_cleanup(bpmp->tx_channel);
+		return err;
+	}
 
 	for (i = 0; i < bpmp->threaded.count; i++) {
 		unsigned int index = bpmp->soc->channels.thread.offset + i;
@@ -213,9 +316,42 @@ static int tegra186_bpmp_init(struct tegra_bpmp *bpmp)
 		err = tegra186_bpmp_channel_init(&bpmp->threaded_channels[i],
 						 bpmp, index);
 		if (err < 0)
-			goto cleanup_channels;
+			break;
 	}
 
+	if (err < 0)
+		tegra186_bpmp_teardown_channels(bpmp);
+
+	return err;
+}
+
+static void tegra186_bpmp_reset_channels(struct tegra_bpmp *bpmp)
+{
+	size_t i;
+
+	tegra186_bpmp_channel_reset(bpmp->tx_channel);
+	tegra186_bpmp_channel_reset(bpmp->rx_channel);
+
+	for (i = 0; i < bpmp->threaded.count; i++)
+		tegra186_bpmp_channel_reset(&bpmp->threaded_channels[i]);
+}
+
+static int tegra186_bpmp_init(struct tegra_bpmp *bpmp)
+{
+	int err;
+	struct tegra186_bpmp *priv;
+
+	priv = devm_kzalloc(bpmp->dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	bpmp->priv = priv;
+	priv->parent = bpmp;
+
+	err = tegra186_bpmp_setup_channels(bpmp);
+	if (err < 0)
+		return err;
+
 	/* mbox registration */
 	priv->mbox.client.dev = bpmp->dev;
 	priv->mbox.client.rx_callback = mbox_handle_rx;
@@ -226,63 +362,27 @@ static int tegra186_bpmp_init(struct tegra_bpmp *bpmp)
 	if (IS_ERR(priv->mbox.channel)) {
 		err = PTR_ERR(priv->mbox.channel);
 		dev_err(bpmp->dev, "failed to get HSP mailbox: %d\n", err);
-		goto cleanup_channels;
+		tegra186_bpmp_teardown_channels(bpmp);
+		return err;
 	}
 
-	tegra186_bpmp_channel_reset(bpmp->tx_channel);
-	tegra186_bpmp_channel_reset(bpmp->rx_channel);
-
-	for (i = 0; i < bpmp->threaded.count; i++)
-		tegra186_bpmp_channel_reset(&bpmp->threaded_channels[i]);
+	tegra186_bpmp_reset_channels(bpmp);
 
 	return 0;
-
-cleanup_channels:
-	for (i = 0; i < bpmp->threaded.count; i++) {
-		if (!bpmp->threaded_channels[i].bpmp)
-			continue;
-
-		tegra186_bpmp_channel_cleanup(&bpmp->threaded_channels[i]);
-	}
-
-	tegra186_bpmp_channel_cleanup(bpmp->rx_channel);
-cleanup_tx_channel:
-	tegra186_bpmp_channel_cleanup(bpmp->tx_channel);
-free_rx:
-	gen_pool_free(priv->rx.pool, (unsigned long)priv->rx.virt, 4096);
-free_tx:
-	gen_pool_free(priv->tx.pool, (unsigned long)priv->tx.virt, 4096);
-
-	return err;
 }
 
 static void tegra186_bpmp_deinit(struct tegra_bpmp *bpmp)
 {
 	struct tegra186_bpmp *priv = bpmp->priv;
-	unsigned int i;
 
 	mbox_free_channel(priv->mbox.channel);
 
-	for (i = 0; i < bpmp->threaded.count; i++)
-		tegra186_bpmp_channel_cleanup(&bpmp->threaded_channels[i]);
-
-	tegra186_bpmp_channel_cleanup(bpmp->rx_channel);
-	tegra186_bpmp_channel_cleanup(bpmp->tx_channel);
-
-	gen_pool_free(priv->rx.pool, (unsigned long)priv->rx.virt, 4096);
-	gen_pool_free(priv->tx.pool, (unsigned long)priv->tx.virt, 4096);
+	tegra186_bpmp_teardown_channels(bpmp);
 }
 
 static int tegra186_bpmp_resume(struct tegra_bpmp *bpmp)
 {
-	unsigned int i;
-
-	/* reset message channels */
-	tegra186_bpmp_channel_reset(bpmp->tx_channel);
-	tegra186_bpmp_channel_reset(bpmp->rx_channel);
-
-	for (i = 0; i < bpmp->threaded.count; i++)
-		tegra186_bpmp_channel_reset(&bpmp->threaded_channels[i]);
+	tegra186_bpmp_reset_channels(bpmp);
 
 	return 0;
 }
diff --git a/drivers/firmware/tegra/bpmp.c b/drivers/firmware/tegra/bpmp.c
index 8b5e5daa9fae..17bd3590aaa2 100644
--- a/drivers/firmware/tegra/bpmp.c
+++ b/drivers/firmware/tegra/bpmp.c
@@ -735,6 +735,8 @@ static int tegra_bpmp_probe(struct platform_device *pdev)
 	if (!bpmp->threaded_channels)
 		return -ENOMEM;
 
+	platform_set_drvdata(pdev, bpmp);
+
 	err = bpmp->soc->ops->init(bpmp);
 	if (err < 0)
 		return err;
@@ -758,8 +760,6 @@ static int tegra_bpmp_probe(struct platform_device *pdev)
 
 	dev_info(&pdev->dev, "firmware: %.*s\n", (int)sizeof(tag), tag);
 
-	platform_set_drvdata(pdev, bpmp);
-
 	err = of_platform_default_populate(pdev->dev.of_node, NULL, &pdev->dev);
 	if (err < 0)
 		goto free_mrq;
--
2.34.1


2023-05-29 14:11:27

by Peter De Schrijver

Subject: [PATCH v5 4/5] dt-bindings: firmware: Add support for tegra186-bpmp DRAM MRQ GSCs

Add a memory-region property to the tegra186-bpmp binding to support
DRAM MRQ GSCs.
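
Taken together with the reserved-memory binding from the previous
patch, a DRAM MRQ configuration ties together as sketched below. This
is condensed from the binding examples in this series; the address is
the example value, not a real board:

	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;

		dram_cpu_bpmp_mail: shmem@f1be0000 {
			compatible = "nvidia,tegra264-bpmp-shmem";
			reg = <0x0 0xf1be0000 0x0 0x2000>;
			no-map;
		};
	};

	bpmp {
		compatible = "nvidia,tegra186-bpmp";
		mboxes = <&hsp_top1 TEGRA_HSP_MBOX_TYPE_DB TEGRA_HSP_DB_MASTER_BPMP>;
		memory-region = <&dram_cpu_bpmp_mail>;
		#clock-cells = <1>;
		#power-domain-cells = <1>;
		#reset-cells = <1>;
	};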

Co-developed-by: Stefan Kristiansson <[email protected]>
Signed-off-by: Stefan Kristiansson <[email protected]>
Signed-off-by: Peter De Schrijver <[email protected]>
Reviewed-by: Conor Dooley <[email protected]>
---
.../firmware/nvidia,tegra186-bpmp.yaml | 39 ++++++++++++++++---
1 file changed, 34 insertions(+), 5 deletions(-)

diff --git a/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.yaml b/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.yaml
index 833c07f1685c..c43d17f6e96b 100644
--- a/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.yaml
+++ b/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.yaml
@@ -57,8 +57,11 @@ description: |
   "#address-cells" or "#size-cells" property.
 
   The shared memory area for the IPC TX and RX between CPU and BPMP are
-  predefined and work on top of sysram, which is an SRAM inside the
-  chip. See ".../sram/sram.yaml" for the bindings.
+  predefined and work on top of either sysram, which is an SRAM inside the
+  chip, or in normal SDRAM.
+  See ".../sram/sram.yaml" for the bindings for the SRAM case.
+  See "../reserved-memory/nvidia,tegra264-bpmp-shmem.yaml" for bindings for
+  the SDRAM case.
 
 properties:
   compatible:
@@ -81,6 +84,11 @@ properties:
     minItems: 2
     maxItems: 2
 
+  memory-region:
+    description: phandle to reserved memory region used for IPC between
+      CPU-NS and BPMP.
+    maxItems: 1
+
   "#clock-cells":
     const: 1
@@ -115,10 +123,15 @@ properties:
 
 additionalProperties: false
 
+oneOf:
+  - required:
+      - memory-region
+  - required:
+      - shmem
+
 required:
   - compatible
   - mboxes
-  - shmem
   - "#clock-cells"
   - "#power-domain-cells"
   - "#reset-cells"
@@ -165,8 +178,7 @@ examples:
                         <&mc TEGRA186_MEMORY_CLIENT_BPMPDMAW &emc>;
         interconnect-names = "read", "write", "dma-mem", "dma-write";
         iommus = <&smmu TEGRA186_SID_BPMP>;
-        mboxes = <&hsp_top0 TEGRA_HSP_MBOX_TYPE_DB
-                  TEGRA_HSP_DB_MASTER_BPMP>;
+        mboxes = <&hsp_top0 TEGRA_HSP_MBOX_TYPE_DB TEGRA_HSP_DB_MASTER_BPMP>;
         shmem = <&cpu_bpmp_tx>, <&cpu_bpmp_rx>;
         #clock-cells = <1>;
         #power-domain-cells = <1>;
@@ -184,3 +196,20 @@ examples:
             #thermal-sensor-cells = <1>;
         };
     };
+
+  - |
+    #include <dt-bindings/mailbox/tegra186-hsp.h>
+
+    bpmp {
+        compatible = "nvidia,tegra186-bpmp";
+        interconnects = <&mc TEGRA186_MEMORY_CLIENT_BPMPR &emc>,
+                        <&mc TEGRA186_MEMORY_CLIENT_BPMPW &emc>,
+                        <&mc TEGRA186_MEMORY_CLIENT_BPMPDMAR &emc>,
+                        <&mc TEGRA186_MEMORY_CLIENT_BPMPDMAW &emc>;
+        interconnect-names = "read", "write", "dma-mem", "dma-write";
+        mboxes = <&hsp_top1 TEGRA_HSP_MBOX_TYPE_DB TEGRA_HSP_DB_MASTER_BPMP>;
+        memory-region = <&dram_cpu_bpmp_mail>;
+        #clock-cells = <1>;
+        #power-domain-cells = <1>;
+        #reset-cells = <1>;
+    };
--
2.34.1


2023-05-29 14:23:08

by Peter De Schrijver

Subject: [PATCH v5 3/5] dt-bindings: reserved-memory: Add support for DRAM MRQ GSCs

Add bindings for DRAM MRQ GSC support: a reserved-memory node
describing the DRAM region used for IPC between CPU-NS and BPMP.

Co-developed-by: Stefan Kristiansson <[email protected]>
Signed-off-by: Stefan Kristiansson <[email protected]>
Signed-off-by: Peter De Schrijver <[email protected]>
Reviewed-by: Conor Dooley <[email protected]>
---
.../nvidia,tegra264-bpmp-shmem.yaml | 47 +++++++++++++++++++
1 file changed, 47 insertions(+)
create mode 100644 Documentation/devicetree/bindings/reserved-memory/nvidia,tegra264-bpmp-shmem.yaml

diff --git a/Documentation/devicetree/bindings/reserved-memory/nvidia,tegra264-bpmp-shmem.yaml b/Documentation/devicetree/bindings/reserved-memory/nvidia,tegra264-bpmp-shmem.yaml
new file mode 100644
index 000000000000..f9b2f0fdc282
--- /dev/null
+++ b/Documentation/devicetree/bindings/reserved-memory/nvidia,tegra264-bpmp-shmem.yaml
@@ -0,0 +1,47 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/reserved-memory/nvidia,tegra264-bpmp-shmem.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Tegra CPU-NS - BPMP IPC reserved memory
+
+maintainers:
+  - Peter De Schrijver <[email protected]>
+
+description: |
+  Define a memory region used for communication between CPU-NS and BPMP.
+  Typically this node is created by the bootloader as the physical address
+  has to be known to both CPU-NS and BPMP for correct IPC operation.
+  The memory region is defined using a child node under /reserved-memory.
+  The sub-node is named shmem@<address>.
+
+allOf:
+  - $ref: reserved-memory.yaml
+
+properties:
+  compatible:
+    const: nvidia,tegra264-bpmp-shmem
+
+  reg:
+    description: The physical address and size of the shared SDRAM region
+
+unevaluatedProperties: false
+
+required:
+  - compatible
+  - reg
+  - no-map
+
+examples:
+  - |
+    reserved-memory {
+        #address-cells = <2>;
+        #size-cells = <2>;
+        dram_cpu_bpmp_mail: shmem@f1be0000 {
+            compatible = "nvidia,tegra264-bpmp-shmem";
+            reg = <0x0 0xf1be0000 0x0 0x2000>;
+            no-map;
+        };
+    };
+...
--
2.34.1