We currently fail to merge a region into another one whose top
address is ULLONG_MAX: in that case top_end + 1 wraps around to 0,
so the overlap check wrongly concludes the regions are disjoint and
the merge never happens. This situation should not have been hit yet,
given the nature of the reserved regions currently exposed, but it
would happen if we were to expose regions beyond the reach of the
dma_mask or beyond the reach of the IOMMU.
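
For illustration only (not part of the patch), here is a minimal
standalone userspace sketch of the corner case, using made-up region
values rather than real iommu_resv_region data:

	/*
	 * Sketch of the wraparound: when a region ends at ULLONG_MAX,
	 * top_end + 1 wraps to 0, so the old check wrongly reports the
	 * regions as disjoint and they are never merged.
	 */
	#include <stdio.h>
	#include <limits.h>

	int main(void)
	{
		unsigned long long top_start = 0x1000;	/* hypothetical */
		unsigned long long top_length = ULLONG_MAX - top_start + 1;
		unsigned long long top_end = top_start + top_length - 1;
						/* == ULLONG_MAX */
		unsigned long long iter_start = 0x2000;	/* hypothetical */

		/* old check: top_end + 1 wraps to 0, prints 1 (disjoint) */
		printf("old: %d\n", iter_start > top_end + 1);

		/* fixed check: prints 0, so the regions get merged */
		printf("new: %d\n",
		       top_end != ULLONG_MAX && iter_start > top_end + 1);

		return 0;
	}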
Signed-off-by: Eric Auger <[email protected]>
---
drivers/iommu/iommu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 609bd25bf154..dd8cda340e62 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -423,7 +423,7 @@ static int iommu_insert_resv_region(struct iommu_resv_region *new,
check_overlap:
top_end = top->start + top->length - 1;
- if (iter->start > top_end + 1) {
+ if (top_end != ULLONG_MAX && iter->start > top_end + 1) {
list_move_tail(&iter->list, &stack);
} else {
top->length = max(top_end, iter_end) - top->start + 1;
--
2.21.3