I forgot to remove an unnecessary set_bit_string() call when I converted
the gart driver to use the IOMMU helper.
=
From: FUJITA Tomonori <[email protected]>
Subject: [PATCH] x86 gart: remove unnecessary set_bit_string
iommu_area_alloc() internally calls set_bit_string() and sets the bits
properly, so this second set_bit_string() call is unnecessary.
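
For reference, this is roughly what iommu_area_alloc() in
lib/iommu-helper.c does (a from-memory sketch of the helper as of this
patch, not the exact code):

unsigned long iommu_area_alloc(unsigned long *map, unsigned long size,
			       unsigned long start, unsigned int nr,
			       unsigned long shift, unsigned long boundary_size,
			       unsigned long align_mask)
{
	unsigned long index;
again:
	/* find a run of nr zero bits, honoring align_mask */
	index = find_next_zero_area(map, size, start, nr, align_mask);
	if (index != -1) {
		if (iommu_is_span_boundary(index, nr, shift, boundary_size)) {
			/* the area crosses a segment boundary, retry past it */
			start = index + 1;
			goto again;
		}
		/* the helper already marks the area as allocated here */
		set_bit_string(map, index, nr);
	}
	return index;
}

So by the time alloc_iommu() sees a valid offset, the bits for
[offset, offset + size) are already set, and the extra set_bit_string()
in the gart driver only sets bits that are set already.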
Signed-off-by: FUJITA Tomonori <[email protected]>
---
arch/x86/kernel/pci-gart_64.c | 1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kernel/pci-gart_64.c b/arch/x86/kernel/pci-gart_64.c
index aa8ec92..6183e8c 100644
--- a/arch/x86/kernel/pci-gart_64.c
+++ b/arch/x86/kernel/pci-gart_64.c
@@ -104,7 +104,6 @@ static unsigned long alloc_iommu(struct device *dev, int size)
size, base_index, boundary_size, 0);
}
if (offset != -1) {
- set_bit_string(iommu_gart_bitmap, offset, size);
next_bit = offset+size;
if (next_bit >= iommu_pages) {
next_bit = 0;
--
1.5.5.GIT
* FUJITA Tomonori <[email protected]> wrote:
> iommu_area_alloc() internally calls set_bit_string() and sets the bits
> properly, so this second set_bit_string() call is unnecessary.
applied to tip/x86/gart - thanks!
Ingo