In the __split_huge_page_map() function, the check for
page_mapcount(page) is invariant within the for loop. Because the
macro is implemented using atomic_read(), the compiler cannot
optimize the redundant check away, leading to an unnecessary read
of the page structure on each iteration.

This patch moves the invariant BUG_ON() check out of the loop so
that it is done only once.
Signed-off-by: Waiman Long <[email protected]>
---
mm/huge_memory.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b4b1feb..b8bb16c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1744,6 +1744,8 @@ static int __split_huge_page_map(struct page *page,
if (pmd) {
pgtable = pgtable_trans_huge_withdraw(mm, pmd);
pmd_populate(mm, &_pmd, pgtable);
+ if (pmd_write(*pmd))
+ BUG_ON(page_mapcount(page) != 1);
haddr = address;
for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
@@ -1753,8 +1755,6 @@ static int __split_huge_page_map(struct page *page,
entry = maybe_mkwrite(pte_mkdirty(entry), vma);
if (!pmd_write(*pmd))
entry = pte_wrprotect(entry);
- else
- BUG_ON(page_mapcount(page) != 1);
if (!pmd_young(*pmd))
entry = pte_mkold(entry);
if (pmd_numa(*pmd))
--
1.7.1
On Mon, Jun 16, 2014 at 03:35:48PM -0400, Waiman Long wrote:
> In the __split_huge_page_map() function, the check for
> page_mapcount(page) is invariant within the for loop. Because the
> macro is implemented using atomic_read(), the compiler cannot
> optimize the redundant check away, leading to an unnecessary read
> of the page structure on each iteration.
>
> This patch moves the invariant BUG_ON() check out of the loop so
> that it is done only once.
Looks okay, but why? Were you able to measure a difference?
--
Kirill A. Shutemov
On Mon, Jun 16, 2014 at 11:49:34PM +0300, Kirill A. Shutemov wrote:
> On Mon, Jun 16, 2014 at 03:35:48PM -0400, Waiman Long wrote:
> > In the __split_huge_page_map() function, the check for
> > page_mapcount(page) is invariant within the for loop. Because the
> > macro is implemented using atomic_read(), the compiler cannot
> > optimize the redundant check away, leading to an unnecessary read
> > of the page structure on each iteration.
And atomic_read() is *not* an atomic operation. It's implemented as
a dereference through a cast to volatile, which suppresses compiler
optimization but doesn't affect what the CPU can do with the variable.
So I doubt the difference will be measurable anywhere.
--
Kirill A. Shutemov
On 06/16/2014 04:59 PM, Kirill A. Shutemov wrote:
> On Mon, Jun 16, 2014 at 11:49:34PM +0300, Kirill A. Shutemov wrote:
>> On Mon, Jun 16, 2014 at 03:35:48PM -0400, Waiman Long wrote:
>>> In the __split_huge_page_map() function, the check for
>>> page_mapcount(page) is invariant within the for loop. Because the
>>> macro is implemented using atomic_read(), the compiler cannot
>>> optimize the redundant check away, leading to an unnecessary read
>>> of the page structure on each iteration.
> And atomic_read() is *not* an atomic operation. It's implemented as
> a dereference through a cast to volatile, which suppresses compiler
> optimization but doesn't affect what the CPU can do with the variable.
>
> So I doubt the difference will be measurable anywhere.
>
Because it is treated as a volatile object, the compiler has to
reread the relevant page structure field in every iteration of the
loop (512 on x86) when pmd_write(*pmd) is true. I saw a slight
improvement (about 2%) in a microbenchmark I wrote that breaks up
1000 THPs with 1000 forked processes.
-Longman
On Mon, Jun 16, 2014 at 11:45:42PM -0400, Waiman Long wrote:
> On 06/16/2014 04:59 PM, Kirill A. Shutemov wrote:
> >On Mon, Jun 16, 2014 at 11:49:34PM +0300, Kirill A. Shutemov wrote:
> >>On Mon, Jun 16, 2014 at 03:35:48PM -0400, Waiman Long wrote:
> >>>In the __split_huge_page_map() function, the check for
> >>>page_mapcount(page) is invariant within the for loop. Because the
> >>>macro is implemented using atomic_read(), the compiler cannot
> >>>optimize the redundant check away, leading to an unnecessary read
> >>>of the page structure on each iteration.
> >And atomic_read() is *not* an atomic operation. It's implemented as
> >a dereference through a cast to volatile, which suppresses compiler
> >optimization but doesn't affect what the CPU can do with the variable.
> >
> >So I doubt the difference will be measurable anywhere.
> >
>
> Because it is treated as a volatile object, the compiler has to
> reread the relevant page structure field in every iteration of the
> loop (512 on x86) when pmd_write(*pmd) is true. I saw a slight
> improvement (about 2%) in a microbenchmark I wrote that breaks up
> 1000 THPs with 1000 forked processes.
Then bring the patch with performance data.
--
Kirill A. Shutemov