
Commit cbef847

Authored by Naoya Horiguchi, committed by torvalds
mm/hugetlb: pmd_huge() returns true for non-present hugepage
Migrating hugepages and hwpoisoned hugepages are considered non-present hugepages: they are referenced via migration entries and hwpoison entries in their page table slots.

This behavior causes a race condition, because pmd_huge() cannot distinguish a migrating/hwpoisoned hugepage from a non-huge entry. follow_page_mask() is one example: the kernel would call follow_page_pte() for such a hugepage, while that function is supposed to handle only normal pages.

To avoid this, this patch makes pmd_huge() return true when pmd_none() is false *and* pmd_present() is false. We don't have to worry about mixing up a non-present pmd entry with a normal pmd (pointing to a leaf-level pte page), because pmd_present() is true for a normal pmd.

The same race condition could happen in (x86-specific) gup_pmd_range(), where this patch simply adds a pmd_present() check instead of using pmd_huge(), because gup_pmd_range() is a fast path. If we have a non-present hugepage in this function, we go into gup_huge_pmd(), return 0 at the flag mask check, and finally fall back to the slow path.

Fixes: 290408d ("hugetlb: hugepage migration core")
Signed-off-by: Naoya Horiguchi <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: James Hogan <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Luiz Capitulino <[email protected]>
Cc: Nishanth Aravamudan <[email protected]>
Cc: Lee Schermerhorn <[email protected]>
Cc: Steve Capper <[email protected]>
Cc: <[email protected]> [2.6.36+]
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 61f77ed

3 files changed: +10 −2 lines

arch/x86/mm/gup.c (1 addition, 1 deletion)

@@ -172,7 +172,7 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
 		 */
 		if (pmd_none(pmd) || pmd_trans_splitting(pmd))
 			return 0;
-		if (unlikely(pmd_large(pmd))) {
+		if (unlikely(pmd_large(pmd) || !pmd_present(pmd))) {
 			/*
 			 * NUMA hinting faults need to be handled in the GUP
 			 * slowpath for accounting purposes and so that they

arch/x86/mm/hugetlbpage.c (7 additions, 1 deletion)

@@ -54,9 +54,15 @@ int pud_huge(pud_t pud)
 
 #else
 
+/*
+ * pmd_huge() returns 1 if @pmd is hugetlb related entry, that is normal
+ * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
+ * Otherwise, returns 0.
+ */
 int pmd_huge(pmd_t pmd)
 {
-	return !!(pmd_val(pmd) & _PAGE_PSE);
+	return !pmd_none(pmd) &&
+		(pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
 }
 
 int pud_huge(pud_t pud)

mm/hugetlb.c (2 additions, 0 deletions)

@@ -3679,6 +3679,8 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 {
 	struct page *page;
 
+	if (!pmd_present(*pmd))
+		return NULL;
 	page = pte_page(*(pte_t *)pmd);
 	if (page)
 		page += ((address & ~PMD_MASK) >> PAGE_SHIFT);
