[PATCH] mm: flush_tlb_range outside ptlock
There was one small but very significant change in the previous patch:
mprotect's flush_tlb_range fell outside the page_table_lock.  That is how
it is in 2.4, but that alone doesn't prove it safe in 2.6.
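
To illustrate the pattern (a sketch only, not the actual mprotect code:
change_ptes() here is a hypothetical stand-in for the page-table walk),
the PTE updates stay under page_table_lock while the flush now comes
after the unlock:

	#include <linux/mm.h>		/* vm_area_struct, mm_struct, pgprot_t */
	#include <asm/tlbflush.h>	/* flush_tlb_range() */

	static void sketch_change_protection(struct vm_area_struct *vma,
		unsigned long start, unsigned long end, pgprot_t newprot)
	{
		struct mm_struct *mm = vma->vm_mm;

		spin_lock(&mm->page_table_lock);
		change_ptes(mm, start, end, newprot);	/* hypothetical PTE rewrite */
		spin_unlock(&mm->page_table_lock);

		flush_tlb_range(vma, start, end);	/* now outside the lock */
	}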

On some architectures flush_tlb_range comes to the same as flush_tlb_mm,
which has always been called from outside page_table_lock in dup_mmap, and
is so proved safe.  Others required a deeper audit: I could find no reliance
on page_table_lock in any of them; but in ia64 and parisc I found some code
which looks as if it might want preemption disabled.  That won't do any
actual harm, so pending a decision from the maintainers, disable preemption
there.
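
The code in question has the shape below (a sketch, not from the patch:
purge_one_page() is a hypothetical stand-in for the arch-specific purge
instruction, ia64_ptcl on ia64 or pdtlb/pitlb on parisc):

	#include <linux/mm.h>		/* PAGE_SIZE */
	#include <linux/preempt.h>	/* preempt_disable()/preempt_enable() */

	static void sketch_flush_tlb_range(unsigned long start, unsigned long end)
	{
		preempt_disable();	/* keep the purge loop on one CPU */
		do {
			purge_one_page(start);	/* hypothetical per-page purge */
			start += PAGE_SIZE;
		} while (start < end);
		preempt_enable();
	}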

Remove comments on page_table_lock from flush_tlb_mm, flush_tlb_range and
flush_tlb_page entries in cachetlb.txt: they were rather misleading (what
generic code does is different from what usually happens), the rules are now
changing, and it's not yet clear where we'll end up (will the generic
tlb_flush_mmu always happen under lock?  never under lock?  or sometimes
under and sometimes not?).

Signed-off-by: Hugh Dickins <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Hugh Dickins authored and Linus Torvalds committed Oct 30, 2005
1 parent 705e87c commit 663b97f
Showing 3 changed files with 4 additions and 10 deletions.
Documentation/cachetlb.txt: 0 additions & 9 deletions
@@ -49,9 +49,6 @@ changes occur:
 	page table operations such as what happens during
 	fork, and exec.
 
-	Platform developers note that generic code will always
-	invoke this interface without mm->page_table_lock held.
-
 3) void flush_tlb_range(struct vm_area_struct *vma,
 	unsigned long start, unsigned long end)
 
@@ -72,9 +69,6 @@ changes occur:
 	call flush_tlb_page (see below) for each entry which may be
 	modified.
 
-	Platform developers note that generic code will always
-	invoke this interface with mm->page_table_lock held.
-
 4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 
 	This time we need to remove the PAGE_SIZE sized translation
@@ -93,9 +87,6 @@ changes occur:
 
 	This is used primarily during fault processing.
 
-	Platform developers note that generic code will always
-	invoke this interface with mm->page_table_lock held.
-
 5) void flush_tlb_pgtables(struct mm_struct *mm,
 		unsigned long start, unsigned long end)
 
arch/ia64/mm/tlb.c: 2 additions & 0 deletions
@@ -158,10 +158,12 @@ flush_tlb_range (struct vm_area_struct *vma, unsigned long start, unsigned long
 # ifdef CONFIG_SMP
 	platform_global_tlb_purge(mm, start, end, nbits);
 # else
+	preempt_disable();
 	do {
 		ia64_ptcl(start, (nbits<<2));
 		start += (1UL << nbits);
 	} while (start < end);
+	preempt_enable();
 # endif
 
 	ia64_srlz_i();			/* srlz.i implies srlz.d */
include/asm-parisc/tlbflush.h: 2 additions & 1 deletion
@@ -88,7 +88,7 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
 	if (npages >= 512)  /* 2MB of space: arbitrary, should be tuned */
 		flush_tlb_all();
 	else {
-
+		preempt_disable();
 		mtsp(vma->vm_mm->context,1);
 		purge_tlb_start();
 		if (split_tlb) {
@@ -102,6 +102,7 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
 				pdtlb(start);
 				start += PAGE_SIZE;
 			}
+			preempt_enable();
 		}
 		purge_tlb_end();
 	}
