ARM: 7925/1: mm: keep track of last ASID allocation to improve bitmap searching

Since we only clear entries in the ASID bitmap on a rollover event, the
bitmap tends to consist of a block of consecutive set bits followed by
a block of consecutive clear bits. The exception to this rule is for
ASIDs which have been carried over from a previous generation, but
these are bound by the number of CPUs.

This patch optimises our bitmap searching strategy, so that we search
from the last successful allocation rather than from index 1 each time
we allocate a new ASID.

Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Russell King <[email protected]>
wildea01 authored and Russell King committed Dec 29, 2013
1 parent e1a5848 · commit a7a0410
Showing 1 changed file with 3 additions and 1 deletion.
arch/arm/mm/context.c: 3 additions & 1 deletion
@@ -180,6 +180,7 @@ static int is_reserved_asid(u64 asid)
 
 static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 {
+	static u32 cur_idx = 1;
 	u64 asid = atomic64_read(&mm->context.id);
 	u64 generation = atomic64_read(&asid_generation);
 
@@ -197,14 +198,15 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 	 * as we reserve ASID #0 to switch via TTBR0 and indicate
 	 * rollover events.
 	 */
-	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
+	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
 	if (asid == NUM_USER_ASIDS) {
 		generation = atomic64_add_return(ASID_FIRST_VERSION,
 						 &asid_generation);
 		flush_context(cpu);
 		asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
 	}
 	__set_bit(asid, asid_map);
+	cur_idx = asid;
 	asid |= generation;
 	cpumask_clear(mm_cpumask(mm));
 }
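
To illustrate the optimisation outside the kernel, here is a minimal userspace C sketch of the same hinted search: it remembers where the last allocation succeeded and resumes scanning from there, falling back to a rollover that clears the map when the space is exhausted. The names (alloc_asid, find_next_zero, NUM_ASIDS) and the byte-per-slot map are illustrative simplifications, not the kernel's bitmap API.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_ASIDS 256                 /* illustrative size */

static uint8_t asid_map[NUM_ASIDS];   /* 1 = in use; the kernel packs this into a bitmap */
static uint32_t cur_idx = 1;          /* last successful allocation; index 0 is reserved */

/* Return the first free slot at or after 'start', or NUM_ASIDS if none. */
static uint32_t find_next_zero(uint32_t start)
{
	uint32_t i;

	for (i = start; i < NUM_ASIDS; i++)
		if (!asid_map[i])
			return i;
	return NUM_ASIDS;
}

/* Allocate an ASID, resuming the search from the previous hit instead of index 1. */
static uint32_t alloc_asid(void)
{
	uint32_t asid = find_next_zero(cur_idx);

	if (asid == NUM_ASIDS) {
		/* Rollover: clear the map and retry from index 1
		 * (standing in for the kernel's flush_context()). */
		memset(asid_map, 0, sizeof(asid_map));
		asid = find_next_zero(1);
	}
	asid_map[asid] = 1;
	cur_idx = asid;	/* remember the hint for the next allocation */
	return asid;
}

int main(void)
{
	int i;

	for (i = 0; i < 4; i++)
		printf("allocated ASID %u\n", alloc_asid());
	return 0;
}

Without the cur_idx hint, each allocation rescans the long prefix of set bits from index 1; with it, successive allocations typically succeed immediately, which is what the one-line change to the find_next_zero_bit() call buys.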
