
Commit a391263

wildea01 authored and Russell King committed
ARM: 8203/1: mm: try to re-use old ASID assignments following a rollover
Rather than unconditionally allocating a fresh ASID to an mm from an older generation, attempt to re-use the old assignment where possible. This can bring performance benefits on systems where the ASID is used to tag things other than the TLB (e.g. branch prediction resources).

Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Russell King <[email protected]>
1 parent 2b94fe2 commit a391263

1 file changed: +34 additions, -24 deletions


arch/arm/mm/context.c

Lines changed: 34 additions & 24 deletions
@@ -184,36 +184,46 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 	u64 asid = atomic64_read(&mm->context.id);
 	u64 generation = atomic64_read(&asid_generation);
 
-	if (asid != 0 && is_reserved_asid(asid)) {
+	if (asid != 0) {
 		/*
-		 * Our current ASID was active during a rollover, we can
-		 * continue to use it and this was just a false alarm.
+		 * If our current ASID was active during a rollover, we
+		 * can continue to use it and this was just a false alarm.
 		 */
-		asid = generation | (asid & ~ASID_MASK);
-	} else {
+		if (is_reserved_asid(asid))
+			return generation | (asid & ~ASID_MASK);
+
 		/*
-		 * Allocate a free ASID. If we can't find one, take a
-		 * note of the currently active ASIDs and mark the TLBs
-		 * as requiring flushes. We always count from ASID #1,
-		 * as we reserve ASID #0 to switch via TTBR0 and to
-		 * avoid speculative page table walks from hitting in
-		 * any partial walk caches, which could be populated
-		 * from overlapping level-1 descriptors used to map both
-		 * the module area and the userspace stack.
+		 * We had a valid ASID in a previous life, so try to re-use
+		 * it if possible.
 		 */
-		asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
-		if (asid == NUM_USER_ASIDS) {
-			generation = atomic64_add_return(ASID_FIRST_VERSION,
-							 &asid_generation);
-			flush_context(cpu);
-			asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
-		}
-		__set_bit(asid, asid_map);
-		cur_idx = asid;
-		asid |= generation;
-		cpumask_clear(mm_cpumask(mm));
+		asid &= ~ASID_MASK;
+		if (!__test_and_set_bit(asid, asid_map))
+			goto bump_gen;
 	}
 
+	/*
+	 * Allocate a free ASID. If we can't find one, take a note of the
+	 * currently active ASIDs and mark the TLBs as requiring flushes.
+	 * We always count from ASID #1, as we reserve ASID #0 to switch
+	 * via TTBR0 and to avoid speculative page table walks from hitting
+	 * in any partial walk caches, which could be populated from
+	 * overlapping level-1 descriptors used to map both the module
+	 * area and the userspace stack.
+	 */
+	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
+	if (asid == NUM_USER_ASIDS) {
+		generation = atomic64_add_return(ASID_FIRST_VERSION,
+						 &asid_generation);
+		flush_context(cpu);
+		asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
+	}
+
+	__set_bit(asid, asid_map);
+	cur_idx = asid;
+
+bump_gen:
+	asid |= generation;
+	cpumask_clear(mm_cpumask(mm));
 	return asid;
 }
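For illustration only, a minimal standalone C model of the re-use path follows. This is an assumption-laden sketch, not the kernel code: it is single-threaded, uses a plain bool array in place of the kernel's asid_map bitmap, and omits reserved-ASID tracking, per-CPU state and TLB flushing.

/*
 * Simplified model of ASID allocation with re-use after a rollover.
 * Assumption: single-threaded toy code, not the kernel implementation.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ASID_BITS   8
#define NUM_ASIDS   (1U << ASID_BITS)
#define ASID_MASK   (~((uint64_t)NUM_ASIDS - 1))   /* generation bits */

static bool asid_map[NUM_ASIDS];        /* ASIDs taken in this generation */
static uint64_t generation = NUM_ASIDS; /* generation lives above the ASID bits */

/* Rollover: forget all assignments and start a new generation. */
static void roll_over(void)
{
        generation += NUM_ASIDS;
        memset(asid_map, 0, sizeof(asid_map));
}

/*
 * Give old_id (a possibly stale generation|asid value) an id valid for the
 * current generation, preferring to keep its old ASID number.
 */
static uint64_t new_context(uint64_t old_id)
{
        uint64_t asid = old_id & ~ASID_MASK;

        /* Re-use the previous ASID number if nobody has claimed it yet. */
        if (old_id != 0 && !asid_map[asid]) {
                asid_map[asid] = true;
                return generation | asid;
        }

        /* Otherwise search for a free slot, counting from #1 (#0 is reserved). */
        for (asid = 1; asid < NUM_ASIDS; asid++) {
                if (!asid_map[asid]) {
                        asid_map[asid] = true;
                        return generation | asid;
                }
        }

        /* Exhausted: roll over (the kernel would also flush TLBs here) and retry. */
        roll_over();
        asid_map[1] = true;
        return generation | 1;
}

int main(void)
{
        uint64_t id = new_context(0);   /* first allocation: ASID #1 */

        printf("initial id:     %#llx\n", (unsigned long long)id);
        roll_over();                    /* simulate an ASID rollover */
        id = new_context(id);           /* the old ASID number is re-used */
        printf("after rollover: %#llx\n", (unsigned long long)id);
        return 0;
}

In this model, the second new_context() call behaves like the patch's new fast path: the task keeps ASID #1 under the new generation rather than being handed an arbitrary free slot, so ASID-tagged state such as branch predictor entries can remain useful.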
