vm: Reduce address space fragmentation

jemalloc performs two types of virtual memory allocations: (1) large
chunks of virtual memory, where the chunk size is a multiple of a
superpage and explicitly aligned, and (2) small allocations, mostly
128KB, where no alignment is requested.  Typically, it starts with a
small allocation, and over time it makes both types of allocation.
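
For illustration, a minimal userspace sketch of the two request shapes
(the 4MB chunk size and the MAP_ALIGNED_SUPER flag are assumptions for
this example, not jemalloc's actual code):

    #include <sys/mman.h>
    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
            /* (1) A large chunk: a superpage multiple, explicitly aligned. */
            void *chunk = mmap(NULL, 4 * 1024 * 1024,
                PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE | MAP_ALIGNED_SUPER, -1, 0);

            /* (2) A small allocation: typically 128KB, no alignment requested. */
            void *small = mmap(NULL, 128 * 1024,
                PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE, -1, 0);

            if (chunk == MAP_FAILED || small == MAP_FAILED)
                    err(1, "mmap");
            printf("chunk at %p, small at %p\n", chunk, small);
            return (0);
    }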

With anon_loc being updated on every allocation, we wind up with a
repeating pattern of a small allocation, a large gap, and a large,
aligned allocation.  (As an aside, we wind up allocating a reservation
for these small allocations, but it will never fill because the next
large, aligned allocation updates anon_loc, leaving a gap that will
never be filled with other small allocations.)
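
Schematically, with the old rule "anon_loc = *addr + length", each cycle
strands a gap behind anon_loc (ascending addresses, not to scale):

    [128KB small][-- alignment gap --][large aligned chunk] anon_loc ->
    [128KB small][-- alignment gap --][large aligned chunk] anon_loc ->
    ...

No later small allocation can land in those gaps, because every search
for free space starts at anon_loc, past them.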

With this change, anon_loc isn't updated on every allocation.  So, the
small allocations will be clustered together, the large allocations will
be clustered together, and there will be fewer gaps between the
anonymous memory allocations.  In addition, I see a small reduction in
reservations allocated (e.g., 1.6% during buildworld), fewer partially
populated reservations, and a small increase in 64KB page promotions on
arm64.
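
With anon_loc pinned at the lowest anonymous mapping, the first-fit
search restarts from the same point each time, so the layout instead
tends toward (again a sketch, not to scale):

    anon_loc -> [small][small][small]...  [large aligned chunk][large aligned chunk]...

Longer runs of populated pages in turn make it easier to fill
reservations and to promote superpages, consistent with the numbers
above.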

Reviewed by:	kib
MFC after:	1 week
Differential Revision:	https://reviews.freebsd.org/D39845
@@ -2247,8 +2247,15 @@ vm_map_find(vm_map_t map, vm_object_t object, vm_ooffset_t offset,
 		rv = vm_map_insert(map, object, offset, *addr, *addr + length,
 		    prot, max, cow);
 	}
-	if (rv == KERN_SUCCESS && update_anon)
-		map->anon_loc = *addr + length;
+
+	/*
+	 * Update the starting address for clustered anonymous memory mappings
+	 * if a starting address was not previously defined or an ASLR restart
+	 * placed an anonymous memory mapping at a lower address.
+	 */
+	if (update_anon && rv == KERN_SUCCESS && (map->anon_loc == 0 ||
+	    *addr < map->anon_loc))
+		map->anon_loc = *addr;
 done:
 	vm_map_unlock(map);
 	return (rv);
@@ -4041,9 +4048,6 @@ vm_map_delete(vm_map_t map, vm_offset_t start, vm_offset_t end)
 		    entry->object.vm_object != NULL)
 			pmap_map_delete(map->pmap, entry->start, entry->end);
 
-		if (entry->end == map->anon_loc)
-			map->anon_loc = entry->start;
-
 		/*
 		 * Delete the entry only after removing all pmap
 		 * entries pointing to its pages.  (Otherwise, its