linux/mm
Tejun Heo b38d08f318 percpu: restructure locking
At first, the percpu allocator required a sleepable context for both
alloc and free paths and used pcpu_alloc_mutex to protect everything.
Later, pcpu_lock was introduced to protect the index data structure so
that the free path can be invoked from atomic contexts.  The
conversion only updated what's necessary and left most of the
allocation path under pcpu_alloc_mutex.

Atomic allocation support is planned for the percpu allocator.  This
patch restructures locking so that the coverage of pcpu_alloc_mutex is
further reduced.

* pcpu_alloc() now grabs pcpu_alloc_mutex only while creating a new
  chunk and populating the allocated area.  Everything else is now
  protected solely by pcpu_lock.

  After this change, multiple instances of pcpu_extend_area_map() may
  race but the function already implements sufficient synchronization
  using pcpu_lock.

  This also allows multiple allocators to reach new chunk creation
  concurrently.  To avoid creating multiple empty chunks back-to-back,
  a new chunk is created only if no other empty chunk exists after
  grabbing pcpu_alloc_mutex.

* pcpu_lock is now held while modifying chunk->populated bitmap.
  After this, all data structures are protected by pcpu_lock.

Signed-off-by: Tejun Heo <tj@kernel.org>
2014-09-02 14:46:02 -04:00
..
backing-dev.c
balloon_compaction.c
bootmem.c
cleancache.c
cma.c
compaction.c
debug-pagealloc.c
dmapool.c
early_ioremap.c
fadvise.c
failslab.c
filemap.c Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs 2014-08-11 11:44:11 -07:00
filemap_xip.c
fremap.c
frontswap.c
gup.c mm: describe mmap_sem rules for __lock_page_or_retry() and callers 2014-08-06 18:01:20 -07:00
highmem.c mm/highmem: make kmap cache coloring aware 2014-08-06 18:01:22 -07:00
huge_memory.c mm: memcontrol: rewrite charge API 2014-08-08 15:57:17 -07:00
hugetlb.c mm: fix potential infinite loop in dissolve_free_huge_pages() 2014-08-06 18:01:21 -07:00
hugetlb_cgroup.c mm, hugetlb_cgroup: align hugetlb cgroup limit to hugepage size 2014-08-14 10:56:15 -06:00
hwpoison-inject.c mm/hwpoison-inject.c: remove unnecessary null test before debugfs_remove_recursive 2014-08-06 18:01:19 -07:00
init-mm.c
internal.h
interval_tree.c
iov_iter.c switch iov_iter_get_pages() to passing maximal number of pages 2014-08-07 14:40:11 -04:00
Kconfig mm/zpool: update zswap to use zpool 2014-08-06 18:01:23 -07:00
Kconfig.debug
kmemcheck.c
kmemleak-test.c
kmemleak.c
ksm.c
list_lru.c
maccess.c
madvise.c mm: update the description for madvise_remove 2014-08-06 18:01:18 -07:00
Makefile mm/zpool: implement common zpool api to zbud/zsmalloc 2014-08-06 18:01:23 -07:00
memblock.c
memcontrol.c mm: memcontrol: avoid charge statistics churn during page migration 2014-08-08 15:57:18 -07:00
memory-failure.c hwpoison: fix race with changing page during offlining 2014-08-06 18:01:19 -07:00
memory.c arm64,ia64,ppc,s390,sh,tile,um,x86,mm: remove default gate area 2014-08-08 15:57:27 -07:00
memory_hotplug.c memory-hotplug: add zone_for_memory() for selecting zone for new memory 2014-08-06 18:01:21 -07:00
mempolicy.c
mempool.c
migrate.c mm: memcontrol: rewrite uncharge API 2014-08-08 15:57:17 -07:00
mincore.c
mlock.c mm: describe mmap_sem rules for __lock_page_or_retry() and callers 2014-08-06 18:01:20 -07:00
mm_init.c
mmap.c mm: allow drivers to prevent new writable mappings 2014-08-08 15:57:31 -07:00
mmu_context.c
mmu_notifier.c mmu_notifier: add call_srcu and sync function for listener to delay call and sync 2014-08-06 18:01:22 -07:00
mmzone.c
mprotect.c
mremap.c
msync.c
nobootmem.c
nommu.c arm64,ia64,ppc,s390,sh,tile,um,x86,mm: remove default gate area 2014-08-08 15:57:27 -07:00
oom_kill.c mm, oom: remove unnecessary exit_state check 2014-08-06 18:01:21 -07:00
page-writeback.c mm, writeback: prevent race when calculating dirty limits 2014-08-06 18:01:21 -07:00
page_alloc.c mm, thp: restructure thp avoidance of light synchronous migration 2014-08-06 18:01:21 -07:00
page_cgroup.c
page_io.c
page_isolation.c
pagewalk.c
percpu-km.c percpu: restructure locking 2014-09-02 14:46:02 -04:00
percpu-vm.c percpu: move region iterations out of pcpu_[de]populate_chunk() 2014-09-02 14:46:02 -04:00
percpu.c percpu: restructure locking 2014-09-02 14:46:02 -04:00
pgtable-generic.c
process_vm_access.c
quicklist.c
readahead.c
rmap.c mm: memcontrol: rewrite uncharge API 2014-08-08 15:57:17 -07:00
shmem.c Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs 2014-08-11 11:44:11 -07:00
slab.c Revert "slab: remove BAD_ALIEN_MAGIC" 2014-08-08 15:57:17 -07:00
slab.h
slab_common.c
slob.c
slub.c slub: remove kmemcg id from create_unique_id 2014-08-06 18:01:21 -07:00
sparse-vmemmap.c
sparse.c
swap.c mm: memcontrol: use page lists for uncharge batching 2014-08-08 15:57:18 -07:00
swap_state.c mm: allow drivers to prevent new writable mappings 2014-08-08 15:57:31 -07:00
swapfile.c mm: memcontrol: rewrite uncharge API 2014-08-08 15:57:17 -07:00
truncate.c mm: memcontrol: rewrite uncharge API 2014-08-08 15:57:17 -07:00
util.c vm_is_stack: use for_each_thread() rather then buggy while_each_thread() 2014-08-08 15:57:17 -07:00
vmacache.c
vmalloc.c mm/vmalloc.c: clean up map_vm_area third argument 2014-08-06 18:01:19 -07:00
vmpressure.c
vmscan.c mm: memcontrol: use page lists for uncharge batching 2014-08-08 15:57:18 -07:00
vmstat.c mm: vmscan: only update per-cpu thresholds for online CPU 2014-08-06 18:01:20 -07:00
workingset.c
zbud.c mm/zpool: zbud/zsmalloc implement zpool 2014-08-06 18:01:23 -07:00
zpool.c mm/zpool: implement common zpool api to zbud/zsmalloc 2014-08-06 18:01:23 -07:00
zsmalloc.c mm/zpool: zbud/zsmalloc implement zpool 2014-08-06 18:01:23 -07:00
zswap.c mm/zswap.c: add __init to zswap_entry_cache_destroy() 2014-08-08 15:57:18 -07:00