linux/mm
Linus Torvalds c2407cf7d2 mm: make wait_on_page_writeback() wait for multiple pending writebacks
Ever since commit 2a9127fcf2 ("mm: rewrite wait_on_page_bit_common()
logic") we've had some very occasional reports of BUG_ON(PageWriteback)
in write_cache_pages(), which we thought we already fixed in commit
073861ed77 ("mm: fix VM_BUG_ON(PageTail) and BUG_ON(PageWriteback)").

But syzbot just reported another one, even with that commit in place.

And it turns out that there's a simpler way to trigger the BUG_ON() than
the one Hugh found with page re-use.  It all boils down to the fact that
page writeback is ostensibly serialized by the page lock, but that
isn't actually true.

Yes, the people _setting_ writeback all do so under the page lock, but
the actual clearing of the bit - and waking up any waiters - happens
without any page lock.
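
For reference, the completion side looks roughly like this (a condensed
sketch of end_page_writeback() in mm/filemap.c, with the PageReclaim
handling, accounting and page refcounting elided) - note that no page
lock is taken or asserted anywhere in it:

  void end_page_writeback(struct page *page)
  {
          /* PageReclaim handling and page refcounting elided */
          if (!test_clear_page_writeback(page))
                  BUG();

          /* make the clear visible before waking any waiters */
          smp_mb__after_atomic();
          wake_up_page(page, PG_writeback);
  }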

This gives us this fairly simple race condition:

  CPU1 = end previous writeback
  CPU2 = start new writeback under page lock
  CPU3 = write_cache_pages()

  CPU1          CPU2            CPU3
  ----          ----            ----

  end_page_writeback()
    test_clear_page_writeback(page)
    ... delayed...

                lock_page();
                set_page_writeback()
                unlock_page()

                                lock_page()
                                wait_on_page_writeback();

    wake_up_page(page, PG_writeback);
    .. wakes up CPU3 ..

                                BUG_ON(PageWriteback(page));

where the BUG_ON() happens because we got woken up for the PG_writeback
bit because of the _previous_ writeback, but a new one had already been
started because the clearing of the bit wasn't actually atomic wrt the
actual wakeup or serialized by the page lock.
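
For context, the check that trips is in write_cache_pages() in
mm/page-writeback.c, which looks roughly like this (the surrounding
loop and the mapping/dirty re-checks are elided):

  lock_page(page);

  if (PageWriteback(page)) {
          if (wbc->sync_mode != WB_SYNC_NONE)
                  wait_on_page_writeback(page);
          else
                  goto continue_unlock;   /* label elided */
  }

  BUG_ON(PageWriteback(page));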

The reason this didn't use to happen is that the old logic for waiting
on a page bit would just loop if it ever saw the bit set again.
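
Very roughly - this is a simplified sketch, not the literal
pre-2a9127fcf2 wait_on_page_bit_common(), and the waitqueue setup and
signal handling are elided - the old wait behaved like:

  for (;;) {
          if (!test_bit(bit_nr, &page->flags))
                  break;          /* bit really is clear: done */
          io_schedule();          /* otherwise sleep until the next wakeup */
  }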

The nice proper fix would probably be to get rid of the whole "wait for
writeback to clear, and then set it" logic in the writeback path, and
replace it with an atomic "wait-to-set" (ie the same as we have for page
locking: we set the page lock bit with a single "lock_page()", not with
"wait for lock bit to clear and then set it").

However, our current model for writeback is that the waiting for the
writeback bit is done by the generic VFS code (ie write_cache_pages()),
but the actual setting of the writeback bit is done much later by the
filesystem ".writepages()" function.

IOW, to make the writeback bit have that same kind of "wait-to-set"
behavior as we have for page locking, we'd have to change our roughly
50 different writeback functions.  Painful.

Instead, just make "wait_on_page_writeback()" loop on the very unlikely
situation that the PG_writeback bit is still set, basically reinstating
the old behavior.  This is very non-optimal under contention, but since
we only ever set the bit under the page lock, that situation is
controlled.
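
The resulting wait_on_page_writeback() in mm/page-writeback.c then
boils down to turning the old "if" into a "while" (tracing call
omitted):

  void wait_on_page_writeback(struct page *page)
  {
          while (PageWriteback(page))
                  wait_on_page_bit(page, PG_writeback);
  }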

Reported-by: syzbot+2fc0712f8f8b8b8fa0ef@syzkaller.appspotmail.com
Fixes: 2a9127fcf2 ("mm: rewrite wait_on_page_bit_common() logic")
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-01-05 11:33:00 -08:00
kasan kasan: fix null pointer dereference in kasan_record_aux_stack 2020-12-29 15:36:49 -08:00
backing-dev.c
balloon_compaction.c
cleancache.c
cma.c
cma.h
cma_debug.c
compaction.c
debug.c
debug_page_ref.c
debug_vm_pgtable.c
dmapool.c
early_ioremap.c
fadvise.c
failslab.c
filemap.c mm/filemap: fix infinite loop in generic_file_buffered_read() 2020-12-18 13:37:04 -08:00
frame_vector.c
frontswap.c
gup.c
gup_test.c
gup_test.h
highmem.c
hmm.c
huge_memory.c
hugetlb.c mm/hugetlb: fix deadlock in hugetlb_cow error path 2020-12-29 15:36:49 -08:00
hugetlb_cgroup.c
hwpoison-inject.c
init-mm.c
internal.h
interval_tree.c
ioremap.c
Kconfig mm/Kconfig: fix spelling mistake "whats" -> "what's" 2020-12-19 11:25:41 -08:00
Kconfig.debug
khugepaged.c
kmemleak.c
ksm.c
list_lru.c
maccess.c
madvise.c
Makefile
mapping_dirty_helpers.c
memblock.c
memcontrol.c mm/memcontrol:rewrite mem_cgroup_page_lruvec() 2020-12-19 11:18:37 -08:00
memfd.c
memory-failure.c
memory.c mm: generalise COW SMC TLB flushing race comment 2020-12-29 15:36:49 -08:00
memory_hotplug.c mm: memmap defer init doesn't work as expected 2020-12-29 15:36:49 -08:00
mempolicy.c
mempool.c kasan, mm: rename kasan_poison_kfree 2020-12-22 12:55:09 -08:00
memremap.c
memtest.c
migrate.c
mincore.c
mlock.c
mm_init.c
mmap.c UAPI Changes: 2020-12-18 12:38:28 -08:00
mmap_lock.c
mmu_gather.c
mmu_notifier.c
mmzone.c
mprotect.c
mremap.c mm/mremap.c: fix extent calculation 2020-12-29 15:36:49 -08:00
msync.c
nommu.c
oom_kill.c
page-writeback.c mm: make wait_on_page_writeback() wait for multiple pending writebacks 2021-01-05 11:33:00 -08:00
page_alloc.c mm: memmap defer init doesn't work as expected 2020-12-29 15:36:49 -08:00
page_counter.c
page_ext.c
page_idle.c
page_io.c
page_isolation.c
page_owner.c
page_poison.c kasan, mm: reset tags when accessing metadata 2020-12-22 12:55:08 -08:00
page_reporting.c
page_reporting.h
page_vma_mapped.c
pagewalk.c
percpu-internal.h
percpu-km.c
percpu-stats.c
percpu-vm.c
percpu.c
pgalloc-track.h
pgtable-generic.c
process_vm_access.c
ptdump.c kasan, arm64: expand CONFIG_KASAN checks 2020-12-22 12:55:08 -08:00
readahead.c
rmap.c
rodata_test.c
shmem.c
shuffle.c
shuffle.h
slab.c
slab.h
slab_common.c kasan, mm: allow cache merging with no metadata 2020-12-22 12:55:09 -08:00
slob.c
slub.c mm: slub: call account_slab_page() after slab page initialization 2020-12-29 15:36:49 -08:00
sparse-vmemmap.c
sparse.c
swap.c
swap_cgroup.c
swap_slots.c
swap_state.c
swapfile.c
truncate.c
usercopy.c
userfaultfd.c
util.c
vmacache.c
vmalloc.c
vmpressure.c
vmscan.c
vmstat.c
workingset.c
z3fold.c
zbud.c
zpool.c
zsmalloc.c
zswap.c