mm/writeback: Add folio_mark_dirty()

Reimplement set_page_dirty() as a wrapper around folio_mark_dirty().
There is no change to filesystems as they were already being called
with the compound_head of the page being marked dirty.  We avoid
several calls to compound_head(), both statically (by using
folio_test_dirty() instead of PageDirty()) and dynamically (by
calling folio_mapping() instead of page_mapping()).

Also return bool instead of int to show the range of values actually
returned, and add kernel-doc.
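
To illustrate the static savings, a sketch of a converted call site
(the functions below are hypothetical, not part of this commit): each
Page*() call repeats the compound_head() lookup, while a caller that
resolves the folio once with page_folio() pays that cost a single time.

	/* Hypothetical caller, before: two hidden compound_head() calls. */
	static void example_dirty_old(struct page *page)
	{
		if (!PageDirty(page))
			set_page_dirty(page);
	}

	/* After: resolve the folio once; the folio calls are direct. */
	static void example_dirty_new(struct page *page)
	{
		struct folio *folio = page_folio(page);

		if (!folio_test_dirty(folio))
			folio_mark_dirty(folio);
	}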

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: David Howells <dhowells@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
commit b5e84594ca
parent f143f1ea5a
Author: Matthew Wilcox (Oracle)
Date:   2021-04-26 23:53:10 -04:00
3 changed files with 27 additions and 17 deletions

--- a/include/linux/mm.h
+++ b/include/linux/mm.h

@@ -2008,7 +2008,8 @@ int redirty_page_for_writepage(struct writeback_control *wbc,
 			struct page *page);
 void account_page_cleaned(struct page *page, struct address_space *mapping,
 			  struct bdi_writeback *wb);
-int set_page_dirty(struct page *page);
+bool folio_mark_dirty(struct folio *folio);
+bool set_page_dirty(struct page *page);
 int set_page_dirty_lock(struct page *page);
 void __cancel_dirty_page(struct page *page);
 static inline void cancel_dirty_page(struct page *page)

--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c

@@ -77,3 +77,9 @@ bool set_page_writeback(struct page *page)
 	return folio_start_writeback(page_folio(page));
 }
 EXPORT_SYMBOL(set_page_writeback);
+
+bool set_page_dirty(struct page *page)
+{
+	return folio_mark_dirty(page_folio(page));
+}
+EXPORT_SYMBOL(set_page_dirty);
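
The wrapper above is the whole compatibility story: a legacy caller,
even one holding a tail page of a compound page, behaves as before,
because page_folio() resolves to the head page's folio. A minimal
sketch (the caller is hypothetical):

	/* Hypothetical unconverted caller: unchanged source, same behaviour. */
	static void legacy_mark_dirty(struct page *page)
	{
		/* Now routed through folio_mark_dirty(page_folio(page)). */
		set_page_dirty(page);
	}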

--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c

@@ -2581,18 +2581,21 @@ int redirty_page_for_writepage(struct writeback_control *wbc, struct page *page)
 }
 EXPORT_SYMBOL(redirty_page_for_writepage);
 
-/*
- * Dirty a page.
+/**
+ * folio_mark_dirty - Mark a folio as being modified.
+ * @folio: The folio.
  *
- * For pages with a mapping this should be done under the page lock for the
- * benefit of asynchronous memory errors who prefer a consistent dirty state.
- * This rule can be broken in some special cases, but should be better not to.
+ * For folios with a mapping this should be done under the page lock
+ * for the benefit of asynchronous memory errors who prefer a consistent
+ * dirty state. This rule can be broken in some special cases,
+ * but should be better not to.
+ *
+ * Return: True if the folio was newly dirtied, false if it was already dirty.
  */
-int set_page_dirty(struct page *page)
+bool folio_mark_dirty(struct folio *folio)
 {
-	struct address_space *mapping = page_mapping(page);
+	struct address_space *mapping = folio_mapping(folio);
 
-	page = compound_head(page);
 	if (likely(mapping)) {
 		/*
 		 * readahead/lru_deactivate_page could remain
@@ -2604,17 +2607,17 @@ int set_page_dirty(struct page *page)
 		 * it will confuse readahead and make it restart the size rampup
 		 * process. But it's a trivial problem.
 		 */
-		if (PageReclaim(page))
-			ClearPageReclaim(page);
-		return mapping->a_ops->set_page_dirty(page);
+		if (folio_test_reclaim(folio))
+			folio_clear_reclaim(folio);
+		return mapping->a_ops->set_page_dirty(&folio->page);
 	}
-	if (!PageDirty(page)) {
-		if (!TestSetPageDirty(page))
-			return 1;
+	if (!folio_test_dirty(folio)) {
+		if (!folio_test_set_dirty(folio))
+			return true;
 	}
-	return 0;
+	return false;
 }
-EXPORT_SYMBOL(set_page_dirty);
+EXPORT_SYMBOL(folio_mark_dirty);
 
 /*
  * set_page_dirty() is racy if the caller has no reference against
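
To make the locking rule in the new kernel-doc concrete, a sketch of a
typical call site under the stated contract (the function is
hypothetical; folio_lock()/folio_unlock() are the folio counterparts of
lock_page()/unlock_page() from the same series):

	/* Hypothetical write path: dirty a mapped folio under its lock. */
	static void example_write(struct folio *folio)
	{
		folio_lock(folio);
		/* ... modify the folio contents ... */
		if (folio_mark_dirty(folio))
			pr_debug("folio was newly dirtied\n");
		folio_unlock(folio);
	}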