vm_pageout_scan_inactive: take a lock break

In vm_pageout_scan_inactive, release the object lock when we go to
refill the scan batch queue, so that other threads have a chance to
acquire it.  This reduces the latency of access to the object while the
pagedaemon is processing many consecutive pages from a single object,
and in any case avoids holding the lock of the last touched object
across the unrelated work of the refill.

Reviewed by:	alc, markj (previous version)
Sponsored by:	Dell EMC Isilon
Differential Revision:	https://reviews.freebsd.org/D45288
commit a216e311a7 (parent d09ee08f10)
Author:	Ryan Libby
Date:	2024-05-24 08:52:58 -07:00

--- a/sys/vm/vm_pageout.c
+++ b/sys/vm/vm_pageout.c
@@ -1451,7 +1451,21 @@ vm_pageout_scan_inactive(struct vm_domain *vmd, int page_shortage)
 	pq = &vmd->vmd_pagequeues[PQ_INACTIVE];
 	vm_pagequeue_lock(pq);
 	vm_pageout_init_scan(&ss, pq, marker, NULL, pq->pq_cnt);
-	while (page_shortage > 0 && (m = vm_pageout_next(&ss, true)) != NULL) {
+	while (page_shortage > 0) {
+		/*
+		 * If we need to refill the scan batch queue, release any
+		 * optimistically held object lock.  This gives someone else a
+		 * chance to grab the lock, and also avoids holding it while we
+		 * do unrelated work.
+		 */
+		if (object != NULL && vm_batchqueue_empty(&ss.bq)) {
+			VM_OBJECT_WUNLOCK(object);
+			object = NULL;
+		}
+
+		m = vm_pageout_next(&ss, true);
+		if (m == NULL)
+			break;
 		KASSERT((m->flags & PG_MARKER) == 0,
 		    ("marker page %p was dequeued", m));

--- a/sys/vm/vm_pagequeue.h
+++ b/sys/vm/vm_pagequeue.h
@@ -354,6 +354,12 @@ vm_batchqueue_init(struct vm_batchqueue *bq)
 	bq->bq_cnt = 0;
 }
 
+static inline bool
+vm_batchqueue_empty(const struct vm_batchqueue *bq)
+{
+	return (bq->bq_cnt == 0);
+}
+
 static inline int
 vm_batchqueue_insert(struct vm_batchqueue *bq, vm_page_t m)
 {
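
The pattern in the first hunk generalizes beyond the pagedaemon: a
consumer that holds a lock while draining items from a batch can drop
the lock at each refill boundary to bound its hold time.  Below is a
minimal, self-contained C sketch of that lock-break idea using
pthreads; every name in it (struct batch, refill(), next_item,
object_lock, BATCH_SIZE) is an illustrative stand-in invented for the
sketch, not a kernel type or function.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define	BATCH_SIZE	4
#define	TOTAL_ITEMS	10

/* Stand-in for the object lock held while processing items. */
static pthread_mutex_t object_lock = PTHREAD_MUTEX_INITIALIZER;
/* Stand-in for the queue we draw items from. */
static int next_item;

struct batch {
	int	items[BATCH_SIZE];
	int	cnt;
};

/* Refill the batch from the item source; returns the number fetched. */
static int
refill(struct batch *b)
{
	b->cnt = 0;
	while (b->cnt < BATCH_SIZE && next_item < TOTAL_ITEMS)
		b->items[b->cnt++] = next_item++;
	return (b->cnt);
}

int
main(void)
{
	struct batch b = { .cnt = 0 };
	bool locked = false;

	for (;;) {
		/*
		 * Batch empty: drop the lock before the refill so that
		 * another thread has a chance to acquire it, and so that
		 * the refill itself runs without the lock held.
		 */
		if (b.cnt == 0) {
			if (locked) {
				pthread_mutex_unlock(&object_lock);
				locked = false;
			}
			if (refill(&b) == 0)
				break;
		}
		if (!locked) {
			pthread_mutex_lock(&object_lock);
			locked = true;
		}
		printf("processing item %d under the lock\n",
		    b.items[--b.cnt]);
	}
	if (locked)
		pthread_mutex_unlock(&object_lock);
	return (0);
}

As in the commit, the unlock happens at the batch boundary, before the
refill, so the lock is never held across the refill work and other
threads are not stalled behind queue maintenance.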