io_uring: don't raw spin unlock to match cq_lock

There is one newly added place where we lock the ring with io_cq_lock()
but unlocking is hand coded, calling spin_unlock() directly. It's ugly
and troublesome in the long run. Make it consistent with the other
completion locking.
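
For illustration, a minimal sketch of the pattern this makes uniform;
the caller below is hypothetical, only io_cq_lock()/io_cq_unlock() and
the lock they wrap come from this code:

	static void example_post_aux_cqes(struct io_ring_ctx *ctx)
	{
		io_cq_lock(ctx);	/* takes ctx->completion_lock */
		/* ... post CQEs while holding the lock ... */
		io_cq_unlock(ctx);	/* matching helper, not a raw spin_unlock() */
	}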

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/4ca4f0564492b90214a190cd5b2a6c76522de138.1669821213.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
commit 618d653a34
parent 443e575506
Author: Pavel Begunkov, 2022-11-30 15:21:56 +00:00
Committer: Jens Axboe

 2 files changed, 6 insertions(+), 1 deletion(-)

io_uring/io_uring.c

@@ -860,7 +860,7 @@ bool io_aux_cqe(struct io_ring_ctx *ctx, bool defer, u64 user_data, s32 res, u32
 		io_cq_lock(ctx);
 		__io_flush_post_cqes(ctx);
 		/* no need to flush - flush is deferred */
-		spin_unlock(&ctx->completion_lock);
+		io_cq_unlock(ctx);
 	}
 
 	/* For defered completions this is not as strict as it is otherwise,

io_uring/io_uring.h

@@ -93,6 +93,11 @@ static inline void io_cq_lock(struct io_ring_ctx *ctx)
 	spin_lock(&ctx->completion_lock);
 }
 
+static inline void io_cq_unlock(struct io_ring_ctx *ctx)
+{
+	spin_unlock(&ctx->completion_lock);
+}
+
 void io_cq_unlock_post(struct io_ring_ctx *ctx);
 
 static inline struct io_uring_cqe *io_get_cqe_overflow(struct io_ring_ctx *ctx,
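
For context, a plain io_cq_unlock() is wanted here because the existing
io_cq_unlock_post() does more than drop the lock. A sketch of that
counterpart, with the body assumed from the io_uring code of this period
rather than taken from this diff:

	void io_cq_unlock_post(struct io_ring_ctx *ctx)
	{
		io_commit_cqring(ctx);			/* publish the CQ tail */
		spin_unlock(&ctx->completion_lock);
		io_cqring_wake(ctx);			/* wake CQ waiters (assumed body) */
	}

In the io_aux_cqe() hunk above the flush is deferred, so only the lock
needs dropping, and io_cq_unlock() keeps that exit path symmetric with
io_cq_lock().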