linux/fs/pstore/pmsg.c

// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright 2014 Google, Inc.
 */
#include <linux/cdev.h>
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/uaccess.h>
#include "internal.h"
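
/*
 * All pmsg writers funnel through this single lock. A plain mutex (rather
 * than an rt_mutex) is used so heavily contended writers can take the
 * optimistic-spin fast path instead of piling up in a sleeping slow path.
 */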
static DEFINE_MUTEX(pmsg_lock);
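
/* write() handler for /dev/pmsg0: append one userspace record to pstore. */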
static ssize_t write_pmsg(struct file *file, const char __user *buf,
			  size_t count, loff_t *ppos)
{
	struct pstore_record record;
	int ret;

	if (!count)
		return 0;
	pstore_record_init(&record, psinfo);
	record.type = PSTORE_TYPE_PMSG;
	record.size = count;

	/* check outside lock, page in any data. write_user also checks */
	if (!access_ok(buf, count))
		return -EFAULT;

	mutex_lock(&pmsg_lock);
	ret = psinfo->write_user(&record, buf);
	mutex_unlock(&pmsg_lock);
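	/* write_user() returns 0 on success; report all bytes consumed. */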
	return ret ? ret : count;
}

static const struct file_operations pmsg_fops = {
	.owner		= THIS_MODULE,
	.llseek		= noop_llseek,
	.write		= write_pmsg,
};

static struct class *pmsg_class;
static int pmsg_major;
#define PMSG_NAME "pmsg"
#undef pr_fmt
#define pr_fmt(fmt) PMSG_NAME ": " fmt
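
/* /dev/pmsg0 is write-only (0220); saved records are read back via pstore. */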
static char *pmsg_devnode(const struct device *dev, umode_t *mode)
{
	if (mode)
		*mode = 0220;
	return NULL;
}
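
/* Register a dynamically-numbered char device and create /dev/pmsg0. */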
void pstore_register_pmsg(void)
{
	struct device *pmsg_device;

	pmsg_major = register_chrdev(0, PMSG_NAME, &pmsg_fops);
	if (pmsg_major < 0) {
		pr_err("register_chrdev failed\n");
		goto err;
	}

	pmsg_class = class_create(PMSG_NAME);
	if (IS_ERR(pmsg_class)) {
		pr_err("device class file already in use\n");
		goto err_class;
	}
	pmsg_class->devnode = pmsg_devnode;

	pmsg_device = device_create(pmsg_class, NULL, MKDEV(pmsg_major, 0),
				    NULL, "%s%d", PMSG_NAME, 0);
	if (IS_ERR(pmsg_device)) {
		pr_err("failed to create device\n");
		goto err_device;
	}
	return;

err_device:
	class_destroy(pmsg_class);
err_class:
	unregister_chrdev(pmsg_major, PMSG_NAME);
err:
	return;
}

void pstore_unregister_pmsg(void)
{
	device_destroy(pmsg_class, MKDEV(pmsg_major, 0));
	class_destroy(pmsg_class);
	unregister_chrdev(pmsg_major, PMSG_NAME);
}