Alexandre Ghiti 54d7431af7
riscv: Add support for BATCHED_UNMAP_TLB_FLUSH
Allow deferring the flushing of the TLB when unmapping pages, which
reduces the number of IPIs and the number of sfence.vma instructions.
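
For illustration, below is a rough sketch of the arch hooks the generic
batched-unmap code calls. Only the hook names and types come from the
generic interface; the struct layout, the mm_cpumask()-based bookkeeping
and the flush_tlb_deferred() helper are assumptions made for the sketch,
not a copy of the actual patch:

/* Sketch only -- an assumed, simplified implementation of the hooks. */
struct arch_tlbflush_unmap_batch {
        struct cpumask cpumask;         /* CPUs that may cache stale entries */
};

static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
{
        /* Always batch here; a real port may add conditions. */
        return true;
}

static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
                                             struct mm_struct *mm,
                                             unsigned long uaddr)
{
        /* Record which CPUs need a flush instead of sending an IPI now. */
        cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
}

static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
        /*
         * flush_tlb_deferred() is a hypothetical helper standing in for a
         * single flush (sfence.vma and/or IPI) that covers every CPU in the
         * mask, so all the deferred unmaps are handled in one go.
         */
        flush_tlb_deferred(&batch->cpumask);
        cpumask_clear(&batch->cpumask);
}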

The microbenchmark used in commit 43b3dfdd04 ("arm64: support
batched/deferred tlb shootdown during page reclamation/migration"),
made multithreaded to force the use of IPIs, shows a good performance
improvement on all platforms:

* Unmatched: ~34%
* TH1520   : ~78%
* Qemu     : ~81%

In addition, perf on qemu reports a significant decrease in the time
spent handling IPIs:

Before:  68.17%  main     [kernel.kallsyms]            [k] __sbi_rfence_v02_call
After :   8.64%  main     [kernel.kallsyms]            [k] __sbi_rfence_v02_call

* Benchmark:

/*
 * The benchmark repeatedly faults a mapping back in and pushes it out
 * again with MADV_PAGEOUT, while pinned helper threads keep the mm live
 * on other cores so the unmaps require remote TLB shootdowns.
 *
 * The headers, _GNU_SOURCE and SIZE below were not part of the original
 * posting; the SIZE value is an assumption.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SIZE (128UL * 1024 * 1024)

/* Pin the calling thread to a single core. */
int stick_this_thread_to_core(int core_id) {
        int num_cores = sysconf(_SC_NPROCESSORS_ONLN);
        if (core_id < 0 || core_id >= num_cores)
                return EINVAL;

        cpu_set_t cpuset;
        CPU_ZERO(&cpuset);
        CPU_SET(core_id, &cpuset);

        pthread_t current_thread = pthread_self();
        return pthread_setaffinity_np(current_thread,
                                      sizeof(cpu_set_t), &cpuset);
}

/* Idle worker: only exists to keep the mm active on its core. */
static void *fn_thread (void *p_data)
{
        stick_this_thread_to_core((int)(intptr_t)p_data);

        while (1) {
                sleep(1);
        }

        return NULL;
}

int main()
{
        volatile unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        pthread_t threads[4];
        int ret;

        for (int i = 0; i < 4; ++i) {
                ret = pthread_create(&threads[i], NULL, fn_thread,
                                     (void *)(intptr_t)i);
                if (ret) {
                        printf("%s", strerror(ret));
                }
        }

        memset((void *)p, 0x88, SIZE);

        for (int k = 0; k < 10000; k++) {
                /* swap in */
                for (size_t i = 0; i < SIZE; i += 4096) {
                        (void)p[i];
                }

                /* swap out: unmapping triggers the (batched) TLB flushes */
                madvise((void *)p, SIZE, MADV_PAGEOUT);
        }

        for (int i = 0; i < 4; i++) {
                pthread_cancel(threads[i]);
        }

        for (int i = 0; i < 4; i++) {
                pthread_join(threads[i], NULL);
        }

        return 0;
}

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Jisheng Zhang <jszhang@kernel.org>
Tested-by: Jisheng Zhang <jszhang@kernel.org> # Tested on TH1520
Tested-by: Nam Cao <namcao@linutronix.de>
Link: https://lore.kernel.org/r/20240108193640.344929-1-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-01-11 08:01:53 -08:00