linux/arch/riscv/include/asm/kfence.h
Alexandre Ghiti 25abe0db92
riscv: Fix kfence now that the linear mapping can be backed by PUD/P4D/PGD
The RISC-V kfence implementation used to rely on the fact that the linear
mapping was backed by at most PMD hugepages, which is no longer true since
commit 3335068f87 ("riscv: Use PUD/P4D/PGD pages for the linear
mapping").

Instead of splitting PUD/P4D/PGD mappings afterwards, directly map the
kfence pool region using PTE mappings by allocating this region before
setup_vm_final().
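
The shape of that change, as a minimal sketch rather than the actual patch
(kfence_pool_early_alloc() and the way the region is later handed to the
kfence core are illustrative assumptions):

	/* Sketch only; needs <linux/kfence.h> and <linux/memblock.h>. */
	#ifdef CONFIG_KFENCE
	/*
	 * Reserve a page-aligned region for the kfence pool before
	 * setup_vm_final() builds the final linear mapping, so that this
	 * range can be covered by PTEs instead of PUD/P4D/PGD leaves.
	 */
	static phys_addr_t __init kfence_pool_early_alloc(void)
	{
		phys_addr_t pool;

		pool = memblock_phys_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
		if (!pool)
			panic("%s: failed to allocate kfence pool\n", __func__);

		return pool;
	}
	#endif

setup_vm_final() would then map [pool, pool + KFENCE_POOL_SIZE) at PAGE_SIZE
granularity instead of picking the largest mapping size for that range.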

Reported-by: syzbot+a74d57bddabbedd75135@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=a74d57bddabbedd75135
Fixes: 3335068f87 ("riscv: Use PUD/P4D/PGD pages for the linear mapping")
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20230606130444.25090-1-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-06-07 07:13:53 -07:00

/* SPDX-License-Identifier: GPL-2.0 */

#ifndef _ASM_RISCV_KFENCE_H
#define _ASM_RISCV_KFENCE_H

#include <linux/kfence.h>
#include <linux/pfn.h>
#include <asm-generic/pgalloc.h>
#include <asm/pgtable.h>

static inline bool arch_kfence_init_pool(void)
{
	/*
	 * Nothing to split here: the pool is allocated before
	 * setup_vm_final() and is therefore already backed by PTEs.
	 */
	return true;
}

static inline bool kfence_protect_page(unsigned long addr, bool protect)
{
	pte_t *pte = virt_to_kpte(addr);

	/* Toggle the present bit on the page's kernel PTE. */
	if (protect)
		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
	else
		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));

	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	return true;
}

#endif /* _ASM_RISCV_KFENCE_H */
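
For context on how the toggle in kfence_protect_page() is consumed (generic
kfence behaviour, not part of this patch): once _PAGE_PRESENT is cleared, any
access to the guard page faults, and the kernel fault path gives kfence a
chance to claim the fault. Roughly, assuming the usual arch wiring rather than
quoting the riscv handler:

	/* Sketch of the generic hook; the surrounding fault path is assumed. */
	static void example_kernel_fault_path(struct pt_regs *regs,
					      unsigned long addr, bool is_write)
	{
		/*
		 * kfence_handle_page_fault() reports the access and returns
		 * true if addr lies in the kfence pool; otherwise fall through
		 * to the normal kernel-fault handling.
		 */
		if (kfence_handle_page_fault(addr, is_write, regs))
			return;

		/* ... regular kernel-fault / oops path ... */
	}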