linux/rust/kernel/alloc.rs
Miguel Ojeda 00280272a0 rust: kernel: remove redundant imports
Rust's `unused_imports` lint covers both unused and redundant imports.
In the upcoming 1.78.0, the lint detects more cases of redundant imports
[1], e.g.:

    error: the item `bindings` is imported redundantly
      --> rust/kernel/print.rs:38:9
       |
    38 |     use crate::bindings;
       |         ^^^^^^^^^^^^^^^ the item `bindings` is already defined by prelude

Most cases are `use crate::bindings`, plus a few other items like `Box`.
Thus clean them up.

Note that, in the `bindings` case, the message "defined by prelude"
above means the extern prelude, i.e. the `--extern` flags we pass.

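The cleanup is mechanical: drop the local `use` and keep the `bindings::`
paths as-is, e.g. (an illustrative sketch, not a hunk from the actual patch):

    -use crate::bindings;
     ...
     // Still resolves via the extern prelude (`--extern bindings`):
     let gfp = bindings::GFP_KERNEL;
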
Link: https://github.com/rust-lang/rust/pull/117772 [1]
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20240401212303.537355-3-ojeda@kernel.org
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
2024-05-05 19:22:25 +02:00

// SPDX-License-Identifier: GPL-2.0

//! Extensions to the [`alloc`] crate.

#[cfg(not(test))]
#[cfg(not(testlib))]
mod allocator;
pub mod box_ext;
pub mod vec_ext;

/// Indicates an allocation error.
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
pub struct AllocError;

/// Flags to be used when allocating memory.
///
/// They can be combined with the operators `|`, `&`, and `!`.
///
/// Values can be used from the [`flags`] module.
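///
/// # Examples
///
/// A minimal sketch of combining flag values (the constants come from the
/// [`flags`] module below):
///
/// ```
/// use kernel::alloc::flags;
///
/// // Request a zeroed, sleepable allocation from the kernel pool.
/// let f = flags::GFP_KERNEL | flags::__GFP_ZERO;
///
/// // Masking and negation are available as well.
/// let _not_zeroed = f & !flags::__GFP_ZERO;
/// ```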
#[derive(Clone, Copy)]
pub struct Flags(u32);

impl core::ops::BitOr for Flags {
    type Output = Self;
    fn bitor(self, rhs: Self) -> Self::Output {
        Self(self.0 | rhs.0)
    }
}

impl core::ops::BitAnd for Flags {
    type Output = Self;
    fn bitand(self, rhs: Self) -> Self::Output {
        Self(self.0 & rhs.0)
    }
}

impl core::ops::Not for Flags {
    type Output = Self;
    fn not(self) -> Self::Output {
        Self(!self.0)
    }
}

/// Allocation flags.
///
/// These are meant to be used in functions that can allocate memory.
pub mod flags {
    use super::Flags;

    /// Zeroes out the allocated memory.
    ///
    /// This is normally or'd with other flags.
    pub const __GFP_ZERO: Flags = Flags(bindings::__GFP_ZERO);

    /// Users can not sleep and need the allocation to succeed.
    ///
    /// A lower watermark is applied to allow access to "atomic reserves". The current
    /// implementation doesn't support NMI and a few other strict non-preemptive contexts (e.g.
    /// raw_spin_lock). The same applies to [`GFP_NOWAIT`].
    pub const GFP_ATOMIC: Flags = Flags(bindings::GFP_ATOMIC);

    /// Typical for kernel-internal allocations. The caller requires ZONE_NORMAL or a lower zone
    /// for direct access but can direct reclaim.
    pub const GFP_KERNEL: Flags = Flags(bindings::GFP_KERNEL);

    /// The same as [`GFP_KERNEL`], except the allocation is accounted to kmemcg.
    pub const GFP_KERNEL_ACCOUNT: Flags = Flags(bindings::GFP_KERNEL_ACCOUNT);

    /// For kernel allocations that should not stall for direct reclaim, start physical IO or
    /// use any filesystem callback. It is very likely to fail to allocate memory, even for very
    /// small allocations.
    pub const GFP_NOWAIT: Flags = Flags(bindings::GFP_NOWAIT);
}