man: IO -> I/O; I/Os -> I/O operations again

Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #13116

@@ -880,7 +880,7 @@ is set, then the deadman behavior is invoked as described by
.Sy zfs_deadman_failmode .
By default, the deadman is enabled and set to
.Sy wait
which results in "hung" I/Os only being logged.
which results in "hung" I/O operations only being logged.
The deadman is automatically disabled when a pool gets suspended.
.
.It Sy zfs_deadman_failmode Ns = Ns Sy wait Pq charp
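As a point of reference, both deadman tunables can be inspected and changed at runtime through the Linux module-parameter interface; a minimal sketch, run as root:

    # Check whether the deadman is enabled and what it does when it fires.
    cat /sys/module/zfs/parameters/zfs_deadman_enabled
    cat /sys/module/zfs/parameters/zfs_deadman_failmode

    # "wait" only logs hung I/O operations, as described above.
    echo wait > /sys/module/zfs/parameters/zfs_deadman_failmode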
@@ -964,7 +964,7 @@ will result in objects waiting when there is not actually contention on the
same object.
.
.It Sy zfs_slow_io_events_per_second Ns = Ns Sy 20 Ns /s Pq int
-Rate limit delay and deadman zevents (which report slow I/Os) to this many per
+Rate limit delay and deadman zevents (which report slow I/O operations) to this many per
second.
.
.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1GB Pc Pq ulong
@@ -990,7 +990,7 @@ This tunable is important because it involves a trade-off between import
time after an unclean export and the frequency of flushing metaslabs.
The higher this number is, the more log blocks we allow when the pool is
active which means that we flush metaslabs less often and thus decrease
-the number of I/Os for spacemap updates per TXG.
+the number of I/O operations for spacemap updates per TXG.
At the same time though, that means that in the event of an unclean export,
there will be more log spacemap blocks for us to read, inducing overhead
in the import time of the pool.
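To illustrate the trade-off, a hedged example of doubling the limit, which favors fewer spacemap I/O operations per TXG at the cost of a longer import after an unclean export (run as root):

    # Allow 2 GiB instead of the default 1 GiB of log spacemap blocks.
    echo 2147483648 > /sys/module/zfs/parameters/zfs_unflushed_max_mem_amt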
@@ -1683,8 +1683,8 @@ This should only be used as a last resort,
as it typically results in leaked space, or worse.
.
.It Sy zfs_removal_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
-Ignore hard IO errors during device removal.
-When set, if a device encounters a hard IO error during the removal process
+Ignore hard I/O errors during device removal.
+When set, if a device encounters a hard I/O error during the removal process
the removal will not be cancelled.
This can result in a normally recoverable block becoming permanently damaged
and is hence not recommended.
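For completeness, a sketch of enabling this last-resort behavior (run as root; note the warning above about permanent damage):

    # Let an in-progress device removal continue past hard I/O errors.
    echo 1 > /sys/module/zfs/parameters/zfs_removal_ignore_errors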
@@ -1948,7 +1948,7 @@ Historical statistics for this many latest TXGs will be available in
Flush dirty data to disk at least every this many seconds (maximum TXG duration).
.
.It Sy zfs_vdev_aggregate_trim Ns = Ns Sy 0 Ns | Ns 1 Pq int
-Allow TRIM I/Os to be aggregated.
+Allow TRIM I/O operations to be aggregated.
This is normally not helpful because the extents to be trimmed
will already have been aggregated by the metaslab.
This option is provided for debugging and performance analysis.
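A possible debugging session, assuming a pool named "tank" (the name is only an example): enable aggregation, then watch the request-size histograms to see whether TRIM I/O operations actually coalesce:

    # Run as root; revert to 0 when the experiment is done.
    echo 1 > /sys/module/zfs/parameters/zfs_vdev_aggregate_trim
    zpool iostat -r tank 5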


@@ -193,10 +193,10 @@ Calculating the exact requirement depends heavily
on the type of data stored in the pool.
.Pp
Enabling deduplication on an improperly-designed system can result in
-performance issues (slow IO and administrative operations).
+performance issues (slow I/O and administrative operations).
It can potentially lead to problems importing a pool due to memory exhaustion.
Deduplication can consume significant processing power (CPU) and memory as well
-as generate additional disk IO.
+as generate additional disk I/O.
.Pp
Before creating a pool with deduplication enabled, ensure that you have planned
your hardware requirements appropriately and implemented appropriate recovery

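As a rough back-of-the-envelope illustration of the memory requirement, using the commonly cited figure of about 320 bytes of in-core dedup-table state per unique block (an approximation, not a guarantee; the pool name "tank" is an example):

    # 1 TiB of unique data in 128 KiB records = 8388608 blocks;
    # at ~320 bytes per DDT entry that is about 2.5 GiB of RAM.
    echo $((1099511627776 / 131072 * 320 / 1048576))  # prints 2560 (MiB)

    # Inspect the actual dedup table of an existing pool.
    zdb -DD tank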

@@ -1828,7 +1828,8 @@ Although under Linux the
.Xr getxattr 2
and
.Xr setxattr 2
-system calls limit the maximum size to 64K.
+system calls limit the maximum size to
+.Sy 64K .
This is the most compatible
style of extended attribute and is supported by all ZFS implementations.
.Pp
@@ -1836,10 +1837,12 @@ System attribute based xattrs can be enabled by setting the value to
.Sy sa .
The key advantage of this type of xattr is improved performance.
Storing extended attributes as system attributes
-significantly decreases the amount of disk IO required.
-Up to 64K of data may be stored per-file in the space reserved for system attributes.
+significantly decreases the amount of disk I/O required.
+Up to
+.Sy 64K
+of data may be stored per-file in the space reserved for system attributes.
If there is not enough space available for an extended attribute
-then it will be automatically written as a directory based xattr.
+then it will be automatically written as a directory-based xattr.
System attribute based extended attributes are not accessible
on platforms which do not support the
.Sy xattr Ns = Ns Sy sa

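A short sketch of switching a dataset to system-attribute xattrs; the dataset name is an example, and the change applies to xattrs written after the property is set:

    # Store new extended attributes as system attributes.
    zfs set xattr=sa tank/home
    zfs get xattr tank/home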

@@ -25,7 +25,7 @@
.Nm
.Op Fl AbcdDFGhikLMNPsvXYy
.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns
-.Op Fl I Ar inflight I/Os
+.Op Fl I Ar inflight-I/O-ops
.Oo Fl o Ar var Ns = Ns Ar value Oc Ns
.Op Fl t Ar txg
.Op Fl U Ar cache
@@ -404,8 +404,8 @@ transactions.
Dump the contents of the zfs_dbgmsg buffer before exiting
.Nm .
zfs_dbgmsg is a buffer used by ZFS to dump advanced debug information.
-.It Fl I , -inflight Ns = Ns Ar inflight I/Os
-Limit the number of outstanding checksum I/Os to the specified value.
+.It Fl I , -inflight Ns = Ns Ar inflight-I/O-ops
+Limit the number of outstanding checksum I/O operations to the specified value.
The default value is 200.
This option affects the performance of the
.Fl c

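For example, a checksum-verification run with a higher inflight limit than the default 200 might look like this (the pool name "tank" is an example):

    # Verify checksums with up to 500 outstanding checksum I/O operations.
    zdb -c -I 500 tank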

@@ -71,18 +71,18 @@ Force a vdev into the DEGRADED or FAULTED state.
.Fl D Ar latency : Ns Ar lanes
.Ar pool
.Xc
-Add an artificial delay to IO requests on a particular
+Add an artificial delay to I/O requests on a particular
device, such that the requests take a minimum of
.Ar latency
milliseconds to complete.
Each delay has an associated number of
.Ar lanes
which defines the number of concurrent
-IO requests that can be processed.
+I/O requests that can be processed.
.Pp
For example, with a single lane delay of 10 ms
.No (\& Ns Fl D Ar 10 : Ns Ar 1 ) ,
-the device will only be able to service a single IO request
+the device will only be able to service a single I/O request
at a time with each request taking 10 ms to complete.
So, if only a single request is submitted every 10 ms, the
average latency will be 10 ms; but if more than one request

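A possible session, assuming a pool "tank" with a disk "sda" (both names are examples; the -d vdev selector follows the usual zinject usage):

    # Delay every I/O request to sda by at least 10 ms, one lane wide.
    zinject -d sda -D 10:1 tank
    zinject          # list active injection records
    zinject -c all   # clear all injected faults when done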

@@ -66,7 +66,7 @@ command initiates the removal and returns, while the evacuation continues in
the background.
The removal progress can be monitored with
.Nm zpool Cm status .
-If an IO error is encountered during the removal process it will be cancelled.
+If an I/O error is encountered during the removal process it will be cancelled.
The
.Sy device_removal
feature flag must be enabled to remove a top-level vdev, see

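A typical removal session, with hypothetical pool and vdev names:

    # Start evacuating a top-level vdev; the command returns immediately.
    zpool remove tank mirror-1
    # Watch the evacuation progress.
    zpool status tank
    # Stop a removal that has not yet completed.
    zpool remove -s tank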

@@ -93,12 +93,12 @@ and referenced
.Pq logically referenced in the pool
block counts and sizes by reference count.
.It Fl s
-Display the number of leaf vdev slow IOs.
-This is the number of IOs that
-didn't complete in
+Display the number of leaf vdev slow I/O operations.
+This is the number of I/O operations that didn't complete in
.Sy zio_slow_io_ms
-milliseconds (default 30 seconds).
-This does not necessarily mean the IOs failed to complete, just took an
+milliseconds
+.Pq Sy 30000 No by default .
+This does not necessarily mean the I/O operations failed to complete, just took an
unreasonably long amount of time.
This may indicate a problem with the underlying storage.
.It Fl t

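For instance, with a hypothetical pool "tank", the slow I/O counts and the threshold that defines them can be checked like this:

    # Per-vdev slow I/O counts.
    zpool status -s tank
    # The zio_slow_io_ms threshold, in milliseconds.
    cat /sys/module/zfs/parameters/zio_slow_io_ms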

@@ -148,8 +148,8 @@ for the specified storage pool(s).
.It Xr zpool-status 8
Displays the detailed health status for the given pools.
.It Xr zpool-iostat 8
-Displays logical I/O statistics for the given pools/vdevs. Physical I/Os may
-be observed via
+Displays logical I/O statistics for the given pools/vdevs.
+Physical I/O operations may be observed via
.Xr iostat 1 .
.It Xr zpool-events 8
Lists all recent events generated by the ZFS kernel modules.
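To see the distinction in practice (the pool name and intervals are examples):

    # Logical I/O statistics as accounted by ZFS, per vdev, every 5 s.
    zpool iostat -v tank 5
    # Physical I/O operations reaching the devices, via sysstat's iostat.
    iostat -x 5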