Oops, last manual commit was to -stable, should have been to -current.

No biggy, the code MFC to stable will catch up to the docs in a week.

X-MFC after:    -7 days
This commit is contained in:
Matthew Dillon 2002-08-17 20:44:24 +00:00
parent d8d0cebecd
commit e1583529ee
Notes: svn2git 2020-12-20 02:59:44 +00:00
svn path=/head/; revision=102034
2 changed files with 53 additions and 0 deletions


@@ -300,6 +300,36 @@ no reseeding will occur.
Reseeding should not be necessary, and will break
.Dv TIME_WAIT
recycling for a few minutes.
.It tcp.inflight_enable
Enable
.Tn TCP
bandwidth delay product limiting. An attempt will be made to calculate
the bandwidth delay product for each individual TCP connection and limit
the amount of inflight data being transmitted to avoid building up
unnecessary packets in the network. This option is recommended if you
are serving a lot of data over connections with high bandwidth-delay
products, such as modems, GigE links, and fast long-haul WANs, and/or
you have configured your machine to accommodate large TCP windows. In such
situations, without this option, you may experience high interactive
latencies or packet loss due to the overloading of intermediate routers
and switches. Note that bandwidth delay product limiting only affects
the transmit side of a TCP connection.
.It tcp.inflight_debug
Enable debugging for the bandwidth delay product algorithm. This may
default to on (1), so if you enable the algorithm you should probably also
disable debugging by setting this variable to 0.
.It tcp.inflight_min
This puts a lower bound on the bandwidth delay product window, in bytes.
A value of 1024 is typically used for debugging. 6000-16000 is more typical
in a production installation. Setting this value too low may result in
slow ramp-up times for bursty connections. Setting this value too high
effectively disables the algorithm.
.It tcp.inflight_max
This puts an upper bound on the bandwidth delay product window, in bytes.
This value should not generally be modified but may be used to set a
global per-connection limit on queued data, potentially allowing you to
intentionally set a less than optimal limit to smooth data flow over a
network while still being able to specify huge internal TCP buffers.
.El
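Taken together, the knobs above might be set from
.Pa /etc/sysctl.conf
roughly as follows. The values shown are illustrative examples consistent
with the ranges discussed above, not recommendations from this page:

```
# Enable bandwidth delay product limiting (transmit side only)
net.inet.tcp.inflight_enable=1
# Turn debugging off for production use
net.inet.tcp.inflight_debug=0
# Floor for the inflight window, in bytes (example value)
net.inet.tcp.inflight_min=6144
```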
.Sh ERRORS
A socket operation may fail with one of the following errors returned:


@@ -522,6 +522,29 @@ In such environments, setting the sysctl to 0 may reduce the occurrence of
TCP session disconnections.
.Pp
The
.Va net.inet.tcp.inflight_enable
sysctl turns on bandwidth delay product limiting for all TCP connections.
The system will attempt to calculate the bandwidth delay product for each
connection and limit the amount of data queued to the network to just the
amount required to maintain optimum throughput. This feature is useful
if you are serving data over modems, GigE, or high speed WAN links (or
any other link with a high bandwidth*delay product), especially if you are
also using window scaling or have configured a large send window. If
you enable this option you should also be sure to set
.Va net.inet.tcp.inflight_debug
to 0 (disable debugging), and for production use setting
.Va net.inet.tcp.inflight_min
to at least 6144 may be beneficial. Note, however, that setting high
minimums may effectively disable bandwidth limiting depending on the link.
The limiting feature reduces the amount of data built up in intermediate
router and switch packet queues as well as reduces the amount of data built
up in the local host's interface queue. With fewer packets queued up,
interactive connections, especially over slow modems, will also be able
to operate with lower round trip times. However, note that this feature
only affects data transmission (uploading / server-side). It does not
affect data reception (downloading).
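The window the limiter targets is simply the link bandwidth multiplied by
the round-trip time, clamped between the
.Va inflight_min
and
.Va inflight_max
bounds. A minimal Python sketch of that arithmetic (the function name and
default bounds are illustrative, not the kernel's actual code):

```python
def inflight_window(bandwidth_bps, rtt_s, inflight_min=6144, inflight_max=1024 * 1024):
    """Return the bandwidth-delay product in bytes, clamped to [min, max].

    bandwidth_bps: link bandwidth in bits per second
    rtt_s:         round-trip time in seconds
    """
    bdp = int(bandwidth_bps / 8 * rtt_s)  # bytes needed in flight to fill the pipe
    return max(inflight_min, min(bdp, inflight_max))

# 56 kbit/s modem, 300 ms RTT: BDP is 2100 bytes, so the floor lifts it to 6144
print(inflight_window(56_000, 0.3))
# 1 Gbit/s link, 50 ms RTT: BDP is 6.25 MB, capped at the 1 MB example maximum
print(inflight_window(1_000_000_000, 0.05))
```

This illustrates the note above: a high minimum can swamp the computed BDP
on a slow link, effectively disabling the limiting.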
.Pp
The
.Va kern.ipc.somaxconn
sysctl limits the size of the listen queue for accepting new TCP connections.
The default value of 128 is typically too low for robust handling of new