Remove old unmaintained docs (#5699)

- removes backend/{fs,xl} files.
- removes FreeBSD doc since we already have a FreeNAS doc.
- removes sets/ directory, which is redundant with the
  large-bucket directory.
- removes browser/ directory, which attempts
  to explain the browser RPC calls; since
  it's an internal API, we do not need to document it.
Harshavardhana 2018-03-23 11:37:04 -07:00 committed by Nitish Tiwari
parent 850caaa1be
commit 020cc8b669
13 changed files with 1 addition and 335 deletions


@@ -1,77 +0,0 @@
# Minio FreeBSD Quickstart Guide [![Slack](https://slack.minio.io/slack?type=svg)](https://slack.minio.io)
### Minio with ZFS backend - FreeBSD
This example assumes that you have FreeBSD 11.x running.
#### Start ZFS service
Enable ZFS so that it starts at boot:
```sh
sysrc zfs_enable="YES"
```
Start the ZFS service:
```sh
service zfs start
```
Create a 4 GB backing file at `/zfs` and attach it as a memory disk (`md0`):
```sh
dd if=/dev/zero of=/zfs bs=1M count=4000
mdconfig -a -t vnode -f /zfs
```
Create a ZFS pool on the memory disk:
```sh
zpool create minio-example /dev/md0
```
Verify that the pool is mounted:
```sh
df /minio-example
Filesystem 512-blocks Used Avail Capacity Mounted on
minio-example 7872440 38 7872402 0% /minio-example
```
Verify that it is writable:
```sh
touch /minio-example/testfile
ls -l /minio-example/testfile
-rw-r--r-- 1 root wheel 0 Apr 26 00:51 /minio-example/testfile
```
Now you have successfully created a ZFS pool. For further reading, please refer to the [ZFS Quickstart Guide](https://www.freebsd.org/doc/handbook/zfs-quickstart.html).
However, this pool is not taking advantage of any ZFS features. So let's create a ZFS filesystem on this pool with compression enabled. ZFS supports many compression algorithms: [`lzjb`, `gzip`, `zle`, `lz4`]. `lz4` is often the most performant algorithm in terms of compression ratio versus system overhead.
```sh
zfs create minio-example/compressed-objects
zfs set compression=lz4 minio-example/compressed-objects
```
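Optionally, confirm the compression setting and the achieved ratio; this is a quick check using standard ZFS properties:
```sh
zfs get compression,compressratio minio-example/compressed-objects
```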
To check whether your pools are healthy:
```sh
zpool status -x
all pools are healthy
```
#### Start Minio service
Install [Minio](https://minio.io) from the FreeBSD package repository:
```sh
pkg install minio
```
Enable Minio and configure it to use the ZFS volume mounted at `/minio-example/compressed-objects`:
```sh
sysrc minio_enable=yes
sysrc minio_disks=/minio-example/compressed-objects
```
Start Minio:
```sh
service minio start
```
Now you have Minio running on top of your ZFS backend, which transparently provides disk-level compression for your uploaded objects. Visit http://localhost:9000 to open the Minio Browser.
#### Stop Minio service
In case you wish to stop the Minio service:
```sh
service minio stop
```


@@ -1,8 +0,0 @@
## Backends
Minio currently implements two types of backends, FS and erasure-coded (XL), in both standalone and distributed deployments.

| Minio       | FS | Erasure | Stability |
|:-----------:|:--:|:-------:|:---------:|
| Standalone  | x  | x       | Stable    |
| Distributed | x  | x       | Stable    |
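The backend is selected automatically from how the server is started. A minimal sketch (paths are illustrative; erasure code requires at least four drives):
```sh
# FS backend: a single export directory.
minio server /data

# Erasure-coded (XL) backend: four or more drives.
minio server /data1 /data2 /data3 /data4
```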


@@ -1,24 +0,0 @@
### Backend format `fs.json`
```go
// objectPartInfo - information about each part, kept in the multipart
// metadata file after CompleteMultipartUpload() is called.
type objectPartInfo struct {
    Number int    `json:"number"`
    Name   string `json:"name"`
    ETag   string `json:"etag"`
    Size   int64  `json:"size"`
}

// fsMetaV1 represents the `fs.json` metadata header.
type fsMetaV1 struct {
    Version string `json:"version"`
    Format  string `json:"format"`
    Minio   struct {
        Release string `json:"release"`
    } `json:"minio"`
    // Metadata map for the current object `fs.json`.
    Meta  map[string]string `json:"meta,omitempty"`
    Parts []objectPartInfo  `json:"parts,omitempty"`
}
```


@@ -1,4 +0,0 @@
{
  "format": "fs",
  "version": "1"
}


@@ -1,20 +0,0 @@
{
  "version": "1.0.0",
  "format": "fs",
  "minio": {
    "release": "DEVELOPMENT.GOGET"
  },
  "meta": {
    "etag": "97586a5290d4f5a41328062d6a7da593-3",
    "content-type": "binary/octet-stream",
    "content-encoding": "gzip"
  },
  "parts": [
    {
      "number": 1,
      "name": "part.1",
      "size": 29,
      "etag": ""
    }
  ]
}


@@ -1,10 +0,0 @@
{
  "version": "1",
  "format": "fs",
  "uploadIds": [
    {
      "uploadID": "id",
      "startTime": "time"
    }
  ]
}


@@ -1,54 +0,0 @@
### Backend format `xl.json`
```go
// objectPartInfo - information about each part, kept in the multipart
// metadata file after CompleteMultipartUpload() is called.
type objectPartInfo struct {
    Number int    `json:"number"`
    Name   string `json:"name"`
    ETag   string `json:"etag"`
    Size   int64  `json:"size"`
}

// checkSumInfo - carries checksums of individual scattered parts per disk.
type checkSumInfo struct {
    Name      string `json:"name"`
    Algorithm string `json:"algorithm"`
    Hash      string `json:"hash"`
}

// erasureInfo - carries erasure coding related information, block
// distribution and checksums.
type erasureInfo struct {
    Algorithm    string         `json:"algorithm"`
    DataBlocks   int            `json:"data"`
    ParityBlocks int            `json:"parity"`
    BlockSize    int64          `json:"blockSize"`
    Index        int            `json:"index"`
    Distribution []int          `json:"distribution"`
    Checksum     []checkSumInfo `json:"checksum,omitempty"`
}

// statInfo - carries stat information of the object.
type statInfo struct {
    Size    int64     `json:"size"`    // Size of the object `xl.json`.
    ModTime time.Time `json:"modTime"` // ModTime of the object `xl.json`.
}

// xlMetaV1 represents the `xl.json` metadata header.
type xlMetaV1 struct {
    Version string   `json:"version"` // Version of the current `xl.json`.
    Format  string   `json:"format"`  // Format of the current `xl.json`.
    Stat    statInfo `json:"stat"`    // Stat of the current object `xl.json`.
    // Erasure coded info for the current object `xl.json`.
    Erasure erasureInfo `json:"erasure"`
    // Minio release tag for the current object `xl.json`.
    Minio struct {
        Release string `json:"release"`
    } `json:"minio"`
    // Metadata map for the current object `xl.json`.
    Meta map[string]string `json:"meta,omitempty"`
    // Parts captures information about each individual part of the object.
    Parts []objectPartInfo `json:"parts,omitempty"`
}
```


@@ -1,20 +0,0 @@
{
  "xl": {
    "jbod": [
      "8aa2b1bc-0e5a-49e0-8221-05228336b040",
      "3467a69b-0266-478a-9e10-e819447e4545",
      "d4a4505b-4e4f-4864-befd-4f36adb0bc66",
      "592b6583-ca26-47af-b991-ba6d097e34e8",
      "c7ef69f0-dbf5-4c0e-b167-d30a441bad7e",
      "f0b36ea3-fe96-4f2b-bced-22c7f33e0e0c",
      "b83abf39-e39d-4e7b-8e16-6f9953455a48",
      "7d63dfc9-5441-4243-bd36-de8db0691982",
      "c1bbffc5-81f9-4251-9398-33a959b3ce37",
      "64408f94-26e0-4277-9593-2d703f4d5a91"
    ],
    "disk": "8aa2b1bc-0e5a-49e0-8221-05228336b040",
    "version": "1"
  },
  "format": "xl",
  "version": "1"
}


@@ -1,10 +0,0 @@
{
  "version": "1.0.0",
  "format": "xl",
  "uploadIds": [
    {
      "uploadID": "id",
      "startTime": "time"
    }
  ]
}


@@ -1,50 +0,0 @@
{
  "parts": [
    {
      "number": 1,
      "size": 5242880,
      "etag": "3565c6e741e69a007a5ac7db893a62b5",
      "name": "part.1"
    },
    {
      "number": 2,
      "size": 5242880,
      "etag": "d416712335c280ab1e39498552937764",
      "name": "part.2"
    }
  ],
  "meta": {
    "etag": "97586a5290d4f5a41328062d6a7da593-3",
    "content-type": "application/octet-stream",
    "content-encoding": "gzip"
  },
  "minio": {
    "release": "DEVELOPMENT.GOGET"
  },
  "erasure": {
    "algorithm": "klauspost/reedsolomon/vandermonde",
    "index": 2,
    "distribution": [1, 3, 4, 2, 5, 8, 7, 6, 9],
    "blockSize": 4194304,
    "parity": 5,
    "data": 5,
    "checksum": [
      {
        "name": "part.1",
        "algorithm": "blake2b",
        "hash": "173c2cda7fb9e2798f91ba10135b65475fd2b97684355b328941d1c470a37549502672b2b4630c8268d5e5f79c8eb2fe433a25c6368f31d75b2c0504e3104c0e"
      },
      {
        "name": "part.2",
        "algorithm": "blake2b",
        "hash": "14f9ba0006b2db7cd171507e79032ad3a34eab7df02b6564c487614b6d1e6613343a4f56074aa3473ac9c3b26e00cbbeae937d478cca4cfb138e72838ebb0826"
      }
    ]
  },
  "stat": {
    "modTime": "2016-05-24T00:09:40.122390255Z",
    "size": 14824084
  },
  "format": "xl",
  "version": "1.0.0"
}


@@ -1,41 +0,0 @@
## Minio Browser
Minio Browser uses JSON Web Tokens (JWT) to authenticate JSON RPC requests.
The initial login request generates a token for the 'AccessKey' and 'SecretKey'
provided by the user.
> Currently these tokens expire after 10 hours; this is not configurable yet.
### Start Minio server
```sh
minio server /data
```
### JSON RPC APIs
The JSON RPC namespace is `Web`; an example request sketch follows the operation lists below.
#### Server operations
* ServerInfo - fetches current server information, including memory statistics, Minio binary
version, Go runtime version and more.
* StorageInfo - fetches disk space availability (Total/Free), type, and Online/Offline status of disks with counts, along with ReadQuorum and WriteQuorum counts.
#### Auth operations
* Login - accepts a 'username' and 'password' and on success replies with a new JSON Web Token (JWT).
* SetAuth - changes access credentials to a new 'username' and 'password'.
* GetAuth - fetches the current credentials from the server.
#### Bucket/Object operations
* ListBuckets - lists buckets, requires a valid token.
* ListObjects - lists objects, requires a valid token.
* MakeBucket - makes a new bucket, requires a valid token.
* RemoveObject - removes an object from a bucket, requires a valid token.
* Upload - uploads a new object from the browser, requires a valid token.
* Download - downloads an object from a bucket, requires a valid token.
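For illustration only, a `ServerInfo` call might look like the sketch below; the `/minio/webrpc` endpoint path, the payload shape, and the `<jwt-token>` placeholder are assumptions, since this internal API is not formally documented here:
```sh
# Sketch of a Web.ServerInfo JSON RPC call; replace <jwt-token> with a
# token obtained from Login. Endpoint path and payload are assumptions.
curl -s http://localhost:9000/minio/webrpc \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <jwt-token>" \
  -d '{"id": 1, "jsonrpc": "2.0", "method": "Web.ServerInfo", "params": {}}'
```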


@@ -7,7 +7,7 @@ Following network parameter settings can help ensure optimal Minio server perfor
- *`tcp_fin_timeout`* : A socket left in memory takes approximately 1.5 KB of memory. It makes sense to close unused sockets preemptively to ensure no memory leakage. This way, even if a peer doesn't close the socket for some reason, the system itself closes it after a timeout. The `tcp_fin_timeout` variable defines this timeout and tells the kernel how long to keep sockets in the FIN-WAIT-2 state. We recommend setting it to 30. You can set it as shown below
```sh
-`sysctl -w net.ipv4.tcp_fin_timeout=30`
+sysctl -w net.ipv4.tcp_fin_timeout=30
```
- *`tcp_keepalive_probes`* : This variable defines the number of unacknowledged probes to be sent before considering a connection dead. You can set it as shown below


@@ -1,16 +0,0 @@
Introduction [![Slack](https://slack.minio.io/slack?type=svg)](https://slack.minio.io)
------------
This feature allows Minio to combine more than 16 disks in a distributed setup by grouping them into multiple sets. There are no special configuration changes required to enable this feature. Access to files stored across this setup is locked and synchronized by default.
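As a sketch, a standalone start-up across 24 drives might look like the following; the paths are illustrative, and `{1..24}` is plain shell brace expansion, not Minio-specific syntax:
```sh
# Start Minio across 24 drives; Minio groups them into multiple
# erasure sets of at most 16 drives each. Paths are illustrative.
minio server /mnt/export{1..24}
```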
Motivation
----------
As next-generation data centers continue to shrink, IT professionals must plan ahead to capture the benefits of greater server and storage density. Computer hardware is changing rapidly; new system form factors, virtualization, and containerization have enabled far more enterprise computing in just a fraction of the physical space. Increased densities allow for smaller capital purchases and lower energy bills.
Restrictions
------------
* Each set is still a maximum of 16 disks; you can start with multiple such sets statically.
* Sets of disks are static and cannot be changed; there is no elastic expansion allowed.
* ListObjects() across sets can be relatively slower, since listing happens on all servers and results are merged at this layer.