Broke README.md into pieces

Added more to "Architecture"
This commit is contained in:
Ev Kontsevoy 2016-03-16 23:14:17 -07:00
parent 563207594e
commit 22b35b0468
5 changed files with 356 additions and 282 deletions


@ -1,6 +1,6 @@
# Overview
### Introduction
## Introduction
Gravitational Teleport is a tool for remotely accessing isolated clusters of
Linux servers via SSH or HTTPS. Unlike traditional key-based access, Teleport
@ -16,7 +16,7 @@ enables teams to easily adopt the following practices:
Take a look at [Quick Start]() page to get a taste of using Teleport, or read the
[Design Document]() to get a full understanding of how Teleport works.
### Why?
## Why?
Mature tech companies with significant infrastructure footprints tend to implement most
of these patterns internally. Gravitational Teleport allows smaller companies without
@ -27,7 +27,7 @@ license.
Teleport is built on top of the high-quality [Golang SSH](https://godoc.org/golang.org/x/crypto/ssh)
implementation and it is fully compatible with OpenSSH.
### Who Built Teleport?
## Who Built Teleport?
Teleport was created by [Gravitational Inc](https://gravitational.com). We have built Teleport
by borrowing from our previous experiences at Rackspace. It has been extracted from the
@ -38,282 +38,3 @@ Being a wonderful standalone tool, Teleport can be used as a software library en
trust management in complex multi-cluster, multi-region scenarios across many teams
within multiple organizations.
# Quick Start
#### Quick Start
Welcome to the Teleport Quick Start Guide. The goal of this document is to show off the basic
capabilities of Teleport on a single node.
#### Core Concepts
There are three types of services Teleport nodes can run: `nodes`, `proxies` and `auth servers`.
- An auth server is the core of a cluster. Auth servers store user accounts and offer
authentication and authorization service for every node and every user in a cluster.
- Nodes are regular SSH nodes, similar to the `sshd` daemon you are probably used to. When a node receives
a connection request, it authenticates it via the cluster's auth server.
- Proxies route client connection requests to an appropriate node and serve a Web UI
which can also be used to log in to SSH nodes. Every client-to-node connection in
Teleport must be routed via a proxy.
The `teleport` daemon runs all 3 of these services by default. This Quick Start Guide will
be using this default behavior to create a single node cluster and interact with it
using the client CLI tool: `tsh`.
#### Installing
Gravitational Teleport natively runs on any modern Linux distribution and OSX. You can
download pre-built binaries from [here](https://github.com/gravitational/teleport/releases)
or you can [build it from source](https://github.com/gravitational/teleport).
#### Starting Teleport
Let's create a single-node cluster and connect to it using the CLI as well as your
web browser.
First, create a directory for Teleport
to keep its data. By default it's `/var/lib/teleport`. Then start the `teleport` daemon:
```bash
mkdir -p /var/lib/teleport
teleport start
```
At this point you should see Teleport print the listening IPs of all 3 services to the console.
Congratulations! You are running a single-node Teleport cluster.
#### Creating Users
Teleport users are defined on a cluster level, and every Teleport user must be associated with
a list of machine-level OS usernames it can authenticate as during a login. This list is
called "user mappings".
If you do not specify the mappings, the new Teleport user will be assigned a mapping with
the same name. Let's create a Teleport user with the same name as the OS user:
```bash
> tctl users add $USER
Signup token has been created. Share this URL with the user:
https://turing:3080/web/newuser/96c85ed60b47ad345525f03e1524ac95d78d94ffd2d0fb3c683ff9d6221747c2
```
`tctl` prints a sign-up URL for you to visit and complete registration. Open this link in a
browser, install Google Authenticator on your phone, set up second-factor authentication, and
pick a password.
Having done that, you will be presented with a Web UI where you will see your machine and
will be able to log into it using a web-based terminal.
#### Login
Let's log in using the `tsh` command line tool:
```bash
tsh --proxy=localhost localhost
```
Notice that the `tsh` client always needs the `--proxy` flag because all client connections
in Teleport have to go via a proxy, sometimes called an "SSH bastion".
#### Adding Nodes to Cluster
Let's add another node to your cluster. Let's assume the other node can be reached by the
hostname "luna". The `tctl` command below will create a single-use token for a node to
join and will print instructions for you to follow:
```bash
> tctl nodes add
The invite token: n92bb958ce97f761da978d08c35c54a5c
Run this on the new node to join the cluster:
teleport start --roles=node --token=n92bb958ce97f761da978d08c35c54a5c --auth-server=10.0.10.1
```
Start the `teleport` daemon on "luna" as shown above, but make sure to use the proper `--auth-server`
IP to point back to your localhost.
Once you do that, "luna" will join the cluster. To verify, type this on your localhost:
```bash
> tsh --proxy=localhost ls
Node Name Node ID Address Labels
--------- ------- ------- ------
localhost xxxxx-xxxx-xxxx-xxxxxxx 10.0.10.1:3022
luna xxxxx-xxxx-xxxx-xxxxxxx 10.0.10.2:3022
```
#### Using Node Labels
Notice the "Labels" column in the output above. It is currently not populated. Teleport lets
you apply static or dynamic labels to your nodes. As the cluster grows and nodes assume different
roles, labels will help you find the right node quickly.
Let's see labels in action. Stop `teleport` on "luna" and restart it with the following command:
```bash
teleport start --roles=node --auth-server=10.0.10.1 --nodename=db --labels "location=virginia,arch=[1h:/bin/uname -m]"
```
Notice a few things here:
* We did not use the `--token` flag this time, because "luna" is already a member of the cluster.
* We renamed "luna" to "db" because this machine is running a database. This name only exists within Teleport, the actual hostname has not changed.
* We assigned a static label "location" to this host and set it to "virginia".
* We also assigned a dynamic label "arch" which will evaluate the `/bin/uname -m` command once an hour and assign its output to the label value.
Let's take a look at our cluster now:
```bash
> tsh --proxy=localhost ls
Node Name Node ID Address Labels
--------- ------- ------- ------
localhost xxxxx-xxxx-xxxx-xxxxxxx 10.0.10.1:3022
db xxxxx-xxxx-xxxx-xxxxxxx 10.0.10.2:3022 location=virginia,arch=x86_64
```
Let's use the newly created labels to filter the output of `tsh ls` and show only
nodes located in Virginia:
```
> tsh --proxy=localhost ls location=virginia
Node Name Node ID Address Labels
--------- ------- ------- ------
db xxxxx-xxxx-xxxx-xxxxxxx 10.0.10.2:3022 location=virginia,arch=x86_64
```
Labels can be used with the regular `ssh` command too. This will execute the `ls -l /` command
on all servers located in Virginia:
```
> tsh --proxy=localhost ssh location=virginia ls -l /
```
#### Sharing SSH Sessions with Colleagues
Suppose you are trying to troubleshoot a problem on a node. Sometimes it makes sense to ask
another team member for help. Traditionally this could be done by letting them know which
node you're on, having them SSH in, start a terminal multiplexer like `screen`, and join your
session there.
Teleport makes this a bit more convenient. Let's log in to "luna" and ask Teleport for your
current session status:
```bash
> tsh --proxy=teleport.example.com ssh luna
luna > teleport status
User ID : joe, logged in as joe from 10.0.10.1 43026 3022
Session ID : 7645d523-60cb-436d-b732-99c5df14b7c4
Session URL: https://teleport.example.com:3080/web/sessions/7645d523-60cb-436d-b732-99c5df14b7c4
```
You can share the Session URL with a colleague in your organization. Assuming that `teleport.example.com`
is your company's Teleport proxy, he will be able to join and help you troubleshoot the
problem on "luna" in his browser.
Also, people can join your session via CLI. They will have to run:
```bash
> tsh --proxy=teleport.example.com join 7645d523-60cb-436d-b732-99c5df14b7c4
```
NOTE: for this to work, both of you must have proper user mappings allowing you
to access `luna` under the same OS user.
#### Inviting Colleagues to your Laptop
Sometimes you may want to temporarily open up your own laptop for someone else (if you
trust them, of course). First, you will have to start `teleport` with `--roles=node` in
a separate terminal:
```bash
> teleport start --roles=node --proxy=teleport.example.com
```
... then you will need to start a local SSH session by logging into localhost and
asking for a session ID:
```bash
> tsh --proxy=teleport.example.com ssh localhost
localhost> teleport status
```
Now you can invite someone into your localhost session. They will need to have a proper
user mapping, of course, to be allowed to join your session. To disconnect, shut down
`teleport` daemon or simply exit the `tsh` session.
# Architecture
This document covers the underlying design principles of Teleport and offers a detailed
description of the Teleport architecture.
### Design Principles
Teleport was designed in accordance with the following design principles:
* **Off the shelf security**. Teleport does not re-implement any security primitives
and uses well-established, popular implementations of the encryption and network protocols.
* **Open standards**. There is no security through obscurity. Teleport is fully compatible
with existing and open standards.
### Core Concepts
There are three types of services (roles) in a Teleport cluster.
| Service (Role) | Description |
|----------------|-------------|
| node           | This role provides SSH access to a node. Typically every machine in a cluster runs `teleport` with this role. It is stateless and lightweight. |
| proxy          | The proxy accepts inbound connections from clients and routes them to the appropriate nodes. The proxy also serves the Web UI. |
| auth           | This service provides authentication and authorization services to proxies and nodes. It is the certificate authority (CA) of a cluster and the storage for audit logs. It is the only stateful component of a Teleport cluster. |
Although the `teleport` daemon is a single binary, it can provide any combination of these services
via the `--roles` command line flag or via the configuration file.
Let's explore how these services interact with Teleport clients and with each other. Consider the diagram:
![Teleport Diagram](img/teleport.png)
# Admin Guide
### Building
Gravitational Teleport is written in Go and requires Golang v1.5 or newer. If you have Go
already installed, building is easy:
```bash
> git clone https://github.com/gravitational/teleport && cd teleport
> make
```
If you do not have Go but you have Docker installed and running, you can build Teleport
this way:
```bash
> git clone https://github.com/gravitational/teleport
> make -C build.assets
```
### Installing
TBD
- Configuration
- Adding users to the cluster
- Adding nodes to the cluster
- Controlling access
FAQ
---
0. Can I use Teleport instead of OpenSSH in production today?
1. Can I use OpenSSH client's `ssh` command with Teleport?
2. Which TCP ports does Teleport use?
3. Do you offer commercial support for Teleport?

docs/admin-guide.md Normal file

@ -0,0 +1,28 @@
# Admin Guide
### Building
Gravitational Teleport is written in Go and requires Golang v1.5 or newer. If you have Go
already installed, building is easy:
```bash
> git clone https://github.com/gravitational/teleport && cd teleport
> make
```
If you do not have Go but you have Docker installed and running, you can build Teleport
this way:
```bash
> git clone https://github.com/gravitational/teleport
> make -C build.assets
```
### Installing
TBD
- Configuration
- Adding users to the cluster
- Adding nodes to the cluster
- Controlling access

docs/architecture.md Normal file

@ -0,0 +1,99 @@
# Architecture
This document covers the underlying design principles of Teleport and offers a detailed
description of the Teleport architecture.
### Design Principles
Teleport was designed in accordance with the following design principles:
* **Off the Shelf Security**. Teleport does not re-implement any security primitives
and uses well-established, popular implementations of the encryption and network protocols.
* **Open Standards**. There is no security through obscurity. Teleport is fully compatible
with existing and open standards and other software, including OpenSSH.
* **Cluster-oriented Design**. Teleport is built for managing clusters, not individual
servers. In practice this means that hosts and users have cluster memberships. Identity
management and authorization happen on a cluster level.
* **Built for Teams**. Teleport was created under the assumption of multiple teams operating
on several disconnected clusters, for example production vs. staging, or perhaps
on a cluster-per-customer or cluster-per-application basis.
### Core Concepts
There are three types of services (roles) in a Teleport cluster.
| Service (Role) | Description |
|----------------|-------------|
| node           | This role provides SSH access to a node. Typically every machine in a cluster runs `teleport` with this role. It is stateless and lightweight. |
| proxy          | The proxy accepts inbound connections from clients and routes them to the appropriate nodes. The proxy also serves the Web UI. |
| auth           | This service provides authentication and authorization services to proxies and nodes. It is the certificate authority (CA) of a cluster and the storage for audit logs. It is the only stateful component of a Teleport cluster. |
Although the `teleport` daemon is a single binary, it can provide any combination of these services
via the `--roles` command line flag or via the configuration file.
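As a rough sketch (the role names come from the table above; which combinations you run depends on your deployment), a few `teleport start` invocations might look like this:

```bash
# run only the SSH node service on a regular cluster member
teleport start --roles=node

# run a dedicated proxy together with the auth service on a separate machine
teleport start --roles=proxy,auth
```

The Quick Start relies on the default behavior, where a plain `teleport start` runs all three services at once.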
Let's explore how these services come together and interact with Teleport clients and with each other.
Let's look at this high-level diagram illustrating the process:
![Teleport Overview](img/overview.png)
Notice that the Teleport Admin tool must be physically present on the same machine where
Teleport Auth is running. Adding new nodes or inviting new users to the cluster is only
possible using this tool.
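For example, both of the admin operations shown in the Quick Start are performed with `tctl` on the auth server itself:

```bash
# invite a new Teleport user (run on the machine where the auth service lives)
tctl users add $USER

# generate a single-use token that a new node can use to join the cluster
tctl nodes add
```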
Once nodes and users (clients) have been invited to the cluster, let's go over the sequence
of network calls performed by Teleport components when the client tries to connect to the
node.
1. The client tries to establish an SSH connection to a proxy using either the CLI interface or a
web browser (via HTTPS). Clients must always connect through a proxy for two reasons:
* Individual nodes may not always be reachable from "the outside".
* Proxies always record SSH sessions and keep track of active user sessions. This makes it possible
for an SSH user to see if someone else is connected to a node he is about to work on.
When establishing a connection, the client offers its public key.
2. The proxy checks if the submitted public key has been previously signed by the auth server.
If there was no key offered (first-time login) or if the key certificate has expired, the
proxy denies the connection and asks the client to log in interactively using a password and a
2nd factor.
Teleport uses [Google Authenticator](https://support.google.com/accounts/answer/1066447?hl=en)
for the two-step authentication.
The password + 2nd factor are submitted to the proxy via HTTPS; therefore it is critical for
a secure Teleport configuration to install a proper HTTPS certificate on the proxy.
**DO NOT** use the self-signed certificate installed by default.
If the credentials are correct, the auth server generates and signs a new certificate and returns
it to the client via the proxy. The client stores this key and will use it for subsequent
logins. The key will automatically expire after 22 hours. In the future, Teleport will support
a configurable TTL for these temporary keys.
3. At this step, the proxy tries to locate the requested node in the cluster. There are three
lookup mechanisms a proxy uses to find the node's IP address (see the example after this list):
* Tries to resolve the name requested by the client.
* Asks the auth server if there is a node registered with this `nodename`.
* Asks the auth server to find a node (or nodes) with a label that matches the requested name.
If the node is located, the proxy establishes the connection between the client and the
requested node and begins recording the session, sending the session history to the auth
server to be stored.
4. When the node receives a connection request, it too checks with the auth server to validate
the submitted client certificate. The node also requests the auth server to provide a list
of OS users (user mappings) for the connecting client, to make sure the client is authorized
to use the requested OS login.
In other words, every connection is authenticated twice before being authorized to log in:
* User's cluster membership is validated when connecting to a proxy.
* User's cluster membership is validated again when connecting to a node.
* User's node-level permissions are validated before authorizing him to interact with SSH
subsystems.
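To make the name-vs-label lookup above concrete, here is a sketch that reuses the node from the Quick Start, which was started with `--nodename=db` and the label `location=virginia` (the proxy address is a placeholder):

```bash
# connect by the Teleport node name registered with the auth server
tsh --proxy=teleport.example.com ssh db uptime

# connect by label: the command runs on every node whose labels match
tsh --proxy=teleport.example.com ssh location=virginia uptime
```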

docs/faq.md Normal file

@ -0,0 +1,9 @@
# FAQ
0. Can I use Teleport instead of OpenSSH in production today?
1. Can I use OpenSSH client's `ssh` command with Teleport?
2. Which TCP ports does Teleport use?
3. Do you offer commercial support for Teleport?

docs/quickstart.md Normal file

@ -0,0 +1,217 @@
# Quick Start
### Introduction
Welcome to the Teleport Quick Start Guide. The goal of this document is to show off the basic
capabilities of Teleport on a single node.
There are three types of services Teleport nodes can run: `nodes`, `proxies` and `auth servers`.
- An auth server is the core of a cluster. Auth servers store user accounts and offer
authentication and authorization service for every node and every user in a cluster.
- Nodes are regular SSH nodes, similar to the `sshd` daemon you are probably used to. When a node receives
a connection request, it authenticates it via the cluster's auth server.
- Proxies route client connection requests to an appropriate node and serve a Web UI
which can also be used to log in to SSH nodes. Every client-to-node connection in
Teleport must be routed via a proxy.
The `teleport` daemon runs all 3 of these services by default. This Quick Start Guide will
be using this default behavior to create a single node cluster and interact with it
using the client CLI tool: `tsh`.
### Installing
Gravitational Teleport natively runs on any modern Linux distribution and OSX. You can
download pre-built binaries from [here](https://github.com/gravitational/teleport/releases)
or you can [build it from source](https://github.com/gravitational/teleport).
### Starting a Cluster
Let's create a single-node cluster and connect to it using the CLI as well as your
web browser.
First, create a directory for Teleport
to keep its data. By default it's `/var/lib/teleport`. Then start the `teleport` daemon:
```bash
mkdir -p /var/lib/teleport
teleport start
```
At this point you should see Teleport print the listening IPs of all 3 services to the console.
Congratulations! You are running a single-node Teleport cluster.
### Creating Users
Teleport users are defined on a cluster level, and every Teleport user must be associated with
a list of machine-level OS usernames it can authenticate as during a login. This list is
called "user mappings".
If you do not specify the mappings, the new Teleport user will be assigned a mapping with
the same name. Let's create a Teleport user with the same name as the OS user:
```bash
> tctl users add $USER
Signup token has been created. Share this URL with the user:
https://turing:3080/web/newuser/96c85ed60b47ad345525f03e1524ac95d78d94ffd2d0fb3c683ff9d6221747c2
```
`tctl` prints a sign-up URL for you to visit and complete registration. Open this link in a
browser, install Google Authenticator on your phone, set up second-factor authentication, and
pick a password.
Having done that, you will be presented with a Web UI where you will see your machine and
will be able to log into it using a web-based terminal.
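If you want the new Teleport user mapped to OS logins other than the default one, the mappings can be listed explicitly. A hypothetical example (the exact `tctl` argument format may differ; `joe` and the comma-separated login list are placeholders):

```bash
# map the Teleport user "joe" to the OS logins "joe" and "root"
tctl users add joe joe,root
```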
### Login
Let's log in using the `tsh` command line tool:
```bash
tsh --proxy=localhost localhost
```
Notice that the `tsh` client always needs the `--proxy` flag because all client connections
in Teleport have to go via a proxy, sometimes called an "SSH bastion".
### Adding Nodes to Cluster
Let's add another node to your cluster. Let's assume the other node can be reached by the
hostname "luna". The `tctl` command below will create a single-use token for a node to
join and will print instructions for you to follow:
```bash
> tctl nodes add
The invite token: n92bb958ce97f761da978d08c35c54a5c
Run this on the new node to join the cluster:
teleport start --roles=node --token=n92bb958ce97f761da978d08c35c54a5c --auth-server=10.0.10.1
```
Start the `teleport` daemon on "luna" as shown above, but make sure to use the proper `--auth-server`
IP to point back to your localhost.
Once you do that, "luna" will join the cluster. To verify, type this on your localhost:
```bash
> tsh --proxy=localhost ls
Node Name Node ID Address Labels
--------- ------- ------- ------
localhost xxxxx-xxxx-xxxx-xxxxxxx 10.0.10.1:3022
luna xxxxx-xxxx-xxxx-xxxxxxx 10.0.10.2:3022
```
### Using Node Labels
Notice the "Labels" column in the output above. It is currently not populated. Teleport lets
you apply static or dynamic labels to your nodes. As the cluster grows and nodes assume different
roles, labels will help you find the right node quickly.
Let's see labels in action. Stop `teleport` on "luna" and restart it with the following command:
```bash
teleport start --roles=node --auth-server=10.0.10.1 --nodename=db --labels "location=virginia,arch=[1h:/bin/uname -m]"
```
Notice a few things here:
* We did not use the `--token` flag this time, because "luna" is already a member of the cluster.
* We renamed "luna" to "db" because this machine is running a database. This name only exists within Teleport, the actual hostname has not changed.
* We assigned a static label "location" to this host and set it to "virginia".
* We also assigned a dynamic label "arch" which will evaluate the `/bin/uname -m` command once an hour and assign its output to the label value.
Let's take a look at our cluster now:
```bash
> tsh --proxy=localhost ls
Node Name Node ID Address Labels
--------- ------- ------- ------
localhost xxxxx-xxxx-xxxx-xxxxxxx 10.0.10.1:3022
db xxxxx-xxxx-xxxx-xxxxxxx 10.0.10.2:3022 location=virginia,arch=x86_64
```
Let's use the newly created labels to filter the output of `tsh ls` and show only
nodes located in Virginia:
```
> tsh --proxy=localhost ls location=virginia
Node Name Node ID Address Labels
--------- ------- ------- ------
db xxxxx-xxxx-xxxx-xxxxxxx 10.0.10.2:3022 location=virginia,arch=x86_64
```
Labels can be used with the regular `ssh` command too. This will execute the `ls -l /` command
on all servers located in Virginia:
```
> tsh --proxy=localhost ssh location=virginia ls -l /
```
### Sharing SSH Sessions with Colleagues
Suppose you are trying to troubleshoot a problem on a node. Sometimes it makes sense to ask
another team member for help. Traditionally this could be done by letting them know which
node you're on, having them SSH in, start a terminal multiplexer like `screen`, and join your
session there.
Teleport makes this a bit more convenient. Let's log in to "luna" and ask Teleport for your
current session status:
```bash
> tsh --proxy=teleport.example.com ssh luna
luna > teleport status
User ID : joe, logged in as joe from 10.0.10.1 43026 3022
Session ID : 7645d523-60cb-436d-b732-99c5df14b7c4
Session URL: https://teleport.example.com:3080/web/sessions/7645d523-60cb-436d-b732-99c5df14b7c4
```
You can share the Session URL with a colleague in your organization. Assuming that `teleport.example.com`
is your company's Teleport proxy, he will be able to join and help you troubleshoot the
problem on "luna" in his browser.
Also, people can join your session via CLI. They will have to run:
```bash
> tsh --proxy=teleport.example.com join 7645d523-60cb-436d-b732-99c5df14b7c4
```
NOTE: for this to work, both of you must have proper user mappings allowing you
to access `luna` under the same OS user.
### Inviting Colleagues to your Laptop
Sometimes you may want to temporarily open up your own laptop for someone else (if you
trust them, of course). First, you will have to start `teleport` with `--roles=node` in
a separate terminal:
```bash
> teleport start --roles=node --proxy=teleport.example.com
```
... then you will need to start a local SSH session by logging into localhost and
asking for a session ID:
```bash
> tsh --proxy=teleport.example.com ssh localhost
localhost> teleport status
```
Now you can invite someone into your localhost session. They will need to have a proper
user mapping, of course, to be allowed to join your session. To disconnect, shut down
`teleport` daemon or simply exit the `tsh` session.
### Running in Production
We hope this guide helped you quickly set up a toy single-server SSH cluster on
localhost. For production environments we strongly recommend the following:
- Install HTTPS certificates for every Teleport proxy.
- Run Teleport `auth` on isolated servers. The auth service can run in a
highly available (HA) configuration.
- Use a configuration file instead of command line flags because it gives you
more flexibility, for example when configuring HA clusters (see the sketch below).
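A minimal sketch of switching from flags to a config file (the `--config` flag name and the `/etc/teleport.yaml` path are assumptions; check `teleport start --help` for the exact option and the expected file format):

```bash
# instead of passing --roles, --nodename, --labels, etc. on the command line,
# keep them in a config file and point the daemon at it
teleport start --config=/etc/teleport.yaml
```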