Added the 2.3 version of the docs.
It is not marked as "latest", though.
docs/2.3.yaml (new file, 31 lines)
@@ -0,0 +1,31 @@
site_name: Gravitational Teleport
site_url: https://gravitational.com/teleport
repo_url: https://github.com/gravitational/teleport
site_description: SSH infrastructure for Clusters and Teams
site_author: Gravitational Inc
copyright: Gravitational Inc, 2016-17

# output directory:
site_dir: ../build/docs/2.3
docs_dir: "2.3"

theme: readthedocs
theme_dir: theme
markdown_extensions:
  - markdown_include.include
  - admonition
  - def_list
  - footnotes
extra_css: []
extra_javascript: []
extra:
  version: 2.3
pages:
  - QuickStart Guide: quickstart.md
  - Architecture: architecture.md
  - User Manual: user-manual.md
  - Admin Manual: admin-guide.md
  - Teleport Enterprise: enterprise.md
  - Teleport Enterprise - SAML: saml.md
  - Teleport Enterprise - OIDC: oidc.md
  - FAQ: faq.md
docs/2.3/README.md (new file, 41 lines)
@@ -0,0 +1,41 @@
# Overview

## Introduction

Gravitational Teleport ("Teleport") is a tool for remotely accessing isolated clusters of
Linux servers via SSH or HTTPS. Unlike traditional key-based access, Teleport
enables teams to easily adopt the following practices:

- Avoid key distribution and [trust on first use](https://en.wikipedia.org/wiki/Trust_on_first_use) issues by using auto-expiring keys signed by a cluster certificate authority (CA).
- Enforce 2nd factor authentication.
- Connect to clusters located behind firewalls without direct Internet access via SSH bastions.
- Record and replay SSH sessions for knowledge sharing and auditing purposes.
- Collaboratively troubleshoot issues through session sharing.
- Discover online servers and Docker containers within a cluster with dynamic node labels.

Teleport is built on top of the high-quality [Golang SSH](https://godoc.org/golang.org/x/crypto/ssh)
implementation and it is fully compatible with OpenSSH.

## Why Build Teleport?

Mature tech companies with significant infrastructure footprints tend to implement most
of these patterns internally. Teleport allows smaller companies without
significant in-house SSH expertise to adopt them easily, as well. Teleport comes with an
accessible Web UI and a very permissive [Apache 2.0](https://github.com/gravitational/teleport/blob/master/LICENSE)
license to facilitate adoption and use.

Being a complete standalone tool, Teleport can also be used as a software library enabling
trust management in complex multi-cluster, multi-region scenarios across many teams
within multiple organizations.

## Who Built Teleport?

Teleport was created by [Gravitational Inc](https://gravitational.com). We built Teleport
by drawing on our previous experiences at Rackspace. It was extracted from [Gravity](https://gravitational.com/vendors.html), our system for helping our clients to deploy
and remotely manage their SaaS applications on many cloud regions or even on-premises.

## Resources

To get started with Teleport, we recommend beginning with the [Architecture Document](architecture.md). If you want to jump right in and play with Teleport, read the [Quick Start](quickstart.md). For a deeper understanding of how everything works and a recommended production setup, review the [Admin Manual](admin-guide.md) to set up Teleport and the [User Manual](user-manual.md) for daily usage. There is also an [FAQ](faq.md) where we collect common questions. Finally, you can always type `tsh`, `tctl` or `teleport` in a terminal after Teleport has been installed to review those reference guides.

The best way to ask questions or file issues regarding Teleport is by creating a GitHub issue or pull request. Otherwise, you can reach us through the contact form or chat on our [website](https://gravitational.com/).
docs/2.3/admin-guide.md (new file, 1231 lines)
docs/2.3/architecture.md (new file, 356 lines)
@@ -0,0 +1,356 @@
# Architecture

This document covers the underlying design principles of Teleport and a detailed description of the Teleport architecture.

## Design Principles

Teleport was designed in accordance with the following principles:

* **Off the Shelf Security**: Teleport does not re-implement any security primitives
  and uses well-established, popular implementations of the encryption and network protocols.

* **Open Standards**: There is no security through obscurity. Teleport is fully compatible
  with existing and open standards and other software, including OpenSSH.

* **Cluster-Oriented Design**: Teleport is built for managing clusters, not individual
  servers. In practice this means that hosts and users have cluster memberships. Identity
  management and authorization happen on a cluster level.

* **Built for Teams**: Teleport was created under the assumption of multiple teams operating
  on several disconnected clusters (production-vs-staging, or perhaps
  on a cluster-per-customer or cluster-per-application basis).

## Core Concepts

The following core concepts are integral to understanding the Teleport architecture.

* **Cluster of Nodes**. Unlike a traditional SSH service, Teleport operates on a _cluster_ of nodes.
  A cluster is a set of nodes (servers). There are several ramifications of this:
    * User identities and user permissions are defined and enforced on a cluster level.
    * A node must become a _cluster member_ before any user can connect to it via SSH.
    * SSH access to any cluster node is _always_ performed via a cluster proxy,
      sometimes called an "SSH bastion".

* **User Account**. Unlike traditional SSH, Teleport introduces the concept of a User Account.
  A User Account is not the same as an SSH login. For example, there can be a Teleport user "johndoe"
  who can be given permission to log in as "root" to a specific subset of nodes.

* **Teleport Services**. A Teleport cluster consists of three separate services, also called
  "node roles": `proxy`, `auth` and `node`. Each Teleport node can run any combination of them
  by passing the `--roles` flag to the `teleport` daemon.

* **User Roles**. Unlike traditional SSH, each Teleport user account is assigned a `role`.
  Having roles allows Teleport to implement role-based access control (RBAC), i.e. assign
  users to groups (roles) and restrict each role to a subset of actions on a subset of
  nodes in a cluster.

* **Certificates**. Teleport uses SSH certificates to authenticate nodes and users within
  a cluster. Teleport does not allow public key or password-based SSH authentication.

* **Dynamic Configuration**. Nearly everything in Teleport can be configured via the
  configuration file, `/etc/teleport.yaml` by default. But some settings can be changed
  at runtime, by modifying the cluster state (e.g., creating user roles or
  connecting to trusted clusters). These operations are called "dynamic configuration".

## User Accounts

Teleport supports two types of user accounts:

* **Internal users** are created and stored in Teleport's own identity storage. A cluster
  administrator has to create account entries for every Teleport user.
  Teleport supports second factor authentication (2FA) and it is enforced by default.
  There are two types of 2FA supported:
    * [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm)
      is the default. You can use [Google Authenticator](https://en.wikipedia.org/wiki/Google_Authenticator),
      [Authy](https://www.authy.com/) or any other TOTP client.
    * [U2F](https://en.wikipedia.org/wiki/Universal_2nd_Factor).
* **External users** are users stored elsewhere within an organization. Examples include
  Github, Active Directory (AD), an LDAP server, an OpenID/OAuth2 endpoint or a SAML provider.
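For illustration, TOTP codes like the ones these clients generate are computed per RFC 6238: an HMAC-SHA1 over the current 30-second time step, dynamically truncated to six digits. A minimal sketch in Python (not Teleport's actual implementation):

```python
import base64
import hashlib
import hmac
import struct


def totp(secret_b32, unix_time, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time step (RFC 4226 truncation)."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(unix_time) // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble of last byte
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret (`"12345678901234567890"` in base32), this reproduces the published test vectors, e.g. `totp(..., 59)` yields `287082`.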

!!! tip "Version Warning":
    External user identities are only supported in Teleport Enterprise. Please
    take a look at the [Teleport Enterprise](enterprise.md) chapter for more information.

## Teleport Cluster

Let's explore how these services come together and interact with Teleport clients and with each other.

**High Level Diagram of a Teleport cluster**

![Teleport Overview](img/overview.svg)

Notice that the Teleport Admin tool must be physically present on the same machine where
Teleport auth is running. Adding new nodes or inviting new users to the cluster is only
possible using this tool.

Once nodes and users (clients) have been invited to the cluster, let's go over the sequence
of network calls performed by Teleport components when the client tries to connect to the
node.

1) The client tries to establish an SSH connection to a proxy using either the CLI interface or a
web browser (via HTTPS). When establishing a connection, the client offers its public key. Clients must always connect through a proxy for two reasons:

   * Individual nodes may not always be reachable from "the outside".
   * Proxies always record SSH sessions and keep track of active user sessions. This makes it possible for an SSH user to see if someone else is connected to a node she is about to work on.

2) The proxy checks if the submitted certificate has been previously signed by the auth server.
If there was no key previously offered (first time login) or if the certificate has expired, the
proxy denies the connection and asks the client to login interactively using a password and a
2nd factor.

Teleport uses [Google Authenticator](https://support.google.com/accounts/answer/1066447?hl=en)
for the two-step authentication. The password + 2nd factor are submitted to a proxy via HTTPS, therefore it is critical for a secure configuration of Teleport to install a proper HTTPS certificate on a proxy.

!!! warning "Warning":
    Do not use a self-signed certificate in production!

If the credentials are correct, the auth server generates and signs a new certificate and returns
it to the client via the proxy. The client stores this key and will use it for subsequent
logins. The key will automatically expire after 23 hours by default. This TTL can be configured
to a maximum of 30 hours and a minimum of 1 minute.
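The TTL bounds above amount to a simple clamp around the requested lifetime. An illustrative Python sketch (the constant and function names are hypothetical, not Teleport's):

```python
MIN_TTL_SECONDS = 60             # 1 minute
MAX_TTL_SECONDS = 30 * 3600      # 30 hours
DEFAULT_TTL_SECONDS = 23 * 3600  # issued when the client requests no TTL


def effective_cert_ttl(requested_seconds=None):
    """Clamp a requested certificate TTL to the allowed range."""
    if requested_seconds is None:
        return DEFAULT_TTL_SECONDS
    return max(MIN_TTL_SECONDS, min(requested_seconds, MAX_TTL_SECONDS))
```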

3) At this step, the proxy tries to locate the requested node in a cluster. There are three
lookup mechanisms a proxy uses to find the node's IP address:

* It tries to resolve the name requested by the client.
* It asks the auth server if there is a node registered with this `nodename`.
* It asks the auth server to find a node (or nodes) with a label that matches the requested name.

If the node is located, the proxy establishes the connection between the client and the
requested node and begins recording the session, sending the session history to the auth
server to be stored.
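The three lookup mechanisms can be sketched as a fall-through search (illustrative Python, not Teleport's code; the node-record shape is an assumption):

```python
import socket


def resolve_target(query, registered_nodes, dns_lookup=socket.gethostbyname):
    """Resolve a client-supplied name to node addresses, trying each
    mechanism in order: DNS, registered nodename, then label values."""
    # 1. Try plain DNS/hosts resolution of the requested name.
    try:
        return [dns_lookup(query)]
    except OSError:
        pass
    # 2. Look for a node registered under this exact nodename.
    for node in registered_nodes:
        if node["nodename"] == query:
            return [node["addr"]]
    # 3. Fall back to all nodes whose labels carry the requested value.
    return [node["addr"] for node in registered_nodes
            if query in node["labels"].values()]
```

Making the DNS step injectable keeps the fall-through order testable without a live resolver.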

4) When the node receives a connection request, it too checks with the auth server to validate
the submitted client certificate. The node also asks the auth server to provide a list
of OS users (user mappings) for the connecting client, to make sure the client is authorized
to use the requested OS login.

In other words, every connection is authenticated twice before being authorized to log in:

* The user's cluster membership is validated when connecting to a proxy.
* The user's cluster membership is validated again when connecting to a node.
* The user's node-level permissions are validated before authorizing her to interact with SSH
  subsystems.

**Detailed Diagram of a Teleport cluster**

![Teleport Everything](img/everything.svg)

### Cluster State

Each cluster node is completely stateless and holds no secrets such as keys, passwords, etc.
The persistent state of a Teleport cluster is kept by the auth server. There are three types
of data stored by the auth server:

* **Key storage**. As described above, a Teleport cluster is a set of machines whose public keys are
  signed by the same certificate authority (CA), with the auth server acting as the CA of a cluster.
  The auth server stores its own keys in a key storage. Teleport supports multiple storage back-ends
  to store secrets, including file-based storage or databases like [BoltDB](https://github.com/boltdb/bolt),
  [DynamoDB](https://aws.amazon.com/dynamodb/) or [etcd](https://github.com/coreos/etcd). Implementing another key storage backend is simple; see the `lib/backend` directory in the Teleport source code.

* **Audit Log**. When users log into a Teleport cluster, execute remote commands and log out,
  that activity is recorded in the audit log. See [Audit Log](admin-guide.md#audit-log)
  for more details.

* **Recorded Sessions**. When Teleport users launch remote shells via the `tsh ssh` command, their
  interactive sessions are recorded and stored by the auth server. Each recorded
  session is a file which is saved in `/var/lib/teleport`, by default.

## Teleport Services

There are three types of services (roles) in a Teleport cluster.

| Service (Node Role) | Description |
|---------------------|-------------|
| node | This role provides SSH access to a node. Typically every machine in a cluster runs this role. It is stateless and lightweight. |
| proxy | The proxy accepts inbound connections from the clients and routes them to the appropriate nodes. The proxy also serves the Web UI. |
| auth | This service provides authentication and authorization services to proxies and nodes. It is the certificate authority (CA) of a cluster and the storage for audit logs. It is the only stateful component of a Teleport cluster. |

Although the `teleport` daemon is a single binary, it can provide any combination of these services
via the `--roles` command line flag or via the configuration file.

In addition to the `teleport` daemon, there are three client tools you will use:

| Tool | Description |
|------|-------------|
| tctl | Cluster administration tool used to invite nodes to a cluster and manage user accounts. `tctl` must be used on the same machine where `auth` is running. |
| tsh | Teleport client tool, similar in principle to OpenSSH's `ssh`. Use it to log into remote SSH nodes, list and search for nodes in a cluster, securely upload/download files, etc. `tsh` can work in conjunction with `ssh` by acting as an SSH agent. |
| Web browser | You can use your web browser to log into any Teleport node: just open `https://<proxy-host>:3080` (`proxy-host` is one of the machines that has the proxy service enabled). |

Let's explore each of the Teleport services in detail.

### The Auth Service

The auth server acts as the certificate authority (CA) of the cluster. Teleport security is
based on SSH certificates and every certificate must be signed by the cluster auth server.

There are two types of certificates the auth server can sign:

* **Host certificates** are used to add new nodes to a cluster.
* **User certificates** are used to authenticate users when they try to log into a cluster node.

Upon initialization the auth server generates a public/private keypair and stores it in the
configurable key storage. The auth server also keeps records of what has been happening
inside the cluster: it stores recordings of all SSH sessions in the configurable events
storage.

![Teleport Auth](img/auth-server.svg)

When a new node joins the cluster, the auth server generates a new public/private keypair for
the node and signs its certificate.

To join a cluster for the first time, a node must present a "join token" to the auth server.
The token can be static (configured via a config file) or a dynamic, single-use token.

!!! tip "NOTE":
    When using dynamic tokens, their default time to live (TTL) is 15 minutes, but it can be
    reduced (not increased) via a `tctl` flag.

Nodes that are members of a Teleport cluster can interact with the auth server using the auth API.
The API is implemented as an HTTP REST service running over the SSH tunnel, authenticated using host
or user certificates previously signed by the auth server.

All nodes of the cluster send periodic ping messages to the auth server, reporting their
IP addresses and the values of their assigned labels. The list of connected cluster nodes is accessible
to all members of the cluster via the API.
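A registry of live nodes driven by periodic pings can be sketched like this (illustrative Python; Teleport's real implementation, wire format, and expiry interval differ):

```python
import time

HEARTBEAT_EXPIRY = 60  # seconds without a ping before a node is considered offline (assumed value)


class NodeRegistry:
    """Tracks nodes that announce themselves with periodic pings."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._nodes = {}  # nodename -> (addr, labels, last_seen)

    def heartbeat(self, nodename, addr, labels):
        """Record a ping: the node reports its address and current labels."""
        self._nodes[nodename] = (addr, labels, self._clock())

    def online(self):
        """Return nodes whose last ping is within the expiry window."""
        now = self._clock()
        return {name: (addr, labels)
                for name, (addr, labels, seen) in self._nodes.items()
                if now - seen <= HEARTBEAT_EXPIRY}
```

Because a ping also re-reports labels, the registry stays consistent with dynamic label values as nodes update them.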

Clients can also connect to the auth API through the Teleport proxy to use a limited subset of the API to
discover the member nodes of the cluster.

Cluster administration is performed using the `tctl` command line tool.

!!! tip "NOTE":
    For high availability in production, a Teleport cluster can be serviced by multiple auth servers
    running in sync. Check [HA configuration](admin-guide.md#high-availability) in the
    Admin Guide.

### The Proxy Service

The proxy is a stateless service which performs two functions in a Teleport cluster:

1. It serves a Web UI which is used by cluster users to sign up and configure their accounts,
   explore nodes in a cluster, log into remote nodes, join existing SSH sessions or replay
   recorded sessions.

2. It serves as an authentication gateway, asking for user credentials and forwarding them
   to the auth server via the auth API. When a user executes the `tsh --proxy=p ssh node` command,
   trying to log into "node", the `tsh` tool will establish an HTTPS connection to the proxy "p"
   and authenticate before it is given access to "node".

All user interactions with the Teleport cluster are done via a proxy service. It is
recommended to have several of them running.

When you launch the Teleport Proxy for the first time, it will generate a self-signed HTTPS
certificate to make it easier to explore Teleport.

!!! warning "Warning":
    It is absolutely crucial to properly configure TLS for HTTPS when you use Teleport Proxy in production.

### Web to SSH Proxy

In this mode, Teleport Proxy implements a WSS (secure web sockets) to SSH proxy:

![Teleport Proxy Web](img/proxy-web.svg)

1. The user logs in to the proxy with a username, password and 2nd factor token.
2. The proxy passes the credentials to the auth server's API.
3. If the auth server accepts the credentials, it generates a new web session and a special
   SSH keypair associated with this web session.
   The auth server starts serving the [OpenSSH ssh-agent protocol](https://github.com/openssh/openssh-portable/blob/master/PROTOCOL.agent)
   to the proxy.
4. From the SSH node's perspective it is a regular SSH client connection that is authenticated using
   an OpenSSH certificate, so no special logic is needed.

!!! tip "NOTE":
    Unlike in SSH proxying, in web mode Teleport Proxy terminates the traffic and re-encodes it for the SSH client connection.

### SSH Proxy

#### Getting a signed short-lived certificate

Teleport Proxy implements a special method to let clients get short-lived certificates signed by the auth server's host certificate authority:

![Teleport Proxy SSH](img/proxy-ssh-1.svg)

1. The `tsh` client (or `tsh agent`) generates an OpenSSH keypair and forwards the generated public key, along with the username, password and second factor token entered by the user, to the proxy.
2. The proxy forwards the request to the auth server.
3. If the auth server accepts the credentials, it generates a new certificate signed by its user CA and sends it back to the proxy.
4. The proxy returns the certificate to the client.

#### Connecting to the nodes

Once the client has obtained a short-lived certificate, it can use it to authenticate with any node in the cluster. Users can use the certificate with a standard OpenSSH client (obtaining it via the ssh-agent socket served by `tsh agent`) or with `tsh` directly:

![Teleport Proxy Web](img/proxy-ssh-2.svg)

1. The SSH client connects to the proxy and executes the `proxy` subsystem of the proxy's SSH server, providing the target node's host and port location.
2. The proxy dials the target TCP address and starts forwarding the traffic to the client.
3. The SSH client uses the established SSH tunnel to open a new SSH connection and authenticate with the target node using its client certificate.

!!! tip "NOTE":
    Teleport's `proxy` subsystem makes it compatible with [SSH jump hosts](https://wiki.gentoo.org/wiki/SSH_jump_host) implemented using OpenSSH's `ProxyCommand`.

## Certificates

Teleport uses standard OpenSSH certificates for client and host authentication.

### Node Certificates

Nodes, proxies and auth servers use certificates signed by the cluster's auth server
to authenticate when joining the cluster. Teleport does not allow SSH sessions into nodes
that are not cluster members.

A node certificate contains the node's role (like `proxy`, `auth` or `node`) as
a certificate extension (an opaque signed string). All nodes in the cluster can
connect to the auth server's HTTP API via an SSH tunnel that checks each connecting
client's certificate and role to enforce access control (e.g. a client connection
using a node's certificate won't be able to add and delete users, and can only
get the auth servers registered in the cluster).
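This role-scoped API access control can be sketched as a permission table keyed by the role embedded in the certificate (illustrative Python; the action names are hypothetical, not Teleport's actual API methods):

```python
# Hypothetical API actions allowed per certificate role.
ROLE_PERMISSIONS = {
    "node":  {"get_auth_servers", "announce_presence"},
    "proxy": {"get_auth_servers", "get_nodes", "forward_user_auth"},
    "auth":  {"get_auth_servers", "get_nodes", "add_user", "delete_user"},
}


def authorize(cert_role, action):
    """Allow an API call only if the caller's certificate role permits it."""
    return action in ROLE_PERMISSIONS.get(cert_role, frozenset())
```

An unknown role gets an empty permission set, so a forged or unrecognized role value denies everything by default.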

### User Certificates

When the auth server generates a user certificate, it uses the information provided
by the cluster administrator about the role assigned to this user.

A user role can restrict user logins to specific OS logins or to a subset of
cluster nodes (or apply any other restrictions enforced by the role). Teleport's user name is stored in the certificate's key id field. User certificates do not use any certificate extensions as a workaround for the
[bug](https://bugzilla.mindrot.org/show_bug.cgi?id=2387) that treats any extension
as a critical one, breaking access to the cluster.
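A role that restricts both OS logins and node access by labels can be sketched like this (illustrative Python; the field names are assumptions, not Teleport's actual role schema):

```python
def can_access(role, os_login, node_labels):
    """Check whether a role permits logging in as `os_login`
    on a node carrying `node_labels`."""
    if os_login not in role["logins"]:
        return False
    # Every label selector in the role must match the node's labels.
    return all(node_labels.get(key) == value
               for key, value in role["node_labels"].items())
```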

## Teleport CLI Tools

Teleport offers two command line tools. `tsh` is a client tool used by the end users, while
`tctl` is used for cluster administration.

### TSH

`tsh` is similar in nature to OpenSSH's `ssh` or `scp`. In fact, it has subcommands named after
them so you can call:

```
$ tsh --proxy=p ssh -p 1522 user@host
$ tsh --proxy=p scp -P 1522 example.txt user@host:/destination/dir
```

Unlike `ssh`, `tsh` is very opinionated about authentication: it always uses auto-expiring
keys and it always connects to Teleport nodes via a proxy; the `--proxy` parameter is mandatory.

When `tsh` logs in, the auto-expiring key is stored in `~/.tsh` and is valid for 23 hours by
default, unless you specify another interval via the `--ttl` flag (a maximum of 30 hours, a minimum of 1 minute, and capped by the server-side configuration).

You can learn more about `tsh` in the [User Manual](user-manual.md).

### TCTL

`tctl` is used to administer a Teleport cluster. It connects to the `auth server` listening
on `127.0.0.1` and allows a cluster administrator to manage nodes and users in the cluster.

`tctl` is also the tool which can be used to modify the dynamic configuration of the
cluster, e.g. creating new user roles or connecting trusted clusters.

You can learn more about `tctl` in the [Admin Manual](admin-guide.md).
docs/2.3/draw.io/README.md (new file, 5 lines)
@@ -0,0 +1,5 @@
### draw.io

This directory contains draw.io diagrams used to generate SVGs for
the documentation.
docs/2.3/draw.io/teleport-auth.html (new file, 13 lines)
@@ -0,0 +1,13 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Draw.io Diagram</title>
<meta http-equiv="refresh" content="0;URL='https://www.draw.io/#G0B9TbvSc9__vFNzRzQkxscTBMSlU'"/>
</head>
<body>
<div class="mxgraph" style="position:relative;overflow:auto;width:100%;">
<div style="width:1px;height:1px;overflow:hidden;">7Vhtj+I2EP41SO2HXeWFBPgIHLtbqVdtD05tP51MYhJrTYwc87L7628cj/O6cCzXPa5SEYL4sTMZz/N4PHHPn64P95Js0o8iprznOfGh53/oeZ43chz408izQVzXDwySSBYjVgFz9kIRxBuTLYtp3hiohOCKbZpgJLKMRqqBrQRvPmJDEmu+AuYR4V30Lxar1KBDL6zwB8qS1D7GDUemZ0mip0SKbYbP63n+qviY7jWxtnBWB8c0bwYjRJ4RCdDihmQNn16EWDcASfMyWGhjxdAxbC+FjKlEKhDjLHtqRo7G9rZUKR3Vcc+7g+9+v7+NJdnfMlEA/r0zGS2Wu3k0+vJld/fHy6eXP58OebSYfJzzz9qWPwMpSCHAnL5aH6aUazlYovt0OAxJNHJJPyT94Y1x6e7M0SU5kmbo8NsMeu5oNVw68XLgU49E7k3fWNwRvsVAGiBXz1YRNAaBYFNIlYpEZITPKnRSsE71I4DBSarWHC5duLTR7UjuuP84JBdbGeFd7SG4ehSRCUUrnXn5JRmwLqlYUyWfC8Vwotiu6Q5BMSTluCqCcIFBPI8h9K0WzwXldANxA3S8hTXgOb9TojXpOXMqd/qiFfB9yhSdb0gx/z0kFYjkinE+FVwYKfvT6Sy4AxcnuZLiidqeTGRNPjQJK5Epy59uS6EgBkKvLU1XzCQkDdOGqOtVirydQRO4ryhmugLqhhx7b8LALnOb3bC9r9KM6w0MltZSTIjjvoem1/XR4GkC+YtC8vKcX2ZZJJ83CiKovSs4e7x//PWnJarDwhncnU+UN/iBRLV99dHmuQkKAwtEjqUUe0AiTvKcRR0KCnIc+OjQkjwtacghsSh7d2kP5vG3JuI2sM1/cDw9MFXrgpbu0WZ/WLbDmLQS7XGK+yHmfaQYN/c6wXZ11gkuwdcYxqc9CgaTrR7ll7u7AWzTmjBTx7vqO1fbUGC377KMalkyEepYAiaJnrsdttED8o4myyhdlk+QgZN5/05wLvb/hcz/7yaUvk0gVm3XTPwo9UY+CTk8cbIEPsJEX1lkC5aaCGcWGW91zQjFpEhATMcsgHvVLV0jn2ik61O9z+QUspTILrc15dtcFeKiO2CqNPSN2x5ErqcBTbLWcjO/gHzOjVLnD/A7hYL4LHMLqb3QEzI3EpCVkEwxeIF528wArMe/tVJAnhqv1ZqvrR2tYwbvNmPOEq3yNYvjomIV0LOCxQhYChiFzneR/iDE9G2Tlt1ba9r37B53dqq9WPy4yP7PUmdkKfeqaQpfga9d9pSlzaBd25yqiK732lfQc4LjoFn3uO9Y+AQuvmPYZ/UvrXz6Ibp5zNJVKx87zZ+4Qr9cj3WxvarHE1rrN7X2nkX2oJ277JZygdZaL39tb66rte5x1cNi8ag3r8ffiipGVy6m8jDbWbfk+GYhUZwqkmUxQEsPpwKjg0kv+KCNQJmEFUUECtLni8dLDU6WlOvDBXM4WhM3Ho+e2M7wRBedqU4168o9siC/4+zriDTa718tYYjVKoel8ibGoVmdlZrh1Rm6P/sK</div>
</div>
<a style="position:absolute;top:50%;left:50%;margin-top:-128px;margin-left:-64px;" href="https://www.draw.io/#G0B9TbvSc9__vFNzRzQkxscTBMSlU" target="_blank"><img border="0" src="https://www.draw.io/images/drawlogo128.png"/></a>
</body>
</html>
docs/2.3/draw.io/teleport-just-proxy.html (new file, 13 lines)
@@ -0,0 +1,13 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Draw.io Diagram</title>
<meta http-equiv="refresh" content="0;URL='https://www.draw.io/#G0B9TbvSc9__vFS2Z3eFpNaGFRUEU'"/>
</head>
<body>
<div class="mxgraph" style="position:relative;overflow:auto;width:100%;">
<div style="width:1px;height:1px;overflow:hidden;">7Vpbc+I4E/01PMYly/j2CExIHnZ3UkumZr99mTK2wKoYi5INTubXT8tuGV9gP5gBspvExU1tuSX3Od3qlhlYk9XznQzW8e8iYsmAkuh5YH0aUGp61IIvJXmpJLYDLSVYSh5hp51gxr8zFBKUbnjEslbHXIgk5+u2MBRpysK8JVuIpD3EOlhq9TvBLAySvvQrj/K4knrU2cnvGV/GehjT8asz8yB8WkqxSXG8AbUW5VGdXgVaF97VM6maNy5FDS8o2Y2Vtub0XYhVSyBZVhsLtS44TgzbcyEjJlEvyhKePrUtxyJ9WZznyqqjAZ3CqygKI5JBYXBRCqw7MvYf59tZ6H/7tp3O6N8Wm67/CO6mf365/aJ0WbfABCkEqFO/Vs8Tlig2aKAjK3IXC38espCEC/ummtL0yN41OJKlOOFfU4hE3AbJBu34UEmy/EUzAjQA06AxLmKes9k6CNWZAsgOsgVPkolIRGVjazK5tacw+jjLpXhi+kwqUqWgpAdTczHVpSLNZziOakuRBzkXCnRgwvj/3yuaY8tkzrS31SCAOzKxYrl8UWRDrtlD5AC6oosGKHZUd7BH3GC5lgVIm2WteWdr+IHmPs70GBQapoe5EwPe94+fAQUYEkxAHoIsK4DEavQNzJE6CUxqPJfwa6l+9fACWyh5nK9gKqVh92LRBA5FQcKXyvohWFw5zVhZlkNsGOGJFY8iNcxeJgjovUhEAZIY+jG44CIYOkO3heHQ62Ho7sFQX3VODIc9DGt8NDyPLGFrIXPodqPglOIZhuyDWElgAvWV7w5Y220DS7HZANa00OQtZFF2TmTtHrJfrxUYL2BZV6cSaFmvb1lrj8to2TkNi2t7y2W6do0gI8Em2oel0UjKkoIoaZuxNDCBA85EQRbXa0yWBzLXl4YJBFMenkLfTGxkierhxRNGWLIDWpBG6o6OAmpoYzBDoKx+bDPhJntI1cJ9UOFwD4LDzTbcjVJjaDr10XY+06IGuGB9dHyssguqbCYcvVF8z/CIXR8dFyfE8N2Do1Sm7Y1Scq623k/REKfR9G8GkZeAJxYZhMj/1jIbcQmJf5U6ATKKKheJ0C5t0xOy/x4/bRS1IjRm+OcMJKjy5wPJLhr8WiwB+8qXv1TSati6+T+85FpxxsQ+xwcagq6mkcT4cYlAY3VZQ3C2JwcTq5O+e2i9y8cLUzvLG1i3TIT+nwh1Mp90Fn44KT/fwkV8wyX20PWHNoWlpU0uF9snk8txiUGI6bumZ1u272Mcq0Pd0HBtx/U816KuY5qdYS5IvX6R/nkNYZ+S2exeQc0khPCsR8frVuvXWoIsnZTqCh5VNImmC4dLl/Bmv4YHSVnDz+Kq5kv4FiyooQrVQr+ApT5nH4U8xIw2lDpLeIVCXofEBpKNwn1Ubrygo9U7M799GvU3y96m09kaB52270sXruZ1/RJSAQS3i45FNmUGT7Y8gE86hc3cd+9rNuksZ/YeBPd520UQ7OfuoOPN7K7UdnyF7RX9jONQshAmXN35R3l7THkLuxOvVt7qh2DvZKNMh4R/Z8HRSVR6Ok6sMerD7G6ndArjyxUVerFu0It+5K3HPoDq0EGXoOdPXKG5e5hcwb77j4F1+wM=</div>
</div>
<a style="position:absolute;top:50%;left:50%;margin-top:-128px;margin-left:-64px;" href="https://www.draw.io/#G0B9TbvSc9__vFS2Z3eFpNaGFRUEU" target="_blank"><img border="0" src="https://www.draw.io/images/drawlogo128.png"/></a>
</body>
</html>
docs/2.3/draw.io/teleport-proxy-ssh.html (new file, 13 lines)
@@ -0,0 +1,13 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Draw.io Diagram</title>
<meta http-equiv="refresh" content="0;URL='https://drive.draw.io/#G0B9TbvSc9__vFekRkNFI2N1hwRHM'"/>
</head>
<body>
<div class="mxgraph" style="position:relative;overflow:auto;width:100%;">
<div style="width:1px;height:1px;overflow:hidden;">5VlLc6M4EP41PsaFeAmOsSdO9jCzqThVu3uawiBAZYxSQo6d+fXTghbm4dnNw052Ki4/UEt0S/11f2rhiTPf7K9l9JB/FQkrJraV7CfOl4ltE9+24EdLnhqJS6GlBZnkCQ46CJb8B0Mh3pdtecKq3kAlRKH4Q18Yi7JkserJUlH0TTxEmVF/ECzjqBhL/+KJyhtpYPsH+Q3jWW7MED9selZRvM6k2JZob2I7af1qujeR0YWr2ltN84K6KHlCycFW2ZvTDyE2PYFkVess1JFynBi2V0ImTKJelBW8XPc9xxJzW66U9urlxF7AO5H8kU0TGe2mXNQi59qahferx2Ucfv/+uGDru/W3xR/2N5Lv7m6+am3OFcSCFAIU6qvNfs4KHQ8G6sRJaJqGq5jFVpx6F82kFs8c3cIjWYlTfptCDMXHqNiiJ28bSaWeTEyABog1aMx2OVds+RDFumcH4Q6ylBfFXBSi8bIzn195C7A+q5QUa2Z6SlFqBXWAMD0Xom8VpVqiHd2WQkWKCw07xMIMp8akYiaZ/nv59RBc+zUTG6bkkw43jDavjbZGYHJxdwh2H0fknTg3sggDJ2s1H3wNF+ju57neHbl+YvsFGJyt4CLTF/esYA9CKhh2AZ9bKfZgEgdJM8pIYALtnSMQwYdanqsNzK/29lGAumiiKCp4piGJwec6l2YaEQ6UcYkdG54k2szR8BAwOi3EDiQ5jGNww3mApbQHrI3NDrDEQZd3kXVRdkpkCS6nB+0wqRLgXWyio1mZXEpZuwolfYjq7LLgBT1JVOVtHlUqksrcGhdRVfG4dbM29DInwzzFVtYgHhuCDgObGfuFmnq5v8bKdYMeVqbZxQpWOcKqFR4DC83dCg6L68SFFU6p5bk0dD3b8ryeYUKxbRQ2C0cdXRodqvWpNbUsElISeI4XhrhnGb3EnVLPp0FAHZv6hAzMNL4bmanjrPXX60JvzCodErncgn+Hkfi+pJ5wCUVK0wZva7zPxAd9om/LqW6QGZI4N9UTDI8OKvYU8YAFApkqBs1tBfwKo3gE3/YCqhArgnoK0NJEC+7YrgrIbNtas6PbwCejfDfoQ0yc8WZOTBicGmInsQIrpiQOViwidnpBxqXUR5A+23P1N+aavv5HX0+9N2wH/wKA0+dx74w87oYDsO3XUvdQU8sMp2dnx3J93w6o68WR48YEi/NujBDNAzd/3kPZDZ0AEgBSVTs4t+gZ1Xz96RPd9/u1HZypn8flLsrelOhDDMfFHRD1+xyZzuBbaqK/LYnG+6RzJIeN7KSuHVNop3hZLm+0vUyv9PfKifeqeaiDD2MMliHq7GAZIEf30gRvOymWx463v9MZqHu+Oc4B/5MDzuCJBjHPuV5zphmcYhCyd9gYcTPvRIujN8Y7UMnZoy6Om/SPdS6mWDB/+o2RmuMeIhagGz9kXxyfcT60/q1r3rYCPtnjkJdSgWcekCNE5hnUOajA93DCaCtE2nkxE4w4ZTiZVzMBNA8Px5vhh39NnKuf</div>
|
||||
</div>
|
||||
<a style="position:absolute;top:50%;left:50%;margin-top:-128px;margin-left:-64px;" href="https://drive.draw.io/#G0B9TbvSc9__vFekRkNFI2N1hwRHM" target="_blank"><img border="0" src="https://www.draw.io/images/drawlogo128.png"/></a>
|
||||
</body>
|
||||
</html>
|
13
docs/2.3/draw.io/teleport-tunnels.html
Normal file
|
@ -0,0 +1,13 @@
|
|||
<!--[if IE]><meta http-equiv="X-UA-Compatible" content="IE=5,IE=9" ><![endif]-->
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>teleport-tunnels</title>
|
||||
</head>
|
||||
<body style="background-color:#ffffff;">
|
||||
<div class="mxgraph" style="position:relative;overflow:auto;width:100%;">
|
||||
<div style="width:1px;height:1px;overflow:hidden;">7Vpbb+LIEv41eSTyHXhMSNh52dFIOdLuPnbsBloxbrYxA8mvnyq7yrQvMJkcO8kkCxLYZbu6/dVXl75c+LP14Q8jNqs/dSLTC89JDhf+zYXnuZNwDH8oeSwlU58ES6MSuukouFNPkoQOSXcqkdvajbnWaa42dWGss0zGeU0mjNH7+m0LndZb3Yglt3gU3MUibUv/Ukm+KqUTLzrKv0i1XHHLbjQtr9yL+GFp9C6j9i48f1F8ystrwbroRbcrkeh9TXRwytPROKBuPJIk8KiVjchq/XzSel0TGLmtMCW1C0WdpfN7bRJpShHLUpU92Mj5t2BkozU8iUfrw0ymaGi2YfnY/MTVCkQjs1rbpx649yZ+LKeOO/bGU9cf+aWC7yLd0bsQaPkjGwrw2+BhvLuHv+v9SuXybiNilO2BmyBb5esUzlw4NDoXudII3hQwvd6kQmUjI6H966URiYKOznSqEZZMZ6iRuiBNLpnd7dc6ggUeIfVa5uYRLUmGDIMJGbIUhGN6s/2RW15ANlhZvPLpPkFGWVa6jyjCAQH5PFDDFqizdLfNpRldtdCFd4ae2Ahuc6MfZAOjhUrThkikaokwx4ARsuwaEVTgYFd0Ya2SBJvpNJmGuxdp4RcruE/CA9cLneV31DXsSXFekhxoVZ5zJ8Drghv89mRAb0LGIQMGYzq3DOiyE9UMSLI+DUhcsgz4zehDqdI2HmiBcHkKYttmANdsdhvOoUPdBi4CGnrJEXnLEpZXoSH6ADxi4NhjWIcFeNSBN8v6xJtibofD0Kv+5zAt+405mp1zGLbx0A7DRcW5PGJbjFMygpSI7aogPp4830acltYHSCub1WVZk3jlP6pFXJzLCA8RM4ALDAL++oU6kek8RpxQfiIzWfZ0UVHDnlHxQTJqo57gmuimZOH8UeQUPagHhTl8ZrO+kmAjhoZu26U9polNCX6uT0oQAT50DPU5Ib2DGMq2PueCMkuuSve4iVOx3aq4Hkmfy/fKZU9w3YFPdYWre78CXSatocFPIbcQDTsQZZmRKRj5e119F8zUwjetoGHLhdg0bNGwoWOrdyaW9JhdaDc1tbjR1JQLs5R5S1Nh9+rFX0aFtu/NlZF7gU/9nunUnbTpeOUTz/pw5WBCAwFOp5wm7XTKaW7wdNoeln2FCYAPFTurEfF7iJ3BJ8R7+oZ40yTPZ8Ib6r+3w5umBs/WBpCUGRLCzCoXSHI61zdKAmHydqXx8uzPVmjEaIbYjtHQmxamlfAXKgFwkJ4qAWDCpe8GXhS6wQTCHE/UVoqdy0k4BTuFoQ9THOQbr1AmtKc5Xo8VlTLo/t/kdsXJP8WwLeT75+DkNa/8OX1gIrvA7FysP0mxOsNCh2w1CMP8JsMaOp7PMP8swyJ//AKGgaUEAsO3bfCG7S+8T0TmODK2VNkbf9vTRh+Cv8/lJsyvvxo3Iw61L4l+NU1jh4qtoenHDQ1FP36vj1zENCcZ37KI4e5/JrzfclDE0fRT4f2GgyKvPQh99XTWe8oaBU5zYZai9BBZqwpPHK54wuaXs1Y04Y0BpGnaWCfuK2u1ukwNDZa12ovTv+VQEObI6kVu1/JXb7xqrpW+uFKPmks0TU3DDfd4Z817ii9V+YwPNMrnfoZ75UueiU88N8JzCkNW1VFvcwoRd5PDxlBVdbPLPBAYLD51DeqiFNco7uFgiQcswAKh2KHGTIz+3eFGKrgBtoRNkJOWqHz2fzvYzgYdKDVA70oldcUgthp7xysmNYf7yerdfD6d9rfU3IyInGktt+laaW4OI/sonLgrHZQ5xxBX4LfNkHy7QoMXv6PRpli0xtRVbkgBXoCUziw5KHIyKI/x4AS3eudRKheoYTgWBWf2sKB8LtYqRXRmELcU4OE5XyVuvqiX7gMU6tW5xbcgotrm/1yeg9PjhswypB234Pq3PwA=</div>
|
||||
</div>
|
||||
<script type="text/javascript" src="https://www.draw.io/embed.js?s=arrows2"></script>
|
||||
</body>
|
||||
</html>
|
290
docs/2.3/enterprise.md
Normal file
|
@ -0,0 +1,290 @@
|
|||
# Teleport Enterprise Features
|
||||
|
||||
This chapter covers Teleport features that are only available in the commercial
|
||||
edition of Teleport, called "Teleport Enterprise".
|
||||
|
||||
Below is the full list of features that are only available to users of
|
||||
Teleport Enterprise:
|
||||
|
||||
|Teleport Enterprise Feature|Description
|
||||
---------|--------------
|
||||
|[Role Based Access Control (RBAC)](#rbac)|Allows Teleport administrators to define User Roles and restrict each role to specific actions. RBAC also allows administrators to partition cluster nodes into groups with different access permissions.
|
||||
|[External User Identity Integration](#external-identities)| Allows Teleport to integrate with existing enterprise identity systems. Examples include Active Directory, GitHub, Google Apps and numerous identity middleware solutions like Auth0, Okta, and so on. Teleport supports the LDAP, SAML and OAuth/OpenID Connect protocols to interact with them.
|
||||
|[Dynamic Configuration](#dynamic-configuration) | The open source edition of Teleport takes its configuration from a single YAML file. Teleport Enterprise can also be controlled at runtime, even programmatically, by dynamically updating its configuration.
|
||||
|[Integration with Kubernetes](#integration-with-kubernetes)| Teleport can be embedded into Kubernetes clusters. This allows Teleport users to deploy and remotely manage Kubernetes on any infrastructure, even behind firewalls. Teleport embedded into Kubernetes is available as a separate offering called [Telekube](http://gravitational.com/telekube/).
|
||||
|External Audit Logging | In addition to supporting the local filesystem, Teleport Enterprise is capable of forwarding the audit log to external systems such as Splunk, Alert Logic and others.
|
||||
|Commercial Support | In addition to these features, Teleport Enterprise also comes with a premium support SLA with guaranteed response times.
|
||||
|
||||
!!! tip "Contact Information":
|
||||
If you are interested in Teleport Enterprise or Telekube, please reach out to
|
||||
`sales@gravitational.com` for more information.
|
||||
|
||||
## RBAC
|
||||
|
||||
RBAC stands for `Role Based Access Control`, quoting
|
||||
[Wikipedia](https://en.wikipedia.org/wiki/Role-based_access_control):
|
||||
|
||||
> In computer systems security, role-based access control (RBAC) is an
|
||||
> approach to restricting system access to authorized users. It is used by the
|
||||
> majority of enterprises with more than 500 employees, and can implement
|
||||
> mandatory access control (MAC) or discretionary access control (DAC). RBAC is
|
||||
> sometimes referred to as role-based security.
|
||||
|
||||
Every user in Teleport is **always** assigned a role. OSS Teleport automatically
|
||||
creates a role-per-user, while Teleport Enterprise allows far greater control over
|
||||
how roles are created, assigned and managed.
|
||||
|
||||
Let's assume your company is using Active Directory to authenticate users, so for a typical
|
||||
enterprise deployment you would:
|
||||
|
||||
1. Configure Teleport to [use existing user identities](#external-identities) stored
|
||||
in Active Directory.
|
||||
2. Using Active Directory, assign a user to several groups, perhaps "sales",
|
||||
"developers", "admins", "contractors", etc.
|
||||
3. Create Teleport Roles - perhaps "users", "developers" and "admins".
|
||||
4. Define mappings from Active Directory groups (claims) to Teleport Roles.
|
||||
|
||||
This section covers the process of defining user roles.
|
||||
|
||||
### Roles
|
||||
|
||||
A role in Teleport defines the following restrictions for the users who are
|
||||
assigned to it:
|
||||
|
||||
**OS logins**
|
||||
|
||||
The OS logins (i.e. UNIX usernames) the user is allowed to use. For example, you may not want your interns to log in as "root".
|
||||
|
||||
**Allowed Labels**
|
||||
|
||||
A user will only be granted access to a node if all of the labels defined in
|
||||
the role are present on the node. This effectively means we use an AND
|
||||
operator when evaluating access using labels. Two examples of using labels to
|
||||
restrict access:
|
||||
|
||||
1. If you split your infrastructure at a macro level with the labels
|
||||
`environment: production` and `environment: staging` then you can create roles
|
||||
that only have access to one environment. Let's say you create an `intern`
|
||||
role with allow label `environment: staging` then interns will not have access
|
||||
to production servers.
|
||||
1. Like above, suppose you split your infrastructure at a macro level with the
|
||||
labels `environment: production` and `environment: staging`. In addition,
|
||||
within each environment you want to split the servers used by the frontend and
|
||||
backend teams, `team: frontend`, `team: backend`. If you have an intern that
|
||||
joins the frontend team that should only have access to staging, you would
|
||||
create a role with the following allow labels
|
||||
`environment: staging, team: frontend`. That would restrict users with the
|
||||
`intern` role to only staging servers the frontend team uses.
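
The second scenario above can be sketched as a fragment of a role specification, using the `node_labels` format from the role example later in this chapter. This is a sketch, not a complete role definition:

```yaml
# Role spec fragment (sketch): a user with this role can only
# access nodes carrying BOTH labels below, since labels are
# combined with a logical AND.
spec:
  node_labels:
    "environment": "staging"
    "team": "frontend"
```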
|
||||
|
||||
**Session Duration**
|
||||
|
||||
Also known as "Session TTL" - the period of time a user is allowed to stay logged in.
|
||||
|
||||
**Resources**
|
||||
|
||||
Resources define access levels to objects stored on the Teleport backend.
|
||||
|
||||
Access is either `read` or `write`. Typically you will not set this for regular users and will simply accept the default values.
|
||||
For admins, you will often want to grant full read/write access, which you can do by setting `resources` to `"*": ["read", "write"]`.
|
||||
|
||||
Currently supported resources are:
|
||||
|
||||
* `oidc` - OIDC Connector
|
||||
* `cert_authority` - Certificate Authority
|
||||
* `tunnel` - Reverse Tunnel (used with trusted clusters)
|
||||
* `user` - Teleport users
|
||||
* `node` - Teleport nodes
|
||||
* `auth_server` - Auth server
|
||||
* `proxy` - Proxy server
|
||||
* `role` - Teleport roles
|
||||
* `namespace` - Teleport namespaces
|
||||
* `trusted_cluster` - Trusted Clusters (creates `cert_authority` and `tunnel`).
|
||||
* `cluster_auth_preference` - Authentication preferences.
|
||||
* `universal_second_factor` - Universal Second Factor (U2F) settings.
|
||||
|
||||
**Namespaces**
|
||||
|
||||
Namespaces allow you to partition nodes within a single cluster to restrict access to a set of nodes.
|
||||
To use namespaces, first create a `namespace` resource on the backend, then set `namespace`
|
||||
under `ssh_service` in `teleport.yaml` for each node which you want to be part of said namespace.
|
||||
For admins, you might want to give them access to all namespaces, which you can do by setting `namespaces` to `["*"]`.
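
As a sketch of the node side of this setup (the exact field layout is an assumption based on the description above), a node can be placed into a namespace via its `teleport.yaml`:

```yaml
# teleport.yaml on the node (sketch)
ssh_service:
  enabled: yes
  # place this node into the "staging" namespace, which must
  # already exist as a `namespace` resource on the backend
  namespace: staging
```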
|
||||
|
||||
Roles are managed like any other resource using [dynamic configuration](#dynamic-configuration)
|
||||
commands. For example, let's create a role `intern`.
|
||||
|
||||
First, let's define this role in YAML format and save it into `interns-role.yaml`:
|
||||
|
||||
```yaml
|
||||
kind: role
|
||||
version: v1
|
||||
metadata:
|
||||
description: "This role is for interns"
|
||||
name: "intern"
|
||||
spec:
|
||||
# interns can only SSH as 'intern' OS login
|
||||
logins: ["intern"]
|
||||
|
||||
# automatically log users out after 8 hours
|
||||
max_session_ttl: 8h0m0s
|
||||
|
||||
# Interns will only be allowed to SSH into machines
|
||||
# with the label 'environment' set to 'staging'
|
||||
node_labels:
|
||||
"environment": "staging"
|
||||
```
|
||||
|
||||
Now, we just have to create this role:
|
||||
|
||||
```bash
|
||||
$ tctl create -f interns-role.yaml
|
||||
```
|
||||
|
||||
## External Identities
|
||||
|
||||
The standard OSS edition of Teleport stores user accounts using a local storage
|
||||
back-end, typically on a file system or using a highly available database like `etcd`.
|
||||
|
||||
Teleport Enterprise allows the administrators to integrate Teleport clusters
|
||||
with existing user identities like Active Directory or Google Apps using protocols
|
||||
like LDAP, OpenID/OAuth2 or SAML. Refer to the following links for additional
|
||||
integration documentation:
|
||||
|
||||
* [OpenID Connect (OIDC)](oidc.md)
|
||||
* [Security Assertion Markup Language 2.0 (SAML 2.0)](saml.md)
|
||||
|
||||
In addition, Teleport Enterprise can query for users' group membership and assign different
|
||||
roles to different groups, see the [RBAC section](#rbac) for more details.
|
||||
|
||||
## Dynamic Configuration
|
||||
|
||||
OSS Teleport reads its configuration from a single YAML file,
|
||||
usually located in `/etc/teleport.yaml`. Teleport Enterprise extends that by
|
||||
allowing cluster administrators to dynamically update certain configuration
|
||||
parameters while Teleport is running. This can also be done programmatically.
|
||||
|
||||
Teleport treats such dynamic settings as objects, also called "resources".
|
||||
Each resource can be described in a YAML format and can be created, updated or
|
||||
deleted at runtime through three `tctl` commands:
|
||||
|
||||
| Command Example | Description
|
||||
|---------|------------------------------------------------------------------------
|
||||
| `tctl create -f tc.yaml` | Creates the trusted cluster described in `tc.yaml` resource file.
|
||||
| `tctl del -f tc.yaml` | Deletes the trusted cluster described in `tc.yaml` resource file.
|
||||
| `tctl update -f tc.yaml` | Updates the trusted cluster described in `tc.yaml` resource file.
|
||||
|
||||
This is very similar to how the `kubectl` command works in
|
||||
[Kubernetes](https://en.wikipedia.org/wiki/Kubernetes).
|
||||
|
||||
Two resources are supported currently:
|
||||
|
||||
* See [Trusted Clusters](#dynamic-trusted-clusters): to dynamically connect / disconnect remote Teleport clusters.
|
||||
* See [User Roles](#rbac): to create or update user permissions on the fly.
|
||||
|
||||
### Dynamic Trusted Clusters
|
||||
|
||||
See [Dynamic Trusted Clusters](trustedclusters.md) for more details and examples.
|
||||
|
||||
### Authentication Preferences
|
||||
|
||||
Using dynamic configuration you can also view and change the type of cluster authentication Teleport supports at runtime.
|
||||
|
||||
#### Viewing Authentication Preferences
|
||||
|
||||
You can query the Cluster Authentication Preferences (abbreviated `cap`) resource using `tctl` to find out what your current authentication preferences are.
|
||||
|
||||
```
|
||||
$ tctl get cap
|
||||
Type Second Factor
|
||||
---- -------------
|
||||
local u2f
|
||||
```
|
||||
|
||||
In the above example we are using local accounts, and the second factor used is Universal Second Factor (U2F, abbreviated `u2f`). To drill down and get more details, you can once again use `tctl`:
|
||||
|
||||
```
|
||||
$ tctl get u2f
|
||||
App ID Facets
|
||||
------ ------
|
||||
https://localhost:3080 ["https://localhost" "https://localhost:3080"]
|
||||
```
|
||||
|
||||
#### Updating Authentication Preferences
|
||||
|
||||
To update Cluster Authentication Preferences, you'll need to update the resources you viewed before. You can do that by creating the following file on disk and then updating the backend with `tctl create -f <filename>`.
|
||||
|
||||
```yaml
|
||||
kind: cluster_auth_preference
|
||||
version: v2
|
||||
metadata:
|
||||
description: ""
|
||||
name: "cluster-auth-preference"
|
||||
namespace: "default"
|
||||
spec:
|
||||
type: local # allowable types are local or oidc
|
||||
second_factor: otp # allowable second factors are none, otp, or u2f.
|
||||
```
|
||||
|
||||
If your Second Factor Authentication type is U2F, you'll need to create an additional resource:
|
||||
|
||||
|
||||
```yaml
|
||||
kind: universal_second_factor
|
||||
version: v2
|
||||
metadata:
|
||||
description: ""
|
||||
name: "universal-second-factor"
|
||||
namespace: "default"
|
||||
spec:
|
||||
app_id: "https://localhost:3080"
|
||||
facets: ["https://localhost", "https://localhost:3080"]
|
||||
```
|
||||
|
||||
If you are not using local accounts but rather an external identity provider like OIDC, you'll need to create an OIDC resource like the one below.
|
||||
|
||||
```yaml
|
||||
kind: oidc
|
||||
version: v2
|
||||
metadata:
|
||||
description: ""
|
||||
name: "example"
|
||||
namespace: "default"
|
||||
spec:
|
||||
issuer_url: https://accounts.example.com
|
||||
client_id: 00000000000000000.example.com
|
||||
client_secret: 00000000-0000-0000-0000-000000000000
|
||||
redirect_url: https://localhost:3080/v1/webapi/oidc/callback
|
||||
display: "Welcome to Example.com"
|
||||
scope: ["email"]
|
||||
claims_to_roles:
|
||||
- {claim: "email", value: "foo@example.com", roles: ["admin"]}
|
||||
```
|
||||
|
||||
## Integration With Kubernetes
|
||||
|
||||
Gravitational maintains a [Kubernetes](https://kubernetes.io/) distribution
|
||||
with Teleport Enterprise integrated, called [Telekube](http://gravitational.com/telekube/).
|
||||
|
||||
Telekube's aim is to dramatically lower the cost of Kubernetes management in a
|
||||
multi-region / multi-site environment.
|
||||
|
||||
Its highlights:
|
||||
|
||||
* Quickly create Kubernetes clusters on any infrastructure.
|
||||
* Every cluster includes an SSH bastion and can be managed remotely even if behind a firewall.
|
||||
* Every Kubernetes cluster becomes a Teleport cluster, with all Teleport
|
||||
capabilities like session recording, audit, etc.
|
||||
* Every cluster is dependency-free and autonomous, i.e. highly available (HA)
|
||||
and includes a built-in caching Docker registry.
|
||||
* Automated remote cluster upgrades.
|
||||
|
||||
Typical users of Telekube are:
|
||||
|
||||
* Software companies who want to deploy their Kubernetes applications into
|
||||
the infrastructure owned by their customers, i.e. "on-premise".
|
||||
* Managed Service Providers (MSPs) who manage Kubernetes clusters for their
|
||||
clients.
|
||||
* Enterprises who run many Kubernetes clusters in multiple geographically
|
||||
distributed regions / clouds.
|
||||
|
||||
!!! tip "Contact Information":
|
||||
For more information about Telekube, please reach out to us at `sales@gravitational.com` or fill out the contact form on our [website](http://gravitational.com/).
|
48
docs/2.3/faq.md
Normal file
|
@ -0,0 +1,48 @@
|
|||
# FAQ
|
||||
|
||||
### Can I use Teleport in production today?
|
||||
|
||||
Teleport has completed a security audit from a nationally recognized technology security company.
|
||||
So we are comfortable with the use of Teleport from a security perspective. However, Teleport
|
||||
is still a relatively young product so you may experience usability issues. We are actively
|
||||
supporting Teleport and addressing any issues that are submitted to the [github repo](https://github.com/gravitational/teleport).
|
||||
|
||||
### Can I connect to nodes behind a firewall?
|
||||
|
||||
Yes, Teleport supports reverse SSH tunnels out of the box. To configure behind-firewall clusters
|
||||
refer to the [Trusted Clusters](admin-guide.md#trusted-clusters) section of the Admin Manual.
|
||||
|
||||
### Does Web UI support copy and paste?
|
||||
|
||||
Yes. You can copy and paste using the mouse. For working with a keyboard, Teleport employs a `tmux`-like
|
||||
"prefix" mode. To enter prefix mode, press `Ctrl+A`.
|
||||
|
||||
While in prefix mode, you can press `Ctrl+V` to paste, or enter text selection mode by pressing `[`.
|
||||
When in text selection mode, move around using `hjkl`, select text by toggling `space` and copy
|
||||
it via `Ctrl+C`.
|
||||
|
||||
### Can I use OpenSSH with a Teleport cluster?
|
||||
|
||||
Yes. Take a look at the [Using OpenSSH client](user-manual.md#using-teleport-with-openssh) section in the User Manual
|
||||
and [Using OpenSSH servers](admin-guide.md) in the Admin Manual.
|
||||
|
||||
### What TCP ports does Teleport use?
|
||||
|
||||
The [Ports](admin-guide.md#ports) section of the Admin Manual covers this.
|
||||
|
||||
### Does Teleport support LDAP or AD?
|
||||
|
||||
Gravitational offers this feature as part of the commercial version of Teleport called
|
||||
[Teleport Enterprise](enterprise.md#rbac).
|
||||
|
||||
### Do you offer a commercial version of Teleport?
|
||||
|
||||
Yes, in addition to the [numerous advanced features](enterprise.md), the commercial Teleport license
|
||||
also gives you the following:
|
||||
|
||||
* Commercial support.
|
||||
* Premium SLA with guaranteed response times.
|
||||
* Implementation Services: our team can help you integrate Teleport with your
|
||||
existing systems and processes.
|
||||
|
||||
Reach out to `sales@gravitational.com` if you have questions about the commercial edition of Teleport.
|
BIN
docs/2.3/img/adfs-1.png
Normal file
After Width: | Height: | Size: 207 KiB |
BIN
docs/2.3/img/adfs-2.png
Normal file
After Width: | Height: | Size: 203 KiB |
BIN
docs/2.3/img/adfs-3.png
Normal file
After Width: | Height: | Size: 205 KiB |
BIN
docs/2.3/img/adfs-4.png
Normal file
After Width: | Height: | Size: 211 KiB |
BIN
docs/2.3/img/auth-server.png
Normal file
After Width: | Height: | Size: 21 KiB |
2
docs/2.3/img/auth-server.svg
Normal file
After Width: | Height: | Size: 11 KiB |
BIN
docs/2.3/img/everything.png
Normal file
After Width: | Height: | Size: 69 KiB |
2
docs/2.3/img/everything.svg
Normal file
After Width: | Height: | Size: 38 KiB |
BIN
docs/2.3/img/oidc-consent.png
Normal file
After Width: | Height: | Size: 39 KiB |
BIN
docs/2.3/img/oidc-copy-creds.png
Normal file
After Width: | Height: | Size: 8 KiB |
BIN
docs/2.3/img/oidc-create-client-id.png
Normal file
After Width: | Height: | Size: 42 KiB |
BIN
docs/2.3/img/oidc-create-project.png
Normal file
After Width: | Height: | Size: 43 KiB |
BIN
docs/2.3/img/oidc-login.png
Normal file
After Width: | Height: | Size: 40 KiB |
BIN
docs/2.3/img/okta-saml-1.png
Normal file
After Width: | Height: | Size: 54 KiB |
BIN
docs/2.3/img/okta-saml-2.1.png
Normal file
After Width: | Height: | Size: 33 KiB |
BIN
docs/2.3/img/okta-saml-2.2.png
Normal file
After Width: | Height: | Size: 36 KiB |
BIN
docs/2.3/img/okta-saml-2.png
Normal file
After Width: | Height: | Size: 37 KiB |
BIN
docs/2.3/img/okta-saml-3.1.png
Normal file
After Width: | Height: | Size: 45 KiB |
BIN
docs/2.3/img/okta-saml-3.png
Normal file
After Width: | Height: | Size: 97 KiB |
BIN
docs/2.3/img/okta-saml-4.png
Normal file
After Width: | Height: | Size: 14 KiB |
BIN
docs/2.3/img/okta-saml-5.png
Normal file
After Width: | Height: | Size: 35 KiB |
BIN
docs/2.3/img/only-auth.png
Normal file
After Width: | Height: | Size: 32 KiB |
BIN
docs/2.3/img/overview.png
Normal file
After Width: | Height: | Size: 59 KiB |
2
docs/2.3/img/overview.svg
Normal file
After Width: | Height: | Size: 34 KiB |
2
docs/2.3/img/proxy-ssh-1.svg
Normal file
After Width: | Height: | Size: 16 KiB |
2
docs/2.3/img/proxy-ssh-2.svg
Normal file
After Width: | Height: | Size: 16 KiB |
2
docs/2.3/img/proxy-web.svg
Normal file
After Width: | Height: | Size: 19 KiB |
2
docs/2.3/img/tunnel.svg
Normal file
After Width: | Height: | Size: 12 KiB |
9
docs/2.3/index.html
Normal file
|
@ -0,0 +1,9 @@
|
|||
<html>
|
||||
<head>
|
||||
<meta http-equiv="Refresh" content="0; url=quickstart" />
|
||||
</head>
|
||||
|
||||
<body>
|
||||
<a href="quickstart">Teleport QuickStart Guide</a>
|
||||
</body>
|
||||
</html>
|
144
docs/2.3/oidc.md
Normal file
|
@ -0,0 +1,144 @@
|
|||
# OpenID Connect (OIDC)
|
||||
|
||||
Teleport supports [OpenID Connect](http://openid.net/connect/) (also known as
|
||||
`OIDC`) to provide external authentication using commercial OpenID providers
|
||||
like [Auth0](https://auth0.com) as well as open source identity managers like
|
||||
[Keycloak](http://www.keycloak.org).
|
||||
|
||||
## Configuration
|
||||
|
||||
OIDC relies on redirects to return control back to Teleport after
|
||||
authentication is complete. Decide on the redirect URL you will be using and
|
||||
know it in advance before you register Teleport with an external identity
|
||||
provider.
|
||||
|
||||
### Development mode
|
||||
|
||||
For development purposes we recommend the following `redirect_url`:
|
||||
`https://localhost:3080/v1/webapi/oidc/callback`.
|
||||
|
||||
### Identity Providers
|
||||
|
||||
Register Teleport with the external identity provider you will be using and
|
||||
obtain your `client_id` and `client_secret`. This information should be
|
||||
documented on the identity provider's website. Here are a few links:
|
||||
|
||||
* [Auth0 Client Configuration](https://auth0.com/docs/clients)
|
||||
* [Google Identity Platform](https://developers.google.com/identity/protocols/OpenIDConnect)
|
||||
* [Keycloak Client Registration](http://www.keycloak.org/docs/2.0/securing_apps_guide/topics/client-registration.html)
|
||||
|
||||
Add your OIDC connector information to `teleport.yaml`. A few examples are
|
||||
provided below.
|
||||
|
||||
#### OIDC with pre-defined roles
|
||||
|
||||
In the configuration below, we are requesting the scope `group` from the
|
||||
identity provider, then mapping the value to either the `admin` role or the `user`
|
||||
role depending on the value returned for `group` within the claims.
|
||||
|
||||
```yaml
|
||||
authentication:
|
||||
type: oidc
|
||||
oidc:
|
||||
id: example.com
|
||||
redirect_url: https://localhost:3080/v1/webapi/oidc/callback
|
||||
redirect_timeout: 90s
|
||||
client_id: 000000000000-aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.example.com
|
||||
client_secret: AAAAAAAAAAAAAAAAAAAAAAAA
|
||||
issuer_url: https://oidc.example.com
|
||||
display: "Login with Example"
|
||||
scope: [ "group" ]
|
||||
claims_to_roles:
|
||||
- claim: "group"
|
||||
value: "admin"
|
||||
roles: [ "admin" ]
|
||||
- claim: "group"
|
||||
value: "user"
|
||||
roles: [ "user" ]
|
||||
```
|
||||
|
||||
#### OIDC with role templates
|
||||
|
||||
If you have individual system logins, using pre-defined roles can be cumbersome
|
||||
because you need to create a new role every time you add a new member to your
|
||||
team. In this situation you can use role templates to dynamically create roles
|
||||
based on information passed in the claims. In the configuration below, if the
|
||||
claims have a `group` with the value `admin`, we dynamically create a role whose
|
||||
name comes from the `email` claim and whose login comes from the `username` claim.
|
||||
|
||||
```yaml
|
||||
authentication:
|
||||
type: oidc
|
||||
oidc:
|
||||
id: google
|
||||
redirect_url: https://localhost:3080/v1/webapi/oidc/callback
|
||||
redirect_timeout: 90s
|
||||
client_id: 000000000000-aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.example.com
|
||||
client_secret: AAAAAAAAAAAAAAAAAAAAAAAA
|
||||
issuer_url: https://oidc.example.com
|
||||
display: "Login with Example"
|
||||
scope: [ "group", "username", "email" ]
|
||||
claims_to_roles:
|
||||
- claim: "group"
|
||||
value: "admin"
|
||||
role_template:
|
||||
kind: role
|
||||
version: v2
|
||||
metadata:
|
||||
name: '{{index . "email"}}'
|
||||
namespace: "default"
|
||||
spec:
|
||||
namespaces: [ "*" ]
|
||||
max_session_ttl: 90h0m0s
|
||||
logins: [ '{{index . "username"}}', root ]
|
||||
node_labels:
|
||||
"*": "*"
|
||||
resources:
|
||||
"*": [ "read", "write" ]
|
||||
```
|
||||
|
||||
#### ACR Values
|
||||
|
||||
Teleport supports sending Authentication Context Class Reference (ACR) values
|
||||
when obtaining an authorization code from an OIDC provider. By default ACR
|
||||
values are not set. However, if the `acr_values` field is set, Teleport expects
|
||||
to receive the same value in the `acr` claim, otherwise it will consider the
|
||||
callback invalid.
|
||||
|
||||
In addition, Teleport supports OIDC provider specific ACR value processing
|
||||
which can be enabled by setting the `provider` field in OIDC configuration. At
|
||||
the moment, the only built-in support is for NetIQ.
|
||||
|
||||
An example of using ACR values and provider-specific processing is below:
|
||||
|
||||
```yaml
|
||||
authentication:
|
||||
type: oidc
|
||||
oidc:
|
||||
id: example.com
|
||||
redirect_url: https://localhost:3080/v1/webapi/oidc/callback
|
||||
redirect_timeout: 90s
|
||||
client_id: 000000000000-aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.example.com
|
||||
client_secret: AAAAAAAAAAAAAAAAAAAAAAAA
|
||||
issuer_url: https://oidc.example.com
|
||||
acr_values: "foo/bar"
|
||||
provider: netiq
|
||||
display: "Login with Example"
|
||||
scope: [ "group" ]
|
||||
claims_to_roles:
|
||||
- claim: "group"
|
||||
value: "admin"
|
||||
roles: [ "admin" ]
|
||||
```
|
||||
|
||||
#### Login
|
||||
|
||||
For the Web UI, if the above configuration were real, you would see a button
|
||||
that says `Login with Example`. Simply click on that and you will be
|
||||
redirected to a login page for your identity provider and, if successful,
|
||||
redirected back to Teleport.
|
||||
|
||||
For console login, you simply type `tsh --proxy <proxy-addr> ssh <server-addr>`
|
||||
and a browser window should automatically open, taking you to the login page for
|
||||
your identity provider. `tsh` will also output a link to the login page of the
|
||||
identity provider if you are not automatically redirected.
|
228
docs/2.3/quickstart.md
Normal file
|
@ -0,0 +1,228 @@
|
|||
# Quick Start Guide
|
||||
|
||||
Welcome to the Teleport Quick Start Guide!
|
||||
|
||||
The goal of this document is to show off the basic capabilities of Teleport.
|
||||
A Teleport daemon can run three types of services: `nodes`, `proxies` and `auth servers`.
|
||||
|
||||
- Auth servers are the core of a cluster. Auth servers store user accounts and provide authentication and authorization services for every node and every user in a cluster.
|
||||
- Nodes are regular SSH nodes, similar to the `sshd` daemon you may be familiar with. When a node receives
|
||||
a connection request, it authenticates it via the cluster's auth server.
|
||||
- Proxies route client connection requests to the appropriate node and serve a Web UI
|
||||
which can also be used to log into SSH nodes. Every client-to-node connection in
|
||||
Teleport must be routed via a proxy.
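
Although this guide relies on the default behavior of running all three services together, they can also be enabled individually in the Teleport configuration file. A minimal sketch, assuming the standard `teleport.yaml` layout with an `enabled` flag for each service:

```yaml
# /etc/teleport.yaml (sketch) - run all three services on one host,
# which is also what `teleport start` does by default
teleport:
  data_dir: /var/lib/teleport
auth_service:
  enabled: yes
proxy_service:
  enabled: yes
ssh_service:
  enabled: yes
```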
|
||||
|
||||
The `teleport` daemon runs all three of these services by default. This Quick Start Guide will
|
||||
be using this default behavior to create a cluster and interact with it
|
||||
using Teleport's client-side tools:
|
||||
|
||||
| Tool | Description
|
||||
|----------------|------------------------------------------------------------------------
|
||||
| tctl | Cluster administration tool used to invite nodes to a cluster and manage user accounts.
|
||||
| tsh | Similar in principle to OpenSSH's `ssh`. Used to log into remote SSH nodes, list and search for nodes in a cluster, securely upload/download files, etc.
|
||||
| browser | You can use your web browser to log into any Teleport node by opening `https://<proxy-host>:3080`.
|
||||
|
||||
## Installing and Starting
|
||||
|
||||
Gravitational Teleport natively runs on most Linux distributions. You can
|
||||
download pre-built binaries from [here](https://github.com/gravitational/teleport/releases)
|
||||
or you can [build it from source](https://github.com/gravitational/teleport).
|
||||
|
||||
After downloading the binary tarball, run:
|
||||
|
||||
```
|
||||
$ tar -xzf teleport-binary-release.tar.gz
|
||||
$ sudo make install
|
||||
```
|
||||
|
||||
This will copy Teleport binaries to `/usr/local/bin`.
|
||||
|
||||
Let's start Teleport on a single node. First, create a directory for Teleport
|
||||
to keep its data. By default it's `/var/lib/teleport`. Then start the `teleport` daemon:
|
||||
|
||||
```bash
|
||||
$ mkdir -p /var/lib/teleport
|
||||
$ sudo teleport start
|
||||
|
||||
[AUTH] Auth service is starting on 0.0.0.0:3025
|
||||
[PROXY] Reverse tunnel service is starting on 0.0.0.0:3024
|
||||
[PROXY] Web proxy service is starting on 0.0.0.0:3080
|
||||
[PROXY] SSH proxy service is starting on 0.0.0.0:3023
|
||||
[SSH] Service is starting on 0.0.0.0:3022
|
||||
```
|
||||
|
||||
At this point you should see Teleport print listening IPs of all 3 services into the console.
|
||||
|
||||
Congratulations - you are now running Teleport!
|
||||
|
||||
## Creating Users
|
||||
|
||||
Teleport users are defined on a cluster level, and every Teleport user must be associated with
|
||||
a list of machine-level OS usernames it can authenticate as during a login. This list is
|
||||
called "user mappings".
|
||||
|
||||
If you do not specify the mappings, the new Teleport user will be assigned a mapping with
|
||||
the same name. Let's create a Teleport user with the same name as the OS user:
|
||||
|
||||
```bash
|
||||
$ sudo tctl users add $USER
|
||||
|
||||
Signup token has been created. Share this URL with the user:
|
||||
https://localhost:3080/web/newuser/96c85ed60b47ad345525f03e1524ac95d78d94ffd2d0fb3c683ff9d6221747c2
|
||||
```
|
||||
|
||||
`tctl` prints a sign-up URL for you to open in your browser and complete registration:
|
||||
|
||||
![teleport login](/img/login.png?style=grv-image-center-md)
|
||||
|
||||
Teleport enforces two-factor authentication. If you do not already have Google Authenticator, you will have to install it on your smartphone. Then you can scan the barcode on the Teleport login web page, pick a password and enter the two-factor token.
|
||||
|
||||
The default TTL for a login is 12 hours but this can be configured to a maximum of 30 hours and a minimum of 1 minute.
|
||||
|
||||
Having done that, you will be presented with a Web UI where you will see your machine and will be able to log in to it using web-based terminal.
|
||||
|
||||
![teleport ui](/img/firstpage.png?style=grv-image-center-md)
|
||||
|
||||
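As a sketch of where that TTL lives, the role resources shown later in these docs carry a `max_session_ttl` field; a role fragment capping sessions at the 12-hour default might look like this (the role name and login are hypothetical, and the exact placement should be checked against your Teleport version):

```yaml
# Hypothetical role fragment: certificates issued for this role
# expire after 12 hours (must be between 1m and 30h).
kind: role
version: v2
metadata:
  name: default-user
  namespace: default
spec:
  max_session_ttl: 12h0m0s
  logins: [ ubuntu ]
```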
## Logging In Through CLI

Let's log in using the `tsh` command line tool:

```bash
$ tsh --proxy=localhost --insecure ssh localhost
```

Notice that the `tsh` client always needs the `--proxy` flag because all client connections
in Teleport have to go via a proxy, sometimes called an "SSH bastion".

!!! warning "Warning":
    For the purposes of this quickstart we are using the `--insecure` flag, which should not be used in production. See the [Admin Manual](admin-guide.md) for more information on setting up Teleport in production.

!!! tip "Tip":
    You can use `tsh --proxy=localhost login` to create a client profile in the `~/.tsh`
    directory. This will make `tsh` "remember" the current proxy server and remove
    the need for the `--proxy` flag.
## Adding Nodes to Cluster

Let's add another node to the cluster. The `tctl` command below will create a single-use
token for a node to join and will print instructions for you to follow:

```bash
$ sudo tctl nodes add

The invite token: n92bb958ce97f761da978d08c35c54a5c
Run this on the new node to join the cluster:
teleport start --roles=node --token=n92bb958ce97f761da978d08c35c54a5c --auth-server=10.0.10.1
```

Start the `teleport` daemon on the new node as shown above, but make sure to use the proper
`--auth-server` IP to point back to your localhost.

Once you do that, verify that the new node has joined the cluster:

```bash
$ tsh --proxy=localhost ls

Node Name     Node ID                   Address          Labels
---------     -------                   -------          ------
localhost     xxxxx-xxxx-xxxx-xxxxxxx   10.0.10.1:3022
new-node      xxxxx-xxxx-xxxx-xxxxxxx   10.0.10.2:3022
```

!!! tip "NOTE":
    Teleport also supports static pre-defined invitation tokens which can be set in the [configuration file](admin-guide.md#adding-nodes-to-the-cluster).
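The static-token approach mentioned in the note above can be sketched in the auth server's config file like this (the `tokens` list format mirrors the trusted-cluster example later in these docs; the `node:` role prefix and the token value are assumptions, so check the Admin Manual for the authoritative syntax):

```yaml
# teleport.yaml on the auth server (sketch)
auth_service:
  enabled: yes
  tokens:
    # pre-defined node invitation token; generate a long random value
    - "node:3ba60f13f04db6c7e4ff6a4b38bb292d"
```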
## Using Node Labels

Notice the "Labels" column in the output above. It is currently not populated. Teleport lets
you apply static or dynamic labels to your nodes. As the cluster grows and nodes assume different
roles, labels will help you find the right node quickly.

Let's see labels in action. Stop Teleport (`Ctrl-C`) on the node we just added and restart it with the following command:

```bash
$ sudo teleport start --roles=node --auth-server=10.0.10.1 --nodename=db --labels "location=virginia,arch=[1h:/bin/uname -m]"
```

Notice a few things here:

* We did not use the `--token` flag this time, because this node is already a member of the cluster.
* We explicitly named this node "db" because this machine is running a database. This name only exists within Teleport; the actual hostname has not changed.
* We assigned a static label "location" to this host and set it to "virginia".
* We also assigned a dynamic label "arch" which will evaluate the `/bin/uname -m` command once an hour and assign its output as the label value.
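The same labels can also be declared in the node's configuration file rather than on the command line. This is a sketch; the exact `labels`/`commands` layout under `ssh_service` is an assumption about your Teleport version, so verify it against the Admin Manual:

```yaml
# Sketch of the node's teleport.yaml equivalent of the --labels flag above
ssh_service:
  enabled: yes
  labels:
    location: virginia          # static label
  commands:                     # dynamic labels
    - name: arch
      command: ['/bin/uname', '-m']
      period: 1h                # re-evaluated once an hour
```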
Let's take a look at our cluster now:

```bash
$ tsh --proxy=localhost ls

Node Name     Node ID                   Address          Labels
---------     -------                   -------          ------
localhost     xxxxx-xxxx-xxxx-xxxxxxx   10.0.10.1:3022
db            xxxxx-xxxx-xxxx-xxxxxxx   10.0.10.2:3022   location=virginia,arch=x86_64
```

Let's use the newly created labels to filter the output of `tsh ls` and ask it to show only
nodes located in Virginia:

```bash
$ tsh --proxy=localhost ls location=virginia

Node Name     Node ID                   Address          Labels
---------     -------                   -------          ------
db            xxxxx-xxxx-xxxx-xxxxxxx   10.0.10.2:3022   location=virginia,arch=x86_64
```

Labels can be used with the regular `ssh` command too. This will execute the `ls -l /` command
on all servers located in Virginia:

```bash
$ tsh --proxy=localhost ssh location=virginia ls -l /
```
## Sharing SSH Sessions

Suppose you are trying to troubleshoot a problem on a node. Sometimes it makes sense to ask
another team member for help. Traditionally this could be done by letting them know which
node you're on, having them SSH in, start a terminal multiplexer like `screen` and join a
session there.

Teleport makes this a bit more convenient. Let's log in to "db" and ask Teleport for your
current session status:

```bash
$ tsh --proxy=teleport.example.com ssh db
db > teleport status

User ID    : joe, logged in as joe from 10.0.10.1 43026 3022
Session ID : 7645d523-60cb-436d-b732-99c5df14b7c4
Session URL: https://teleport.example.com:3080/web/sessions/7645d523-60cb-436d-b732-99c5df14b7c4
```

You can share the Session URL with a colleague in your organization. Assuming that your colleague has access to the `teleport.example.com` proxy, she will be able to join and help you troubleshoot the problem on "db" in her browser.

Also, people can join your session via terminal assuming they have Teleport installed and running. They just have to run:

```bash
$ tsh --proxy=teleport.example.com join 7645d523-60cb-436d-b732-99c5df14b7c4
```

!!! tip "NOTE":
    For this to work, both of you must have proper user mappings allowing you to access `db` under the same OS user.
## Hosted Teleport (Teleconsole)

We run a hosted example of Teleport at [teleconsole.com](https://www.teleconsole.com/). You can use it to see how Teleport might work without having to set it up for yourself. It's just an easy way to share your terminal with your friends to show Teleport in action.
## Running in Production

We hope this quickstart guide has helped you to quickly set up and play with Teleport. For production environments we strongly recommend the following:

- Install HTTPS certificates for every Teleport proxy.
- Run Teleport `auth` on isolated servers. The auth service can run in a
  highly available (HA) configuration.
- Use a configuration file instead of command line flags because it gives you
  more flexibility, for example for configuring HA clusters.
- Review the [Architecture Overview](architecture.md), [Admin Manual](admin-guide.md) and [User Manual](user-manual.md) for a better understanding of Teleport.
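A minimal sketch of what the first two recommendations might look like in a proxy's config file (the hostnames are placeholders and field names such as `https_cert_file` are assumptions; the Admin Manual is the authoritative reference):

```yaml
# teleport.yaml on a dedicated proxy host (sketch, not a full config)
teleport:
  auth_servers:
    - auth.example.com:3025     # auth service runs on isolated servers
proxy_service:
  enabled: yes
  https_key_file: /etc/teleport/teleport.key
  https_cert_file: /etc/teleport/teleport.crt
```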
**docs/2.3/saml.md**
# SAML 2.0 Features

Teleport Enterprise supports SAML 2.0 as an external identity provider and has been
tested to work with Okta and Active Directory Federation Services (ADFS) 2016.

## Okta

This guide shows how to configure Okta to map groups via SAML to Teleport roles.

### Start Teleport

Start Teleport with the sample config below. Notice how we set `dynamic_config: true` to indicate that we will use dynamic configuration
as opposed to static file config.
```yaml
# Simple config file with just a few customizations (with comments)
teleport:
  nodename: localhost
  log:
    output: stderr
    severity: DEBUG
  dynamic_config: true
auth_service:
  enabled: yes
  cluster_name: teleport.local
ssh_service:
  enabled: yes
```
### Configure Okta

#### Create App

Create a SAML 2.0 Web App in the Okta configuration section:

![Create APP](img/okta-saml-1.png?raw=true)
![Create APP name](img/okta-saml-2.png?raw=true)

#### Configure Okta App

**Create Groups**

We are going to create the groups `okta-dev` and `okta-admin`:

**Devs**

![Create Group Devs](img/okta-saml-2.1.png)

**Admins**

![Create Group Admins](img/okta-saml-2.2.png)

**Configure APP**

We are going to map these Okta groups to SAML Attribute statements (special signed metadata
exposed via the SAML XML response).

![Configure APP](img/okta-saml-3.png)

**Notice:** We have set NameID to email format and mapped groups with a wildcard regex in the Group Attribute statements.
We have also set the Audience and SSO URL to the same value.

**Assign Groups**

Assign groups and people to your SAML app:

![Configure APP](img/okta-saml-3.1.png)

#### Configure Teleport SAML

![Download metadata](img/okta-saml-4.png?raw=true)

Download the metadata as an XML document; we will use it to configure Teleport.
```yaml
kind: saml
version: v2
metadata:
  name: OktaSAML
spec:
  acs: https://localhost:3080/v1/webapi/saml/acs
  attributes_to_roles:
    - {name: "groups", value: "okta-admin", roles: ["admin"]}
    - {name: "groups", value: "okta-dev", roles: ["dev"]}
  entity_descriptor: |
    <paste SAML XML contents here>
```

Configure SAML by creating the configuration resource in Teleport using the `tctl` command:

```bash
tctl create -f saml.yaml
```

Create a file `preference.yaml` that will configure Teleport to use SAML as the primary authentication method:

```yaml
kind: cluster_auth_preference
version: v2
metadata:
  name: "cluster-auth-preference"
spec:
  type: saml
```

```bash
tctl create -f preference.yaml
```
#### Create Teleport Roles

We are going to create two roles: a privileged role, admin, which is able to log in as root and is capable
of administering the cluster, and a non-privileged role, dev, which is only allowed to view sessions and log in as a non-privileged user.

```yaml
kind: role
version: v2
metadata:
  name: admin
  namespace: default
spec:
  logins: [root]
  max_session_ttl: 90h0m0s
  namespaces: ['*']
  node_labels:
    '*': '*'
  resources:
    '*': [read, write]
```

Devs are only allowed to log in to nodes labelled with the `access: relaxed` Teleport label.

```yaml
kind: role
version: v2
metadata:
  name: stage-devops
spec:
  logins: [ubuntu]
  max_session_ttl: 90h0m0s
  namespaces: ['*']
  node_labels:
    access: relaxed
  resources:
    '*': [read]
```

**Notice:** Replace `ubuntu` with a Linux login available on your servers!

```bash
tctl create -f admin.yaml
tctl create -f dev.yaml
```
### Login

For the Web UI, if the above configuration were real, you would see a button
that says `Login with Okta`. Simply click on that and you will be
redirected to a login page for your identity provider and, if successful,
redirected back to Teleport.

For console login, you simply type `tsh --proxy <proxy-addr> ssh <server-addr>`
and a browser window should automatically open, taking you to the login page for
your identity provider. `tsh` will also output a link to the login page of the
identity provider if you are not automatically redirected.

!!! note "IMPORTANT":
    Teleport only supports service-provider-initiated flows for SAML 2.0. This
    means you can not initiate login from your identity provider; you have to
    initiate login from either the Teleport Web UI or CLI.
## ADFS

### ADFS Configuration

You'll need to configure ADFS to export claims about a user (a Claims Provider
Trust in ADFS terminology) and you'll need to configure ADFS to trust
Teleport (a Relying Party Trust in ADFS terminology).

For the Claims Provider Trust configuration you'll need to specify at least the
following two incoming claims: `Name ID` and `Group`. `Name ID` should be a
mapping of the LDAP attribute `E-Mail-Addresses` to `Name ID`. A group
membership claim should be used to map users to roles (for example to
separate normal users and admins).

![Name ID Configuration](img/adfs-1.png?raw=true)
![Group Configuration](img/adfs-2.png?raw=true)

In addition, if you are using dynamic roles (see below), it may be useful to map
the LDAP attribute `SAM-Account-Name` to `Windows account name` and create
another mapping of `E-Mail-Addresses` to `UPN`.

![WAN Configuration](img/adfs-3.png?raw=true)
![UPN Configuration](img/adfs-4.png?raw=true)

You'll also need to create a Relying Party Trust; use the information below to
guide you through the wizard. Note that for development purposes we recommend
using `https://localhost:3080/v1/webapi/saml/acs` as the Assertion Consumer
Service (ACS) URL, but for production you'll want to change this to a domain
that can be accessed by other users as well.

* Create a claims-aware trust.
* Enter data about the relying party manually.
* Set the display name to something along the lines of "Teleport".
* Skip the token encryption certificate.
* Select `Enable support for SAML 2.0 Web SSO protocol` and set the URL to `https://localhost:3080/v1/webapi/saml/acs`.
* Set the relying party trust identifier to `https://localhost:3080/v1/webapi/saml/acs` as well.
* For the access control policy select `Permit everyone`.

Once the Relying Party Trust has been created, update the Claim Issuance Policy
for it. As before, make sure you send at least the `Name ID` and `Group` claims to the
relying party (Teleport). If you are using dynamic roles, it may be useful to
map the LDAP attribute `SAM-Account-Name` to `Windows account name` and create
another mapping of `E-Mail-Addresses` to `UPN`.

Lastly, ensure the user you create in Active Directory has an email address
associated with it. To check this, open Server Manager, then
`Tools -> Active Directory Users and Computers`, select the user, right
click and open properties. Make sure the email address field is filled out.
### Teleport Configuration

Teleport can be configured with static or dynamic roles. Static roles are simple
and great when the role you need to associate with a user is static. If the role
your user assumes depends on the attributes that you send along, consider using
dynamic roles.

#### Static Roles

To configure Teleport with static roles, first you'll need to create at least
the following two roles: one for admins and the other for normal
users. You can create them on the backend using `tctl create -f {file name}`.
```yaml
kind: role
version: v2
metadata:
  name: "admins"
  namespace: "default"
spec:
  namespaces: [ "*" ]
  max_session_ttl: 90h0m0s
  logins: [ root ]
  node_labels:
    "*": "*"
  resources:
    "*": [ "read", "write" ]
```
```yaml
kind: role
version: v2
metadata:
  name: "users"
  namespace: "default"
spec:
  max_session_ttl: 90h0m0s
  logins: [ root, jsmith ]
```

Next create a SAML resource; once again you can do this with `tctl create -f {file name}`.
```yaml
kind: saml
version: v2
metadata:
  name: "adfs"
  namespace: "default"
spec:
  provider: "adfs"
  acs: "https://localhost:3080/v1/webapi/saml/acs"
  entity_descriptor_url: "https://adfs.example.com/FederationMetadata/2007-06/FederationMetadata.xml"
  attributes_to_roles:
    - name: "http://schemas.xmlsoap.org/claims/Group"
      value: "teleadmins"
      roles: ["admins"]
    - name: "http://schemas.xmlsoap.org/claims/Group"
      value: "teleusers"
      roles: ["users"]
```

The `acs` field should match the value you set in ADFS earlier, and you can
obtain the `entity_descriptor_url` from ADFS under
`AD FS -> Service -> Endpoints -> Metadata`.

The `attributes_to_roles` field is used to map attributes to the Teleport roles you
just created. In our situation, we are mapping the `Group` attribute whose full
name is `http://schemas.xmlsoap.org/claims/Group` with a value of `teleadmins`
to the `admins` role. Groups with the value `teleusers` are mapped to the
`users` role.
#### Dynamic Roles

Static roles are simple to understand and use, but can be cumbersome in certain
situations. For example, if every user has a separate login instead of a shared
login, you have to create/remove a role every time someone joins (or leaves)
the company. In this situation you can use role templates to dynamically create
roles based on information passed in the assertions.
```yaml
kind: saml
version: v2
metadata:
  name: "adfs"
  namespace: "default"
spec:
  provider: "adfs"
  acs: "https://localhost:3080/v1/webapi/saml/acs"
  entity_descriptor_url: "https://adfs.example.com/FederationMetadata/2007-06/FederationMetadata.xml"
  attributes_to_roles:
    - name: "http://schemas.xmlsoap.org/claims/Group"
      value: "teleadmins"
      role_template:
        kind: role
        version: v2
        metadata:
          name: '{{index . "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"}}'
          namespace: "default"
        spec:
          namespaces: [ "*" ]
          max_session_ttl: 90h0m0s
          logins: [ '{{index . "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"}}', root ]
          node_labels:
            "*": "*"
          resources:
            "*": [ "read", "write" ]
    - name: "http://schemas.xmlsoap.org/claims/Group"
      value: "teleusers"
      role_template:
        kind: role
        version: v2
        metadata:
          name: '{{index . "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"}}'
          namespace: "default"
        spec:
          max_session_ttl: 90h0m0s
          logins: [ '{{index . "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"}}', root ]
```
The `attributes_to_roles` field is the same as for static roles except that instead of
`roles` we have `role_template`, which defines the role to be created when the
user successfully logs in. Note that the login and role name are extracted from
the additional assertions we created above and injected into the role.

The last resource you'll need to configure for Teleport is your cluster
authentication preferences. Once again, create the resource below with
`tctl create -f {file name}`.

```yaml
kind: cluster_auth_preference
version: v2
metadata:
  description: ""
  name: "cluster-auth-preference"
  namespace: "default"
spec:
  type: saml
```
### Exporting Signing Key

For the last step, you'll need to export the signing key; you can do this with
`tctl saml export --name adfs`. Save the output to a file named `saml.crt`.
Return to ADFS, open the Relying Party Trust and add this file as one
of the signature verification certificates.

### Login

For the Web UI, if the above configuration were real, you would see a button
that says `Login with adfs`. Simply click on that and you will be
redirected to a login page for your identity provider and, if successful,
redirected back to Teleport.

For console login, you simply type `tsh --proxy <proxy-addr> ssh <server-addr>`
and a browser window should automatically open, taking you to the login page for
your identity provider. `tsh` will also output a link to the login page of the
identity provider if you are not automatically redirected.
**docs/2.3/trustedclusters.md**
# Dynamic Trusted Clusters

Dynamic Trusted Clusters can be used to configure Trusted Clusters in more
powerful ways than standard file configuration. If you have not already read the
documentation about [Trusted Clusters](admin-guide.md#trusted-clusters) in
Teleport, it would be helpful to review it before continuing.

Some of the features Dynamic Trusted Clusters provide:

* Add and remove Trusted Clusters without needing to restart Teleport.
* Enable/disable Trusted Clusters from the Web UI.
* More sophisticated role mapping, which allows you to map roles you have on your
  main cluster (which are encoded within your SSH certificate) to the role you
  assume when you connect to a node within the Trusted Cluster.

Below we will provide two example configurations: a simple configuration that
you can use to quickly get started with Trusted Clusters, and a more
comprehensive configuration that illustrates some of the more powerful features
and abilities of Trusted Clusters.

## Simple Configuration

Similar to the example illustrated in the documentation for
[Trusted Clusters](admin-guide.md#trusted-clusters), suppose you have a remote
cluster that sits in a restricted environment which you can not directly connect
to due to firewall rules, but you have no other access control restrictions
(any user in your main cluster should also be able to access any node within
the Trusted Cluster).
### Secret Tokens

When creating Trusted Clusters dynamically, Teleport requires that the Trusted
Cluster know the value of a secret token generated by the main cluster. It uses
this value to establish trust between clusters during the initial exchange. Due
to the sensitive nature of this token, we recommend you use a secure channel to
exchange it so it doesn't fall into the hands of an attacker. Secret
tokens can be either static (long lived) or dynamic (short lived).

To create a dynamic token on the main cluster which lasts only 5 minutes, use
the following command:

```bash
$ tctl nodes add --ttl=5m --roles=trustedcluster
```

If you need long lived static tokens, generate the token out-of-band and add it
to your configuration file on the main cluster:

```yaml
auth_service:
  enabled: yes
  cluster_name: main
  tokens:
    # generate a large random number for your token, we recommend
    # using a tool like `pwgen` to generate sufficiently random
    # tokens of length greater than 32 bytes
    - "trustedcluster:fake-token"
```
#### Security Implications

Consider the security implications when deciding which token method to use.
Short lived tokens decrease the window for attack but also make automation more
difficult. Inherent to their nature, short lived tokens also make it difficult
to allow your customers the ability to enable/disable a Trusted Cluster, because
the token exchange has to re-occur if they want to re-establish trust, which they
can't do with an expired token.

If even short lived tokens are not acceptable for your threat model, consider
using file configuration, which requires you to manually verify and add the keys
for clusters you trust. Note however that if you use the standard file
configuration method, the features below are not available.
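The config comment above suggests a tool like `pwgen` for generating static tokens; as a portable alternative sketch, `openssl` can produce a token with 32 bytes of entropy:

```shell
# 32 random bytes, hex-encoded into a 64-character token value
TOKEN="$(openssl rand -hex 32)"
echo "trustedcluster:${TOKEN}"
```

The resulting `trustedcluster:...` string is what you would place in the `tokens` list shown earlier.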
### Resources

To configure your clusters, you will need to create the resources we describe below.

#### Roles

Because each cluster is independent, each specifies its own roles, and we need to
create these before we do anything else. Below is a description of the roles and
resource files that need to be created within Teleport.

On the main cluster, you'll need to create the following two roles. The admin
role allows you to access all servers and the staging role will limit you to
servers with the `type=staging` label. Create both roles with `tctl
create -f {file name}`.
```yaml
kind: role
version: v2
metadata:
  name: admin
  namespace: default
spec:
  logins: [ root ]
  max_session_ttl: 90h0m0s
  namespaces: ['*']
  node_labels:
    '*': '*'
  resources:
    '*': [read, write]
```
```yaml
kind: role
version: v2
metadata:
  name: staging
  namespace: default
spec:
  logins: [ root ]
  max_session_ttl: 90h0m0s
  node_labels:
    'type': 'staging'
```

On the Trusted Cluster, you'll need to create an admin role as well. Note the
`logins` field: this is a list of usernames that the user will be able to log in
as. Once again, create it with `tctl create -f {file name}`.

```yaml
kind: role
version: v2
metadata:
  name: admin
  namespace: default
spec:
  logins: [ root ]
  max_session_ttl: 90h0m0s
  namespaces: ['*']
  node_labels:
    '*': '*'
  resources:
    '*': [read, write]
```
#### Trusted Cluster Resources

The Trusted Cluster resource is used to both establish trust between the two
clusters and map roles from one cluster to another.

In the resource file below you'll note that we have a `token` field; this is the
same token that was generated on the main cluster and is used to establish
trust.

We also have a `role_map` field, which describes how a user's roles from the main
cluster (which are encoded in the user's SSH certificate) are mapped to roles in
the Trusted Cluster. In this case, we are mapping all roles from the main
cluster to the local admin role. Note that users with the staging role will also be
able to access the Trusted Cluster. Take a look at the comprehensive
configuration section for more details on how to control access.

```yaml
kind: trusted_cluster
version: v1
metadata:
  description: "Remote Cluster"
  name: "Remote Cluster"
  namespace: "default"
spec:
  enabled: true
  role_map:
    - remote: "*"
      local: [admin]
  token: "fake-token"
  tunnel_addr: <main-addr>:3024
  web_proxy_addr: <main-addr>:3080
```

To disable a Trusted Cluster, simply use `tctl create -f {file name}` again but
this time set `enabled: false`.
#### Verify

That's it. To verify that you can see your Trusted Cluster, run the following
command:

```bash
$ tsh --proxy=<proxy-addr> clusters
Cluster Name     Status
--------------   ------
Main             online
Remote Cluster   online
```
## Comprehensive Configuration

Suppose you sell an Enterprise version of your application that runs on your
customers' infrastructure. You want an easy way to access your customers'
infrastructure in case you need to troubleshoot problems that may arise. In
addition, since you may have multiple customers, you want to limit your users to
only have access to the customer cluster they need access to.

More formally, you have the following:

Cluster Name | Description
-------------|---------------------------------------------------------------------------------
main         | This is your own cluster; the other Trusted Clusters will dial into this cluster.
acme         | This cluster belongs to your customer Acme Corporation.
emca         | This cluster belongs to your customer Emca Corporation.

Teleport User | Description
--------------|----------------------
james         | Support Engineer that handles Acme Corporation.
john          | Support Engineer that handles Emca Corporation.
robert        | Support Engineer that handles Acme and Emca Corporation.

### Secret Token

See the [Secret Tokens](#secret-tokens) configuration section in the
[Simple Configuration](#simple-configuration).
### Resources

To configure your clusters, you will need to create the resources we describe
below.

#### Roles

Because each cluster is independent, each specifies its own roles, and we need to
create these before we do anything else. Below is a description of the roles and
resource files that need to be created within Teleport.

On the main cluster, you'll need to create two roles: one that will be used to
access Acme Corporation's cluster and another for Emca Corporation. Note that these
are local roles that will be mapped to remote roles. As configured here they have
full access to all nodes within the main cluster, but by restricting them here they
can be made limited roles in the main cluster while still mapping to admin roles in
the remote clusters if need be.

To create the roles, create the files below and then use `tctl create -f {file
name}`.
```yaml
kind: role
version: v2
metadata:
  name: acme-support
  namespace: default
spec:
  logins: [ root, acme ]
  max_session_ttl: 90h0m0s
  namespaces: ['*']
  node_labels:
    '*': '*'
  resources:
    '*': [read, write]
```
```yaml
kind: role
version: v2
metadata:
  name: emca-support
  namespace: default
spec:
  logins: [ root, emca ]
  max_session_ttl: 90h0m0s
  namespaces: ['*']
  node_labels:
    '*': '*'
  resources:
    '*': [read, write]
```
On the Acme cluster, you'll need to at least create a role that allows you to
|
||||
access nodes within it. The following role allows you to access all nodes and
|
||||
resources within the Acme cluster and only allows you to login as the `acme`
|
||||
user. Once again use `tctl create -f {file name}`
|
||||
|
||||
```yaml
|
||||
kind: role
|
||||
version: v2
|
||||
metadata:
|
||||
name: admin
|
||||
namespace: default
|
||||
spec:
|
||||
logins: [ acme ]
|
||||
max_session_ttl: 90h0m0s
|
||||
namespaces: ['*']
|
||||
node_labels:
|
||||
'*': '*'
|
||||
resources:
|
||||
'*': [read, write]
|
||||
```
|
||||
|
||||
Lastly, on the Emca cluster, create an admin role that only allows you to access
nodes with the label `access: relaxed` and only allows you to log in as the
`emca` user. This is a way to restrict access within a cluster even further.

```yaml
kind: role
version: v2
metadata:
  name: admin
  namespace: default
spec:
  logins: [ emca ]
  max_session_ttl: 90h0m0s
  namespaces: ['*']
  node_labels:
    'access': 'relaxed'
  resources:
    '*': [read, write]
```

#### Users

Now that you have the roles created, you need to create users and assign them
these roles.

On your main cluster, create users as you normally would. Then run the
following commands to assign users the appropriate roles. This gives the user
`james` access to Acme Corporation's cluster, the user `john` access to the
Emca Corporation cluster, and the user `robert` access to both.

```
$ tctl users update james --set-roles=user:james,acme-support
$ tctl users update john --set-roles=user:john,emca-support
$ tctl users update robert --set-roles=user:robert,acme-support,emca-support
```

#### Trusted Cluster Resources

The Trusted Cluster resource is used both to establish trust between the two
clusters and to map roles from one cluster to another.

In the resource files below you'll note a `token` field; this is the same token
that was generated on the main cluster and is used to establish trust.

We also have a `role_map` field which describes how a user's roles from the main
cluster (which are encoded in the user's SSH certificate) are mapped to roles in
the Trusted Cluster. In this case, for each cluster we are mapping either
`acme-support` or `emca-support` to the local role `admin`. This means the user
`james` will have full access to the Acme cluster, `john` will only have access
to the nodes labeled `access: relaxed` in the Emca cluster, and `robert` will
have full access to Acme and limited access to Emca.

```yaml
kind: trusted_cluster
version: v1
metadata:
  description: "Remote Cluster"
  name: "Acme Cluster"
  namespace: "default"
spec:
  enabled: true
  role_map:
    - remote: "acme-support"
      local: [admin]
  token: "fake-token"
  tunnel_addr: <main-addr>:3024
  web_proxy_addr: <main-addr>:3080
```

```yaml
kind: trusted_cluster
version: v1
metadata:
  description: "Remote Cluster"
  name: "Emca Cluster"
  namespace: "default"
spec:
  enabled: true
  role_map:
    - remote: "emca-support"
      local: [admin]
  token: "fake-token"
  tunnel_addr: <main-addr>:3024
  web_proxy_addr: <main-addr>:3080
```

To disable a Trusted Cluster, simply use `tctl create -f {file name}` again, but
this time set `enabled: false`.

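For example, re-submitting the Acme resource with `enabled` flipped to false (a
sketch; every other field stays as it was) disables that trust relationship:

```yaml
kind: trusted_cluster
version: v1
metadata:
  description: "Remote Cluster"
  name: "Acme Cluster"
  namespace: "default"
spec:
  # the only change: disable the trust relationship
  enabled: false
  role_map:
    - remote: "acme-support"
      local: [admin]
  token: "fake-token"
  tunnel_addr: <main-addr>:3024
  web_proxy_addr: <main-addr>:3080
```
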
#### Verify

That's it. To verify that you can see your Trusted Clusters, run the following
command:

```
$ tsh --proxy=<proxy-addr> clusters
Cluster Name   Status
------------   ------
Main           online
Acme Cluster   online
Emca Cluster   online
```

docs/2.3/user-manual.md
Normal file

# User Manual

This User Manual covers usage of the Teleport client tool `tsh`. In this
document you will learn how to:

* Log into an interactive shell on remote cluster nodes.
* Copy files to and from cluster nodes.
* Connect to SSH clusters behind firewalls without any open ports, using SSH reverse tunnels.
* Explore a cluster and execute commands on the nodes that match your criteria.
* Share interactive shell sessions with colleagues or join someone else's session.
* Replay recorded interactive sessions.
* Use Teleport with OpenSSH, or with other tools that use SSH such as Chef and Ansible.

In addition to this document, you can always type `tsh` into your terminal for the CLI reference.
```bash
$ tsh
usage: tsh [<flags>] <command> [<command-args> ...]

Gravitational Teleport SSH tool

Commands:
  help       Show help
  version    Print the version
  ssh        Run shell or execute a command on a remote SSH node
  join       Join the active SSH session
  play       Replay the recorded SSH session
  scp        Secure file copy
  ls         List remote SSH nodes
  clusters   List available Teleport clusters
  agent      Start SSH agent on unix socket
  login      Log in to the cluster and store the session certificate to avoid login prompts
  logout     Delete a cluster certificate

Notes:

  - Most of the flags can be set in a profile file ~/.tshconfig
  - Run `tsh help <command>` to get help for <command> like `tsh help ssh`
```

## Differences vs OpenSSH

There are a few differences between Teleport's `tsh` and OpenSSH's `ssh`, but the
most noticeable ones are:

* Teleport only uses certificate-based authentication. Teleport is designed for clusters
  using a central certificate authority (CA). The concept of "cluster membership" is
  essential in Teleport.

* `tsh` always requires the `--proxy` flag because `tsh` needs to know which cluster
  you are connecting to.

* `tsh` needs _two_ usernames: one for the cluster and another for the node you
  are trying to log in to. See the [User Identities](#user-identities) section below. For convenience,
  `tsh` assumes `$USER` for both logins by default.

While it may appear less convenient than `ssh`, we hope that the default behavior
and techniques like bash aliases will help to minimize the amount of typing.

On the other hand, Teleport is built using standard SSH constructs: keys,
certificates, protocols. This means that Teleport is 100% compatible with OpenSSH
clients and servers. See the [Using Teleport with OpenSSH](admin-guide#using-teleport-with-openssh)
section in the Admin Guide for more information.

## User Identities

A user identity in Teleport exists in the scope of a cluster. The member nodes
of a cluster may have multiple OS users on them. A Teleport administrator assigns
allowed logins to every Teleport user account.

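For example, one way an administrator might assign allowed logins when creating
an account (a sketch using `tctl` on the auth server; `joe` is a hypothetical
account, and the exact syntax may vary between versions):

```
# create Teleport user 'joe', allowed to log in to nodes as OS users 'joe' or 'root'
$ tctl users add joe joe,root
```
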
When logging into a remote node, you will have to specify both logins. The
Teleport identity is passed via the `--user` flag, while the node login is
passed as `login@host`, using syntax compatible with traditional `ssh`.

These examples assume your localhost username is 'joe':

```bash
# Authenticate against cluster 'work' as 'joe' and then login into 'node'
# as root:
$ tsh ssh --proxy=work.example.com --user=joe root@node

# Authenticate against cluster 'work' as 'joe' and then login into 'node'
# as joe (by default tsh uses $USER for both):
$ tsh ssh --proxy=work.example.com node
```

`tsh login` allows you to log in to the cluster without connecting to any master nodes:

```
$ tsh login --proxy=work.example.com
```

This allows you to supply your password and the 2nd factor authentication at the
beginning of the day. Subsequent `tsh ssh` commands will run without asking for
your credentials until the temporary certificate expires (23 hours by default).

## Exploring the Cluster

In a Teleport cluster, all nodes periodically ping the cluster's auth server and
update their status. This allows Teleport users to see which nodes are online with the `tsh ls` command:

```bash
# Connect to cluster 'work' as $USER and list all nodes in
# a cluster:
$ tsh --proxy=work ls

# Output:
Node Name     Node ID                Address           Labels
---------     -------                -------           ------
turing        11111111-dddd-4132     10.1.0.5:3022     os:linux
turing        22222222-cccc-8274     10.1.0.6:3022     os:linux
graviton      33333333-aaaa-1284     10.1.0.7:3022     os:osx
```

You can filter nodes based on their labels. Let's list only the OSX machines:

```
$ tsh --proxy=work ls os=osx

Node Name     Node ID                Address           Labels
---------     -------                -------           ------
graviton      33333333-aaaa-1284     10.1.0.7:3022     os:osx
```

## Interactive Shell

To launch an interactive shell on a remote node or to execute a command, use the
`tsh ssh` command:

```bash
$ tsh ssh --help

usage: tsh ssh [<flags>] <[user@]host> [<command>...]
Run shell or execute a command on a remote SSH node.

Flags:
      --user       SSH proxy user [ekontsevoy]
      --proxy      SSH proxy host or IP address, for example --proxy=host:ssh_port,https_port
      --ttl        Minutes to live for a SSH session
      --insecure   Do not verify server certificate and host name. Use only in test environments
  -d, --debug      Verbose logging to stdout
  -p, --port       SSH port on a remote host
  -l, --login      Remote host login
  -L, --forward    Forward localhost connections to remote server
      --local      Execute command on localhost after connecting to SSH node

Args:
  <[user@]host>    Remote hostname and the login to use
  [<command>]      Command to execute on a remote host
```

`tsh` tries to mimic the `ssh` experience as much as possible, so it supports the most popular `ssh`
flags like `-p`, `-l` or `-L`. For example, if you have the following alias defined in your
`~/.bashrc`: `alias ssh="tsh --proxy=work.example.com --user=myname"`, then you can continue
using the familiar SSH syntax:

```bash
$ ssh root@host
$ ssh -p 6122 root@host ls
```

### Proxy Ports

A Teleport proxy uses two ports: `3080` for HTTPS and `3023` for proxying SSH connections.
The HTTPS port is used to serve the Web UI and also to implement 2nd factor auth for the `tsh` client.

If your Teleport proxy is configured to listen on other ports, you should specify
them via the `--proxy` flag as shown:

```
tsh --proxy=host:5000,5001
```

This means _connect to port `5000` for the HTTPS proxy and to `5001` for the SSH proxy_.

### Port Forwarding

`tsh ssh` supports the OpenSSH `-L` flag, which forwards incoming connections from
localhost to the specified remote host:port. The syntax of the `-L` flag is:

```
-L [bind_interface]:listen_port:remote_host:remote_port
```

where "bind_interface" defaults to `127.0.0.1`.

Example:
```
$ tsh --proxy=work ssh -L 5000:web.remote:80 -d node
```

This will connect to the remote server `node` via the `work` proxy, open a
listening socket on `localhost:5000`, and forward all incoming connections to
`web.remote:80` via this SSH tunnel.

It is often convenient to establish port forwarding, execute a local command which
uses that connection, and then disconnect. You can do this with the `--local` flag.

Example:
```
$ tsh --proxy=work ssh -L 5000:google.com:80 --local node curl http://localhost:5000
```

This forwards just one `curl` request for `localhost:5000` to `google.com:80` via
the "node" server located behind the "work" proxy, and then terminates.

### Resolving Node Names

`tsh` supports multiple methods to resolve remote node names.

1. Traditional: by IP address or via DNS.
2. Nodename setting: the `teleport` daemon supports the `nodename` flag, which allows Teleport administrators to assign alternative node names.
3. Labels: you can address a node by a `name=value` label pair.

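As a sketch of the `nodename` setting (assuming the standard `teleport.yaml`
configuration file described in the Admin Guide), an administrator could assign
an alternative name in the node's config:

```yaml
# fragment of /etc/teleport.yaml on the node
teleport:
  nodename: graviton
```

The same effect can be achieved by passing the `--nodename` flag when starting
the `teleport` daemon.
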
In the example above, we have two nodes with the `os:linux` label and one node
with `os:osx`. Let's log into the OSX node:

```bash
$ tsh --proxy=work ssh os=osx
```

This only works if there is exactly one remote node with the `os:osx` label, but
you can still execute commands via SSH on multiple nodes using labels as a
selector. This command will update all system packages on machines that run Linux:

```bash
$ tsh --proxy=work ssh os=linux apt-get update -y
```

### Temporary Logins

Suppose you are borrowing someone else's computer to log in to a cluster. You
probably don't want to stay authenticated on this computer for 23 hours (the
Teleport default). This is where the `--ttl` flag can help.

This command logs you into the cluster with a very short-lived (1 minute) temporary certificate:

```bash
tsh --proxy=work --ttl=1 ssh
```

You will be logged out after one minute, but if you want to log out immediately, you can
always do:

```bash
tsh --proxy=work logout
```

## Copying Files

To securely copy files to and from cluster nodes, use the `tsh scp` command. It
is designed to mimic the traditional `scp` as much as possible:

```bash
$ tsh scp --help

usage: tsh scp [<flags>] <from, to>...
Secure file copy

Flags:
      --user        SSH proxy user [ekontsevoy]
      --proxy       SSH proxy host or IP address
      --ttl         Minutes to live for a SSH session
      --insecure    Do not verify server certificate and host name. Use only in test environments
  -P, --port        Port to connect to on the remote host
  -d, --debug       Verbose logging to stdout
  -r, --recursive   Recursive copy of subdirectories

Args:
  <from, to>        Source and the destination
```

Examples:

```bash
$ tsh --proxy=work scp example.txt root@node:/path/to/dest
```

Again, you may want to create a bash alias like `alias scp="tsh --proxy=work scp"` and use
the familiar syntax:

```bash
$ scp -P 61122 -r files root@node:/path/to/dest
```

## Sharing Sessions

Suppose you are trying to troubleshoot a problem on a remote server. Sometimes it
makes sense to ask another team member for help. Traditionally this could be done
by letting them know which node you're on, having them SSH in, start a terminal
multiplexer like `screen`, and join a session there.

Teleport makes this a bit more convenient. Let's log in to "luna" and ask Teleport
for your current session status:

```bash
$ tsh --proxy=work ssh luna
luna $ teleport status

User ID    : joe, logged in as joe from 10.0.10.1 43026 3022
Session ID : 7645d523-60cb-436d-b732-99c5df14b7c4
Session URL: https://work:3080/web/sessions/7645d523-60cb-436d-b732-99c5df14b7c4
```

Now you can invite another user account in the "work" cluster. You can share the
URL for access through a web browser, or you can share the session ID and she can
join you through her terminal by typing:

```bash
$ tsh --proxy=work join 7645d523-60cb-436d-b732-99c5df14b7c4
```

## Connecting to SSH Clusters behind Firewalls

Teleport supports creating clusters of servers located behind firewalls without any open ports.
This works by creating reverse SSH tunnels from behind-firewall environments into a Teleport
proxy you have access to. This feature is called "Trusted Clusters".

Assuming your "work" Teleport server is configured with a few trusted clusters, this is how you can
see a list of them:

```bash
$ tsh --proxy=work clusters

Cluster Name   Status
------------   ------
staging        online
production     offline
```

Now you can use the `--cluster` flag with any `tsh` command. For example, to list
the SSH nodes that are members of the "production" cluster, simply do:

```bash
$ tsh --proxy=work --cluster=production ls
Node Name   Node ID     Address           Labels
---------   -------     -------           ------
db-1        xxxxxxxxx   10.0.20.31:3022   kernel:4.4
db-2        xxxxxxxxx   10.0.20.41:3022   kernel:4.2
```

Similarly, if you want to SSH into `db-1` inside the "production" cluster:

```bash
$ tsh --proxy=work --cluster=production ssh db-1
```

This is possible even if the nodes of the "production" cluster are located behind
a firewall without open ports. This works because the "production" cluster
establishes a reverse SSH tunnel back into the "work" proxy, and this tunnel is
used to establish inbound SSH connections.

For more details on configuring Trusted Clusters, see [that section in the Admin Guide](admin-guide.md#trusted-clusters).

## Web UI

The Teleport proxy serves the web UI on `https://proxyhost:3080`. The UI allows
you to see the list of online nodes in a cluster, open a web-based terminal to
them, and see recorded sessions and replay them. You can also join other users
in active sessions.

You can copy & paste using the mouse. For working with a keyboard, Teleport
employs a `tmux`-like "prefix" mode. To enter prefix mode, press `Ctrl+A`.

While in prefix mode, you can press `Ctrl+V` to paste, or enter text selection
mode by pressing `[`. When in text selection mode, move around using `hjkl`,
select text by toggling `space`, and copy it via `Ctrl+C`.

## Troubleshooting

If you encounter strange behaviour, you may want to try to solve it by enabling
verbose logging with the `-d` flag when launching `tsh`.

You may also want to reset `tsh` to a clean state by deleting temporary keys and
other data from `~/.tsh`.

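A short transcript combining both troubleshooting tips (assuming a proxy named
`work` and a node named `luna`; flag placement may vary between versions):

```
# re-run the failing command with verbose logging enabled
$ tsh -d --proxy=work ssh luna

# wipe cached keys and other session data to start from a clean state
$ rm -rf ~/.tsh
```
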
## Getting Help

Please open an [issue on GitHub](https://github.com/gravitational/teleport/issues).
Alternatively, you can reach us through the contact form on our [website](https://gravitational.com/).

For commercial support, custom features, or to try our [Enterprise edition of Teleport](/enterprise/),
please reach out to us: `sales@gravitational.com`.

cd $(dirname $0)

# IMPORTANT! To add a new version, say 8.1
# * copy 2.3.yaml to 8.1.yaml
# * edit 8.1.yaml
# * edit theme/base.html and update docVersions variable
mkdocs build --config-file 1.3.yaml
mkdocs build --config-file 2.0.yaml
mkdocs build --config-file 2.3.yaml

# copy the index file which serves /docs requests and redirects
# visitors to the latest version of QuickStart