Edit two guides for Cloud users (#11470)

* Edit two guides for Cloud users

(1) Server Access Getting Started

While this guide mentions Teleport Cloud throughout, I wanted to make
sure that users of one edition wouldn't see scope-irrelevant details.

- Add scoped Tabs to the Prerequisites section
- Add the scoped "tctl.mdx" Details box
- Add ScopedBlocks for minor scope-relevant details like the address
  of the Proxy Service
- Edit the tctl.mdx partial to mention when you would use sudo to run
  tctl on a local machine, a detail that was added to this guide but
  could be included in any guide that uses the tctl.mdx partial.

Also makes misc. style, grammar, and clarity edits.

(2) Edit the OpenSSH guide and spin off a PRM guide

- The Recording Proxy Mode instructions don't apply to Cloud users.
  I didn't want to use a ScopedBlock to hide the relevant H2s, since
  these still appear within a page's table of contents. Instead, I
  separated the Recording Proxy Mode instructions into their own guide,
  and added an edition warning for Cloud users at the top of the guide.

  I also did some restructuring of the OpenSSH guide to clean it up after
  separating the Recording Proxy Mode instructions.

- Used ScopedBlocks to ensure that only scope-relevant information is
  shown in the OpenSSH guide.

- Misc grammar/style/clarity tweaks

* Respond to PR feedback
Paul Gottschling 2022-03-31 17:35:48 -04:00 committed by GitHub
parent 4ab64792c1
commit 6b5e2e7802
8 changed files with 688 additions and 307 deletions

@@ -298,6 +298,10 @@
"title": "OpenSSH Guide",
"slug": "/server-access/guides/openssh/"
},
{
"title": "Recording Proxy Mode",
"slug": "/server-access/guides/recording-proxy-mode/"
},
{
"title": "BPF Session Recording",
"slug": "/server-access/guides/bpf-session-recording/"

@@ -85,7 +85,7 @@ which can be used as a `ProxyCommand`.
Similarly to `tsh ssh`, `tsh proxy ssh` establishes a TLS tunnel to Teleport
proxy with `teleport-ssh-proxy` ALPN protocol, which `ssh` then connects over.
See [OpenSSH client](../server-access/guides/openssh.mdx#use-openssh-client)
See [OpenSSH client](../server-access/guides/openssh.mdx#use-the-openssh-client-to-access-teleport-nodes)
guide for details on how it's configured.
## Reverse tunnels

@ -14,6 +14,10 @@ $ tctl status
# Version (=teleport.version=)
# CA pin sha256:sha-hash-here
```
Remain logged in to your Auth Service host so you can run subsequent `tctl`
commands in this guide.
</Details>
<Details
title="Make sure you can connect to Teleport"
@@ -32,4 +36,7 @@ $ tctl status
# Version (=teleport.version=)
# CA pin sha256:sha-hash-here
```
You can run subsequent `tctl` commands in this guide on your local machine.
</Details>

@@ -4,8 +4,6 @@ description: Getting started with Teleport Server Access.
videoBanner: EsEvO5ndNDI
---
# Getting Started
Server Access involves managing your resources, configuring new clusters, and issuing commands through a CLI or programmatically to an API.
This guide introduces some of these common scenarios and how to interact with Teleport to accomplish them:
@@ -27,83 +25,143 @@ This guide introduces some of these common scenarios and how to interact with Te
## Prerequisites
- The Teleport Auth Service and Proxy Service, deployed on your own infrastructure or managed via Teleport Cloud.
- One host running your favorite Linux environment (such as Ubuntu 20.04, CentOS 8.0-1905, or Debian 10). This will serve as a Teleport Server Access Node.
- Teleport (=teleport.version=) installed locally.
<Tabs>
<TabItem scope={["oss"]} label="Self-Hosted">
<Admonition type="tip" title="New Teleport users">
If you have not yet deployed the Teleport Auth Service and Proxy Service, learn how to do so by following one of our [getting started guides](../getting-started.mdx).
</Admonition>
- A running Teleport cluster, version >= (=teleport.version=). For details on how to set this up,
see [Getting Started on a Linux
Server](../getting-started/linux-server.mdx).
- One host running your favorite Linux environment (such as Ubuntu 20.04, CentOS
8.0-1905, or Debian 10). This will serve as a Teleport Server Access Node.
- The `tsh` client tool version >= (=teleport.version=).
See [Installation](../installation.mdx) for details.
</TabItem>
<TabItem
scope={["enterprise"]} label="Enterprise">
- A running Teleport cluster, version >= (=teleport.version=). For details on setting this up, see
our [Enterprise getting started guide](../enterprise/getting-started.mdx).
- One host running your favorite Linux environment (such as Ubuntu 20.04, CentOS
8, or Debian 10). This will serve as a Teleport Server Access Node.
- The `tsh` client tool version >= (=teleport.version=).
You can download this by visiting the
[customer portal](https://dashboard.gravitational.com/web/login).
</TabItem>
<TabItem scope={["cloud"]}
label="Teleport Cloud">
- A Teleport Cloud account. If you do not have one, visit the
[sign up page](https://goteleport.com/signup/) to begin your free trial.
- One host running your favorite Linux environment (such as Ubuntu 20.04, CentOS
8.0-1905, or Debian 10). This will serve as a Teleport Server Access Node.
- The `tsh` and `tctl` client tools version >= (=teleport.version=).
See [Teleport Cloud Downloads](../cloud/downloads.mdx) for details.
</TabItem>
</Tabs>
(!docs/pages/includes/tctl.mdx!)
(!docs/pages/includes/permission-warning.mdx!)
## Step 1/4. Install Teleport
## Step 1/4. Install Teleport on your Linux host
1. Create a new instance of your desired Linux distribution (such as Ubuntu 20.04, CentOS 8.0-1905, or Debian 10).
This instance will be a private resource. Open port 22 so you can initially access, configure, and provision your instance. We'll configure and launch our instance, then demonstrate how to use the `tsh` tool and Teleport in SSH mode thereafter.
1. Your Linux host will be a private resource. Open port 22 so you can initially
access, configure, and provision your instance.
We'll configure and launch our instance, then demonstrate how to use the
`tsh` tool and Teleport in SSH mode.
2. Install Teleport on your instance.
(!docs/pages/includes/install-linux.mdx!)
Next, we'll create a **join token** to add and start Teleport Server Access on the Node.
Next, we'll create a **join token** so you can start the Teleport Node and
add it to your cluster.
## Step 2/4. Add a Node to the cluster
1. Create a join token to add the Node to your Teleport cluster. Run the following command, either on your Auth Service host (for self-hosted deployments) or on your local machine (for Teleport Cloud).
### Create a join token
<Details scope={["cloud"]} scopeOnly={true} title="Teleport Cloud and tctl">
Teleport Cloud users must download the Enterprise version of Teleport to their local machines in order to use `tctl`. To do so, visit the [Teleport Customer Portal](https://dashboard.gravitational.com/web/login).
Once this is done, log in to Teleport:
Next, create a join token so you can add the Node to your Teleport cluster.
```code
$ tsh login --proxy=myinstance.teleport.sh
# Let's save the token to a file
$ sudo tctl tokens add --type=node | grep -oP '(?<=token:\s).*' > token.file
```
If you have installed `tctl` as your local user, you will not need to run `tctl` commands via `sudo`.
</Details>
`--type=node` specifies that the Teleport Node will act and join as an SSH
server.
```code
# Let's save the token to a file
$ sudo tctl tokens add --type=node | grep -oP '(?<=token:\s).*' > token.file
```
`> token.file` indicates that you'd like to save the output to a file named `token.file`.
Each Teleport Node can be configured into SSH mode and run as an enhanced SSH server. `--type=node` specifies that the Teleport Node will act and join as an SSH server.
<Admonition type="tip" title="Tip">
This helps to minimize the direct sharing of tokens even when they are dynamically generated.
</Admonition>
`> token.file` indicates that you'd like to save the output to a file named `token.file`.
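The `grep -oP` lookbehind in the command above isolates just the token value from `tctl`'s output. As a quick illustration of how the extraction works (using made-up sample output; a real token is a long random string):

```shell
# Hypothetical sample of tctl's output; "abcd1234" is a placeholder token.
sample='The invite token: abcd1234
This token will expire in 60 minutes.'

# The lookbehind (?<=token:\s) matches only text preceded by "token: ",
# so just the token value itself is printed.
printf '%s\n' "$sample" | grep -oP '(?<=token:\s).*'
# → abcd1234
```

Note that the `-P` (Perl-compatible regex) flag requires GNU `grep`.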
### Join your Node to the cluster
<Admonition type="tip" title="Tip">
This helps to minimize the direct sharing of tokens even when they are dynamically generated.
</Admonition>
On your Node, save `token.file` to an appropriate, secure directory that you
have permission to read.
2. Now, open a new terminal and connect to the Teleport Auth Service.
<ScopedBlock scope={["oss", "enterprise"]}>
- On your Node, save `token.file` to an appropriate, secure directory that you have permission to read.
- Start the Node. Change `tele.example.com` to the address of your Teleport Proxy Service. For Teleport Cloud customers, use a tenant address such as `mytenant.teleport.sh`. Assign the `--token` flag to the path where you saved `token.file`.
Start the Node. Change `tele.example.com` to the address of your Teleport Proxy
Service. Assign the `--token` flag to the path where you saved
`token.file`.
```code
# Join cluster
$ sudo teleport start \
--roles=node \
--token=/path/to/token.file \
--auth-server=tele.example.com:443
```
```code
# Join cluster
$ sudo teleport start \
--roles=node \
--token=/path/to/token.file \
--auth-server=tele.example.com:443
```
3. Create a user to access the Web UI through the following command:
</ScopedBlock>
<ScopedBlock scope={["cloud"]}>
```code
$ sudo tctl users add tele-admin --roles=editor,access --logins=root,ubuntu,ec2-user
```
Start the Node. Change `mytenant.teleport.sh` to your Teleport Cloud tenant
address. Assign the `--token` flag to the path where you saved `token.file`.
This will generate an initial login link where you can set a password and set up Two-Factor Authentication for `tele-admin`.
```code
# Join cluster
$ sudo teleport start \
--roles=node \
--token=/path/to/token.file \
--auth-server=mytenant.teleport.sh:443
```
<Admonition type="note" title="Note">
We've only given `tele-admin` the roles `editor` and `access` according to the *Principle of Least Privilege* (POLP).
</Admonition>
</ScopedBlock>
4. You should now be able to view your Teleport Node in the Teleport Web interface after logging in as `tele-admin`:
### Access the Web UI
Run the following command to create a user that can access the Teleport Web UI:
```code
$ sudo tctl users add tele-admin --roles=editor,access --logins=root,ubuntu,ec2-user
```
This will generate an initial login link where you can create a password and set up two-factor authentication for `tele-admin`.
<Admonition type="note" title="Note">
We've only given `tele-admin` the roles `editor` and `access` according to the Principle of Least Privilege.
</Admonition>
You should now be able to view your Teleport Node in the Teleport Web UI after
logging in as `tele-admin`:
<Figure
align="center"
@@ -115,35 +173,81 @@ If you have installed `tctl` as your local user, you will not need to run `tctl`
## Step 3/4. SSH into the server
Now, that we've got our cluster up and running, let's see how easy it is to connect to our Node.
Now that we've got our cluster up and running, let's see how easy it is to
connect to our Node.
We can use `tsh` to SSH into the cluster:
1. On your local machine, log in through `tsh`, assigning the `--proxy` flag to the address of your Teleport Proxy Service:
### Log in to the cluster
```code
# Log in through tsh
$ tsh login --proxy=tele.example.com --user=tele-admin
```
<ScopedBlock scope={["oss", "enterprise"]}>
You'll be prompted to supply the password and second factor we set up previously.
On your local machine, log in to your cluster through `tsh`, assigning the
`--proxy` flag to the address of your Teleport Proxy Service:
2. `tele-admin` will now see something similar to:
```code
# Log in through tsh
$ tsh login --proxy=tele.example.com --user=tele-admin
```
```txt
Profile URL: https://tele.example.com:443
Logged in as: tele-admin
Cluster: tele.example.com
Roles: access, editor
Logins: root, ubuntu, ec2-user
Kubernetes: disabled
Valid until: 2021-04-30 06:39:13 -0500 CDT [valid for 12h0m0s]
Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty
```
</ScopedBlock>
<ScopedBlock scope={["cloud"]}>
In this example, `tele-admin` is now logged into the `tele.example.com` cluster through Teleport SSH.
On your local machine, log in to your cluster through `tsh`, assigning the
`--proxy` flag to the address of your Teleport Cloud tenant:
3. `tele-admin` can now execute the following to find the cluster's `nodenames`. `nodenames` are used for establishing SSH connections:
```code
# Log in through tsh
$ tsh login --proxy=mytenant.teleport.sh --user=tele-admin
```
</ScopedBlock>
You'll be prompted to supply the password and second factor we set up previously.
`tele-admin` will now see something similar to:
<ScopedBlock scope={["oss", "enterprise"]}>
```txt
> Profile URL: https://tele.example.com:443
Logged in as: tele-admin
Cluster: tele.example.com
Roles: access, editor
Logins: root, ubuntu, ec2-user
Kubernetes: disabled
Valid until: 2021-04-30 06:39:13 -0500 CDT [valid for 12h0m0s]
Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty
```
In this example, `tele-admin` is now logged into the `tele.example.com` cluster
through Teleport SSH.
</ScopedBlock>
<ScopedBlock scope={["cloud"]}>
```txt
> Profile URL: https://mytenant.teleport.sh:443
Logged in as: tele-admin
Cluster: mytenant.teleport.sh
Roles: access, editor
Logins: root, ubuntu, ec2-user
Kubernetes: disabled
Valid until: 2021-04-30 06:39:13 -0500 CDT [valid for 12h0m0s]
Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty
```
In this example, `tele-admin` is now logged into the `mytenant.teleport.sh`
cluster through Teleport SSH.
</ScopedBlock>
### Display cluster resources
`tele-admin` can now execute the following to find the cluster's Node names,
which are used for establishing SSH connections:
```code
# Display cluster resources
@@ -159,91 +263,112 @@ We can use `tsh` to SSH into the cluster:
ip-172-31-41-144 127.0.0.1:3022 env=example, hostname=ip-172-31-41-144
```
4. `tele-admin` can SSH into the bastion host Node by running the following command locally:
### Connect to a Node
```code
# Use tsh to ssh into a Node
$ tsh ssh root@ip-172-31-41-144
```
`tele-admin` can SSH into the bastion host Node by running the following command locally:
Now, they can:
```code
# Use tsh to ssh into a Node
$ tsh ssh root@ip-172-31-41-144
```
- Connect to other Nodes in the cluster by using the appropriate IP address in the `tsh ssh` command.
- Traverse the Linux file system.
- Execute desired commands.
Now, they can:
All commands executed by `tele-admin` are recorded and can be replayed in the Teleport Web UI.
- Connect to other Nodes in the cluster by using the appropriate IP address in the `tsh ssh` command.
- Traverse the Linux file system.
- Execute desired commands.
The `tsh ssh` command allows one to do anything they would if they were to SSH into a server using a third-party tool. Compare the two equivalent commands:
All commands executed by `tele-admin` are recorded and can be replayed in the Teleport Web UI.
The `tsh ssh` command allows users to do anything they could if they were to SSH into a server using a third-party tool. Compare the two equivalent commands:
<Tabs>
<TabItem label="tsh">
```code
$ tsh ssh root@ip-172-31-41-144
```
</TabItem>
<TabItem label="ssh">
<ScopedBlock scope={["oss", "enterprise"]}>
```code
$ ssh -J tele.example.com root@ip-172-31-41-144
```
</ScopedBlock>
<ScopedBlock scope={["cloud"]}>
```code
$ ssh -J mytenant.teleport.sh root@ip-172-31-41-144
```
</ScopedBlock>
</TabItem>
</Tabs>
## Step 4/4. Use tsh and the unified resource catalog to introspect the cluster
1. Now, `tele-admin` has the ability to SSH into other Nodes within the cluster, traverse the Linux file system, and execute commands.
Now, `tele-admin` has the ability to SSH into other Nodes within the cluster, traverse the Linux file system, and execute commands.
- They have visibility into all resources within the cluster due to their defined and assigned roles.
- They can also quickly view any Node or grouping of Nodes that have been assigned a particular label.
- They have visibility into all resources within the cluster due to their defined and assigned roles.
- They can also quickly view any Node or grouping of Nodes that have been assigned a particular label.
2. Execute the following command within your bastion host console:
### Display the unified resource catalog
```code
# List Nodes
$ sudo tctl nodes ls
```
Execute the following command within your bastion host console:
It displays the unified resource catalog with all queried resources in one view:
```code
# List Nodes
$ sudo tctl nodes ls
```
```txt
Nodename UUID Address Labels
---------------- ------------------------------------ -------------- -------------------------------------
ip-172-31-35-170 4980899c-d260-414f-9aea-874feef71747
ip-172-31-41-144 f3d2a65f-3fa7-451d-b516-68d189ff9ae5 127.0.0.1:3022 env=example,hostname=ip-172-31-41-144
```
This displays the unified resource catalog with all queried resources in one view:
3. Note the "Labels" column on the farthest side. `tele-admin` can query all resources with a shared label using the command:
```txt
Nodename UUID Address Labels
---------------- ------------------------------------ -------------- -------------------------------------
ip-172-31-35-170 4980899c-d260-414f-9aea-874feef71747
ip-172-31-41-144 f3d2a65f-3fa7-451d-b516-68d189ff9ae5 127.0.0.1:3022 env=example,hostname=ip-172-31-41-144
```
```code
# Query all Nodes with a label
$ tsh ls env=example
```
Note the "Labels" column on the farthest side. `tele-admin` can query all resources with a shared label using the command:
Customized labels can be defined in your `teleport.yaml` configuration file or during Node creation.
```code
# Query all Nodes with a label
$ tsh ls env=example
```
This is a convenient feature that allows for more advanced queries. If an IP address changes, for example, an admin can quickly find the current Node with that label since it remains unchanged.
Customized labels can be defined in your `teleport.yaml` configuration file or during Node creation.
4. `tele-admin` can also execute commands on all Nodes that share a label, vastly simplifying repeated operations. For example, the command:
This is a convenient feature that allows for more advanced queries. If an IP address changes, for example, an admin can quickly find the current Node with that label since it remains unchanged.
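For reference, static labels like `env=example` can be declared under the `ssh_service` section of a Node's `teleport.yaml`. A minimal sketch (the label values here are illustrative):

```yaml
# Snippet from /etc/teleport.yaml on the Node
ssh_service:
  enabled: yes
  labels:
    env: example
```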
```code
# Run the ls command on all Nodes with a label
$ tsh ssh root@env=example ls
```
### Run commands on all Nodes with a label
will execute the `ls` command on each Node and display the results in your terminal.
`tele-admin` can also execute commands on all Nodes that share a label, vastly simplifying repeated operations. For example, the command:
```code
# Run the ls command on all Nodes with a label
$ tsh ssh root@env=example ls
```
will execute the `ls` command on each Node and display the results in your terminal.
## Optional: Harden your bastion host
We previously configured our Linux instance to leave port `22` open to easily configure and install Teleport. Feel free to compare Teleport SSH to your usual `ssh` commands.
If you'd like to further experiment with using Teleport according to the bastion pattern:
- Close port `22` on your private Linux instance now that your Teleport Node is configured and running.
- For self-hosted deployments, optionally close port `22` on your bastion host.
- You'll be able to fully connect to the private instance and, for self-hosted deployments, the bastion host, using `tsh ssh`.
## Conclusion
<Admonition type="tip" title="Note">
We previously configured our Linux instance to leave port `22` open to easily configure and install Teleport. Feel free to compare Teleport SSH to your usual `ssh` commands.
If you'd like to further experiment with using Teleport according to the bastion pattern:
- Close port `22` on your private Linux instance now that your Teleport Node is configured and running.
- For self-hosted deployments, optionally close port `22` on your bastion host.
- You'll be able to fully connect to the private instance and, for self-hosted deployments, the bastion host, using `tsh ssh`.
</Admonition>
To recap, this guide described:
1. How to set up and add an SSH Node to a cluster.

@@ -14,6 +14,9 @@ layout: tocless-doc
<Tile icon="server" title="OpenSSH Guide" href="./guides/openssh.mdx">
How to use Teleport on legacy systems with OpenSSH and sshd.
</Tile>
<Tile icon="server" title="Recording Proxy Mode" href="./guides/recording-proxy-mode.mdx">
How to use Teleport Recording Proxy Mode to capture activity on OpenSSH servers.
</Tile>
<Tile icon="server" title="BPF Session Recording" href="./guides/bpf-session-recording.mdx">
How to use BPF to record SSH session commands, modified files and network connections.
</Tile>

View file

@@ -5,16 +5,18 @@ videoBanner: x0eYFUEIOrM
---
Teleport is fully compatible with OpenSSH and can be quickly set up to record and
audit all SSH activity. Using Teleport and OpenSSH has the advantage of getting you up
audit all SSH activity.
Using Teleport and OpenSSH has the advantage of getting you up
and running, but in the long run, we would recommend replacing `sshd` with `teleport`.
We've outlined these reasons in [OpenSSH vs Teleport SSH for Servers?](https://gravitational.com/blog/openssh-vs-teleport/)
Teleport is a standards-compliant SSH proxy and it can work in environments with
existing SSH implementations, such as OpenSSH. This section will cover:
Teleport is a standards-compliant SSH proxy and can work in environments with
existing SSH implementations, such as OpenSSH. This guide will cover:
- Configuring OpenSSH server `sshd` to join a Teleport cluster. Existing fleets of
- Configuring the OpenSSH server `sshd` to join a Teleport cluster. Existing fleets of
OpenSSH servers can be configured to accept SSH certificates dynamically issued by a Teleport CA.
- Configuring OpenSSH client `ssh` to login into nodes inside a Teleport
- Configuring the OpenSSH client `ssh` to log in to Nodes inside a Teleport
cluster.
<Admonition
@@ -28,94 +30,36 @@ existing SSH implementations, such as OpenSSH. This section will cover:
```
</Admonition>
## Overview
(!docs/pages/includes/tctl.mdx!)
<Figure
align="center"
bordered
caption="Teleport OpenSSH Recording Proxy"
>
![Teleport OpenSSH Recording Proxy](../../../img/server-access/openssh-proxy.svg)
</Figure>
## Configure an OpenSSH server to join a Teleport cluster
Recording proxy mode, although less secure, was added to allow Teleport users
to enable session recording for OpenSSH servers running `sshd`, which is helpful
when gradually transitioning large server fleets to Teleport.
`sshd` must be told to allow users to log in with certificates generated
by the Teleport User CA. Start by exporting the Teleport CA public key.
We consider the "recording proxy mode" to be less secure for two reasons:
- It grants additional privileges to the Teleport proxy. In the default "node recording" mode, the proxy stores no secrets and cannot "see" the decrypted data. This makes a proxy less critical to the security of the overall cluster. But if an attacker gains physical access to a proxy node running in the "recording" mode, they will be able to see the decrypted traffic and client keys stored in the proxy's process memory.
- Recording proxy mode requires the use of SSH agent forwarding. Agent forwarding is required because, without it, a proxy cannot establish the second connection to the destination node.
The Teleport Proxy Service should be reachable by clients and set up with TLS.
Teleport's OpenSSH integration supports the following host address formats:
- FQDN `ec2-user@ip-172-31-14-137.us-west-2.compute.internal`
- IPv4 `ubuntu@184.45.45.30`
- IPv6 `root@2001:db8::2`
## Set up OpenSSH recording proxy mode
The first step is to install and set up Teleport. We recommend starting with our [Getting Started Guide](../../getting-started.mdx) and [Admin Manual](../../setup/admin.mdx).
(!docs/pages/includes/permission-warning.mdx!)
(!docs/pages/includes/backup-warning.mdx!)
To enable session recording for `sshd` nodes, the cluster must be switched to
["recording proxy" mode](../../architecture/proxy.mdx#recording-proxy-mode).
In this mode, recording is performed at the proxy level:
```yaml
# snippet from /etc/teleport.yaml
auth_service:
# Session Recording must be set to Proxy to work with OpenSSH
session_recording: "proxy" # can also be "off" and "node" (default)
```
Next, `sshd` must be told to allow users to log in with certificates generated
by the Teleport User CA. Start by exporting the Teleport CA public key:
Export the Teleport Certificate Authority certificate into a file and update
SSH configuration to trust Teleport's CA:
Export the Teleport certificate authority certificate into a file and update
your SSH configuration to trust Teleport's CA:
```code
# tctl needs to be run on the auth server.
# tctl needs to be run on the Auth Server.
$ sudo tctl auth export --type=user | sed s/cert-authority\ // > teleport_user_ca.pub
$ sudo mv ./teleport_user_ca.pub /etc/ssh/teleport_user_ca.pub
$ echo "TrustedUserCAKeys /etc/ssh/teleport_user_ca.pub" | sudo tee -a /etc/ssh/sshd_config
```
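The `sed` expression above removes the `cert-authority ` marker that `tctl auth export` prepends, leaving a bare public key that `sshd` accepts in `TrustedUserCAKeys`. A quick illustration with a truncated, made-up key:

```shell
# "ssh-rsa AAAAB3Nza..." stands in for a real exported CA public key.
line='cert-authority ssh-rsa AAAAB3Nza... teleport_user_ca'

# sed deletes the literal "cert-authority " prefix.
printf '%s\n' "$line" | sed s/cert-authority\ //
# → ssh-rsa AAAAB3Nza... teleport_user_ca
```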
Restart SSH daemon.
Restart `sshd`.
Now, `sshd` will trust users who present a Teleport-issued certificate.
The next step is to configure host authentication.
When in recording mode, Teleport will check that the host certificate of any
node a user connects to is signed by a Teleport CA. By default, this is a strict
check. If the node presents just a key or a certificate signed by a different
CA, Teleport will reject this connection with the error message saying *"ssh:
handshake failed: remote host presented a public key, expected a host
certificate"*
You can disable strict host checks as shown below. However, this opens the
possibility for Man-in-the-Middle (MITM) attacks and is not recommended.
```yaml
# snippet from /etc/teleport.yaml
auth_service:
proxy_checks_host_keys: no
```
The recommended solution is to ask Teleport to issue valid host certificates for
all OpenSSH nodes. To generate a host certificate, run this on your Teleport auth server:
all OpenSSH nodes. To generate a host certificate, run the following `tctl` command:
```code
# Creating host certs, with an array of every host to be accessed.
# Wildcard certs aren't supported by OpenSSH, must be full FQDN.
# Management of the host certificates can become complex, this is another
# Wildcard certs aren't supported by OpenSSH. The domain must be fully
# qualified.
# Management of the host certificates can become complex. This is another
# reason we recommend using Teleport SSH on nodes.
$ sudo tctl auth sign \
--host=api.example.com,ssh.example.com,64.225.88.175,64.225.88.178 \
@@ -144,90 +88,48 @@ $ ssh-keygen -L -f api.example.com-cert.pub
# x-teleport-role UNKNOWN OPTION (len 8)
```
Then add the following lines to `/etc/ssh/sshd_config` on all OpenSSH nodes, and restart `sshd`.
Then add the following lines to `/etc/ssh/sshd_config` on all OpenSSH nodes, and
restart `sshd`.
```yaml
HostKey /etc/ssh/api.example.com
HostCertificate /etc/ssh/api.example.com-cert.pub
```
Now you can use [`tsh ssh --port=22 user@api.example.com`](../../setup/reference/cli.mdx#tsh) to log in
to any `sshd` node in the cluster, and the session will be recorded.
## Use the OpenSSH client to access Teleport Nodes
```code
# tsh ssh to use default ssh port:22
$ tsh ssh --port=22 user@host.example.com
# Example for an Amazon EC2 host
# tsh ssh --port=22 ec2-user@ec2-54-EXAMPLE.us-west-2.compute.amazonaws.com
```
If you want to use the OpenSSH `ssh` client to log in to `sshd` servers behind a proxy
in "recording mode", you must tell the `ssh` client to use the jump host and
enable SSH agent forwarding; otherwise, the recording proxy will not be able to
terminate the SSH connection in order to record it:
```code
# Note that agent forwarding is enabled twice: once from the client to the proxy
# (mandatory if using a recording proxy), and then optionally from the proxy
# to the end server if you want your agent available on the end server
$ ssh -o "ForwardAgent yes" \
-o "ProxyCommand ssh -o 'ForwardAgent yes' -p 3023 %r@p.example.com -s proxy:%h:%p" \
user@host.example.com
```
<Admonition
type="tip"
title="Tip"
>
To avoid typing all of this and instead use the usual `ssh user@host.example.com`, users can update their `~/.ssh/config` file.
</Admonition>
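A sketch of such a `~/.ssh/config` entry, assuming the example proxy above (`p.example.com`); adjust the host pattern and port to your environment:

```txt
# Snippet from ~/.ssh/config (illustrative)
Host *.example.com !p.example.com
    ForwardAgent yes
    ProxyCommand ssh -o 'ForwardAgent yes' -p 3023 %r@p.example.com -s proxy:%h:%p
```

With this in place, `ssh user@host.example.com` routes through the recording proxy automatically.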
### Set up SSH agent forwarding
It's important to remember that SSH agent forwarding must be enabled on the
client. Verify that a Teleport certificate is loaded into the agent after
logging in:
```code
# Login as Joe
$ tsh login --proxy=proxy.example.com --user=joe
# See if the certificate is present (look for "teleport:joe" at the end of the cert)
$ ssh-add -L
```
<Admonition
type="warning"
title="GNOME Keyring SSH Agent and GPG Agent"
>
It is well-known that the Gnome Keyring SSH agent, used by many popular Linux desktops like Ubuntu, and `gpg-agent` from GnuPG do not support SSH
certificates. We recommend using the `ssh-agent` from OpenSSH.
Alternatively, you can disable SSH agent integration entirely using the
`--no-use-local-ssh-agent` flag or `TELEPORT_USE_LOCAL_SSH_AGENT=false`
environment variable with `tsh`.
</Admonition>
## Use OpenSSH client
It is possible to use the OpenSSH client `ssh` to connect to nodes within a
It is possible to use the OpenSSH client `ssh` to connect to Nodes within a
Teleport cluster. Teleport supports SSH subsystems and includes a `proxy` subsystem that can be used, like `netcat`, with `ProxyCommand` to connect
through a jump host.
OpenSSH client configuration may be generated automatically by `tsh`, or it can
be configured manually. In either case, make sure you are running OpenSSH's
`ssh-agent`, and have logged in to the Teleport proxy:
`ssh-agent`, and have logged in to the Teleport Proxy Service:
<ScopedBlock scope={["oss","enterprise"]}>
```code
$ eval `ssh-agent`
$ tsh --proxy=root.example.com login
```
</ScopedBlock>
<ScopedBlock scope={["cloud"]}>
```code
$ eval `ssh-agent`
$ tsh --proxy=mytenant.teleport.sh login
```
</ScopedBlock>
`ssh-agent` will print environment variables into the console. Either `eval` the
output as in the example above, or copy and paste the output into the shell you
will be using to connect to a Teleport node. The output exports the
will be using to connect to a Teleport Node. The output exports the
`SSH_AUTH_SOCK` and `SSH_AGENT_PID` environment variables that allow OpenSSH
clients to find the SSH agent.
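For reference, `ssh-agent`'s output looks roughly like the following (the socket path and PIDs will differ on your machine):

```txt
SSH_AUTH_SOCK=/tmp/ssh-XXXXXXXX/agent.12345; export SSH_AUTH_SOCK;
SSH_AGENT_PID=12346; export SSH_AGENT_PID;
echo Agent pid 12346;
```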
### Automatic Setup
### Automatic setup
<Admonition
type="note"
@@ -240,14 +142,27 @@ clients to find the SSH agent.
`tsh` can automatically generate the necessary OpenSSH client configuration to
connect using the standard OpenSSH client:
<ScopedBlock scope={["oss","enterprise"]}>
```code
# on the machine where you want to run the ssh client
# On the machine where you want to run the SSH client
$ tsh --proxy=root.example.com config
```
</ScopedBlock>
<ScopedBlock scope={["cloud"]}>
```code
# On the machine where you want to run the SSH client
$ tsh --proxy=mytenant.teleport.sh config
```
</ScopedBlock>
This will generate an OpenSSH client configuration block for the root cluster
and all currently-known leaf clusters. Append this to your local OpenSSH config
file (usually `~/.ssh/config`) using your text editor of choice.
and all currently-known leaf clusters (if you are using Trusted Clusters).
Append this to your local OpenSSH config file (usually `~/.ssh/config`) using
your text editor of choice.
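The generated block is roughly of the following shape (a hedged sketch; the exact contents, paths, and flags vary by `tsh` version and cluster name):

```txt
# Begin generated Teleport configuration from `tsh config` (illustrative)
Host *.root.example.com root.example.com
    UserKnownHostsFile "~/.tsh/known_hosts"
    IdentityFile "~/.tsh/keys/root.example.com/user"
    CertificateFile "~/.tsh/keys/root.example.com/user-ssh/root.example.com-cert.pub"

Host *.root.example.com !root.example.com
    Port 3022
    ProxyCommand tsh proxy ssh --cluster=root.example.com --proxy=root.example.com %r@%h:%p
```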
<Admonition
type="warning"
@@ -262,8 +177,9 @@ file (usually `~/.ssh/config`) using your text editor of choice.
```
</Admonition>
Once configured, log into any node in the `root.example.com` cluster as any
principal listed in your Teleport profile:
<ScopedBlock scope={["oss", "enterprise"]}>
Once configured, log in to any Node in the `root.example.com` cluster:
```code
$ ssh user@node1.root.example.com
```
This will connect to the Node `node1` in the `root.example.com` cluster. This
name does not need to be DNS accessible, as the connection will be routed through
your Teleport Proxy Service.
If any [trusted clusters](../../setup/admin/trustedclusters.mdx) exist, they are also configured:
```code
$ ssh user@node2.leaf.example.com
```
When connecting to Nodes with Teleport daemons running on non-standard ports
(other than `3022`), a port may be specified:
```code
$ ssh -p 4022 user@node3.leaf.example.com
```
</ScopedBlock>
<ScopedBlock scope={["cloud"]}>
Once configured, log in to any Node in the `mytenant.teleport.sh` cluster:
```code
$ ssh user@node1.mytenant.teleport.sh
```
This will connect to the Node `node1` in the `mytenant.teleport.sh` cluster. This
name does not need to be DNS accessible, as the connection will be routed through
your Teleport Proxy Service.
If any Trusted Clusters exist, they are also configured:
```code
$ ssh user@node2.mytenant.teleport.sh
```
When connecting to Nodes with Teleport daemons running on non-standard ports
(other than `3022`), a port may be specified:
```code
$ ssh -p 4022 user@node3.mytenant.teleport.sh
```
</ScopedBlock>
<Admonition
type="tip"
title="Automatic OpenSSH and Multiple Clusters"
>
If you switch between multiple Teleport Proxy Servers, you'll need to re-run
`tsh config` for each to generate the cluster-specific configuration.
Similarly, if Trusted Clusters are added or removed, be sure to re-run the
above command and replace the previous configuration.
</Admonition>
### Manual setup
On your client machine, you need to import the public key of Teleport's host
certificate. This will allow your OpenSSH client to verify that host certificates
are signed by Teleport's trusted host CA:
```code
# On the Teleport Auth Server
$ tctl auth export --type=host > teleport_host_ca.pub
# On the machine where you want to run the ssh client
$ cat teleport_host_ca.pub >> ~/.ssh/known_hosts
```
If you have multiple Teleport clusters, you have to export and set up these
certificate authorities for each cluster individually.
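The exported file contains one or more `@cert-authority` entries. As a rough,
hypothetical illustration (the key material below is a placeholder; the real
values come from your own cluster's `tctl auth export` output), an entry in
`~/.ssh/known_hosts` looks like:

```txt
# Placeholder example; your cluster name and key data will differ.
@cert-authority *.root.example.com ssh-rsa AAAAB3NzaC1yc2E...truncated...
```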
<ScopedBlock scope={["oss", "enterprise"]}>
<Admonition
type="tip"
title="OpenSSH and Trusted Clusters"
>
If you use [Recording Proxy Mode](../../architecture/proxy.mdx) and [Trusted Clusters](../../setup/admin/trustedclusters.mdx),
you need to set up the certificate authority from
the root cluster to match all Nodes, even those that belong to leaf
clusters.
For example, if your Node naming scheme is `*.root.example.com`,
`*.leaf1.example.com`, `*.leaf2.example.com`, then the
`@certificate-authority` entry should match `*.example.com` and use the CA
from the root Auth Server only.
</Admonition>
</ScopedBlock>
<ScopedBlock scope={["oss", "enterprise"]}>
Lastly, configure the OpenSSH client to use the Teleport Proxy Service when connecting
to Nodes with matching names. Edit `~/.ssh/config` for your user or
`/etc/ssh/ssh_config` for global changes:
```txt
# root.example.com is the jump host (Proxy Service). Credentials will be
# obtained from the SSH agent.
Host root.example.com
HostName 192.168.1.2
Port 3023
# Connect to Nodes in the root.example.com cluster through the jump
# host (Proxy Service). Credentials will be obtained from the
# SSH agent.
Host *.root.example.com
HostName %h
Port 3022
ProxyCommand tsh proxy ssh %r@%h:%p
# Connect to Nodes within a Trusted Cluster with the name "leaf1.example.com".
Host *.leaf1.example.com
HostName %h
Port 3022
ProxyCommand tsh proxy ssh --cluster=leaf1.example.com %r@%h:%p
```
When everything is configured properly, you can use SSH to connect to any Node
behind `root.example.com`:
```code
$ ssh root@database.root.example.com
```
</ScopedBlock>
<ScopedBlock scope={["cloud"]}>
Lastly, configure the OpenSSH client to use the Teleport Proxy Service when connecting
to Nodes with matching names. Edit `~/.ssh/config` for your user or
`/etc/ssh/ssh_config` for global changes:
```txt
# mytenant.teleport.sh is the jump host (Proxy Service). Credentials will be
# obtained from the SSH agent.
Host mytenant.teleport.sh
HostName 192.168.1.2
Port 3023
# Connect to Nodes in the mytenant.teleport.sh cluster through the jump
# host (Proxy Service). Credentials will be obtained from the
# SSH agent.
Host *.mytenant.teleport.sh
HostName %h
Port 3022
ProxyCommand tsh proxy ssh %r@%h:%p
# Connect to Nodes within a Trusted Cluster with the name "leaf1.mytenant.teleport.sh".
Host *.leaf1.mytenant.teleport.sh
HostName %h
Port 3022
ProxyCommand tsh proxy ssh --cluster=leaf1.mytenant.teleport.sh %r@%h:%p
```
When everything is configured properly, you can use SSH to connect to any Node
behind `mytenant.teleport.sh`:
```code
$ ssh root@database.mytenant.teleport.sh
```
</ScopedBlock>
<Admonition
type="tip"
title="Note"
>
Teleport uses OpenSSH certificates instead of keys, which means you cannot ordinarily connect to a Teleport Node by IP address. You have to connect by
DNS name. This is because OpenSSH ensures the DNS name of the Node you are connecting to is listed under the `Principals` section of the OpenSSH certificate to verify you are connecting to the correct Node.
</Admonition>
To connect to the OpenSSH server via `tsh`, add the `--port=<ssh port>` flag to the `tsh ssh` command:
<ScopedBlock scope={["oss","enterprise"]}>
Example `tsh ssh` command to access `database.root.example.com` as `dev` with
an OpenSSH server listening on port 22:
```code
$ tsh ssh --port=22 dev@database.root.example.com
```
</ScopedBlock>
<ScopedBlock scope={["cloud"]}>
Example `tsh ssh` command to access `database.mytenant.teleport.sh` as `dev` with
an OpenSSH server listening on port 22:
```code
$ tsh ssh --port=22 dev@database.mytenant.teleport.sh
```
</ScopedBlock>
<Admonition
type="warning"
title="Warning"
The principal/username (`dev@` in the example above) being used to connect must be listed in the Teleport user/role configuration.
</Admonition>
## Revoke an SSH certificate
To revoke the current Teleport CA and generate a new one, run `tctl auth rotate`. Unless you've highly automated your


---
title: Teleport Recording Proxy Mode
description: Use Recording Proxy Mode to capture OpenSSH server activity
---
<Notice scope={["cloud"]} type="warning">
Teleport Cloud only supports session recording at the Node level. If you are
interested in setting up session recording, read our
[Server Access Getting Started Guide](../getting-started.mdx) so you can start
replacing your OpenSSH servers with Teleport Nodes.
</Notice>
<Figure
align="center"
bordered
caption="Teleport OpenSSH Recording Proxy"
>
![Teleport OpenSSH Recording Proxy](../../../img/server-access/openssh-proxy.svg)
</Figure>
Teleport Recording Proxy Mode was added to allow Teleport users to enable
session recording for servers running OpenSSH's `sshd`, which is helpful when
gradually transitioning large server fleets to Teleport.
We consider Recording Proxy Mode to be less secure than recording at the Node
level for two reasons:
- It grants additional privileges to the Teleport Proxy Service. In the default Node Recording mode, the Proxy Service stores no secrets and cannot "see" the decrypted data. This makes a Proxy Server less critical to the security of the overall cluster. But if an attacker gains physical access to a Proxy Server running in Proxy Recording mode, they will be able to see the decrypted traffic and client keys stored in the Proxy Server's process memory.
- Recording Proxy Mode requires the use of SSH agent forwarding: without it, a Proxy Server will not be able to establish a second connection to the destination Node.
The Teleport Proxy Service should be available to clients and set up with TLS.
## Prerequisites
<Tabs>
<TabItem scope={["oss"]} label="Self-Hosted">
- A running Teleport cluster. For details on how to set this up, see [Getting
Started on a Linux Server](../../getting-started/linux-server.mdx).
- The `tctl` admin tool version >= (=teleport.version=).
```code
$ tctl version
# Teleport v(=teleport.version=) go(=teleport.golang=)
```
See [Installation](../../installation.mdx) for details.
- A host where you will run an OpenSSH server.
</TabItem>
<TabItem
scope={["enterprise"]} label="Enterprise">
- A running Teleport cluster. For details on setting this up, see our
[Enterprise getting started guide](../../enterprise/getting-started.mdx).
- The `tctl` admin tool version >= (=teleport.version=), which you can download
by visiting the
[customer portal](https://dashboard.gravitational.com/web/login).
```code
$ tctl version
# Teleport v(=teleport.version=) go(=teleport.golang=)
```
- A host where you will run an OpenSSH server.
</TabItem>
</Tabs>
## Step 1/3. Configure Teleport
(!docs/pages/includes/permission-warning.mdx!)
(!docs/pages/includes/backup-warning.mdx!)
### Enable Proxy Recording Mode
To enable session recording for `sshd` nodes, the cluster must be switched to
Recording Proxy Mode. In this mode, recording is performed at the Proxy Service level.
Edit the Auth Service configuration file as follows:
```yaml
# snippet from /etc/teleport.yaml
auth_service:
# Session Recording must be set to Proxy to work with OpenSSH
session_recording: "proxy" # can also be "off" and "node" (default)
```
### Optional insecure step: Disable strict host checking
When in recording mode, Teleport will check that the host certificate of any
Node a user connects to is signed by a Teleport CA. By default, this is a strict
check. If the Node presents just a key or a certificate signed by a different
CA, Teleport will reject this connection with the error message:
```text
ssh: handshake failed: remote host presented a public key, expected a host
certificate
```
You can disable strict host checks as shown below. However, this opens the
possibility for Person-in-the-Middle attacks and is not recommended.
```yaml
# snippet from /etc/teleport.yaml
auth_service:
proxy_checks_host_keys: no
```
## Step 2/3. Configure `sshd`
`sshd` must be told to allow users to log in with certificates generated
by the Teleport User CA. Start by exporting the Teleport CA public key.
On your Teleport Node, export the Teleport Certificate Authority certificate
into a file and update your SSH configuration to trust Teleport's CA:
```code
# tctl needs to be run on the Auth Server.
$ sudo tctl auth export --type=user | sed s/cert-authority\ // > teleport_user_ca.pub
$ sudo mv ./teleport_user_ca.pub /etc/ssh/teleport_user_ca.pub
$ echo "TrustedUserCAKeys /etc/ssh/teleport_user_ca.pub" | sudo tee -a /etc/ssh/sshd_config
```
Restart `sshd`.
Now, `sshd` will trust users who present a Teleport-issued certificate.
The next step is to configure host authentication.
The recommended solution is to ask Teleport to issue valid host certificates for
all OpenSSH nodes. To generate a host certificate, run this on your Teleport Auth Server:
```code
# Creating host certs, with an array of every host to be accessed.
# Wildcard certs aren't supported by OpenSSH. The domain must be fully
# qualified.
# Management of the host certificates can become complex. This is another
# reason we recommend using Teleport SSH on nodes.
$ sudo tctl auth sign \
--host=api.example.com,ssh.example.com,64.225.88.175,64.225.88.178 \
--format=openssh \
--out=api.example.com
The credentials have been written to api.example.com, api.example.com-cert.pub
# You can use ssh-keygen to verify the contents.
$ ssh-keygen -L -f api.example.com-cert.pub
#api.example.com-cert.pub:
# Type: ssh-rsa-cert-v01@openssh.com host certificate
# Public key: RSA-CERT SHA256:ireEc5HWFjhYPUhmztaFud7EgsopO8l+GpxNMd3wMSk
# Signing CA: RSA SHA256:/6HSHsoU5u+r85M26Ut+M9gl+HventwSwrbTvP/cmvo
# Key ID: ""
# Serial: 0
# Valid: after 2020-07-29T20:26:24
# Principals:
# api.example.com
# ssh.example.com
# 64.225.88.175
# 64.225.88.178
# Critical Options: (none)
# Extensions:
# x-teleport-authority UNKNOWN OPTION (len 47)
# x-teleport-role UNKNOWN OPTION (len 8)
```
Then add the following lines to `/etc/ssh/sshd_config` on all OpenSSH nodes, and
restart `sshd`.
```yaml
HostKey /etc/ssh/api.example.com
HostCertificate /etc/ssh/api.example.com-cert.pub
```
## Step 3/3. Use Proxy Recording Mode
Now you can use the `tsh ssh` command to log in to any `sshd` node in the
cluster, and the session will be recorded.
```code
# tsh ssh to use default ssh port:22
$ tsh ssh --port=22 user@host.example.com
# Example for an Amazon EC2 host
# tsh ssh --port=22 ec2-user@ec2-54-EXAMPLE.us-west-2.compute.amazonaws.com
```
If you want to use the OpenSSH `ssh` client for logging into `sshd` servers behind a proxy
in "recording mode", you have to tell the `ssh` client to use a jump host and
enable SSH agent forwarding; otherwise, a recording proxy will not be able to
terminate the SSH connection to record it:
```code
# Note that agent forwarding is enabled twice: once from the client to the proxy
# (mandatory if using a recording proxy), and then optionally from a proxy
# to the end server if you want your agent running on the end server
$ ssh -o "ForwardAgent yes" \
-o "ProxyCommand ssh -o 'ForwardAgent yes' -p 3023 %r@p.example.com -s proxy:%h:%p" \
user@host.example.com
```
<Admonition
type="tip"
title="Tip"
>
To avoid typing all this and use the usual `ssh user@host.example.com`, users can update their `~/.ssh/config` file.
</Admonition>
Verify that a Teleport certificate is loaded into the agent after
logging in:
```code
# Login as Joe
$ tsh login --proxy=proxy.example.com --user=joe
# See if the certificate is present (look for "teleport:joe" at the end of the cert)
$ ssh-add -L
```
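If a Teleport certificate is loaded, the output of `ssh-add -L` includes a
certificate line whose trailing comment names the Teleport user, roughly like
the following (key data truncated):

```txt
ssh-rsa-cert-v01@openssh.com AAAAHHNzaC1yc2EtY2VydC... teleport:joe
```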
<Admonition
type="warning"
title="GNOME Keyring SSH Agent and GPG Agent"
>
It is well known that the GNOME Keyring SSH agent, used by many popular Linux desktops like Ubuntu, and `gpg-agent` from GnuPG do not support SSH
certificates. We recommend using the `ssh-agent` from OpenSSH.
Alternatively, you can disable the SSH agent integration entirely using the
`--no-use-local-ssh-agent` flag or `TELEPORT_USE_LOCAL_SSH_AGENT=false`
environment variable with `tsh`.
</Admonition>
## OpenSSH rate limiting
When using a Teleport proxy in "recording mode", be aware of OpenSSH's built-in
rate-limiting. On large numbers of Proxy Service connections, you may encounter errors
like:
```txt
channel 0: open failed: connect failed: ssh: handshake failed: EOF
```
See the `MaxStartups` setting in `man sshd_config`. This setting means that by
default, OpenSSH only allows 10 unauthenticated connections at a time and starts
dropping connections 30% of the time when the number of connections goes over 10.
When the number of unauthenticated connections hits 100, all new connections
are dropped.
To increase the concurrency level, increase the value to something like
`MaxStartups 50:30:100`. This allows 50 concurrent connections and a max of 100.
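For example, in `/etc/ssh/sshd_config`:

```txt
# Allow up to 50 concurrent unauthenticated connections. Beyond 50, drop 30%
# of new connections, and beyond 100, drop all new connections.
MaxStartups 50:30:100
```

Remember to restart `sshd` after changing this setting.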


to connect to nodes within your Teleport cluster, you need to regenerate the
config.
Run the `tsh config` command again so that it generates an SSH config compatible with the SSH
routing setup. See [OpenSSH client](../../server-access/guides/openssh.mdx#use-the-openssh-client-to-access-teleport-nodes)
docs for reference.
## Step 7/7. Disable legacy listeners