[v7.0] docs: port of edit pass 2/9 (#7173)

* docs: port 6961

* docs: comment out line

* docs: correct OSS

* docs: correction

* docs: bump example from 24h to 2190h

* docs: update notes
inertial-frame 2021-06-15 20:14:00 -05:00 committed by GitHub
parent e519af958e
commit 9fbe282052
27 changed files with 476 additions and 329 deletions


@ -49,7 +49,7 @@ When experimenting, you can quickly start [`teleport`](cli-docs.mdx#teleport) wi
### Systemd unit file
In production, we recommend starting teleport daemon via an init system like `systemd`. Here's the recommended Teleport service unit file for systemd:
In production, we recommend starting the teleport daemon via an init system like `systemd`. Here's the recommended Teleport service unit file for `systemd`:
```systemd
[Unit]
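# Rough sketch of the rest of the unit file; the canonical version ships in
# the Teleport repo under examples/systemd/, so treat the values below as
# illustrative rather than authoritative.
Description=Teleport Service
After=network.target

[Service]
Type=simple
Restart=on-failure
ExecStart=/usr/local/bin/teleport start --config=/etc/teleport.yaml --pid-file=/run/teleport.pid
ExecReload=/bin/kill -HUP $MAINPID
PIDFile=/run/teleport.pid

[Install]
WantedBy=multi-user.target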
@ -169,7 +169,7 @@ teleport:
ca_pin: "sha256:ca-pin-hash-goes-here"
# List of auth servers in a cluster. You will have more than one auth server
# if you configure teleport auth to run in HA configuration.
# if you configure teleport auth to run in a High Availability configuration.
# If adding a node located behind NAT, use the Proxy URL. e.g.
# auth_servers:
# - teleport-proxy.example.com:3080
@ -763,7 +763,7 @@ Teleport supports multiple storage back-ends for storing the SSH, Application, a
The section below uses the `dir` backend as an example. `dir` backend uses the local
filesystem of an auth server using the configurable `data_dir` directory.
For highly available (HA) configurations, users can refer to our
For High Availability configurations, users can refer to our
[DynamoDB](#using-dynamodb) or [Firestore](#using-firestore) chapters for information
on how to configure the SSH events and recorded sessions to be stored on
network storage. It is even possible to store the audit log in multiple places at the
@ -1092,8 +1092,8 @@ It's important to note that for Teleport to use HTTP CONNECT
tunneling, the `HTTP_PROXY` and `HTTPS_PROXY` environment variables must be set
within Teleport's environment. You can also optionally set the `NO_PROXY`
environment variable to avoid use of the proxy when accessing specified
hosts/netmasks. When launching Teleport with systemd, this will probably involve
adding some lines to your systemd unit file:
hosts/netmasks. When launching Teleport with `systemd`, this will probably involve
adding some lines to your `systemd` unit file:
```
[Service]
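# Illustrative proxy host and port only; substitute your own values.
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.0.0/16"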
@ -1221,13 +1221,13 @@ non-default storage backends.
in the Teleport Architecture documentation.
</Admonition>
Usually, there are two ways to achieve high availability. You can "outsource"
this function to the infrastructure. For example, using a Highly Available
network-based disk volume (similar to AWS EBS) and by migrating a failed VM to
Usually, there are two ways to achieve High Availability. You can "outsource"
this function to the infrastructure. For example, using a highly available
network-based disk volume (similar to AWS EBS) and by migrating a failed VM to
a new host. In this scenario, there's nothing Teleport-specific to be done.
If high availability cannot be provided by the infrastructure (perhaps you're
running Teleport on a bare-metal cluster), you can still configure Teleport to
If High Availability cannot be provided by the infrastructure (perhaps you're
running Teleport on a bare metal cluster), you can still configure Teleport to
run in a highly available fashion.
### Auth server High Availability
@ -1326,8 +1326,8 @@ and user records will be stored.
title="IMPORTANT"
>
`etcd` can only currently be used to store Teleport's internal database in a
highly available way. This will allow you to have multiple auth servers in your
cluster for an HA deployment, but it will not also store Teleport audit events
highly available way. This will allow you to have multiple auth servers in your
cluster for a High Availability deployment, but it will not also store Teleport audit events
for you in the same way that [DynamoDB](#using-dynamodb) or
[Firestore](#using-firestore) will. `etcd` is not designed to handle large volumes of time series data like audit events.
</Admonition>
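For orientation, a minimal `storage` section pointing Teleport at `etcd` might look like the sketch below; the peer addresses and certificate paths are illustrative, so check the etcd chapter of this guide for the authoritative key list.

```yaml
teleport:
  storage:
    type: etcd
    # Illustrative etcd cluster members; list every peer.
    peers: ["https://etcd-0.example.com:2379", "https://etcd-1.example.com:2379"]
    # Client TLS material used to authenticate Teleport to etcd.
    tls_cert_file: /var/lib/teleport/etcd-cert.pem
    tls_key_file: /var/lib/teleport/etcd-key.pem
    tls_ca_file: /var/lib/teleport/etcd-ca.pem
    # All Teleport keys live under this etcd prefix.
    prefix: teleport
```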
@ -1459,7 +1459,7 @@ These optional `GET` parameters control how Teleport interacts with an S3 endpoi
If you are running Teleport on AWS, you can use
[DynamoDB](https://aws.amazon.com/dynamodb/) as a storage back-end to achieve
high availability. DynamoDB backend supports two types of Teleport data:
High Availability. The DynamoDB backend supports two types of Teleport data:
- Cluster state
- Audit log events
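As a rough sketch (the region and table name below are illustrative), pointing cluster state at DynamoDB comes down to the `storage` section of `teleport.yaml`; audit events are then layered on with `audit_events_uri`, as shown below.

```yaml
teleport:
  storage:
    type: dynamodb
    # Illustrative values; Teleport can create the table on first start
    # if its IAM role permits.
    region: us-east-1
    table_name: teleport-cluster-state
```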
@ -1491,7 +1491,7 @@ teleport:
# NOTE: The DynamoDB events table has a different schema to the regular Teleport
# database table, so attempting to use the same table for both will result in errors.
# When using highly available storage like DynamoDB, you should make sure that the list always specifies
# the HA storage method first, as this is what the Teleport web UI uses as its source of events to display.
# the High Availability storage method first, as this is what the Teleport web UI uses as its source of events to display.
audit_events_uri: ['dynamodb://events_table_name', 'file:///var/lib/teleport/audit/events', 'stdout://']
# This setting configures Teleport to save the recorded sessions in an S3 bucket:
@ -1671,7 +1671,7 @@ teleport:
# NOTE: The Firestore events table has a different schema to the regular Teleport
# database table, so attempting to use the same table for both will result in errors.
# When using highly available storage like Firestore, you should make sure that the list always specifies
# the HA storage method first, as this is what the Teleport web UI uses as its source of events to display.
# the High Availability storage method first, as this is what the Teleport web UI uses as its source of events to display.
audit_events_uri: ['firestore://Example_TELEPORT_FIRESTORE_EVENTS_TABLE_NAME', 'file:///var/lib/teleport/audit/events', 'stdout://']
# This setting configures Teleport to save the recorded sessions in GCP storage:
@ -1766,7 +1766,7 @@ When upgrading a single Teleport cluster:
type="warning"
title="Warning"
>
If several auth servers are running in HA configuration
If several auth servers are running in a High Availability configuration
(for example, in an AWS auto-scaling group), you have to shrink the group to
**just one auth server** before performing an upgrade. While Teleport will attempt to perform any necessary migrations, we recommend users create a backup of their backend before upgrading the Auth Server, as a
precaution. This allows for a safe rollback in case the migration itself fails.


@ -49,7 +49,7 @@ tctl tokens add \
### TLS requirements
TLS is required to secure Teleport's Unified Access Plane and any connected
TLS is required to secure Teleport's Access Plane and any connected
applications. When setting up Teleport, the minimum requirement is a certificate
for the proxy and a wildcard certificate for its sub-domain. This is where
everyone will log into Teleport.
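Exact key names vary across Teleport versions, but as a sketch, wiring those certificates into the proxy looks roughly like this (the paths and domain are illustrative):

```yaml
proxy_service:
  # Where users will log in; covered by a certificate for
  # teleport.example.com plus a wildcard for *.teleport.example.com.
  public_addr: teleport.example.com:443
  https_keypairs:
    - key_file: /var/lib/teleport/privkey.pem
      cert_file: /var/lib/teleport/fullchain.pem
```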


@ -188,7 +188,10 @@ storage.
type="tip"
title="Deployment Considerations"
>
If multiple Teleport auth servers are used to service the same cluster (High Availability mode) a network file system must be used for `/var/lib/teleport/log` to allow them to combine all audit events into the same audit log. [Learn how to deploy Teleport in HA Mode.](../admin-guide.mdx#high-availability)
If multiple Teleport auth servers are used
to service the same cluster (High Availability mode), a network file system must be used for
`/var/lib/teleport/log` to allow them to combine all audit events into the
same audit log. [Learn how to deploy Teleport in High Availability Mode.](../admin-guide.mdx#high-availability)
</Admonition>
## Storage back-ends
@ -216,9 +219,8 @@ it allows them to run Teleport clusters completely devoid of local state.
type="tip"
title="NOTE"
>
For high availability in production, a Teleport cluster can be
serviced by multiple auth servers running in sync. Check [HA
configuration](../admin-guide.mdx#high-availability) in the Admin Guide.
For High Availability in production, a Teleport cluster can be
serviced by multiple auth servers running in sync. Check [High Availability configuration](../admin-guide.mdx#high-availability) in the Admin Guide.
</Admonition>
## More concepts


@ -13,7 +13,7 @@ We have split this guide into:
- [Authenticating to EKS Using GitHub Credentials with Teleport Open Source Edition](#using-teleport-with-eks)
- [Setting up Teleport Enterprise on AWS](#running-teleport-enterprise-on-aws)
- [Teleport AWS Tips & Tricks](#teleport-aws-tips--tricks)
- [AWS HA with Terraform](aws-terraform-guide.mdx)
- [AWS High Availability with Terraform](aws-terraform-guide.mdx)
### Teleport on AWS FAQ
@ -48,7 +48,7 @@ for the below.
This guide will cover how to set up, configure, and run Teleport on [AWS](https://aws.amazon.com/).
### AWS Services required to run Teleport in HA
### AWS Services required to run Teleport in High Availability
- [EC2 / Autoscale](#ec2--autoscale)
- [DynamoDB](#dynamodb)
@ -59,7 +59,7 @@ This guide will cover how to setup, configure and run Teleport on [AWS](https://
- [ACM](#acm)
- [SSM](#aws-systems-manager-parameter-store)
We recommend setting up Teleport in high availability mode (HA). In HA mode DynamoDB
We recommend setting up Teleport in High Availability mode. In High Availability mode, DynamoDB
stores the state of the system and S3 will store audit logs.
![AWS Intro Image](../img/aws/aws-intro.png)
@ -269,7 +269,7 @@ With AWS Certificate Manager, you can quickly request SSL/TLS certificates.
To add new nodes to a Teleport Cluster, we recommend using a [strong static token](https://gravitational.com/teleport/docs/admin-guide/#example-configuration). SSM can also be used to store the
enterprise license.
## Setting up a HA Teleport Cluster
## Setting up a High Availability Teleport Cluster
Teleport's config-based setup offers a wide range of customization for customers.
This guide offers a range of setup options for AWS. If you have a very large account,
@ -279,7 +279,7 @@ more than happy to help you architect, setup and deploy Teleport into your envir
We have these options for you.
- [Deploying with CloudFormation](#deploying-with-cloudformation)
- [Deploying with Terraform HA + Monitoring](#deploying-with-terraform)
- [Deploying with Terraform High Availability + Monitoring](#deploying-with-terraform)
## Deploying with CloudFormation
@ -651,7 +651,7 @@ correct binary.
We have a few other resources for Enterprise customers, such as our
- [Running Teleport Enterprise in HA mode on AWS using Terraform](aws-terraform-guide.mdx)
- [Running Teleport Enterprise in High Availability mode on AWS using Terraform](aws-terraform-guide.mdx)
- [Teleport Enterprise Quickstart](enterprise/quickstart-enterprise.mdx)
If you would like help setting up Teleport Enterprise on AWS, please mail us at [info@goteleport.com](mailto:info@goteleport.com)


@ -1,7 +1,7 @@
---
title: Teleport HA mode on AWS
description: How to configure Teleport in highly available (HA) mode for AWS deployments
h1: Running Teleport Enterprise in HA mode on AWS
title: Teleport High Availability mode on AWS
description: How to configure Teleport in High Availability mode for AWS deployments.
h1: Running Teleport Enterprise in High Availability mode on AWS
---
This guide is designed to accompany our [reference Terraform code](https://github.com/gravitational/teleport/tree/master/examples/aws/terraform/ha-autoscale-cluster#terraform-based-provisioning-example-amazon-single-ami)
@ -543,7 +543,7 @@ You can get detailed logs for the Teleport auth servers using the `journalctl` c
$ journalctl -u teleport-auth.service
```
Remember that there is more than one auth server in an HA deployment. You should use this command to get the IP addresses
Remember that there is more than one auth server in a High Availability deployment. You should use this command to get the IP addresses
of each auth server that you'll need to connect to:
```bash
@ -576,7 +576,7 @@ You can get detailed logs for the Teleport proxy service using the `journalctl`
$ journalctl -u teleport-proxy.service
```
Remember that there is more than one proxy instance in an HA deployment. You should use this command to get the IP addresses
Remember that there is more than one proxy instance in a High Availability deployment. You should use this command to get the IP addresses
of each proxy instance that you'll need to connect to:
```bash
@ -636,7 +636,7 @@ You can get detailed logs for the Teleport auth server using the `journalctl` co
$ journalctl -u teleport-auth.service
```
Remember that there is more than one auth instance in an HA deployment. You should use this command to get the IP addresses
Remember that there is more than one auth instance in a High Availability deployment. You should use this command to get the IP addresses
of each auth instance that you'd need to connect to for checking logs:
```bash
@ -669,7 +669,7 @@ You can get detailed logs for the Teleport proxy service using the `journalctl`
$ journalctl -u teleport-proxy-acm.service
```
Remember that there is more than one proxy instance in an HA deployment. You can use this command to get the IP addresses
Remember that there is more than one proxy instance in a High Availability deployment. You can use this command to get the IP addresses
of each proxy instance that you'd need to connect to for checking logs:
```bash

File diff suppressed because it is too large


@ -29,7 +29,7 @@ Customer data, including audit logging, is backed up using the DynamoDB
"point in time recovery" system. Data can be recovered up to 35 days.
This retention period is not configurable.
## High availability
## High Availability
Clusters are deployed in a single AWS region in 2 availability zones.
AWS guarantees a [99.99%](https://aws.amazon.com/compute/sla/) monthly uptime percentage.


@ -1,6 +1,6 @@
---
title: Teleport Configuration Reference
description: The detailed guide for configuring Teleport for SSH and Kubernetes access
description: The detailed guide and reference documentation for configuring Teleport for SSH and Kubernetes access.
---
## teleport.yaml
@ -20,7 +20,7 @@ get a good starting file, run `teleport configure -o teleport.yaml`.
# This section of the configuration file applies to all teleport
# services.
teleport:
# nodename allows to assign an alternative name this node can be reached by.
# nodename allows one to assign an alternative name this node can be reached by.
# by default it's equal to hostname
nodename: graviton
@ -47,8 +47,8 @@ teleport:
# This value can be specified as FQDN e.g. host.example.com
advertise_ip: 10.1.0.5
# list of auth servers in a cluster. you will have more than one auth server
# if you configure teleport auth to run in HA configuration.
# List of auth servers in a cluster. You will have more than one auth server
# if you configure teleport auth to run in a High Availability configuration.
# If adding a node located behind NAT, use the Proxy URL. e.g.
# auth_servers:
# - teleport-proxy.example.com:3080
@ -85,7 +85,7 @@ teleport:
# Configuration for the storage back-end used for the cluster state and the
# audit log. Several back-end types are supported. See the "High Availability"
# section of the Admin Manual (https://goteleport.com/docs/admin-guide/#high-availability)
# to learn how to configure DynamoDB, S3, etcd and other highly available back-ends.
# to learn how to configure DynamoDB, S3, etcd, and other highly available back-ends.
storage:
# By default teleport uses the `data_dir` directory on a local filesystem
type: dir
@ -165,7 +165,7 @@ auth_service:
# A cluster name is used as part of a signature in certificates
# generated by this CA.
#
# We strongly recommend to explicitly set it to something meaningful as it
# We strongly recommend explicitly setting it to something meaningful as it
# becomes important when configuring trust between multiple clusters.
#
# By default an automatically generated name is used (not recommended)
@ -200,7 +200,7 @@ auth_service:
# the second factor will need to re-register.
app_id: https://localhost:3080
# list of allowed addresses of the Teleport proxy, checked during
# list of allowed addresses of the Teleport proxy checked during
# authentication attempts. This list is used to prevent malicious
# websites and proxies from requesting U2F challenges on behalf of
# the legitimate proxy.
@ -228,7 +228,7 @@ auth_service:
# certificates
listen_addr: 0.0.0.0:3025
# The optional DNS name the auth server if located behind a load balancer.
# The optional DNS name for the auth server if located behind a load balancer.
# See the "Public Addr" section for more details
# (https://goteleport.com/docs/admin-guide/#public-addr).
public_addr: auth.example.com:3025
@ -282,7 +282,7 @@ auth_service:
keep_alive_interval: 5m
keep_alive_count_max: 3
# Determines the internal session control timeout cluster wide. This value will
# Determines the internal session control timeout cluster-wide. This value will
# be used with enterprise max_connections and max_sessions. It's unlikely that
# you'll need to change this.
# session_control_timeout: 2m
@ -327,8 +327,8 @@ ssh_service:
command: ['/bin/uname', '-p']
period: 1h0m0s
# Enables reading ~/.tsh/environment before creating a session. By default
# set to false, can be set true here or as a command line flag.
# Enables reading ~/.tsh/environment before creating a session.
# By default it's set to false; it can be set to true here or through the command-line flag.
permit_user_env: false
# Enhanced Session Recording
@ -427,7 +427,7 @@ proxy_service:
# Address advertised to MySQL clients. If not set, public_addr is used.
mysql_public_addr: "mysql.teleport.example.com:3306"
# Get automatic certificate from Letsencrypt.org using ACME via TLS_ALPN-01 challenge.
# Get an automatic certificate from Letsencrypt.org using ACME via TLS_ALPN-01 challenge.
# When using ACME, the cluster name must match the 'public_addr' of Teleport and
# the 'proxy_service' must be publicly accessible over port 443.
# Also set using the CLI command:
@ -511,7 +511,7 @@ db_service:
# This section contains definitions of all databases proxied by this
# service, it can contain multiple database instances.
databases:
# Name of the database proxy instance, used to reference in CLI.
# Name of the database proxy instance, used to reference it from the CLI.
- name: "prod"
# Free-form description of the database proxy instance.
description: "Production database"


@ -63,7 +63,7 @@ These are *How-To's* in [DIVIO parlance](https://www.divio.com/). They describe
Example:
1. Guide title: "Kubernetes Access on GKE" which features a high availability setup using Firebase and GCS backend.
1. Guide title: "Kubernetes Access on GKE", which features a High Availability setup using Firestore and GCS backend.
2. Followed by the guide purpose.
3. Followed by prerequisites.
4. Next, the setup steps.


@ -87,7 +87,7 @@ the OIDC Connector, under `google_service_account_uri` or inline with `google_se
<Admonition type="note">
Teleport requires the service account JSON to be uploaded to all Teleport authentication servers when setting
up in a HA config.
up in a High Availability config.
</Admonition>
## Manage API Scopes


@ -34,7 +34,7 @@ We plan on expanding our guide to eventually include using Teleport with Google
This guide will cover how to set up, configure, and run Teleport on GCP.
GCP Services required to run Teleport in HA:
GCP Services required to run Teleport in High Availability:
- [Compute Engine: VM Instances with Instance Groups](#compute-engine-vm-instances-with-instance-groups)
- [Compute Engine: Health Checks](#computer-engine-health-checks)
@ -52,14 +52,14 @@ Optional:
- Management Tools: Cloud Deployment Manager
- Stackdriver Logging
We recommend setting up Teleport in high availability mode (HA). In HA mode Firestore
We recommend setting up Teleport in High Availability mode. In High Availability mode, Firestore
stores the state of the system and Google Cloud Storage stores the audit logs.
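As a rough sketch of that pairing (the project, collection, and bucket names are illustrative), the Firestore backend and GCS audit storage are configured in the `storage` section:

```yaml
teleport:
  storage:
    type: firestore
    # Illustrative GCP project and Firestore collection for cluster state.
    project_id: example-project
    collection_name: teleport-cluster-state
    credentials_path: /var/lib/teleport/gcs_creds.json
    # Audit events in Firestore, recorded sessions in a GCS bucket.
    audit_events_uri: ['firestore://teleport-events']
    audit_sessions_uri: 'gs://example-teleport-sessions'
```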
![GCP Intro Image](../img/gcp/gcp-teleport.svg)
### Compute Engine: VM Instances with Instance Groups
To run Teleport in a HA configuration we recommend using `n1-standard-2` instances in
To run Teleport in a High Availability configuration, we recommend using `n1-standard-2` instances in
production. It's best practice to separate the proxy and authentication server, using
instance groups for the proxy and auth server.
@ -180,7 +180,7 @@ Save the following configuration file as `/etc/teleport.yaml` on the Proxy Serve
teleport:
auth_token: EXAMPLE-CLUSTER-JOIN-TOKEN
# We recommend using a TCP load balancer pointed to the auth servers when
# setting up in HA mode.
# setting up in High Availability mode.
auth_servers: [ "auth.example.com:3025" ]
# enable ssh service and disable auth and proxy:
ssh_service:
@ -199,7 +199,7 @@ Save the following configuration file as `/etc/teleport.yaml` on the node:
teleport:
auth_token: EXAMPLE-CLUSTER-JOIN-TOKEN
# We recommend using a TCP load balancer pointed to the auth servers when
# setting up in HA mode.
# setting up in High Availability mode.
auth_servers: [ "auth.example.com:3025" ]
# enable ssh service and disable auth and proxy:
ssh_service:


@ -35,7 +35,7 @@ We plan on expanding our guide to eventually include using Teleport with IBM Clo
This guide will cover how to set up, configure, and run Teleport on IBM Cloud.
IBM Services required to run Teleport in HA:
IBM Services required to run Teleport in High Availability:
- [IBM Cloud: Virtual Servers with Instance Groups](#ibm-cloud-virtual-servers-with-instance-groups)
- [Storage: Database for etcd](#storage-database-for-etcd)
@ -46,7 +46,7 @@ Other things needed:
- [SSL Certificate](https://www.ibm.com/cloud/ssl-certificates)
We recommend setting up Teleport in high availability mode (HA). In HA mode [etcd](https://etcd.io/)
We recommend setting up Teleport in High Availability mode. In High Availability mode, [etcd](https://etcd.io/)
stores the state of the system and [IBM Cloud Storage](https://www.ibm.com/cloud/storage)
stores the audit logs.
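Since IBM Cloud Object Storage speaks the S3 API, a hedged sketch of that combination points session storage at a custom S3 endpoint (the hostnames and bucket below are illustrative):

```yaml
teleport:
  storage:
    type: etcd
    # Illustrative managed-etcd endpoint from the IBM Cloud dashboard.
    peers: ["https://example.databases.appdomain.cloud:30001"]
    # S3-compatible session storage selected via the endpoint parameter.
    audit_sessions_uri: "s3://example-bucket/sessions?endpoint=s3.us-south.cloud-object-storage.appdomain.cloud"
```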
@ -62,7 +62,7 @@ We recommend Gen 2 Cloud IBM [Virtual Servers](https://www.ibm.com/cloud/virtual
### Storage: Database for etcd
IBM offers [managed etcd](https://www.ibm.com/cloud/databases-for-etcd) instances.
Teleport uses etcd as a scalable database to maintain high availability and provide
Teleport uses etcd as a scalable database to maintain High Availability and provide
graceful restarts. The service has to be turned on from within the [IBM Cloud Dashboard](https://cloud.ibm.com/catalog/services/databases-for-etcd).
We recommend picking an etcd instance in the same region as your planned Teleport


@ -22,7 +22,7 @@ Server Version: version.Info{Major:"1", Minor:"17+"}
## Connecting Clusters
Teleport can act as a unified access plane for multiple Kubernetes clusters.
Teleport can act as an access plane for multiple Kubernetes clusters.
We have set up the Teleport cluster `tele.example.com` in [SSO and Kubernetes](../getting-started.mdx).
Let's start a lightweight agent in another Kubernetes cluster `cookie` and connect it to `tele.example.com`.


@ -4,13 +4,13 @@ description: How to set up Prometheus to monitor Teleport for SSH and Kubernetes
h1: Metrics
---
## Teleport Prometheus Endpoint
## Teleport Prometheus endpoint
Teleport provides HTTP endpoints for monitoring purposes. They are disabled
by default, but you can enable them using the `--diag-addr` flag to `teleport start`:
```bash
$ teleport start --diag-addr=127.0.0.1:3000
sudo teleport start --diag-addr=127.0.0.1:3000
```
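With the diagnostic address enabled, a standard Prometheus scrape job can collect the metrics; a minimal sketch (the target matches the address above):

```yaml
# prometheus.yml
scrape_configs:
  - job_name: teleport
    metrics_path: /metrics
    static_configs:
      - targets: ['127.0.0.1:3000']
```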
Now you can see the monitoring information by visiting several endpoints:
@ -20,7 +20,7 @@ Now you can see the monitoring information by visiting several endpoints:
collectors.
- `http://127.0.0.1:3000/healthz` returns "OK" if the process is healthy or
`503` otherwise.
- `http://127.0.0.1:3000/readyz` is similar to `/healthz` , but it returns "OK"
- `http://127.0.0.1:3000/readyz` is similar to `/healthz`, but it returns "OK"
*only after* the node has successfully joined the cluster, i.e. it draws the
difference between "healthy" and "ready".
- `http://127.0.0.1:3000/debug/pprof/` is Golang's standard profiler. It's only


@ -70,16 +70,12 @@ of them is configurable.
### Systemd unit file
In production, we recommend starting teleport daemon via an init system like
`systemd`. If systemd and unit files are new to you, check out [this helpful guide](https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files). Here's an example systemd unit file for the Teleport [Proxy, Node, and Auth Service](https://github.com/gravitational/teleport/tree/master/examples/systemd/production).
In production, we recommend starting the teleport daemon via an init system like `systemd`. If `systemd` and unit files are new to you, check out [this helpful guide](https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files). Here's an example systemd unit file for the Teleport [Proxy, Node, and Auth Service](https://github.com/gravitational/teleport/tree/master/examples/systemd/production).
There are a couple of important things to notice about this file:
1. The start command in the unit file specifies `--config` as a file and there are very few flags passed to the `teleport` binary. Most of the configuration
2. The **ExecReload** command allows admins to run `systemctl reload teleport`.
This will attempt to perform a graceful restart of Teleport **but it only works if
network-based backend storage like [DynamoDB](admin-guide.mdx#using-dynamodb) or
[etc 3.3](admin-guide.mdx#using-etcd) is configured**. Graceful Restarts will fork a new process to handle new incoming requests and leave the old daemon process running until existing clients disconnect.
1. The start command in the unit file specifies `--config` as a file and there are very few flags passed to the `teleport` binary. Most of the configuration for Teleport should be done in the [configuration file](admin-guide.mdx#configuration).
2. The **ExecReload** command allows admins to run `systemctl reload teleport`. This will attempt to perform a graceful restart of Teleport **but it only works if network-based backend storage like [DynamoDB](admin-guide.mdx#using-dynamodb) or [etcd 3.3](admin-guide.mdx#using-etcd) is configured**. Graceful Restarts will fork a new process to handle new incoming requests and leave the old daemon process running until existing clients disconnect.
### Start the Teleport service
@ -196,7 +192,7 @@ setting up the proxy nodes start Teleport with:
`teleport start --insecure --roles=proxy --config=/etc/teleport.yaml`
See [Teleport Proxy HA](admin-guide.mdx#teleport-proxy-high-availability) for more info.
See [Teleport Proxy High Availability](admin-guide.mdx#teleport-proxy-high-availability) for more info.
{
/* TODO SSL for Webproxy & Auth Section */


@ -1,15 +1,24 @@
---
title: Teleport API Reference
description: Reference for the Teleport API
title: Teleport API Reference Documentation
description: Reference documentation for the Teleport gRPC API.
---
# Reference
- [Introduction](./api/introduction.mdx)
- [Getting Started](./api/getting-started.mdx)
- [Architecture](./api/architecture.mdx)
- [pkg.go.dev](https://pkg.go.dev/github.com/gravitational/teleport/api/client)
- [Using the client](https://pkg.go.dev/github.com/gravitational/teleport/api/client#Client)
- [Working with credentials](https://pkg.go.dev/github.com/gravitational/teleport/api/client#Credentials)
<TileSet>
<Tile icon="list" title="Teleport API Introduction" href="./api/introduction.mdx">
Introduction to Teleport API.
</Tile>
<Tile icon="list" title="Getting Started" href="./api/getting-started.mdx">
Check out the Teleport API Getting Started guide.
</Tile>
<Tile icon="list" title="API Architecture" href="./api/architecture.mdx">
Read about Teleport API architecture and concepts.
</Tile>
</TileSet>
## pkg.go.dev
- Learn about [pkg.go.dev](https://pkg.go.dev/github.com/gravitational/teleport/api/client)
- Learn how to use [the client](https://pkg.go.dev/github.com/gravitational/teleport/api/client#Client)
- Learn how to [work with credentials](https://pkg.go.dev/github.com/gravitational/teleport/api/client#Credentials)


@ -1,6 +1,6 @@
---
title: API Architecture
description: Architecture of the Teleport API
description: Architectural overview of the Teleport gRPC API.
---
## Authentication
@ -49,45 +49,41 @@ on role based access control.
## Credentials
The Teleport Go client uses Credentials in order to gather and hold TLS certificates, connect
The Teleport Go client uses Credentials to gather and hold TLS certificates, connect
to proxy servers over SSH, and perform some other actions.
Credentials are created by using Credential loaders, which gather certificates and data
generated by [Teleport CLIs](../../cli-docs.mdx).
Since there are several Credential loaders to choose from with distinct benefits, here's a quick breakdown:
- Profile Credentials are the easiest to get started with. All you have to do is login
on your device with `tsh login`. Your Teleport proxy address and credentials will
automatically be located and used.
- IdentityFile Credentials are the most well rounded in terms of usability, functionality,
- Profile Credentials are the easiest to get started with. All you have to do is log in to your device with `tsh login`. Your Teleport proxy address and credentials will automatically be located and used.
- IdentityFile Credentials are the most well-rounded in terms of usability, functionality,
and customizability. Identity files can be generated through `tsh login` or `tctl auth sign`,
making them ideal for both long lived proxy and auth server connections.
- Key Pair Credentials have a much simpler implementation than the first two Credentials listed,
and may feel more familiar. These are good for authenticating client's hosted directly on the auth server.
- TLS Credentials leave everything up to the client user. This is mostly used internally, but
some advanced users may find it useful.
making them ideal for both long-lived proxy and auth server connections.
- Key Pair Credentials have a much simpler implementation than the first two Credentials listed and may feel more familiar. These are good for authenticating clients hosted directly on the auth server.
- TLS Credentials leave everything up to the client user. This is mostly used internally, but some advanced users may find it useful.
Here are some more specific details to differentiate them by:
| Type | Profile Credentials | Identity Credentials | Key Pair Credentials | TLS Credentials |
| - | - | - | - | - |
| Ease of use | easy | easy | med | hard |
| Supports long lived certificates | yes, but must be configured on server side | yes | yes | yes |
| Supports SSH connections | yes | yes (6.1+) | no | no |
| Automatic Proxy Address discovery | yes | no | no | no |
| Ease of use | Easy | Easy | Medium | Hard |
| Supports long-lived certificates | Yes, but must be configured on server side | Yes | Yes | Yes |
| Supports SSH connections | Yes | Yes (6.1+) | No | No |
| Automatic Proxy Address discovery | Yes | No | No | No |
| CLI used | tsh | tctl/tsh | tctl | - |
| Available in | 6.1+ | 6.0+ | 6.0+ | 6.0+ |
See the [Credentials type](https://pkg.go.dev/github.com/gravitational/teleport/api/client#Credentials)
on pkg.go.dev for more information and examples for Credentials and Credential Loaders.
## Client Connection
## Client connection
The API client makes requests through an open connection to the Teleport Auth server.
If the Auth server is isolated behind a [Proxy Server](../../architecture/proxy.mdx), a reverse
tunnel connection can be made using SSH certificates signed by the auth server. You can either
provide the server's reverse tunnel address directly, or provide the web proxy address and have
provide the server's reverse tunnel address directly or provide the web proxy address and have
the client automatically retrieve the reverse tunnel address.
<Admonition type="note">


@ -1,6 +1,6 @@
---
title: Teleport API Introduction
description: Introduction for the Teleport API
description: Introduction to the Teleport gRPC API.
---
The Teleport Auth API provides a gRPC API for remotely interacting with a Teleport Auth server.
@ -8,7 +8,7 @@ The Teleport Auth API provides a gRPC API for remotely interacting with a Telepo
Teleport has a public [Go client](https://pkg.go.dev/github.com/gravitational/teleport/api/client)
to programmatically interact with the API. [tsh and tctl](../../cli-docs.mdx/) use the same API.
## Go Client
## Go client
Here is what you can do with the Go Client:
- Integrating with external tools, which we have already done
@ -30,6 +30,6 @@ Here is what you can do with the Go Client:
[please complete our survey](https://docs.google.com/forms/d/1HPQu5Asg3lR0cu5crnLDhlvovGpFVIIbDMRvqclPhQg/viewform).
</Admonition>
## Get Started
## Get started
Create an API client in 3 minutes with the [Getting Started](./getting-started.mdx) guide.


@ -65,10 +65,6 @@ This guide introduces some of these common scenarios and how to interact with Te
2. Install Teleport on each instance.
<Admonition type="warning" title="Warning">
The examples below may include the use of the `sudo` keyword to make following each step easier when creating resources from scratch. However, we generally discourage the use of `sudo` in customer-facing production environments per the *Principle of Least Privilege* (POLP).
</Admonition>
<Tabs>
<TabItem label="Amazon Linux 2/RHEL (RPM)">
```bash


@ -27,10 +27,6 @@ existing SSH implementations, such as OpenSSH. This section will cover:
```
</Admonition>
<Admonition type="warning" title="Warning">
The examples below may include the use of the `sudo` keyword to make following each step easier when creating resources from scratch. However, we generally discourage the use of `sudo` in customer-facing production environments per the *Principle of Least Privilege* (POLP).
</Admonition>
### Follow along with our video guide
<iframe


@ -342,9 +342,9 @@ tsh ssh -L 5000:google.com:80 --local node curl http://localhost:5000
This command:
1. Connects to `node`
2. Binds the local port `5000` to port `80` on `google.com`
3. Executes `curl` command locally, which results in `curl` hitting `google.com:80` via `node`
1. Connects to `node`.
2. Binds the local port `5000` to port `80` on `google.com`.
3. Executes the `curl` command locally, which results in `curl` hitting `google.com:80` via `node`.
### SSH jumphost
@ -507,9 +507,8 @@ see recorded sessions, and replay them. You can also join other users in active
There are a few differences between Teleport's `tsh` and OpenSSH's `ssh` but
most of them can be mitigated.
1. `tsh` always requires the `--proxy` flag because `tsh` needs to know which cluster you are connecting to. But if you execute `tsh --proxy=xxx login`,
the current proxy will be saved in your `~/.tsh` profile and won't be needed for other `tsh` commands.
2. `tsh ssh` operates *two* usernames: one for the cluster and another for the node you are trying to log into. See [User Identities](#user-identities)
1. `tsh` always requires the `--proxy` flag because `tsh` needs to know which cluster you are connecting to. But if you execute `tsh --proxy=xxx login`, the current proxy will be saved in your `~/.tsh` profile and won't be needed for other `tsh` commands.
2. `tsh ssh` operates *two* usernames: one for the cluster and another for the node you are trying to log into. See [User Identities](#user-identities) section below. For convenience, `tsh` assumes `$USER` for both by default. But again, if you use `tsh login` before `tsh ssh`, your Teleport username will be stored in `~/.tsh`.
If you'd like to set the login name that should be used by default on the remote host, you can set the `TELEPORT_LOGIN` environment variable.


@ -80,7 +80,7 @@ strategy: RollingUpdate
# values:
# - teleport
#
## For high availability, distribute teleport pods to nodes as evenly as possible
## For High Availability, distribute teleport pods to nodes as evenly as possible
# podAntiAffinity:
# preferredDuringSchedulingIgnoredDuringExecution:
# - podAffinityTerm:


@ -121,7 +121,7 @@ usePSP: false
# values:
# - teleport
#
## For high availability, distribute teleport pods to nodes as evenly as possible
## For High Availability, distribute teleport pods to nodes as evenly as possible
# podAntiAffinity:
# preferredDuringSchedulingIgnoredDuringExecution:
# - podAffinityTerm:


@ -1,6 +1,6 @@
## Configuring High Availability
Running multiple instances of the Authentication Services requires using a high availability storage configuration. The [documentation](https://gravitational.com/teleport/docs/admin-guide/#high-availability) provides detailed examples using AWS DynamoDB/S3, GCP Firestore/Google storage or an `etcd` cluster. Here we provide detailed steps for an AWS example configuration.
Running multiple instances of the Authentication Service requires using a High Availability storage configuration. The [documentation](https://gravitational.com/teleport/docs/admin-guide/#high-availability) provides detailed examples using AWS DynamoDB/S3, GCP Firestore/Google Cloud Storage, or an `etcd` cluster. Here we provide detailed steps for an AWS example configuration.
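For a sense of where that ends up, here's a hedged sketch using the chart's `otherConfigHA` hook from its `values.yaml` (the region, table, and bucket names are illustrative):

```yaml
otherConfigHA:
  useOtherConfig: true
  teleportConfig:
    # Full teleport.yaml for the High Availability deployment.
    teleport:
      storage:
        type: dynamodb
        region: us-east-1
        table_name: teleport-cluster-state
        audit_events_uri: ['dynamodb://teleport-events']
        audit_sessions_uri: 's3://example-bucket/sessions'
```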
### Prerequisites
- Available AWS credentials (/home/<user>/.aws/credentials)
@ -44,7 +44,7 @@ extraAuthVolumeMounts:
### Configuring Multiple Instances of Teleport
A high availability deployment of Teleport will typically have at least 2 proxy and 2 auth service instances. SSH service is typically not enabled on these instances. To enable separate deployments of the auth and auth services follow these steps.
A High Availability deployment of Teleport will typically have at least two proxy and two auth service instances. SSH service is typically not enabled on these instances. To enable separate deployments of the auth and proxy services, follow these steps.
1. In the configuration section, set `highAvailability` to true. Also confirm the auth public address and Service Type.
```yaml
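# Sketch only; the key names follow the chart's values.yaml shown further below.
highAvailability: true
proxyCount: 2
authCount: 2
auth_public_address: auth.example.com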


@ -23,7 +23,7 @@ The `values.yaml` is configurable for multiple options including:
- Using the Community edition of Teleport (Set license.enabled to false)
- Using self-signed TLS certificates (Set proxy.tls.usetlssecret to false)
- Using a specific version of Teleport (See image.tag)
- Using persistent or high availability storage (See below example). Persistent or high availability storage is recommended for production usage.
- Using persistent or High Availability storage (see the example below). Persistent or High Availability storage is recommended for production usage.
- Increasing the replica count for multiple instances (Using High Availability configuration)
See the comments in the default `values.yaml` and also the [Teleport documentation](https://gravitational.com/teleport/docs/) for more options.


@ -41,15 +41,15 @@ proxy:
#
config:
# used for cluster name, advertise_ip, and public addresses
# If high availability is set it is only used for the proxy
# If High Availability is set, it is only used for the proxy
public_address: teleport.example.com
# used for listen addresses in proxy, auth and ssh
listen_addr: 0.0.0.0
# Set to true to have separate proxy and auth instances for high availability.
# You must use non-dir storage for high availability or you can only have 1 auth instance.
# You must use non-dir storage for High Availability, or you can only have one auth instance.
highAvailability: false
# High availability configuration with proxy and auth servers. No configured SSH service.
# High Availability configuration with proxy and auth servers. No configured SSH service.
proxyCount: 2
authCount: 2
auth_public_address: auth.example.com
@ -60,7 +60,7 @@ config:
externalTrafficPolicy: ""
loadBalancerSourceRanges: []
# Set for proxies in high availability, single proxy and ssh service only deployments
# Set for proxies in High Availability, single proxy and ssh service only deployments
# auth_service_connection:
# auth_token: dogs-are-much-nicer-than-cats
# auth_servers:
@ -193,7 +193,7 @@ otherConfig:
teleportConfig:
# place a full teleport.yaml configuration here
# Teleport configuration for high availability deployment
# Teleport configuration for High Availability deployment
otherConfigHA:
useOtherConfig: false
teleportConfig:
@ -282,7 +282,7 @@ extraVolumeMounts: []
# mountPath: /var/lib/ca-certs
# readOnly: true
# Volume mounts only for the auth service in high availability deployments
# Volume mounts only for the auth service in High Availability deployments
extraAuthVolumes: []
extraAuthVolumeMounts: []
@ -358,7 +358,7 @@ strategy: RollingUpdate
# values:
# - teleport
#
## For high availability, distribute teleport pods to nodes as evenly as possible
## For High Availability, distribute teleport pods to nodes as evenly as possible
# podAntiAffinity:
# preferredDuringSchedulingIgnoredDuringExecution:
# - podAffinityTerm:


@ -158,7 +158,7 @@ emitted by print event with chunk index 0
**Multiple Auth Servers**
In high availability mode scenario, multiple auth servers will be
In a High Availability scenario, multiple auth servers will be
deployed behind a load balancer.
Any auth server can go down during a session, and clients will retry the delivery