# Kubernetes and SSH Integration Guide
Teleport v3.0+ can act as a compliance gateway for managing privileged access to Kubernetes clusters. This enables the following capabilities:
- A Teleport Proxy can act as a single authentication endpoint for both SSH and
  Kubernetes. Users can authenticate against a Teleport proxy using Teleport's
  `tsh login` command and retrieve credentials for both the SSH and Kubernetes APIs.
- Users' RBAC roles are always synchronized between SSH and Kubernetes, making it
  easier to implement policies like "developers must not access production data".
- Teleport's session recording and audit log extend to Kubernetes as well. Regular
  `kubectl exec` commands are logged into the audit log, and interactive commands
  are recorded as regular sessions that can be stored and replayed in the future.
This guide will walk you through the steps required to configure Teleport to work as a unified gateway for both SSH and Kubernetes. We will cover both the open source and enterprise editions of Teleport.
For this guide, we'll be using an instance of Kubernetes running on Google's GKE, but it should apply to any upstream Kubernetes instance.
## Teleport Proxy Service
By default, the Kubernetes integration is turned off in Teleport. The configuration setting to enable the integration is `proxy_service/kubernetes/enabled`, which can be found in the proxy service section of the `/etc/teleport.yaml` file, as shown below:
```yaml
# snippet from /etc/teleport.yaml on the Teleport proxy service:
proxy_service:
  # create the 'kubernetes' section and set 'enabled' to 'yes':
  kubernetes:
    enabled: yes
    public_addr: [teleport.example.com:3026]
    listen_addr: 0.0.0.0:3026
```
Let's take a closer look at the available Kubernetes settings:
- `public_addr` defines the publicly accessible address which Kubernetes API
  clients like `kubectl` will connect to. This address will be placed inside of
  `kubeconfig` on a client's machine when the client executes the `tsh login`
  command to retrieve its certificate. If you intend to run multiple Teleport
  proxies behind a load balancer, this must be the load balancer's public address.
- `listen_addr` defines which network interface and port the Teleport proxy
  server should bind to. It defaults to port 3026 on all NICs.
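For example, to run several proxies behind one load balancer, point `public_addr` at the load balancer while each proxy keeps its own listener. A hypothetical sketch, where `k8s.example.com` is a placeholder for the load balancer's DNS name:

```yaml
# hypothetical /etc/teleport.yaml snippet for proxies behind a load balancer.
# 'k8s.example.com' is a placeholder for the load balancer, which forwards
# TCP port 3026 to every proxy instance:
proxy_service:
  kubernetes:
    enabled: yes
    # clients (kubectl) connect to the load balancer...
    public_addr: [k8s.example.com:3026]
    # ...which forwards to this listener on each proxy instance:
    listen_addr: 0.0.0.0:3026
```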
## Connecting the Teleport proxy to Kubernetes
There are two ways this can be done:

- Deploy the Teleport proxy service as a Kubernetes pod inside the Kubernetes cluster you want the proxy to have access to. No Teleport configuration changes are required in this case.
- Deploy the Teleport proxy service outside of Kubernetes and update the Teleport proxy configuration with Kubernetes credentials. In this case, we need to update `/etc/teleport.yaml` for the proxy service as shown below:
```yaml
# snippet from /etc/teleport.yaml on the proxy service deployed outside k8s:
proxy_service:
  kubernetes:
    kubeconfig_file: /path/to/kubeconfig
```
To retrieve the Kubernetes credentials for the Teleport proxy service, you have to authenticate against your Kubernetes cluster directly, then copy the file to `/path/to/kubeconfig` on the Teleport proxy server.

Unfortunately for GKE users, GKE requires its own client-side extensions to authenticate, so we've created a simple script you can run to generate a `kubeconfig` file for the Teleport proxy service.
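The manual equivalent can be sketched as follows, assuming the `gcloud` CLI is installed and that `my-cluster`, `us-east1-b`, and `proxy.example.com` are placeholders for your cluster name, zone, and proxy host:

```bash
# authenticate against the GKE cluster; this writes ~/.kube/config locally
# (cluster name and zone are placeholders):
$ gcloud container clusters get-credentials my-cluster --zone us-east1-b
# copy the kubeconfig to the proxy host, matching the kubeconfig_file
# path configured in /etc/teleport.yaml:
$ scp ~/.kube/config proxy.example.com:/path/to/kubeconfig
```

Note that a kubeconfig produced this way may still reference GKE's client-side auth helpers, which is exactly the problem the script above works around.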
## Impersonation
The next step is to configure the Teleport proxy to be able to impersonate Kubernetes principals within a given group using Kubernetes Impersonation Headers.

If Teleport is running inside the cluster using a Kubernetes `ServiceAccount`, here's an example of the permissions that the `ServiceAccount` will need to be able to use impersonation (change `teleport-serviceaccount` to the name of the `ServiceAccount` that's being used):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: teleport-impersonation
rules:
- apiGroups:
  - ""
  resources:
  - users
  - groups
  - serviceaccounts
  verbs:
  - impersonate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: teleport
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: teleport-impersonation
subjects:
- kind: ServiceAccount
  # this should be changed to the name of the Kubernetes ServiceAccount being used
  name: teleport-serviceaccount
  namespace: default
```
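Assuming the manifest above is saved as `teleport-impersonation.yaml` (a filename chosen for this example), it can be applied with `kubectl` by a user with cluster-admin rights:

```bash
# create the ClusterRole and ClusterRoleBinding:
$ kubectl apply -f teleport-impersonation.yaml
```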
There is also an example of this usage within the example Teleport Helm chart.
If Teleport is running outside of the Kubernetes cluster, you will need to ensure that the principal used to connect to Kubernetes via the `kubeconfig` file has the same impersonation permissions as are described in the `ClusterRole` above.
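In either case, the permissions can be sanity-checked with `kubectl auth can-i`. A sketch using the example `ServiceAccount` name from above (substitute the principal from your kubeconfig if Teleport runs outside the cluster); each command should answer "yes":

```bash
$ kubectl auth can-i impersonate users \
    --as=system:serviceaccount:default:teleport-serviceaccount
$ kubectl auth can-i impersonate groups \
    --as=system:serviceaccount:default:teleport-serviceaccount
```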
## Kubernetes RBAC
Once you perform the steps above, your Teleport instance should become a fully functional Kubernetes API proxy. The next step is to configure Teleport to assign the correct Kubernetes groups to Teleport users.
Mapping Kubernetes groups to Teleport users depends on how Teleport is configured. In this guide we'll look at two common configurations:
- Open source, Teleport Community edition, configured to authenticate users via
  Github. In this case, we'll need to map Github teams to Kubernetes groups.
- Commercial, Teleport Enterprise edition, configured to authenticate users via
  Okta SSO. In this case, we'll need to map users' groups that come from Okta to
  Kubernetes groups.
### Github Auth
When configuring Teleport to authenticate against Github, you have to create a Teleport connector for Github, like the one shown below. Notice the `kubernetes_groups` setting which assigns Kubernetes groups to a given Github team:
```yaml
kind: github
version: v3
metadata:
  # connector name that will be used with `tsh --auth=github login`
  name: github
spec:
  # client ID of Github OAuth app
  client_id: <client-id>
  # client secret of Github OAuth app
  client_secret: <client-secret>
  # connector display name that will be shown on web UI login screen
  display: Github
  # callback URL that will be called after successful authentication
  redirect_url: https://teleport.example.com:3080/v1/webapi/github/callback
  # mapping of org/team memberships onto allowed logins and roles
  teams_to_logins:
    - organization: octocats # Github organization name
      team: admins # Github team name within that organization
      # allowed UNIX logins for team octocats/admins:
      logins:
        - root
      # list of Kubernetes groups this Github team is allowed to connect to
      kubernetes_groups: ["system:masters"]
```
To obtain the client ID and client secret from Github, please follow the Github documentation on how to create and register an OAuth app. Be sure to set the "Authorization callback URL" to the same value as `redirect_url` in the resource spec.
Finally, create the Github connector with the command `tctl create -f github.yaml`. Now, when Teleport users execute `tsh login`, they will be prompted to log in through Github SSO and, upon successful authentication, will have access to Kubernetes.
```bash
# Login via Github SSO and retrieve SSH+Kubernetes certificates:
$ tsh login --proxy=teleport.example.com --auth=github
# Use the Kubernetes API!
$ kubectl exec -ti <pod-name> -- /bin/sh
```
The `kubectl exec` request will be routed through the Teleport proxy, and Teleport will log the audit record and record the session.
!!! note
    For more information on integrating Teleport with Github SSO, please see the [Github section in the Admin Manual](admin-guide.md#github-oauth-20).
### Enterprise SSO - Okta Example
With Okta (or any other SAML/OIDC/Active Directory provider), you must update Teleport's roles to include the mapping to Kubernetes groups.
Let's assume you have a Teleport role called "admin". Add the `kubernetes_groups` setting to it as shown below:
```yaml
# NOTE: the role definition is edited to remove the unnecessary fields
kind: role
version: v3
metadata:
  name: admin
spec:
  allow:
    # if kubernetes integration is enabled, this setting configures which
    # kubernetes groups the users of this role will be assigned to.
    # note that you can refer to a SAML/OIDC trait via the "external" property bag,
    # this allows you to specify Kubernetes group membership in an identity manager:
    kubernetes_groups: ["system:masters", "{% raw %}{{external.trait_name}}{% endraw %}"]
```
To add the `kubernetes_groups` setting to an existing Teleport role, you can either use the Web UI or `tctl`:
```bash
# Dump the "admin" role into a file:
$ tctl get roles/admin > admin.yaml
# Edit the file, add the kubernetes_groups setting,
# and then execute:
$ tctl create -f admin.yaml
```
!!! tip "Advanced Usage"
    The `{% raw %}{{ external.trait_name }}{% endraw %}` example is shown to demonstrate how to fetch
    the Kubernetes groups dynamically from Okta during login. In this case, you
    need to define Kubernetes group membership in Okta (as a trait) and use
    that trait name in the Teleport role.
Once this is complete, when users execute `tsh login` and go through the usual Okta login sequence, their `kubeconfig` will be updated with their Kubernetes credentials.
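A quick way to confirm the credentials landed, assuming the `teleport.example.com` proxy address used earlier in this guide:

```bash
# go through the SSO login sequence; tsh updates kubeconfig on success:
$ tsh login --proxy=teleport.example.com
# the current kubectl context should now point at the Teleport proxy:
$ kubectl config current-context
# Kubernetes API calls are now routed (and audited) through Teleport:
$ kubectl get pods
```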
!!! note
    For more information on integrating Teleport with Okta, please see the
    [Okta integration guide](enterprise/sso/ssh_okta.md).