Dynamic Trusted Clusters
Dynamic Trusted Clusters can be used to configure Trusted Clusters in more powerful ways than standard file configuration. If you have not already read the documentation about Trusted Clusters from Teleport, it would be helpful to review before continuing.
Dynamic Trusted Clusters offer several features:
- Add and remove Trusted Clusters without needing to restart Teleport.
- Enable/Disable Trusted Clusters from the Web UI.
- More sophisticated role mapping, which allows you to map the roles you have on your main cluster (encoded within your SSH certificate) to the role you assume when you connect to a node within the Trusted Cluster.
Below are two example configurations: a simple configuration you can use to quickly get started with Trusted Clusters, and a more comprehensive configuration that illustrates some of the more powerful features of Trusted Clusters.
Simple Configuration
Similar to the example illustrated in the documentation for Trusted Clusters, suppose you have a remote cluster that sits in a restricted environment which you cannot connect to directly due to firewall rules, but you have no other access control restrictions: any user in your main cluster should also be able to access any node within the Trusted Cluster.
Secret Tokens
When creating Trusted Clusters dynamically, Teleport requires that the Trusted Cluster know the value of a secret token generated by the main cluster. This value is used to establish trust between clusters during the initial exchange. Due to the sensitive nature of this token, we recommend you use a secure channel to exchange it so it doesn't fall into the hands of an attacker. Secret tokens can be either static (long lived) or dynamic (short lived).
To create a dynamic token on the main cluster which lasts only 5 minutes, use the following command:
$ tctl nodes add --ttl=5m --roles=trustedcluster
If you need long lived static tokens, generate the token out-of-band and add it to your configuration file on the main cluster:
```yaml
auth_service:
  enabled: yes
  cluster_name: main
  tokens:
    # generate a large random number for your token, we recommend
    # using a tool like `pwgen` to generate sufficiently random
    # tokens of length greater than 32 bytes
    - "trustedcluster:fake-token"
```
Security Implications
Consider the security implications when deciding which token method to use. Short lived tokens decrease the window for attack but also make automation more difficult. By their nature, short lived tokens also make it difficult to let your customers enable/disable a Trusted Cluster, because the token exchange has to re-occur if they want to re-establish trust, which they can't do with an expired token.
If even short lived tokens are not acceptable for your threat model, consider using file configuration, which requires you to manually verify and add the keys for clusters you trust. Note, however, that if you use the standard file configuration method, the features below are not available.
Resources
To configure your clusters, you will need to create the resources described below.
Roles
Because each cluster is independent, each specifies its own roles, and we need to create these before anything else. Below is a description of the roles and resource files that need to be created within Teleport.
On the main cluster, you'll need to create the following two roles. The admin role allows you to access all servers, while the staging role limits you to servers with the type=staging label. Create both roles with tctl create -f {file name}.
```yaml
kind: role
version: v2
metadata:
  name: admin
  namespace: default
spec:
  logins: [ root ]
  max_session_ttl: 90h0m0s
  namespaces: ['*']
  node_labels:
    '*': '*'
  resources:
    '*': [read, write]
```
```yaml
kind: role
version: v2
metadata:
  name: staging
  namespace: default
spec:
  logins: [ root ]
  max_session_ttl: 90h0m0s
  node_labels:
    'type': 'staging'
```
On the Trusted Cluster, you'll need to create an admin role as well. Note the logins field: this is a list of usernames that the user will be able to log in as. Once again, create it with tctl create -f {file name}.
```yaml
kind: role
version: v2
metadata:
  name: admin
  namespace: default
spec:
  logins: [ root ]
  max_session_ttl: 90h0m0s
  namespaces: ['*']
  node_labels:
    '*': '*'
  resources:
    '*': [read, write]
```
Trusted Cluster Resources
The Trusted Cluster resource is used to both establish trust between the two clusters and map roles from one cluster to another.
In the resource file below, note the token field: this is the same token that was generated on the main cluster and is used to establish trust.

There is also a role_map field, which describes how a user's roles from the main cluster (which are encoded in the user's SSH certificate) are mapped to roles in the Trusted Cluster. In this case, we are mapping all roles from the remote cluster to the local admin role, so users with the staging role will also be able to access the Trusted Cluster. Take a look at the comprehensive configuration section for more details on how to control access.
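Conceptually, role mapping resolves each role in the user's certificate against the role_map entries, with wildcard matching on the remote pattern. A minimal sketch of that idea (not Teleport's actual implementation, which lives in its Go codebase):

```python
import fnmatch

def map_roles(remote_roles, role_map):
    """Resolve a user's remote roles to local roles via a role_map.

    Conceptual sketch only: each entry's "remote" pattern is matched
    against every role the user holds; matches grant the entry's
    "local" roles.
    """
    local = set()
    for entry in role_map:
        for role in remote_roles:
            if fnmatch.fnmatch(role, entry["remote"]):
                local.update(entry["local"])
    return sorted(local)

# With the simple configuration's role_map, every remote role maps to admin:
role_map = [{"remote": "*", "local": ["admin"]}]
print(map_roles(["admin", "staging"], role_map))  # ['admin']
```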
```yaml
kind: trusted_cluster
version: v1
metadata:
  description: "Remote Cluster"
  name: "Remote Cluster"
  namespace: "default"
spec:
  enabled: true
  role_map:
    - remote: "*"
      local: [admin]
  token: "fake-token"
  tunnel_addr: <main-addr>:3024
  web_proxy_addr: <main-addr>:3080
```
To disable a Trusted Cluster, simply use tctl create -f {file name} again, but this time set enabled: false.
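For example, re-creating the resource above with only the enabled field changed:

```yaml
kind: trusted_cluster
version: v1
metadata:
  description: "Remote Cluster"
  name: "Remote Cluster"
  namespace: "default"
spec:
  enabled: false   # set back to true to re-enable the Trusted Cluster
  role_map:
    - remote: "*"
      local: [admin]
  token: "fake-token"
  tunnel_addr: <main-addr>:3024
  web_proxy_addr: <main-addr>:3080
```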
Verify
That's it. To verify that you can see your Trusted Cluster, run the following command:
$ tsh --proxy=<proxy-addr> clusters
Cluster Name Status
-------------- ------
Main online
Remote Cluster online
Comprehensive Configuration
Suppose you sell an Enterprise version of your application that runs on your customers' infrastructure. You want an easy way to access your customers' infrastructure in case you need to troubleshoot problems that may arise. In addition, since you may have multiple customers, you want to limit your users to only the customer clusters they need access to.
More formally, you have the following:
| Cluster Name | Description |
|---|---|
| main | This is your own cluster; the other Trusted Clusters will dial into this cluster. |
| acme | This cluster belongs to your customer Acme Corporation. |
| emca | This cluster belongs to your customer Emca Corporation. |

| Teleport User | Description |
|---|---|
| james | Support Engineer that handles Acme Corporation. |
| john | Support Engineer that handles Emca Corporation. |
| robert | Support Engineer that handles Acme and Emca Corporation. |
Secret Token
See the Secret Token configuration section in the Simple Configuration.
Resources
To configure your clusters, you will need to create the resources described below.
Roles
Because each cluster is independent, each specifies its own roles, and we need to create these before anything else. Below is a description of the roles and resource files that need to be created within Teleport.
On the main cluster, you'll need to create two roles: one that will be used to access Acme Corporation's cluster and another for Emca Corporation's. Note that these are local roles that will be mapped to remote roles. As configured here, they have full access to all nodes within the main cluster, but by restricting them here you could instead give users limited roles in the main cluster while still mapping them to admin roles in the remote cluster.
To create the roles, create the files below and then use tctl create -f {file name}.
```yaml
kind: role
version: v2
metadata:
  name: acme-support
  namespace: default
spec:
  logins: [ root, acme ]
  max_session_ttl: 90h0m0s
  namespaces: ['*']
  node_labels:
    '*': '*'
  resources:
    '*': [read, write]
```
```yaml
kind: role
version: v2
metadata:
  name: emca-support
  namespace: default
spec:
  logins: [ root, emca ]
  max_session_ttl: 90h0m0s
  namespaces: ['*']
  node_labels:
    '*': '*'
  resources:
    '*': [read, write]
```
On the Acme cluster, you'll need to create at least one role that allows you to access nodes within it. The following role allows you to access all nodes and resources within the Acme cluster and only allows you to log in as the acme user. Once again, use tctl create -f {file name}.
```yaml
kind: role
version: v2
metadata:
  name: admin
  namespace: default
spec:
  logins: [ acme ]
  max_session_ttl: 90h0m0s
  namespaces: ['*']
  node_labels:
    '*': '*'
  resources:
    '*': [read, write]
```
Lastly, on the Emca cluster, create an admin role that only allows you to access nodes with the label access=relaxed and only allows you to log in as the emca user. This is a way to restrict access within a cluster even further.
```yaml
kind: role
version: v2
metadata:
  name: admin
  namespace: default
spec:
  logins: [ emca ]
  max_session_ttl: 90h0m0s
  namespaces: ['*']
  node_labels:
    'access': 'relaxed'
  resources:
    '*': [read, write]
```
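For the access=relaxed label to match anything, the Emca nodes themselves must carry it. In a node's teleport.yaml this would look something like the following (an illustrative fragment; merge it into your existing node configuration):

```yaml
ssh_service:
  enabled: yes
  labels:
    # static label matched by the admin role's node_labels above
    access: relaxed
```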
Users
Now that the roles are created, you need to create users and assign them these roles.
On your main cluster, create users using your usual method, then run the following commands to assign them the appropriate roles. This gives the user james access to Acme Corporation's cluster, the user john access to the Emca Corporation cluster, and the user robert access to both.
$ tctl users update james --set-roles=user:james,acme-support
$ tctl users update john --set-roles=user:john,emca-support
$ tctl users update robert --set-roles=user:robert,acme-support,emca-support
Trusted Cluster Resources
The Trusted Cluster resource is used to both establish trust between the two clusters and map roles from one cluster to another.
In the resource files below, note the token field: this is the same token that was generated on the main cluster and is used to establish trust.

There is also a role_map field, which describes how a user's roles from the main cluster (which are encoded in the user's SSH certificate) are mapped to roles in the Trusted Cluster. In this case, for each cluster we map either acme-support or emca-support to the local role admin. This means the user james will have full access to the Acme cluster, john only has access to the nodes labeled access=relaxed in the Emca cluster, and robert will have full access to Acme and limited access to Emca.
```yaml
kind: trusted_cluster
version: v1
metadata:
  description: "Remote Cluster"
  name: "Acme Cluster"
  namespace: "default"
spec:
  enabled: true
  role_map:
    - remote: "acme-support"
      local: [admin]
  token: "fake-token"
  tunnel_addr: <main-addr>:3024
  web_proxy_addr: <main-addr>:3080
```
```yaml
kind: trusted_cluster
version: v1
metadata:
  description: "Remote Cluster"
  name: "Emca Cluster"
  namespace: "default"
spec:
  enabled: true
  role_map:
    - remote: "emca-support"
      local: [admin]
  token: "fake-token"
  tunnel_addr: <main-addr>:3024
  web_proxy_addr: <main-addr>:3080
```
To disable a Trusted Cluster, simply use tctl create -f {file name} again, but this time set enabled: false.
Verify
That's it. To verify that you can see your Trusted Cluster, run the following command:
$ tsh --proxy=<proxy-addr> clusters
Cluster Name Status
-------------- ------
Main online
Acme Cluster online
Emca Cluster online