Fixes #3604
This commit adds support for the `cluster_labels`
role parameter, which limits access to remote clusters by label.
The new `tctl update rc` command provides an interface to set labels on remote clusters.
Consider two clusters: `one` (the root) and `two` (the leaf).
```bash
$ tsh clusters
Cluster Name Status
------------ ------
one          online
two          online
```
Create the trusted cluster join token with labels:
```bash
$ tctl tokens add --type=trusted_cluster --labels=env=prod
```
Every cluster joined using this token will inherit the `env: prod` label.
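On the leaf cluster, this token would be referenced from a `trusted_cluster` resource. A minimal sketch, with placeholder addresses and token value (not output of the commands above):
```yaml
# Hypothetical leaf-side resource; token and addresses are placeholders.
kind: trusted_cluster
version: v2
metadata:
  name: one
spec:
  enabled: true
  token: <token printed by "tctl tokens add" above>
  web_proxy_addr: one.example.com:3080
  tunnel_addr: one.example.com:3024
```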
Alternatively, update remote cluster labels with the
`tctl update rc` command. Letting remote clusters propagate their own labels
would create a problem of rogue clusters updating their labels to arbitrary values.
Instead, the administrator of the root cluster controls the labels
using the remote clusters API, without fear of override:
```bash
$ tctl get rc
kind: remote_cluster
metadata:
  name: two
status:
  connection: online
  last_heartbeat: "2020-09-14T03:13:59.35518164Z"
version: v3
```
```bash
$ tctl update rc/two --set-labels=env=prod
cluster two has been updated
```
```bash
$ tctl get rc
kind: remote_cluster
metadata:
  labels:
    env: prod
  name: two
status:
  connection: online
  last_heartbeat: "2020-09-14T03:13:59.35518164Z"
```
Update the role to deny access to the `prod` environment:
```yaml
kind: role
metadata:
  name: dev
spec:
  allow:
    logins: [root]
    node_labels:
      '*': '*'
    # Cluster labels control which clusters a user can connect to. The wildcard ('*')
    # means any cluster. If no role in the role set uses cluster labels and the
    # cluster is not labeled, the cluster labels check is not applied. Otherwise,
    # cluster labels are always enforced. This makes the feature backwards-compatible.
    cluster_labels:
      'env': 'staging'
  deny:
    # Cluster labels in deny rules work the same way; the wildcard ('*') means
    # any cluster. By default none are set in deny rules, to preserve backwards
    # compatibility.
    cluster_labels:
      'env': 'prod'
```
```bash
$ tctl create -f dev.yaml
```
Cluster two is now invisible to users with the `dev` role.
```bash
$ tsh clusters
Cluster Name Status
------------ ------
one          online
```
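Direct attempts to connect should be denied as well. A hypothetical session, with the exact error text assumed for illustration:
```bash
# Hypothetical; the exact error text is assumed.
$ tsh ssh --cluster=two root@node
ERROR: access denied
```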
Added support for an identity-aware, RBAC-enforcing, mutually
authenticated web application proxy to Teleport.
* Updated services.Server to support application servers.
* Updated services.WebSession to support application sessions.
* Added CRUD RPCs for "AppServers".
* Added CRUD RPCs for "AppSessions".
* Added RBAC support using labels for applications.
* Added JWT signer as a services.CertAuthority type.
* Added support for signing and verifying JWT tokens.
* Refactored dynamic label and heartbeat code into standalone packages.
* Added application support to web proxies and a new "app_service" to
proxy mutually authenticated connections from the proxy to an internal
application (see the config sketch after this list).
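For illustration, a minimal sketch of what an `app_service` configuration could look like; the application name, URI, and labels are placeholders, not taken from the commits above:
```yaml
# Hypothetical teleport.yaml fragment; values are placeholders.
app_service:
  enabled: yes
  apps:
  - name: grafana
    uri: http://localhost:3000
    labels:
      env: dev
```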
* Implement kubernetes_service registration and startup
The new service now starts, registers (locally or via a join token) and
heartbeats its presence to the auth server.
This service can handle k8s requests (like a proxy), but not requests to
remote Teleport clusters. Proxies will be responsible for routing those.
The client (tsh) will not go to this service until proxy routing is
implemented. I manually tweaked the server address in the kubeconfig to test it.
You can also run `tctl get kube_service` to list all registered
instances. The self-reported info is currently limited: only the listening
address is set.
* Address review feedback
This is a shorthand for the larger kubernetes section:
```yaml
proxy_service:
  kube_listen_addr: "0.0.0.0:3026"
```
is equivalent to:
```yaml
proxy_service:
  kubernetes:
    enabled: yes
    listen_addr: "0.0.0.0:3026"
```
This shorthand is meant to be used with the new `kubernetes_service`:
https://github.com/gravitational/teleport/pull/4455
It reduces confusion when both `proxy_service` and `kubernetes_service`
are configured in the same process.
* Make k8s permissions test optional
There are several legitimate cases where it can fail:
- root proxy running inside k8s but without access to local k8s cluster
- root proxy running with a dummy kubeconfig that we recommended in the
past
Leave a ForwarderConfig flag to enforce this check; it will be useful later
in kubernetes_service, which should always have the right permissions.
This commit fixes #4598
A config with multiple event backends was crashing on 4.4:
```yaml
storage:
  audit_events_uri: ['dynamodb://streaming', 'stdout://', 'dynamodb://streaming2']
```
The uploader now retries more slowly on network errors and picks up the pace
after any upload succeeds.
Records that were corrupted will never get uploaded, so the uploader would
create streams indefinitely, clogging the auth server with streams. Now the
uploader writes a marker for bad session uploads and does not attempt to
re-upload them.
* Fix local etcd test failures when etcd is not running
* Add kubernetes_service to teleport.yaml
This plumbs the config fields only; they have no effect yet (see the sketch below).
Also, remove `cluster_name` from `proxy_service.kubernetes`. This field
will only exist under `kubernetes_service` per
https://github.com/gravitational/teleport/pull/4455
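A minimal sketch of the plumbed section, with field names and values assumed for illustration:
```yaml
# Hypothetical teleport.yaml fragment; fields are assumptions and have no effect yet.
kubernetes_service:
  enabled: yes
  listen_addr: 0.0.0.0:3027
  kubeconfig_file: /path/to/kubeconfig
```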
* Handle IPv6 in kubernetes_service and rename label fields
* Disable k8s cluster name defaulting in user TLS certs
Need to implement service registration first.
Most users won't need this, so the behavior is optional. Default system
configs will usually trigger a password prompt, which is why this
feature is disabled by default.
* option to export latency profile (see the sketch after this list)
* use print percentiles method
* add timestamp to file name
* add descriptions to flags
* Flush to make sure data is written to file
* path error catching
* fix trailing whitespace
* separate logic into separate functions
* return histogram err message
* move functionality into one function
* return path for onBenchmark to print
* close file on writer error
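Taken together, the export option from the list above might be invoked like this; the flag names are assumptions, not the confirmed CLI:
```bash
# Hypothetical invocation; flag names are assumed.
$ tsh bench --export --path=/tmp/profiles root@node ls
```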
`require` is a sister package to `assert` that terminates the test on
failure. `assert` records the failure but lets the test proceed, which
is unintuitive.
Also update all existing tests to match.
* Update k8s script retrieval
The curl command retrieves the HTML version, not the script itself; modified to retrieve the raw version.
* Modify k8s script URL to use raw