This commit moves the proxy's Kubernetes configuration
into a separate nested block to provide more fine-grained
settings:
```yaml
auth:
  kubernetes_ca_cert_path: /tmp/custom-ca
proxy:
  enabled: yes
  kubernetes:
    enabled: yes
    public_addr: [custom.example.com:port]
    api_addr: kubernetes.example.com:443
    listen_addr: localhost:3026
```
1. The Kubernetes config section is explicitly enabled
or disabled. It is disabled by default.
2. The public address in the kubernetes section
is propagated to the tsh profile.
The other part of the commit updates the Ping
endpoint to send proxy configuration back to
the client, including the Kubernetes public address
and the SSH listen address.
Clients update their profiles according to the
configuration received from the proxy.
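For illustration, a rough sketch of what the client-side profile could contain after login; the field names below (e.g. `kube_proxy_addr`) are assumptions, not the exact on-disk format:
```yaml
# ~/.tsh/<proxy-host>.yaml -- illustrative sketch; field names are assumptions
user: alice
web_proxy_addr: custom.example.com:3080
ssh_proxy_addr: custom.example.com:3023   # from the SSH listen address returned by Ping
kube_proxy_addr: custom.example.com:3026  # from proxy.kubernetes.public_addr returned by Ping
```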
This is a helm chart for Teleport that conforms to [helm chart best practices](https://docs.helm.sh/chart_best_practices/) and various conventions seen in the official charts repository, so that it is easy to use and flexible enough to support many deployment scenarios.
Features:
- Locally testable on minikube
- Chart values for flexible configuration, instead of sourcing the raw teleport.yaml contained in the chart
- Automatic rolling update of the pods on configuration change, following Helm best practices
- Service and deployment ports are more finely configurable
- Customizable service and ingress for exposing the proxy to the private network or the internet
- Use service annotations for integration with e.g. [external-dns](https://github.com/kubernetes-incubator/external-dns)
- Use ingress for integration with e.g. [aws-alb-ingress-controller](https://github.com/kubernetes-sigs/aws-alb-ingress-controller)
- Configurable pod annotations. Useful for IAM integration with kube2iam/kiam, for example.
- Customizable pod assignment for security and availability
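As an illustration of how these features map onto chart values, a minimal values.yaml sketch; the key names below are hypothetical, not the chart's exact schema:
```yaml
# values.yaml -- hypothetical keys, for illustration only
image:
  repository: quay.io/gravitational/teleport
  tag: "2.7.0"
service:
  type: LoadBalancer
  annotations:
    external-dns.alpha.kubernetes.io/hostname: teleport.example.com
ingress:
  enabled: false
podAnnotations:
  iam.amazonaws.com/role: teleport-node-role   # e.g. kube2iam/kiam IAM integration
nodeSelector: {}
tolerations: []
config:
  # rendered into teleport.yaml; changing it triggers a rolling update of the pods
  auth_service:
    enabled: true
  proxy_service:
    enabled: true
```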
Updates #1986.
This is an initial, experimental implementation that will
be updated with tests and edge cases prior to the production 2.7.0 release.
The Teleport proxy adds support for the Kubernetes API protocol.
The auth server uses the Kubernetes API to receive certificates
issued by the Kubernetes CA.
The proxy intercepts and forwards API requests to the Kubernetes
API server and captures live session traffic, making
recordings available in the audit log.
tsh login now updates the kubeconfig to use
Teleport as a proxy server.
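For context, the entry tsh writes into the kubeconfig points kubectl at the proxy's Kubernetes listener; a minimal sketch of such an entry (names and key paths are illustrative):
```yaml
# ~/.kube/config excerpt -- illustrative sketch of an entry written by `tsh login`
apiVersion: v1
kind: Config
clusters:
- name: teleport
  cluster:
    server: https://teleport.example.com:3026      # the proxy's Kubernetes listener
    certificate-authority: /home/alice/.tsh/keys/teleport.example.com/certs.pem
users:
- name: teleport
  user:
    client-certificate: /home/alice/.tsh/keys/teleport.example.com/alice-kube.pem
    client-key: /home/alice/.tsh/keys/teleport.example.com/alice
contexts:
- name: teleport
  context:
    cluster: teleport
    user: teleport
current-context: teleport
```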
Fixes #1671
* Add notes about TOS agreements for AMI
* Use specific UID for Teleport instances
* Use encrypted EFS for session storage
* Scale up auto scaling groups to the number of AZs by default
* Move dashboard to local file
* Fix dynamo locking bug
* Move PID file writing, fixing the enterprise pid-file
* Add reload method for teleport units
The demo monitoring stack sets up example monitoring
infrastructure:
* All nodes, auth servers, and proxies
run Telegraf alongside them, polling Prometheus
diagnostic endpoints.
* Telegraf sends the data to an InfluxDB database.
* Grafana sets up a cluster health dashboard
watching key Teleport metrics: number of goroutines,
number of active sessions, file descriptors, and so on.
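To make the data flow concrete, a compose-style sketch of the stack; the images and ports are standard defaults, and the layout is illustrative rather than the actual demo manifests:
```yaml
# docker-compose.yml -- illustrative sketch of the demo monitoring stack
version: "3"
services:
  influxdb:
    image: influxdb:1.5
    ports: ["8086:8086"]        # Telegraf writes metrics here
  grafana:
    image: grafana/grafana
    ports: ["3000:3000"]        # serves the cluster health dashboard, reads from InfluxDB
  telegraf:
    image: telegraf
    volumes:
      # telegraf.conf scrapes Teleport's Prometheus diagnostic endpoints
      # and forwards the samples to InfluxDB
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
```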
* Fix IAM instance profile assignments for proxies and nodes
* Add support for auth server certificate verification done by
nodes and proxies joining the cluster.
* Fix out-of-order events returned by auth servers in HA mode.
In HA mode, an auth server could return events out of order
when they had been sent to multiple auth servers, confusing
the user interface, which expects sorted events.
This commit fixes the problem by sorting the events returned
by SearchEvents.
This is an MVP for an HA deployment of Teleport on AWS:
* Using Terraform
* EFS for audit log storage
* Proxies and auth servers in auto scaling group
* NLB for frontends
* Let's Encrypt
Some users noticed that the 'display' field is not well documented for the
connectors.
I also noticed that some defaults are not sensible (like "google" as the
provider).
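For example, a connector resource with `display` set explicitly; the issuer, client, and role values below are placeholders:
```yaml
# oidc-connector.yaml -- placeholder values, for illustration only
kind: oidc
version: v2
metadata:
  name: example-sso
spec:
  display: "Log in with Example SSO"   # label shown on the Web UI login button
  issuer_url: https://sso.example.com
  client_id: <client-id>
  client_secret: <client-secret>
  redirect_url: https://proxy.example.com:3080/v1/webapi/oidc/callback
  claims_to_roles:
    - claim: groups
      value: admins
      roles: ["admin"]
```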
- Switched to new way of building Enterprise
- Removed `tctl tunnels` command (preparation for new resources)
- Removed `tctl auth ls` command (preparation for new resources)
Instead of trying to achieve fully "offline" operation, this commit
honestly converts the previous attempts into "caching access point client"
behavior.
Closes #554
What works:
1. You have to start all 3: node, proxy and auth.
2. Log in using 'tsh' (so it will create a cert).
3. Then you can shut 'auth' down.
4. The proxy and node will stay up, and tsh will still be able to log in.
What doesn't work:
1. Auth updates are not visible to proxy/node (like new servers)
2. Not sure if "trusted clusters" will work.