Commit graph

14 commits

Brian Joerger 4d36870ff0
Remove remaining API aliases (#7137) 2021-06-08 12:08:55 -07:00
Brian Joerger 7bff7c41bd
Remove API aliases (#6983) 2021-06-04 13:29:31 -07:00
Nic Klaassen f268ba173e
Stop registering a Kubernetes cluster named after the Teleport cluster (#6786) 2021-05-25 17:50:35 -07:00
dmitri d6fe06c906 Augment the checking stream/streamer and AuditWriter with cluster name details to automatically populate the field upon event emission.
Updates https://github.com/gravitational/teleport/issues/5856.
2021-03-17 18:21:57 -07:00
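
As an illustration of the wrapping idea in the commit above (the types below are simplified stand-ins, not Teleport's real audit interfaces), a checking emitter can populate the cluster name before delegating emission:

```go
package audit

import "context"

// Event is a stand-in for an audit event that carries a cluster name.
type Event struct {
    Type        string
    ClusterName string
}

// Emitter is a stand-in for anything that can emit audit events.
type Emitter interface {
    EmitAuditEvent(ctx context.Context, e *Event) error
}

// CheckingEmitter wraps an inner Emitter and fills in the cluster
// name on events that were emitted without one.
type CheckingEmitter struct {
    Inner       Emitter
    ClusterName string
}

func (c *CheckingEmitter) EmitAuditEvent(ctx context.Context, e *Event) error {
    if e.ClusterName == "" {
        e.ClusterName = c.ClusterName
    }
    return c.Inner.EmitAuditEvent(ctx, e)
}
```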
Alexey Kontsevoy 472df28f2a
Add "billing_information" RBAC resource (#5676)
* Expose GRPC client connection to plugins
* Replaces global plugin state with the PluginRegistry
2021-03-01 22:47:03 -05:00
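
A rough sketch of the registry pattern this commit describes, with hypothetical names rather than Teleport's actual plugin API:

```go
package plugin

import "google.golang.org/grpc"

// Plugin is a stand-in interface for anything registrable.
type Plugin interface {
    Name() string
    // Start receives the shared gRPC client connection instead of
    // reaching for package-level globals.
    Start(conn *grpc.ClientConn) error
}

// Registry replaces global plugin state with an instance that can
// be passed around and tested in isolation.
type Registry struct {
    plugins []Plugin
}

func (r *Registry) Add(p Plugin) { r.plugins = append(r.plugins, p) }

// StartAll hands every registered plugin the same client connection.
func (r *Registry) StartAll(conn *grpc.ClientConn) error {
    for _, p := range r.plugins {
        if err := p.Start(conn); err != nil {
            return err
        }
    }
    return nil
}
```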
Andrej Tokarčík 899cc1c0ec
Propagate the mapped local user identity via auth.Context (#5794)
In `auth.Context`, the `Identity` field used to contain the original
caller identity, while the `User` field contained the mapped local user.
These differ when the request comes from a remote trusted cluster.

Lots of code assumed that `auth.Context.Identity` contained the local
identity and used roles/traits from there.

To prevent this confusion, populate `auth.Context.Identity` with the
*mapped* identity, and add `auth.Context.UnmappedIdentity` for callers
that actually need it.

One caller that needs `UnmappedIdentity` is the k8s proxy. It uses that
identity to generate an ephemeral user cert. Using the local mapped
identity in that case would make the downstream server (e.g.
kubernetes_service) treat it like a real local user, which doesn't
exist in the backend and causes trouble.

The `ProcessKubeCSR` endpoint on the auth server was also updated to
understand unmapped remote identities.

Co-authored-by: Andrew Lytvynov <andrew@goteleport.com>
2021-03-01 21:55:59 +01:00
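
A simplified sketch of the resulting shape described above, with stand-in types for illustration:

```go
package auth

// Identity is a stand-in for the caller identity extracted from a cert.
type Identity struct {
    Username string
    Groups   []string
}

// Context holds authorization state for a single request.
type Context struct {
    // User is the mapped local user.
    User string
    // Identity is the *mapped* local identity; most callers want this.
    Identity Identity
    // UnmappedIdentity preserves the original caller identity from a
    // remote trusted cluster, for the few callers (e.g. the k8s proxy
    // generating ephemeral user certs) that need it.
    UnmappedIdentity Identity
}
```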
Andrew Lytvynov 3fa6904377
Multiple fixes for k8s forwarder (#5038)
* kube: emit audit events using process context

Using the request context can prevent audit events from being emitted
if the client disconnects and the request context gets closed.
We shouldn't lose audit events like that.

Also, log all response errors from exec handler.

* kube: cleanup forwarder code

Rename a few config fields to be more descriptive.
Avoid embedding unless necessary, to keep the package API clean.

* kube: cache only user certificates, not the entire session

The expensive part that we need to cache is the client certificate.
Making a new one requires a round-trip to the auth server, plus entropy
for crypto operations.

The rest of clusterSession contains request-specific state and only
causes problems if cached.
For example, clusterSession stores a reference to a remote teleport
cluster (if needed); caching requires extra logic to invalidate the
session when that cluster disappears (or its tunnels drop out). The same
problem happens with kubernetes_service tunnels.

Instead, the forwarder now picks a new target for each request from the
same user, providing a kind of "load-balancing".

* Init session uploader in kubernetes service

It's started by all other services that upload sessions (app/proxy/ssh)
but was missing here. Because of this, the session storage directory for
async uploads wasn't created on disk, which caused interactive sessions
to fail.
2020-12-08 11:12:07 -08:00
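
A minimal sketch of the first fix above, using hypothetical names: emit on a long-lived process context so a client disconnect (which cancels the request context) cannot drop the event.

```go
package kube

import (
    "context"
    "log"
)

// Event and Emitter are stand-ins for the real audit types.
type Event struct{ Type string }

type Emitter interface {
    EmitAuditEvent(ctx context.Context, e *Event) error
}

type Forwarder struct {
    // processCtx lives as long as the teleport process, not the request.
    processCtx context.Context
    emitter    Emitter
}

// ServeExec handles a request; the audit event is emitted with the
// process context, so cancellation of reqCtx cannot lose the event.
func (f *Forwarder) ServeExec(reqCtx context.Context) {
    // ... handle the request using reqCtx ...
    if err := f.emitter.EmitAuditEvent(f.processCtx, &Event{Type: "exec"}); err != nil {
        // Log all response errors instead of dropping them silently.
        log.Printf("failed to emit audit event: %v", err)
    }
}
```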
a-palchikov 7c87576a8b
flaky tests: consistent logging (#4849)
* Update logrus package to fix data races
* Introduce a logger that uses the test context to log messages so they are output if a test fails, for improved troubleshooting.
* Revert introduction of test logger - simply leave logger configuration at debug level outputting to stderr during tests.
* Run integration test for e as well
* Use make with a cap and append to only copy the relevant roles.
* Address review comments
* Update the integration test suite to use a test-local logger that only outputs logs if a specific test has failed - no logs from other test cases will be output.
* Revert changes to InitLoggerForTests API
* Create a new logger instance when applying defaults or merging with file service configuration
* Introduce a local logger interface to be able to test file configuration merge.
* Fix kube integration tests with respect to logging
* Move goroutine profile dump into a separate func to handle parameters consistently for all invocations
2020-12-07 15:35:15 +01:00
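
The test-local logger idea can be sketched with the standard library alone (an illustration, not the actual implementation): buffer output per test and dump it only when that test fails.

```go
package testlog

import (
    "bytes"
    "log"
    "testing"
)

// New returns a logger whose output is buffered per test and only
// written out if the test fails, keeping passing tests silent.
func New(t *testing.T) *log.Logger {
    var buf bytes.Buffer
    t.Cleanup(func() {
        if t.Failed() {
            t.Log(buf.String())
        }
    })
    return log.New(&buf, t.Name()+": ", log.LstdFlags)
}
```

A test would then call `logger := testlog.New(t)` and log through it as usual; only failing tests print their logs.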
Sasha Klizhentas e6681abe6a Fan out events in async mode for async recordings.
This commit fixes #4695.

Teleport in async recording mode sends all events to disk,
and uploads them to the server later.

It uploads some events synchronously to the audit log so
they show up in the global event log right away.

However, if the auth server is slow, this fanout blocks the session.

This commit makes the fanout of those events fast and non-blocking, and
never lets it fail, so sessions will not hang unless the disk writes
hang.

It adds a backoff period and a timeout after which some events may be
lost, but the session will continue without blocking.
2020-11-13 17:10:35 -08:00
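
A minimal sketch of the non-blocking fanout described above, with hypothetical types; the backoff logic is elided:

```go
package events

import (
    "errors"
    "time"
)

type Event struct{ Type string }

// Fanout forwards events to a slow consumer without ever stalling the
// session for longer than timeout; events past the deadline are dropped.
type Fanout struct {
    ch      chan *Event
    timeout time.Duration
}

func NewFanout(buffer int, timeout time.Duration) *Fanout {
    return &Fanout{ch: make(chan *Event, buffer), timeout: timeout}
}

var ErrDropped = errors.New("event dropped: consumer too slow")

// Emit never blocks past f.timeout; the session keeps going even if
// the event is lost.
func (f *Fanout) Emit(e *Event) error {
    select {
    case f.ch <- e:
        return nil
    case <-time.After(f.timeout):
        return ErrDropped
    }
}
```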
Russell Jones cf635a7e60 Addressed code review comments. 2020-11-13 14:52:00 -08:00
Andrew Lytvynov dd3977957a Register a kubernetes cluster from proxy_service
A proxy running in pre-5.0 mode (e.g. with a local kubeconfig) should
register an entry in `tsh kube clusters`.
After upgrading to 5.0 without migrating to kubernetes_service, all the
new `tsh kube` commands will work as expected.
2020-11-13 14:52:00 -08:00
Andrew Lytvynov 4bc8011722
RBAC for kubernetes clusters (#4782)
* Add labels to KubernetesCluster resources

Plumb from config to the registered object, keep dynamic labels updated.

* Check kubernetes RBAC

Checks live in some CRUD operations on the auth server and in the
kubernetes forwarder (both proxy and kubernetes_service).
The logic is essentially a copy-paste of the TAA version.
2020-11-11 22:58:33 +00:00
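
A rough sketch of a label-based access check like the one described above (the types and matching semantics are simplified assumptions, not Teleport's RBAC engine):

```go
package rbac

// KubeCluster carries the labels registered for a kubernetes cluster.
type KubeCluster struct {
    Name   string
    Labels map[string]string
}

// Role allows access to clusters whose labels match every selector;
// "*" matches any value.
type Role struct {
    KubernetesLabels map[string]string
}

func (r Role) CanAccess(c KubeCluster) bool {
    for key, want := range r.KubernetesLabels {
        got, ok := c.Labels[key]
        if !ok || (want != "*" && want != got) {
            return false
        }
    }
    return true
}
```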
Andrew Lytvynov b16ad647b4
Kubernetes request routing and cluster registration (#4670)
This change has several parts: cluster registration, cache updates,
routing and a new tctl flag.

> cluster registration

Cluster registration means adding `KubernetesClusters` to `ServerSpec`
for servers with `KindKubeService`.

`kubernetes_service` instances will parse their kubeconfig or local
`kube_cluster_name` and add the resulting cluster names to the
`ServerSpec` sent to the auth server. They are effectively declaring:
"I can serve k8s requests for k8s cluster X".

> cache updates

This is just cache plumbing for `kubernetes_service` presence, so that
other teleport processes can fetch all kube services. It was missed
in the previous PR implementing CRUD for `kubernetes_service`.

> routing

Now the fun part - routing logic. This logic lives in
`/lib/kube/proxy/forwarder.go` and is shared by both `proxy_service`
(with kubernetes integration enabled) and `kubernetes_service`.

The target k8s cluster name is passed in the client cert, along with k8s
users/groups information.

`kubernetes_service` only serves requests for its direct k8s cluster
(from `Forwarder.creds`) and doesn't route requests to other teleport
instances.

`proxy_service` can serve requests:
- directly to a k8s cluster (the way it works pre-5.0)
- to a leaf teleport cluster (also same as pre-5.0, based on
  `RouteToCluster` field in the client cert)
- to a `kubernetes_service` (directly or over a tunnel)

The last two modes require the proxy to generate an ephemeral client TLS
cert to do an outbound mTLS connection.

> tctl flag

A flag `--kube-cluster-name` for `tctl auth sign --format=kubernetes`
which allows generating client certs for a non-default k8s cluster name
(as long as it's registered in the cluster).
I used this for testing, but it could be used for automation too.
2020-11-09 19:40:02 +00:00
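
The three proxy_service modes above can be sketched as a routing decision (identifiers are hypothetical; the real logic lives in `/lib/kube/proxy/forwarder.go`):

```go
package proxy

// Identity is the relevant slice of the client cert.
type Identity struct {
    RouteToCluster  string // leaf teleport cluster, if any
    KubeClusterName string // target k8s cluster
}

type Target int

const (
    DirectK8s   Target = iota // serve with local kubeconfig credentials
    LeafCluster               // forward to a leaf teleport cluster
    KubeService               // forward to a kubernetes_service
)

// route mirrors the three proxy_service modes described above.
func route(id Identity, localCluster string, localKubeClusters map[string]bool) Target {
    if id.RouteToCluster != "" && id.RouteToCluster != localCluster {
        return LeafCluster
    }
    if localKubeClusters[id.KubeClusterName] {
        return DirectK8s
    }
    return KubeService
}
```

The last two targets are where the proxy would mint the ephemeral client TLS cert for the outbound mTLS connection.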
Andrew Lytvynov 5ec194cd0d
Implement kubernetes_service registration and startup (#4611)
* Implement kubernetes_service registration and startup

The new service now starts, registers (locally or via a join token) and
heartbeats its presence to the auth server.

This service can handle k8s requests (like a proxy) but not requests to
remote teleport clusters. Proxies will be responsible for routing those.
The client (tsh) will not yet go to this service until proxy routing is
implemented. I manually tweaked the server address in kubeconfig to test
it.

You can also run `tctl get kube_service` to list all registered
instances. The self-reported info is currently limited - only the
listening address is set.

* Address review feedback
2020-10-30 17:19:53 +00:00
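
A bare-bones sketch of the register-and-heartbeat loop this describes, with all names hypothetical:

```go
package kubeservice

import (
    "context"
    "log"
    "time"
)

// Server is a hypothetical self-description sent to the auth server.
type Server struct {
    Name string
    Addr string // self-reported listening address
}

// AuthClient is a stand-in for the auth server API used here.
type AuthClient interface {
    UpsertKubeService(ctx context.Context, s Server) error
}

// heartbeat re-announces this instance on an interval so that
// `tctl get kube_service` keeps listing it while it is alive.
func heartbeat(ctx context.Context, auth AuthClient, s Server, every time.Duration) {
    t := time.NewTicker(every)
    defer t.Stop()
    for {
        if err := auth.UpsertKubeService(ctx, s); err != nil {
            // Transient auth server errors shouldn't kill the service.
            log.Printf("heartbeat failed: %v", err)
        }
        select {
        case <-ctx.Done():
            return
        case <-t.C:
        }
    }
}
```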