By providing only the tunnel address, the `reversetunnel.Resolver`
forced callers to look up the proxy listener mode themselves before
they could dial the address. This resulted in one request to
`/webapi/find` from the resolver to get the tunnel address and then
a second request to `/webapi/find` from users of the `Resolver` to
determine the proxy listener mode. Propagating the listener mode along
with the tunnel address from the `Resolver` ensures only one
`/webapi/find` call is needed.
This is especially impactful for the `reversetunnel.TunnelAuthDialer`,
which is used by the auth HTTP client, because it would do this every
time the `http.Client` connection pool was empty. Whenever the
`http.Client` needed to dial the auth server, it incurred the
additional round trip to the proxy.
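The change above can be sketched as a resolver signature that returns both values in one call. Names here are illustrative, not Teleport's exact identifiers; the real types live in the `reversetunnel` and `webclient` packages:

```go
package main

import "fmt"

// ListenerMode stands in for the proxy listener mode reported by
// /webapi/find (multiplexed vs. separate listeners).
type ListenerMode int

const (
	ListenerModeSeparate ListenerMode = iota
	ListenerModeMultiplex
)

// Resolver returns the reverse tunnel address together with the proxy
// listener mode, so callers need only a single /webapi/find round trip.
type Resolver func() (addr string, mode ListenerMode, err error)

// staticResolver is a trivial Resolver used for illustration; the real
// resolver would fetch both values from /webapi/find via the webclient.
func staticResolver(addr string, mode ListenerMode) Resolver {
	return func() (string, ListenerMode, error) {
		return addr, mode, nil
	}
}

func main() {
	resolve := staticResolver("proxy.example.com:3024", ListenerModeMultiplex)
	addr, mode, err := resolve()
	fmt.Println(addr, mode, err)
}
```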
Prior to this change, each individual service (proxy, app, SSH, db,
etc.) would spin up its own uploader service. If you run multiple
Teleport services in the same process, this means you get multiple
uploaders all watching the same directory, which can result in
duplicate upload events in the audit log.
Additionally, desktop access had (mistakenly) failed to set up this
service, so desktop sessions would only be uploaded if you happened
to also run some other service in the same process that did spin up
the uploader.
Solve these issues by centralizing the uploader service so that it
runs once per process; individual Teleport services no longer need
to consider whether the uploader should run.
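The once-per-process guarantee can be sketched with a `sync.Once` guard; the names here are illustrative, not the actual Teleport symbols:

```go
package main

import (
	"fmt"
	"sync"
)

// uploaderOnce guards the process-wide uploader so that multiple Teleport
// services sharing one process do not each start their own scanner over
// the same session directory.
var (
	uploaderOnce sync.Once
	started      int
)

// startUploaderService is safe for every service to call; only the first
// call actually starts the uploader.
func startUploaderService() {
	uploaderOnce.Do(func() {
		started++
		// ... start the directory scanner / uploader here ...
	})
}

func main() {
	// Several services (proxy, SSH, desktop) may all call this.
	for i := 0; i < 3; i++ {
		startUploaderService()
	}
	fmt.Println(started)
}
```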
* Add Azure auto-discovery configuration fields
* Init databases if azure matchers are in config
* Use AzureMatchers in db service
* Use all azure subscriptions/resource groups if omitted in matcher
* Add azure config tests
* Update lib/services/matchers.go
Co-authored-by: Krzysztof Skrzętnicki <krzysztof.skrzetnicki@goteleport.com>
* Update lib/config/fileconf.go
Co-authored-by: Marek Smoliński <marek@goteleport.com>
* Update lib/config/fileconf.go
Co-authored-by: Marek Smoliński <marek@goteleport.com>
* Update lib/services/matchers.go
Co-authored-by: Marek Smoliński <marek@goteleport.com>
* Remove superfluous cmp option for diffing azure matcher
* Rename AzureMatchers Tags to ResourceTags
* Deduplicate subscription/resource groups and add tests
* Remove azure matcher config fixup
Co-authored-by: Krzysztof Skrzętnicki <krzysztof.skrzetnicki@goteleport.com>
Co-authored-by: Marek Smoliński <marek@goteleport.com>
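The matcher shape and the deduplication bullet above can be sketched as follows; this is a simplified stand-in (the real type lives in `lib/services`), with field names following the PR (`ResourceTags` renamed from `Tags`):

```go
package main

import "fmt"

// AzureMatcher is a simplified sketch of the discovery matcher.
type AzureMatcher struct {
	Subscriptions  []string // empty means all subscriptions
	ResourceGroups []string // empty means all resource groups
	Types          []string // resource types to discover
	ResourceTags   map[string][]string
}

// deduplicate returns the unique entries of in, preserving first-seen
// order, mirroring the "deduplicate subscription/resource groups" change.
func deduplicate(in []string) []string {
	seen := make(map[string]struct{}, len(in))
	out := make([]string, 0, len(in))
	for _, v := range in {
		if _, ok := seen[v]; ok {
			continue
		}
		seen[v] = struct{}{}
		out = append(out, v)
	}
	return out
}

func main() {
	m := AzureMatcher{
		Subscriptions:  deduplicate([]string{"sub-a", "sub-b", "sub-a"}),
		ResourceGroups: deduplicate([]string{"rg-1", "rg-1"}),
	}
	fmt.Println(m.Subscriptions, m.ResourceGroups)
}
```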
This adds proxy peering support: a configurable mode that allows agents
to connect to a subset of proxies while remaining reachable through any
proxy in the cluster. This is achieved by creating gRPC connections
between the proxy servers. Client connections can then be passed between
proxies to reach the desired agent.
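The routing decision this enables can be sketched as below; this is an illustrative stand-in for the peering logic, not Teleport's implementation:

```go
package main

import "fmt"

// pickRoute sketches proxy peering routing: a client connection arriving
// at any proxy is either handled locally (the agent has a tunnel to this
// proxy) or forwarded over a proxy-to-proxy gRPC connection to a peer
// the agent is connected to.
func pickRoute(localProxyID string, agentProxyIDs []string) (peerID string, local bool) {
	for _, id := range agentProxyIDs {
		if id == localProxyID {
			// The agent is tunneled to this proxy; no forwarding needed.
			return "", true
		}
	}
	if len(agentProxyIDs) > 0 {
		// Forward to one of the proxies the agent is connected to.
		return agentProxyIDs[0], false
	}
	return "", false
}

func main() {
	peer, local := pickRoute("proxy-1", []string{"proxy-2", "proxy-3"})
	fmt.Println(peer, local)
}
```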
Teleport will now try to extract the MySQL server version from the initial handshake packet instead of sending `8.0.0-Teleport` every time. This string can be overridden by the new configuration option `mysql.server_version`. On database service start, Teleport will also try to fetch the current version from the MySQL/MariaDB instance; after that, the server version is updated on every successful connection to keep it up to date.
Co-authored-by: STeve (Xin) Huang <xin.huang@goteleport.com>
Co-authored-by: Paul Gottschling <paul.gottschling@goteleport.com>
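In the MySQL protocol, the initial handshake payload begins with a protocol version byte (0x0a for protocol 10) followed by a NUL-terminated server version string, so extracting the version can be sketched as follows (an illustration, not Teleport's actual parser):

```go
package main

import (
	"bytes"
	"fmt"
)

// parseHandshakeVersion extracts the server version string from the
// payload of a MySQL initial handshake packet (after the 4-byte packet
// header): one protocol-version byte (0x0a), then a NUL-terminated
// version string such as "8.0.28".
func parseHandshakeVersion(payload []byte) (string, error) {
	if len(payload) < 2 || payload[0] != 0x0a {
		return "", fmt.Errorf("not a protocol 10 handshake")
	}
	end := bytes.IndexByte(payload[1:], 0x00)
	if end < 0 {
		return "", fmt.Errorf("unterminated server version string")
	}
	return string(payload[1 : 1+end]), nil
}

func main() {
	payload := append([]byte{0x0a}, []byte("8.0.28\x00rest-of-handshake")...)
	version, err := parseHandshakeVersion(payload)
	fmt.Println(version, err)
}
```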
* Replace the upload completer's grace period logic with session tracker checks to accurately determine whether an upload has been abandoned
* Update the session tracker expiration to 1 hour, and dynamically extend it while the session is active.
* Dynamically resolve reverse tunnel address
The reverse tunnel address is currently a static string that is
retrieved from config and passed around for the duration of a
service's lifetime. When the `tunnel_public_address` is changed
on the proxy and the proxy is then restarted, all established
reverse tunnels over the old address will fail indefinitely.
As a means to get around this, #8102 introduced a mechanism
that would cause nodes to restart if their connection to the
auth server was down for a period of time. While this did
allow the nodes to pick up the new address after they
restarted, it was meant to be a stopgap until a more robust
solution could be applied.
Instead of using a static address, the reverse tunnel address
is now resolved via a `reversetunnel.Resolver`. Anywhere that
previously relied on the static proxy address will now fetch
the actual reverse tunnel address via the webclient by using
the Resolver. In addition, this builds on the refactoring done
in #4290 to further simplify the reversetunnel package. Since
we no longer track multiple proxies, all the leftover bits
that did so have been removed to accommodate using a dynamic
reverse tunnel address.
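The shift from a static config string to dynamic resolution can be sketched as a resolver that re-fetches the address on every dial; `fetch` here is a stand-in for the webclient `/webapi/find` call:

```go
package main

import "fmt"

// newResolver wraps a fetch function so that every dial re-resolves the
// reverse tunnel address. Unlike the old static string captured from
// config at startup, a changed tunnel_public_address is picked up on the
// next resolution without restarting the node.
func newResolver(fetch func() (string, error)) func() (string, error) {
	return func() (string, error) {
		return fetch()
	}
}

func main() {
	// Simulate the proxy's tunnel address changing between dials.
	addrs := []string{"proxy-old.example.com:3024", "proxy-new.example.com:3024"}
	i := 0
	resolve := newResolver(func() (string, error) {
		addr := addrs[i]
		if i < len(addrs)-1 {
			i++
		}
		return addr, nil
	})

	first, _ := resolve()
	second, _ := resolve()
	fmt.Println(first, second)
}
```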
The 'verify-full', 'verify-ca', and 'insecure' modes can now be used when connecting to a database. 'verify-full' is the default and the most strict; 'verify-ca' skips the server name check; 'insecure' accepts any certificate presented by the database.
* Refactor component heartbeat callbacks
Consolidate the OK/degraded broadcasts so the same logic isn't
duplicated for each component.
* Periodically update discovered desktops
Fixes #8644
* Allow customizing the desktop search
With this change, we support a discovery base DN other than '*',
and add support for further filtering the results with additional
LDAP filters.
Additionally, we filter out group managed service accounts, which
show up in LDAP searches for (objectClass=computer) despite not
being computers. (This is mostly harmless, as the service accounts
aren't present in DNS, so Teleport just ignores them. It does, however,
log a DNS error message that could be confusing, so we explicitly
filter these out just to be safe. This was discovered when testing
on AWS managed AD, which creates a gMSA for DNS.)
Fixed two issues that were causing a performance problem in the Web UI.
The first issue was that when an `Authorizer` was created at process
startup by the Auth Service, it bypassed the cache and always hit
the backend directly. All services have been updated to use a cached
access point.
The second issue was that the Web UI was not using the local cache when
fetching the list of roles for a user. The Web UI has been updated to
use the local cached access point.
In `auth.Context`, the `Identity` field used to contain the original
caller identity, while the `User` field contained the mapped local user.
These differ when the request comes from a remote trusted cluster.
Lots of code assumed that `auth.Context.Identity` contained the local
identity and used roles/traits from there.
To prevent this confusion, populate `auth.Context.Identity` with the
*mapped* identity, and add `auth.Context.UnmappedIdentity` for callers
that actually need it.
One caller that needs `UnmappedIdentity` is the k8s proxy. It uses that
identity to generate an ephemeral user cert. Using the local mapped
identity in that case would make the downstream server (e.g.
kubernetes_service) treat it as a real local user, which doesn't
exist in the backend and causes trouble.
The `ProcessKubeCSR` endpoint on the auth server was also updated to
understand unmapped remote identities.
Co-authored-by: Andrew Lytvynov <andrew@goteleport.com>
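The resulting shape can be sketched as follows; the types and the mapping function are pared-down stand-ins for the real `auth.Context` and trusted-cluster role mapping:

```go
package main

import "fmt"

// Identity is a pared-down stand-in for the identity carried in
// auth.Context.
type Identity struct {
	Username string
	Cluster  string
}

// Context sketches the shape after this change: Identity always holds
// the mapped local identity, while UnmappedIdentity preserves the
// original caller identity (they differ for trusted-cluster requests).
type Context struct {
	Identity         Identity // mapped local identity
	UnmappedIdentity Identity // original caller identity
}

// newContext sketches populating both fields; mapRemote stands in for
// the trusted-cluster user/role mapping.
func newContext(caller Identity, local bool, mapRemote func(Identity) Identity) Context {
	if local {
		// Local callers need no mapping; both fields are the same.
		return Context{Identity: caller, UnmappedIdentity: caller}
	}
	return Context{Identity: mapRemote(caller), UnmappedIdentity: caller}
}

func main() {
	remote := Identity{Username: "alice", Cluster: "leaf"}
	ctx := newContext(remote, false, func(Identity) Identity {
		return Identity{Username: "remote-alice", Cluster: "root"}
	})
	fmt.Println(ctx.Identity.Username, ctx.UnmappedIdentity.Username)
}
```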