* Upgrade github.com/gravitational/trace to v1.1.12
We were a few versions behind. In particular, this version lets us use
stdlib's `errors.Is/As` to inspect errors.
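A minimal sketch of what this enables; it assumes only that trace-wrapped errors now implement `Unwrap`, which is what lets stdlib `errors.Is/As` see through them:
```go
package main

import (
	"errors"
	"fmt"
	"os"

	"github.com/gravitational/trace"
)

func main() {
	// trace.Wrap preserves the original error; with a recent trace,
	// the wrapper implements Unwrap, so stdlib errors.Is can see
	// through it without trace-specific helpers.
	err := trace.Wrap(os.ErrNotExist)
	fmt.Println(errors.Is(err, os.ErrNotExist)) // true
}
```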
* Bump trace to 1.1.13
Co-authored-by: Andrew Lytvynov <andrew@goteleport.com>
This commit fixes #5177
The initial implementation uses the dir backend as a cache, which is OK
for small clusters but will be a problem with many proxies.
It is built on Go's autocert package, which is quite limited
compared to Caddy's certmagic or lego: autocert has no OCSP stapling
and no cache locking, for example. However, it is much simpler and has
no dependencies, and it will be easier to extend to use the Teleport
backend as a cert cache. A minimal autocert sketch follows the config
example below.
```yaml
proxy_service:
  public_addr: ['example.com']
  # ACME - automatic certificate management environment.
  #
  # It provisions certificates for domains and
  # valid subdomains in the public_addr section.
  #
  # Subdomains are valid if there is a registered application.
  # For example, app.example.com will get a cert if app is a registered
  # application access app; the subdomain cookie.example.com will not.
  #
  # Teleport's ACME implementation uses the TLS-ALPN-01 challenge:
  #
  # https://letsencrypt.org/docs/challenge-types/#tls-alpn-01
  #
  acme:
    # By default ACME is disabled.
    enabled: true
    # Use a custom URI, for example the staging URL is
    #
    # https://acme-staging-v02.api.letsencrypt.org/directory
    #
    # Default is the letsencrypt.org production URL:
    #
    # https://acme-v02.api.letsencrypt.org/directory
    uri: ''
    # Set email to receive alerts and other correspondence
    # from your certificate authority.
    email: 'alice@example.com'
```
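A rough sketch of the underlying autocert wiring, assuming the dir backend maps onto `autocert.DirCache`; the cache path and host list here are illustrative, not Teleport's actual values:
```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

func main() {
	m := &autocert.Manager{
		Prompt: autocert.AcceptTOS,
		// Only issue certs for public_addr and registered app subdomains.
		HostPolicy: autocert.HostWhitelist("example.com", "app.example.com"),
		// Dir backend as the certificate cache (no locking, as noted above).
		Cache: autocert.DirCache("/var/lib/teleport/acme"),
		Email: "alice@example.com",
	}
	srv := &http.Server{
		Addr: ":443",
		// TLSConfig wires up the TLS-ALPN-01 challenge handler.
		TLSConfig: m.TLSConfig(),
	}
	// Certificates come from the manager, not from files on disk.
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```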
* Make k8s error responses decodable by kubectl
`kubectl` expects a k8s `Status` object in error responses.
Intercept generic handler errors and forwarder errors, and wrap them in
a `Status` object.
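A sketch of what such wrapping can look like, assuming `k8s.io/apimachinery` is available; the handler shape and status-code mapping are illustrative, not the exact forwarder code:
```go
package forwarder

import (
	"encoding/json"
	"net/http"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// writeKubeError wraps an error in a k8s Status object so that kubectl
// can decode it, instead of receiving a bare text/plain message.
func writeKubeError(w http.ResponseWriter, err error, code int) {
	status := metav1.Status{
		TypeMeta: metav1.TypeMeta{Kind: "Status", APIVersion: "v1"},
		Status:   metav1.StatusFailure,
		Message:  err.Error(),
		Code:     int32(code),
	}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(code)
	json.NewEncoder(w).Encode(&status)
}
```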
* Use strict teleport.yaml validation in warning mode
Strict YAML validation catches cases where a valid config key is
placed in the wrong location in the config. These errors were not
caught by the old validation.
The failure is always reported, but startup only fails when both the old
and the new validation fail. This lets users fix their configs during the
6.0 release; enforcement will start in 7.0.
Example:
```yaml
auth_service:
  data_dir: "/foo" # this field must live under "teleport:", not "auth_service:"
```
Output:
```
$ teleport start -c teleport-invalid.yaml
ERRO "Teleport configuration is invalid: yaml: unmarshal errors:\n line 6: field data_dir not found in type config.Auth." config/fileconf.go:303
ERRO This error will be enforced in the next Teleport release. config/fileconf.go:304
[AUTH] Auth service 5.0.0-dev:v4.4.0-alpha.1-262-g307040886-dirty is starting on 0.0.0.0:3025.
... continues startup ...
```
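A minimal sketch of the warning-mode pattern using `gopkg.in/yaml.v2`, where `UnmarshalStrict` rejects unknown fields; the config types here are placeholders for the real ones:
```go
package config

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

// FileConfig is a placeholder for the real teleport.yaml config type.
type FileConfig struct {
	Teleport struct {
		DataDir string `yaml:"data_dir"`
	} `yaml:"teleport"`
}

func parseConfig(data []byte) (*FileConfig, error) {
	var fc FileConfig
	// Strict pass: fails on misplaced or unknown keys. In warning mode
	// the error is only reported, and we fall back to the lenient pass.
	if strictErr := yaml.UnmarshalStrict(data, &fc); strictErr != nil {
		fmt.Printf("Teleport configuration is invalid: %v\n", strictErr)
		fc = FileConfig{}
		// Lenient pass: startup only fails if this one fails too.
		if err := yaml.Unmarshal(data, &fc); err != nil {
			return nil, err
		}
	}
	return &fc, nil
}
```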
* Remove newlines from YAML error
The HTTP request context is canceled when the client disconnects. Using
this context in the session recorder prevents it from uploading the
session once it finishes.
Use the server context instead, to prevent lost recordings.
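A schematic of the fix, with hypothetical names (`Server`, `recordSession`); the point is only which context the recording goroutine inherits:
```go
package session

import "context"

// Server is a hypothetical stand-in: closeCtx lives for the whole
// process and is only canceled on shutdown, not when one client
// disconnects.
type Server struct {
	closeCtx context.Context
}

func (s *Server) handleSession(reqCtx context.Context) {
	// BAD: reqCtx is canceled as soon as the client disconnects,
	// which aborts the recording upload mid-flight:
	//   go recordSession(reqCtx)

	// GOOD: the server context outlives the request, so the finished
	// session is still uploaded after the client goes away.
	go recordSession(s.closeCtx)
}

func recordSession(ctx context.Context) {
	// Upload the recording; ctx controls cancellation.
	_ = ctx
}
```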
* Use "5.0" as string instead of integer
Otherwise, it won't find the tag as it will look for tag 5, instead of 5.0
* Update values for teleport-auto-trustedcluster and teleport-daemonset
Co-authored-by: Gus Luxton <gus@gravitational.com>
Co-authored-by: Andrew Lytvynov <andrew@goteleport.com>
* RFD 12: Teleport versioning
This is a new versioning scheme for teleport releases.
It's similar to the current scheme, and keeps similar compatibility
guarantees.
The new scheme aims to be more intuitive (semver-like), implicitly
communicating to users what the semantics of different version bumps are.
* Add logger attributes so the logger can be propagated from tests, making it possible to identify which test emitted a log line
* Add test case for Server's DeepCopy.
* Update test to using the testing package directly. Update dependency after upstream PR.
* kube: emit audit events using process context
Using the request context can prevent audit events from being emitted:
when the client disconnects, the request context is closed.
We shouldn't be losing audit events like that.
Also, log all response errors from the exec handler.
* kube: cleanup forwarder code
Rename a few config fields to be more descriptive.
Avoid embedding unless necessary, to keep the package API clean.
* kube: cache only user certificates, not the entire session
The expensive part that we need to cache is the client certificate.
Making a new one requires a round-trip to the auth server, plus entropy
for crypto operations.
The rest of clusterSession contains request-specific state, and only
adds problems if cached.
For example: clusterSession stores a reference to a remote teleport
cluster (if needed); caching requires extra logic to invalidate the
session when that cluster disappears (or its tunnels drop out). The same
problem occurs with kubernetes_service tunnels.
Instead, the forwarder now picks a new target for each request from the
same user, providing a kind of "load-balancing".
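A simplified sketch of the narrower cache, with hypothetical types (`certCache`, `userCert`); the real code layers TTL handling and size limits on top:
```go
package forwarder

import (
	"sync"
	"time"
)

// userCert is a hypothetical stand-in for the issued client certificate.
type userCert struct {
	pem     []byte
	expires time.Time
}

// certCache caches only the expensive artifact (the user certificate),
// not the whole clusterSession with its request-specific state.
type certCache struct {
	mu    sync.Mutex
	certs map[string]userCert // keyed by username
}

func (c *certCache) get(user string) (userCert, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	crt, ok := c.certs[user]
	if !ok || time.Now().After(crt.expires) {
		// Miss or expired: caller re-issues via the auth server.
		return userCert{}, false
	}
	return crt, true
}
```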
* Init session uploader in kubernetes service
It's started in all other services that upload sessions (app/proxy/ssh),
but was missing here. Because of this, the session storage directory for
async uploads wasn't created on disk, causing interactive sessions to
fail.
* Update logrus package to fix data races
* Introduce a logger that logs messages through the test context, so they are only output when a test fails, for improved troubleshooting.
* Revert introduction of the test logger - simply leave the logger configured at debug level, outputting to stderr during tests.
* Run integration tests for e as well
* Use make with a cap and append, to copy only the relevant roles (see the sketch below).
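A minimal illustration of the allocation pattern; `Role` and the `Relevant` predicate are placeholders, not the real types:
```go
package roles

// Role is a placeholder for the real role type.
type Role struct {
	Name     string
	Relevant bool
}

// filterRoles allocates once with capacity for the worst case, then
// appends only the matching roles; len(out) tracks actual matches.
func filterRoles(in []Role) []Role {
	out := make([]Role, 0, len(in))
	for _, r := range in {
		if r.Relevant {
			out = append(out, r)
		}
	}
	return out
}
```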
* Address review comments
* Update the integration test suite to use a test-local logger that only outputs logs if a specific test has failed - no logs from other test cases will be output.
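A sketch of the buffering pattern such a test-local logger can use; the names are illustrative and the real suite uses its own logging library:
```go
package testutils

import (
	"bytes"
	"log"
	"testing"
)

// newTestLogger buffers all log output for one test and dumps it via
// t.Log only when that test fails, so passing tests stay silent and a
// failing test's logs aren't interleaved with output from other tests.
func newTestLogger(t *testing.T) *log.Logger {
	buf := new(bytes.Buffer)
	logger := log.New(buf, t.Name()+": ", log.LstdFlags)
	t.Cleanup(func() {
		if t.Failed() {
			t.Log(buf.String())
		}
	})
	return logger
}
```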
* Revert changes to InitLoggerForTests API
* Create a new logger instance when applying defaults or merging with file service configuration
* Introduce a local logger interface to make the file configuration merge testable.
* Fix kube integration tests with respect to logging
* Move goroutine profile dump into a separate func to handle parameters consistently for all invocations
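One way such a helper can look, using the standard runtime/pprof API; the writer and debug level are exactly the parameters being made consistent:
```go
package debugutils

import (
	"io"
	"runtime/pprof"
)

// dumpGoroutineProfile writes the goroutine profile of the current
// process to w. debug=2 prints full stack traces in panic-style text,
// which is the most useful form for post-mortem analysis.
func dumpGoroutineProfile(w io.Writer) error {
	return pprof.Lookup("goroutine").WriteTo(w, 2)
}
```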