Commit graph

11 commits

Author SHA1 Message Date
Jakub Nyckowski 0ee91f6c37
Enable GCI linter (#17894) 2022-10-28 20:20:28 +00:00
Alan Parra a75fcc21d8
Update golangci-lint to 1.49.0 (#16507)
Update metalinter, fix a few lint warnings and replace deprecated linters.

`deadcode`, `structcheck` and `varcheck` are abandoned and now replaced by [`unused`][1].

Since Go 1.19, `go fmt` reformats godocs according to https://go.dev/doc/comment. I've done a bulk-reformatting of the codebase to keep the linter happy. Backporting is mostly harmless (the exception being `lib/services/role_test.go`, which for some reason breaks the _old_ linter using the new format).

[1]: https://golangci-lint.run/usage/linters/
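
For reference, a minimal sketch (hypothetical package) of the doc comment format gofmt now enforces:

```go
// Package example shows the Go 1.19 doc comment conventions
// that gofmt now enforces (see https://go.dev/doc/comment).
//
// # Headings
//
// A line starting with # becomes a heading, and list items are
// reindented to a canonical form:
//   - first item
//   - second item
package example
```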

* Bump golangci-lint version
* Replace abandoned linters
* Fix bodyclose on lib/auth/github.go
* Fix bodyclose on lib/kube/proxy/streamproto/proto_test.go
* Fix bodyclose on lib/srv/alpnproxy/proxy_test.go
* Fix bodyclose on lib/web/conn_upgrade_test.go
* Silence staticcheck on lib/kube/proxy/forwarder_test.go
* Silence staticcheck on lib/utils/certs_test.go
* Address BuildNameToCertificate deprecation warnings
* Run `go fmt ./...`
* Run `go fmt ./...` on api/
* Ignore formatting in role_test.go
* Remove redundant initializers in lib/srv/uacc/
* Update e/
2022-09-19 22:38:59 +00:00
Roman Tkachenko 29e46a2a6a
buddy: Fix incorrect use of loop variables (#16306)
* Fix incorrect use of loop variables

This commit fixes a few occurrences of loop variables being
incorrectly captured by goroutines or (most frequently)
parallel tests. To fix the issues, we create a local copy of the range
variable before starting the parallel test (or goroutine), as suggested
in the documentation of the `testing` package:

https://pkg.go.dev/testing#hdr-Subtests_and_Sub_benchmarks

Issues were found using the `loopvarcapture` linter.
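
A minimal sketch of the pattern applied throughout (hypothetical test; names are illustrative):

```go
package example

import "testing"

func TestServers(t *testing.T) {
	servers := []string{"alpha", "beta", "gamma"}
	for _, srv := range servers {
		srv := srv // local copy so the parallel subtest doesn't capture the shared loop variable
		t.Run(srv, func(t *testing.T) {
			t.Parallel()
			if srv == "" {
				t.Fatal("empty server name")
			}
		})
	}
}
```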

Signed-off-by: Roman Tkachenko <roman@goteleport.com>

* fix TestTraceProvider/spans_exported_with_gRPC+TLS

* run TestSSH serially

* operator: Conserve 'created_by' data in user spec

Signed-off-by: Roman Tkachenko <roman@goteleport.com>
Co-authored-by: Renato Costa <renato@cockroachlabs.com>
Co-authored-by: Tim Ross <tim.ross@goteleport.com>
Co-authored-by: Hugo Hervieux <hugo.hervieux@goteleport.com>
2022-09-14 14:31:56 +00:00
rosstimothy c469a34994
Move prometheus collectors from utils to metrics (#15288) 2022-08-09 17:35:19 +00:00
rosstimothy 0c48cefef2
Add dynamodb metrics (#14518)
* Add dynamodb metrics

Adds prometheus metrics for DynamoDB API requests, failed requests,
and request latency. Each metric is labeled with the table type,
either backend or events, and the operation being requested.
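
A hedged sketch of what such metrics might look like; the metric and label names here are hypothetical, not the ones in the commit:

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

var (
	dynamoRequests = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "dynamo_requests_total",
			Help: "Total DynamoDB API requests.",
		},
		[]string{"type", "operation"}, // type is "backend" or "events"
	)
	dynamoRequestsFailed = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "dynamo_requests_failed_total",
			Help: "Total failed DynamoDB API requests.",
		},
		[]string{"type", "operation"},
	)
	dynamoRequestLatency = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "dynamo_request_seconds",
			Help: "Latency of DynamoDB API requests.",
		},
		[]string{"type", "operation"},
	)
)

func init() {
	prometheus.MustRegister(dynamoRequests, dynamoRequestsFailed, dynamoRequestLatency)
}
```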
2022-07-21 13:31:50 -04:00
rosstimothy dfb9daac61
Allow traces to be exported to files (#14332)
* Allow traces to be exported to files

Adds support for exporting traces to a file. While not recommended
for production use, this lets folks collect traces without having
any telemetry infrastructure in place to store them. To do so, they
simply point their `tracing_service` at a directory, as seen in the
following config snippet.

```yaml
tracing_service:
   exporter_url: "file:///var/lib/teleport/traces"
```

Each file contains one JSON-encoded OTLP trace per line. Files
written by the exporter all follow the naming convention
`<unix_timestamp>-<random_number>.trace`.

To prevent a trace file from growing without bound, there is a
default size limit of 100MB, after which the file is rotated and a
new one started. Users can adjust the limit by adding a query
parameter to the exporter URL, e.g. `?limit=12345`.
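
For example, assuming the limit is expressed in bytes, rotating at roughly 10MB might look like:

```yaml
tracing_service:
   exporter_url: "file:///var/lib/teleport/traces?limit=10485760"
```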

Part of #12241
2022-07-21 13:25:33 +00:00
rosstimothy 30a0400c36
Add s3 metrics (#14542)
* Add s3 metrics

Adds prometheus counters for S3 API requests and a histogram
for request latencies. Each metric is labeled with the
operation being requested.
2022-07-19 21:01:17 +00:00
rosstimothy ab8ffb244a
Fix tracing exporter endpoints (#14003)
* Fix tracing exporter endpoints

Ensures that the endpoints provided to the trace clients
are correct even if the configuration doesn't include
a scheme. Prior to this, the endpoint handling always attempted
to remove a scheme prefix, even when one wasn't provided.
Doing so altered the hostname, which caused
some unknown-host issues.
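
A minimal sketch of scheme-tolerant endpoint handling (hypothetical helper; the actual fix may differ):

```go
package tracing

import (
	"net/url"
	"strings"
)

// traceEndpoint returns the host portion of a configured exporter URL.
// If the configured value has no scheme, one is assumed before parsing
// so that url.Parse doesn't mangle the hostname.
func traceEndpoint(raw string) (string, error) {
	if !strings.Contains(raw, "://") {
		raw = "grpc://" + raw
	}
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	return u.Host, nil
}
```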

This also removes the process and process-owner detectors
from the tracing resource. A process running within a container
might not have a username mapped to its uid, which was preventing
tracing from being initialized.
2022-07-05 16:55:53 +00:00
rosstimothy 652e089d3d
Manually instrument backend.Backend (#13268)
Adds a `trace.Tracer` to the `backend.Reporter`
wrapper so that all `backend.Backend` implementations
can be traced. Further instrumentation of each specific
backend will be added at a later date to see how long
each SQL query, or call to dynamo/etcd, takes within
each backend operation.
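
A rough sketch of the approach (types and names simplified; not the actual Teleport code):

```go
package backend

import (
	"context"

	"go.opentelemetry.io/otel/trace"
)

// Item and Backend stand in for the real Teleport types.
type Item struct{ Key, Value []byte }

type Backend interface {
	Get(ctx context.Context, key []byte) (*Item, error)
}

// Reporter wraps a Backend and records a span around each operation.
type Reporter struct {
	Backend
	tracer trace.Tracer
}

func (r *Reporter) Get(ctx context.Context, key []byte) (*Item, error) {
	ctx, span := r.tracer.Start(ctx, "backend/Get")
	defer span.End()
	return r.Backend.Get(ctx, key)
}
```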

#12241
2022-06-11 16:09:42 +00:00
rosstimothy c3736c7c70
Span forwarding (#12980)
* Span forwarding

Modifies the auth gRPC server to implement the OTLP Collector
RegisterTraceServiceServer API (https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/collector/trace/v1/trace_service.proto).
This allows the auth server to receive spans from other services
like `tsh`, `tctl`, and `tbot`. Any spans received by the auth
server are forwarded to the exporter configured via the
`tracing_service` if it is enabled; if the `tracing_service` is
disabled, all received spans are dropped. By
forwarding spans to the auth server, `tsh` doesn't need to
be provided with any telemetry backend information
to have its spans exported.

Adds a new `--trace` flag to `tsh` to enable collecting and
forwarding spans to the auth server. When set, the tracing
provider is initialized with a sampling rate of 1.0 to force
all spans to be recorded. Teleport respects the sampling rate
from remote spans, which means that when `--trace` is set, all
spans from `tsh` and any downstream Teleport services will be
recorded and exported regardless of the sampling rate that each
Teleport service is configured with.
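
For example (hypothetical host), collecting and forwarding spans for a single session might look like:

```
tsh --trace ssh alice@node.example.com
```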
2022-06-02 09:28:30 -04:00
rosstimothy 5b4a18bf24
Add tracing service and configuration (#12699)
* Add tracing service and configuration

Provides a new tracing configuration block, which can be
used to configure if and how spans are exported to a
telemetry backend. In the example below, the tracing
service is enabled and will export spans to
`collector.example.com:4317` via gRPC with mTLS enabled.

```yaml
tracing_service:
  enabled: yes
  exporter_url: collector.example.com:4317
  sampling_rate_per_million: 1000000
  ca_certs:
    - /certs/rootCA.pem
  keypairs:
    - key_file:  /certs/example.com-client-key.pem
      cert_file: /certs/example.com-client.pem
```

This configuration is consumed by the `TeleportProcess`
and passed to `tracing.NewTraceProvider`, which sets up the OpenTelemetry
Exporter, TracerProvider, Propagator and Sampler. In order for spans to
be exported, the `tracing_service` must be enabled **and** have a
`sampling_rate_per_million` value > 0.
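
A hedged sketch of how `sampling_rate_per_million` could map to an OpenTelemetry sampler (hypothetical helper; the actual code may differ):

```go
package tracing

import sdktrace "go.opentelemetry.io/otel/sdk/trace"

func sampler(ratePerMillion int) sdktrace.Sampler {
	if ratePerMillion <= 0 {
		return sdktrace.NeverSample()
	}
	// ParentBased honors the sampling decision of remote parent spans,
	// which is what lets downstream services respect tsh's --trace flag.
	return sdktrace.ParentBased(sdktrace.TraceIDRatioBased(float64(ratePerMillion) / 1_000_000))
}
```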
2022-05-26 22:55:47 +00:00