* Add stories for longer resource and cluster names
* Extract `PickerContainer`, improve styling of pickers and input
* Extract `FilterButton` component to avoid repeating the same code
* Add a message about excluded clusters
* Show a hint message when input is empty
* Show cluster filters only when there is more than one cluster
* Make search bar input responsive
* Review fixes
* Add a story for no results state
* Fix missing margin when input wraps
* Add license header
* Render `NoResultsItem` and `TypeToSearchItem` as extra items above regular items
* Use `calc` to calculate padding
* Fix comment
* Show TypeToSearchItem only after filter actions attempt finishes
* Run filter search synchronously
---------
Co-authored-by: Rafał Cieślak <rafal.cieslak@goteleport.com>
* client-side upgrade window export
adds client-side logic for exporting maintenance windows
for external updaters. export behavior is enabled via
env var (`TELEPORT_EXT_UPGRADER=kube|unit`).
* print raw version
* update e-ref
* dronegen: Sort workflow inputs for stable output
Sort the GitHub Actions inputs when generating the `gh-trigger-workflow`
command line so that it does not randomly change order, as happens when
iterating a map directly.
* dronegen: Have darwin pipelines call out to GitHub Actions
Update the darwin pipelines to run workflows on GitHub Actions instead
of locally on drone builders. This replaces four pipelines with a single
GitHub actions workflow as the one workflow builds the tarballs, Mac
packages and Mac disk images.
We continue to drive the push build from drone until we work out how
secrets are safely managed in the Teleport OSS repo.
* drone: Regenerate .drone.yml for Mac pipeline changes
To regenerate the `.drone.yml` file, three pipelines were first manually
removed:
- build-darwin-amd64-pkg
- build-darwin-amd64-pkg-tsh
- build-darwin-amd64-connect
Then `make dronegen` was run to update the pipelines:
- push-build-darwin-amd64
- build-darwin-amd64
* Fix headless authentication watcher race condition on initial backend check
* Fix rare race condition in headless authn watcher test using sync.Once
* Customize time between put events to avoid unwanted stale checks.
* Ensure the Okta service can connect through the reverse tunnel.
A few additional spots were not updated when enabling tunneling for the new
enterprise Okta service. Those spots are:
* `auth.DefaultDNSNamesForRole` needed to be updated to ensure that wildcard
certs for the API domain are generated.
* `reversetunnel` updates to ensure the `OktaTunnel` is handled in a similar
fashion to the `AppTunnel`.
* `process.getAdditionalPrincipals` needed to be updated to account for the
`HostUUID` as part of the principals supported for certificates.
With these, the Okta service is able to handle connections over the reverse
tunnel properly.
* Add comment to getConn switch statement.
* Support spellchecking in docs content
In gravitational/docs#261, we will add a script that checks the spelling
of each version of the docs. This change edits one version of the docs
content to support this, including:
- A cspell configuration file
- A new step in the GitHub Actions in the "Lint (Docs)" workflow that
runs the spellcheck script we will add in `gravitational/docs`
- Fix misspellings so this passes the lint job. The misspellings are in a
file that we generated automatically, but there are few enough of
them, and we haven't merged the auto-generation script yet, that I
think it makes sense to fix them in the generated file for now.
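A cspell configuration file like the one mentioned above typically looks like this. The fields are real cspell options, but the specific words and paths here are illustrative, not the actual contents of the repo's config:

```json
{
  "version": "0.2",
  "language": "en",
  "words": ["goteleport", "kubeconfig", "tsh"],
  "ignorePaths": ["node_modules/**"]
}
```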
* Respond to PR feedback
- Remove misspellings from the ignore list
- Sort the ignore list (and format it via prettier)
* Use the new yarn spellcheck command
* Spelling fixes
* spell fixes and add words to cspell.json
---------
Co-authored-by: Steven Martin <steven@goteleport.com>
* Consistency for role impersonation expiry between normal join & delegated joining bots
* Add testing for certificate expiry configuration
* Add another test case
* Fixes to metrics docs
Based on my testing, setting Teleport in debug mode is not required to expose the metrics
~~~
# cat /etc/systemd/system/teleport.service
ExecStart=/usr/local/bin/teleport start --config=/etc/teleport.yaml --diag-addr=http://172.31.36.239:3434 --pid-file=/run/teleport/teleport.pid
# cat /etc/teleport.yaml
teleport:
log:
severity: INFO
# curl http://172.31.36.239:3434/metrics | more
# HELP audit_failed_disk_monitoring Number of times disk monitoring failed.
# TYPE audit_failed_disk_monitoring counter
audit_failed_disk_monitoring 0
# HELP audit_failed_emit_events Number of times emitting audit event failed.
# TYPE audit_failed_emit_events counter
audit_failed_emit_events 0
# HELP audit_percentage_disk_space_used Percentage disk space used.
# TYPE audit_percentage_disk_space_used gauge
audit_percentage_disk_space_used 100
# HELP audit_server_open_files Number of open audit files
# TYPE audit_server_open_files gauge
audit_server_open_files 0
...
~~~
Also, curling the diag endpoint without the `/metrics` path returns `404 page not found`, which is a confusing way to validate that the endpoint is working
~~~
# curl http://172.31.36.239:3434/
404 page not found
~~~
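Once the diagnostics endpoint is confirmed working, it can be scraped by Prometheus. A minimal scrape config sketch, assuming the same address used in the curl example above (job name and target are illustrative):

```yaml
scrape_configs:
  - job_name: teleport
    metrics_path: /metrics
    static_configs:
      - targets: ["172.31.36.239:3434"]
```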
* Docs: add flag var and extra detail for debug (#24242)
* add flag var and extra detail for debug
* Update docs/pages/includes/diagnostics/diag-addr-prereqs-tabs.mdx
Co-authored-by: Zac Bergquist <zac.bergquist@goteleport.com>
---------
Co-authored-by: Zac Bergquist <zac.bergquist@goteleport.com>
* Update diag-addr-prereqs-tabs.mdx
---------
Co-authored-by: Alex Fornuto <alex.fornuto@goteleport.com>
Co-authored-by: Zac Bergquist <zac.bergquist@goteleport.com>
* refactor SFTP backend to use upstream dep, not our fork
This change also greatly reduces the number of SFTP audit logs.
Now SFTP events are only sent when files are opened or modified
in any way, instead of for *every* SFTP request.
* added to SFTP integration test
* fix error when handling setstat on dirs
* fix linter warning
* move file/dir permission constants to lib/defaults package
* Export desktop recordings to video
Add a new tsh command that will write Windows desktop recordings
to an AVI file for offline playback. Encoding is done client side
to avoid consuming server resources.
This uses the Motion JPEG codec (https://en.wikipedia.org/wiki/Motion_JPEG)
for its simplicity and ease of use. Something like ffmpeg would perform
better in nearly every aspect (run time, compression / file size, video
quality, etc), but that would complicate our build process and add extra
native dependencies. This implementation uses pure Go and works on any
platform where tsh runs today.
Also make sure `tsh recordings ls` shows Windows and SSH recordings.
* Untangle test imports
lib/events/eventstest is allowed to import lib/events
(it needs to in order to implement interfaces and use types)
This means lib/events can not import lib/events/eventstest,
which requires that we move some tests from package events
to package events_test
* tdp: break dependency on lib/srv
The lib/srv package is large and contains Unix-specific code.
Now that tsh needs to understand the TDP protocol, we need to
avoid importing lib/srv so that tsh can still build on Windows.
* Delete teleterm's ptyHost/v1, added by mistake
* Add package name to protos conforming to PACKAGE_VERSION_SUFFIX
* use go run in buf-connect-go.gen.yaml directly
* Run protogen in place
* Run the buf-go generation off of go run
This also adds protoc-gen-go-grpc to go.mod
* Prevent races in proxyClusterGuesser
Uses the same mechanism as api.client.proxy.clusterName within
lib.client.proxyClusterGuesser to prevent races on the cluster
name when connecting via ssh.
* Correctly set up transport service tls config
Using `setupTLSConfigClientCAsForCluster` was overwriting
`tls.Config.ClientAuth` on each client connection, which caused a fallback
to connecting via SSH.
Part of https://github.com/gravitational/teleport/pull/23546
This will add a fileTransferRequest to a session and allow environment variables to be passed from the web UI in order to validate a request that happens "outside" the moderated session (via HTTP request).
* Disable `build-macos` and `build-windows` on PR
This commit removes the `build-macos` and `build-windows` from the PR flow, instead delegating to the bypass job.
These jobs still run at the merge queue point.
This of course means that failures in these two jobs may not be known until the merge queue.
There is an unquestionable disadvantage in not discovering those issues until that point, but this change is being recommended because:
* Currently, macOS builds are 31% of our Teleport Actions spend (~$3,500 / week)
* Windows builds are also significant at 13% (~$1,400 / week)
* There have been relatively few failures of these jobs (without other jobs also failing)
Although merge queue verification is not ideal because it's later in the process, it is considered the most critical in ensuring that `master` remains stable.
* Make sure all bypass jobs run on `ubuntu-latest`
In a couple of cases this allows the jobs to run on a cheaper instance.
* docs: use teleport systemd include for start
* docs: use systemctl start include
* patch broken include
* copy edits
---------
Co-authored-by: alexfornuto <alex.fornuto@goteleport.com>
* docs(database-access): add sql server as supported in rds proxy
* Update docs/pages/database-access/guides/rds-proxy.mdx
Co-authored-by: Alex Fornuto <alex.fornuto@goteleport.com>
---------
Co-authored-by: Alex Fornuto <alex.fornuto@goteleport.com>
The Core Concepts page uses "Teleport Service" in uppercase. While I
think it is appropriate to capitalize "Service" when naming specific
Teleport architectural components, e.g., "Database Service",
"Application Service", etc., I'm not sure we want to imply that
"Teleport Service" is a distinct product by making it a proper noun.
The docs tend to use "Teleport service", where "service" is a general
computing term. We don't lose any meaning by using "Teleport service"
instead of "Teleport Service", and don't risk suggesting that Teleport
services have more in common than they really do.