teleport/constants.go


/*
Copyright 2018-2019 Gravitational, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package teleport
import (
"fmt"
"strings"
"time"
"github.com/coreos/go-semver/semver"
)
// WebAPIVersion is the current web API version
const WebAPIVersion = "v1"
const (
// SSHAuthSock is the environment variable pointing to the
// Unix socket the SSH agent is running on.
SSHAuthSock = "SSH_AUTH_SOCK"
// SSHAgentPID is the environment variable pointing to the agent
// process ID
SSHAgentPID = "SSH_AGENT_PID"
// SSHTeleportUser is the current Teleport user that is logged in.
SSHTeleportUser = "SSH_TELEPORT_USER"
// SSHSessionWebProxyAddr is the address of the web proxy.
SSHSessionWebProxyAddr = "SSH_SESSION_WEBPROXY_ADDR"
// SSHTeleportClusterName is the name of the cluster this node belongs to.
SSHTeleportClusterName = "SSH_TELEPORT_CLUSTER_NAME"
// SSHTeleportHostUUID is the UUID of the host.
SSHTeleportHostUUID = "SSH_TELEPORT_HOST_UUID"
// SSHSessionID is the UUID of the current session.
SSHSessionID = "SSH_SESSION_ID"
// EnableNonInteractiveSessionRecording can be used to record non-interactive SSH sessions.
EnableNonInteractiveSessionRecording = "SSH_TELEPORT_RECORD_NON_INTERACTIVE"
)
const (
// HTTPNextProtoTLS is the NPN/ALPN protocol negotiated during
// HTTP/1.1's TLS setup.
// https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids
HTTPNextProtoTLS = "http/1.1"
)
const (
// TOTPValidityPeriod is the number of seconds a TOTP token is valid.
TOTPValidityPeriod uint = 30
// TOTPSkew is the number of periods added before and after the current
// period to the validity window.
TOTPSkew uint = 1
)
const (
// ComponentMemory is a memory backend
ComponentMemory = "memory"
// ComponentAuthority is a TLS and an SSH certificate authority
ComponentAuthority = "ca"
// ComponentProcess is a main control process
ComponentProcess = "proc"
// ComponentServer is a server subcomponent of some services
ComponentServer = "server"
// ComponentACME is the ACME protocol controller
ComponentACME = "acme"
// ComponentReverseTunnelServer is the reverse tunnel server that,
// together with the agent, establishes a bi-directional SSH reverse
// tunnel to bypass firewall restrictions
ComponentReverseTunnelServer = "proxy:server"
// ComponentReverseTunnelAgent is the reverse tunnel agent that,
// together with the server, establishes a bi-directional SSH reverse
// tunnel to bypass firewall restrictions
ComponentReverseTunnelAgent = "proxy:agent"
// ComponentLabel is a component label name used in reporting
ComponentLabel = "component"
// ComponentProxyKube is a Kubernetes proxy
ComponentProxyKube = "proxy:kube"
// ComponentAuth is the cluster CA node (auth server API)
ComponentAuth = "auth"
// ComponentGRPC is the gRPC server
ComponentGRPC = "grpc"
// ComponentMigrate is responsible for data migrations
ComponentMigrate = "migrate"
// ComponentNode is an SSH node (SSH server serving requests)
ComponentNode = "node"
// ComponentForwardingNode is a forwarding SSH node (SSH server forwarding requests)
ComponentForwardingNode = "node:forward"
// ComponentProxy is SSH proxy (SSH server forwarding connections)
ComponentProxy = "proxy"
// ComponentProxyPeer is the proxy peering component of the proxy service
ComponentProxyPeer = "proxy:peer"
// ComponentApp is the application proxy service.
ComponentApp = "app:service"
// ComponentDatabase is the database proxy service.
ComponentDatabase = "db:service"
// ComponentDiscovery is the Discovery service.
ComponentDiscovery = "discovery:service"
// ComponentAppProxy is the application handler within the web proxy service.
ComponentAppProxy = "app:web"
// ComponentWebProxy is the web handler within the web proxy service.
ComponentWebProxy = "web"
// ComponentDiagnostic is a diagnostic service
ComponentDiagnostic = "diag"
// ComponentClient is a client
ComponentClient = "client"
// ComponentTunClient is a tunnel client
ComponentTunClient = "client:tunnel"
// ComponentCache is a cache component
ComponentCache = "cache"
// ComponentBackend is a backend component
ComponentBackend = "backend"
// ComponentSubsystemProxy is the proxy subsystem.
ComponentSubsystemProxy = "subsystem:proxy"
// ComponentSubsystemSFTP is the SFTP subsystem.
ComponentSubsystemSFTP = "subsystem:sftp"
// ComponentLocalTerm is a terminal on a regular SSH node.
ComponentLocalTerm = "term:local"
// ComponentRemoteTerm is a terminal on a forwarding SSH node.
ComponentRemoteTerm = "term:remote"
// ComponentRemoteSubsystem is subsystem on a forwarding SSH node.
ComponentRemoteSubsystem = "subsystem:remote"
// ComponentAuditLog is the audit log component
ComponentAuditLog = "audit"
// ComponentKeyAgent is an agent that has loaded the sessions keys and
// certificates for a user connected to a proxy.
ComponentKeyAgent = "keyagent"
// ComponentKeyStore is all sessions keys and certificates a user has on disk
// for all proxies.
ComponentKeyStore = "keystore"
// ComponentConnectProxy is the HTTP CONNECT proxy used to tunnel connections.
ComponentConnectProxy = "http:proxy"
// ComponentSOCKS is a SOCKS5 proxy.
ComponentSOCKS = "socks"
// ComponentKeyGen is the public/private keypair generator.
ComponentKeyGen = "keygen"
// ComponentFirestore represents Firestore clients
ComponentFirestore = "firestore"
// ComponentSession is an active session.
ComponentSession = "session"
// ComponentDynamoDB represents DynamoDB clients
ComponentDynamoDB = "dynamodb"
// ComponentPAM is the pluggable authentication module (PAM)
ComponentPAM = "pam"
// ComponentUpload is a session recording upload server
ComponentUpload = "upload"
// ComponentWeb is a web server
ComponentWeb = "web"
// ComponentUnifiedResource is a cache of resources meant to be listed and displayed
// together in the web UI
ComponentUnifiedResource = "unified_resource"
// ComponentWebsocket is the websocket server that the web client connects to.
ComponentWebsocket = "websocket"
// ComponentRBAC is role-based access control.
ComponentRBAC = "rbac"
// ComponentKeepAlive is keep-alive messages sent from clients to servers
// and vice versa.
ComponentKeepAlive = "keepalive"
// ComponentTeleport is the "teleport" binary.
ComponentTeleport = "teleport"
// ComponentTSH is the "tsh" binary.
ComponentTSH = "tsh"
// ComponentTBot is the "tbot" binary
ComponentTBot = "tbot"
// ComponentKubeClient is the Kubernetes client.
ComponentKubeClient = "client:kube"
// ComponentBuffer is the in-memory event circular buffer
// used to broadcast events to subscribers.
ComponentBuffer = "buffer"
// ComponentBPF is the eBPF package.
ComponentBPF = "bpf"
// ComponentRestrictedSession is the restriction of user access to kernel objects
ComponentRestrictedSession = "restrictedsess"
// ComponentCgroup is the cgroup package.
ComponentCgroup = "cgroups"
// ComponentKube is a Kubernetes API gateway.
ComponentKube = "kubernetes"
// ComponentSAML is a SAML service provider.
ComponentSAML = "saml"
// ComponentMetrics is a metrics server
ComponentMetrics = "metrics"
// ComponentWindowsDesktop is a Windows desktop access server.
ComponentWindowsDesktop = "windows_desktop"
// ComponentTracing is a tracing exporter
ComponentTracing = "tracing"
// ComponentInstance is an abstract component common to all services.
ComponentInstance = "instance"
// ComponentVersionControl is the component common to all version control operations.
ComponentVersionControl = "version-control"
// ComponentUsageReporting is the component responsible for reporting usage metrics.
ComponentUsageReporting = "usage-reporting"
// ComponentAthena represents athena clients.
ComponentAthena = "athena"
// ComponentProxySecureGRPC represents secure gRPC server running on Proxy (used for Kube).
ComponentProxySecureGRPC = "proxy:secure-grpc"
// ComponentAssist represents Teleport Assist
ComponentAssist = "assist"
// VerboseLogsEnvVar forces all logs to be verbose (down to DEBUG level)
VerboseLogsEnvVar = "TELEPORT_DEBUG"
// IterationsEnvVar sets tests iterations to run
IterationsEnvVar = "ITERATIONS"
// DefaultTerminalWidth defines the default width of a server-side allocated
// pseudo TTY
DefaultTerminalWidth = 80
// DefaultTerminalHeight defines the default height of a server-side allocated
// pseudo TTY
DefaultTerminalHeight = 25
// SafeTerminalType is the fallback TTY type to use when $TERM
// is not defined
SafeTerminalType = "xterm"
// DataDirParameterName is the name of the data dir configuration parameter passed
// to all backends during initialization
DataDirParameterName = "data_dir"
// KeepAliveReqType is an SSH request type used to keep the connection alive. The client
// and the server keep pinging each other with it.
KeepAliveReqType = "keepalive@openssh.com"
// ClusterDetailsReqType is the name of a global request which returns cluster details like
// if the proxy is recording sessions or not and if FIPS is enabled.
ClusterDetailsReqType = "cluster-details@goteleport.com"
// JSON means JSON serialization format
JSON = "json"
// YAML means YAML serialization format
YAML = "yaml"
// Text means text serialization format
Text = "text"
// PTY is a raw PTY session capture format
PTY = "pty"
// Names is for formatting node names in plain text
Names = "names"
// LinuxAdminGID is the ID of the standard adm group on Linux
LinuxAdminGID = 4
// DirMaskSharedGroup is the mask for a directory accessible
// by the owner and group
DirMaskSharedGroup = 0o770
// FileMaskOwnerOnly is the file mask that allows read/write access
// to the owner only
FileMaskOwnerOnly = 0o600
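// These masks are intended for use with the os package from consuming
// packages, e.g. (illustrative sketch, not part of the API):
//
//	os.MkdirAll(dir, teleport.DirMaskSharedGroup)
//	os.WriteFile(path, data, teleport.FileMaskOwnerOnly)
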
// On means mode is on
On = "on"
// Off means mode is off
Off = "off"
// GCSTestURI turns on GCS tests
GCSTestURI = "TEST_GCS_URI"
// AZBlobTestURI specifies the storage account URL to use for Azure Blob
// Storage tests.
AZBlobTestURI = "TEST_AZBLOB_URI"
// AWSRunTests turns on tests executed against AWS directly
AWSRunTests = "TEST_AWS"
// AWSRunDBTests turns on tests executed against AWS databases directly.
AWSRunDBTests = "TEST_AWS_DB"
// Region is the AWS region parameter
Region = "region"
// Endpoint is an optional host parameter for non-AWS S3
Endpoint = "endpoint"
// Insecure is an optional switch to use HTTP instead of HTTPS
Insecure = "insecure"
// DisableServerSideEncryption is an optional switch to opt out of SSE in case the provider does not support it
DisableServerSideEncryption = "disablesse"
// ACL is the canned ACL to send to S3
ACL = "acl"
// SSEKMSKey is an optional switch to use a KMS CMK for S3 SSE.
SSEKMSKey = "sse_kms_key"
// SchemeFile configures local disk-based file storage for audit events
SchemeFile = "file"
// SchemeStdout outputs audit log entries to stdout
SchemeStdout = "stdout"
// SchemeS3 is used for S3-like object storage
SchemeS3 = "s3"
// SchemeGCS is used for Google Cloud Storage
SchemeGCS = "gs"
// SchemeAZBlob is the Azure Blob Storage scheme, used as the scheme in the
// session storage URI to identify a storage account accessed over https.
SchemeAZBlob = "azblob"
// SchemeAZBlobHTTP is the Azure Blob Storage scheme, used as the scheme in the
// session storage URI to identify a storage account accessed over http.
SchemeAZBlobHTTP = "azblob-http"
// LogsDir is a log subdirectory for events and logs
LogsDir = "log"
// Syslog is a mode for syslog logging
Syslog = "syslog"
// HumanDateFormat is a human-readable date format
HumanDateFormat = "Jan _2 15:04 UTC"
// HumanDateFormatMilli is a human-readable date format with milliseconds
HumanDateFormatMilli = "Jan _2 15:04:05.000 UTC"
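// These layouts follow Go's reference-time convention; for example
// (illustrative):
//
//	time.Date(2023, time.January, 2, 15, 4, 0, 0, time.UTC).Format(HumanDateFormat)
//
// renders as "Jan  2 15:04 UTC" (the _2 token pads the day with a space).
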
// DebugLevel is a debug logging level name
DebugLevel = "debug"
// MinimumEtcdVersion is the minimum version of etcd supported by Teleport
MinimumEtcdVersion = "3.3.0"
)
const (
// These values are from https://openid.net/specs/openid-connect-core-1_0.html#AuthRequest
// OIDCPromptSelectAccount instructs the Authorization Server to
// prompt the End-User to select a user account.
OIDCPromptSelectAccount = "select_account"
// OIDCAccessTypeOnline indicates that OIDC flow should be performed
// with Authorization server and user connected online
OIDCAccessTypeOnline = "online"
)
// Component generates "component:subcomponent1:subcomponent2" strings used
// in debugging
func Component(components ...string) string {
return strings.Join(components, ":")
}
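// For example (illustrative): Component("proxy", "web") returns "proxy:web",
// which is the naming scheme typically used for subsystem loggers.
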
const (
// AuthorizedKeys are public keys that check against User CAs.
AuthorizedKeys = "authorized_keys"
// KnownHosts are public keys that check against Host CAs.
KnownHosts = "known_hosts"
)
const (
// CertExtensionPermitX11Forwarding allows X11 forwarding for certificate
CertExtensionPermitX11Forwarding = "permit-X11-forwarding"
// CertExtensionPermitAgentForwarding allows agent forwarding for certificate
CertExtensionPermitAgentForwarding = "permit-agent-forwarding"
// CertExtensionPermitPTY allows user to request PTY
CertExtensionPermitPTY = "permit-pty"
// CertExtensionPermitPortForwarding allows user to request port forwarding
CertExtensionPermitPortForwarding = "permit-port-forwarding"
// CertExtensionTeleportRoles is used to propagate teleport roles
CertExtensionTeleportRoles = "teleport-roles"
// CertExtensionTeleportRouteToCluster is used to encode
// the target cluster to route to in the certificate
CertExtensionTeleportRouteToCluster = "teleport-route-to-cluster"
// CertExtensionTeleportTraits is used to propagate traits about the user.
CertExtensionTeleportTraits = "teleport-traits"
// CertExtensionTeleportActiveRequests is used to track which privilege
// escalation requests were used to construct the certificate.
CertExtensionTeleportActiveRequests = "teleport-active-requests"
// CertExtensionMFAVerified is used to mark certificates issued after an MFA
// check.
CertExtensionMFAVerified = "mfa-verified"
// CertExtensionPreviousIdentityExpires is the extension that stores an RFC3339
// timestamp representing the expiry time of the identity/cert that this
// identity/cert was derived from. It is used to determine a session's hard
// deadline in cases where both require_session_mfa and disconnect_expired_cert
// are enabled. See https://github.com/gravitational/teleport/issues/18544.
CertExtensionPreviousIdentityExpires = "prev-identity-expires"
// CertExtensionLoginIP is used to embed the IP of the client that created
// the certificate.
CertExtensionLoginIP = "login-ip"
// CertExtensionImpersonator is set when one user has requested certificates
// for another user
CertExtensionImpersonator = "impersonator"
// CertExtensionDisallowReissue is set when a certificate should not be allowed
// to request future certificates.
CertExtensionDisallowReissue = "disallow-reissue"
// CertExtensionRenewable is a flag to indicate the certificate may be
// renewed.
CertExtensionRenewable = "renewable"
// CertExtensionGeneration counts the number of times a certificate has
// been renewed.
CertExtensionGeneration = "generation"
// CertExtensionAllowedResources lists the resources which this certificate
// should be allowed to access
CertExtensionAllowedResources = "teleport-allowed-resources"
// CertExtensionConnectionDiagnosticID contains the ID of the ConnectionDiagnostic.
// The Node/Agent will append connection traces to this diagnostic instance.
CertExtensionConnectionDiagnosticID = "teleport-connection-diagnostic-id"
// CertExtensionPrivateKeyPolicy is used to mark certificates with their supported
// private key policy.
CertExtensionPrivateKeyPolicy = "private-key-policy"
// CertExtensionDeviceID is the trusted device identifier.
CertExtensionDeviceID = "teleport-device-id"
// CertExtensionDeviceAssetTag is the device inventory identifier.
CertExtensionDeviceAssetTag = "teleport-device-asset-tag"
// CertExtensionDeviceCredentialID is the identifier for the credential used
// by the device to authenticate itself.
CertExtensionDeviceCredentialID = "teleport-device-credential-id"
// CertCriticalOptionSourceAddress is a critical option that defines IP addresses (in CIDR notation)
// from which this certificate is accepted for authentication.
// See: https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?annotate=HEAD.
CertCriticalOptionSourceAddress = "source-address"
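// Per the OpenSSH certkeys protocol, the option value is a comma-separated
// CIDR list; for example (illustrative), a certificate carrying
// source-address=10.0.0.0/8,192.168.0.0/16 is only accepted for
// authentication from clients connecting within those ranges.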
)
// Note: when adding new providers to this list, consider updating the help message for --provider flag
// for `tctl sso configure oidc` and `tctl sso configure saml` commands
// as well as docs at https://goteleport.com/docs/enterprise/sso/#provider-specific-workarounds
const (
// NetIQ is an identity provider.
NetIQ = "netiq"
// ADFS is Microsoft Active Directory Federation Services
ADFS = "adfs"
// Ping is the common backend for all Ping Identity-branded identity
// providers (including PingOne, PingFederate, etc.).
Ping = "ping"
// Okta should be used for Okta OIDC providers.
Okta = "okta"
// JumpCloud is an identity provider.
JumpCloud = "jumpcloud"
)
const (
// RemoteCommandSuccess is returned when a command has successfully executed.
RemoteCommandSuccess = 0
// RemoteCommandFailure is returned when a command has failed to execute and
// we don't have another status code for it.
RemoteCommandFailure = 255
// HomeDirNotFound is returned when the "teleport checkhomedir" command cannot
// find the user's home directory.
HomeDirNotFound = 254
)
// MaxEnvironmentFileLines is the maximum number of lines in an environment file.
const MaxEnvironmentFileLines = 1000
// MaxResourceSize is the maximum size (in bytes) of a serialized resource. This limit is
// typically only enforced against resources that are likely to arbitrarily grow (e.g. PluginData).
const MaxResourceSize = 1000000
// MaxHTTPRequestSize is the maximum accepted size (in bytes) of the body of
// a received HTTP request. This limit is meant to be used with utils.ReadAtMost
// to prevent resource exhaustion attacks.
const MaxHTTPRequestSize = 10 * 1024 * 1024
// MaxHTTPResponseSize is the maximum accepted size (in bytes) of the body of
// a received HTTP response. This limit is meant to be used with utils.ReadAtMost
// to prevent resource exhaustion attacks.
const MaxHTTPResponseSize = 10 * 1024 * 1024
const (
// CertificateFormatOldSSH is used to make Teleport interoperate with older
// versions of OpenSSH.
CertificateFormatOldSSH = "oldssh"
// CertificateFormatUnspecified is used to check if the format was specified
// or not.
CertificateFormatUnspecified = ""
)
const (
// TraitInternalPrefix is the role variable prefix that indicates it's for
// local accounts.
TraitInternalPrefix = "internal"
// TraitExternalPrefix is the role variable prefix that indicates the data comes from an external identity provider.
TraitExternalPrefix = "external"
// TraitTeams is the name of the role variable used to store team
// membership information.
TraitTeams = "github_teams"
// TraitJWT is the name of the trait containing JWT header for app access.
TraitJWT = "jwt"
// TraitInternalLoginsVariable is the variable used to store allowed
// logins for local accounts.
TraitInternalLoginsVariable = "{{internal.logins}}"
// TraitInternalWindowsLoginsVariable is the variable used to store
// allowed Windows Desktop logins for local accounts.
TraitInternalWindowsLoginsVariable = "{{internal.windows_logins}}"
// TraitInternalKubeGroupsVariable is the variable used to store allowed
// kubernetes groups for local accounts.
TraitInternalKubeGroupsVariable = "{{internal.kubernetes_groups}}"
// TraitInternalKubeUsersVariable is the variable used to store allowed
// kubernetes users for local accounts.
TraitInternalKubeUsersVariable = "{{internal.kubernetes_users}}"
// TraitInternalDBNamesVariable is the variable used to store allowed
// database names for local accounts.
TraitInternalDBNamesVariable = "{{internal.db_names}}"
// TraitInternalDBUsersVariable is the variable used to store allowed
// database users for local accounts.
TraitInternalDBUsersVariable = "{{internal.db_users}}"
// TraitInternalDBRolesVariable is the variable used to store allowed
// database roles for automatic database user provisioning.
TraitInternalDBRolesVariable = "{{internal.db_roles}}"
// TraitInternalAWSRoleARNs is the variable used to store allowed AWS
// role ARNs for local accounts.
TraitInternalAWSRoleARNs = "{{internal.aws_role_arns}}"
// TraitInternalAzureIdentities is the variable used to store allowed
// Azure identities for local accounts.
TraitInternalAzureIdentities = "{{internal.azure_identities}}"
// TraitInternalGCPServiceAccounts is the variable used to store allowed
// GCP service accounts for local accounts.
TraitInternalGCPServiceAccounts = "{{internal.gcp_service_accounts}}"
// TraitInternalJWTVariable is the variable used to store JWT token for
// app sessions.
TraitInternalJWTVariable = "{{internal.jwt}}"
)
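Role definitions reference these variables as templates such as `{{internal.logins}}`, which Teleport interpolates with the user's traits at login time. A simplified, illustrative sketch of that expansion (the real logic lives in Teleport's role-template code):

```go
package main

import (
	"fmt"
	"strings"
)

// expandTrait is a simplified expansion of a single "{{internal.xxx}}"
// template into the values stored under that trait name. Illustrative
// helper only; real template expansion handles prefixes, regexes, etc.
func expandTrait(template string, traits map[string][]string) []string {
	name := strings.TrimSuffix(strings.TrimPrefix(template, "{{internal."), "}}")
	return traits[name]
}

func main() {
	traits := map[string][]string{
		"logins":   {"root", "ubuntu"},
		"db_users": {"reader"},
	}
	fmt.Println(expandTrait("{{internal.logins}}", traits))
}
```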
// SCP is Secure Copy.
const SCP = "scp"
// AdminRoleName is the name of the default admin role for all local users if
// another role is not explicitly assigned.
const AdminRoleName = "admin"
const (
// PresetEditorRoleName is a name of a preset role that allows
// editing cluster configuration.
PresetEditorRoleName = "editor"
// PresetAccessRoleName is a name of a preset role that allows
// accessing cluster resources.
PresetAccessRoleName = "access"
// PresetAuditorRoleName is a name of a preset role that allows
// reading cluster events and playing back session records.
PresetAuditorRoleName = "auditor"
// PresetReviewerRoleName is a name of a preset role that allows
// for reviewing access requests.
PresetReviewerRoleName = "reviewer"
// PresetRequesterRoleName is a name of a preset role that allows
// for requesting access to resources.
PresetRequesterRoleName = "requester"
// PresetGroupAccessRoleName is a name of a preset role that allows
// access to all user groups.
PresetGroupAccessRoleName = "group-access"
// PresetDeviceAdminRoleName is the name of the "device-admin" role.
// The role is used to administer trusted devices.
PresetDeviceAdminRoleName = "device-admin"
// PresetDeviceEnrollRoleName is the name of the "device-enroll" role.
// The role is used to grant device enrollment powers to users.
PresetDeviceEnrollRoleName = "device-enroll"
// PresetRequireTrustedDeviceRoleName is the name of the
// "require-trusted-device" role.
// The role is used as a basis for requiring trusted device access to
// resources.
PresetRequireTrustedDeviceRoleName = "require-trusted-device"
// SystemAutomaticAccessApprovalRoleName names a preset role that may
// automatically approve any Role Access Request
SystemAutomaticAccessApprovalRoleName = "@teleport-access-approver"
// ConnectMyComputerRoleNamePrefix is the prefix used for roles prepared for individual users
// during the setup of Connect My Computer. The prefix is followed by the name of the cluster
// user. See [teleterm.connectmycomputer.RoleSetup].
ConnectMyComputerRoleNamePrefix = "connect-my-computer-"
)
// PresetRoles is the list of preset role names.
var PresetRoles = []string{PresetEditorRoleName, PresetAccessRoleName, PresetAuditorRoleName}
const (
// SystemAccessApproverUserName names a Teleport user that acts as
// an Access Request approver for access plugins
SystemAccessApproverUserName = "@teleport-access-approval-bot"
)
// MinClientVersion is the minimum client version required by the server.
var MinClientVersion string
func init() {
// Per https://github.com/gravitational/teleport/blob/master/rfd/0012-teleport-versioning.md,
// only one major version backwards is supported for clients.
ver := semver.New(Version)
MinClientVersion = fmt.Sprintf("%d.0.0", ver.Major-1)
}
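Under that rule, a server running 14.x accepts clients down to 13.0.0. A sketch of the same computation without the semver dependency, assuming a well-formed `major.minor.patch` version string:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minClientVersion reproduces the init logic above: clients one major
// version behind the server are still supported.
func minClientVersion(serverVersion string) string {
	major, _ := strconv.Atoi(strings.SplitN(serverVersion, ".", 2)[0])
	return fmt.Sprintf("%d.0.0", major-1)
}

func main() {
	fmt.Println(minClientVersion("14.3.2"))
}
```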
const (
// RemoteClusterStatusOffline indicates that cluster is considered as
// offline, since it has missed a series of heartbeats
RemoteClusterStatusOffline = "offline"
// RemoteClusterStatusOnline indicates that cluster is sending heartbeats
// at expected interval
RemoteClusterStatusOnline = "online"
)
const (
// SharedDirMode is a mode for a directory shared with group
SharedDirMode = 0o750
// PrivateDirMode is a mode for private directories
PrivateDirMode = 0o700
)
const (
// SessionEvent is sent by servers to clients when an audit event occurs on
// the session.
SessionEvent = "x-teleport-event"
// VersionRequest is sent by clients to server requesting the Teleport
// version they are running.
VersionRequest = "x-teleport-version"
// ForceTerminateRequest is an SSH request to forcefully terminate a session.
ForceTerminateRequest = "x-teleport-force-terminate"
// TerminalSizeRequest is a request for the terminal size of the session.
TerminalSizeRequest = "x-teleport-terminal-size"
// MFAPresenceRequest is an SSH request to notify clients that MFA presence is required for a session.
MFAPresenceRequest = "x-teleport-mfa-presence"
// EnvSSHJoinMode is the SSH environment variable that contains the requested participant mode.
EnvSSHJoinMode = "TELEPORT_SSH_JOIN_MODE"
// EnvSSHSessionReason is a reason attached to started sessions meant to describe their intent.
EnvSSHSessionReason = "TELEPORT_SESSION_REASON"
// EnvSSHSessionInvited is an environment variable listing people invited to a session.
EnvSSHSessionInvited = "TELEPORT_SESSION_JOIN_MODE"
// EnvSSHSessionDisplayParticipantRequirements is set to true or false to indicate if participant
// requirement information should be printed.
EnvSSHSessionDisplayParticipantRequirements = "TELEPORT_SESSION_PARTICIPANT_REQUIREMENTS"
// SSHSessionJoinPrincipal is the SSH principal used when joining sessions.
// This starts with a hyphen so it isn't a valid unix login.
SSHSessionJoinPrincipal = "-teleport-internal-join"
)
const (
// EnvKubeConfig is environment variable for kubeconfig
EnvKubeConfig = "KUBECONFIG"
// KubeConfigDir is a default directory where k8s stores its user local config
KubeConfigDir = ".kube"
// KubeConfigFile is a default filename where k8s stores its user local config
KubeConfigFile = "config"
// KubeRunTests turns on kubernetes tests
KubeRunTests = "TEST_KUBE"
// KubeSystemAuthenticated is a builtin group that allows
// any user to access common API methods, e.g. discovery methods
// required for initial client usage
KubeSystemAuthenticated = "system:authenticated"
// UsageKubeOnly specifies certificate usage metadata
// that limits certificate to be only used for kubernetes proxying
UsageKubeOnly = "usage:kube"
// UsageAppsOnly specifies a certificate metadata that only allows it to be
// used for proxying applications.
UsageAppsOnly = "usage:apps"
// UsageDatabaseOnly specifies certificate usage metadata that only allows
// it to be used for proxying database connections.
UsageDatabaseOnly = "usage:db"
// UsageWindowsDesktopOnly specifies certificate usage metadata that limits
// certificate to be only used for Windows desktop access
UsageWindowsDesktopOnly = "usage:windows_desktop"
)
const (
// NodeIsAmbiguous serves as an identifying error string indicating that
// the proxy subsystem found multiple nodes matching the specified hostname.
NodeIsAmbiguous = "err-node-is-ambiguous"
// MaxLeases serves as an identifying error string indicating that the
// semaphore system is rejecting an acquisition attempt due to max
// leases having already been reached.
MaxLeases = "err-max-leases"
)
const (
// OpenBrowserLinux is the command used to open a web browser on Linux.
OpenBrowserLinux = "xdg-open"
// OpenBrowserDarwin is the command used to open a web browser on macOS/Darwin.
OpenBrowserDarwin = "open"
// OpenBrowserWindows is the command used to open a web browser on Windows.
OpenBrowserWindows = "rundll32.exe"
// BrowserNone is the string used to suppress the opening of a browser in
// response to 'tsh login' commands.
BrowserNone = "none"
)
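A client such as `tsh` can pick the right opener for the current platform. A minimal sketch; note that the real Windows invocation passes additional arguments to `rundll32.exe`, which are elided here:

```go
package main

import (
	"fmt"
	"runtime"
)

const (
	OpenBrowserLinux   = "xdg-open"
	OpenBrowserDarwin  = "open"
	OpenBrowserWindows = "rundll32.exe"
)

// openBrowserCommand returns the platform-specific command used to open
// a URL in the default browser, or "" for unsupported platforms.
func openBrowserCommand(goos string) string {
	switch goos {
	case "windows":
		return OpenBrowserWindows
	case "darwin":
		return OpenBrowserDarwin
	case "linux":
		return OpenBrowserLinux
	default:
		return ""
	}
}

func main() {
	fmt.Println(openBrowserCommand(runtime.GOOS))
}
```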
const (
// ExecSubCommand is the sub-command Teleport uses to re-exec itself for
// command execution (exec and shells).
ExecSubCommand = "exec"
// ForwardSubCommand is the sub-command Teleport uses to re-exec itself
// for port forwarding.
ForwardSubCommand = "forward"
// CheckHomeDirSubCommand is the sub-command Teleport uses to re-exec itself
// to check if the user's home directory exists.
CheckHomeDirSubCommand = "checkhomedir"
// ParkSubCommand is the sub-command Teleport uses to re-exec itself as a
// specific UID to prevent the matching user from being deleted before
// spawning the intended child process.
ParkSubCommand = "park"
// SFTPSubCommand is the sub-command Teleport uses to re-exec itself to
// handle SFTP connections.
SFTPSubCommand = "sftp"
// WaitSubCommand is the sub-command Teleport uses to wait
// until a domain name stops resolving. Its main use is to ensure no
// auth instances are still running the previous major version.
WaitSubCommand = "wait"
)
const (
// ChanDirectTCPIP is a SSH channel of type "direct-tcpip".
ChanDirectTCPIP = "direct-tcpip"
// ChanSession is a SSH channel of type "session".
ChanSession = "session"
)
const (
// GetHomeDirSubsystem is an SSH subsystem request that Teleport
// uses to get the home directory of a remote user.
GetHomeDirSubsystem = "gethomedir"
// SFTPSubsystem is the SFTP SSH subsystem.
SFTPSubsystem = "sftp"
)
// A principal name for use in SSH certificates.
type Principal string
const (
// The localhost domain, for talking to a proxy or node on the same
// machine.
PrincipalLocalhost Principal = "localhost"
// The IPv4 loopback address, for talking to a proxy or node on the same
// machine.
PrincipalLoopbackV4 Principal = "127.0.0.1"
// The IPv6 loopback address, for talking to a proxy or node on the same
// machine.
PrincipalLoopbackV6 Principal = "::1"
)
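A sketch of checking whether a certificate principal refers to the local machine (the helper name is illustrative, not part of this package):

```go
package main

import "fmt"

// A Principal is a principal name for use in SSH certificates.
type Principal string

const (
	PrincipalLocalhost  Principal = "localhost"
	PrincipalLoopbackV4 Principal = "127.0.0.1"
	PrincipalLoopbackV6 Principal = "::1"
)

// isLocalPrincipal reports whether the given principal refers to the
// local machine. Illustrative helper only.
func isLocalPrincipal(p Principal) bool {
	switch p {
	case PrincipalLocalhost, PrincipalLoopbackV4, PrincipalLoopbackV6:
		return true
	}
	return false
}

func main() {
	fmt.Println(isLocalPrincipal(PrincipalLoopbackV6))
}
```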
// UserSystem defines a user as system.
const UserSystem = "system"
const (
// AppJWTHeader is the name of the header containing the signed JWT that
// Teleport forwards to the internal application being proxied.
AppJWTHeader = "teleport-jwt-assertion"
2021-05-06 18:24:49 +00:00
// HostHeader is the name of the Host header.
HostHeader = "Host"
)
// UserSingleUseCertTTL is a TTL for per-connection user certificates.
const UserSingleUseCertTTL = time.Minute
// StandardHTTPSPort is the default port used for the https URI scheme,
// cf. RFC 7230 § 2.7.2.
const StandardHTTPSPort = 443
const (
// KubeSessionDisplayParticipantRequirementsQueryParam is the query parameter used to
// indicate that the client wants to display the participant requirements
// for the given session.
KubeSessionDisplayParticipantRequirementsQueryParam = "displayParticipantRequirements"
// KubeSessionReasonQueryParam is the query parameter used to indicate the reason
// for the session request.
KubeSessionReasonQueryParam = "reason"
// KubeSessionInvitedQueryParam is the query parameter used to indicate the users
// to invite to the session.
KubeSessionInvitedQueryParam = "invite"
)
const (
// KubeLegacyProxySuffix is the suffix used for legacy proxy services when
// generating their server names.
KubeLegacyProxySuffix = "-proxy_service"
)