This code is not caught by linters because it is exported, so they assume
there are external users.
Since Teleport is relatively self-contained, we can tell for sure
whether something is called or not.
* Add UpdateUser rpc to proto
* Differentiate between create and update in github, oidc, saml
* Edit updated_by event field to be more generic (used with contexts to capture the user modifying records)
* Fix security issue by removing secrets from user on update/upsert/create (forrest)
* Update createUser in resource_command and require --force for updates
With gocheck, tests only run if you call `check.TestingT(t)` from a
dummy `func Test(t *testing.T)`.
Added the missing dummy function call in `lib/services/suite` and
`lib/shell`. The `lib/shell` tests also turned out to be broken.
If you call the dummy wrapper twice, all tests run twice; this was
happening in `lib/events/s3sessions` and `lib/services/local`.
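For reference, the registration hook looks like this (package name is
illustrative):
```go
package shell_test

import (
	"testing"

	"gopkg.in/check.v1"
)

// Hook gocheck suites into the standard "go test" runner. Without this,
// suites registered via check.Suite(...) are silently skipped; with two
// copies of it in one package, every suite runs twice.
func Test(t *testing.T) { check.TestingT(t) }
```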
Fixed findings:
```
lib/services/github_test.go:99:2: SA4006: this value of `kubeUsers` is never used (staticcheck)
logins, kubeGroups, kubeUsers = connector.MapClaims(GithubClaims{
^
lib/services/github_test.go:107:2: SA4006: this value of `kubeUsers` is never used (staticcheck)
logins, kubeGroups, kubeUsers = connector.MapClaims(GithubClaims{
^
lib/services/local/configuration_test.go:84:2: SA4006: this value of `clusterConfig` is never used (staticcheck)
clusterConfig, err := services.NewClusterConfig(services.ClusterConfigSpecV3{
^
lib/services/local/configuration_test.go:102:2: SA4006: this value of `clusterConfig` is never used (staticcheck)
clusterConfig, err := services.NewClusterConfig(services.ClusterConfigSpecV3{})
^
lib/services/local/presence_test.go:108:2: SA4006: this value of `gotTC` is never used (staticcheck)
gotTC, err = presenceBackend.GetTrustedCluster("foo")
^
lib/services/suite/suite.go:157:2: SA4006: this value of `err` is never used (staticcheck)
out, err := s.WebS.GetUser("user1", false)
^
lib/services/suite/suite.go:208:2: SA4006: this value of `u` is never used (staticcheck)
u, err = s.WebS.GetUser("foo", false)
^
lib/services/suite/suite.go:277:2: SA4006: this value of `err` is never used (staticcheck)
err = s.CAS.CompareAndSwapCertAuthority(&newCA, ca)
^
lib/services/suite/suite.go:339:2: SA4006: this value of `err` is never used (staticcheck)
out, err = s.PresenceS.GetProxies()
^
lib/services/suite/suite.go:1136:5: SA4006: this value of `err` is never used (staticcheck)
role, err := services.NewRole("role1", services.RoleSpecV3{
^
lib/services/suite/suite.go:1166:5: SA4006: this value of `err` is never used (staticcheck)
err := s.Users().UpsertUser(user)
^
```
With the OSS version and without the GitHub connector (local auth
only), the logged-in user won't have any `kubernetes_groups`. Without
kubernetes usernames either, the user can log in but can't use kubectl.
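For illustration, local users can be granted groups through a role; a
minimal sketch (role and group names are placeholders, not from the
original):
```yaml
kind: role
version: v3
metadata:
  name: kube-access
spec:
  allow:
    # granted regardless of connector claims, so local users get kubectl access
    kubernetes_groups: ['system:masters']
```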
All changes should be no-ops, except for
`integration/integration_test.go`.
The integration test was ignoring the `recordingMode` test case parameter
and always used `RecordAtNode`. When switching to `recordingMode`, test
cases with `RecordAtProxy` fail with a confusing error about a missing
user agent. Filed https://github.com/gravitational/teleport/issues/3606
to track that separately and unblock enabling the `structcheck` linter.
* Add monorepo
* Add reset/passwd capability for local users (#3287)
* Add UserTokens to allow password resets
* Pass context down through ChangePasswordWithToken
* Rename UserToken to ResetPasswordToken
* Add auto formatting for proto files
* Add common Marshaller interfaces to reset password token
* Allow enterprise "tctl" to reuse OSS user methods (#3344)
* Pass localAuthEnabled flag to UI (#3412)
* Added LocalAuthEnabled prop to WebConfigAuthSetting struct in webconfig.go
* Added LocalAuthEnabled state as part of webCfg in apiserver.go
* update e-refs
* Fix a regression bug after merge
* Update tctl CLI output msgs (#3442)
* Use local user client when resolving user roles
* Update webapps ref
* Add and retrieve fields from Cluster struct (#3476)
* Set Teleport versions for node, auth, proxy init heartbeat
* Add and retrieve fields NodeCount, PublicURL, AuthVersion from Clusters
* Remove debug logging to avoid log pollution when getting public_addr of proxy
* Create helper func GuessProxyHost to get the public_addr of a proxy host
* Refactor newResetPasswordToken to use GuessProxyHost and remove publicUrl func
* Remove webapps submodule
* Add webassets submodule
* Replace webapps sub-module reference with webassets
* Update webassets path in Makefile
* Update webassets
1b11b26 Simplify and clean up Makefile (#62) https://github.com/gravitational/webapps/commit/1b11b26
* Retrieve cluster details for user context (#3515)
* Let GuessProxyHost also return proxy's version
* Unit test GuessProxyHostAndVersion & GetClusterDetails
* Update webassets
4dfef4e Fix build pipeline (#66) https://github.com/gravitational/webapps/commit/4dfef4e
* Update e-ref
* Update webassets
0647568 Fix OSS redirects https://github.com/gravitational/webapps/commit/0647568
* update e-ref
* Update webassets
e0f4189 Address security audit warnings. Updates the "minimist" package, which is used by the 7-year-old "optimist". https://github.com/gravitational/webapps/commit/e0f4189
* Add new attr to Session struct (#3574)
* Add fields ServerHostname and ServerAddr
* Set these fields on newSession
* Ensure webassets submodule during build
* Update e-ref
* Ensure webassets before running unit-tests
* Update E-ref
Co-authored-by: Lisa Kim <lisa@gravitational.com>
Co-authored-by: Pierre Beaucamp <pierre@gravitational.com>
Co-authored-by: Jenkins <jenkins@gravitational.io>
Spring cleaning!
A very mechanical cleanup using several linters (unused, deadcode,
structcheck). Build and tests still pass so no behavior should be
affected.
This commit adds support for custom OIDC prompt values.
Read about possible prompt values here:
https://openid.net/specs/openid-connect-core-1_0.html#AuthRequest
Three cases are possible:
* Prompt value is not set; this defaults the OIDC prompt value to
select_account to preserve backwards compatibility.
```yaml
kind: oidc
version: v2
metadata:
  name: connector
spec: {}  # no prompt field set
```
* Prompt value is set to an empty string; it will be omitted
from the auth request.
```yaml
kind: oidc
version: v2
metadata:
  name: connector
spec:
  prompt: ''
```
* Prompt value is set to a non-empty string; it will be included
in the auth request as is.
```yaml
kind: oidc
version: v2
metadata:
  name: connector
spec:
  prompt: 'login consent'
```
Tested with an Auth0 OIDC connector on Teleport 4.2 Enterprise.
This commit fixes #3369, refs #3374.
It adds support for a `kubernetes_users` section in roles,
allowing the Teleport proxy to impersonate user identities.
It also extends the variable interpolation syntax by adding
prefix and suffix support to variables and the function `email.local`.
Example:
```yaml
kind: role
version: v3
metadata:
  name: admin
spec:
  allow:
    # extract email local part from the email claim
    logins: ['{{email.local(external.email)}}']
    # impersonate a kubernetes user with IAM prefix
    kubernetes_users: ['IAM#{{external.email}}']
  # the deny section uses the identical format as the 'allow' section.
  # the deny rules always override allow rules.
  deny: {}
```
Some notes on `email.local` behavior:
* This is the only function supported in template variables for now.
* If `email.local` encounters an invalid email address,
it interpolates to an empty value, which is removed from the resulting
output.
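A minimal sketch of the described semantics (helper name is assumed;
not Teleport's actual implementation):
```go
package main

import (
	"fmt"
	"net/mail"
	"strings"
)

// emailLocal mirrors the documented behavior of email.local: return the
// local part of a valid address, or "" so the value drops out of the output.
func emailLocal(addr string) string {
	parsed, err := mail.ParseAddress(addr)
	if err != nil {
		return "" // invalid emails interpolate to an empty value
	}
	at := strings.LastIndex(parsed.Address, "@")
	if at <= 0 {
		return ""
	}
	return parsed.Address[:at]
}

func main() {
	fmt.Println(emailLocal("alice@example.com")) // "alice"
	fmt.Println(emailLocal("not-an-email"))      // ""
}
```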
Changes in impersonation behavior (see the sketch after this list):
* By default, if no `kubernetes_users` is set, which is the majority of cases,
the user impersonates themselves, which is the backwards-compatible behavior.
* As soon as at least one `kubernetes_users` entry is set, the forwarder starts
limiting the list of users the client is allowed to impersonate.
* If the user's role set does not include the actual user name, the request is
rejected (otherwise there would be no way to exclude a user from the list).
* If the `kubernetes_users` role set includes only one user
(quite frequently that's the real intent), Teleport defaults to it;
otherwise it refuses to select one.
This enables the use case where `kubernetes_users` has just one entry
linking the user identity to an IAM role, for example `IAM#{{external.email}}`.
* Previous versions of the forwarding proxy denied all external
impersonation headers; this commit allows 'Impersonate-User' and
'Impersonate-Group' header values that are allowed by the role set.
* Previous versions of the forwarding proxy ignored the 'Deny' section of roles
when applied to impersonation; this commit fixes that - roles with
`kubernetes_users` and `kubernetes_groups` in the deny section will not allow
impersonation of those users and groups.
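A hedged sketch of these selection rules (function shape and names
assumed, not Teleport's actual code):
```go
package main

import "fmt"

// selectKubeUser applies the rules described above: no kubernetes_users
// means self-impersonation; a single allowed entry becomes the default;
// anything else must be requested explicitly and pass the allow list.
func selectKubeUser(teleportUser, requested string, allowed []string) (string, error) {
	if len(allowed) == 0 {
		return teleportUser, nil // backwards-compatible default
	}
	if requested == "" {
		if len(allowed) == 1 {
			return allowed[0], nil // frequent case: one identity to link, e.g. IAM#...
		}
		return "", fmt.Errorf("multiple kubernetes_users allowed, refusing to pick one")
	}
	for _, u := range allowed {
		if u == requested {
			return requested, nil
		}
	}
	return "", fmt.Errorf("user %q is not allowed by the role set", requested)
}

func main() {
	user, err := selectKubeUser("alice", "", []string{"IAM#alice@example.com"})
	fmt.Println(user, err) // IAM#alice@example.com <nil>
}
```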
This commit fixes #3252.
The 4.2 security patches introduced a regression: leaf clusters ignored role mapping
and attempted to use role names coming from the identity of the root cluster
whenever the GetNodes method was used.
This commit reverts that logic, while ensuring that the original
fix is preserved: traits and groups are updated on the user object.
The integration test has been extended to avoid this regression in the future.
If the option for port forwarding is not specified, it is enabled by
default. Port forwarding is not specified in the default-implicit-role,
and since that role is included in all role sets, port forwarding was
always enabled for all roles.
To fix this, port forwarding in the default-implicit-role is set to
false.
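For reference, this is the role option involved; a sketch of how any
role can pin it explicitly (role name is illustrative):
```yaml
kind: role
version: v3
metadata:
  name: no-port-forwarding
spec:
  options:
    # mirrors the new default-implicit-role setting; users opt in via their own roles
    port_forwarding: false
```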
Added package cgroup to orchestrate cgroups. Only cgroup2 is supported,
because cgroup2 cgroups have unique IDs that can be correlated with BPF
events.
Added the bpf package that contains three BPF programs: execsnoop,
opensnoop, and tcpconnect. The bpf package starts and stops these
programs as well as correlating their output with Teleport sessions
and emitting it to the audit log.
Added support for Teleport to re-exec itself before launching a shell.
This allows Teleport to start a child process, capture its PID, place
the PID in a cgroup, and then continue the process. Once the process is
continued, it can be tracked by its cgroup ID.
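A rough sketch of that flow, assuming a cgroup2 hierarchy mounted at
/cgroup2 with a per-session directory; the paths, child subcommand, and
stop/continue handshake are assumptions, not Teleport's exact code:
```go
package session

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

// startShellInCgroup re-execs the current binary, parks the child's PID in a
// per-session cgroup2 directory, then lets it continue; from then on BPF
// events can be correlated via the cgroup ID.
func startShellInCgroup(sessionID string) error {
	cmd := exec.Command("/proc/self/exe", "wait-and-exec-shell")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		return err
	}
	// Move the child into the session cgroup before it does any real work.
	procs := fmt.Sprintf("/cgroup2/teleport/%s/cgroup.procs", sessionID)
	if err := os.WriteFile(procs, []byte(fmt.Sprint(cmd.Process.Pid)), 0o640); err != nil {
		return err
	}
	// Tell the child (assumed to be waiting on a signal) to proceed.
	if err := cmd.Process.Signal(syscall.SIGCONT); err != nil {
		return err
	}
	return cmd.Wait()
}
```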
Reduced the total number of connections to a host so Teleport does not
quickly exhaust all file descriptors. This happens very quickly when
disk events, which are emitted at a very high rate, are written to the
audit log.
Added tarballs for exec sessions. Updated session.start and session.end
events with additional metadata. Updated the format of session tarballs
to include enhanced events.
Added file configuration for enhanced session recording. Added code to
start up enhanced session recording and pass the package to SSH nodes.
This commit fixes a goroutine leak: whenever
a leaf cluster disconnects from the root cluster,
the caching access point's cache update loop has to be closed
as well.
If an attacker can force a username change at an IdP, upon second login
the services.User object of the original user can be updated with new
roles and traits. If these new roles and traits differ, the original
user can have their privileges raised (or lowered).
To mitigate this, encode roles and traits within the certificate and use
these when fetching roles to make RBAC decisions. If roles and traits are
not encoded within a certificate (for example, old-style SSH
certificates), fall back to using the services.User object and log a
warning.
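A hedged sketch of the fallback logic; the extension name and JSON
encoding are assumptions based on the description, not confirmed API:
```go
package auth

import (
	"encoding/json"
	"log"

	"golang.org/x/crypto/ssh"
)

// rolesFromCertificate prefers roles pinned inside the SSH certificate and
// only falls back to the (mutable) user object for old-style certificates.
func rolesFromCertificate(cert *ssh.Certificate, userRoles []string) []string {
	// "teleport-roles" is an assumed extension key for this sketch.
	if encoded, ok := cert.Extensions["teleport-roles"]; ok && encoded != "" {
		var roles []string
		if err := json.Unmarshal([]byte(encoded), &roles); err == nil {
			return roles
		}
	}
	log.Printf("certificate has no encoded roles, falling back to user object")
	return userRoles
}
```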
* Support resource-based bootstrapping for backend.
Outside of static configuration, most of the persistent state of an
auth server exists as a collection of resources, stored in its
backend. The resource API also forms the basis of Teleport's more
advanced dynamic configuration options.
This commit extends the usefulness of the resource API by adding
the ability to bootstrap backend state with a set of previously
exported resources. This allows the resource API to serve as a
rudimentary backup/migration tool.
Notes: This feature is a work in progress, and very easy to misuse;
while it will prevent you from overwriting the state of an existing
auth server, it won't stop you from bootstrapping into a wildly
misconfigured state. In general, resource-based bootstrapping is
not a complete solution for backup or migration.
* update e-ref
This commit implements #2543.
In SSH terms, ProxyJump is a shortcut for an SSH client
connecting to the proxy/jumphost and requesting port forwarding to the
target node.
This commit adds support for direct-tcpip requests
in the Teleport proxy service as an alias for the existing proxy
subsystem, reusing most of its code.
This commit also adds support for "route to cluster" metadata
encoded in the SSH certificate, making it possible for client
SSH certificates to include metadata that causes the proxy
to route the client's requests to a specific cluster.
`tsh ssh -J proxy:port` is supported in a limited way:
only one jump host is supported (-J supports chaining,
which Teleport does not utilise), and tsh will return an error
when given two jump hosts: -J a,b will not work.
If `tsh ssh -J user@proxy` is used, it overrides
the SSH proxy coming from the tsh profile, and port forwarding
is used instead of the existing Teleport proxy subsystem.
* Improve help text and error messages for tctl rm, fixes #2594
* Change 'kind' to 'type' for consistency
* Changed examples from role/admin to connector/github
* Added link to Teleport Enterprise
* Update e ref
Update mirror mode (for both the memory and SQLite backends) to no
longer emit events when an element expires. This allows caches to handle
update/delete logic themselves.
This fixes an issue where services.ProxyWatcher was not getting updates
to the list of proxies.
This commit refactors the discovery protocol
to make it less dependent on the database and
scale better with large numbers of tunnels.
The reverse tunnel server now always sends
back the list of all proxies registered in the
cluster in the form of discovery requests.
Before this commit, the reverse tunnel server compared
existing TunnelConnections with the Proxies
and sent back the list of proxies that had not been
discovered.
This required nodes to register tunnel connections
in the database and servers to poll the connections.
On 10K-node clusters this is not scalable. Instead,
the change assumes that there are not many
proxies, so it's OK to send the information about
them back to all connected agents.
Agent pools can make up their own mind about what to
do with the information - they can ignore
the request as long as they observe agents
connected to all the requested proxies.
At the same time, to avoid using too much traffic, the
reverse tunnel server only sends discovery requests
after the first agent heartbeat and when the
proxy list changes. To make this possible, the reverse
tunnel server sets up a watch on the proxies.
Added "local_auth" to file configuration and "LocalAuth" to
services.ClusterConfig to control cluster-wide local authentication.
Check local auth settings when generating signup tokens, creating local
users, and login.
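A sketch of the file configuration knob; the exact placement under the
auth service's authentication settings is assumed here:
```yaml
auth_service:
  authentication:
    type: oidc        # keep SSO as the primary method
    local_auth: false # disable local users and signup tokens cluster-wide
```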
Whenever many IoT-style nodes connect
back to the web proxy server, they all
call the /find endpoint to discover the configuration.
This new endpoint is designed to be fast and not
hit the database.
In addition, every proxy reverse tunnel
connection handler was fetching auth servers, so
this commit adds caching for the auth servers
on the proxy side.
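A minimal sketch of that proxy-side cache, assuming a simple TTL
policy (names and shape are illustrative, not Teleport's actual types):
```go
package proxy

import (
	"sync"
	"time"
)

// authServerCache memoizes the auth server list so every reverse tunnel
// connection handler no longer has to hit the backend.
type authServerCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	fetch   func() ([]string, error) // e.g. wraps a GetAuthServers call
	servers []string
	expires time.Time
}

func (c *authServerCache) Get() ([]string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.servers != nil && time.Now().Before(c.expires) {
		return c.servers, nil // serve from cache until the TTL expires
	}
	servers, err := c.fetch()
	if err != nil {
		return nil, err
	}
	c.servers, c.expires = servers, time.Now().Add(c.ttl)
	return servers, nil
}
```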
Updated services.ReverseTunnel to support type (proxy or node). For
proxy types, which represent trusted cluster connections, when a
services.ReverseTunnel is created, it's created on the remote side with
name /reverseTunnels/example.com. For node types, services.ReverseTunnel
is created on the main side as /reverseTunnels/{nodeUUID}.clusterName.
Updated services.TunnelConn to support type (proxy or node). For proxy
types, which represent trusted cluster connections, tunnel connections
are created on the main side under
/tunnelConnections/remote.example.com/{proxyUUID}-remote.example.com.
For nodes, tunnel connections are created on the main side under
/tunnelConnections/example.com/{proxyUUID}-example.com. This allows
searching for tunnel connections by cluster, which makes it easy to build
the set of proxies that are missing a matching services.TunnelConn.
The reverse tunnel server has been updated to handle heartbeats from
proxies as well as nodes. Proxy heartbeat behavior has not changed.
Heartbeats from nodes now add remote connections to the matching local
site. In addition, the reverse tunnel server now proxies connections to
the Auth Server for requests that are already authenticated (a second
authentication to the Auth Server is required).
For registration, nodes try to connect to the Auth Server to fetch host
credentials. Upon failure, nodes now fall back to fetching host
credentials from the web proxy.
To establish a connection to an Auth Server, nodes first try to connect
directly and, if that connection fails, fall back to obtaining a
connection to the Auth Server through the reverse tunnel. If a
connection is established directly, node startup behavior has not
changed. If a node establishes a connection through the reverse tunnel,
it creates an AgentPool that attempts to dial back to the cluster and
establish a reverse tunnel.
When nodes heartbeat, they also report whether they are connected directly
to the cluster or through a reverse tunnel. For nodes that are connected
through a reverse tunnel, the proxy subsystem now directs the reverse
tunnel server to establish a connection through the reverse tunnel
instead of directly.
When sending discovery requests, the domain field has been replaced with
tunnelID. The tunnelID field is either the cluster name (same as before)
for proxies, or {nodeUUID}.example.com for nodes.
Buffer fan-out used a simple prefix match
in a loop, which resulted in high CPU load
with many connected watchers.
This commit switches to radix trees for
prefix matching, which reduces CPU load
substantially for 5K+ connected watchers.
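A sketch of the idea using a generic radix tree library
(github.com/armon/go-radix here, purely as an illustration; Teleport's
actual tree may differ):
```go
package main

import (
	"fmt"

	radix "github.com/armon/go-radix"
)

func main() {
	// Index watcher IDs by the key prefix they subscribed to.
	tree := radix.New()
	tree.Insert("/nodes", []string{"watcher-a"})
	tree.Insert("/nodes/us-east", []string{"watcher-b"})
	tree.Insert("/roles", []string{"watcher-c"})

	// For each event, walk only the stored prefixes of its key instead of
	// testing every watcher with strings.HasPrefix in a loop.
	eventKey := "/nodes/us-east/node-1"
	tree.WalkPath(eventKey, func(prefix string, v interface{}) bool {
		fmt.Printf("fan out to %v (matched prefix %q)\n", v, prefix)
		return false // false = keep walking
	})
}
```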
This commit expands the usage of the caching layer
for the auth server API:
* Introduces an in-memory cache that is used to serve all
Auth server API requests. This is done to achieve scalability
on 10K+ node clusters, where each node fetches certificate authorities,
roles, users and join tokens. It is not possible to scale the
DynamoDB backend or other backends to 10K reads per second
on a single shard or partition. The solution is to introduce
an in-memory cache of the backend state that is always used
for reads.
* In-memory cache has been expanded to support all resources
required by the auth server.
* Experimental `tctl top` command has been introduced to display
common single node metrics.
Replace SQLite Memory Backend with BTree
The SQLite in-memory backend was suffering from
high tail latencies under load (up to 8 seconds
at the 99.9th percentile in load configurations).
This commit replaces the SQLite memory caching
backend with an in-memory BTree backend, which
brought tail latencies down to 2 seconds (99.9th percentile)
and improved overall performance.
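A sketch of an in-memory BTree keyed store using github.com/google/btree
(illustrative only; the actual backend has far more machinery):
```go
package main

import (
	"fmt"
	"strings"

	"github.com/google/btree"
)

// item is a key/value pair ordered by key.
type item struct {
	key   string
	value []byte
}

func (i *item) Less(than btree.Item) bool { return i.key < than.(*item).key }

func main() {
	tree := btree.New(8) // degree 8 is an arbitrary tuning choice
	tree.ReplaceOrInsert(&item{key: "/nodes/a", value: []byte("node-a")})
	tree.ReplaceOrInsert(&item{key: "/nodes/b", value: []byte("node-b")})

	// Prefix range scan: ascend from the prefix and stop at the first miss.
	const prefix = "/nodes/"
	tree.AscendGreaterOrEqual(&item{key: prefix}, func(it btree.Item) bool {
		i := it.(*item)
		if !strings.HasPrefix(i.key, prefix) {
			return false // past the prefix range
		}
		fmt.Println(i.key, string(i.value))
		return true
	})
}
```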
Moved the expiry field from spec to metadata for services.Users and updated
the expiry check to prefer metadata and fall back to spec if not found. Added
test coverage.